Quality and Growth Implications of Incremental Costing Models for Distance Education Units
ERIC Educational Resources Information Center
Crawford, C. B.; Gould, Lawrence V.; King, Dennis; Parker, Carl
2010-01-01
The purpose of this article is to explore quality and growth implications emergent from various incremental costing models applied to distance education units. Prior research relative to costing models and three competing costing models useful in the current distance education environment are discussed. Specifically, the simple costing model, unit…
SUSTAIN: A Network Model of Category Learning
ERIC Educational Resources Information Center
Love, Bradley C.; Medin, Douglas L.; Gureckis, Todd M.
2004-01-01
SUSTAIN (Supervised and Unsupervised STratified Adaptive Incremental Network) is a model of how humans learn categories from examples. SUSTAIN initially assumes a simple category structure. If simple solutions prove inadequate and SUSTAIN is confronted with a surprising event (e.g., it is told that a bat is a mammal instead of a bird), SUSTAIN…
Simulation Study on Fit Indexes in CFA Based on Data with Slightly Distorted Simple Structure
ERIC Educational Resources Information Center
Beauducel, Andre; Wittmann, Werner W.
2005-01-01
Fit indexes were compared with respect to a specific type of model misspecification. Simple structure was violated with some secondary loadings that were present in the true models that were not specified in the estimated models. The c2 test, Comparative Fit Index, Goodness-of-Fit Index, Incremental Fit Index, Nonnormed Fit Index, root mean…
Asymmetry in power-law magnitude correlations.
Podobnik, Boris; Horvatić, Davor; Tenenbaum, Joel N; Stanley, H Eugene
2009-07-01
Time series of increments can be created in a number of different ways from a variety of physical phenomena. For example, in the phenomenon of volatility clustering-well-known in finance-magnitudes of adjacent increments are correlated. Moreover, in some time series, magnitude correlations display asymmetry with respect to an increment's sign: the magnitude of |x_{i}| depends on the sign of the previous increment x_{i-1} . Here we define a model-independent test to measure the statistical significance of any observed asymmetry. We propose a simple stochastic process characterized by a an asymmetry parameter lambda and a method for estimating lambda . We illustrate both the test and process by analyzing physiological data.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-16
... comments on the proposed rule change from interested persons. \\1\\ 15 U.S.C. 78s(b)(1). \\2\\ 17 CFR 240.19b-4... applicable to simple orders in the options class under Exchange Rule 6.42--Minimum Increments of Bids and..., with the increment of trading being the standard trading increment applicable to simple orders in the...
Observers for Systems with Nonlinearities Satisfying an Incremental Quadratic Inequality
NASA Technical Reports Server (NTRS)
Acikmese, Ahmet Behcet; Corless, Martin
2004-01-01
We consider the problem of state estimation for nonlinear time-varying systems whose nonlinearities satisfy an incremental quadratic inequality. These observer results unifies earlier results in the literature; and extend it to some additional classes of nonlinearities. Observers are presented which guarantee that the state estimation error exponentially converges to zero. Observer design involves solving linear matrix inequalities for the observer gain matrices. Results are illustrated by application to a simple model of an underwater.
Dynamics of intrinsic axial flows in unsheared, uniform magnetic fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, J. C.; Diamond, P. H.; Xu, X. Q.
2016-05-15
A simple model for the generation and amplification of intrinsic axial flow in a linear device, controlled shear decorrelation experiment, is proposed. This model proposes and builds upon a novel dynamical symmetry breaking mechanism, using a simple theory of drift wave turbulence in the presence of axial flow shear. This mechanism does not require complex magnetic field structure, such as shear, and thus is also applicable to intrinsic rotation generation in tokamaks at weak or zero magnetic shear, as well as to linear devices. This mechanism is essentially the self-amplification of the mean axial flow profile, i.e., a modulational instability.more » Hence, the flow development is a form of negative viscosity phenomenon. Unlike conventional mechanisms where the residual stress produces an intrinsic torque, in this dynamical symmetry breaking scheme, the residual stress induces a negative increment to the ambient turbulent viscosity. The axial flow shear is then amplified by this negative viscosity increment. The resulting mean axial flow profile is calculated and discussed by analogy with the problem of turbulent pipe flow. For tokamaks, the negative viscosity is not needed to generate intrinsic rotation. However, toroidal rotation profile gradient is enhanced by the negative increment in turbulent viscosity.« less
NASA Astrophysics Data System (ADS)
Ham, Yoo-Geun; Song, Hyo-Jong; Jung, Jaehee; Lim, Gyu-Ho
2017-04-01
This study introduces a altered version of the incremental analysis updates (IAU), called the nonstationary IAU (NIAU) method, to enhance the assimilation accuracy of the IAU while retaining the continuity of the analysis. Analogous to the IAU, the NIAU is designed to add analysis increments at every model time step to improve the continuity in the intermittent data assimilation. Still, unlike the IAU, the NIAU method applies time-evolved forcing employing the forward operator as rectifications to the model. The solution of the NIAU is better than that of the IAU, of which analysis is performed at the start of the time window for adding the IAU forcing, in terms of the accuracy of the analysis field. It is because, in the linear systems, the NIAU solution equals that in an intermittent data assimilation method at the end of the assimilation interval. To have the filtering property in the NIAU, a forward operator to propagate the increment is reconstructed with only dominant singular vectors. An illustration of those advantages of the NIAU is given using the simple 40-variable Lorenz model.
Local short-term variability in solar irradiance
NASA Astrophysics Data System (ADS)
Lohmann, Gerald M.; Monahan, Adam H.; Heinemann, Detlev
2016-05-01
Characterizing spatiotemporal irradiance variability is important for the successful grid integration of increasing numbers of photovoltaic (PV) power systems. Using 1 Hz data recorded by as many as 99 pyranometers during the HD(CP)2 Observational Prototype Experiment (HOPE), we analyze field variability of clear-sky index k* (i.e., irradiance normalized to clear-sky conditions) and sub-minute k* increments (i.e., changes over specified intervals of time) for distances between tens of meters and about 10 km. By means of a simple classification scheme based on k* statistics, we identify overcast, clear, and mixed sky conditions, and demonstrate that the last of these is the most potentially problematic in terms of short-term PV power fluctuations. Under mixed conditions, the probability of relatively strong k* increments of ±0.5 is approximately twice as high compared to increment statistics computed without conditioning by sky type. Additionally, spatial autocorrelation structures of k* increment fields differ considerably between sky types. While the profiles for overcast and clear skies mostly resemble the predictions of a simple model published by , this is not the case for mixed conditions. As a proxy for the smoothing effects of distributed PV, we finally show that spatial averaging mitigates variability in k* less effectively than variability in k* increments, for a spatial sensor density of 2 km-2.
Dynamic Constraint Satisfaction with Reasonable Global Constraints
NASA Technical Reports Server (NTRS)
Frank, Jeremy
2003-01-01
Previously studied theoretical frameworks for dynamic constraint satisfaction problems (DCSPs) employ a small set of primitive operators to modify a problem instance. They do not address the desire to model problems using sophisticated global constraints, and do not address efficiency questions related to incremental constraint enforcement. In this paper, we extend a DCSP framework to incorporate global constraints with flexible scope. A simple approach to incremental propagation after scope modification can be inefficient under some circumstances. We characterize the cases when this inefficiency can occur, and discuss two ways to alleviate this problem: adding rejection variables to the scope of flexible constraints, and adding new features to constraints that permit increased control over incremental propagation.
Aerosol Complexity and Implications for Predictability and Short-Term Forecasting
NASA Technical Reports Server (NTRS)
Colarco, Peter
2016-01-01
There are clear NWP and climate impacts from including aerosol radiative and cloud interactions. Changes in dynamics and cloud fields affect aerosol lifecycle, plume height, long-range transport, overall forcing of the climate system, etc. Inclusion of aerosols in NWP systems has benefit to surface field biases (e.g., T2m, U10m). Including aerosol affects has impact on analysis increments and can have statistically significant impacts on, e.g., tropical cyclogenesis. Above points are made especially with respect to aerosol radiative interactions, but aerosol-cloud interaction is a bigger signal on the global system. Many of these impacts are realized even in models with relatively simple (bulk) aerosol schemes (approx.10 -20 tracers). Simple schemes though imply simple representation of aerosol absorption and importantly for aerosol-cloud interaction particle-size distribution. Even so, more complex schemes exhibit a lot of diversity between different models, with issues such as size selection both for emitted particles and for modes. Prospects for complex sectional schemes to tune modal (and even bulk) schemes toward better selection of size representation. I think this is a ripe topic for more research -Systematic documentation of benefits of no vs. climatological vs. interactive (direct and then direct+indirect) aerosols. Document aerosol impact on analysis increments, inclusion in NWP data assimilation operator -Further refinement of baseline assumptions in model design (e.g., absorption, particle size distribution). Did not get into model resolution and interplay of other physical processes with aerosols (e.g., moist physics, obviously important), chemistry
A financial market model with two discontinuities: Bifurcation structures in the chaotic domain
NASA Astrophysics Data System (ADS)
Panchuk, Anastasiia; Sushko, Iryna; Westerhoff, Frank
2018-05-01
We continue the investigation of a one-dimensional piecewise linear map with two discontinuity points. Such a map may arise from a simple asset-pricing model with heterogeneous speculators, which can help us to explain the intricate bull and bear behavior of financial markets. Our focus is on bifurcation structures observed in the chaotic domain of the map's parameter space, which is associated with robust multiband chaotic attractors. Such structures, related to the map with two discontinuities, have been not studied before. We show that besides the standard bandcount adding and bandcount incrementing bifurcation structures, associated with two partitions, there exist peculiar bandcount adding and bandcount incrementing structures involving all three partitions. Moreover, the map's three partitions may generate intriguing bistability phenomena.
NASA Astrophysics Data System (ADS)
Most, S.; Nowak, W.; Bijeljic, B.
2014-12-01
Transport processes in porous media are frequently simulated as particle movement. This process can be formulated as a stochastic process of particle position increments. At the pore scale, the geometry and micro-heterogeneities prohibit the commonly made assumption of independent and normally distributed increments to represent dispersion. Many recent particle methods seek to loosen this assumption. Recent experimental data suggest that we have not yet reached the end of the need to generalize, because particle increments show statistical dependency beyond linear correlation and over many time steps. The goal of this work is to better understand the validity regions of commonly made assumptions. We are investigating after what transport distances can we observe: A statistical dependence between increments, that can be modelled as an order-k Markov process, boils down to order 1. This would be the Markovian distance for the process, where the validity of yet-unexplored non-Gaussian-but-Markovian random walks would start. A bivariate statistical dependence that simplifies to a multi-Gaussian dependence based on simple linear correlation (validity of correlated PTRW). Complete absence of statistical dependence (validity of classical PTRW/CTRW). The approach is to derive a statistical model for pore-scale transport from a powerful experimental data set via copula analysis. The model is formulated as a non-Gaussian, mutually dependent Markov process of higher order, which allows us to investigate the validity ranges of simpler models.
Code of Federal Regulations, 2010 CFR
2010-01-01
...) Daily simple interest formula. (1) To calculate daily simple interest the following formula may be used... a payment is due on April 1 and the payment is not made until April 11, a simple interest... equation calculates simple interest on any additional days beyond a monthly increment. (3) For example, if...
Oppenheim, Gary M; Dell, Gary S; Schwartz, Myrna F
2010-02-01
Naming a picture of a dog primes the subsequent naming of a picture of a dog (repetition priming) and interferes with the subsequent naming of a picture of a cat (semantic interference). Behavioral studies suggest that these effects derive from persistent changes in the way that words are activated and selected for production, and some have claimed that the findings are only understandable by positing a competitive mechanism for lexical selection. We present a simple model of lexical retrieval in speech production that applies error-driven learning to its lexical activation network. This model naturally produces repetition priming and semantic interference effects. It predicts the major findings from several published experiments, demonstrating that these effects may arise from incremental learning. Furthermore, analysis of the model suggests that competition during lexical selection is not necessary for semantic interference if the learning process is itself competitive. Copyright 2009 Elsevier B.V. All rights reserved.
SUSTAIN: a network model of category learning.
Love, Bradley C; Medin, Douglas L; Gureckis, Todd M
2004-04-01
SUSTAIN (Supervised and Unsupervised STratified Adaptive Incremental Network) is a model of how humans learn categories from examples. SUSTAIN initially assumes a simple category structure. If simple solutions prove inadequate and SUSTAIN is confronted with a surprising event (e.g., it is told that a bat is a mammal instead of a bird), SUSTAIN recruits an additional cluster to represent the surprising event. Newly recruited clusters are available to explain future events and can themselves evolve into prototypes-attractors-rules. SUSTAIN's discovery of category substructure is affected not only by the structure of the world but by the nature of the learning task and the learner's goals. SUSTAIN successfully extends category learning models to studies of inference learning, unsupervised learning, category construction, and contexts in which identification learning is faster than classification learning.
2013-09-30
accuracy of the analysis . Root mean square difference ( RMSD ) is much smaller for RIP than for either Simple Ocean Data Assimilation or Incremental... Analysis Update globally for temperature as well as salinity. Regionally the same results were found, with only one exception in which the salinity RMSD ...short-term forecast using a numerical model with the observations taken within the forecast time window. The resulting state is the so-called “ analysis
Spacecraft Internal Acoustic Environment Modeling
NASA Technical Reports Server (NTRS)
Chu, Shao-Sheng R.; Allen Christopher S.
2010-01-01
Acoustic modeling can be used to identify key noise sources, determine/analyze sub-allocated requirements, keep track of the accumulation of minor noise sources, and to predict vehicle noise levels at various stages in vehicle development, first with estimates of noise sources, later with experimental data. This paper describes the implementation of acoustic modeling for design purposes by incrementally increasing model fidelity and validating the accuracy of the model while predicting the noise of sources under various conditions. During FY 07, a simple-geometry Statistical Energy Analysis (SEA) model was developed and validated using a physical mockup and acoustic measurements. A process for modeling the effects of absorptive wall treatments and the resulting reverberation environment were developed. During FY 08, a model with more complex and representative geometry of the Orion Crew Module (CM) interior was built, and noise predictions based on input noise sources were made. A corresponding physical mockup was also built. Measurements were made inside this mockup, and comparisons were made with the model and showed excellent agreement. During FY 09, the fidelity of the mockup and corresponding model were increased incrementally by including a simple ventilation system. The airborne noise contribution of the fans was measured using a sound intensity technique, since the sound power levels were not known beforehand. This is opposed to earlier studies where Reference Sound Sources (RSS) with known sound power level were used. Comparisons of the modeling result with the measurements in the mockup showed excellent results. During FY 10, the fidelity of the mockup and the model were further increased by including an ECLSS (Environmental Control and Life Support System) wall, associated closeout panels, and the gap between ECLSS wall and mockup wall. The effect of sealing the gap and adding sound absorptive treatment to ECLSS wall were also modeled and validated.
Sutherland, John C.
2017-04-15
Linear dichroism provides information on the orientation of chromophores part of, or bound to, an orientable molecule such as DNA. For molecular alignment induced by hydrodynamic shear, the principal axes orthogonal to the direction of alignment are not equivalent. Thus, the magnitude of the flow-induced change in absorption for light polarized parallel to the direction of flow can be more than a factor of two greater than the corresponding change for light polarized perpendicular to both that direction and the shear axis. The ratio of the two flow-induced changes in absorption, the dichroic increment ratio, is characterized using the orthogonalmore » orientation model, which assumes that each absorbing unit is aligned parallel to one of the principal axes of the apparatus. The absorption of the alienable molecules is characterized by components parallel and perpendicular to the orientable axis of the molecule. The dichroic increment ratio indicates that for the alignment of DNA in rectangular flow cells, average alignment is not uniaxial, but for higher shear, as produced in a Couette cell, it can be. The results from the simple model are identical to tensor models for typical experimental configuration. Approaches for measuring the dichroic increment ratio with modern dichrometers are further discussed.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sutherland, John C.
Linear dichroism provides information on the orientation of chromophores part of, or bound to, an orientable molecule such as DNA. For molecular alignment induced by hydrodynamic shear, the principal axes orthogonal to the direction of alignment are not equivalent. Thus, the magnitude of the flow-induced change in absorption for light polarized parallel to the direction of flow can be more than a factor of two greater than the corresponding change for light polarized perpendicular to both that direction and the shear axis. The ratio of the two flow-induced changes in absorption, the dichroic increment ratio, is characterized using the orthogonalmore » orientation model, which assumes that each absorbing unit is aligned parallel to one of the principal axes of the apparatus. The absorption of the alienable molecules is characterized by components parallel and perpendicular to the orientable axis of the molecule. The dichroic increment ratio indicates that for the alignment of DNA in rectangular flow cells, average alignment is not uniaxial, but for higher shear, as produced in a Couette cell, it can be. The results from the simple model are identical to tensor models for typical experimental configuration. Approaches for measuring the dichroic increment ratio with modern dichrometers are further discussed.« less
Sutherland, John C
2017-04-15
Linear dichroism provides information on the orientation of chromophores part of, or bound to, an orientable molecule such as DNA. For molecular alignment induced by hydrodynamic shear, the principal axes orthogonal to the direction of alignment are not equivalent. Thus, the magnitude of the flow-induced change in absorption for light polarized parallel to the direction of flow can be more than a factor of two greater than the corresponding change for light polarized perpendicular to both that direction and the shear axis. The ratio of the two flow-induced changes in absorption, the dichroic increment ratio, is characterized using the orthogonal orientation model, which assumes that each absorbing unit is aligned parallel to one of the principal axes of the apparatus. The absorption of the alienable molecules is characterized by components parallel and perpendicular to the orientable axis of the molecule. The dichroic increment ratio indicates that for the alignment of DNA in rectangular flow cells, average alignment is not uniaxial, but for higher shear, as produced in a Couette cell, it can be. The results from the simple model are identical to tensor models for typical experimental configurations. Approaches for measuring the dichroic increment ratio with modern dichrometers are discussed. Copyright © 2017. Published by Elsevier Inc.
Constraints and Opportunities in GCM Model Development
NASA Technical Reports Server (NTRS)
Schmidt, Gavin; Clune, Thomas
2010-01-01
Over the past 30 years climate models have evolved from relatively simple representations of a few atmospheric processes to complex multi-disciplinary system models which incorporate physics from bottom of the ocean to the mesopause and are used for seasonal to multi-million year timescales. Computer infrastructure over that period has gone from punchcard mainframes to modern parallel clusters. Constraints of working within an ever evolving research code mean that most software changes must be incremental so as not to disrupt scientific throughput. Unfortunately, programming methodologies have generally not kept pace with these challenges, and existing implementations now present a heavy and growing burden on further model development as well as limiting flexibility and reliability. Opportunely, advances in software engineering from other disciplines (e.g. the commercial software industry) as well as new generations of powerful development tools can be incorporated by the model developers to incrementally and systematically improve underlying implementations and reverse the long term trend of increasing development overhead. However, these methodologies cannot be applied blindly, but rather must be carefully tailored to the unique characteristics of scientific software development. We will discuss the need for close integration of software engineers and climate scientists to find the optimal processes for climate modeling.
Power-law confusion: You say incremental, I say differential
NASA Technical Reports Server (NTRS)
Colwell, Joshua E.
1993-01-01
Power-law distributions are commonly used to describe the frequency of occurrences of crater diameters, stellar masses, ring particle sizes, planetesimal sizes, and meteoroid masses to name a few. The distributions are simple, and this simplicity has led to a number of misstatements in the literature about the kind of power-law that is being used: differential, cumulative, or incremental. Although differential and cumulative power-laws are mathematically trivial, it is a hybrid incremental distribution that is often used and the relationship between the incremental distribution and the differential or cumulative distributions is not trivial. In many cases the slope of an incremental power-law will be nearly identical to the slope of the cumulative power-law of the same distribution, not the differential slope. The discussion that follows argues for a consistent usage of these terms and against the oft-made implicit claim that incremental and differential distributions are indistinguishable.
Modeling nonstructural carbohydrate reserve dynamics in forest trees
NASA Astrophysics Data System (ADS)
Richardson, Andrew; Keenan, Trevor; Carbone, Mariah; Pederson, Neil
2013-04-01
Understanding the factors influencing the availability of nonstructural carbohydrate (NSC) reserves is essential for predicting the resilience of forests to climate change and environmental stress. However, carbon allocation processes remain poorly understood and many models either ignore NSC reserves, or use simple and untested representations of NSC allocation and pool dynamics. Using model-data fusion techniques, we combined a parsimonious model of forest ecosystem carbon cycling with novel field sampling and laboratory analyses of NSCs. Simulations were conducted for an evergreen conifer forest and a deciduous broadleaf forest in New England. We used radiocarbon methods based on the 14C "bomb spike" to estimate the age of NSC reserves, and used this to constrain the mean residence time of modeled NSCs. We used additional data, including tower-measured fluxes of CO2, soil and biomass carbon stocks, woody biomass increment, and leaf area index and litterfall, to further constrain the model's parameters and initial conditions. Incorporation of fast- and slow-cycling NSC pools improved the ability of the model to reproduce the measured interannual variability in woody biomass increment. We show how model performance varies according to model structure and total pool size, and we use novel diagnostic criteria, based on autocorrelation statistics of annual biomass growth, to evaluate the model's ability to correctly represent lags and memory effects.
Land, K C; Guralnik, J M; Blazer, D G
1994-05-01
A fundamental limitation of current multistate life table methodology-evident in recent estimates of active life expectancy for the elderly-is the inability to estimate tables from data on small longitudinal panels in the presence of multiple covariates (such as sex, race, and socioeconomic status). This paper presents an approach to such an estimation based on an isomorphism between the structure of the stochastic model underlying a conventional specification of the increment-decrement life table and that of Markov panel regression models for simple state spaces. We argue that Markov panel regression procedures can be used to provide smoothed or graduated group-specific estimates of transition probabilities that are more stable across short age intervals than those computed directly from sample data. We then join these estimates with increment-decrement life table methods to compute group-specific total, active, and dependent life expectancy estimates. To illustrate the methods, we describe an empirical application to the estimation of such life expectancies specific to sex, race, and education (years of school completed) for a longitudinal panel of elderly persons. We find that education extends both total life expectancy and active life expectancy. Education thus may serve as a powerful social protective mechanism delaying the onset of health problems at older ages.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lara-Castells, María Pilar de, E-mail: Pilar.deLara.Castells@csic.es; Mitrushchenkov, Alexander O.; Stoll, Hermann
2015-09-14
A combined density functional (DFT) and incremental post-Hartree-Fock (post-HF) approach, proven earlier to calculate He-surface potential energy surfaces [de Lara-Castells et al., J. Chem. Phys. 141, 151102 (2014)], is applied to describe the van der Waals dominated Ag{sub 2}/graphene interaction. It extends the dispersionless density functional theory developed by Pernal et al. [Phys. Rev. Lett. 103, 263201 (2009)] by including periodic boundary conditions while the dispersion is parametrized via the method of increments [H. Stoll, J. Chem. Phys. 97, 8449 (1992)]. Starting with the elementary cluster unit of the target surface (benzene), continuing through the realistic cluster model (coronene), andmore » ending with the periodic model of the extended system, modern ab initio methodologies for intermolecular interactions as well as state-of-the-art van der Waals-corrected density functional-based approaches are put together both to assess the accuracy of the composite scheme and to better characterize the Ag{sub 2}/graphene interaction. The present work illustrates how the combination of DFT and post-HF perspectives may be efficient to design simple and reliable ab initio-based schemes in extended systems for surface science applications.« less
Simulation of fatigue crack growth under large scale yielding conditions
NASA Astrophysics Data System (ADS)
Schweizer, Christoph; Seifert, Thomas; Riedel, Hermann
2010-07-01
A simple mechanism based model for fatigue crack growth assumes a linear correlation between the cyclic crack-tip opening displacement (ΔCTOD) and the crack growth increment (da/dN). The objective of this work is to compare analytical estimates of ΔCTOD with results of numerical calculations under large scale yielding conditions and to verify the physical basis of the model by comparing the predicted and the measured evolution of the crack length in a 10%-chromium-steel. The material is described by a rate independent cyclic plasticity model with power-law hardening and Masing behavior. During the tension-going part of the cycle, nodes at the crack-tip are released such that the crack growth increment corresponds approximately to the crack-tip opening. The finite element analysis performed in ABAQUS is continued for so many cycles until a stabilized value of ΔCTOD is reached. The analytical model contains an interpolation formula for the J-integral, which is generalized to account for cyclic loading and crack closure. Both simulated and estimated ΔCTOD are reasonably consistent. The predicted crack length evolution is found to be in good agreement with the behavior of microcracks observed in a 10%-chromium steel.
A General Interface Method for Aeroelastic Analysis of Aircraft
NASA Technical Reports Server (NTRS)
Tzong, T.; Chen, H. H.; Chang, K. C.; Wu, T.; Cebeci, T.
1996-01-01
The aeroelastic analysis of an aircraft requires an accurate and efficient procedure to couple aerodynamics and structures. The procedure needs an interface method to bridge the gap between the aerodynamic and structural models in order to transform loads and displacements. Such an interface method is described in this report. This interface method transforms loads computed by any aerodynamic code to a structural finite element (FE) model and converts the displacements from the FE model to the aerodynamic model. The approach is based on FE technology in which virtual work is employed to transform the aerodynamic pressures into FE nodal forces. The displacements at the FE nodes are then converted back to aerodynamic grid points on the aircraft surface through the reciprocal theorem in structural engineering. The method allows both high and crude fidelities of both models and does not require an intermediate modeling. In addition, the method performs the conversion of loads and displacements directly between individual aerodynamic grid point and its corresponding structural finite element and, hence, is very efficient for large aircraft models. This report also describes the application of this aero-structure interface method to a simple wing and an MD-90 wing. The results show that the aeroelastic effect is very important. For the simple wing, both linear and nonlinear approaches are used. In the linear approach, the deformation of the structural model is considered small, and the loads from the deformed aerodynamic model are applied to the original geometry of the structure. In the nonlinear approach, the geometry of the structure and its stiffness matrix are updated in every iteration and the increments of loads from the previous iteration are applied to the new structural geometry in order to compute the displacement increments. Additional studies to apply the aero-structure interaction procedure to more complicated geometry will be conducted in the second phase of the present contract.
Proton beam therapy and accountable care: the challenges ahead.
Elnahal, Shereef M; Kerstiens, John; Helsper, Richard S; Zietman, Anthony L; Johnstone, Peter A S
2013-03-15
Proton beam therapy (PBT) centers have drawn increasing public scrutiny for their high cost. The behavior of such facilities is likely to change under the Affordable Care Act. We modeled how accountable care reform may affect the financial standing of PBT centers and their incentives to treat complex patient cases. We used operational data and publicly listed Medicare rates to model the relationship between financial metrics for PBT center performance and case mix (defined as the percentage of complex cases, such as pediatric central nervous system tumors). Financial metrics included total daily revenues and debt coverage (daily revenues - daily debt payments). Fee-for-service (FFS) and accountable care (ACO) reimbursement scenarios were modeled. Sensitivity analyses were performed around the room time required to treat noncomplex cases: simple (30 minutes), prostate (24 minutes), and short prostate (15 minutes). Sensitivity analyses were also performed for total machine operating time (14, 16, and 18 h/d). Reimbursement under ACOs could reduce daily revenues in PBT centers by up to 32%. The incremental revenue gained by replacing 1 complex case with noncomplex cases was lowest for simple cases and highest for short prostate cases. ACO rates reduced this incremental incentive by 53.2% for simple cases and 41.7% for short prostate cases. To cover daily debt payments after ACO rates were imposed, 26% fewer complex patients were allowable at varying capital costs and interest rates. Only facilities with total machine operating times of 18 hours per day would cover debt payments in all scenarios. Debt-financed PBT centers will face steep challenges to remain financially viable after ACO implementation. Paradoxically, reduced reimbursement for noncomplex cases will require PBT centers to treat more such cases over cases for which PBT has demonstrated superior outcomes. Relative losses will be highest for those facilities focused primarily on treating noncomplex cases. Copyright © 2013 Elsevier Inc. All rights reserved.
Proton Beam Therapy and Accountable Care: The Challenges Ahead
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elnahal, Shereef M., E-mail: selnahal@partners.org; Kerstiens, John; Helsper, Richard S.
2013-03-15
Purpose: Proton beam therapy (PBT) centers have drawn increasing public scrutiny for their high cost. The behavior of such facilities is likely to change under the Affordable Care Act. We modeled how accountable care reform may affect the financial standing of PBT centers and their incentives to treat complex patient cases. Methods and Materials: We used operational data and publicly listed Medicare rates to model the relationship between financial metrics for PBT center performance and case mix (defined as the percentage of complex cases, such as pediatric central nervous system tumors). Financial metrics included total daily revenues and debt coveragemore » (daily revenues − daily debt payments). Fee-for-service (FFS) and accountable care (ACO) reimbursement scenarios were modeled. Sensitivity analyses were performed around the room time required to treat noncomplex cases: simple (30 minutes), prostate (24 minutes), and short prostate (15 minutes). Sensitivity analyses were also performed for total machine operating time (14, 16, and 18 h/d). Results: Reimbursement under ACOs could reduce daily revenues in PBT centers by up to 32%. The incremental revenue gained by replacing 1 complex case with noncomplex cases was lowest for simple cases and highest for short prostate cases. ACO rates reduced this incremental incentive by 53.2% for simple cases and 41.7% for short prostate cases. To cover daily debt payments after ACO rates were imposed, 26% fewer complex patients were allowable at varying capital costs and interest rates. Only facilities with total machine operating times of 18 hours per day would cover debt payments in all scenarios. Conclusions: Debt-financed PBT centers will face steep challenges to remain financially viable after ACO implementation. Paradoxically, reduced reimbursement for noncomplex cases will require PBT centers to treat more such cases over cases for which PBT has demonstrated superior outcomes. Relative losses will be highest for those facilities focused primarily on treating noncomplex cases.« less
Investigation of electrical and magnetic properties of ferro-nanofluid on transformers
2011-01-01
This study investigated a simple model of transformers that have liquid magnetic cores with different concentrations of ferro-nanofluids. The simple model was built on a capillary by enamel-insulated wires and with ferro-nanofluid loaded in the capillary. The ferro-nanofluid was fabricated by a chemical co-precipitation method. The performances of the transformers with either air core or ferro-nanofluid at different concentrations of nanoparticles of 0.25, 0.5, 0.75, and 1 M were measured and simulated at frequencies ranging from 100 kHz to 100 MHz. The experimental results indicated that the inductance and coupling coefficient of coils grew with the increment of the ferro-nanofluid concentration. The presence of ferro-nanofluid increased resistance, yielding to the decrement of the quality factor, owing to the phase lag between the external magnetic field and the magnetization of the material. PMID:21711784
Investigation of electrical and magnetic properties of ferro-nanofluid on transformers.
Tsai, Tsung-Han; Chen, Ping-Hei; Lee, Da-Sheng; Yang, Chin-Ting
2011-03-28
This study investigated a simple model of transformers that have liquid magnetic cores with different concentrations of ferro-nanofluids. The simple model was built on a capillary by enamel-insulated wires and with ferro-nanofluid loaded in the capillary. The ferro-nanofluid was fabricated by a chemical co-precipitation method. The performances of the transformers with either air core or ferro-nanofluid at different concentrations of nanoparticles of 0.25, 0.5, 0.75, and 1 M were measured and simulated at frequencies ranging from 100 kHz to 100 MHz. The experimental results indicated that the inductance and coupling coefficient of coils grew with the increment of the ferro-nanofluid concentration. The presence of ferro-nanofluid increased resistance, yielding to the decrement of the quality factor, owing to the phase lag between the external magnetic field and the magnetization of the material.
Morphology of residually stressed tubular tissues: Beyond the elastic multiplicative decomposition
NASA Astrophysics Data System (ADS)
Ciarletta, P.; Destrade, M.; Gower, A. L.; Taffetani, M.
2016-05-01
Many interesting shapes appearing in the biological world are formed by the onset of mechanical instability. In this work we consider how the build-up of residual stress can cause a solid to buckle. In all past studies a fictitious (virtual) stress-free state was required to calculate the residual stress. In contrast, we use a model which is simple and allows the prescription of any residual stress field. We specialize the analysis to an elastic tube subject to a two-dimensional residual stress, and find that incremental wrinkles can appear on its inner or its outer face, depending on the location of the highest value of the residual hoop stress. We further validate the predictions of the incremental theory with finite element simulations, which allow us to go beyond this threshold and predict the shape, number and amplitude of the resulting creases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siranart, Nopphon; Blakely, Eleanor A.; Cheng, Alden
Complex mixed radiation fields exist in interplanetary space, and not much is known about their latent effects on space travelers. In silico synergy analysis default predictions are useful when planning relevant mixed-ion-beam experiments and interpreting their results. These predictions are based on individual dose-effect relationships (IDER) for each component of the mixed-ion beam, assuming no synergy or antagonism. For example, a default hypothesis of simple effect additivity has often been used throughout the study of biology. However, for more than a century pharmacologists interested in mixtures of therapeutic drugs have analyzed conceptual, mathematical and practical questions similar to those thatmore » arise when analyzing mixed radiation fields, and have shown that simple effect additivity often gives unreasonable predictions when the IDER are curvilinear. Various alternatives to simple effect additivity proposed in radiobiology, pharmacometrics, toxicology and other fields are also known to have important limitations. In this work, we analyze upcoming murine Harderian gland (HG) tumor prevalence mixed-beam experiments, using customized open-source software and published IDER from past single-ion experiments. The upcoming experiments will use acute irradiation and the mixed beam will include components of high atomic number and energy (HZE). We introduce a new alternative to simple effect additivity, "incremental effect additivity", which is more suitable for the HG analysis and perhaps for other end points. We use incremental effect additivity to calculate default predictions for mixture dose-effect relationships, including 95% confidence intervals. We have drawn three main conclusions from this work. 1. It is important to supplement mixed-beam experiments with single-ion experiments, with matching end point(s), shielding and dose timing. 2. For HG tumorigenesis due to a mixed beam, simple effect additivity and incremental effect additivity sometimes give default predictions that are numerically close. However, if nontargeted effects are important and the mixed beam includes a number of different HZE components, simple effect additivity becomes unusable and another method is needed such as incremental effect additivity. 3. Eventually, synergy analysis default predictions of the effects of mixed radiation fields will be replaced by more mechanistic, biophysically-based predictions. However, optimizing synergy analyses is an important first step. If mixed-beam experiments indicate little synergy or antagonism, plans by NASA for further experiments and possible missions beyond low earth orbit will be substantially simplified.« less
Load Adaptability in Patients With Pulmonary Arterial Hypertension.
Amsallem, Myriam; Boulate, David; Aymami, Marie; Guihaire, Julien; Selej, Mona; Huo, Jennie; Denault, Andre Y; McConnell, Michael V; Schnittger, Ingela; Fadel, Elie; Mercier, Olaf; Zamanian, Roham T; Haddad, Francois
2017-09-01
Right ventricular (RV) adaptation to pressure overload is a major prognostic factor in patients with pulmonary arterial hypertension (PAH). The objectives were first to define the relation between RV adaptation and load using allometric modeling, then to compare the prognostic value of different indices of load adaptability in PAH. Both a derivation (n = 85) and a validation cohort (n = 200) were included. Load adaptability was assessed using 3 approaches: (1) surrogates of ventriculo-arterial coupling (e.g., RV area change/end-systolic area), (2) simple ratio of function and load (e.g., tricuspid annular plane systolic excursion/right ventricular systolic pressure), and (3) indices assessing the proportionality of adaptation using allometric pressure-function or size modeling. Proportional hazard modeling was used to compare the hazard ratio for the outcome of death or lung transplantation. The mean age of the derivation cohort was 44 ± 11 years, with 80% female and 74% in New York Heart Association class III or IV. Mean pulmonary vascular resistance index (PVRI) was 24 ± 11 with a wide distribution (1.6 to 57.5 WU/m 2 ). Allometric relations were observed between PVRI and RV fractional area change (R 2 = 0.53, p < 0.001) and RV end-systolic area indexed to body surface area right ventricular end-systolic area index (RVESAI) (R 2 = 0.29, p < 0.001), allowing the derivation of simple ratiometric load-specific indices of RV adaptation. In right heart parameters, RVESAI was the strongest predictor of outcomes (hazard ratio per SD = 1.93, 95% confidence interval 1.37 to 2.75, p < 0.001). Although RVESAI/PVRI 0.35 provided small incremental discrimination on multivariate modeling, none of the load-adaptability indices provided stronger discrimination of outcome than simple RV adaptation metrics in either the derivation or the validation cohort. In conclusion, allometric modeling enables quantification of the proportionality of RV load adaptation but offers small incremental prognostic value to RV end-systolic dimension in PAH. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Langenfeld, K.; Junker, P.; Mosler, J.
2018-05-01
This paper deals with a constitutive model suitable for the analysis of quasi-brittle damage in structures. The model is based on incremental energy relaxation combined with a viscous-type regularization. A similar approach—which also represents the inspiration for the improved model presented in this paper—was recently proposed in Junker et al. (Contin Mech Thermodyn 29(1):291-310, 2017). Within this work, the model introduced in Junker et al. (2017) is critically analyzed first. This analysis leads to an improved model which shows the same features as that in Junker et al. (2017), but which (i) eliminates unnecessary model parameters, (ii) can be better interpreted from a physics point of view, (iii) can capture a fully softened state (zero stresses), and (iv) is characterized by a very simple evolution equation. In contrast to the cited work, this evolution equation is (v) integrated fully implicitly and (vi) the resulting time-discrete evolution equation can be solved analytically providing a numerically efficient closed-form solution. It is shown that the final model is indeed well-posed (i.e., its tangent is positive definite). Explicit conditions guaranteeing this well-posedness are derived. Furthermore, by additively decomposing the stress rate into deformation- and purely time-dependent terms, the functionality of the model is explained. Illustrative numerical examples confirm the theoretical findings.
Knowledge acquisition for a simple expert controller
NASA Technical Reports Server (NTRS)
Bieker, B.
1987-01-01
A method is presented for process control which has the properties of being incremental, cyclic and top-down. It is described on the basis of the development of an expert controller for a simple, but nonlinear control route. A quality comparison between expert controller and process operator shows the ability of the method for knowledge acquisition.
ERIC Educational Resources Information Center
Witzel, Jeffrey; Witzel, Naoko
2016-01-01
This study investigates preverbal structural and semantic processing in Japanese, a head-final language, using the maze task. Two sentence types were tested--simple scrambled sentences (Experiment 1) and control sentences (Experiment 2). Experiment 1 showed that even for simple, mono-clausal Japanese sentences, (1) there are online processing…
A Simple and Accurate Analysis of Conductivity Loss in Millimeter-Wave Helical Slow-Wave Structures
NASA Astrophysics Data System (ADS)
Datta, S. K.; Kumar, Lalit; Basu, B. N.
2009-04-01
Electromagnetic field analysis of a helix slow-wave structure was carried out and a closed form expression was derived for the inductance per unit length of the transmission-line equivalent circuit of the structure, taking into account the actual helix tape dimensions and surface current on the helix over the actual metallic area of the tape. The expression of the inductance per unit length, thus obtained, was used for estimating the increment in the inductance per unit length caused due to penetration of the magnetic flux into the conducting surfaces following Wheeler’s incremental inductance rule, which was subsequently interpreted for the attenuation constant of the propagating structure. The analysis was computationally simple and accurate, and accrues the accuracy of 3D electromagnetic analysis by allowing the use of dispersion characteristics obtainable from any standard electromagnetic modeling. The approach was benchmarked against measurement for two practical structures, and excellent agreement was observed. The analysis was subsequently applied to demonstrate the effects of conductivity on the attenuation constant of a typical broadband millimeter-wave helical slow-wave structure with respect to helix materials and copper plating on the helix, surface finish of the helix, dielectric loading effect and effect of high temperature operation - a comparative study of various such aspects are covered.
Formal Semantics and Implementation of BPMN 2.0 Inclusive Gateways
NASA Astrophysics Data System (ADS)
Christiansen, David Raymond; Carbone, Marco; Hildebrandt, Thomas
We present the first direct formalization of the semantics of inclusive gateways as described in the Business Process Modeling Notation (BPMN) 2.0 Beta 1 specification. The formal semantics is given for a minimal subset of BPMN 2.0 containing just the inclusive and exclusive gateways and the start and stop events. By focusing on this subset we achieve a simple graph model that highlights the particular non-local features of the inclusive gateway semantics. We sketch two ways of implementing the semantics using algorithms based on incrementally updated data structures and also discuss distributed communication-based implementations of the two algorithms.
NASA Astrophysics Data System (ADS)
Flores, P.; Duchêne, L.; Lelotte, T.; Bouffioux, C.; El Houdaigui, F.; Van Bael, A.; He, S.; Duflou, J.; Habraken, A. M.
2005-08-01
The bi-axial experimental equipment developed by Flores enables to perform Baushinger shear tests and successive or simultaneous simple shear tests and plane-strain tests. Such experiments and classical tensile tests investigate the material behavior in order to identify the yield locus and the hardening models. With tests performed on two steel grades, the methods applied to identify classical yield surfaces such as Hill or Hosford ones as well as isotropic Swift type hardening or kinematic Armstrong-Frederick hardening models are explained. Comparison with the Taylor-Bishop-Hill yield locus is also provided. The effect of both yield locus and hardening model choice will be presented for two applications: Single Point Incremental Forming (SPIF) and a cup deep drawing.
Martingales, nonstationary increments, and the efficient market hypothesis
NASA Astrophysics Data System (ADS)
McCauley, Joseph L.; Bassler, Kevin E.; Gunaratne, Gemunu H.
2008-06-01
We discuss the deep connection between nonstationary increments, martingales, and the efficient market hypothesis for stochastic processes x(t) with arbitrary diffusion coefficients D(x,t). We explain why a test for a martingale is generally a test for uncorrelated increments. We explain why martingales look Markovian at the level of both simple averages and 2-point correlations. But while a Markovian market has no memory to exploit and cannot be beaten systematically, a martingale admits memory that might be exploitable in higher order correlations. We also use the analysis of this paper to correct a misstatement of the ‘fair game’ condition in terms of serial correlations in Fama’s paper on the EMH. We emphasize that the use of the log increment as a variable in data analysis generates spurious fat tails and spurious Hurst exponents.
NASA Technical Reports Server (NTRS)
Griesel, Martha Ann
1988-01-01
Several Laboratory software development projects that followed nonstandard development processes, which were hybrids of incremental development and prototyping, are being studied. Factors in the project environment leading to the decision to use a nonstandard development process and affecting its success are analyzed. A simple characterization of project environment based on this analysis is proposed, together with software development approaches which have been found effective for each category. These approaches include both documentation and review requirements.
A simple method for quantitating the propensity for calcium oxalate crystallization in urine
NASA Technical Reports Server (NTRS)
Wabner, C. L.; Pak, C. Y.
1991-01-01
To assess the propensity for spontaneous crystallization of calcium oxalate in urine, the permissible increment in oxalate is calculated. The previous method required visual observation of crystallization with the addition of oxalate, this warranted the need for a large volume of urine and a sacrifice in accuracy in defining differences between small incremental changes of added oxalate. Therefore, this method has been miniaturized and spontaneous crystallization is detected from the depletion of radioactive oxalate. The new "micro" method demonstrated a marked decrease (p < 0.001) in the permissible increment in oxalate in urine of stone formers versus normal subjects. Moreover, crystallization inhibitors added to urine, in vitro (heparin or diphosphonate) or in vivo (potassium citrate administration), substantially increased the permissible increment in oxalate. Thus, the "micro" method has proven reliable and accurate in discriminating stone forming from control urine and in distinguishing changes of inhibitory activity.
Linear regression metamodeling as a tool to summarize and present simulation model results.
Jalal, Hawre; Dowd, Bryan; Sainfort, François; Kuntz, Karen M
2013-10-01
Modelers lack a tool to systematically and clearly present complex model results, including those from sensitivity analyses. The objective was to propose linear regression metamodeling as a tool to increase transparency of decision analytic models and better communicate their results. We used a simplified cancer cure model to demonstrate our approach. The model computed the lifetime cost and benefit of 3 treatment options for cancer patients. We simulated 10,000 cohorts in a probabilistic sensitivity analysis (PSA) and regressed the model outcomes on the standardized input parameter values in a set of regression analyses. We used the regression coefficients to describe measures of sensitivity analyses, including threshold and parameter sensitivity analyses. We also compared the results of the PSA to deterministic full-factorial and one-factor-at-a-time designs. The regression intercept represented the estimated base-case outcome, and the other coefficients described the relative parameter uncertainty in the model. We defined simple relationships that compute the average and incremental net benefit of each intervention. Metamodeling produced outputs similar to traditional deterministic 1-way or 2-way sensitivity analyses but was more reliable since it used all parameter values. Linear regression metamodeling is a simple, yet powerful, tool that can assist modelers in communicating model characteristics and sensitivity analyses.
NASA Astrophysics Data System (ADS)
Liu, Peng; Yang, Yong-qing; Li, Zhi-guo; Han, Jun-feng; Wei, Yu; Jing, Feng
2018-02-01
Aiming at the shortage of the incremental encoder with simple process to change along the count "in the presence of repeatability and anti disturbance ability, combined with its application in a large project in the country, designed an electromechanical switch for generating zero, zero crossing signal. A mechanical zero electric and zero coordinate transformation model is given to meet the path optimality, single, fast and accurate requirements of adaptive fast change algorithm, the proposed algorithm can effectively solve the contradiction between the accuracy and the change of the time change. A test platform is built to verify the effectiveness and robustness of the proposed algorithm. The experimental data show that the effect of the algorithm accuracy is not influenced by the change of the speed of change, change the error of only 0.0013. Meet too fast, the change of system accuracy, and repeated experiments show that this algorithm has high robustness.
Incremental Contingency Planning
NASA Technical Reports Server (NTRS)
Dearden, Richard; Meuleau, Nicolas; Ramakrishnan, Sailesh; Smith, David E.; Washington, Rich
2003-01-01
There has been considerable work in AI on planning under uncertainty. However, this work generally assumes an extremely simple model of action that does not consider continuous time and resources. These assumptions are not reasonable for a Mars rover, which must cope with uncertainty about the duration of tasks, the energy required, the data storage necessary, and its current position and orientation. In this paper, we outline an approach to generating contingency plans when the sources of uncertainty involve continuous quantities such as time and resources. The approach involves first constructing a "seed" plan, and then incrementally adding contingent branches to this plan in order to improve utility. The challenge is to figure out the best places to insert contingency branches. This requires an estimate of how much utility could be gained by building a contingent branch at any given place in the seed plan. Computing this utility exactly is intractable, but we outline an approximation method that back propagates utility distributions through a graph structure similar to that of a plan graph.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ismail, Norilmi Amilia, E-mail: aenorilmi@usm.my
The motorized momentum exchange tether (MMET) is capable of generating useful velocity increments through spin–orbit coupling. This study presents a comparative study of the velocity increments between the rigid body and flexible models of MMET. The equations of motions of both models in the time domain are transformed into a function of true anomaly. The equations of motion are integrated, and the responses in terms of the velocity increment of the rigid body and flexible models are compared and analysed. Results show that the initial conditions, eccentricity, and flexibility of the tether have significant effects on the velocity increments ofmore » the tether.« less
NASA Astrophysics Data System (ADS)
Guadagnini, A.; Riva, M.; Neuman, S. P.
2016-12-01
Environmental quantities such as log hydraulic conductivity (or transmissivity), Y(x) = ln K(x), and their spatial (or temporal) increments, ΔY, are known to be generally non-Gaussian. Documented evidence of such behavior includes symmetry of increment distributions at all separation scales (or lags) between incremental values of Y with sharp peaks and heavy tails that decay asymptotically as lag increases. This statistical scaling occurs in porous as well as fractured media characterized by either one or a hierarchy of spatial correlation scales. In hierarchical media one observes a range of additional statistical ΔY scaling phenomena, all of which are captured comprehensibly by a novel generalized sub-Gaussian (GSG) model. In this model Y forms a mixture Y(x) = U(x) G(x) of single- or multi-scale Gaussian processes G having random variances, U being a non-negative subordinator independent of G. Elsewhere we developed ways to generate unconditional and conditional random realizations of isotropic or anisotropic GSG fields which can be embedded in numerical Monte Carlo flow and transport simulations. Here we present and discuss expressions for probability distribution functions of Y and ΔY as well as their lead statistical moments. We then focus on a simple flow setting of mean uniform steady state flow in an unbounded, two-dimensional domain, exploring ways in which non-Gaussian heterogeneity affects stochastic flow and transport descriptions. Our expressions represent (a) lead order autocovariance and cross-covariance functions of hydraulic head, velocity and advective particle displacement as well as (b) analogues of preasymptotic and asymptotic Fickian dispersion coefficients. We compare them with corresponding expressions developed in the literature for Gaussian Y.
NASA Astrophysics Data System (ADS)
Almog, Assaf; Garlaschelli, Diego
2014-09-01
The dynamics of complex systems, from financial markets to the brain, can be monitored in terms of multiple time series of activity of the constituent units, such as stocks or neurons, respectively. While the main focus of time series analysis is on the magnitude of temporal increments, a significant piece of information is encoded into the binary projection (i.e. the sign) of such increments. In this paper we provide further evidence of this by showing strong nonlinear relations between binary and non-binary properties of financial time series. These relations are a novel quantification of the fact that extreme price increments occur more often when most stocks move in the same direction. We then introduce an information-theoretic approach to the analysis of the binary signature of single and multiple time series. Through the definition of maximum-entropy ensembles of binary matrices and their mapping to spin models in statistical physics, we quantify the information encoded into the simplest binary properties of real time series and identify the most informative property given a set of measurements. Our formalism is able to accurately replicate, and mathematically characterize, the observed binary/non-binary relations. We also obtain a phase diagram allowing us to identify, based only on the instantaneous aggregate return of a set of multiple time series, a regime where the so-called ‘market mode’ has an optimal interpretation in terms of collective (endogenous) effects, a regime where it is parsimoniously explained by pure noise, and a regime where it can be regarded as a combination of endogenous and exogenous factors. Our approach allows us to connect spin models, simple stochastic processes, and ensembles of time series inferred from partial information.
Don C. Bragg
2002-01-01
This article is an introduction to the computer software used by the Potential Relative Increment (PRI) approach to optimal tree diameter growth modeling. These DOS programs extract qualified tree and plot data from the Eastwide Forest Inventory Data Base (EFIDB), calculate relative tree increment, sort for the highest relative increments by diameter class, and...
NASA Astrophysics Data System (ADS)
Kuhn, Matthew R.; Daouadji, Ali
2018-05-01
The paper addresses a common assumption of elastoplastic modeling: that the recoverable, elastic strain increment is unaffected by alterations of the elastic moduli that accompany loading. This assumption is found to be false for a granular material, and discrete element (DEM) simulations demonstrate that granular materials are coupled materials at both micro- and macro-scales. Elasto-plastic coupling at the macro-scale is placed in the context of thermomechanics framework of Tomasz Hueckel and Hans Ziegler, in which the elastic moduli are altered by irreversible processes during loading. This complex behavior is explored for multi-directional loading probes that follow an initial monotonic loading. An advanced DEM model is used in the study, with non-convex non-spherical particles and two different contact models: a conventional linear-frictional model and an exact implementation of the Hertz-like Cattaneo-Mindlin model. Orthotropic true-triaxial probes were used in the study (i.e., no direct shear strain), with tiny strain increments of 2 ×10-6 . At the micro-scale, contact movements were monitored during small increments of loading and load-reversal, and results show that these movements are not reversed by a reversal of strain direction, and some contacts that were sliding during a loading increment continue to slide during reversal. The probes show that the coupled part of a strain increment, the difference between the recoverable (elastic) increment and its reversible part, must be considered when partitioning strain increments into elastic and plastic parts. Small increments of irreversible (and plastic) strain and contact slipping and frictional dissipation occur for all directions of loading, and an elastic domain, if it exists at all, is smaller than the strain increment used in the simulations.
Petrology and age of alkalic lava from the Ratak Chain of the Marshall Islands
Davis, A.S.; Pringle, M.S.; Pickthorn, L.-B.G.; Clague, D.A.; Schwab, W.C.
1989-01-01
Volcanic rock dredged from the flanks of four volcanic edifices in the Ratak chain of the Marshall Islands consist of alkalic lava that erupted above sea level or in shallow water. Compositions of recovered samples are predominantly differentiated alkalic basalt and hawaiite but include strongly alkalic melilitite. Whole rock 40Ar/39Ar total fusion and incremental heating ages of 87.3 ?? 0.6 Ma and 82.2 ?? 1.6 Ma determined for samples from Erikub Seamount and Ratak Guyot, respectively, are within the range predicted by plate rotation models but show no age progression consistent with a simple hot spot model. Variations in isotopic and some incompatible element ratios suggest interisland heterogeneity. -from Authors
NASA Astrophysics Data System (ADS)
Wong, Pak-kin; Vong, Chi-man; Wong, Hang-cheong; Li, Ke
2010-05-01
Modern automotive spark-ignition (SI) power performance usually refers to output power and torque, and they are significantly affected by the setup of control parameters in the engine management system (EMS). EMS calibration is done empirically through tests on the dynamometer (dyno) because no exact mathematical engine model is yet available. With an emerging nonlinear function estimation technique of Least squares support vector machines (LS-SVM), the approximate power performance model of a SI engine can be determined by training the sample data acquired from the dyno. A novel incremental algorithm based on typical LS-SVM is also proposed in this paper, so the power performance models built from the incremental LS-SVM can be updated whenever new training data arrives. With updating the models, the model accuracies can be continuously increased. The predicted results using the estimated models from the incremental LS-SVM are good agreement with the actual test results and with the almost same average accuracy of retraining the models from scratch, but the incremental algorithm can significantly shorten the model construction time when new training data arrives.
Incremental isometric embedding of high-dimensional data using connected neighborhood graphs.
Zhao, Dongfang; Yang, Li
2009-01-01
Most nonlinear data embedding methods use bottom-up approaches for capturing the underlying structure of data distributed on a manifold in high dimensional space. These methods often share the first step which defines neighbor points of every data point by building a connected neighborhood graph so that all data points can be embedded to a single coordinate system. These methods are required to work incrementally for dimensionality reduction in many applications. Because input data stream may be under-sampled or skewed from time to time, building connected neighborhood graph is crucial to the success of incremental data embedding using these methods. This paper presents algorithms for updating $k$-edge-connected and $k$-connected neighborhood graphs after a new data point is added or an old data point is deleted. It further utilizes a simple algorithm for updating all-pair shortest distances on the neighborhood graph. Together with incremental classical multidimensional scaling using iterative subspace approximation, this paper devises an incremental version of Isomap with enhancements to deal with under-sampled or unevenly distributed data. Experiments on both synthetic and real-world data sets show that the algorithm is efficient and maintains low dimensional configurations of high dimensional data under various data distributions.
Martin, Andrew J
2015-06-01
There has been increasing interest in growth approaches to students' academic development, including value-added models, modelling of academic trajectories, growth motivation orientations, growth mindsets, and growth goals. This study sought to investigate the relationships between implicit theories about intelligence (incremental and entity theories) and growth (personal best, PB) goals - with particular interest in the ordering of factors across time. The study focused on longitudinal data of 969 Australian high school students. The classic cross-lagged panel design (using structural equation modelling) was employed to shed light on the ordering of Time 1 growth goals, incremental theories, and entity theories relative to Time 2 (1 year later) growth goals, incremental theories, and entity theories. Findings showed that Time 1 growth goals predicted Time 2 incremental theories (positively) and entity theories (negatively); Time 1 entity and incremental theories negatively predicted Time 2 incremental and entity theories respectively; but, Time 1 incremental theories and entity theories did not predict growth goals at Time 2. This suggests that entity and incremental theories are negatively reciprocally related across time, but growth goals seem to be directionally salient over incremental and entity theories. Implications for promoting growth goals and growth mindsets are discussed. © 2014 The British Psychological Society.
Code of Federal Regulations, 2010 CFR
2010-07-01
... incremental cost model shall be reported. ... 39 Postal Service 1 2010-07-01 2010-07-01 false Documentation supporting incremental cost... REGULATORY COMMISSION PERSONNEL PERIODIC REPORTING § 3050.23 Documentation supporting incremental cost...
Zhang, Hai Ping; Li, Feng Ri; Dong, Li Hu; Liu, Qiang
2017-06-18
Based on the 212 re-measured permanent plots for natural Betula platyphylla fore-sts in Daxing'an Mountains and Xiaoxing'an Mountains and 30 meteorological stations data, an individual tree growth model based on meteorological factors was constructed. The differences of stand and meteorological factors between Daxing'an Mountains and Xiaoxing'an Mountains were analyzed and the diameter increment model including the regional effects was developed by dummy variable approach. The results showed that the minimum temperature (T g min ) and mean precipitation (P g m ) in growing season were the main meteorological factors which affected the diameter increment in the two study areas. T g min and P g m were positively correlated with the diameter increment, but the influence strength of T g min was obviously different between the two research areas. The adjusted coefficient of determination (R a 2 ) of the diameter increment model with meteorological factors was 0.56 and had an 11% increase compared to the one without meteorological factors. It was concluded that meteorological factors could well explain the diameter increment of B. platyphylla. R a 2 of the model with regional effects was 0.59, and increased by 18% compared to the one without regional effects, and effectively solved the incompatible problem of parameters between the two research areas. The validation results showed that the individual tree diameter growth model with regional effect had the best prediction accuracy in estimating the diameter increment of B. platyphylla. The mean error, mean absolute error, mean error percent and mean prediction error percent were 0.0086, 0.4476, 5.8% and 20.0%, respectively. Overall, dummy variable model of individual tree diameter increment based on meteorological factors could well describe the diameter increment process of natural B. platyphylla in Daxing'an Mountains and Xiaoxing'an Mountains.
Pricing and Welfare in Health Plan Choice.
Bundorf, M Kate; Levin, Jonathan; Mahoney, Neale
2012-12-01
Premiums in health insurance markets frequently do not reflect individual differences in costs, either because consumers have private information or because prices are not risk rated. This creates inefficiencies when consumers self-select into plans. We develop a simple econometric model to study this problem and estimate it using data on small employers. We find a welfare loss of 2-11 percent of coverage costs compared to what is feasible with risk rating. Only about one-quarter of this is due to inefficiently chosen uniform contribution levels. We also investigate the reclassification risk created by risk rating individual incremental premiums, finding only a modest welfare cost.
Personal manufacturing systems
NASA Astrophysics Data System (ADS)
Bailey, P.
1992-04-01
Personal Manufacturing Systems are the missing link in the automation of the design-to- manufacture process. A PMS will act as a CAD peripheral, closing the loop around the designer enabling him to directly produce models, short production runs or soft tooling with as little fuss as he might otherwise plot a drawing. Whereas conventional 5-axis CNC machines are based on orthogonal axes and simple incremental movements, the PMS is based on a geodetic structure and complex co-ordinated 'spline' movements. The software employs a novel 3D pixel technique for give itself 'spatial awareness' and an expert system to determine the optimum machining conditions. A completely automatic machining strategy can then be determined.
Power-Laws and Scaling in Finance: Empirical Evidence and Simple Models
NASA Astrophysics Data System (ADS)
Bouchaud, Jean-Philippe
We discuss several models that may explain the origin of power-law distributions and power-law correlations in financial time series. From an empirical point of view, the exponents describing the tails of the price increments distribution and the decay of the volatility correlations are rather robust and suggest universality. However, many of the models that appear naturally (for example, to account for the distribution of wealth) contain some multiplicative noise, which generically leads to non universal exponents. Recent progress in the empirical study of the volatility suggests that the volatility results from some sort of multiplicative cascade. A convincing `microscopic' (i.e. trader based) model that explains this observation is however not yet available. We discuss a rather generic mechanism for long-ranged volatility correlations based on the idea that agents constantly switch between active and inactive strategies depending on their relative performance.
Net reclassification index at event rate: properties and relationships.
Pencina, Michael J; Steyerberg, Ewout W; D'Agostino, Ralph B
2017-12-10
The net reclassification improvement (NRI) is an attractively simple summary measure quantifying improvement in performance because of addition of new risk marker(s) to a prediction model. Originally proposed for settings with well-established classification thresholds, it quickly extended into applications with no thresholds in common use. Here we aim to explore properties of the NRI at event rate. We express this NRI as a difference in performance measures for the new versus old model and show that the quantity underlying this difference is related to several global as well as decision analytic measures of model performance. It maximizes the relative utility (standardized net benefit) across all classification thresholds and can be viewed as the Kolmogorov-Smirnov distance between the distributions of risk among events and non-events. It can be expressed as a special case of the continuous NRI, measuring reclassification from the 'null' model with no predictors. It is also a criterion based on the value of information and quantifies the reduction in expected regret for a given regret function, casting the NRI at event rate as a measure of incremental reduction in expected regret. More generally, we find it informative to present plots of standardized net benefit/relative utility for the new versus old model across the domain of classification thresholds. Then, these plots can be summarized with their maximum values, and the increment in model performance can be described by the NRI at event rate. We provide theoretical examples and a clinical application on the evaluation of prognostic biomarkers for atrial fibrillation. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.
Continuous microbial cultures maintained by electronically-controlled device
NASA Technical Reports Server (NTRS)
Eisler, W. J., Jr.; Webb, R. B.
1967-01-01
Photocell-controlled instrument maintains microbial culture. It uses commercially available chemostat glassware, provides adequate aeration through bubbling of the culture, maintains the population size and density, continuously records growth rates over small increments of time, and contains a simple, sterilizable nutrient control mechanism.
The Capillary Flow Experiments Aboard the International Space Station: Increments 9-15
NASA Technical Reports Server (NTRS)
Jenson, Ryan M.; Weislogel, Mark M.; Tavan, Noel T.; Chen, Yongkang; Semerjian, Ben; Bunnell, Charles T.; Collicott, Steven H.; Klatte, Jorg; dreyer, Michael E.
2009-01-01
This report provides a summary of the experimental, analytical, and numerical results of the Capillary Flow Experiment (CFE) performed aboard the International Space Station (ISS). The experiments were conducted in space beginning with Increment 9 through Increment 16, beginning August 2004 and ending December 2007. Both primary and extra science experiments were conducted during 19 operations performed by 7 astronauts including: M. Fincke, W. McArthur, J. Williams, S. Williams, M. Lopez-Alegria, C. Anderson, and P. Whitson. CFE consists of 6 approximately 1 to 2 kg handheld experiment units designed to investigate a selection of capillary phenomena of fundamental and applied importance, such as large length scale contact line dynamics (CFE-Contact Line), critical wetting in discontinuous structures (CFE-Vane Gap), and capillary flows and passive phase separations in complex containers (CFE-Interior Corner Flow). Highly quantitative video from the simply performed flight experiments provide data helpful in benchmarking numerical methods, confirming theoretical models, and guiding new model development. In an extensive executive summary, a brief history of the experiment is reviewed before introducing the science investigated. A selection of experimental results and comparisons with both analytic and numerical predictions is given. The subsequent chapters provide additional details of the experimental and analytical methods developed and employed. These include current presentations of the state of the data reduction which we anticipate will continue throughout the year and culminate in several more publications. An extensive appendix is used to provide support material such as an experiment history, dissemination items to date (CFE publication, etc.), detailed design drawings, and crew procedures. Despite the simple nature of the experiments and procedures, many of the experimental results may be practically employed to enhance the design of spacecraft engineering systems involving capillary interface dynamics.
Steyerberg, Ewout W; Vedder, Moniek M; Leening, Maarten J G; Postmus, Douwe; D'Agostino, Ralph B; Van Calster, Ben; Pencina, Michael J
2015-07-01
New markers may improve prediction of diagnostic and prognostic outcomes. We aimed to review options for graphical display and summary measures to assess the predictive value of markers over standard, readily available predictors. We illustrated various approaches using previously published data on 3264 participants from the Framingham Heart Study, where 183 developed coronary heart disease (10-year risk 5.6%). We considered performance measures for the incremental value of adding HDL cholesterol to a prediction model. An initial assessment may consider statistical significance (HR = 0.65, 95% confidence interval 0.53 to 0.80; likelihood ratio p < 0.001), and distributions of predicted risks (densities or box plots) with various summary measures. A range of decision thresholds is considered in predictiveness and receiver operating characteristic curves, where the area under the curve (AUC) increased from 0.762 to 0.774 by adding HDL. We can furthermore focus on reclassification of participants with and without an event in a reclassification graph, with the continuous net reclassification improvement (NRI) as a summary measure. When we focus on one particular decision threshold, the changes in sensitivity and specificity are central. We propose a net reclassification risk graph, which allows us to focus on the number of reclassified persons and their event rates. Summary measures include the binary AUC, the two-category NRI, and decision analytic variants such as the net benefit (NB). Various graphs and summary measures can be used to assess the incremental predictive value of a marker. Important insights for impact on decision making are provided by a simple graph for the net reclassification risk. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Menon, J; Mishra, P
2018-04-01
We determined incremental health care resource utilization, incremental health care expenditures, incremental absenteeism, and incremental absenteeism costs associated with osteoarthritis. Medical Expenditure Panel Survey (MEPS) for 2011 was used as data source. Individuals 18 years or older and employed during 2011 were eligible for inclusion in the sample for analyses. Individuals with osteoarthritis were identified based on ICD-9-CM codes. Incremental health care resource utilization included annual hospitalization, hospital days, emergency room visits and outpatient visits. Incremental health expenditures included annual inpatient, outpatient, emergency room, medications, miscellaneous and annual total expenditures. Of the total sample, 1354 were diagnosed with osteoarthritis, and compared to non osteoarthritis individuals. Incremental resource utilization, expenditures, absenteeism and absenteeism costs were estimated using regression models, adjusting for age, gender, sex, region, marital status, insurance coverage, comorbidities, anxiety, asthma, hypertension and hyperlipidemia. Regression models revealed incremental mean annual resource use associated with osteoarthritis of 0.07 hospitalizations, equal to 70 additional hospitalizations per 100 osteoarthritic patients annually, and 3.63 outpatient visits, equal to 363 additional visits per 100 osteoarthritic patients annually. Mean annual incremental total expenditures associated with osteoarthritis were $2046. Annually, mean incremental expenditures were largest for inpatient expenditures at $826, followed by mean incremental outpatient expenditures of $659, and mean incremental medication expenditures of $325. Mean annual incremental absenteeism was 2.2 days and mean annual incremental absenteeism costs were $715.74. Total direct expenditures were estimated at $41.7 billion. Osteoarthritis was associated with significant incremental health care resource utilization, expenditures, absenteeism and absenteeism costs. Copyright © 2017 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.
Validation of a DICE Simulation Against a Discrete Event Simulation Implemented Entirely in Code.
Möller, Jörgen; Davis, Sarah; Stevenson, Matt; Caro, J Jaime
2017-10-01
Modeling is an essential tool for health technology assessment, and various techniques for conceptualizing and implementing such models have been described. Recently, a new method has been proposed-the discretely integrated condition event or DICE simulation-that enables frequently employed approaches to be specified using a common, simple structure that can be entirely contained and executed within widely available spreadsheet software. To assess if a DICE simulation provides equivalent results to an existing discrete event simulation, a comparison was undertaken. A model of osteoporosis and its management programmed entirely in Visual Basic for Applications and made public by the National Institute for Health and Care Excellence (NICE) Decision Support Unit was downloaded and used to guide construction of its DICE version in Microsoft Excel ® . The DICE model was then run using the same inputs and settings, and the results were compared. The DICE version produced results that are nearly identical to the original ones, with differences that would not affect the decision direction of the incremental cost-effectiveness ratios (<1% discrepancy), despite the stochastic nature of the models. The main limitation of the simple DICE version is its slow execution speed. DICE simulation did not alter the results and, thus, should provide a valid way to design and implement decision-analytic models without requiring specialized software or custom programming. Additional efforts need to be made to speed up execution.
Chen, C L Philip; Liu, Zhulin
2018-01-01
Broad Learning System (BLS) that aims to offer an alternative way of learning in deep structure is proposed in this paper. Deep structure and learning suffer from a time-consuming training process because of a large number of connecting parameters in filters and layers. Moreover, it encounters a complete retraining process if the structure is not sufficient to model the system. The BLS is established in the form of a flat network, where the original inputs are transferred and placed as "mapped features" in feature nodes and the structure is expanded in wide sense in the "enhancement nodes." The incremental learning algorithms are developed for fast remodeling in broad expansion without a retraining process if the network deems to be expanded. Two incremental learning algorithms are given for both the increment of the feature nodes (or filters in deep structure) and the increment of the enhancement nodes. The designed model and algorithms are very versatile for selecting a model rapidly. In addition, another incremental learning is developed for a system that has been modeled encounters a new incoming input. Specifically, the system can be remodeled in an incremental way without the entire retraining from the beginning. Satisfactory result for model reduction using singular value decomposition is conducted to simplify the final structure. Compared with existing deep neural networks, experimental results on the Modified National Institute of Standards and Technology database and NYU NORB object recognition dataset benchmark data demonstrate the effectiveness of the proposed BLS.
HIV Treatment and Prevention: A Simple Model to Determine Optimal Investment.
Juusola, Jessie L; Brandeau, Margaret L
2016-04-01
To create a simple model to help public health decision makers determine how to best invest limited resources in HIV treatment scale-up and prevention. A linear model was developed for determining the optimal mix of investment in HIV treatment and prevention, given a fixed budget. The model incorporates estimates of secondary health benefits accruing from HIV treatment and prevention and allows for diseconomies of scale in program costs and subadditive benefits from concurrent program implementation. Data sources were published literature. The target population was individuals infected with HIV or at risk of acquiring it. Illustrative examples of interventions include preexposure prophylaxis (PrEP), community-based education (CBE), and antiretroviral therapy (ART) for men who have sex with men (MSM) in the US. Outcome measures were incremental cost, quality-adjusted life-years gained, and HIV infections averted. Base case analysis indicated that it is optimal to invest in ART before PrEP and to invest in CBE before scaling up ART. Diseconomies of scale reduced the optimal investment level. Subadditivity of benefits did not affect the optimal allocation for relatively low implementation levels. The sensitivity analysis indicated that investment in ART before PrEP was optimal in all scenarios tested. Investment in ART before CBE became optimal when CBE reduced risky behavior by 4% or less. Limitations of the study are that dynamic effects are approximated with a static model. Our model provides a simple yet accurate means of determining optimal investment in HIV prevention and treatment. For MSM in the US, HIV control funds should be prioritized on inexpensive, effective programs like CBE, then on ART scale-up, with only minimal investment in PrEP. © The Author(s) 2015.
Dong, Hengjin; Buxton, Martin
2006-01-01
The objective of this study is to apply a Markov model to compare cost-effectiveness of total knee replacement (TKR) using computer-assisted surgery (CAS) with that of TKR using a conventional manual method in the absence of formal clinical trial evidence. A structured search was carried out to identify evidence relating to the clinical outcome, cost, and effectiveness of TKR. Nine Markov states were identified based on the progress of the disease after TKR. Effectiveness was expressed by quality-adjusted life years (QALYs). The simulation was carried out initially for 120 cycles of a month each, starting with 1,000 TKRs. A discount rate of 3.5 percent was used for both cost and effectiveness in the incremental cost-effectiveness analysis. Then, a probabilistic sensitivity analysis was carried out using a Monte Carlo approach with 10,000 iterations. Computer-assisted TKR was a long-term cost-effective technology, but the QALYs gained were small. After the first 2 years, the incremental cost per QALY of computer-assisted TKR was dominant because of cheaper and more QALYs. The incremental cost-effectiveness ratio (ICER) was sensitive to the "effect of CAS," to the CAS extra cost, and to the utility of the state "Normal health after primary TKR," but it was not sensitive to utilities of other Markov states. Both probabilistic and deterministic analyses produced similar cumulative serious or minor complication rates and complex or simple revision rates. They also produced similar ICERs. Compared with conventional TKR, computer-assisted TKR is a cost-saving technology in the long-term and may offer small additional QALYs. The "effect of CAS" is to reduce revision rates and complications through more accurate and precise alignment, and although the conclusions from the model, even when allowing for a full probabilistic analysis of uncertainty, are clear, the "effect of CAS" on the rate of revisions awaits long-term clinical evidence.
On the validity of the incremental approach to estimate the impact of cities on air quality
NASA Astrophysics Data System (ADS)
Thunis, Philippe
2018-01-01
The question of how much cities are the sources of their own air pollution is not only theoretical as it is critical to the design of effective strategies for urban air quality planning. In this work, we assess the validity of the commonly used incremental approach to estimate the likely impact of cities on their air pollution. With the incremental approach, the city impact (i.e. the concentration change generated by the city emissions) is estimated as the concentration difference between a rural background and an urban background location, also known as the urban increment. We show that the city impact is in reality made up of the urban increment and two additional components and consequently two assumptions need to be fulfilled for the urban increment to be representative of the urban impact. The first assumption is that the rural background location is not influenced by emissions from within the city whereas the second requires that background concentration levels, obtained with zero city emissions, are equal at both locations. Because the urban impact is not measurable, the SHERPA modelling approach, based on a full air quality modelling system, is used in this work to assess the validity of these assumptions for some European cities. Results indicate that for PM2.5, these two assumptions are far from being fulfilled for many large or medium city sizes. For this type of cities, urban increments are largely underestimating city impacts. Although results are in better agreement for NO2, similar issues are met. In many situations the incremental approach is therefore not an adequate estimate of the urban impact on air pollution. This poses issues in terms of interpretation when these increments are used to define strategic options in terms of air quality planning. We finally illustrate the interest of comparing modelled and measured increments to improve our confidence in the model results.
Transformational Learning: Reflections of an Adult Learning Story
ERIC Educational Resources Information Center
Foote, Laura S.
2015-01-01
Transformational learning, narrative learning, and spiritual learning frame adult experiences in new and exciting ways. These types of learning can involve a simple transformation of belief or opinion or a radical transformation involving one's total perspective; learning may occur abruptly or incrementally. Education should liberate students from…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shimizu, Kuniyasu, E-mail: kuniyasu.shimizu@it-chiba.ac.jp; Sekikawa, Munehisa; Inaba, Naohiko
2015-02-15
Bifurcations of complex mixed-mode oscillations denoted as mixed-mode oscillation-incrementing bifurcations (MMOIBs) have frequently been observed in chemical experiments. In a previous study [K. Shimizu et al., Physica D 241, 1518 (2012)], we discovered an extremely simple dynamical circuit that exhibits MMOIBs. Our model was represented by a slow/fast Bonhoeffer-van der Pol circuit under weak periodic perturbation near a subcritical Andronov-Hopf bifurcation point. In this study, we experimentally and numerically verify that our dynamical circuit captures the essence of the underlying mechanism causing MMOIBs, and we observe MMOIBs and chaos with distinctive waveforms in real circuit experiments.
NASA Astrophysics Data System (ADS)
Mabit, Lionel; Meusburger, Katrin; Iurian, Andra-Rada; Owens, Philip N.; Toloza, Arsenio; Alewell, Christine
2014-05-01
Soil and sediment related research for terrestrial agri-environmental assessments requires accurate depth incremental sampling of soil and exposed sediment profiles. Existing coring equipment does not allow collecting soil/sediment increments at millimetre resolution. Therefore, the authors have designed an economic, portable, hand-operated surface soil/sediment sampler - the Fine Increment Soil Collector (FISC) - which allows extensive control of soil/sediment sampling process and easy recovery of the material collected by using a simple screw-thread extraction system. In comparison with existing sampling tools, the FISC has the following advantages and benefits: (i) it permits sampling of soil/sediment samples at the top of the profile; (ii) it is easy to adjust so as to collect soil/sediment at mm resolution; (iii) it is simple to operate by one single person; (iv) incremental samples can be performed in the field or at the laboratory; (v) it permits precise evaluation of bulk density at millimetre vertical resolution; and (vi) sample size can be tailored to analytical requirements. To illustrate the usefulness of the FISC in sampling soil and sediments for 7Be - a well-known cosmogenic soil tracer and fingerprinting tool - measurements, the sampler was tested in a forested soil located 45 km southeast of Vienna in Austria. The fine resolution increments of 7Be (i.e. 2.5 mm) affects directly the measurement of the 7Be total inventory but above all impacts the shape of the 7Be exponential profile which is needed to assess soil movement rates. The FISC can improve the determination of the depth distributions of other Fallout Radionuclides (FRN) - such as 137Cs, 210Pbexand239+240Pu - which are frequently used for soil erosion and sediment transport studies and/or sediment fingerprinting. Such a device also offers great potential to investigate FRN depth distributions associated with fallout events such as that associated with nuclear emergencies. Furthermore, prior to remediation activities - such as topsoil removal - in contaminated soils and sediments (e.g. by heavy metals, pesticides or nuclear power plant accident releases), basic environmental assessment often requires the determination of the extent and the depth penetration of the different contaminants, precision that can be provided by using the FISC.
Giorgio Vacchiano; John D. Shaw; R. Justin DeRose; James N. Long
2008-01-01
Diameter increment is an important variable in modeling tree growth. Most facets of predicted tree development are dependent in part on diameter or diameter increment, the most commonly measured stand variable. The behavior of the Forest Vegetation Simulator (FVS) largely relies on the performance of the diameter increment model and the subsequent use of predicted dbh...
Deficits in Thematic Integration Processes in Broca's and Wernicke's Aphasia
ERIC Educational Resources Information Center
Nakano, Hiroko; Blumstein, Sheila E.
2004-01-01
This study investigated how normal subjects and Broca's and Wernicke's aphasics integrate thematic information incrementally using syntax, lexical-semantics, and pragmatics in a simple active declarative sentence. Three priming experiments were conducted using an auditory lexical decision task in which subjects made a lexical decision on a…
A digital indicator for maximum windspeeds.
William B. Fowler
1969-01-01
A simple device for indicating maximum windspeed during a time interval is described. Use of a unijunction transistor, for voltage sensing, results in a stable comparison circuit and also reduces overall component requirements. Measurement is presented digitally in 1-mile-per-hour increments over the range of 0-51 m.p.h.
Code of Federal Regulations, 2012 CFR
2012-07-01
... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY... Units Model Rule-Increments of Progress § 60.2575 What are my requirements for meeting increments of...
Code of Federal Regulations, 2014 CFR
2014-07-01
... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY... Units Model Rule-Increments of Progress § 60.2575 What are my requirements for meeting increments of...
Code of Federal Regulations, 2013 CFR
2013-07-01
... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY... Units Model Rule-Increments of Progress § 60.2575 What are my requirements for meeting increments of...
Integrating Incremental Learning and Episodic Memory Models of the Hippocampal Region
ERIC Educational Resources Information Center
Meeter, M.; Myers, C. E.; Gluck, M. A.
2005-01-01
By integrating previous computational models of corticohippocampal function, the authors develop and test a unified theory of the neural substrates of familiarity, recollection, and classical conditioning. This approach integrates models from 2 traditions of hippocampal modeling, those of episodic memory and incremental learning, by drawing on an…
40 CFR 60.1615 - How do I comply with the increment of progress for awarding contracts?
Code of Federal Regulations, 2010 CFR
2010-07-01
... AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emission..., 1999 Model Rule-Increments of Progress § 60.1615 How do I comply with the increment of progress for...
NASA Astrophysics Data System (ADS)
Juneja, A.; Lathrop, D. P.; Sreenivasan, K. R.; Stolovitzky, G.
1994-06-01
A family of schemes is outlined for constructing stochastic fields that are close to turbulence. The fields generated from the more sophisticated versions of these schemes differ little in terms of one-point and two-point statistics from velocity fluctuations in high-Reynolds-number turbulence; we shall designate such fields as synthetic turbulence. All schemes, implemented here in one dimension, consist of the following three ingredients, but differ in various details. First, a simple multiplicative procedure is utilized for generating an intermittent signal which has the same properties as those of the turbulent energy dissipation rate ɛ. Second, the properties of the intermittent signal averaged over an interval of size r are related to those of longitudinal velocity increments Δu(r), evaluated over the same distance r, through a stochastic variable V introduced in the spirit of Kolmogorov's refined similarity hypothesis. The third and final step, which partially resembles a well-known procedure for constructing fractional Brownian motion, consists of suitably combining velocity increments to construct an artificial velocity signal. Various properties of the synthetic turbulence are obtained both analytically and numerically, and found to be in good agreement with measurements made in the atmospheric surface layer. A brief review of some previous models is provided.
Planning Through Incrementalism
ERIC Educational Resources Information Center
Lasserre, Ph.
1974-01-01
An incremental model of decisionmaking is discussed and compared with the Comprehensive Rational Approach. A model of reconciliation between the two approaches is proposed, and examples are given in the field of economic development and educational planning. (Author/DN)
Anomalous scaling of stochastic processes and the Moses effect
NASA Astrophysics Data System (ADS)
Chen, Lijian; Bassler, Kevin E.; McCauley, Joseph L.; Gunaratne, Gemunu H.
2017-04-01
The state of a stochastic process evolving over a time t is typically assumed to lie on a normal distribution whose width scales like t1/2. However, processes in which the probability distribution is not normal and the scaling exponent differs from 1/2 are known. The search for possible origins of such "anomalous" scaling and approaches to quantify them are the motivations for the work reported here. In processes with stationary increments, where the stochastic process is time-independent, autocorrelations between increments and infinite variance of increments can cause anomalous scaling. These sources have been referred to as the Joseph effect and the Noah effect, respectively. If the increments are nonstationary, then scaling of increments with t can also lead to anomalous scaling, a mechanism we refer to as the Moses effect. Scaling exponents quantifying the three effects are defined and related to the Hurst exponent that characterizes the overall scaling of the stochastic process. Methods of time series analysis that enable accurate independent measurement of each exponent are presented. Simple stochastic processes are used to illustrate each effect. Intraday financial time series data are analyzed, revealing that their anomalous scaling is due only to the Moses effect. In the context of financial market data, we reiterate that the Joseph exponent, not the Hurst exponent, is the appropriate measure to test the efficient market hypothesis.
Anomalous scaling of stochastic processes and the Moses effect.
Chen, Lijian; Bassler, Kevin E; McCauley, Joseph L; Gunaratne, Gemunu H
2017-04-01
The state of a stochastic process evolving over a time t is typically assumed to lie on a normal distribution whose width scales like t^{1/2}. However, processes in which the probability distribution is not normal and the scaling exponent differs from 1/2 are known. The search for possible origins of such "anomalous" scaling and approaches to quantify them are the motivations for the work reported here. In processes with stationary increments, where the stochastic process is time-independent, autocorrelations between increments and infinite variance of increments can cause anomalous scaling. These sources have been referred to as the Joseph effect and the Noah effect, respectively. If the increments are nonstationary, then scaling of increments with t can also lead to anomalous scaling, a mechanism we refer to as the Moses effect. Scaling exponents quantifying the three effects are defined and related to the Hurst exponent that characterizes the overall scaling of the stochastic process. Methods of time series analysis that enable accurate independent measurement of each exponent are presented. Simple stochastic processes are used to illustrate each effect. Intraday financial time series data are analyzed, revealing that their anomalous scaling is due only to the Moses effect. In the context of financial market data, we reiterate that the Joseph exponent, not the Hurst exponent, is the appropriate measure to test the efficient market hypothesis.
40 CFR 60.1630 - How do I comply with the increment of progress for achieving final compliance?
Code of Federal Regulations, 2010 CFR
2010-07-01
... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES... Before August 30, 1999 Model Rule-Increments of Progress § 60.1630 How do I comply with the increment of...
NASA Astrophysics Data System (ADS)
Alam, Md Jahangir; Goodall, Jonathan L.
2012-04-01
The goal of this research was to quantify the relative impact of hydrologic and nitrogen source changes on incremental nitrogen yield in the contiguous United States. Using nitrogen source estimates from various federal data bases, remotely sensed land use data from the National Land Cover Data program, and observed instream loadings from the United States Geological Survey National Stream Quality Accounting Network program, we calibrated and applied the spatially referenced regression model SPARROW to estimate incremental nitrogen yield for the contiguous United States. We ran different model scenarios to separate the effects of changes in source contributions from hydrologic changes for the years 1992 and 2001, assuming that only state conditions changed and that model coefficients describing the stream water-quality response to changes in state conditions remained constant between 1992 and 2001. Model results show a decrease of 8.2% in the median incremental nitrogen yield over the period of analysis with the vast majority of this decrease due to changes in hydrologic conditions rather than decreases in nitrogen sources. For example, when we changed the 1992 version of the model to have nitrogen source data from 2001, the model results showed only a small increase in median incremental nitrogen yield (0.12%). However, when we changed the 1992 version of the model to have hydrologic conditions from 2001, model results showed a decrease of approximately 8.7% in median incremental nitrogen yield. We did, however, find notable differences in incremental yield estimates for different sources of nitrogen after controlling for hydrologic changes, particularly for population related sources. For example, the median incremental yield for population related sources increased by 8.4% after controlling for hydrologic changes. This is in contrast to a 2.8% decrease in population related sources when hydrologic changes are included in the analysis. Likewise we found that median incremental yield from urban watersheds increased by 6.8% after controlling for hydrologic changes—in contrast to the median incremental nitrogen yield from cropland watersheds, which decreased by 2.1% over the same time period. These results suggest that, after accounting for hydrologic changes, population related sources became a more significant contributor of nitrogen yield to streams in the contiguous United States over the period of analysis. However, this study was not able to account for the influence of human management practices such as improvements in wastewater treatment plants or Best Management Practices that likely improved water quality, due to a lack of data for quantifying the impact of these practices for the study area.
The efficacy of using inventory data to develop optimal diameter increment models
Don C. Bragg
2002-01-01
Most optimal tree diameter growth models have arisen through either the conceptualization of physiological processes or the adaptation of empirical increment models. However, surprisingly little effort has been invested in the melding of these approaches even though it is possible to develop theoretically sound, computationally efficient optimal tree growth models...
NASA Astrophysics Data System (ADS)
Giordano, V.; Chisari, C.; Rizzano, G.; Latour, M.
2017-10-01
The main aim of this work is to understand how the prediction of the seismic performance of moment-resisting (MR) steel frames depends on the modelling of their dissipative zones when the structure geometry (number of stories and bays) and seismic excitation source vary. In particular, a parametric analysis involving 4 frames was carried out, and, for each one, the full-strength beam-to-column connections were modelled according to 4 numerical approaches with different degrees of sophistication (Smooth Hysteretic Model, Bouc-Wen, Hysteretic and simple Elastic-Plastic models). Subsequently, Incremental Dynamic Analyses (IDA) were performed by considering two different earthquakes (Spitak and Kobe). The preliminary results collected so far pointed out that the influence of the joint modelling on the overall frame response is negligible up to interstorey drift ratio values equal to those conservatively assumed by the codes to define conventional collapse (0.03 rad). Conversely, if more realistic ultimate interstorey drift values are considered for the q-factor evaluation, the influence of joint modelling can be significant, and thus may require accurate modelling of its cyclic behavior.
Mouelhi Guizani, S; Tenenbaum, G; Bouzaouach, I; Ben Kheder, A; Feki, Y; Bouaziz, M
2006-06-01
Skillful performance in combat and racquet sports consists of proficient technique accompanied with efficient information-processing while engaged in moderate to high physical effort. This study examined information processing and decision-making using simple reaction time (SRT) and choice reaction time (CRT) paradigms in athletes of combat sports and racquet ball games while undergoing incrementally increasing physical effort ranging from low to high intensities. Forty national level experienced athletics in the sports of tennis, table tennis, fencing, and boxing were selected for this study. Each subject performed both simple (SRT) and four-choice reaction time (4-CRT) tasks at rest, and while pedaling on a cycle ergometer at 20%, 40%, 60%, and 80% of their own maximal aerobic power (Pmax). RM MANCOVA revealed significant sport-type by physical load interaction effect mainly on CRT. Least significant difference (LSD) posthoc contrasts indicated that fencers and tennis players process information faster with incrementally increasing workload, while different patterns were obtained for boxers and table-tennis players. The error rate remained stable for each sport type over all conditions. Between-sport differences in SRT and CRT among the athletes were also noted. Findings provide evidence that the 4-CRT is a task that more closely corresponds to the original task athletes are familiar with and utilize in their practices and competitions. However, additional tests that mimic the real world experiences of each sport must be developed and used to capture the nature of information processing and response-selection in specific sports.
Flight Dynamics Modeling and Simulation of a Damaged Transport Aircraft
NASA Technical Reports Server (NTRS)
Shah, Gautam H.; Hill, Melissa A.
2012-01-01
A study was undertaken at NASA Langley Research Center to establish, demonstrate, and apply methodology for modeling and implementing the aerodynamic effects of MANPADS damage to a transport aircraft into real-time flight simulation, and to demonstrate a preliminary capability of using such a simulation to conduct an assessment of aircraft survivability. Key findings from this study include: superpositioning of incremental aerodynamic characteristics to the baseline simulation aerodynamic model proved to be a simple and effective way of modeling damage effects; the primary effect of wing damage rolling moment asymmetry may limit minimum airspeed for adequate controllability, but this can be mitigated by the use of sideslip; combined effects of aerodynamics, control degradation, and thrust loss can result in significantly degraded controllability for a safe landing; and high landing speeds may be required to maintain adequate control if large excursions from the nominal approach path are allowed, but high-gain pilot control during landing can mitigate this risk.
40 CFR 60.1610 - How do I comply with the increment of progress for submittal of a control plan?
Code of Federal Regulations, 2010 CFR
2010-07-01
... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES... Before August 30, 1999 Model Rule-Increments of Progress § 60.1610 How do I comply with the increment of...
40 CFR 60.2590 - When must I submit the notifications of achievement of increments of progress?
Code of Federal Regulations, 2010 CFR
2010-07-01
... Commenced Construction On or Before November 30, 1999 Model Rule-Increments of Progress § 60.2590 When must... increments of progress must be postmarked no later than 10 business days after the compliance date for the...
40 CFR 60.2595 - What if I do not meet an increment of progress?
Code of Federal Regulations, 2010 CFR
2010-07-01
... or Before November 30, 1999 Model Rule-Increments of Progress § 60.2595 What if I do not meet an... Administrator postmarked within 10 business days after the date for that increment of progress in table 1 of...
Estimating radiofrequency power deposition in body NMR imaging.
Bottomley, P A; Redington, R W; Edelstein, W A; Schenck, J F
1985-08-01
Simple theoretical estimates of the average, maximum, and spatial variation of the radiofrequency power deposition (specific absorption rate) during hydrogen nuclear magnetic resonance imaging are deduced for homogeneous spheres and for cylinders of biological tissue with a uniformly penetrating linear rf field directed axially and transverse to the cylindrical axis. These are all simple scalar multiples of the expression for the cylinder in an axial field published earlier (Med. Phys. 8, 510 (1981]. Exact solutions for the power deposition in the cylinder with axial (Phys. Med. Biol. 23, 630 (1978] and transversely directed rf field are also presented, and the spatial variation of power deposition in head and body models is examined. In the exact models, the specific absorption rates decrease rapidly and monotonically with decreasing radius despite local increases in rf field amplitude. Conversion factors are provided for calculating the power deposited by Gaussian and sinc-modulated rf pulses used for slice selection in NMR imaging, relative to rectangular profiled pulses. Theoretical estimates are compared with direct measurements of the total power deposited in the bodies of nine adult males by a 63-MHz body-imaging system with transversely directed field, taking account of cable and NMR coil losses. The results for the average power deposition agree within about 20% for the exact model of the cylinder with axial field, when applied to the exposed torso volume enclosed by the rf coil. The average values predicted by the simple spherical and cylindrical models with axial fields, the exact cylindrical model with transverse field, and the simple truncated cylinder model with transverse field were about two to three times that measured, while the simple model consisting of an infinitely long cylinder with transverse field gave results about six times that measured. The surface power deposition measured by observing the incremental power as a function of external torso radius was comparable to the average value. This is consistent with the presence of a variable thickness peripheral adipose layer which does not substantially increase surface power deposition with increasing torso radius. The absence of highly localized intensity artifacts in 63-MHz body images does not suggest anomalously intense power deposition at localized internal sites, although peak power is difficult to measure.
ERIC Educational Resources Information Center
Behrens, John T.; DiCerbo, Kristen E.
2014-01-01
Background: It would be easy to think the technological shifts in the digital revolution are simple incremental progressions in societal advancement. However, the nature of digital technology is resulting in qualitative differences in nearly all parts of daily life. Purpose: This paper investigates how the new possibilities for understanding,…
Deryabin, Vasily E; Krans, Valentina M; Fedotova, Tatiana K
2005-07-01
Mean values of different body dimensions in different age cohorts of children make it possible to learn a lot about their dynamic changes. Their comparative analysis, as is usually practiced, in fact leads to a simple description of changes in measurement units (mm or cm) at the average level of some body dimension during a shorter or longer period of time. To estimate comparative intensity of the growth process of different body dimensions, the authors use the analogue of Mahalanobis distance, the so-called Kullback divergence (1967), which does not demand stability of dispersion or correlation coefficients of dimensions in compared cohorts of children. Most of the dimensions, excluding skinfolds, demonstrate growth dynamics with gradually reducing increments from birth to 7 years. Body length has the highest integrative increment, leg length about 94% of body length, body mass 77%, and trunk and extremities circumferences 56%. Skinfolds have a non-monotonic pattern of accumulated standardized increments with some increase until 1-2 years of age.
Lorber, Matthew; Toms, Leisa-Maree L
2017-10-01
Several studies have examined the role of breast milk consumption in the buildup of environmental chemicals in infants, and have concluded that this pathway elevates infant body burdens above what would occur in a formula-only diet. Unique data from Australia provide an opportunity to study this finding using simple pharmacokinetic (PK) models. Pooled serum samples from infants in the general population provided data on PCB 153, BDE 47, and DDE at 6-month increments from birth until 4 years of age. General population breast-feeding scenarios for Australian conditions were crafted and input into a simple PK model which predicted infant serum concentrations over time. Comparison scenarios of background exposures to characterize formula-feeding were also crafted. It was found that the models were able to replicate the rise in measured infant body burdens for PCB 153 and DDE in the breast-feeding scenarios, while the background scenarios resulted in infant body burdens substantially below the measurements. The same was not true for BDE 47, however. Both the breast-feeding and background scenarios substantially underpredicted body burden measurements. Two possible explanations were offered: that exposure to higher BDE congeners would debrominate and form BDE 47 in the body, and/or, a second overlooked exposure pathway for PBDEs might be the cause of high infant and toddler body burdens. This pathway was inhalation due to the use of PBDEs as flame retardants in bedding materials. More research to better understand and quantify this pathway, or other unknown pathways, to describe infant and toddler exposures to PBDEs is needed. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Most, S.; Jia, N.; Bijeljic, B.; Nowak, W.
2016-12-01
Pre-asymptotic characteristics are almost ubiquitous when analyzing solute transport processes in porous media. These pre-asymptotic aspects are caused by spatial coherence in the velocity field and by its heterogeneity. For the Lagrangian perspective of particle displacements, the causes of pre-asymptotic, non-Fickian transport are skewed velocity distribution, statistical dependencies between subsequent increments of particle positions (memory) and dependence between the x, y and z-components of particle increments. Valid simulation frameworks should account for these factors. We propose a particle tracking random walk (PTRW) simulation technique that can use empirical pore-space velocity distributions as input, enforces memory between subsequent random walk steps, and considers cross dependence. Thus, it is able to simulate pre-asymptotic non-Fickian transport phenomena. Our PTRW framework contains an advection/dispersion term plus a diffusion term. The advection/dispersion term produces time-series of particle increments from the velocity CDFs. These time series are equipped with memory by enforcing that the CDF values of subsequent velocities change only slightly. The latter is achieved through a random walk on the axis of CDF values between 0 and 1. The virtual diffusion coefficient for that random walk is our only fitting parameter. Cross-dependence can be enforced by constraining the random walk to certain combinations of CDF values between the three velocity components in x, y and z. We will show that this modelling framework is capable of simulating non-Fickian transport by comparison with a pore-scale transport simulation and we analyze the approach to asymptotic behavior.
NASA Astrophysics Data System (ADS)
Bhargava, K.; Kalnay, E.; Carton, J.; Yang, F.
2017-12-01
Systematic forecast errors, arising from model deficiencies, form a significant portion of the total forecast error in weather prediction models like the Global Forecast System (GFS). While much effort has been expended to improve models, substantial model error remains. The aim here is to (i) estimate the model deficiencies in the GFS that lead to systematic forecast errors, (ii) implement an online correction (i.e., within the model) scheme to correct GFS following the methodology of Danforth et al. [2007] and Danforth and Kalnay [2008, GRL]. Analysis Increments represent the corrections that new observations make on, in this case, the 6-hr forecast in the analysis cycle. Model bias corrections are estimated from the time average of the analysis increments divided by 6-hr, assuming that initial model errors grow linearly and first ignoring the impact of observation bias. During 2012-2016, seasonal means of the 6-hr model bias are generally robust despite changes in model resolution and data assimilation systems, and their broad continental scales explain their insensitivity to model resolution. The daily bias dominates the sub-monthly analysis increments and consists primarily of diurnal and semidiurnal components, also requiring a low dimensional correction. Analysis increments in 2015 and 2016 are reduced over oceans, which is attributed to improvements in the specification of the SSTs. These results encourage application of online correction, as suggested by Danforth and Kalnay, for mean, seasonal and diurnal and semidiurnal model biases in GFS to reduce both systematic and random errors. As the error growth in the short-term is still linear, estimated model bias corrections can be added as a forcing term in the model tendency equation to correct online. Preliminary experiments with GFS, correcting temperature and specific humidity online show reduction in model bias in 6-hr forecast. This approach can then be used to guide and optimize the design of sub-grid scale physical parameterizations, more accurate discretization of the model dynamics, boundary conditions, radiative transfer codes, and other potential model improvements which can then replace the empirical correction scheme. The analysis increments also provide guidance in testing new physical parameterizations.
de Oliveira Correia, Ayla Macyelle; Tribst, João Paulo Mendes; de Souza Matos, Felipe; Platt, Jeffrey A; Caneppele, Taciana Marco Ferraz; Borges, Alexandre Luiz Souto
2018-06-20
This study evaluated the effect of different restorative techniques for non-carious cervical lesions (NCCL) on polymerization shrinkage stress of resins using three-dimensional (3D) finite element analysis (FEA). 3D-models of a maxillary premolar with a NCCL restored with different filling techniques (bulk filling and incremental) were generated to be compared by nonlinear FEA. The bulk filling technique was used for groups B (NCCL restored with Filtek™ Bulk Fill) and C (Filtek™ Z350 XT). The incremental technique was subdivided according to mode of application: P (2 parallel increments of the Filtek™ Z350 XT), OI (2 oblique increments of the Filtek™ Z350 XT, with incisal first), OIV (2 oblique increments of the Filtek™ Z350 XT, with incisal first and increments with the same volume), OG (2 oblique increments of the Filtek™ Z350 XT, with gingival first) and OGV (2 oblique increments of the Filtek™ Z350 XT, with gingival first and increments with the same volume), resulting in 7 models. All materials were considered isotropic, elastic and linear. The results were expressed in maximum principal stress (MPS). The tension stress distribution was influenced by the restorative technique. The lowest stress concentration occurred in group B followed by OG, OGV, OI, OIV, P and C; the incisal interface was more affected than the gingival. The restoration of NCCLs with bulk fill composite resulted in lower shrinkage stress in the gingival and incisal areas, followed by incremental techniques with the initial increment placed on the gingival wall. The non-carious cervical lesions (NCCLs) restored with bulk fill composite have a more favorable biomechanical behavior. Copyright © 2018. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Yahya, I.; Kusuma, J. I.; Harjana; Kristiani, R.; Hanina, R.
2016-02-01
This paper emphasizes the influence of tubular shaped microresonators phononic crystal insertion on the sound absorption coefficient of profiled sound absorber. A simple cubic and two different bodies centered cubic phononic crystal lattice model were analyzed in a laboratory test procedure. The experiment was conducted by using transfer function based two microphone impedance tube method refer to ASTM E-1050-98. The results show that sound absorption coefficient increase significantly at the mid and high-frequency band (600 - 700 Hz) and (1 - 1.6 kHz) when tubular shaped microresonator phononic crystal inserted into the tested sound absorber element. The increment phenomena related to multi-resonance effect that occurs when sound waves propagate through the phononic crystal lattice model that produce multiple reflections and scattering in mid and high-frequency band which increases the sound absorption coefficient accordingly
Applying Graph Theory to Problems in Air Traffic Management
NASA Technical Reports Server (NTRS)
Farrahi, Amir Hossein; Goldbert, Alan; Bagasol, Leonard Neil; Jung, Jaewoo
2017-01-01
Graph theory is used to investigate three different problems arising in air traffic management. First, using a polynomial reduction from a graph partitioning problem, it is shown that both the airspace sectorization problem and its incremental counterpart, the sector combination problem are NP-hard, in general, under several simple workload models. Second, using a polynomial time reduction from maximum independent set in graphs, it is shown that for any fixed e, the problem of finding a solution to the minimum delay scheduling problem in traffic flow management that is guaranteed to be within n1-e of the optimal, where n is the number of aircraft in the problem instance, is NP-hard. Finally, a problem arising in precision arrival scheduling is formulated and solved using graph reachability. These results demonstrate that graph theory provides a powerful framework for modeling, reasoning about, and devising algorithmic solutions to diverse problems arising in air traffic management.
Applying Graph Theory to Problems in Air Traffic Management
NASA Technical Reports Server (NTRS)
Farrahi, Amir H.; Goldberg, Alan T.; Bagasol, Leonard N.; Jung, Jaewoo
2017-01-01
Graph theory is used to investigate three different problems arising in air traffic management. First, using a polynomial reduction from a graph partitioning problem, it isshown that both the airspace sectorization problem and its incremental counterpart, the sector combination problem are NP-hard, in general, under several simple workload models. Second, using a polynomial time reduction from maximum independent set in graphs, it is shown that for any fixed e, the problem of finding a solution to the minimum delay scheduling problem in traffic flow management that is guaranteed to be within n1-e of the optimal, where n is the number of aircraft in the problem instance, is NP-hard. Finally, a problem arising in precision arrival scheduling is formulated and solved using graph reachability. These results demonstrate that graph theory provides a powerful framework for modeling, reasoning about, and devising algorithmic solutions to diverse problems arising in air traffic management.
Weak percolation on multiplex networks
NASA Astrophysics Data System (ADS)
Baxter, Gareth J.; Dorogovtsev, Sergey N.; Mendes, José F. F.; Cellai, Davide
2014-04-01
Bootstrap percolation is a simple but nontrivial model. It has applications in many areas of science and has been explored on random networks for several decades. In single-layer (simplex) networks, it has been recently observed that bootstrap percolation, which is defined as an incremental process, can be seen as the opposite of pruning percolation, where nodes are removed according to a connectivity rule. Here we propose models of both bootstrap and pruning percolation for multiplex networks. We collectively refer to these two models with the concept of "weak" percolation, to distinguish them from the somewhat classical concept of ordinary ("strong") percolation. While the two models coincide in simplex networks, we show that they decouple when considering multiplexes, giving rise to a wealth of critical phenomena. Our bootstrap model constitutes the simplest example of a contagion process on a multiplex network and has potential applications in critical infrastructure recovery and information security. Moreover, we show that our pruning percolation model may provide a way to diagnose missing layers in a multiplex network. Finally, our analytical approach allows us to calculate critical behavior and characterize critical clusters.
Amsallem, Myriam; Sweatt, Andrew J; Aymami, Marie C; Kuznetsova, Tatiana; Selej, Mona; Lu, HongQuan; Mercier, Olaf; Fadel, Elie; Schnittger, Ingela; McConnell, Michael V; Rabinovitch, Marlene; Zamanian, Roham T; Haddad, Francois
2017-06-01
Right ventricular (RV) end-systolic dimensions provide information on both size and function. We investigated whether an internally scaled index of end-systolic dimension is incremental to well-validated prognostic scores in pulmonary arterial hypertension. From 2005 to 2014, 228 patients with pulmonary arterial hypertension were prospectively enrolled. RV end-systolic remodeling index (RVESRI) was defined by lateral length divided by septal height. The incremental values of RV free wall longitudinal strain and RVESRI to risk scores were determined. Mean age was 49±14 years, 78% were female, 33% had connective tissue disease, 52% were in New York Heart Association class ≥III, and mean pulmonary vascular resistance was 11.2±6.4 WU. RVESRI and right atrial area were strongly connected to the other right heart metrics. Three zones of adaptation (adapted, maladapted, and severely maladapted) were identified based on the RVESRI to RV systolic pressure relationship. During a mean follow-up of 3.9±2.4 years, the primary end point of death, transplant, or admission for heart failure was reached in 88 patients. RVESRI was incremental to risk prediction scores in pulmonary arterial hypertension, including the Registry to Evaluate Early and Long-Term PAH Disease Management score, the Pulmonary Hypertension Connection equation, and the Mayo Clinic model. Using multivariable analysis, New York Heart Association class III/IV, RVESRI, and log NT-proBNP (N-Terminal Pro-B-Type Natriuretic Peptide) were retained (χ 2 , 62.2; P <0.0001). Changes in RVESRI at 1 year (n=203) were predictive of outcome; patients initiated on prostanoid therapy showed the greatest improvement in RVESRI. Among right heart metrics, RVESRI demonstrated the best test-retest characteristics. RVESRI is a simple reproducible prognostic marker in patients with pulmonary arterial hypertension. © 2017 American Heart Association, Inc.
Lewis, Debra A; Ding, Yong Hong; Dai, Daying; Kadirvel, Ramanathan; Danielson, Mark A; Cloft, Harry J; Kallmes, David F
2008-01-01
Background and Purpose Elastase-induced aneurysms in rabbits have been proposed as a useful preclinical tool for device development. The object of this study is to report rates of morbidity and mortality associated with creation and embolization of the elastase-induced rabbit aneurysm, and to assess the impact of operator experience on these rates. Methods Elastase-induced model aneurysms were created in New Zealand White rabbits (n=700). One neuroradiologist/investigator, naïve to the aneurysm creation procedure at the outset of the experiments, performed all surgeries. All morbidity and deaths related to aneurysm creation (n=700) and embolization procedures (n=529) were categorized into acute and chronic deaths. Data were analyzed with single regression analysis and ANOVA. To assess the impact of increasing operator experience, the number of animals was broken into 50 animal increments. Results There were 121 (17%) deaths among 700 subjects. Among 700 aneurysm creation procedures, 59 deaths (8.4%) were noted. Among 529 aneurysm embolization procedures, 43 deaths (8.1%) were noted. Nineteen additional deaths (2.7% of 700 subjects) were unrelated to procedures. Simple regression indicated mortality associated with procedures diminished with increasing operator experience (R2=0.38; p=0.0180) and that for each 50 rabbit increment mortality is reduced on average by 0.6 percent. Conclusions Mortality rates of approximately 8% are associated with both experimental aneurysm creation and with embolization in the rabbit, elastase-induced aneurysm model. Increasing operator experience is inversely correlated with mortality and the age of the rabbit is positively associated with morbidity. PMID:19001536
Black, Bryan A; Griffin, Daniel; van der Sleen, Peter; Wanamaker, Alan D; Speer, James H; Frank, David C; Stahle, David W; Pederson, Neil; Copenheaver, Carolyn A; Trouet, Valerie; Griffin, Shelly; Gillanders, Bronwyn M
2016-07-01
High-resolution biogenic and geologic proxies in which one increment or layer is formed per year are crucial to describing natural ranges of environmental variability in Earth's physical and biological systems. However, dating controls are necessary to ensure temporal precision and accuracy; simple counts cannot ensure that all layers are placed correctly in time. Originally developed for tree-ring data, crossdating is the only such procedure that ensures all increments have been assigned the correct calendar year of formation. Here, we use growth-increment data from two tree species, two marine bivalve species, and a marine fish species to illustrate sensitivity of environmental signals to modest dating error rates. When falsely added or missed increments are induced at one and five percent rates, errors propagate back through time and eliminate high-frequency variability, climate signals, and evidence of extreme events while incorrectly dating and distorting major disturbances or other low-frequency processes. Our consecutive Monte Carlo experiments show that inaccuracies begin to accumulate in as little as two decades and can remove all but decadal-scale processes after as little as two centuries. Real-world scenarios may have even greater consequence in the absence of crossdating. Given this sensitivity to signal loss, the fundamental tenets of crossdating must be applied to fully resolve environmental signals, a point we underscore as the frontiers of growth-increment analysis continue to expand into tropical, freshwater, and marine environments. © 2016 John Wiley & Sons Ltd.
125 Brain Games for Babies: Simple Games To Promote Early Brain Development.
ERIC Educational Resources Information Center
Silberg, Jackie
Scientists believe that the stimulation that infants and young children receive determines which synapses form in the brain. This book presents 125 games for infants from birth to 12 months and is designed to nurture brain development. The book is organized chronologically in 3-month increments. Each game description includes information from…
NASA Astrophysics Data System (ADS)
Jules, Kenol; Istasse, Eric; Stenuit, Hilde; Murakami, Keiji; Yoshizaki, Izumi; Johnson-Green, Perry
2011-06-01
November 20, 2010, marked a significant milestone in the annals of human endeavors in space since it was the twelfth anniversary of one of the most challenging and complex construction projects ever attempted by humans away from our planet: The construction of the International Space Stations. On November 20, 1998, the Zarya Control Module was launched. With this simple, almost unnoticed launch in the science community, the construction of a continuously staffed research platform, in Low Earth Orbit, was underway. This paper discusses the research that was performed by many occupants of this research platform during the year celebrating its twelfth anniversary. The main objectives of this paper are fourfold: (1) to discuss the integrated manner in which science planning/replanning and prioritization during the execution phase of an increment is carried out across the United States Orbital Segment since that segment is made of four independent space agencies; (2) to discuss and summarize the research that was performed during increments 16 and 17 (October 2007 to October 2008). The discussion for these two increments is primarily focused on the main objectives of each investigation and its associated hypotheses that were investigated. Whenever available and approved, preliminary research results are also discussed for each of the investigations performed during these two increments; (3) to compare the planned research portfolio for these two increments versus what was actually accomplished during the execution phase in order to discuss the challenges associated with planning and performing research in a space laboratory located over 240 miles up in space, away from the ground support team; (4) to briefly touch on the research portfolio of increments 18 and 19/20 as the International Space Station begins its next decade in Low Earth Orbit.
A design of LED adaptive dimming lighting system based on incremental PID controller
NASA Astrophysics Data System (ADS)
He, Xiangyan; Xiao, Zexin; He, Shaojia
2010-11-01
As a new generation energy-saving lighting source, LED is applied widely in various technology and industry fields. The requirement of its adaptive lighting technology is more and more rigorous, especially in the automatic on-line detecting system. In this paper, a closed loop feedback LED adaptive dimming lighting system based on incremental PID controller is designed, which consists of MEGA16 chip as a Micro-controller Unit (MCU), the ambient light sensor BH1750 chip with Inter-Integrated Circuit (I2C), and constant-current driving circuit. A given value of light intensity required for the on-line detecting environment need to be saved to the register of MCU. The optical intensity, detected by BH1750 chip in real time, is converted to digital signal by AD converter of the BH1750 chip, and then transmitted to MEGA16 chip through I2C serial bus. Since the variation law of light intensity in the on-line detecting environment is usually not easy to be established, incremental Proportional-Integral-Differential (PID) algorithm is applied in this system. Control variable obtained by the incremental PID determines duty cycle of Pulse-Width Modulation (PWM). Consequently, LED's forward current is adjusted by PWM, and the luminous intensity of the detection environment is stabilized by self-adaptation. The coefficients of incremental PID are obtained respectively after experiments. Compared with the traditional LED dimming system, it has advantages of anti-interference, simple construction, fast response, and high stability by the use of incremental PID algorithm and BH1750 chip with I2C serial bus. Therefore, it is suitable for the adaptive on-line detecting applications.
The effect of narrow-band noise maskers on increment detection1
Messersmith, Jessica J.; Patra, Harisadhan; Jesteadt, Walt
2010-01-01
It is often assumed that listeners detect an increment in the intensity of a pure tone by detecting an increase in the energy falling within the critical band centered on the signal frequency. A noise masker can be used to limit the use of signal energy falling outside of the critical band, but facets of the noise may impact increment detection beyond this intended purpose. The current study evaluated the impact of envelope fluctuation in a noise masker on thresholds for detection of an increment. Thresholds were obtained for detection of an increment in the intensity of a 0.25- or 4-kHz pedestal in quiet and in the presence of noise of varying bandwidth. Results indicate that thresholds for detection of an increment in the intensity of a pure tone increase with increasing bandwidth for an on-frequency noise masker, but are unchanged by an off-frequency noise masker. Neither a model that includes a modulation-filter-bank analysis of envelope modulation nor a model based on discrimination of spectral patterns can account for all aspects of the observed data. PMID:21110593
Support vector machine incremental learning triggered by wrongly predicted samples
NASA Astrophysics Data System (ADS)
Tang, Ting-long; Guan, Qiu; Wu, Yi-rong
2018-05-01
According to the classic Karush-Kuhn-Tucker (KKT) theorem, at every step of incremental support vector machine (SVM) learning, the newly adding sample which violates the KKT conditions will be a new support vector (SV) and migrate the old samples between SV set and non-support vector (NSV) set, and at the same time the learning model should be updated based on the SVs. However, it is not exactly clear at this moment that which of the old samples would change between SVs and NSVs. Additionally, the learning model will be unnecessarily updated, which will not greatly increase its accuracy but decrease the training speed. Therefore, how to choose the new SVs from old sets during the incremental stages and when to process incremental steps will greatly influence the accuracy and efficiency of incremental SVM learning. In this work, a new algorithm is proposed to select candidate SVs and use the wrongly predicted sample to trigger the incremental processing simultaneously. Experimental results show that the proposed algorithm can achieve good performance with high efficiency, high speed and good accuracy.
Modeling the temporal periodicity of growth increments based on harmonic functions
Morales-Bojórquez, Enrique; González-Peláez, Sergio Scarry; Bautista-Romero, J. Jesús; Lluch-Cota, Daniel Bernardo
2018-01-01
Age estimation methods based on hard structures require a process of validation to confirm the periodical pattern of growth marks. Among such processes, one of the most used is the marginal increment ratio (MIR), which was stated to follow a sinusoidal cycle in a population. Despite its utility, in most cases, its implementation has lacked robust statistical analysis. Accordingly, we propose a modeling approach for the temporal periodicity of growth increments based on single and second order harmonic functions. For illustrative purposes, the MIR periodicities for two geoduck species (Panopea generosa and Panopea globosa) were modeled to identify the periodical pattern of growth increments in the shell. This model identified an annual periodicity for both species but described different temporal patterns. The proposed procedure can be broadly used to objectively define the timing of the peak, the degree of symmetry, and therefore, the synchrony of band deposition of different species on the basis of MIR data. PMID:29694381
Prosthetic ankle push-off work reduces metabolic rate but not collision work in non-amputee walking.
Caputo, Joshua M; Collins, Steven H
2014-12-03
Individuals with unilateral below-knee amputation expend more energy than non-amputees during walking and exhibit reduced push-off work and increased hip work in the affected limb. Simple dynamic models of walking suggest a possible solution, predicting that increasing prosthetic ankle push-off should decrease leading limb collision, thereby reducing overall energy requirements. We conducted a rigorous experimental test of this idea wherein ankle-foot prosthesis push-off work was incrementally varied in isolation from one-half to two-times normal levels while subjects with simulated amputation walked on a treadmill at 1.25 m · s(-1). Increased prosthesis push-off significantly reduced metabolic energy expenditure, with a 14% reduction at maximum prosthesis work. In contrast to model predictions, however, collision losses were unchanged, while hip work during swing initiation was decreased. This suggests that powered ankle push-off reduces walking effort primarily through other mechanisms, such as assisting leg swing, which would be better understood using more complete neuromuscular models.
Prosthetic ankle push-off work reduces metabolic rate but not collision work in non-amputee walking
NASA Astrophysics Data System (ADS)
Caputo, Joshua M.; Collins, Steven H.
2014-12-01
Individuals with unilateral below-knee amputation expend more energy than non-amputees during walking and exhibit reduced push-off work and increased hip work in the affected limb. Simple dynamic models of walking suggest a possible solution, predicting that increasing prosthetic ankle push-off should decrease leading limb collision, thereby reducing overall energy requirements. We conducted a rigorous experimental test of this idea wherein ankle-foot prosthesis push-off work was incrementally varied in isolation from one-half to two-times normal levels while subjects with simulated amputation walked on a treadmill at 1.25 m.s-1. Increased prosthesis push-off significantly reduced metabolic energy expenditure, with a 14% reduction at maximum prosthesis work. In contrast to model predictions, however, collision losses were unchanged, while hip work during swing initiation was decreased. This suggests that powered ankle push-off reduces walking effort primarily through other mechanisms, such as assisting leg swing, which would be better understood using more complete neuromuscular models.
Prosthetic ankle push-off work reduces metabolic rate but not collision work in non-amputee walking
Caputo, Joshua M.; Collins, Steven H.
2014-01-01
Individuals with unilateral below-knee amputation expend more energy than non-amputees during walking and exhibit reduced push-off work and increased hip work in the affected limb. Simple dynamic models of walking suggest a possible solution, predicting that increasing prosthetic ankle push-off should decrease leading limb collision, thereby reducing overall energy requirements. We conducted a rigorous experimental test of this idea wherein ankle-foot prosthesis push-off work was incrementally varied in isolation from one-half to two-times normal levels while subjects with simulated amputation walked on a treadmill at 1.25 m·s−1. Increased prosthesis push-off significantly reduced metabolic energy expenditure, with a 14% reduction at maximum prosthesis work. In contrast to model predictions, however, collision losses were unchanged, while hip work during swing initiation was decreased. This suggests that powered ankle push-off reduces walking effort primarily through other mechanisms, such as assisting leg swing, which would be better understood using more complete neuromuscular models. PMID:25467389
Precipitation Structure in the Sierra Nevada of California During Winter
NASA Technical Reports Server (NTRS)
Pandey, Ganesh R.; Cayan, Daniel R.; Georgakakos, Kostantine P.
1998-01-01
The influences of upper air characteristics along the coast of California upon the winter time precipitation in the Sierra Nevada region were investigated. Most precipitation episodes in the Sierra are associated with moist southwesterly winds and also tend to occur when the 700-mb temperature is close to -2 C. This favored wind direction and temperature signifies the equal importance of moisture transport and orographic lifting for maximum precipitation frequency. Making use of this observation, simple linear models were formulated to quantify the precipitation totals observed at different sites as a function of moisture transport. The skill of the model is least for daily precipitation and increases with time scale of aggregation. In terms of incremental gain, the skill of the model is optimal for an aggregation period of 5-7 days, which is also the duration of the most frequent precipitation events in the Sierra. This indicates that upper air moisture transport at can be used to make reasonable estimates of the precipitation totals for most frequent events in the Sierra region.
Modelling the growth of plants with a uniform growth logistics.
Kilian, H G; Bartkowiak, D; Kazda, M; Kaufmann, D
2014-05-21
The increment model has previously been used to describe the growth of plants in general. Here, we examine how the same logistics enables the development of different superstructures. Data from the literature are analyzed with the increment model. Increments are growth-invariant molecular clusters, treated as heuristic particles. This approach formulates the law of mass action for multi-component systems, describing the general properties of superstructures which are optimized via relaxation processes. The daily growth patterns of hypocotyls can be reproduced implying predetermined growth invariant model parameters. In various species, the coordinated formation and death of fine roots are modeled successfully. Their biphasic annual growth follows distinct morphological programs but both use the same logistics. In tropical forests, distributions of the diameter in breast height of trees of different species adhere to the same pattern. Beyond structural fluctuations, competition and cooperation within and between the species may drive optimization. All superstructures of plants examined so far could be reproduced with our approach. With genetically encoded growth-invariant model parameters (interaction with the environment included) perfect morphological development runs embedded in the uniform logistics of the increment model. Copyright © 2014 Elsevier Ltd. All rights reserved.
The balanced scorecard: an incremental approach model to health care management.
Pineno, Charles J
2002-01-01
The balanced scorecard represents a technique used in strategic management to translate an organization's mission and strategy into a comprehensive set of performance measures that provide the framework for implementation of strategic management. This article develops an incremental approach for decision making by formulating a specific balanced scorecard model with an index of nonfinancial as well as financial measures. The incremental approach to costs, including profit contribution analysis and probabilities, allows decisionmakers to assess, for example, how their desire to meet different health care needs will cause changes in service design. This incremental approach to the balanced scorecard may prove to be useful in evaluating the existence of causality relationships between different objective and subjective measures to be included within the balanced scorecard.
From market games to real-world markets
NASA Astrophysics Data System (ADS)
Jefferies, P.; Hart, M. L.; Hui, P. M.; Johnson, N. F.
2001-04-01
This paper uses the development of multi-agent market models to present a unified approach to the joint questions of how financial market movements may be simulated, predicted, and hedged against. We first present the results of agent-based market simulations in which traders equipped with simple buy/sell strategies and limited information compete in speculatory trading. We examine the effect of different market clearing mechanisms and show that implementation of a simple Walrasian auction leads to unstable market dynamics. We then show that a more realistic out-of-equilibrium clearing process leads to dynamics that closely resemble real financial movements, with fat-tailed price increments, clustered volatility and high volume autocorrelation. We then show that replacing the `synthetic' price history used by these simulations with data taken from real financial time-series leads to the remarkable result that the agents can collectively learn to identify moments in the market where profit is attainable. Hence on real financial data, the system as a whole can perform better than random. We then employ the formalism of Bouchaud in conjunction with agent based models to show that in general risk cannot be eliminated from trading with these models. We also show that, in the presence of transaction costs, the risk of option writing is greatly increased. This risk, and the costs, can however be reduced through the use of a delta-hedging strategy with modified, time-dependent volatility structure.
Lightness computation by the human visual system
NASA Astrophysics Data System (ADS)
Rudd, Michael E.
2017-05-01
A model of achromatic color computation by the human visual system is presented, which is shown to account in an exact quantitative way for a large body of appearance matching data collected with simple visual displays. The model equations are closely related to those of the original Retinex model of Land and McCann. However, the present model differs in important ways from Land and McCann's theory in that it invokes additional biological and perceptual mechanisms, including contrast gain control, different inherent neural gains for incremental, and decremental luminance steps, and two types of top-down influence on the perceptual weights applied to local luminance steps in the display: edge classification and spatial integration attentional windowing. Arguments are presented to support the claim that these various visual processes must be instantiated by a particular underlying neural architecture. By pointing to correspondences between the architecture of the model and findings from visual neurophysiology, this paper suggests that edge classification involves a top-down gating of neural edge responses in early visual cortex (cortical areas V1 and/or V2) while spatial integration windowing occurs in cortical area V4 or beyond.
Coordinated control system modelling of ultra-supercritical unit based on a new T-S fuzzy structure.
Hou, Guolian; Du, Huan; Yang, Yu; Huang, Congzhi; Zhang, Jianhua
2018-03-01
The thermal power plant, especially the ultra-supercritical unit is featured with severe nonlinearity, strong multivariable coupling. In order to deal with these difficulties, it is of great importance to build an accurate and simple model of the coordinated control system (CCS) in the ultra-supercritical unit. In this paper, an improved T-S fuzzy model identification approach is proposed. First of all, the k-means++ algorithm is employed to identify the premise parameters so as to guarantee the number of fuzzy rules. Then, the local linearized models are determined by using the incremental historical data around the cluster centers, which are obtained via the stochastic gradient descent algorithm with momentum and variable learning rate. Finally, with the proposed method, the CCS model of a 1000 MW USC unit in Tai Zhou power plant is developed. The effectiveness of the proposed approach is validated by the given extensive simulation results, and it can be further employed to design the overall advanced controllers for the CCS in an USC unit. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Constructing increment-decrement life tables.
Schoen, R
1975-05-01
A life table model which can recognize increments (or entrants) as well as decrements has proven to be of considerable value in the analysis of marital status patterns, labor force participation patterns, and other areas of substantive interest. Nonetheless, relatively little work has been done on the methodology of increment-decrement (or combined) life tables. The present paper reviews the general, recursive solution of Schoen and Nelson (1974), develops explicit solutions for three cases of particular interest, and compares alternative approaches to the construction of increment-decrement tables.
NASA Astrophysics Data System (ADS)
Yokoi, S.
2013-12-01
The Japan Meteorological Agency (JMA) recently released a new reanalysis dataset JRA-55 with the use of a JMA operational prediction model and 4D-VAR data assimilation. To evaluate merit in utilizing the JRA-55 dataset to investigate dynamics of the tropical intraseasonal variability (ISV) including the Madden-Julian Oscillation (MJO), this study examines ISV-scale precipitable water vapor (PWV) budget over the period 1989-2012. The ISV-scale PWV anomaly related to the boreal-winter MJO propagates eastward along with precipitation, consistent with the SSM/I PWV product. Decomposition of the PWV tendency into that simulated by the model and the analysis increment estimated by the data assimilation reveals that the model makes the PWV anomaly move eastward. On the other hand, the analysis increment exhibits positive values over the area where the PWV anomaly is positive, indicating that the model tends to damp the MJO signal. Note that the analysis increment over the Maritime Continent has comparable magnitude to the model tendency. The positive analysis increment may mainly be caused by an excess of precipitation anomaly with respect to the magnitude of PWV anomaly. In addition to the boreal-winter MJO, this study also examines the PWV budget associated with northward-propagating ISV during the boreal summer and find similar relationship between the PWV anomaly and analysis increment.
Plutons: Simmer between 350° and 500°C for 10 million years, then serve cold (Invited)
NASA Astrophysics Data System (ADS)
Coleman, D. S.; Davis, J.
2009-12-01
The growing recognition that continental plutons are assembled incrementally over millions of years requires reexamination of the thermal histories of intrusive rocks. With the exception of the suggestion that pluton magma chambers can be revitalized by mafic input at their deepest structural levels, most aspects of modern pluton petrology are built on the underlying assumption that silicic plutons intrude as discrete thermal packages that undergo subsequent monotonic decay back to a steady-state geothermal gradient. The recognition that homogeneous silicic plutons are constructed over timescales too great to be single events necessitates rethinking pluton intrusion mechanisms, textures, thermochronology, chemical evolution and links to volcanic rocks. Three-dimensional thermal modeling of sheeted (horizontal and vertical) incremental pluton assembly (using HEAT3D by Wohletz, 2007) yields several results that are largely independent of intrusive geometry and may help understand bothersome field and laboratory results from plutonic rocks. 1) All increments cool quickly below hornblende closure temperature. However, late increments are emplaced into walls warmed by earlier increments, and they cycle between hornblende and biotite closure temperatures, a range in which fluid-rich melts are likely to be present. These conditions persist until the increments are far from the region of new magma flux, or the addition of increments stops. These observations are supported by Ar thermochronology and may explain why heterogeneous early marginal intrusive phases often grade into younger homogeneous interior map units. 2) Early increments become the contact metamorphic wall rocks of later increments. This observation suggests that much of the contact metamorphism associated with a given volume of plutonic rock is “lost” via textural modification of early increments during intrusion of later increments. Johnson and Glazner (CMP, in press) argue that mappable variations in pluton texture can result from textural modification during thermal cycling associated with incremental assembly. 3) The thermal structure of the model pluton evolves toward roughly spheroidal isotherms even though the pluton is assembled from thin tabular sheets. The zone of melt-bearing rock and the shape of intrapluton contact metamorphic isograds bear little resemblance to the increments from which the pluton was built. Consequently, pluton contacts mapped by variations in texture that reflect the thermal cycling inherent to incremental assembly will inevitably be “blob” or diapir-like, but will yield little insight into magma intrusion geometry. 4) Although models yield large regions of melt-bearing rock, the melt fraction is low and the melt-bearing volume at any time is small compared to the total volume of the pluton. This observation raises doubts about the connections between zoned silicic plutons and large ignimbrite eruptions.
Comparison of the Incremental Validity of the Old and New MCAT.
ERIC Educational Resources Information Center
Wolf, Fredric M.; And Others
The predictive and incremental validity of both the Old and New Medical College Admission Test (MCAT) was examined and compared with a sample of over 300 medical students. Results of zero order and incremental validity coefficients, as well as prediction models resulting from all possible subsets regression analyses using Mallow's Cp criterion,…
Prediction of height increment for models of forest growth
Albert R. Stage
1975-01-01
Functional forms of equations were derived for predicting 10-year periodic height increment of forest trees from height, diameter, diameter increment, and habitat type. Crown ratio was considered as an additional variable for prediction, but its contribution was negligible. Coefficients of the function were estimated for 10 species of trees growing in 10 habitat types...
Aerodynamic Analyses and Database Development for Ares I Vehicle First Stage Separation
NASA Technical Reports Server (NTRS)
Pamadi, Bandu N.; Pei, Jing; Pinier, Jeremy T.; Klopfer, Goetz H.; Holland, Scott D.; Covell, Peter F.
2011-01-01
This paper presents the aerodynamic analysis and database development for first stage separation of Ares I A106 crew launch vehicle configuration. Separate 6-DOF databases were created for the first stage and upper stage and each database consists of three components: (a) isolated or freestream coefficients, (b) power-off proximity increments, and (c) power-on proximity increments. The isolated and power-off incremental databases were developed using data from 1% scaled model tests in AEDC VKF Tunnel A. The power-on proximity increments were developed using OVERFLOW CFD solutions. The database also includes incremental coefficients for one BDM and one USM failure scenarios.
Incremental principal component pursuit for video background modeling
Rodriquez-Valderrama, Paul A.; Wohlberg, Brendt
2017-03-14
An incremental Principal Component Pursuit (PCP) algorithm for video background modeling that is able to process one frame at a time while adapting to changes in background, with a computational complexity that allows for real-time processing, having a low memory footprint and is robust to translational and rotational jitter.
Hunt, E R; Martin, F C; Running, S W
1991-01-01
Simulation models of ecosystem processes may be necessary to separate the long-term effects of climate change on forest productivity from the effects of year-to-year variations in climate. The objective of this study was to compare simulated annual stem growth with measured annual stem growth from 1930 to 1982 for a uniform stand of ponderosa pine (Pinus ponderosa Dougl.) in Montana, USA. The model, FOREST-BGC, was used to simulate growth assuming leaf area index (LAI) was either constant or increasing. The measured stem annual growth increased exponentially over time; the differences between the simulated and measured stem carbon accumulations were not large. Growth trends were removed from both the measured and simulated annual increments of stem carbon to enhance the year-to-year variations in growth resulting from climate. The detrended increments from the increasing LAI simulation fit the detrended increments of the stand data over time with an R(2) of 0.47; the R(2) increased to 0.65 when the previous year's simulated detrended increment was included with the current year's simulated increment to account for autocorrelation. Stepwise multiple linear regression of the detrended increments of the stand data versus monthly meteorological variables had an R(2) of 0.37, and the R(2) increased to 0.47 when the previous year's meteorological data were included to account for autocorrelation. Thus, FOREST-BGC was more sensitive to the effects of year-to-year climate variation on annual stem growth than were multiple linear regression models.
Elliott, Rachel A; Putman, Koen D; Franklin, Matthew; Annemans, Lieven; Verhaeghe, Nick; Eden, Martin; Hayre, Jasdeep; Rodgers, Sarah; Sheikh, Aziz; Avery, Anthony J
2014-06-01
We recently showed that a pharmacist-led information technology-based intervention (PINCER) was significantly more effective in reducing medication errors in general practices than providing simple feedback on errors, with cost per error avoided at £79 (US$131). We aimed to estimate cost effectiveness of the PINCER intervention by combining effectiveness in error reduction and intervention costs with the effect of the individual errors on patient outcomes and healthcare costs, to estimate the effect on costs and QALYs. We developed Markov models for each of six medication errors targeted by PINCER. Clinical event probability, treatment pathway, resource use and costs were extracted from literature and costing tariffs. A composite probabilistic model combined patient-level error models with practice-level error rates and intervention costs from the trial. Cost per extra QALY and cost-effectiveness acceptability curves were generated from the perspective of NHS England, with a 5-year time horizon. The PINCER intervention generated £2,679 less cost and 0.81 more QALYs per practice [incremental cost-effectiveness ratio (ICER): -£3,037 per QALY] in the deterministic analysis. In the probabilistic analysis, PINCER generated 0.001 extra QALYs per practice compared with simple feedback, at £4.20 less per practice. Despite this extremely small set of differences in costs and outcomes, PINCER dominated simple feedback with a mean ICER of -£3,936 (standard error £2,970). At a ceiling 'willingness-to-pay' of £20,000/QALY, PINCER reaches 59 % probability of being cost effective. PINCER produced marginal health gain at slightly reduced overall cost. Results are uncertain due to the poor quality of data to inform the effect of avoiding errors.
Kim, Ho-Joong; Kang, Kyoung-Tak; Park, Sung-Cheol; Kwon, Oh-Hyo; Son, Juhyun; Chang, Bong-Soon; Lee, Choon-Ki; Yeom, Jin S; Lenke, Lawrence G
2017-05-01
There have been conflicting results on the surgical outcome of lumbar fusion surgery using two different techniques: robot-assisted pedicle screw fixation and conventional freehand technique. In addition, there have been no studies about the biomechanical issues between both techniques. This study aimed to investigate the biomechanical properties in terms of stress at adjacent segments using robot-assisted pedicle screw insertion technique (robot-assisted, minimally invasive posterior lumbar interbody fusion, Rom-PLIF) and freehand technique (conventional, freehand, open approach, posterior lumbar interbody fusion, Cop-PLIF) for instrumented lumbar fusion surgery. This is an additional post-hoc analysis for patient-specific finite element (FE) model. The sample is composed of patients with degenerative lumbar disease. Intradiscal pressure and facet contact force are the outcome measures. Patients were randomly assigned to undergo an instrumented PLIF procedure using a Rom-PLIF (37 patients) or a Cop-PLIF (41), respectively. Five patients in each group were selected using a simple random sampling method after operation, and 10 preoperative and postoperative lumbar spines were modeled from preoperative high-resolution computed tomography of 10 patients using the same method for a validated lumbar spine model. Under four pure moments of 7.5 Nm, the changes in intradiscal pressure and facet joint contact force at the proximal adjacent segment following fusion surgery were analyzed and compared with preoperative states. The representativeness of random samples was verified. Both groups showed significant increases in postoperative intradiscal pressure at the proximal adjacent segment under four moments, compared with the preoperative state. The Cop-PLIF models demonstrated significantly higher percent increments of intradiscal pressure at proximal adjacent segments under extension, lateral bending, and torsion moments than the Rom-PLIF models (p=.032, p=.008, and p=.016, respectively). Furthermore, the percent increment of facet contact force was significantly higher in the Cop-PLIF models under extension and torsion moments than in the Rom-PLIF models (p=.016 under both extension and torsion moments). The present study showed the clinical application of subject-specific FE analysis in the spine. Even though there was biomechanical superiority of the robot-assisted insertions in terms of alleviation of stress increments at adjacent segments after fusion, cautious interpretation is needed because of the small sample size. Copyright © 2016 Elsevier Inc. All rights reserved.
A Simple Case Study of a Grid Performance System
NASA Technical Reports Server (NTRS)
Aydt, Ruth; Gunter, Dan; Quesnel, Darcy; Smith, Warren; Taylor, Valerie; Biegel, Bryan (Technical Monitor)
2001-01-01
This document presents a simple case study of a Grid performance system based on the Grid Monitoring Architecture (GMA) being developed by the Grid Forum Performance Working Group. It describes how the various system components would interact for a very basic monitoring scenario, and is intended to introduce people to the terminology and concepts presented in greater detail in other Working Group documents. We believe that by focusing on the simple case first, working group members can familiarize themselves with terminology and concepts, and productively join in the ongoing discussions of the group. In addition, prototype implementations of this basic scenario can be built to explore the feasibility of the proposed architecture and to expose possible shortcomings. Once the simple case is understood and agreed upon, complexities can be added incrementally as warranted by cases not addressed in the most basic implementation described here. Following the basic performance monitoring scenario discussion, unresolved issues are introduced for future discussion.
Wallis, Thomas S. A.; Dorr, Michael; Bex, Peter J.
2015-01-01
Sensitivity to luminance contrast is a prerequisite for all but the simplest visual systems. To examine contrast increment detection performance in a way that approximates the natural environmental input of the human visual system, we presented contrast increments gaze-contingently within naturalistic video freely viewed by observers. A band-limited contrast increment was applied to a local region of the video relative to the observer's current gaze point, and the observer made a forced-choice response to the location of the target (≈25,000 trials across five observers). We present exploratory analyses showing that performance improved as a function of the magnitude of the increment and depended on the direction of eye movements relative to the target location, the timing of eye movements relative to target presentation, and the spatiotemporal image structure at the target location. Contrast discrimination performance can be modeled by assuming that the underlying contrast response is an accelerating nonlinearity (arising from a nonlinear transducer or gain control). We implemented one such model and examined the posterior over model parameters, estimated using Markov-chain Monte Carlo methods. The parameters were poorly constrained by our data; parameters constrained using strong priors taken from previous research showed poor cross-validated prediction performance. Atheoretical logistic regression models were better constrained and provided similar prediction performance to the nonlinear transducer model. Finally, we explored the properties of an extended logistic regression that incorporates both eye movement and image content features. Models of contrast transduction may be better constrained by incorporating data from both artificial and natural contrast perception settings. PMID:26057546
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-30
... notice to solicit comments on the proposed rule change from interested persons. \\1\\ 15 U.S.C. 78s(b)(1... there is a reasonable lowest minimum increment for bids and offers that makes it simple to monitor and... national market system, and, in general, to protect investors and the public interest. Additionally, the...
Bryan A. Black; Daniel Griffin; Peter van der Sleen; Alan D. Wanamaker; James H. Speer; David C. Frank; David W. Stahle; Neil Pederson; Carolyn A. Copenheaver; Valerie Trouet; Shelly Griffin; Bronwyn M. Gillanders
2016-01-01
High-resolution biogenic and geologic proxies in which one increment or layer is formed per year are crucial to describing natural ranges of environmental variability in Earth's physical and biological systems. However, dating controls are necessary to ensure temporal precision and accuracy; simple counts cannot ensure that all layers are placed correctly in time...
Limits to CO2-Neutrality of Burning Wood. (Review)
NASA Astrophysics Data System (ADS)
Abolins, J.; Gravitis, J.
2016-08-01
Consumption of wood as a source of energy is discussed with respect to efficiency and restraints to ensure sustainability of the environment on the grounds of a simple analytical model describing dynamics of biomass accumulation in forest stands - a particular case of the well-known empirical Richards' equation. Amounts of wood harvested under conditions of maximum productivity of forest land are presented in units normalised with respect to the maximum of the mean annual increment and used to determine the limits of CO2-neutrality. The ecological "footprint" defined by the area of growing stands necessary to absorb the excess amount of CO2 annually released from burning biomass is shown to be equal to the land area of a plantation providing sustainable supply of fire-wood.
77 FR 13632 - Receipt of Complaint; Solicitation of Comments Relating to the Public Interest
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-07
... Certain Digital Models, Digital Data, and Treatment Plans for Use in Making Incremental Dental Positioning... importation of certain digital models, digital data, and treatment plans for use in making incremental dental... health and welfare in the United States, competitive conditions in the United States economy, the...
40 CFR 60.2580 - When must I complete each increment of progress?
Code of Federal Regulations, 2013 CFR
2013-07-01
... PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emissions Guidelines and Compliance Times for Commercial and Industrial Solid Waste Incineration Units Model Rule-Increments of...
40 CFR 60.2580 - When must I complete each increment of progress?
Code of Federal Regulations, 2014 CFR
2014-07-01
... PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emissions Guidelines and Compliance Times for Commercial and Industrial Solid Waste Incineration Units Model Rule-Increments of...
40 CFR 60.2580 - When must I complete each increment of progress?
Code of Federal Regulations, 2012 CFR
2012-07-01
... PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emissions Guidelines and Compliance Times for Commercial and Industrial Solid Waste Incineration Units Model Rule-Increments of...
Fabian C.C. Uzoh; William W. Oliver
2008-01-01
A diameter increment model is developed and evaluated for individual trees of ponderosa pine throughout the species range in the United States using a multilevel linear mixed model. Stochastic variability is broken down among period, locale, plot, tree and within-tree components. Covariates acting at tree and stand level, as breast height diameter, density, site index...
NASA Technical Reports Server (NTRS)
Sandlin, Doral R.; Howard, Kipp E.
1991-01-01
A user friendly FORTRAN code that can be used for preliminary design of V/STOL aircraft is described. The program estimates lift increments, due to power induced effects, encountered by aircraft in V/STOL flight. These lift increments are calculated using empirical relations developed from wind tunnel tests and are due to suckdown, fountain, ground vortex, jet wake, and the reaction control system. The code can be used as a preliminary design tool along with NASA Ames' Aircraft Synthesis design code or as a stand-alone program for V/STOL aircraft designers. The Power Induced Effects (PIE) module was validated using experimental data and data computed from lift increment routines. Results are presented for many flat plate models along with the McDonnell Aircraft Company's MFVT (mixed flow vectored thrust) V/STOL preliminary design and a 15 percent scale model of the YAV-8B Harrier V/STOL aircraft. Trends and magnitudes of lift increments versus aircraft height above the ground were predicted well by the PIE module. The code also provided good predictions of the magnitudes of lift increments versus aircraft forward velocity. More experimental results are needed to determine how well the code predicts lift increments as they vary with jet deflection angle and angle of attack. The FORTRAN code is provided in the appendix.
Lamb, Berton Lee; Burkardt, Nina
2008-01-01
When Linda Pilkey- Jarvis and Orrin Pilkey state in their article, "Useless Arithmetic," that "mathematical models are simplified, generalized representations of a process or system," they probably do not mean to imply that these models are simple. Rather, the models are simpler than nature and that is the heart of the problem with predictive models. We have had a long professional association with the developers and users of one of these simplifications of nature in the form of a mathematical model known as Physical Habitat Simulation (PHABSIM), which is part of the Instream Flow Incremental Methodology (IFIM). The IFIM is a suite of techniques, including PHABSIM, that allows the analyst to incorporate hydrology , hydraulics, habitat, water quality, stream temperature, and other variables into a tradeoff analysis that decision makers can use to design a flow regime to meet management objectives (Stalnaker et al. 1995). Although we are not the developers of the IFIM, we have worked with those who did design it, and we have tried to understand how the IFIM and PHABSIM are actually used in decision making (King, Burkardt, and Clark 2006; Lamb 1989).
Thermal modeling of cogging process using finite element method
NASA Astrophysics Data System (ADS)
Khaled, Mahmoud; Ramadan, Mohamad; Fourment, Lionel
2016-10-01
Among forging processes, incremental processes are those where the work piece undergoes several thermal and deformation steps with small increment of deformation. They offer high flexibility in terms of the work piece size since they allow shaping wide range of parts from small to large size. Since thermal treatment is essential to obtain the required shape and quality, this paper presents the thermal modeling of incremental processes. The finite element discretization, spatial and temporal, is exposed. Simulation is performed using commercial software Forge 3. Results show the thermal behavior at the beginning and at the end of the process.
A Geostatistical Scaling Approach for the Generation of Non Gaussian Random Variables and Increments
NASA Astrophysics Data System (ADS)
Guadagnini, Alberto; Neuman, Shlomo P.; Riva, Monica; Panzeri, Marco
2016-04-01
We address manifestations of non-Gaussian statistical scaling displayed by many variables, Y, and their (spatial or temporal) increments. Evidence of such behavior includes symmetry of increment distributions at all separation distances (or lags) with sharp peaks and heavy tails which tend to decay asymptotically as lag increases. Variables reported to exhibit such distributions include quantities of direct relevance to hydrogeological sciences, e.g. porosity, log permeability, electrical resistivity, soil and sediment texture, sediment transport rate, rainfall, measured and simulated turbulent fluid velocity, and other. No model known to us captures all of the documented statistical scaling behaviors in a unique and consistent manner. We recently proposed a generalized sub-Gaussian model (GSG) which reconciles within a unique theoretical framework the probability distributions of a target variable and its increments. We presented an algorithm to generate unconditional random realizations of statistically isotropic or anisotropic GSG functions and illustrated it in two dimensions. In this context, we demonstrated the feasibility of estimating all key parameters of a GSG model underlying a single realization of Y by analyzing jointly spatial moments of Y data and corresponding increments. Here, we extend our GSG model to account for noisy measurements of Y at a discrete set of points in space (or time), present an algorithm to generate conditional realizations of corresponding isotropic or anisotropic random field, and explore them on one- and two-dimensional synthetic test cases.
Propagation of the Hawaiian-Emperor volcano chain by Pacific plate cooling stress
Stuart, W.D.; Foulger, G.R.; Barall, M.
2007-01-01
The lithosphere crack model, the main alternative to the mantle plume model for age-progressive magma emplacement along the Hawaiian-Emperor volcano chain, requires the maximum horizontal tensile stress to be normal to the volcano chain. However, published stress fields calculated from Pacific lithosphere tractions and body forces (e.g., subduction pull, basal drag, lithosphere density) are not optimal for southeast propagation of a stress-free, vertical tensile crack coincident with the Hawaiian segment of the Hawaiian-Emperor chain. Here we calculate the thermoelastic stress rate for present-day cooling of the Pacific plate using a spherical shell finite element representation of the plate geometry. We use observed seafloor isochrons and a standard model for lithosphere cooling to specify the time dependence of vertical temperature profiles. The calculated stress rate multiplied by a time increment (e.g., 1 m.y.) then gives a thermoelastic stress increment for the evolving Pacific plate. Near the Hawaiian chain position, the calculated stress increment in the lower part of the shell is tensional, with maximum tension normal to the chain direction. Near the projection of the chain trend to the southeast beyond Hawaii, the stress increment is compressive. This incremental stress field has the form necessary to maintain and propagate a tensile crack or similar lithosphere flaw and is thus consistent with the crack model for the Hawaiian volcano chain.?? 2007 The Geological Society of America.
Fragmentation under the Scaling Symmetry and Turbulent Cascade with Intermittency
NASA Technical Reports Server (NTRS)
Gorokhovski, M.
2003-01-01
Fragmentation plays an important role in a variety of physical, chemical, and geological processes. Examples include atomization in sprays, crushing of rocks, explosion and impact of solids, polymer degradation, etc. Although each individual action of fragmentation is a complex process, the number of these elementary actions is large. It is natural to abstract a simple 'effective' scenario of fragmentation and to represent its essential features. One of the models is the fragmentation under the scaling symmetry: each breakup action reduces the typical length of fragments, r (right arrow) alpha r, by an independent random multiplier alpha (0 < alpha < 1), which is governed by the fragmentation intensity spectrum q(alpha), integral(sup 1)(sub 0) q(alpha)d alpha = 1. This scenario has been proposed by Kolmogorov (1941), when he considered the breakup of solid carbon particle. Describing the breakup as a random discrete process, Kolmogorov stated that at latest times, such a process leads to the log-normal distribution. In Gorokhovski & Saveliev, the fragmentation under the scaling symmetry has been reviewed as a continuous evolution process with new features established. The objective of this paper is twofold. First, the paper synthesizes and completes theoretical part of Gorokhovski & Saveliev. Second, the paper shows a new application of the fragmentation theory under the scale invariance. This application concerns the turbulent cascade with intermittency. We formulate here a model describing the evolution of the velocity increment distribution along the progressively decreasing length scale. The model shows that when the turbulent length scale gets smaller, the velocity increment distribution has central growing peak and develops stretched tails. The intermittency in turbulence is manifested in the same way: large fluctuations of velocity provoke highest strain in narrow (dissipative) regions of flow.
Modeling Human Disease Phenotype in Model Organisms: “It’s only a model!”
Marian, A.J.
2011-01-01
Preface A perspective by definition is a viewpoint. A viewpoint, like any other opinion, could be utterly erroneous. This Perspective is meant to be provocative but not to lessen the accomplishments of the scientific society as a whole or belittle any particular field of science or investigators. Scientific discoveries are typically incremental with various levels of increments. Often the significance of the discoveries remains unrecognized for many years if not decades, as was the case for the discovery of DNA by Friedrich Miescher in 1868 1. The significance of the discovery remained largely unrecognized for about 75 years, until simple and elegant experiments by Hershey and Chase showed that DNA and not protein, as was commonly perceived, was the genetic material 2. Our shortcomings in recognizing the significance of the scientific discoveries should not deter us from Cartesian skepticism, which was pioneered by the Persian philosopher Ghazali and popularized by Rene Descartes’ “I doubt, therefore I think, therefore I am.” An essential component of our academic society is the freedom to express viewpoints. Yet, personal opinions must not guide judgment on merits of the scientific discoveries and other peer-review matters. Science must be judged by the scientific standards of the time. It must not be judged by personal views. Scientific referees like all judges must be impartial and devoid of personal biases on rendering judgments. Accordingly, this viewpoint is simply that, a viewpoint. It is not indicative of author’s personal biases on any specific scientific discipline. The Perspective is aimed to raise doubts, as doubt is an incentive to truth. PMID:21817163
Moore, B C; Peters, R W; Glasberg, B R
1999-12-01
Psychometric functions for detecting increments or decrements in level of sinusoidal pedestals were measured for increment and decrement durations of 5, 10, 20, 50, 100, and 200 ms and for frequencies of 250, 1000, and 4000 Hz. The sinusoids were presented in background noise intended to mask spectral splatter. A three-interval, three-alternative procedure was used. The results indicated that, for increments, the detectability index d' was approximately proportional to delta I/I. For decrements, d' was approximately proportional to delta L. The slopes of the psychometric functions increased (indicating better performance) with increasing frequency for both increments and decrements. For increments, the slopes increased with increasing increment duration up to 200 ms at 250 and 1000 Hz, but at 4000 Hz they increased only up to 50 ms. For decrements, the slopes increased for durations up to 50 ms, and then remained roughly constant, for all frequencies. For a center frequency of 250 Hz, the slopes of the psychometric functions for increment detection increased with duration more rapidly than predicted by a "multiple-looks" hypothesis, i.e., more rapidly than the square root of duration, for durations up to 50 ms. For center frequencies of 1000 and 4000 Hz, the slopes increased less rapidly than predicted by a multiple-looks hypothesis, for durations greater than about 20 ms. The slopes of the psychometric functions for decrement detection increased with decrement duration at a rate slightly greater than the square root of duration, for durations up to 50 ms, at all three frequencies. For greater durations, the increase in slope was less than proportional to the square root of duration. The results were analyzed using a model incorporating a simulated auditory filter, a compressive nonlinearity, a sliding temporal integrator, and a decision device based on a template mechanism. The model took into account the effects of both the external noise and an assumed internal noise. The model was able to account for the major features of the data for both increment and decrement detection.
40 CFR 60.5090 - When must I complete each increment of progress?
Code of Federal Regulations, 2011 CFR
2011-07-01
... PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emission Guidelines and Compliance Times for Existing Sewage Sludge Incineration Units Model Rule-Increments of Progress § 60.5090 When...
40 CFR 60.2595 - What if I do not meet an increment of progress?
Code of Federal Regulations, 2013 CFR
2013-07-01
... PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emissions Guidelines and Compliance Times for Commercial and Industrial Solid Waste Incineration Units Model Rule-Increments of...
40 CFR 60.2595 - What if I do not meet an increment of progress?
Code of Federal Regulations, 2014 CFR
2014-07-01
... PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emissions Guidelines and Compliance Times for Commercial and Industrial Solid Waste Incineration Units Model Rule-Increments of...
40 CFR 60.2595 - What if I do not meet an increment of progress?
Code of Federal Regulations, 2012 CFR
2012-07-01
... PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emissions Guidelines and Compliance Times for Commercial and Industrial Solid Waste Incineration Units Model Rule-Increments of...
40 CFR 69.32 - Title V conditional exemption.
Code of Federal Regulations, 2010 CFR
2010-07-01
... (PSD) increments. (2) CNMI may conduct air emissions modeling, using EPA guidelines, for power plants... determine whether existing power plants cause or contribute to violation of the NAAQS and PSD increments in...
40 CFR 69.32 - Title V conditional exemption.
Code of Federal Regulations, 2011 CFR
2011-07-01
... (PSD) increments. (2) CNMI may conduct air emissions modeling, using EPA guidelines, for power plants... determine whether existing power plants cause or contribute to violation of the NAAQS and PSD increments in...
40 CFR 69.32 - Title V conditional exemption.
Code of Federal Regulations, 2012 CFR
2012-07-01
... (PSD) increments. (2) CNMI may conduct air emissions modeling, using EPA guidelines, for power plants... determine whether existing power plants cause or contribute to violation of the NAAQS and PSD increments in...
40 CFR 69.32 - Title V conditional exemption.
Code of Federal Regulations, 2014 CFR
2014-07-01
... (PSD) increments. (2) CNMI may conduct air emissions modeling, using EPA guidelines, for power plants... determine whether existing power plants cause or contribute to violation of the NAAQS and PSD increments in...
Development of a Risk Prediction Model and Clinical Risk Score for Isolated Tricuspid Valve Surgery.
LaPar, Damien J; Likosky, Donald S; Zhang, Min; Theurer, Patty; Fonner, C Edwin; Kern, John A; Bolling, Stephen F; Drake, Daniel H; Speir, Alan M; Rich, Jeffrey B; Kron, Irving L; Prager, Richard L; Ailawadi, Gorav
2018-02-01
While tricuspid valve (TV) operations remain associated with high mortality (∼8-10%), no robust prediction models exist to support clinical decision-making. We developed a preoperative clinical risk model with an easily calculable clinical risk score (CRS) to predict mortality and major morbidity after isolated TV surgery. Multi-state Society of Thoracic Surgeons database records were evaluated for 2,050 isolated TV repair and replacement operations for any etiology performed at 50 hospitals (2002-2014). Parsimonious preoperative risk prediction models were developed using multi-level mixed effects regression to estimate mortality and composite major morbidity risk. Model results were utilized to establish a novel CRS for patients undergoing TV operations. Models were evaluated for discrimination and calibration. Operative mortality and composite major morbidity rates were 9% and 42%, respectively. Final regression models performed well (both P<0.001, AUC = 0.74 and 0.76) and included preoperative factors: age, gender, stroke, hemodialysis, ejection fraction, lung disease, NYHA class, reoperation and urgent or emergency status (all P<0.05). A simple CRS from 0-10+ was highly associated (P<0.001) with incremental increases in predicted mortality and major morbidity. Predicted mortality risk ranged from 2%-34% across CRS categories, while predicted major morbidity risk ranged from 13%-71%. Mortality and major morbidity after isolated TV surgery can be predicted using preoperative patient data from the STS Adult Cardiac Database. A simple clinical risk score predicts mortality and major morbidity after isolated TV surgery. This score may facilitate perioperative counseling and identification of suitable patients for TV surgery. Copyright © 2018 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
Di Molfetta, Arianna; Amodeo, Antonio; Fresiello, Libera; Trivella, Maria Giovanna; Iacobelli, Roberta; Pilati, Mara; Ferrari, Gianfranco
2015-07-01
Considering the lack of donors, ventricular assist devices (VADs) could be an alternative to heart transplantation for failing Fontan patients, in spite of the lack of experience and the complex anatomy and physiopathology of these patients. Considering the high number of variables that play an important role such as type of Fontan failure, type of VAD connection, and setting (right VAD [RVAD], left VAD [LVAD], or biventricular VAD [BIVAD]), a numerical model could be useful to support clinical decisions. The aim of this article is to develop and test a lumped parameter model of the cardiovascular system simulating and comparing the VAD effects on failing Fontan. Hemodynamic and echocardiographic data of 10 Fontan patients were used to simulate the baseline patients' condition using a dedicated lumped parameter model. Starting from the simulated baseline and for each patient, a systolic dysfunction, a diastolic dysfunction, and an increment of the pulmonary vascular resistance were simulated. Then, for each patient and for each pathology, the RVAD, LVAD, and BIVAD implantations were simulated. The model can reproduce patients' baseline well. In the case of systolic dysfunction, the LVAD unloads the single ventricle and increases the cardiac output (CO) (35%) and the arterial systemic pressure (Pas) (25%). With RVAD, a decrement of inferior vena cava pressure (Pvci) (39%) was observed with 34% increment of CO, but an increment of the single ventricle external work (SVEW). With the BIVAD, an increment of Pas (29%) and CO (37%) was observed. In the case of diastolic dysfunction, the LVAD increases CO (42%) and the RVAD decreases the Pvci, while both increase the SVEW. In the case of pulmonary vascular resistance increment, the highest CO (50%) and Pas (28%) increment is obtained with an RVAD with the highest decrement of Pvci (53%) and an increment of the SVEW but with the lowest VAD power consumption. The use of numerical models could be helpful in this innovative field to evaluate the effect of VAD implantation on Fontan patients to support patient and VAD type selection personalizing the assistance. Copyright © 2015 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
Rychkova, Svetlana; Ninio, Jacques
2011-01-01
When stereoscopic images are presented alternately to the two eyes, stereopsis occurs at F ≥ 1 Hz full-cycle frequencies for very simple stimuli, and F ≥ 3 Hz full-cycle frequencies for random-dot stereograms (eg Ludwig I, Pieper W, Lachnit H, 2007 “Temporal integration of monocular images separated in time: stereopsis, stereoacuity, and binocular luster” Perception & Psychophysics 69 92–102). Using twenty different stereograms presented through liquid crystal shutters, we studied the transition to stereopsis with fifteen subjects. The onset of stereopsis was observed during a stepwise increase of the alternation frequency, and its disappearance was observed during a stepwise decrease in frequency. The lowest F values (around 2.5 Hz) were observed with stimuli involving two to four simple disjoint elements (circles, arcs, rectangles). Higher F values were needed for stimuli containing slanted elements or curved surfaces (about 1 Hz increment), overlapping elements at two different depths (about 2.5 Hz increment), or camouflaged overlapping surfaces (> 7 Hz increment). A textured cylindrical surface with a horizontal axis appeared easier to interpret (5.7 Hz) than a pair of slanted segments separated in depth but forming a cross in projection (8 Hz). Training effects were minimal, and F usually increased as disparities were reduced. The hierarchy of difficulties revealed in the study may shed light on various problems that the brain needs to solve during stereoscopic interpretation. During the construction of the three-dimensional percept, the loss of information due to natural decay of the stimuli traces must be compensated by refreshes of visual input. In the discussion an attempt is made to link our results with recent advances in the comprehension of visual scene memory. PMID:23145225
Enhancing performance during inclined loaded walking with a powered ankle-foot exoskeleton.
Galle, Samuel; Malcolm, Philippe; Derave, Wim; De Clercq, Dirk
2014-11-01
A simple ankle-foot exoskeleton that assists plantarflexion during push-off can reduce the metabolic power during walking. This suggests that walking performance during a maximal incremental exercise could be improved with an exoskeleton if the exoskeleton is still efficient during maximal exercise intensities. Therefore, we quantified the walking performance during a maximal incremental exercise test with a powered and unpowered exoskeleton: uphill walking with progressively higher weights. Nine female subjects performed two incremental exercise tests with an exoskeleton: 1 day with (powered condition) and another day without (unpowered condition) plantarflexion assistance. Subjects walked on an inclined treadmill (15%) at 5 km h(-1) and 5% of body weight was added every 3 min until exhaustion. At volitional termination no significant differences were found between the powered and unpowered condition for blood lactate concentration (respectively, 7.93 ± 2.49; 8.14 ± 2.24 mmol L(-1)), heart rate (respectively, 190.00 ± 6.50; 191.78 ± 6.50 bpm), Borg score (respectively, 18.57 ± 0.79; 18.93 ± 0.73) and VO₂ peak (respectively, 40.55 ± 2.78; 40.55 ± 3.05 ml min(-1) kg(-1)). Thus, subjects were able to reach the same (near) maximal effort in both conditions. However, subjects continued the exercise test longer in the powered condition and carried 7.07 ± 3.34 kg more weight because of the assistance of the exoskeleton. Our results show that plantarflexion assistance during push-off can increase walking performance during a maximal exercise test as subjects were able to carry more weight. This emphasizes the importance of acting on the ankle joint in assistive devices and the potential of simple ankle-foot exoskeletons for reducing metabolic power and increasing weight carrying capability, even during maximal intensities.
A simplified gis-based model for large wood recruitment and connectivity in mountain basins
NASA Astrophysics Data System (ADS)
Franceschi, Silvia; Antonello, Andrea; Vela, Ana Lucia; Cavalli, Marco; Crema, Stefano; Comiti, Francesco; Tonon, Giustino
2015-04-01
During the last 50 years in the Alps the decline of the rural and forest economy and at the depopulation of the mountain areas caused the progressive abandon of the land in general and in particular of the riparian zones and the consequent increment of the vegetation extension. On one hand the wood increases the availability of organic matter and has positive effects on mountain river systems. However, during flooding events large woods that reach the stream cause the clogging of bridges with an increase of flood hazard. The approach to the evaluation of the availability of large wood during flooding events is still a challenge. There are models that simulate the propagation of the logs downstream, but the evaluation of the trees that can reach the stream is still done using simplified GIS procedures. These procedures are the base for our research which will include LiDAR derived information on vegetation to evaluate large wood recruitment extreme events. Within the last Google Summer of Code (2014) we developed a set of tools to evaluate large wood recruitment and propagation along the channel network based on a simplified methodology for monitoring and modeling large wood recruitment and transport in mountain basins implemented by Lucía et 2014. These tools are integrated in the JGrassTools project as a dedicated section in the Hydro-Geomorphology library. The section LWRecruitment contains 10 simple modules that allow the user to start from very simple information related to geomorphology, flooding areas and vegetation cover and obtain a map of the most probable critical sections on the streams. The tools cover the two main aspects related to the iteration of large wood with the rivers: the recruitment mechanisms and the propagation downstream. While the propagation tool is very simple and does not consider the hydrodynamic of the problem, the recruitment algorithms are more specific and consider the influence of hillslopes stability and the flooding extension. The modules are available for download at www.jgrasstools.org. A simple and easy to use graphical interface to run the models is available at https://github.com/moovida/STAGE/releases.
Optimal tree increment models for the Northeastern United Statesq
Don C. Bragg
2003-01-01
used the potential relative increment (PRI) methodology to develop optimal tree diameter growth models for the Northeastern United States. Thirty species from the Eastwide Forest Inventory Database yielded 69,676 individuals, which were then reduced to fast-growing subsets for PRI analysis. For instance, only 14 individuals from the greater than 6,300-tree eastern...
Optimal Tree Increment Models for the Northeastern United States
Don C. Bragg
2005-01-01
I used the potential relative increment (PRI) methodology to develop optimal tree diameter growth models for the Northeastern United States. Thirty species from the Eastwide Forest Inventory Database yielded 69,676 individuals, which were then reduced to fast-growing subsets for PRI analysis. For instance, only 14 individuals from the greater than 6,300-tree eastern...
Rough Set Based Splitting Criterion for Binary Decision Tree Classifiers
2006-09-26
Alata O. Fernandez-Maloigne C., and Ferrie J.C. (2001). Unsupervised Algorithm for the Segmentation of Three-Dimensional Magnetic Resonance Brain ...instinctual and learned responses in the brain , causing it to make decisions based on patterns in the stimuli. Using this deceptively simple process...2001. [2] Bohn C. (1997). An Incremental Unsupervised Learning Scheme for Function Approximation. In: Proceedings of the 1997 IEEE International
Volume II: Compendium Abstracts
2008-08-01
project developed a fast and simple method of characterization for ceramic , polymer composite, and ceramic -composite materials systems. Current methods...incrementally at 1-inch intervals and displayed as a false-color image map of the sample. This experimental setup can be easily scaled from single ceramic ...low-power, high-force characteristics of lead zirconate titanate ( PZT ) and an offset-beam design to achieve rotational or near-linear translational
I can do that: the impact of implicit theories on leadership role model effectiveness.
Hoyt, Crystal L; Burnette, Jeni L; Innella, Audrey N
2012-02-01
This research investigates the role of implicit theories in influencing the effectiveness of successful role models in the leadership domain. Across two studies, the authors test the prediction that incremental theorists ("leaders are made") compared to entity theorists ("leaders are born") will respond more positively to being presented with a role model before undertaking a leadership task. In Study 1, measuring people's naturally occurring implicit theories of leadership, the authors showed that after being primed with a role model, incremental theorists reported greater leadership confidence and less anxious-depressed affect than entity theorists following the leadership task. In Study 2, the authors demonstrated the causal role of implicit theories by manipulating participants' theory of leadership ability. They replicated the findings from Study 1 and demonstrated that identification with the role model mediated the relationship between implicit theories and both confidence and affect. In addition, incremental theorists outperformed entity theorists on the leadership task.
Simple reaction time to the onset of time-varying sounds.
Schlittenlacher, Josef; Ellermeier, Wolfgang
2015-10-01
Although auditory simple reaction time (RT) is usually defined as the time elapsing between the onset of a stimulus and a recorded reaction, a sound cannot be specified by a single point in time. Therefore, the present work investigates how the period of time immediately after onset affects RT. By varying the stimulus duration between 10 and 500 msec, this critical duration was determined to fall between 32 and 40 milliseconds for a 1-kHz pure tone at 70 dB SPL. In a second experiment, the role of the buildup was further investigated by varying the rise time and its shape. The increment in RT for extending the rise time by a factor of ten was about 7 to 8 msec. There was no statistically significant difference in RT between a Gaussian and linear rise shape. A third experiment varied the modulation frequency and point of onset of amplitude-modulated tones, producing onsets at different initial levels with differently rapid increase or decrease immediately afterwards. The results of all three experiments results were explained very well by a straightforward extension of the parallel grains model (Miller and Ulrich Cogn. Psychol. 46, 101-151, 2003), a probabilistic race model employing many parallel channels. The extension of the model to time-varying sounds made the activation of such a grain depend on intensity as a function of time rather than a constant level. A second approach by mechanisms known from loudness produced less accurate predictions.
40 CFR 60.5105 - What if I do not meet an increment of progress?
Code of Federal Regulations, 2011 CFR
2011-07-01
... PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emission Guidelines and Compliance Times for Existing Sewage Sludge Incineration Units Model Rule-Increments of Progress § 60.5105 What if...
40 CFR 60.1605 - What if I do not meet an increment of progress?
Code of Federal Regulations, 2010 CFR
2010-07-01
... Times for Small Municipal Waste Combustion Units Constructed on or Before August 30, 1999 Model Rule... increment of progress, you must submit a notification to the Administrator postmarked within 10 business...
Zapata-Vázquez, Rita Esther; Álvarez-Cervera, Fernando José; Alonzo-Vázquez, Felipe Manuel; García-Lira, José Ramón; Granados-García, Víctor; Pérez-Herrera, Norma Elena; Medina-Moreno, Manuel
2017-12-01
To conduct an economic evaluation of intracranial pressure (ICP) monitoring on the basis of current evidence from pediatric patients with severe traumatic brain injury, through a statistical model. The statistical model is a decision tree, whose branches take into account the severity of the lesion, the hospitalization costs, and the quality-adjusted life-year for the first 6 months post-trauma. The inputs consist of probability distributions calculated from a sample of 33 surviving children with severe traumatic brain injury, divided into two groups: with ICP monitoring (monitoring group) and without ICP monitoring (control group). The uncertainty of the parameters from the sample was quantified through a probabilistic sensitivity analysis using the Monte-Carlo simulation method. The model overcomes the drawbacks of small sample sizes, unequal groups, and the ethical difficulty in randomly assigning patients to a control group (without monitoring). The incremental cost in the monitoring group was Mex$3,934 (Mexican pesos), with an increase in quality-adjusted life-year of 0.05. The incremental cost-effectiveness ratio was Mex$81,062. The cost-effectiveness acceptability curve had a maximum at 54% of the cost effective iterations. The incremental net health benefit for a willingness to pay equal to 1 time the per capita gross domestic product for Mexico was 0.03, and the incremental net monetary benefit was Mex$5,358. The results of the model suggest that ICP monitoring is cost effective because there was a monetary gain in terms of the incremental net monetary benefit. Copyright © 2017. Published by Elsevier Inc.
Kovic, Bruno; Guyatt, Gordon; Brundage, Michael; Thabane, Lehana; Bhatnagar, Neera; Xie, Feng
2016-01-01
Introduction There is an increasing number of new oncology drugs being studied, approved and put into clinical practice based on improvement in progression-free survival, when no overall survival benefits exist. In oncology, the association between progression-free survival and health-related quality of life is currently unknown, despite its importance for patients with cancer, and the unverified assumption that longer progression-free survival indicates improved health-related quality of life. Thus far, only 1 study has investigated this association, providing insufficient evidence and inconclusive results. The objective of this study protocol is to provide increased transparency in supporting a systematic summary of the evidence bearing on this association in oncology. Methods and analysis Using the OVID platform in MEDLINE, Embase and Cochrane databases, we will conduct a systematic review of randomised controlled human trials addressing oncology issues published starting in 2000. A team of reviewers will, in pairs, independently screen and abstract data using standardised, pilot-tested forms. We will employ numerical integration to calculate mean incremental area under the curve between treatment groups in studies for health-related quality of life, along with total related error estimates, and a 95% CI around incremental area. To describe the progression-free survival to health-related quality of life association, we will construct a scatterplot for incremental health-related quality of life versus incremental progression-free survival. To estimate the association, we will use a weighted simple regression approach, comparing mean incremental health-related quality of life with either median incremental progression-free survival time or the progression-free survival HR, in the absence of overall survival benefit. Discussion Identifying direction and magnitude of association between progression-free survival and health-related quality of life is critically important in interpreting results of oncology trials. Systematic evidence produced from our study will contribute to improvement of patient care and practice of evidence-based medicine in oncology. PMID:27591026
Methods for determining deformation history for chocolate tablet boudinage with fibrous crystals
NASA Astrophysics Data System (ADS)
Casey, M.; Dietrich, D.; Ramsay, J. G.
1983-02-01
Chocolate tablet boudinage with fibrous crystal growths between the boudinaged plates from two localities were studied. In one, from Leytron, Valais, Switzerland, the deformation history was found to be a succession of plane strain increments with the shortening direction perpendicular to the boudinaged sheet and the extension direction showing a progressive change in orientation within the sheet. The incremental and finite strains were evaluated. The other specimen, from Parys Mountain, Anglesey Great Britain, was found to have a more complex history with diachronous break up of the competent layer and flattening strain increments. It was found that under these circumstances the direct graphical methods of determining finite and incremental strains gave inconsistent results. A numerical model was developed which allowed the simulation of chocolate tablet structure with a complex deformation history. The model was applied to the Anglesey specimen and three possible strain histories for this structure were tried.
Atmospheric response to Saharan dust deduced from ECMWF reanalysis (ERA) temperature increments
NASA Astrophysics Data System (ADS)
Kishcha, P.; Alpert, P.; Barkan, J.; Kirchner, I.; Machenhauer, B.
2003-09-01
This study focuses on the atmospheric temperature response to dust deduced from a new source of data the European Reanalysis (ERA) increments. These increments are the systematic errors of global climate models, generated in the reanalysis procedure. The model errors result not only from the lack of desert dust but also from a complex combination of many kinds of model errors. Over the Sahara desert the lack of dust radiative effect is believed to be a predominant model defect which should significantly affect the increments. This dust effect was examined by considering correlation between the increments and remotely sensed dust. Comparisons were made between April temporal variations of the ERA analysis increments and the variations of the Total Ozone Mapping Spectrometer aerosol index (AI) between 1979 and 1993. The distinctive structure was identified in the distribution of correlation composed of three nested areas with high positive correlation (>0.5), low correlation and high negative correlation (<-0.5). The innermost positive correlation area (PCA) is a large area near the center of the Sahara desert. For some local maxima inside this area the correlation even exceeds 0.8. The outermost negative correlation area (NCA) is not uniform. It consists of some areas over the eastern and western parts of North Africa with a relatively small amount of dust. Inside those areas both positive and negative high correlations exist at pressure levels ranging from 850 to 700 hPa, with the peak values near 775 hPa. Dust-forced heating (cooling) inside the PCA (NCA) is accompanied by changes in the static instability of the atmosphere above the dust layer. The reanalysis data of the European Center for Medium Range Weather Forecast (ECMWF) suggest that the PCA (NCA) corresponds mainly to anticyclonic (cyclonic) flow, negative (positive) vorticity and downward (upward) airflow. These findings are associated with the interaction between dust-forced heating/cooling and atmospheric circulation. This paper contributes to a better understanding of dust radiative processes missed in the model.
Collins, Anne G. E.; Frank, Michael J.
2012-01-01
Instrumental learning involves corticostriatal circuitry and the dopaminergic system. This system is typically modeled in the reinforcement learning (RL) framework by incrementally accumulating reward values of states and actions. However, human learning also implicates prefrontal cortical mechanisms involved in higher level cognitive functions. The interaction of these systems remains poorly understood, and models of human behavior often ignore working memory (WM) and therefore incorrectly assign behavioral variance to the RL system. Here we designed a task that highlights the profound entanglement of these two processes, even in simple learning problems. By systematically varying the size of the learning problem and delay between stimulus repetitions, we separately extracted WM-specific effects of load and delay on learning. We propose a new computational model that accounts for the dynamic integration of RL and WM processes observed in subjects' behavior. Incorporating capacity-limited WM into the model allowed us to capture behavioral variance that could not be captured in a pure RL framework even if we (implausibly) allowed separate RL systems for each set size. The WM component also allowed for a more reasonable estimation of a single RL process. Finally, we report effects of two genetic polymorphisms having relative specificity for prefrontal and basal ganglia functions. Whereas the COMT gene coding for catechol-O-methyl transferase selectively influenced model estimates of WM capacity, the GPR6 gene coding for G-protein-coupled receptor 6 influenced the RL learning rate. Thus, this study allowed us to specify distinct influences of the high-level and low-level cognitive functions on instrumental learning, beyond the possibilities offered by simple RL models. PMID:22487033
Kim, Bumhwi; Ban, Sang-Woo; Lee, Minho
2013-10-01
Humans can efficiently perceive arbitrary visual objects based on an incremental learning mechanism with selective attention. This paper proposes a new task specific top-down attention model to locate a target object based on its form and color representation along with a bottom-up saliency based on relativity of primitive visual features and some memory modules. In the proposed model top-down bias signals corresponding to the target form and color features are generated, which draw the preferential attention to the desired object by the proposed selective attention model in concomitance with the bottom-up saliency process. The object form and color representation and memory modules have an incremental learning mechanism together with a proper object feature representation scheme. The proposed model includes a Growing Fuzzy Topology Adaptive Resonance Theory (GFTART) network which plays two important roles in object color and form biased attention; one is to incrementally learn and memorize color and form features of various objects, and the other is to generate a top-down bias signal to localize a target object by focusing on the candidate local areas. Moreover, the GFTART network can be utilized for knowledge inference which enables the perception of new unknown objects on the basis of the object form and color features stored in the memory during training. Experimental results show that the proposed model is successful in focusing on the specified target objects, in addition to the incremental representation and memorization of various objects in natural scenes. In addition, the proposed model properly infers new unknown objects based on the form and color features of previously trained objects. Copyright © 2013 Elsevier Ltd. All rights reserved.
Rate determination from vector observations
NASA Technical Reports Server (NTRS)
Weiss, Jerold L.
1993-01-01
Vector observations are a common class of attitude data provided by a wide variety of attitude sensors. Attitude determination from vector observations is a well-understood process and numerous algorithms such as the TRIAD algorithm exist. These algorithms require measurement of the line of site (LOS) vector to reference objects and knowledge of the LOS directions in some predetermined reference frame. Once attitude is determined, it is a simple matter to synthesize vehicle rate using some form of lead-lag filter, and then, use it for vehicle stabilization. Many situations arise, however, in which rate knowledge is required but knowledge of the nominal LOS directions are not available. This paper presents two methods for determining spacecraft angular rates from vector observations without a priori knowledge of the vector directions. The first approach uses an extended Kalman filter with a spacecraft dynamic model and a kinematic model representing the motion of the observed LOS vectors. The second approach uses a 'differential' TRIAD algorithm to compute the incremental direction cosine matrix, from which vehicle rate is then derived.
NASA Astrophysics Data System (ADS)
Milani, Gabriele; Olivito, Renato S.; Tralli, Antonio
2014-10-01
The buckling behavior of slender unreinforced masonry (URM) walls subjected to axial compression and out-of-plane lateral loads is investigated through a combined experimental and numerical homogenizedapproach. After a preliminary analysis performed on a unit cell meshed by means of elastic FEs and non-linear interfaces, macroscopic moment-curvature diagrams so obtained are implemented at a structural level, discretizing masonry by means of rigid triangular elements and non-linear interfaces. The non-linear incremental response of the structure is accounted for a specific quadratic programming routine. In parallel, a wide experimental campaign is conducted on walls in two way bending, with the double aim of both validating the numerical model and investigating the behavior of walls that may not be reduced to simple cantilevers or simply supported beams. Panels investigated are dry-joint in scale square walls simply supported at the base and on a vertical edge, exhibiting the classical Rondelet's mechanism. The results obtained are compared with those provided by the numerical model.
NASA Technical Reports Server (NTRS)
Tolhurst, William H., Jr.; Hickey, David H.; Aoyagi, Kiyoshi
1961-01-01
Wind-tunnel tests have been conducted on a large-scale model of a swept-wing jet transport type airplane to study the factors affecting exhaust gas ingestion into the engine inlets when thrust reversal is used during ground roll. The model was equipped with four small jet engines mounted in nacelles beneath the wing. The tests included studies of both cascade and target type reversers. The data obtained included the free-stream velocity at the occurrence of exhaust gas ingestion in the outboard engine and the increment of drag due to thrust reversal for various modifications of thrust reverser configuration. Motion picture films of smoke flow studies were also obtained to supplement the data. The results show that the free-stream velocity at which ingestion occurred in the outboard engines could be reduced considerably, by simple modifications to the reversers, without reducing the effective drag due to reversed thrust.
Microeconomics of process control in semiconductor manufacturing
NASA Astrophysics Data System (ADS)
Monahan, Kevin M.
2003-06-01
Process window control enables accelerated design-rule shrinks for both logic and memory manufacturers, but simple microeconomic models that directly link the effects of process window control to maximum profitability are rare. In this work, we derive these links using a simplified model for the maximum rate of profit generated by the semiconductor manufacturing process. We show that the ability of process window control to achieve these economic objectives may be limited by variability in the larger manufacturing context, including measurement delays and process variation at the lot, wafer, x-wafer, x-field, and x-chip levels. We conclude that x-wafer and x-field CD control strategies will be critical enablers of density, performance and optimum profitability at the 90 and 65nm technology nodes. These analyses correlate well with actual factory data and often identify millions of dollars in potential incremental revenue and cost savings. As an example, we show that a scatterometry-based CD Process Window Monitor is an economically justified, enabling technology for the 65nm node.
ERIC Educational Resources Information Center
Burns, Matthew K.
2005-01-01
Previous research suggested that Incremental Rehearsal (IR; Tucker, 1989) led to better retention than other drill practices models. However, little research exists in the literature regarding drill models for mathematics and no studies were found that used IR to practice multiplication facts. Therefore, the current study used IR as an…
Variable diffusion in stock market fluctuations
NASA Astrophysics Data System (ADS)
Hua, Jia-Chen; Chen, Lijian; Falcon, Liberty; McCauley, Joseph L.; Gunaratne, Gemunu H.
2015-02-01
We analyze intraday fluctuations in several stock indices to investigate the underlying stochastic processes using techniques appropriate for processes with nonstationary increments. The five most actively traded stocks each contains two time intervals during the day where the variance of increments can be fit by power law scaling in time. The fluctuations in return within these intervals follow asymptotic bi-exponential distributions. The autocorrelation function for increments vanishes rapidly, but decays slowly for absolute and squared increments. Based on these results, we propose an intraday stochastic model with linear variable diffusion coefficient as a lowest order approximation to the real dynamics of financial markets, and to test the effects of time averaging techniques typically used for financial time series analysis. We find that our model replicates major stylized facts associated with empirical financial time series. We also find that ensemble averaging techniques can be used to identify the underlying dynamics correctly, whereas time averages fail in this task. Our work indicates that ensemble average approaches will yield new insight into the study of financial markets' dynamics. Our proposed model also provides new insight into the modeling of financial markets dynamics in microscopic time scales.
Distributions of Autocorrelated First-Order Kinetic Outcomes: Illness Severity
Englehardt, James D.
2015-01-01
Many complex systems produce outcomes having recurring, power law-like distributions over wide ranges. However, the form necessarily breaks down at extremes, whereas the Weibull distribution has been demonstrated over the full observed range. Here the Weibull distribution is derived as the asymptotic distribution of generalized first-order kinetic processes, with convergence driven by autocorrelation, and entropy maximization subject to finite positive mean, of the incremental compounding rates. Process increments represent multiplicative causes. In particular, illness severities are modeled as such, occurring in proportion to products of, e.g., chronic toxicant fractions passed by organs along a pathway, or rates of interacting oncogenic mutations. The Weibull form is also argued theoretically and by simulation to be robust to the onset of saturation kinetics. The Weibull exponential parameter is shown to indicate the number and widths of the first-order compounding increments, the extent of rate autocorrelation, and the degree to which process increments are distributed exponential. In contrast with the Gaussian result in linear independent systems, the form is driven not by independence and multiplicity of process increments, but by increment autocorrelation and entropy. In some physical systems the form may be attracting, due to multiplicative evolution of outcome magnitudes towards extreme values potentially much larger and smaller than control mechanisms can contain. The Weibull distribution is demonstrated in preference to the lognormal and Pareto I for illness severities versus (a) toxicokinetic models, (b) biologically-based network models, (c) scholastic and psychological test score data for children with prenatal mercury exposure, and (d) time-to-tumor data of the ED01 study. PMID:26061263
Equivalent air depth: fact or fiction.
Berghage, T E; McCraken, T M
1979-12-01
In mixed-gas diving theory, the equivalent air depth (EAD) concept suggests that oxygen does not contribute to the total tissue gas tension and can therefore be disregarded in calculations of the decompression process. The validity of this assumption has been experimentally tested by exposing 365 rats to various partial pressures of oxygen for various lengths of time. If the EAD assumption is correct, under a constant exposure pressure each incremental change in the oxygen partial pressure would produce a corresponding incremental change in pressure reduction tolerance. Results of this study suggest that the EAD concept does not adequately describe the decompression advantages obtained from breathing elevated oxygen partial pressures. The authors suggest that the effects of breathing oxygen vary in a nonlinear fashion across the range from anoxia to oxygen toxicity, and that a simple inert gas replacement concept is no longer tenable.
NASA Technical Reports Server (NTRS)
Rodriguez, Pedro I.
1986-01-01
A computer implementation to Prony's curve fitting by exponential functions is presented. The method, although more than one hundred years old, has not been utilized to its fullest capabilities due to the restriction that the time range must be given in equal increments in order to obtain the best curve fit for a given set of data. The procedure used in this paper utilizes the 3-dimensional capabilities of the Interactive Graphics Design System (I.G.D.S.) in order to obtain the equal time increments. The resultant information is then input into a computer program that solves directly for the exponential constants yielding the best curve fit. Once the exponential constants are known, a simple least squares solution can be applied to obtain the final form of the equation.
40 CFR 60.1600 - When must I submit the notifications of achievement of increments of progress?
Code of Federal Regulations, 2010 CFR
2010-07-01
... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES... Before August 30, 1999 Model Rule-Increments of Progress § 60.1600 When must I submit the notifications...
Code of Federal Regulations, 2014 CFR
2014-07-01
... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY... Units Model Rule-Air Curtain Incinerators § 60.2815 What are my requirements for meeting increments of...
40 CFR 60.2585 - What must I include in the notifications of achievement of increments of progress?
Code of Federal Regulations, 2013 CFR
2013-07-01
... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY... Units Model Rule-Increments of Progress § 60.2585 What must I include in the notifications of...
40 CFR 60.2585 - What must I include in the notifications of achievement of increments of progress?
Code of Federal Regulations, 2014 CFR
2014-07-01
... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY... Units Model Rule-Increments of Progress § 60.2585 What must I include in the notifications of...
40 CFR 60.2585 - What must I include in the notifications of achievement of increments of progress?
Code of Federal Regulations, 2012 CFR
2012-07-01
... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY... Units Model Rule-Increments of Progress § 60.2585 What must I include in the notifications of...
Ghorbani, Nima; Watson, P J
2005-06-01
This study examined the incremental validity of Hardiness scales in a sample of Iranian managers. Along with measures of the Five Factor Model and of Organizational and Psychological Adjustment, Hardiness scales were administered to 159 male managers (M age = 39.9, SD = 7.5) who had worked in their organizations for 7.9 yr. (SD=5.4). Hardiness predicted greater Job Satisfaction, higher Organization-based Self-esteem, and perceptions of the work environment as being less stressful and constraining. Hardiness also correlated positively with Assertiveness, Emotional Stability, Extraversion, Openness to Experience, Agreeableness, and Conscientiousness and negatively with Depression, Anxiety, Perceived Stress, Chance External Control, and a Powerful Others External Control. Evidence of incremental validity was obtained when the Hardiness scales supplemented the Five Factor Model in predicting organizational and psychological adjustment. These data documented the incremental validity of the Hardiness scales in a non-Western sample and thus confirmed once again that Hardiness has a relevance that extends beyond the culture in which it was developed.
Effect of property gradients on enamel fracture in human molar teeth.
Barani, Amir; Bush, Mark B; Lawn, Brian R
2012-11-01
A model for the fracture of tooth enamel with graded elastic modulus and toughness is constructed using an extended finite element modeling (XFEM) package. The property gradients are taken from literature data on human molars, with maximum in modulus at the outer enamel surface and in toughness at the inner surface. The tooth is modeled as a brittle shell (enamel) and a compliant interior (dentin), with occlusal loading from a hard, flat contact at the cusp. Longitudinal radial (R) and margin (M) cracks are allowed to extend piecewise along the enamel walls under the action of an incrementally increasing applied load. A simple stratagem is deployed in which fictitious temperature profiles generate the requisite property gradients. The resulting XFEM simulations demonstrate that the crack fronts become more segmented as the property gradients become more pronounced, with enhanced propagation at the outer surface and inhibited propagation at the inner. Whereas the growth history of the cracks is profoundly influenced by the gradients, the ultimate critical loads required to attain full fractures are relatively unaffected. Some implications concerning dentistry are considered. Copyright © 2012 Elsevier Ltd. All rights reserved.
Simple and Flexible Self-Reproducing Structures in Asynchronous Cellular Automata and Their Dynamics
NASA Astrophysics Data System (ADS)
Huang, Xin; Lee, Jia; Yang, Rui-Long; Zhu, Qing-Sheng
2013-03-01
Self-reproduction on asynchronous cellular automata (ACAs) has attracted wide attention due to the evident artifacts induced by synchronous updating. Asynchronous updating, which allows cells to undergo transitions independently at random times, might be more compatible with the natural processes occurring at micro-scale, but the dark side of the coin is the increment in the complexity of an ACA in order to accomplish stable self-reproduction. This paper proposes a novel model of self-timed cellular automata (STCAs), a special type of ACAs, where unsheathed loops are able to duplicate themselves reliably in parallel. The removal of sheath cannot only allow various loops with more flexible and compact structures to replicate themselves, but also reduce the number of cell states of the STCA as compared to the previous model adopting sheathed loops [Y. Takada, T. Isokawa, F. Peper and N. Matsui, Physica D227, 26 (2007)]. The lack of sheath, on the other hand, often tends to cause much more complicated interactions among loops, when all of them struggle independently to stretch out their constructing arms at the same time. In particular, such intense collisions may even cause the emergence of a mess of twisted constructing arms in the cellular space. By using a simple and natural method, our self-reproducing loops (SRLs) are able to retract their arms successively, thereby disentangling from the mess successfully.
Evaluation of on-board hydrogen storage methods for hypersonic vehicles
NASA Technical Reports Server (NTRS)
Akyurtlu, Ates; Akyurtlu, J. F.; Adeyiga, A. A.; Perdue, Samara; Northam, G. B.
1989-01-01
Hydrogen is the foremost candidate as a fuel for use in high speed transport. Since any aircraft moving at hypersonic speeds must have a very slender body, means of decreasing the storage volume requirements below that for liquid hydrogen are needed. The total performance of the hypersonic plane needs to be considered for the evaluation of candidate fuel and storage systems. To accomplish this, a simple model for the performance of a hypersonic plane is presented. To allow for the use of different engines and fuels during different phases of flight, the total trajectory is divided into three phases: subsonic-supersonic, hypersonic and rocket propulsion phase. The fuel fraction for the first phase is found be a simple energy balance using an average thrust to drag ratio for this phase. The hypersonic flight phase is investigated in more detail by taking small altitude increments. This approach allowed the use of flight profiles other than the constant dynamic pressure flight. The effect of fuel volume on drag, structural mass and tankage mass was introduced through simplified equations involving the characteristic dimension of the plane. The propellant requirement for the last phase is found by employing the basic rocket equations. The candidate fuel systems such as the cryogenic fuel combinations and solid and liquid endothermic hydrogen generators are first screened thermodynamically with respect to their energy densities and cooling capacities and then evaluated using the above model.
Multiplicative Process in Turbulent Velocity Statistics: A Simplified Analysis
NASA Astrophysics Data System (ADS)
Chillà, F.; Peinke, J.; Castaing, B.
1996-04-01
A lot of models in turbulence links the energy cascade process and intermittency, the characteristic of which being the shape evolution of the probability density functions (pdf) for longitudinal velocity increments. Using recent models and experimental results, we show that the flatness factor of these pdf gives a simple and direct estimate for what is called the deepness of the cascade. We analyse in this way the published data of a Direct Numerical Simulation and show that the deepness of the cascade presents the same Reynolds number dependence as in laboratory experiments. Plusieurs modèles de turbulence relient la cascade d'énergie et l'intermittence, caractérisée par l'évolution des densités de probabilité (pdf) des incréments longitudinaux de vitesse. Nous appuyant aussi bien sur des modèles récents que sur des résultats expérimentaux, nous montrons que la Curtosis de ces pdf permet une estimation simple et directe de la profondeur de la cascade. Cela nous permet de réanalyser les résultats publiés d'une simulation numérique et de montrer que la profondeur de la cascade y évolue de la même façon que pour les expériences de laboratoire en fonction du nombre de Reynolds.
DOE Office of Scientific and Technical Information (OSTI.GOV)
High performance water heaters are typically more time consuming and costly to install in retrofit applications, making high performance water heaters difficult to justify economically. However, recent advancements in high performance water heaters have targeted the retrofit market, simplifying installations and reducing costs. Four high efficiency natural gas water heaters designed specifically for retrofit applications were installed in single-family homes along with detailed monitoring systems to characterize their savings potential, their installed efficiencies, and their ability to meet household demands. The water heaters tested for this project were designed to improve the cost-effectiveness and increase market penetration of high efficiencymore » water heaters in the residential retrofit market. The retrofit high efficiency water heaters achieved their goal of reducing costs, maintaining savings potential and installed efficiency of other high efficiency water heaters, and meeting the necessary capacity in order to improve cost-effectiveness. However, the improvements were not sufficient to achieve simple paybacks of less than ten years for the incremental cost compared to a minimum efficiency heater. Significant changes would be necessary to reduce the simple payback to six years or less. Annual energy savings in the range of $200 would also reduce paybacks to less than six years. These energy savings would require either significantly higher fuel costs (greater than $1.50 per therm) or very high usage (around 120 gallons per day). For current incremental costs, the water heater efficiency would need to be similar to that of a heat pump water heater to deliver a six year payback.« less
Simple Retrofit High-Efficiency Natural Gas Water Heater Field Test
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schoenbauer, Ben
High-performance water heaters are typically more time consuming and costly to install in retrofit applications, making high performance water heaters difficult to justify economically. However, recent advancements in high performance water heaters have targeted the retrofit market, simplifying installations and reducing costs. Four high efficiency natural gas water heaters designed specifically for retrofit applications were installed in single-family homes along with detailed monitoring systems to characterize their savings potential, their installed efficiencies, and their ability to meet household demands. The water heaters tested for this project were designed to improve the cost-effectiveness and increase market penetration of high efficiency watermore » heaters in the residential retrofit market. The retrofit high efficiency water heaters achieved their goal of reducing costs, maintaining savings potential and installed efficiency of other high efficiency water heaters, and meeting the necessary capacity in order to improve cost-effectiveness. However, the improvements were not sufficient to achieve simple paybacks of less than ten years for the incremental cost compared to a minimum efficiency heater. Significant changes would be necessary to reduce the simple payback to six years or less. Annual energy savings in the range of $200 would also reduce paybacks to less than six years. These energy savings would require either significantly higher fuel costs (greater than $1.50 per therm) or very high usage (around 120 gallons per day). For current incremental costs, the water heater efficiency would need to be similar to that of a heat pump water heater to deliver a six year payback.« less
Simple Retrofit High-Efficiency Natural Gas Water Heater Field Test
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schoenbauer, Ben
High performance water heaters are typically more time consuming and costly to install in retrofit applications, making high performance water heaters difficult to justify economically. However, recent advancements in high performance water heaters have targeted the retrofit market, simplifying installations and reducing costs. Four high efficiency natural gas water heaters designed specifically for retrofit applications were installed in single-family homes along with detailed monitoring systems to characterize their savings potential, their installed efficiencies, and their ability to meet household demands. The water heaters tested for this project were designed to improve the cost-effectiveness and increase market penetration of high efficiencymore » water heaters in the residential retrofit market. The retrofit high efficiency water heaters achieved their goal of reducing costs, maintaining savings potential and installed efficiency of other high efficiency water heaters, and meeting the necessary capacity in order to improve cost-effectiveness. However, the improvements were not sufficient to achieve simple paybacks of less than ten years for the incremental cost compared to a minimum efficiency heater. Significant changes would be necessary to reduce the simple payback to six years or less. Annual energy savings in the range of $200 would also reduce paybacks to less than six years. These energy savings would require either significantly higher fuel costs (greater than $1.50 per therm) or very high usage (around 120 gallons per day). For current incremental costs, the water heater efficiency would need to be similar to that of a heat pump water heater to deliver a six year payback.« less
European Science Notes. Volume 39, Number 8.
1985-08-01
electrophysiological 370 -- , * * ESN 39-8 (1985) slower increment of the speed to the end mus which occurs immediately after the phase . The initial jump is... curve and that removal appears to simple constructions, ease of modula- correspond more closely with the melt tion, and efficient discharge excita- phase ...separate experimental condi- (uncued condition). On some trials the tions. Figure 2 shows the curves for light actually appeared in a location the four
"Chair Stand Test" as Simple Tool for Sarcopenia Screening in Elderly Women.
Pinheiro, P A; Carneiro, J A O; Coqueiro, R S; Pereira, R; Fernandes, M H
2016-01-01
To investigate the association between sarcopenia and "chair stand test" performance, and evaluate this test as a screening tool for sarcopenia in community-dwelling elderly women. Cross-sectional Survey. 173 female individuals, aged ≥ 60 years and living in the urban area of the municipality of Lafaiete Coutinho, Bahia's inland, Brazil. The association between sarcopenia (defined by muscle mass, strength and/or performance loss) and performance in the "chair stand test" was tested by binary logistic regression technique. The ROC curve parameters were used to evaluate the diagnostic power of the test in sarcopenia screening. The significance level was set at 5 %. The model showed that the time spent for the "chair stand test" was positively associated (OR = 1.08; 95% CI = 1.01 - 1.16, p = 0.024) to sarcopenia, indicating that, for each 1 second increment in the test performance, the sarcopenia's probability increased by 8% in elderly women. The cut-off point that showed the best balance between sensitivity and specificity was 13 seconds. The performance of "chair stand test" showed predictive ability for sarcopenia, being an effective and simple screening tool for sarcopenia in elderly women. This test could be used for screening sarcopenic elderly women, allowing early interventions.
[Volatile organic compounds (VOCs) emitted from furniture and electrical appliances].
Tanaka-Kagawa, Toshiko; Jinno, Hideto; Furukawa, Yoko; Nishimura, Tetsuji
2010-01-01
Organic chemicals are widely used as ingredients in household products. Therefore, furniture and other household products as well as building products may influence the indoor air quality. This study was performed to estimate quantitatively influence of household products on indoor air quality. Volatile organic compound (VOC) emissions were investigated for 10 products including furniture (chest, desk, dining table, sofa, cupboard) and electrical appliances (refrigerator, electric heater, desktop personal computer, liquid crystal display television and audio) by the large chamber test method (JIS A 1912) under the standard conditions of 28 degrees C, 50% relative humidity and 0.5 times/h ventilation. Emission rate of total VOC (TVOC) from the sofa showed the highest; over 7900 microg toluene-equivalent/unit/h. Relatively high TVOC emissions were observed also from desk and chest. Based on the emission rates, the impacts on the indoor TVOC were estimated by the simple model with a volume of 17.4 m3 and ventilation frequency of 0.5 times/h. The estimated TVOC increment for the sofa was 911 microg/m3, accounting for almost 230% of the provisional target value, 400 microg/m3. The values of estimated increment of toluene emitted from cupboard and styrene emitted from refrigerator were 10% and 16% of guideline values, respectively. These results revealed that VOC emissions from household products may influence significantly indoor air quality.
NASA Astrophysics Data System (ADS)
Kersting, E.; von Seggern, H.
2017-08-01
A new production route for europium doped cesium bromide (CsBr:Eu2+) imaging plates has been developed, synthesizing CsBr:Eu2+ powder from a precipitation reaction of aqueous CsBr solution with ethanol. This new route allows the control of features like homogeneous grain size and grain shape of the obtained powder. After drying and subsequent compacting the powder, disk-like samples were fabricated, and their resulting photostimulated luminescence (PSL) properties like yield and spatial resolution were determined. It will be shown that hydration of such disks causes the CsBr:Eu2+ powder to recrystallize starting from the humidity exposed surfaces to the sample interior up to a completely polycrystalline sample resulting in a decreasing PSL yield and an increasing resolution. Subsequent annealing leads to grain refinement combined with a large PSL yield increment and a minor effect on the spatial resolution. By first annealing the "as made" disk, one observes a strong increment of the PSL yield and almost no effect on the spatial resolution. During subsequent hydration, the recrystallization is hindered by minor structural changes of the grains. The related PSL yield drops slightly with increasing hydration time, and the spatial resolution drops considerably. The obtained PSL properties with respect to structure will be discussed with a simple model.
2D discontinuous piecewise linear map: Emergence of fashion cycles.
Gardini, L; Sushko, I; Matsuyama, K
2018-05-01
We consider a discrete-time version of the continuous-time fashion cycle model introduced in Matsuyama, 1992. Its dynamics are defined by a 2D discontinuous piecewise linear map depending on three parameters. In the parameter space of the map periodicity, regions associated with attracting cycles of different periods are organized in the period adding and period incrementing bifurcation structures. The boundaries of all the periodicity regions related to border collision bifurcations are obtained analytically in explicit form. We show the existence of several partially overlapping period incrementing structures, that is, a novelty for the considered class of maps. Moreover, we show that if the time-delay in the discrete time formulation of the model shrinks to zero, the number of period incrementing structures tends to infinity and the dynamics of the discrete time fashion cycle model converges to those of continuous-time fashion cycle model.
Fabian Uzoh; William W. Oliver
2006-01-01
A height increment model is developed and evaluated for individual trees of ponderosa pine throughout the species range in western United States. The data set used in this study came from long-term permanent research plots in even-aged, pure stands both planted and of natural origin. The data base consists of six levels-of-growing stock studies supplemented by initial...
Modeling CANDU-6 liquid zone controllers for effects of thorium-based fuels
DOE Office of Scientific and Technical Information (OSTI.GOV)
St-Aubin, E.; Marleau, G.
2012-07-01
We use the DRAGON code to model the CANDU-6 liquid zone controllers and evaluate the effects of thorium-based fuels on their incremental cross sections and reactivity worth. We optimize both the numerical quadrature and spatial discretization for 2D cell models in order to provide accurate fuel properties for 3D liquid zone controller supercell models. We propose a low computer cost parameterized pseudo-exact 3D cluster geometries modeling approach that avoids tracking issues on small external surfaces. This methodology provides consistent incremental cross sections and reactivity worths when the thickness of the buffer region is reduced. When compared with an approximate annularmore » geometry representation of the fuel and coolant region, we observe that the cluster description of fuel bundles in the supercell models does not increase considerably the precision of the results while increasing substantially the CPU time. In addition, this comparison shows that it is imperative to finely describe the liquid zone controller geometry since it has a strong impact of the incremental cross sections. This paper also shows that liquid zone controller reactivity worth is greatly decreased in presence of thorium-based fuels compared to the reference natural uranium fuel, since the fission and the fast to thermal scattering incremental cross sections are higher for the new fuels. (authors)« less
Wakim, Rita; Ritchey, Matthew; Hockenberry, Jason; Casper, Michele
2016-12-29
Using 2012 data on fee-for-service Medicare claims, we documented regional and county variation in incremental standardized costs of heart disease (ie, comparing costs between beneficiaries with heart disease and beneficiaries without heart disease) by type of service (eg, inpatient, outpatient, post-acute care). Absolute incremental total costs varied by region. Although the largest absolute incremental total costs of heart disease were concentrated in southern and Appalachian counties, geographic patterns of costs varied by type of service. These data can be used to inform development of policies and payment models that address the observed geographic disparities.
A diameter increment model for Red Fir in California and Southern Oregon
K. Leroy Dolph
1992-01-01
Periodic (10-year) diameter increment of individual red fir trees in Califomia and southern Oregon can be predicted from initial diameter and crown ratio of each tree, site index, percent slope, and aspect of the site. The model actually predicts the natural logarithm ofthe change in squared diameter inside bark between the startand the end of a 10-year growth period....
NASA Astrophysics Data System (ADS)
DeMarco, Adam Ward
The turbulent motions with the atmospheric boundary layer exist over a wide range of spatial and temporal scales and are very difficult to characterize. Thus, to explore the behavior of such complex flow enviroments, it is customary to examine their properties from a statistical perspective. Utilizing the probability density functions of velocity and temperature increments, deltau and deltaT, respectively, this work investigates their multiscale behavior to uncover the unique traits that have yet to be thoroughly studied. Utilizing diverse datasets, including idealized, wind tunnel experiments, atmospheric turbulence field measurements, multi-year ABL tower observations, and mesoscale models simulations, this study reveals remarkable similiarities (and some differences) between the small and larger scale components of the probability density functions increments fields. This comprehensive analysis also utilizes a set of statistical distributions to showcase their ability to capture features of the velocity and temperature increments' probability density functions (pdfs) across multiscale atmospheric motions. An approach is proposed for estimating their pdfs utilizing the maximum likelihood estimation (MLE) technique, which has never been conducted utilizing atmospheric data. Using this technique, we reveal the ability to estimate higher-order moments accurately with a limited sample size, which has been a persistent concern for atmospheric turbulence research. With the use robust Goodness of Fit (GoF) metrics, we quantitatively reveal the accuracy of the distributions to the diverse dataset. Through this analysis, it is shown that the normal inverse Gaussian (NIG) distribution is a prime candidate to be used as an estimate of the increment pdfs fields. Therefore, using the NIG model and its parameters, we display the variations in the increments over a range of scales revealing some unique scale-dependent qualities under various stability and ow conditions. This novel approach can provide a method of characterizing increment fields with the sole use of only four pdf parameters. Also, we investigate the capability of the current state-of-the-art mesoscale atmospheric models to predict the features and highlight the potential for use for future model development. With the knowledge gained in this study, a number of applications can benefit by using our methodology, including the wind energy and optical wave propagation fields.
40 CFR 60.1620 - How do I comply with the increment of progress for initiating onsite construction?
Code of Federal Regulations, 2010 CFR
2010-07-01
... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY... or Before August 30, 1999 Model Rule-Increments of Progress § 60.1620 How do I comply with the...
Code of Federal Regulations, 2010 CFR
2010-07-01
... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY... or Before August 30, 1999 Model Rule-Increments of Progress § 60.1585 What are my requirements for...
40 CFR 60.1625 - How do I comply with the increment of progress for completing onsite construction?
Code of Federal Regulations, 2010 CFR
2010-07-01
... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY... or Before August 30, 1999 Model Rule-Increments of Progress § 60.1625 How do I comply with the...
NASA Astrophysics Data System (ADS)
Li, Ming; Wang, Q. J.; Bennett, James C.; Robertson, David E.
2016-09-01
This study develops a new error modelling method for ensemble short-term and real-time streamflow forecasting, called error reduction and representation in stages (ERRIS). The novelty of ERRIS is that it does not rely on a single complex error model but runs a sequence of simple error models through four stages. At each stage, an error model attempts to incrementally improve over the previous stage. Stage 1 establishes parameters of a hydrological model and parameters of a transformation function for data normalization, Stage 2 applies a bias correction, Stage 3 applies autoregressive (AR) updating, and Stage 4 applies a Gaussian mixture distribution to represent model residuals. In a case study, we apply ERRIS for one-step-ahead forecasting at a range of catchments. The forecasts at the end of Stage 4 are shown to be much more accurate than at Stage 1 and to be highly reliable in representing forecast uncertainty. Specifically, the forecasts become more accurate by applying the AR updating at Stage 3, and more reliable in uncertainty spread by using a mixture of two Gaussian distributions to represent the residuals at Stage 4. ERRIS can be applied to any existing calibrated hydrological models, including those calibrated to deterministic (e.g. least-squares) objectives.
NASA Astrophysics Data System (ADS)
Richman, J. G.; Shriver, J. F.; Metzger, E. J.; Hogan, P. J.; Smedstad, O. M.
2017-12-01
The Oceanography Division of the Naval Research Laboratory recently completed a 23-year (1993-2015) coupled ocean-sea ice reanalysis forced by NCEP CFS reanalysis fluxes. The reanalysis uses the Global Ocean Forecast System (GOFS) framework of the HYbrid Coordinate Ocean Model (HYCOM) and the Los Alamos Community Ice CodE (CICE) and the Navy Coupled Ocean Data Assimilation 3D Var system (NCODA). The ocean model has 41 layers and an equatorial resolution of 0.08° (8.8 km) on a tri-polar grid with the sea ice model on the same grid that reduces to 3.5 km at the North Pole. Sea surface temperature (SST), sea surface height (SSH) and temperature-salinity profile data are assimilated into the ocean every day. The SSH anomalies are converted into synthetic profiles of temperature and salinity prior to assimilation. Incremental analysis updating of geostrophically balanced increments is performed over a 6-hour insertion window. Sea ice concentration is assimilated into the sea ice model every day. Following the lead of the Ocean Reanalysis Intercomparison Project (ORA-IP), the monthly mean upper ocean heat and salt content from the surface to 300 m, 700m and 1500 m, the mixed layer depth, the depth of the 20°C isotherm, the steric sea surface height and the Atlantic Meridional Overturning Circulation for the GOFS reanalysis and the Simple Ocean Data Assimilation (SODA 3.3.1) eddy-permitting reanalysis have been compared on a global uniform 0.5° grid. The differences between the two ocean reanalyses in heat and salt content increase with increasing integration depth. Globally, GOFS trends to be colder than SODA at all depth. Warming trends are observed at all depths over the 23 year period. The correlation of the upper ocean heat content is significant above 700 m. Prior to 2004, differences in the data assimilated lead to larger biases. The GOFS reanalysis assimilates SSH as profile data, while SODA doesn't. Large differences are found in the Western Boundary Currents, Southern Ocean and equatorial regions. In the Indian Ocean, the Equatorial Counter Current extends to far to the east and the subsurface flow in the thermocline is too weak in GOFS. The 20°C isotherm is biased 2 m shallow in SODA compared to GOFS, but the monthly anomalies in the depth are highly correlated.
NASA Astrophysics Data System (ADS)
Most, Sebastian; Nowak, Wolfgang; Bijeljic, Branko
2015-04-01
Fickian transport in groundwater flow is the exception rather than the rule. Transport in porous media is frequently simulated via particle methods (i.e. particle tracking random walk (PTRW) or continuous time random walk (CTRW)). These methods formulate transport as a stochastic process of particle position increments. At the pore scale, geometry and micro-heterogeneities prohibit the commonly made assumption of independent and normally distributed increments to represent dispersion. Many recent particle methods seek to loosen this assumption. Hence, it is important to get a better understanding of the processes at pore scale. For our analysis we track the positions of 10.000 particles migrating through the pore space over time. The data we use come from micro CT scans of a homogeneous sandstone and encompass about 10 grain sizes. Based on those images we discretize the pore structure and simulate flow at the pore scale based on the Navier-Stokes equation. This flow field realistically describes flow inside the pore space and we do not need to add artificial dispersion during the transport simulation. Next, we use particle tracking random walk and simulate pore-scale transport. Finally, we use the obtained particle trajectories to do a multivariate statistical analysis of the particle motion at the pore scale. Our analysis is based on copulas. Every multivariate joint distribution is a combination of its univariate marginal distributions. The copula represents the dependence structure of those univariate marginals and is therefore useful to observe correlation and non-Gaussian interactions (i.e. non-Fickian transport). The first goal of this analysis is to better understand the validity regions of commonly made assumptions. We are investigating three different transport distances: 1) The distance where the statistical dependence between particle increments can be modelled as an order-one Markov process. This would be the Markovian distance for the process, where the validity of yet-unexplored non-Gaussian-but-Markovian random walks start. 2) The distance where bivariate statistical dependence simplifies to a multi-Gaussian dependence based on simple linear correlation (validity of correlated PTRW/CTRW). 3) The distance of complete statistical independence (validity of classical PTRW/CTRW). The second objective is to reveal characteristic dependencies influencing transport the most. Those dependencies can be very complex. Copulas are highly capable of representing linear dependence as well as non-linear dependence. With that tool we are able to detect persistent characteristics dominating transport even across different scales. The results derived from our experimental data set suggest that there are many more non-Fickian aspects of pore-scale transport than the univariate statistics of longitudinal displacements. Non-Fickianity can also be found in transverse displacements, and in the relations between increments at different time steps. Also, the found dependence is non-linear (i.e. beyond simple correlation) and persists over long distances. Thus, our results strongly support the further refinement of techniques like correlated PTRW or correlated CTRW towards non-linear statistical relations.
Numerical simulation of pseudoelastic shape memory alloys using the large time increment method
NASA Astrophysics Data System (ADS)
Gu, Xiaojun; Zhang, Weihong; Zaki, Wael; Moumni, Ziad
2017-04-01
The paper presents a numerical implementation of the large time increment (LATIN) method for the simulation of shape memory alloys (SMAs) in the pseudoelastic range. The method was initially proposed as an alternative to the conventional incremental approach for the integration of nonlinear constitutive models. It is adapted here for the simulation of pseudoelastic SMA behavior using the Zaki-Moumni model and is shown to be especially useful in situations where the phase transformation process presents little or lack of hardening. In these situations, a slight stress variation in a load increment can result in large variations of strain and local state variables, which may lead to difficulties in numerical convergence. In contrast to the conventional incremental method, the LATIN method solve the global equilibrium and local consistency conditions sequentially for the entire loading path. The achieved solution must satisfy the conditions of static and kinematic admissibility and consistency simultaneously after several iterations. 3D numerical implementation is accomplished using an implicit algorithm and is then used for finite element simulation using the software Abaqus. Computational tests demonstrate the ability of this approach to simulate SMAs presenting flat phase transformation plateaus and subjected to complex loading cases, such as the quasi-static behavior of a stent structure. Some numerical results are contrasted to those obtained using step-by-step incremental integration.
Cost-Effectiveness Analysis of Regorafenib for Metastatic Colorectal Cancer
Goldstein, Daniel A.; Ahmad, Bilal B.; Chen, Qiushi; Ayer, Turgay; Howard, David H.; Lipscomb, Joseph; El-Rayes, Bassel F.; Flowers, Christopher R.
2015-01-01
Purpose Regorafenib is a standard-care option for treatment-refractory metastatic colorectal cancer that increases median overall survival by 6 weeks compared with placebo. Given this small incremental clinical benefit, we evaluated the cost-effectiveness of regorafenib in the third-line setting for patients with metastatic colorectal cancer from the US payer perspective. Methods We developed a Markov model to compare the cost and effectiveness of regorafenib with those of placebo in the third-line treatment of metastatic colorectal cancer. Health outcomes were measured in life-years and quality-adjusted life-years (QALYs). Drug costs were based on Medicare reimbursement rates in 2014. Model robustness was addressed in univariable and probabilistic sensitivity analyses. Results Regorafenib provided an additional 0.04 QALYs (0.13 life-years) at a cost of $40,000, resulting in an incremental cost-effectiveness ratio of $900,000 per QALY. The incremental cost-effectiveness ratio for regorafenib was > $550,000 per QALY in all of our univariable and probabilistic sensitivity analyses. Conclusion Regorafenib provides minimal incremental benefit at high incremental cost per QALY in the third-line management of metastatic colorectal cancer. The cost-effectiveness of regorafenib could be improved by the use of value-based pricing. PMID:26304904
Situation Model Updating in Young and Older Adults: Global versus Incremental Mechanisms
Bailey, Heather R.; Zacks, Jeffrey M.
2015-01-01
Readers construct mental models of situations described by text. Activity in narrative text is dynamic, so readers must frequently update their situation models when dimensions of the situation change. Updating can be incremental, such that a change leads to updating just the dimension that changed, or global, such that the entire model is updated. Here, we asked whether older and young adults make differential use of incremental and global updating. Participants read narratives containing changes in characters and spatial location and responded to recognition probes throughout the texts. Responses were slower when probes followed a change, suggesting that situation models were updated at changes. When either dimension changed, responses to probes for both dimensions were slowed; this provides evidence for global updating. Moreover, older adults showed stronger evidence of global updating than did young adults. One possibility is that older adults perform more global updating to offset reduced ability to manipulate information in working memory. PMID:25938248
Courville, Xan F; Tomek, Ivan M; Kirkland, Kathryn B; Birhle, Marian; Kantor, Stephen R; Finlayson, Samuel R G
2012-02-01
To perform a cost-effectiveness analysis to evaluate preoperative use of mupirocin in patients with total joint arthroplasty (TJA). Simple decision tree model. Outpatient TJA clinical setting. Hypothetical cohort of patients with TJA. A simple decision tree model compared 3 strategies in a hypothetical cohort of patients with TJA: (1) obtaining preoperative screening cultures for all patients, followed by administration of mupirocin to patients with cultures positive for Staphylococcus aureus; (2) providing empirical preoperative treatment with mupirocin for all patients without screening; and (3) providing no preoperative treatment or screening. We assessed the costs and benefits over a 1-year period. Data inputs were obtained from a literature review and from our institution's internal data. Utilities were measured in quality-adjusted life-years, and costs were measured in 2005 US dollars. Incremental cost-effectiveness ratio. The treat-all and screen-and-treat strategies both had lower costs and greater benefits, compared with the no-treatment strategy. Sensitivity analysis revealed that this result is stable even if the cost of mupirocin was over $100 and the cost of SSI ranged between $26,000 and $250,000. Treating all patients remains the best strategy when the prevalence of S. aureus carriers and surgical site infection is varied across plausible values as well as when the prevalence of mupirocin-resistant strains is high. Empirical treatment with mupirocin ointment or use of a screen-and-treat strategy before TJA is performed is a simple, safe, and cost-effective intervention that can reduce the risk of SSI. S. aureus decolonization with nasal mupirocin for patients undergoing TJA should be considered. Level II, economic and decision analysis.
Code of Federal Regulations, 2010 CFR
2010-07-01
... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY... or Before August 30, 1999 Model Rule-Increments of Progress § 60.1595 What must I include in the...
40 CFR 60.2830 - When must I submit the notifications of achievement of increments of progress?
Code of Federal Regulations, 2010 CFR
2010-07-01
... Commenced Construction On or Before November 30, 1999 Model Rule-Air Curtain Incinerators § 60.2830 When... increments of progress must be postmarked no later than 10 business days after the compliance date for the...
Evaluation of incremental reactivity and its uncertainty in Southern California.
Martien, Philip T; Harley, Robert A; Milford, Jana B; Russell, Armistead G
2003-04-15
The incremental reactivity (IR) and relative incremental reactivity (RIR) of carbon monoxide and 30 individual volatile organic compounds (VOC) were estimated for the South Coast Air Basin using two photochemical air quality models: a 3-D, grid-based model and a vertically resolved trajectory model. Both models include an extended version of the SAPRC99 chemical mechanism. For the 3-D modeling, the decoupled direct method (DDM-3D) was used to assess reactivities. The trajectory model was applied to estimate uncertainties in reactivities due to uncertainties in chemical rate parameters, deposition parameters, and emission rates using Monte Carlo analysis with Latin hypercube sampling. For most VOC, RIRs were found to be consistent in rankings with those produced by Carter using a box model. However, 3-D simulations show that coastal regions, upwind of most of the emissions, have comparatively low IR but higher RIR than predicted by box models for C4-C5 alkenes and carbonyls that initiate the production of HOx radicals. Biogenic VOC emissions were found to have a lower RIR than predicted by box model estimates, because emissions of these VOC were mostly downwind of the areas of primary ozone production. Uncertainties in RIR of individual VOC were found to be dominated by uncertainties in the rate parameters of their primary oxidation reactions. The coefficient of variation (COV) of most RIR values ranged from 20% to 30%, whereas the COV of absolute incremental reactivity ranged from about 30% to 40%. In general, uncertainty and variability both decreased when relative rather than absolute reactivity metrics were used.
Cost-effectiveness of Lung Cancer Screening in Canada.
Goffin, John R; Flanagan, William M; Miller, Anthony B; Fitzgerald, Natalie R; Memon, Saima; Wolfson, Michael C; Evans, William K
2015-09-01
The US National Lung Screening Trial supports screening for lung cancer among smokers using low-dose computed tomographic (LDCT) scans. The cost-effectiveness of screening in a publically funded health care system remains a concern. To assess the cost-effectiveness of LDCT scan screening for lung cancer within the Canadian health care system. The Cancer Risk Management Model (CRMM) simulated individual lives within the Canadian population from 2014 to 2034, incorporating cancer risk, disease management, outcome, and cost data. Smokers and former smokers eligible for lung cancer screening (30 pack-year smoking history, ages 55-74 years, for the reference scenario) were modeled, and performance parameters were calibrated to the National Lung Screening Trial (NLST). The reference screening scenario assumes annual scans to age 75 years, 60% participation by 10 years, 70% adherence to screening, and unchanged smoking rates. The CRMM outputs are aggregated, and costs (2008 Canadian dollars) and life-years are discounted 3% annually. The incremental cost-effectiveness ratio. Compared with no screening, the reference scenario saved 51,000 quality-adjusted life-years (QALY) and had an incremental cost-effectiveness ratio of CaD $52,000/QALY. If smoking history is modeled for 20 or 40 pack-years, incremental cost-effectiveness ratios of CaD $62,000 and CaD $43,000/QALY, respectively, were generated. Changes in participation rates altered life years saved but not the incremental cost-effectiveness ratio, while the incremental cost-effectiveness ratio is sensitive to changes in adherence. An adjunct smoking cessation program improving the quit rate by 22.5% improves the incremental cost-effectiveness ratio to CaD $24,000/QALY. Lung cancer screening with LDCT appears cost-effective in the publicly funded Canadian health care system. An adjunct smoking cessation program has the potential to improve outcomes.
NASA Astrophysics Data System (ADS)
Scholl, V.; Hulslander, D.; Goulden, T.; Wasser, L. A.
2015-12-01
Spatial and temporal monitoring of vegetation structure is important to the ecological community. Airborne Light Detection and Ranging (LiDAR) systems are used to efficiently survey large forested areas. From LiDAR data, three-dimensional models of forests called canopy height models (CHMs) are generated and used to estimate tree height. A common problem associated with CHMs is data pits, where LiDAR pulses penetrate the top of the canopy, leading to an underestimation of vegetation height. The National Ecological Observatory Network (NEON) currently implements an algorithm to reduce data pit frequency, which requires two height threshold parameters, increment size and range ceiling. CHMs are produced at a series of height increments up to a height range ceiling and combined to produce a CHM with reduced pits (referred to as a "pit-free" CHM). The current implementation uses static values for the height increment and ceiling (5 and 15 meters, respectively). To facilitate the generation of accurate pit-free CHMs across diverse NEON sites with varying vegetation structure, the impacts of adjusting the height threshold parameters were investigated through development of an algorithm which dynamically selects the height increment and ceiling. A series of pit-free CHMs were generated using three height range ceilings and four height increment values for three ecologically different sites. Height threshold parameters were found to change CHM-derived tree heights up to 36% compared to original CHMs. The extent of the parameters' influence on modelled tree heights was greater than expected, which will be considered during future CHM data product development at NEON. (A) Aerial image of Harvard National Forest, (B) standard CHM containing pits, appearing as black speckles, (C) a pit-free CHM created with the static algorithm implementation, and (D) a pit-free CHM created through varying the height threshold ceiling up to 82 m and the increment to 1 m.
Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.
Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin
2015-02-01
To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach
Enns, Eva A.; Cipriano, Lauren E.; Simons, Cyrena T.; Kong, Chung Yin
2014-01-01
Background To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single “goodness-of-fit” (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. Methods We demonstrate the Pareto frontier approach in the calibration of two models: a simple, illustrative Markov model and a previously-published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to two possible weighted-sum GOF scoring systems, and compare the health economic conclusions arising from these different definitions of best-fitting. Results For the simple model, outcomes evaluated over the best-fitting input sets according to the two weighted-sum GOF schemes were virtually non-overlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95%CI: 72,500 – 87,600] vs. $139,700 [95%CI: 79,900 - 182,800] per QALY gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95%CI: 64,900 – 156,200] per QALY gained). The TAVR model yielded similar results. Conclusions Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. PMID:24799456
The Effect of the Nunn-McCurdy Amendment on Unit-Cost-Growth of Defense Acquisition Projects
2010-07-01
Congressional testimony: “I consider Virginia Class cost-reduction efforts a model for all our ships, submarines, and aircraft” (Roughhead, 2009...PATRIOT/MEADS CAP- MISSLE 2004 DE STRYKER 2004 PdE WIN-T INCREMENT 1 2007 PdE WIN-T INCREMENT 2 2007 DE Subtot~l Navy: ADS (ANJWOR-3) 2005 DE AGM...JAVELIN JLENS LONGBOW APACHE LUH PATRIOT PAC-3 PATRIOT/MEADS CAP - FIRE UNIT PATRIOT/MEADS CAP - MISSLE STRYKER WIN-T INCREMENT 1 WIN-T
Rakesh Minocha; Walter C. Shortle
1993-01-01
Two simple and fast methods for the extraction of major inorganic cations (Ca, Mg, Mn, K) from small quantities of stemwood and needles of woody plants were developed. A 3.2- or 6.4-mm cobalt drill bit was used to shave samples from disks and increment cores of stemwood. For ion extraction, wood (ground or shavings) or needles were either homogenzied using a Tekmar...
Reduced Order Model Basis Vector Generation: Generates Basis Vectors fro ROMs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arrighi, Bill
2016-03-03
libROM is a library that implements order reduction via singular value decomposition (SVD) of sampled state vectors. It implements 2 parallel, incremental SVD algorithms and one serial, non-incremental algorithm. It also provides a mechanism for adaptive sampling of basis vectors.
40 CFR 60.2835 - What if I do not meet an increment of progress?
Code of Federal Regulations, 2010 CFR
2010-07-01
... or Before November 30, 1999 Model Rule-Air Curtain Incinerators § 60.2835 What if I do not meet an... Administrator postmarked within 10 business days after the date for that increment of progress in table 1 of...
Mixed-initiative control of intelligent systems
NASA Technical Reports Server (NTRS)
Borchardt, G. C.
1987-01-01
Mixed-initiative user interfaces provide a means by which a human operator and an intelligent system may collectively share the task of deciding what to do next. Such interfaces are important to the effective utilization of real-time expert systems as assistants in the execution of critical tasks. Presented here is the Incremental Inference algorithm, a symbolic reasoning mechanism based on propositional logic and suited to the construction of mixed-initiative interfaces. The algorithm is similar in some respects to the Truth Maintenance System, but replaces the notion of 'justifications' with a notion of recency, allowing newer values to override older values yet permitting various interested parties to refresh these values as they become older and thus more vulnerable to change. A simple example is given of the use of the Incremental Inference algorithm plus an overview of the integration of this mechanism within the SPECTRUM expert system for geological interpretation of imaging spectrometer data.
Curtis, Alexandra M; VanBuren, John; Cavanaugh, Joseph E; Warren, John J; Marshall, Teresa A; Levy, Steven M
2018-05-12
To assess longitudinal associations between permanent tooth caries increment and both modifiable and non-modifiable risk factors, using best subsets model selection. The Iowa Fluoride Study has followed a birth cohort with standardized caries exams without radiographs of the permanent dentition conducted at about ages 9, 13, and 17 years. Questionnaires were sent semi-annually to assess fluoride exposures and intakes, select food and beverage intakes, and tooth brushing frequency. Exposure variables were averaged over ages 7-9, 11-13, and 15-17, reflecting exposure 2 years prior to the caries exam. Longitudinal models were used to relate period-specific averaged exposures and demographic variables to adjusted decayed and filled surface increments (ADJCI) (n = 392). The Akaike Information Criterion (AIC) was used to assess optimal explanatory variable combinations. From birth to age 9, 9-13, and 13-17 years, 24, 30, and 55 percent of subjects had positive permanent ADJCI, respectively. Ten models had AIC values within two units of the lowest AIC model and were deemed optimal based on AIC. Younger age, being male, higher mother's education, and higher brushing frequency were associated with lower caries increment in all 10 models, while milk intake was included in 3 of 10 models. Higher milk intakes were slightly associated with lower ADJCI. With the exception of brushing frequency, modifiable risk factors under study were not significantly associated with ADJCI. When possible, researchers should consider presenting multiple models if fit criteria cannot discern among a group of optimal models. © 2018 American Association of Public Health Dentistry.
Real-Time Detection of Dust Devils from Pressure Readings
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri
2009-01-01
A method for real-time detection of dust devils at a given location is based on identifying the abrupt, temporary decreases in atmospheric pressure that are characteristic of dust devils as they travel through that location. The method was conceived for use in a study of dust devils on the Martian surface, where bandwidth limitations encourage the transmission of only those blocks of data that are most likely to contain information about features of interest, such as dust devils. The method, which is a form of intelligent data compression, could readily be adapted to use for the same purpose in scientific investigation of dust devils on Earth. In this method, the readings of an atmospheric- pressure sensor are repeatedly digitized, recorded, and processed by an algorithm that looks for extreme deviations from a continually updated model of the current pressure environment. The question in formulating the algorithm is how to model current normal observations and what minimum magnitude deviation can be considered sufficiently anomalous as to indicate the presence of a dust devil. There is no single, simple answer to this question: any answer necessarily entails a compromise between false detections and misses. For the original Mars application, the answer was sought through analysis of sliding time windows of digitized pressure readings. Windows of 5-, 10-, and 15-minute durations were considered. The windows were advanced in increments of 30 seconds. Increments of other sizes can also be used, but computational cost increases as the increment decreases and analysis is performed more frequently. Pressure models were defined using a polynomial fit to the data within the windows. For example, the figure depicts pressure readings from a 10-minute window wherein the model was defined by a third-degree polynomial fit to the readings and dust devils were identified as negative deviations larger than both 3 standard deviations (from the mean) and 0.05 mbar in magnitude. An algorithm embodying the detection scheme of this example was found to yield a miss rate of just 8 percent and a false-detection rate of 57 percent when evaluated on historical pressure-sensor data collected by the Mars Pathfinder lander. Since dust devils occur infrequently over the course of a mission, prioritizing observations that contain successful detections could greatly conserve bandwidth allocated to a given mission. This technique can be used on future Mars landers and rovers, such as Mars Phoenix and the Mars Science Laboratory.
Incremental wind tunnel testing of high lift systems
NASA Astrophysics Data System (ADS)
Victor, Pricop Mihai; Mircea, Boscoianu; Daniel-Eugeniu, Crunteanu
2016-06-01
Efficiency of trailing edge high lift systems is essential for long range future transport aircrafts evolving in the direction of laminar wings, because they have to compensate for the low performance of the leading edge devices. Modern high lift systems are subject of high performance requirements and constrained to simple actuation, combined with a reduced number of aerodynamic elements. Passive or active flow control is thus required for the performance enhancement. An experimental investigation of reduced kinematics flap combined with passive flow control took place in a low speed wind tunnel. The most important features of the experimental setup are the relatively large size, corresponding to a Reynolds number of about 2 Million, the sweep angle of 30 degrees corresponding to long range airliners with high sweep angle wings and the large number of flap settings and mechanical vortex generators. The model description, flap settings, methodology and results are presented.
Reflections on social activism in otolaryngology.
Kopelovich, Jonathan C
2014-03-01
What is "social activism" to you? For older otolaryngologists, the term is likely to signify the tumult of the 1960s. For incoming generations, this connotation is outdated. Rather, it more broadly reflects concerted efforts to improve the public good. Some ally with existing institutions to work toward incremental progress. Some start new organizations, using technological tools to build networks, marshal resources, and leapfrog hurdles. Countering these efforts are the ever-changing challenges of practicing otolaryngology today: electronic health records, shifting incentives, and changes in the practice model. Employment by large conglomerates is more common, decreasing our visibility as community leaders. Burnout is a recognized "hazard," and budding otolaryngologists are particularly susceptible. Adding one more thing, like social activism, to a full plate seems counterintuitive. But it shouldn't be. You don't need a "bigger" plate to get involved in social causes. Start simple. Find a partner. Scale up. You'll find it rewarding.
Quantitative model of the growth of floodplains by vertical accretion
Moody, J.A.; Troutman, B.M.
2000-01-01
A simple one-dimensional model is developed to quantitatively predict the change in elevation, over a period of decades, for vertically accreting floodplains. This unsteady model approximates the monotonic growth of a floodplain as an incremental but constant increase of net sediment deposition per flood for those floods of a partial duration series that exceed a threshold discharge corresponding to the elevation of the floodplain. Sediment deposition from each flood increases the elevation of the floodplain and consequently the magnitude of the threshold discharge resulting in a decrease in the number of floods and growth rate of the floodplain. Floodplain growth curves predicted by this model are compared to empirical growth curves based on dendrochronology and to direct field measurements at five floodplain sites. The model was used to predict the value of net sediment deposition per flood which best fits (in a least squares sense) the empirical and field measurements; these values fall within the range of independent estimates of the net sediment deposition per flood based on empirical equations. These empirical equations permit the application of the model to estimate of floodplain growth for other floodplains throughout the world which do not have detailed data of sediment deposition during individual floods. Copyright (C) 2000 John Wiley and Sons, Ltd.
Tuffaha, Haitham W; Mitchell, Andrew; Ward, Robyn L; Connelly, Luke; Butler, James R G; Norris, Sarah; Scuffham, Paul A
2018-01-04
PurposeTo evaluate the cost-effectiveness of BRCA testing in women with breast cancer, and cascade testing in family members of BRCA mutation carriers.MethodsA cost-effectiveness analysis was conducted using a cohort Markov model from a health-payer perspective. The model estimated the long-term benefits and costs of testing women with breast cancer who had at least a 10% pretest BRCA mutation probability, and the cascade testing of first- and second-degree relatives of women who test positive.ResultsCompared with no testing, BRCA testing of affected women resulted in an incremental cost per quality-adjusted life-year (QALY) gained of AU$18,900 (incremental cost AU$1,880; incremental QALY gain 0.10) with reductions of 0.04 breast and 0.01 ovarian cancer events. Testing affected women and cascade testing of family members resulted in an incremental cost per QALY gained of AU$9,500 compared with testing affected women only (incremental cost AU$665; incremental QALY gain 0.07) with additional reductions of 0.06 breast and 0.01 ovarian cancer events.ConclusionBRCA testing in women with breast cancer is cost-effective and is associated with reduced risk of cancer and improved survival. Extending testing to cover family members of affected women who test positive improves cost-effectiveness beyond restricting testing to affected women only.GENETICS in MEDICINE advance online publication, 4 January 2018; doi:10.1038/gim.2017.231.
Bigger is Better, but at What Cost? Estimating the Economic Value of Incremental Data Assets.
Dalessandro, Brian; Perlich, Claudia; Raeder, Troy
2014-06-01
Many firms depend on third-party vendors to supply data for commercial predictive modeling applications. An issue that has received very little attention in the prior research literature is the estimation of a fair price for purchased data. In this work we present a methodology for estimating the economic value of adding incremental data to predictive modeling applications and present two cases studies. The methodology starts with estimating the effect that incremental data has on model performance in terms of common classification evaluation metrics. This effect is then translated into economic units, which gives an expected economic value that the firm might realize with the acquisition of a particular data asset. With this estimate a firm can then set a data acquisition price that targets a particular return on investment. This article presents the methodology in full detail and illustrates it in the context of two marketing case studies.
Flitcroft, Rebecca; Burnett, Kelly; Christiansen, Kelly
2013-07-01
Diadromous aquatic species that cross a diverse range of habitats (including marine, estuarine, and freshwater) face different effects of climate change in each environment. One such group of species is the anadromous Pacific salmon (Oncorhynchus spp.). Studies of the potential effects of climate change on salmonids have focused on both marine and freshwater environments. Access to a variety of estuarine habitat has been shown to enhance juvenile life-history diversity, thereby contributing to the resilience of many salmonid species. Our study is focused on the effect of sea-level rise on the availability, complexity, and distribution of estuarine, and low-freshwater habitat for Chinook salmon (Oncorhynchus tshawytscha), steelhead (anadromous O. mykiss), and coho salmon (O. kisutch) along the Oregon Coast under future climate change scenarios. Using LiDAR, we modeled the geomorphologies of five Oregon estuaries and estimated a contour associated with the current mean high tide. Contour intervals at 1- and 2-m increments above the current mean high tide were generated, and changes in the estuary morphology were assessed. Because our analysis relied on digital data, we compared three types of digital data in one estuary to assess the utility of different data sets in predicting the changes in estuary shape. For each salmonid species, changes in the amount and complexity of estuarine edge habitats varied by estuary. The simple modeling approach we applied can also be used to identify areas that may be most amenable to pre-emptive restoration actions to mitigate or enhance salmonid habitat under future climatic conditions.
Higher mortgages, lower energy bills: The real economics of buying an energy-efficient home
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mills, E.
1987-02-01
To measure the actual costs and benefits of buying an energy- efficient home, it is necessary to employ a cash-flow model that accounts for mortgage interest and other charges associated with the incremental costs of conservation measures. The ability to make payments gradually over the term of a mortgage, energy savings, and tax benefits contribute to increased cost effectiveness. Conversely, financial benefits are reduced by interest payments, insurance, taxes, and various fees linked to the (higher) sale price of an energy-efficient home. Accounting for these factors can yield a strikingly different picture from those given by commonly used ''engineering'' indicators,more » such as simple payback time, internal rate of return, or net present value (NPV), which are based solely on incremental costs and energy savings. This analysis uses actual energy savings data and incremental construction costs to evaluate the mortgage cash flow for 79 of the 144 energy-efficient homes constructed in Minnesota under the Energy-Efficient Housing Demonstration Program (EEHDP) initiated in 1980 by the Minnesota Housing Finance Agency. Using typical lending terms and fees, we find that the mean mortgage-NPV derived from the homeowners' real cash flow (including construction and financing costs) is 20% lower than the standard engineering-NPV of the conservation investment: $7981 versus $9810. For eight homes, the mortgage-NPV becomes negative once we account for the various mortgage-related effects. Sensitivities to interest rates, down payment, loan term, and marginal tax rate are included to illustrate the often large impact of alternative assumptions about these parameters. The most dramatic effect occurs when the loan term is reduced from 30 to 15 years and the mortgage NPV falls to -$925. We also evaluate the favorable Federal Home Administration (FHA) terms actually applied to the EEHDP homes. 8 refs., 4 figs., 3 tabs.« less
Laxy, Michael; Wilson, Edward C F; Boothby, Clare E; Griffin, Simon J
2017-12-01
There is uncertainty about the cost effectiveness of early intensive treatment versus routine care in individuals with type 2 diabetes detected by screening. To derive a trial-informed estimate of the incremental costs of intensive treatment as delivered in the Anglo-Danish-Dutch Study of Intensive Treatment in People with Screen-Detected Diabetes in Primary Care-Europe (ADDITION) trial and to revisit the long-term cost-effectiveness analysis from the perspective of the UK National Health Service. We analyzed the electronic primary care records of a subsample of the ADDITION-Cambridge trial cohort (n = 173). Unit costs of used primary care services were taken from the published literature. Incremental annual costs of intensive treatment versus routine care in years 1 to 5 after diagnosis were calculated using multilevel generalized linear models. We revisited the long-term cost-utility analyses for the ADDITION-UK trial cohort and reported results for ADDITION-Cambridge using the UK Prospective Diabetes Study Outcomes Model and the trial-informed cost estimates according to a previously developed evaluation framework. Incremental annual costs of intensive treatment over years 1 to 5 averaged £29.10 (standard error = £33.00) for consultations with general practitioners and nurses and £54.60 (standard error = £28.50) for metabolic and cardioprotective medication. For ADDITION-UK, over the 10-, 20-, and 30-year time horizon, adjusted incremental quality-adjusted life-years (QALYs) were 0.014, 0.043, and 0.048, and adjusted incremental costs were £1,021, £1,217, and £1,311, resulting in incremental cost-effectiveness ratios of £71,232/QALY, £28,444/QALY, and £27,549/QALY, respectively. Respective incremental cost-effectiveness ratios for ADDITION-Cambridge were slightly higher. The incremental costs of intensive treatment as delivered in the ADDITION-Cambridge trial were lower than expected. Given UK willingness-to-pay thresholds in patients with screen-detected diabetes, intensive treatment is of borderline cost effectiveness over a time horizon of 20 years and more. Copyright © 2017. Published by Elsevier Inc.
Working Memory Load Strengthens Reward Prediction Errors.
Collins, Anne G E; Ciullo, Brittany; Frank, Michael J; Badre, David
2017-04-19
Reinforcement learning (RL) in simple instrumental tasks is usually modeled as a monolithic process in which reward prediction errors (RPEs) are used to update expected values of choice options. This modeling ignores the different contributions of different memory and decision-making systems thought to contribute even to simple learning. In an fMRI experiment, we investigated how working memory (WM) and incremental RL processes interact to guide human learning. WM load was manipulated by varying the number of stimuli to be learned across blocks. Behavioral results and computational modeling confirmed that learning was best explained as a mixture of two mechanisms: a fast, capacity-limited, and delay-sensitive WM process together with slower RL. Model-based analysis of fMRI data showed that striatum and lateral prefrontal cortex were sensitive to RPE, as shown previously, but, critically, these signals were reduced when the learning problem was within capacity of WM. The degree of this neural interaction related to individual differences in the use of WM to guide behavioral learning. These results indicate that the two systems do not process information independently, but rather interact during learning. SIGNIFICANCE STATEMENT Reinforcement learning (RL) theory has been remarkably productive at improving our understanding of instrumental learning as well as dopaminergic and striatal network function across many mammalian species. However, this neural network is only one contributor to human learning and other mechanisms such as prefrontal cortex working memory also play a key role. Our results also show that these other players interact with the dopaminergic RL system, interfering with its key computation of reward prediction errors. Copyright © 2017 the authors 0270-6474/17/374332-11$15.00/0.
Kerr, Kathleen F.; Meisner, Allison; Thiessen-Philbrook, Heather; Coca, Steven G.
2014-01-01
The field of nephrology is actively involved in developing biomarkers and improving models for predicting patients’ risks of AKI and CKD and their outcomes. However, some important aspects of evaluating biomarkers and risk models are not widely appreciated, and statistical methods are still evolving. This review describes some of the most important statistical concepts for this area of research and identifies common pitfalls. Particular attention is paid to metrics proposed within the last 5 years for quantifying the incremental predictive value of a new biomarker. PMID:24855282
NASA Astrophysics Data System (ADS)
Frohn, Peter; Engel, Bernd; Groth, Sebastian
2018-05-01
Kinematic forming processes shape geometries by the process parameters to achieve a more universal process utilizations regarding geometric configurations. The kinematic forming process Incremental Swivel Bending (ISB) bends sheet metal strips or profiles in plane. The sequence for bending an arc increment is composed of the steps clamping, bending, force release and feed. The bending moment is frictionally engaged by two clamping units in a laterally adjustable bending pivot. A minimum clamping force hindering the material from slipping through the clamping units is a crucial criterion to achieve a well-defined incremental arc. Therefore, an analytic description of a singular bent increment is developed in this paper. The bending moment is calculated by the uniaxial stress distribution over the profiles' width depending on the bending pivot's position. By a Coulomb' based friction model, necessary clamping force is described in dependence of friction, offset, dimensions of the clamping tools and strip thickness as well as material parameters. Boundaries for the uniaxial stress calculation are given in dependence of friction, tools' dimensions and strip thickness. The results indicate that changing the bending pivot to an eccentric position significantly affects the process' bending moment and, hence, clamping force, which is given in dependence of yield stress and hardening exponent. FE simulations validate the model with satisfactory accordance.
Kunz, Wolfgang G; Hunink, M G Myriam; Sommer, Wieland H; Beyer, Sebastian E; Meinel, Felix G; Dorn, Franziska; Wirth, Stefan; Reiser, Maximilian F; Ertl-Wagner, Birgit; Thierfelder, Kolja M
2016-11-01
Endovascular therapy in addition to standard care (EVT+SC) has been demonstrated to be more effective than SC in acute ischemic large vessel occlusion stroke. Our aim was to determine the cost-effectiveness of EVT+SC depending on patients' initial National Institutes of Health Stroke Scale (NIHSS) score, time from symptom onset, Alberta Stroke Program Early CT Score (ASPECTS), and occlusion location. A decision model based on Markov simulations estimated lifetime costs and quality-adjusted life years (QALYs) associated with both strategies applied in a US setting. Model input parameters were obtained from the literature, including recently pooled outcome data of 5 randomized controlled trials (ESCAPE [Endovascular Treatment for Small Core and Proximal Occlusion Ischemic Stroke], EXTEND-IA [Extending the Time for Thrombolysis in Emergency Neurological Deficits-Intra-Arterial], MR CLEAN [Multicenter Randomized Clinical Trial of Endovascular Treatment for Acute Ischemic Stroke in the Netherlands], REVASCAT [Randomized Trial of Revascularization With Solitaire FR Device Versus Best Medical Therapy in the Treatment of Acute Stroke Due to Anterior Circulation Large Vessel Occlusion Presenting Within 8 Hours of Symptom Onset], and SWIFT PRIME [Solitaire With the Intention for Thrombectomy as Primary Endovascular Treatment]). Probabilistic sensitivity analysis was performed to estimate uncertainty of the model results. Net monetary benefits, incremental costs, incremental effectiveness, and incremental cost-effectiveness ratios were derived from the probabilistic sensitivity analysis. The willingness-to-pay was set to $50 000/QALY. Overall, EVT+SC was cost-effective compared with SC (incremental cost: $4938, incremental effectiveness: 1.59 QALYs, and incremental cost-effectiveness ratio: $3110/QALY) in 100% of simulations. In all patient subgroups, EVT+SC led to gained QALYs (range: 0.47-2.12), and mean incremental cost-effectiveness ratios were considered cost-effective. However, subgroups with ASPECTS ≤5 or with M2 occlusions showed considerably higher incremental cost-effectiveness ratios ($14 273/QALY and $28 812/QALY, respectively) and only reached suboptimal acceptability in the probabilistic sensitivity analysis (75.5% and 59.4%, respectively). All other subgroups had acceptability rates of 90% to 100%. EVT+SC is cost-effective in most subgroups. In patients with ASPECTS ≤5 or with M2 occlusions, cost-effectiveness remains uncertain based on current data. © 2016 American Heart Association, Inc.
Atmospheric response to Saharan dust deduced from ECMWF reanalysis increments
NASA Astrophysics Data System (ADS)
Kishcha, P.; Alpert, P.; Barkan, J.; Kirchner, I.; Machenhauer, B.
2003-04-01
This study focuses on the atmospheric temperature response to dust deduced from a new source of data - the European Reanalysis (ERA) increments. These increments are the systematic errors of global climate models, generated in reanalysis procedure. The model errors result not only from the lack of desert dust but also from a complex combination of many kinds of model errors. Over the Sahara desert the dust radiative effect is believed to be a predominant model defect which should significantly affect the increments. This dust effect was examined by considering correlation between the increments and remotely-sensed dust. Comparisons were made between April temporal variations of the ERA analysis increments and the variations of the Total Ozone Mapping Spectrometer aerosol index (AI) between 1979 and 1993. The distinctive structure was identified in the distribution of correlation composed of three nested areas with high positive correlation (> 0.5), low correlation, and high negative correlation (<-0.5). The innermost positive correlation area (PCA) is a large area near the center of the Sahara desert. For some local maxima inside this area the correlation even exceeds 0.8. The outermost negative correlation area (NCA) is not uniform. It consists of some areas over the eastern and western parts of North Africa with a relatively small amount of dust. Inside those areas both positive and negative high correlations exist at pressure levels ranging from 850 to 700 hPa, with the peak values near 775 hPa. Dust-forced heating (cooling) inside the PCA (NCA) is accompanied by changes in the static stability of the atmosphere above the dust layer. The reanalysis data of the European Center for Medium Range Weather Forecast(ECMWF) suggests that the PCA (NCA) corresponds mainly to anticyclonic (cyclonic) flow, negative (positive) vorticity, and downward (upward) airflow. These facts indicate an interaction between dust-forced heating /cooling and atmospheric circulation. The April correlation results are supported by the analysis of vertical distribution of dust concentration, derived from the 24-hour dust prediction system at Tel Aviv University (website: http://earth.nasa.proj.ac.il/dust/current/). For other months the analysis is more complicated because of the essential increasing of humidity along with the northward progress of the ITCZ and the significant impact on the increments.
Makowiec, Danuta; Struzik, Zbigniew; Graff, Beata; Wdowczyk-Szulc, Joanna; Zarczynska-Buchnowiecka, Marta; Gruchala, Marcin; Rynkiewicz, Andrzej
2013-01-01
Network models have been used to capture, represent and analyse characteristics of living organisms and general properties of complex systems. The use of network representations in the characterization of time series complexity is a relatively new but quickly developing branch of time series analysis. In particular, beat-to-beat heart rate variability can be mapped out in a network of RR-increments, which is a directed and weighted graph with vertices representing RR-increments and the edges of which correspond to subsequent increments. We evaluate entropy measures selected from these network representations in records of healthy subjects and heart transplant patients, and provide an interpretation of the results.
Kovic, Bruno; Guyatt, Gordon; Brundage, Michael; Thabane, Lehana; Bhatnagar, Neera; Xie, Feng
2016-09-02
There is an increasing number of new oncology drugs being studied, approved and put into clinical practice based on improvement in progression-free survival, when no overall survival benefits exist. In oncology, the association between progression-free survival and health-related quality of life is currently unknown, despite its importance for patients with cancer, and the unverified assumption that longer progression-free survival indicates improved health-related quality of life. Thus far, only 1 study has investigated this association, providing insufficient evidence and inconclusive results. The objective of this study protocol is to provide increased transparency in supporting a systematic summary of the evidence bearing on this association in oncology. Using the OVID platform in MEDLINE, Embase and Cochrane databases, we will conduct a systematic review of randomised controlled human trials addressing oncology issues published starting in 2000. A team of reviewers will, in pairs, independently screen and abstract data using standardised, pilot-tested forms. We will employ numerical integration to calculate mean incremental area under the curve between treatment groups in studies for health-related quality of life, along with total related error estimates, and a 95% CI around incremental area. To describe the progression-free survival to health-related quality of life association, we will construct a scatterplot for incremental health-related quality of life versus incremental progression-free survival. To estimate the association, we will use a weighted simple regression approach, comparing mean incremental health-related quality of life with either median incremental progression-free survival time or the progression-free survival HR, in the absence of overall survival benefit. Identifying direction and magnitude of association between progression-free survival and health-related quality of life is critically important in interpreting results of oncology trials. Systematic evidence produced from our study will contribute to improvement of patient care and practice of evidence-based medicine in oncology. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Incremental Bayesian Category Learning From Natural Language.
Frermann, Lea; Lapata, Mirella
2016-08-01
Models of category learning have been extensively studied in cognitive science and primarily tested on perceptual abstractions or artificial stimuli. In this paper, we focus on categories acquired from natural language stimuli, that is, words (e.g., chair is a member of the furniture category). We present a Bayesian model that, unlike previous work, learns both categories and their features in a single process. We model category induction as two interrelated subproblems: (a) the acquisition of features that discriminate among categories, and (b) the grouping of concepts into categories based on those features. Our model learns categories incrementally using particle filters, a sequential Monte Carlo method commonly used for approximate probabilistic inference that sequentially integrates newly observed data and can be viewed as a plausible mechanism for human learning. Experimental results show that our incremental learner obtains meaningful categories which yield a closer fit to behavioral data compared to related models while at the same time acquiring features which characterize the learned categories. (An earlier version of this work was published in Frermann and Lapata .). Copyright © 2015 Cognitive Science Society, Inc.
Weisbuch, Max; Grunberg, Rebecca L; Slepian, Michael L; Ambady, Nalini
2016-10-01
Beliefs about the malleability versus stability of traits (incremental vs. entity lay theories) have a profound impact on social cognition and self-regulation, shaping phenomena that range from the fundamental attribution error and group-based stereotyping to academic motivation and achievement. Less is known about the causes than the effects of these lay theories, and in the current work the authors examine the perception of facial emotion as a causal influence on lay theories. Specifically, they hypothesized that (a) within-person variability in facial emotion signals within-person variability in traits and (b) social environments replete with within-person variability in facial emotion encourage perceivers to endorse incremental lay theories. Consistent with Hypothesis 1, Study 1 participants were more likely to attribute dynamic (vs. stable) traits to a person who exhibited several different facial emotions than to a person who exhibited a single facial emotion across multiple images. Hypothesis 2 suggests that social environments support incremental lay theories to the extent that they include many people who exhibit within-person variability in facial emotion. Consistent with Hypothesis 2, participants in Studies 2-4 were more likely to endorse incremental theories of personality, intelligence, and morality after exposure to multiple individuals exhibiting within-person variability in facial emotion than after exposure to multiple individuals exhibiting a single emotion several times. Perceptions of within-person variability in facial emotion-rather than perceptions of simple diversity in facial emotion-were responsible for these effects. Discussion focuses on how social ecologies shape lay theories. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Planar isotropy of passive scalar turbulent mixing with a mean perpendicular gradient.
Danaila, L; Dusek, J; Le Gal, P; Anselmet, F; Brun, C; Pumir, A
1999-08-01
A recently proposed evolution equation [Vaienti et al., Physica D 85, 405 (1994)] for the probability density functions (PDF's) of turbulent passive scalar increments obtained under the assumptions of fully three-dimensional homogeneity and isotropy is submitted to validation using direct numerical simulation (DNS) results of the mixing of a passive scalar with a nonzero mean gradient by a homogeneous and isotropic turbulent velocity field. It is shown that this approach leads to a quantitatively correct balance between the different terms of the equation, in a plane perpendicular to the mean gradient, at small scales and at large Péclet number. A weaker assumption of homogeneity and isotropy restricted to the plane normal to the mean gradient is then considered to derive an equation describing the evolution of the PDF's as a function of the spatial scale and the scalar increments. A very good agreement between the theory and the DNS data is obtained at all scales. As a particular case of the theory, we derive a generalized form for the well-known Yaglom equation (the isotropic relation between the second-order moments for temperature increments and the third-order velocity-temperature mixed moments). This approach allows us to determine quantitatively how the integral scale properties influence the properties of mixing throughout the whole range of scales. In the simple configuration considered here, the PDF's of the scalar increments perpendicular to the mean gradient can be theoretically described once the sources of inhomogeneity and anisotropy at large scales are correctly taken into account.
Constraining the Carbon Cycle through Tree Rings: A Case Study of the Valles Caldera, NM
NASA Astrophysics Data System (ADS)
Alexander, M. R.; Babst, F.; Moore, D. J.; Trouet, V.
2013-12-01
Terrestrial ecosystems take up approximately 120 Gt of carbon as Gross Primary Productivity (GPP) from the atmosphere annually, but it is challenging to track the allocation of that carbon throughout the biosphere. Here, we combine eddy covariance measurements of net carbon uptake with above ground biomass increments derived from tree-ring data to better understand the interannual variability associated with biomass accumulation. In the summer of 2012, we collected tree cores near two eddy covariance towers in the Jemez Mountains of northern New Mexico. One tower was located in an upper elevation mixed-conifer forest, and the other in a lower elevation Pinus ponderosa forest. Our analysis shows that the annual above ground biomass increment accounted for approximately 40% of the GPP at the lower elevation Pinus ponderosa site and approximately 70% of GPP at the upper elevation mixed-conifer site. We have also used the above ground biomass increment to constrain the Simple Photosynthesis EvapoTranspiration (SiPNET) model to gain a better understanding of allocation within the forest. Tree growth at both elevations was negatively influenced by spring (March-June) temperature and positively by cool season (October-April) precipitation and warm (May-September) and cool season PDSI. We also analyzed the six most extreme temperature and moisture (PDSI) years of the record to determine the response of productivity to climatic forcing. During the driest years, biomass production was reduced by 40% at the upper elevation site and 43% at the lower elevation site. During the hottest years of the record the biomass decreased 28% at the upper site and 45% at the lower site. Our results indicate that tree rings can be used to effectively constrain the above ground biomass component of a forest's carbon budget and to estimate allocation of carbon to woody biomass as a function of climate. However, many variables remain unknown. The combined results of the extreme year analyses and the derived biomass increments illustrate that the forests at the Valles Caldera are considerably less productive during years of extreme drought and warmer than average temperatures. With future projections calling for consecutive years of extreme conditions in the American Southwest, this could have a substantial effect on the overall productivity of these forests.
Multi-scale finite element modeling allows the mechanics of amphibian neurulation to be elucidated
NASA Astrophysics Data System (ADS)
Chen, Xiaoguang; Brodland, G. Wayne
2008-03-01
The novel multi-scale computational approach introduced here makes possible a new means for testing hypotheses about the forces that drive specific morphogenetic movements. A 3D model based on this approach is used to investigate neurulation in the axolotl (Ambystoma mexicanum), a type of amphibian. The model is based on geometric data from 3D surface reconstructions of live embryos and from serial sections. Tissue properties are described by a system of cell-based constitutive equations, and parameters in the equations are determined from physical tests. The model includes the effects of Shroom-activated neural ridge reshaping and lamellipodium-driven convergent extension. A typical whole-embryo model consists of 10 239 elements and to run its 100 incremental time steps requires 2 days. The model shows that a normal phenotype does not result if lamellipodium forces are uniform across the width of the neural plate; but it can result if the lamellipodium forces decrease from a maximum value at the mid-sagittal plane to zero at the plate edge. Even the seemingly simple motions of neurulation are found to contain important features that would remain hidden, they were not studied using an advanced computational model. The present model operates in a setting where data are extremely sparse and an important outcome of the study is a better understanding of the role of computational models in such environments.
Multi-scale finite element modeling allows the mechanics of amphibian neurulation to be elucidated.
Chen, Xiaoguang; Brodland, G Wayne
2008-04-11
The novel multi-scale computational approach introduced here makes possible a new means for testing hypotheses about the forces that drive specific morphogenetic movements. A 3D model based on this approach is used to investigate neurulation in the axolotl (Ambystoma mexicanum), a type of amphibian. The model is based on geometric data from 3D surface reconstructions of live embryos and from serial sections. Tissue properties are described by a system of cell-based constitutive equations, and parameters in the equations are determined from physical tests. The model includes the effects of Shroom-activated neural ridge reshaping and lamellipodium-driven convergent extension. A typical whole-embryo model consists of 10,239 elements and to run its 100 incremental time steps requires 2 days. The model shows that a normal phenotype does not result if lamellipodium forces are uniform across the width of the neural plate; but it can result if the lamellipodium forces decrease from a maximum value at the mid-sagittal plane to zero at the plate edge. Even the seemingly simple motions of neurulation are found to contain important features that would remain hidden, they were not studied using an advanced computational model. The present model operates in a setting where data are extremely sparse and an important outcome of the study is a better understanding of the role of computational models in such environments.
NASA Astrophysics Data System (ADS)
Wang, Hongjin; Hsieh, Sheng-Jen; Peng, Bo; Zhou, Xunfei
2016-07-01
A method without requirements on knowledge about thermal properties of coatings or those of substrates will be interested in the industrial application. Supervised machine learning regressions may provide possible solution to the problem. This paper compares the performances of two regression models (artificial neural networks (ANN) and support vector machines for regression (SVM)) with respect to coating thickness estimations made based on surface temperature increments collected via time resolved thermography. We describe SVM roles in coating thickness prediction. Non-dimensional analyses are conducted to illustrate the effects of coating thicknesses and various factors on surface temperature increments. It's theoretically possible to correlate coating thickness with surface increment. Based on the analyses, the laser power is selected in such a way: during the heating, the temperature increment is high enough to determine the coating thickness variance but low enough to avoid surface melting. Sixty-one pain-coated samples with coating thicknesses varying from 63.5 μm to 571 μm are used to train models. Hyper-parameters of the models are optimized by 10-folder cross validation. Another 28 sets of data are then collected to test the performance of the three methods. The study shows that SVM can provide reliable predictions of unknown data, due to its deterministic characteristics, and it works well when used for a small input data group. The SVM model generates more accurate coating thickness estimates than the ANN model.
External Device to Incrementally Skid the Habitat (E-DISH)
NASA Technical Reports Server (NTRS)
Brazell, J. W.; Introne, Steve; Bedell, Lisa; Credle, Ben; Holp, Graham; Ly, Siao; Tait, Terry
1994-01-01
A Mars habitat transport system was designed as part of the NASA Mars exploration program. The transport system, the External Device to Incrementally Skid the Habitat (E - DISH), will be used to transport Mars habitats from their landing sites to the colony base and will be detached after unloading. The system requirements for Mars were calculated and scaled for model purposes. Specific model materials are commonly found and recommendations for materials for the Mars design are included.
NASA Technical Reports Server (NTRS)
Messina, Michael D.
1995-01-01
The method described in this report is intended to present an overview of a process developed to extract the forebody aerodynamic increments from flight tests. The process to determine the aerodynamic increments (rolling pitching, and yawing moments, Cl, Cm, Cn, respectively) for the forebody strake controllers added to the F/A - 18 High Alpha Research Vehicle (HARV) aircraft was developed to validate the forebody strake aerodynamic model used in simulation.
An innovative privacy preserving technique for incremental datasets on cloud computing.
Aldeen, Yousra Abdul Alsahib S; Salleh, Mazleena; Aljeroudi, Yazan
2016-08-01
Cloud computing (CC) is a magnificent service-based delivery with gigantic computer processing power and data storage across connected communications channels. It imparted overwhelming technological impetus in the internet (web) mediated IT industry, where users can easily share private data for further analysis and mining. Furthermore, user affable CC services enable to deploy sundry applications economically. Meanwhile, simple data sharing impelled various phishing attacks and malware assisted security threats. Some privacy sensitive applications like health services on cloud that are built with several economic and operational benefits necessitate enhanced security. Thus, absolute cyberspace security and mitigation against phishing blitz became mandatory to protect overall data privacy. Typically, diverse applications datasets are anonymized with better privacy to owners without providing all secrecy requirements to the newly added records. Some proposed techniques emphasized this issue by re-anonymizing the datasets from the scratch. The utmost privacy protection over incremental datasets on CC is far from being achieved. Certainly, the distribution of huge datasets volume across multiple storage nodes limits the privacy preservation. In this view, we propose a new anonymization technique to attain better privacy protection with high data utility over distributed and incremental datasets on CC. The proficiency of data privacy preservation and improved confidentiality requirements is demonstrated through performance evaluation. Copyright © 2016 Elsevier Inc. All rights reserved.
Topological structure and mechanics of glassy polymer networks.
Elder, Robert M; Sirk, Timothy W
2017-11-22
The influence of chain-level network architecture (i.e., topology) on mechanics was explored for unentangled polymer networks using a blend of coarse-grained molecular simulations and graph-theoretic concepts. A simple extension of the Watts-Strogatz model is proposed to control the graph properties of the network such that the corresponding physical properties can be studied with simulations. The architecture of polymer networks assembled with a dynamic curing approach were compared with the extended Watts-Strogatz model, and found to agree surprisingly well. The final cured structures of the dynamically-assembled networks were nearly an intermediate between lattice and random connections due to restrictions imposed by the finite length of the chains. Further, the uni-axial stress response, character of the bond breaking, and non-affine displacements of fully-cured glassy networks were analyzed as a function of the degree of disorder in the network architecture. It is shown that the architecture strongly affects the network stability, flow stress, onset of bond breaking, and ultimate stress while leaving the modulus and yield point nearly unchanged. The results show that internal restrictions imposed by the network architecture alter the chain-level response through changes to the crosslink dynamics in the flow regime and through the degree of coordinated chain failure at the ultimate stress. The properties considered here are shown to be sensitive to even incremental changes to the architecture and, therefore, the overall network architecture, beyond simple defects, is predicted to be a meaningful physical parameter in the mechanics of glassy polymer networks.
Health risk assessments for alumina refineries.
Donoghue, A Michael; Coffey, Patrick S
2014-05-01
To describe contemporary air dispersion modeling and health risk assessment methodologies applied to alumina refineries and to summarize recent results. Air dispersion models using emission source and meteorological data have been used to assess ground-level concentrations (GLCs) of refinery emissions. Short-term (1-hour and 24-hour average) GLCs and annual average GLCs have been used to assess acute health, chronic health, and incremental carcinogenic risks. The acute hazard index can exceed 1 close to refineries, but it is typically less than 1 at neighboring residential locations. The chronic hazard index is typically substantially less than 1. The incremental carcinogenic risk is typically less than 10(-6). The risks of acute health effects are adequately controlled, and the risks of chronic health effects and incremental carcinogenic risks are negligible around referenced alumina refineries.
A new technique for the characterization of chaff elements
NASA Astrophysics Data System (ADS)
Scholfield, David; Myat, Maung; Dauby, Jason; Fesler, Jonathon; Bright, Jonathan
2011-07-01
A new technique for the experimental characterization of electromagnetic chaff based on Inverse Synthetic Aperture Radar is presented. This technique allows for the characterization of as few as one filament of chaff in a controlled anechoic environment allowing for stability and repeatability of experimental results. This approach allows for a deeper understanding of the fundamental phenomena of electromagnetic scattering from chaff through an incremental analysis approach. Chaff analysis can now begin with a single element and progress through the build-up of particles into pseudo-cloud structures. This controlled incremental approach is supported by an identical incremental modeling and validation process. Additionally, this technique has the potential to produce considerable savings in financial and schedule cost and provides a stable and repeatable experiment to aid model valuation.
Incremental checking of Master Data Management model based on contextual graphs
NASA Astrophysics Data System (ADS)
Lamolle, Myriam; Menet, Ludovic; Le Duc, Chan
2015-10-01
The validation of models is a crucial step in distributed heterogeneous systems. In this paper, an incremental validation method is proposed in the scope of a Model Driven Engineering (MDE) approach, which is used to develop a Master Data Management (MDM) field represented by XML Schema models. The MDE approach presented in this paper is based on the definition of an abstraction layer using UML class diagrams. The validation method aims to minimise the model errors and to optimisethe process of model checking. Therefore, the notion of validation contexts is introduced allowing the verification of data model views. Description logics specify constraints that the models have to check. An experimentation of the approach is presented through an application developed in ArgoUML IDE.
Lamberts, Mark P; Özdemir, Cihan; Drenth, Joost P H; van Laarhoven, Cornelis J H M; Westert, Gert P; Kievit, Wietske
2017-06-01
The aim of this study was to determine the cost-effectiveness of a new strategy for the preoperative detection of patients that will likely benefit from a cholecystectomy, using simple criteria that can be applied by surgeons. Criteria for a cholecystectomy indication are: (1) having episodic pain; (2) onset of pain 1 year or less before the outpatient clinic visit. The cost-effectiveness of the new strategy was evaluated against current practice using a decision analytic model. The incremental cost-effectiveness of applying criteria for a cholecystectomy for a patient with abdominal pain and gallstones was compared to applying no criteria. The incremental cost-effectiveness ratio (ICER) was expressed as extra costs to be invested to gain one more patient with absence of pain. Scenarios were analyzed to assess the influence of applying different criteria. The new strategy of applying one out of two criteria resulted in a 4 % higher mean proportion of patients with absence of pain compared to current practice with similar costs. The 95 % upper limit of the ICER was €4114 ($4633) per extra patient with relief of upper abdominal pain. Application of two out of two criteria resulted in a 3 % lower mean proportion of patients with absence of pain with lower costs. The new strategy of using one out of two strict selection criteria may be an effective but also a cost-effective method to reduce the proportion of patients with pain after cholecystectomy.
NASA Astrophysics Data System (ADS)
de C. Teixeira, Antônio H.; Lopes, Hélio L.; Hernandez, Fernando B. T.; Scherer-Warren, Morris; Andrade, Ricardo G.; Neale, Christopher M. U.
2013-10-01
The Nilo Coelho irrigation scheme, located in the semi-arid region of Brazil, is highlighted as an important agricultural irrigated perimeter. Considering the scenario of this fast land use change, the development and application of suitable tools to quantify the trends of the water productivity parameters on a large scale is important. To analyse the effects of land use change within this perimeter, the large-scale values of biomass production (BIO) and actual evapotranspiration (ET) were quantified from 1992 to 2011, under the naturally driest conditions along the year. Monteith's radiation model was applied for estimating the absorbed photosynthetically active radiation (APAR), while the SAFER (Simple Algorithm For Evapotranspiration Retrieving) algorithm was used to retrieve ET. The highest incremental BIO values happened during the years of 1999 and 2005, as a result of the increased agricultural area under production inside the perimeter, when the average differences between irrigated crops and natural vegetation were more than 70 kg ha-1 d-1. Comparing the average ET rates of 1992 (1.6 mm d-1) with those for 2011 (3.1 mm d-1), it was verified that the extra water consumption doubled because of the increments of irrigated areas along the years. More uniformity along the years on both water productivity parameters occurred for natural vegetation, evidenced by the lower values of standard deviation when comparing to irrigated crops. The heterogeneity of ET values under irrigation conditions are due to the different species, crop stages, cultural and water managements.
Glyburide - Novel Prophylaxis and Effective Treatment for Traumatic Brain Injury
2010-08-01
tested for incremental lear ning and for rapid lear ning. Incremental learning was significantly abnormal on days 14–18, as were the memory probe and...Computational biology - modeling of primary blast effects on the central nervous system. Neuroimage. 47 Suppl 2, T10-T20. MOSS,W.C., KING ,M.J., and
The effects of thinning and gypsy moth defoliation on wood volume growth in oaks
Mary Ann Fajvan; Jim Rentch; Kurt Gottschalk
2008-01-01
Stem dissection and dendroecological methods were used to examine the effects of thinning and defoliation by gypsy moth (Lymantria dispar L.) on wood volume increment in oaks (Quercus rubra L., Q. alba L., Q. prinus L.). A model was developed to evaluate radial volume increment growth at three...
2014-01-01
Background Infantile Pompe disease is a rare metabolic disease. Patients generally do not survive the first year of life. Enzyme replacement therapy (ERT) has proven to have substantial effects on survival in infantile Pompe disease. However, the costs of therapy are very high. In this paper, we assess the cost-effectiveness of enzyme replacement therapy in infantile Pompe disease. Methods A patient simulation model was used to compare costs and effects of ERT with costs of effects of supportive therapy (ST). The model was filled with data on survival, quality of life and costs. For both arms of the model, data on survival were obtained from international literature. In addition, survival as observed among 20 classic-infantile Dutch patients, who all received ERT, was used. Quality of life was measured using the EQ-5D and assumed to be the same in both treatment groups. Costs included the costs of ERT (which depend on a child’s weight), infusions, costs of other health care utilization, and informal care. A lifetime time horizon was used, with 6-month time cycles. Results Life expectancy was significantly longer in the ERT group than in the ST group. On average, ST receiving patients were modelled not to survive the first half year of life; whereas the life expectancy in the ERT patients was modelled to be almost 14 years. Lifetime incremental QALYs were 6.8. Incremental costs were estimated to be € 7.0 million, which primarily consisted of treatment costs (95%). The incremental costs per QALY were estimated to be € 1.0 million (range sensitivity analyses: € 0.3 million - € 1.3 million). The incremental cost per life year gained was estimated to be € 0.5 million. Conclusions The incremental costs per QALY ratio is far above the conventional threshold values. Results from univariate and probabilistic sensitivity analyses showed the robustness of the results. PMID:24884717
Geometry Genetics and Evolution
NASA Astrophysics Data System (ADS)
Siggia, Eric
2011-03-01
Darwin argued that highly perfected organs such as the vertebrate eye could evolve by a series of small changes, each of which conferred a selective advantage. In the context of gene networks, this idea can be recast into a predictive algorithm, namely find networks that can be built by incremental adaptation (gradient search) to perform some task. It embodies a ``kinetic'' view of evolution where a solution that is quick to evolve is preferred over a global optimum. Examples of biochemical kinetic networks were evolved for temporal adaptation, temperature compensated entrainable clocks, explore-exploit trade off in signal discrimination, will be presented as well as networks that model the spatially periodic somites (vertebrae) and HOX gene expression in the vertebrate embryo. These models appear complex by the criterion of 19th century applied mathematics since there is no separation of time or spatial scales, yet they are all derivable by gradient optimization of simple functions (several in the Pareto evolution) often based on the Shannon entropy of the time or spatial response. Joint work with P. Francois, Physics Dept. McGill University. With P. Francois, Physics Dept. McGill University
The theoretical limit to plant productivity.
DeLucia, Evan H; Gomez-Casanovas, Nuria; Greenberg, Jonathan A; Hudiburg, Tara W; Kantola, Ilsa B; Long, Stephen P; Miller, Adam D; Ort, Donald R; Parton, William J
2014-08-19
Human population and economic growth are accelerating the demand for plant biomass to provide food, fuel, and fiber. The annual increment of biomass to meet these needs is quantified as net primary production (NPP). Here we show that an underlying assumption in some current models may lead to underestimates of the potential production from managed landscapes, particularly of bioenergy crops that have low nitrogen requirements. Using a simple light-use efficiency model and the theoretical maximum efficiency with which plant canopies convert solar radiation to biomass, we provide an upper-envelope NPP unconstrained by resource limitations. This theoretical maximum NPP approached 200 tC ha(-1) yr(-1) at point locations, roughly 2 orders of magnitude higher than most current managed or natural ecosystems. Recalculating the upper envelope estimate of NPP limited by available water reduced it by half or more in 91% of the land area globally. While the high conversion efficiencies observed in some extant plants indicate great potential to increase crop yields without changes to the basic mechanism of photosynthesis, particularly for crops with low nitrogen requirements, realizing such high yields will require improvements in water use efficiency.
Towards an entropy-based detached-eddy simulation
NASA Astrophysics Data System (ADS)
Zhao, Rui; Yan, Chao; Li, XinLiang; Kong, WeiXuan
2013-10-01
A concept of entropy increment ratio ( s¯) is introduced for compressible turbulence simulation through a series of direct numerical simulations (DNS). s¯ represents the dissipation rate per unit mechanical energy with the benefit of independence of freestream Mach numbers. Based on this feature, we construct the shielding function f s to describe the boundary layer region and propose an entropy-based detached-eddy simulation method (SDES). This approach follows the spirit of delayed detached-eddy simulation (DDES) proposed by Spalart et al. in 2005, but it exhibits much better behavior after their performances are compared in the following flows, namely, pure attached flow with thick boundary layer (a supersonic flat-plate flow with high Reynolds number), fully separated flow (the supersonic base flow), and separated-reattached flow (the supersonic cavity-ramp flow). The Reynolds-averaged Navier-Stokes (RANS) resolved region is reliably preserved and the modeled stress depletion (MSD) phenomenon which is inherent in DES and DDES is partly alleviated. Moreover, this new hybrid strategy is simple and general, making it applicable to other models related to the boundary layer predictions.
NASA Astrophysics Data System (ADS)
Badfar, Homayoun; Motlagh, Saber Yekani; Sharifi, Abbas
2017-10-01
In this paper, biomagnetic blood flow in the stenosis vessel under the effect of the solenoid magnetic field is studied using the ferrohydrodynamics (FHD) model. The parabolic profile is considered at an inlet of the axisymmetric stenosis vessel. Blood is modeled as electrically non-conducting, Newtonian and homogeneous fluid. Finite volume and the SIMPLE (Semi-Implicit Method for Pressure Linked Equations) algorithm are utilized to discretize governing equations. The investigation is studied at different magnetic numbers ( MnF=164, 328, 1640 and 3280) and the number of the coil loops (three, five and nine loops). Results indicate an increase in heat transfer, wall shear stress and energy loss (pressure drop) with an increment in the magnetic number (ratio of Kelvin force to dynamic pressure force), arising from the FHD, and the number of solenoid loops. Furthermore, the flow pattern is affected by the magnetic field, and the temperature of blood can be decreased up to 1.48 {}°C under the effect of the solenoid magnetic field with nine loops and reference magnetic field ( B0) of 2 tesla.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Milani, Gabriele, E-mail: milani@stru.polimi.it; Olivito, Renato S.; Tralli, Antonio
2014-10-06
The buckling behavior of slender unreinforced masonry (URM) walls subjected to axial compression and out-of-plane lateral loads is investigated through a combined experimental and numerical homogenizedapproach. After a preliminary analysis performed on a unit cell meshed by means of elastic FEs and non-linear interfaces, macroscopic moment-curvature diagrams so obtained are implemented at a structural level, discretizing masonry by means of rigid triangular elements and non-linear interfaces. The non-linear incremental response of the structure is accounted for a specific quadratic programming routine. In parallel, a wide experimental campaign is conducted on walls in two way bending, with the double aim ofmore » both validating the numerical model and investigating the behavior of walls that may not be reduced to simple cantilevers or simply supported beams. Panels investigated are dry-joint in scale square walls simply supported at the base and on a vertical edge, exhibiting the classical Rondelet’s mechanism. The results obtained are compared with those provided by the numerical model.« less
Stochastic Ocean Eddy Perturbations in a Coupled General Circulation Model.
NASA Astrophysics Data System (ADS)
Howe, N.; Williams, P. D.; Gregory, J. M.; Smith, R. S.
2014-12-01
High-resolution ocean models, which are eddy permitting and resolving, require large computing resources to produce centuries worth of data. Also, some previous studies have suggested that increasing resolution does not necessarily solve the problem of unresolved scales, because it simply introduces a new set of unresolved scales. Applying stochastic parameterisations to ocean models is one solution that is expected to improve the representation of small-scale (eddy) effects without increasing run-time. Stochastic parameterisation has been shown to have an impact in atmosphere-only models and idealised ocean models, but has not previously been studied in ocean general circulation models. Here we apply simple stochastic perturbations to the ocean temperature and salinity tendencies in the low-resolution coupled climate model, FAMOUS. The stochastic perturbations are implemented according to T(t) = T(t-1) + (ΔT(t) + ξ(t)), where T is temperature or salinity, ΔT is the corresponding deterministic increment in one time step, and ξ(t) is Gaussian noise. We use high-resolution HiGEM data coarse-grained to the FAMOUS grid to provide information about the magnitude and spatio-temporal correlation structure of the noise to be added to the lower resolution model. Here we present results of adding white and red noise, showing the impacts of an additive stochastic perturbation on mean climate state and variability in an AOGCM.
Using Hand Grip Force as a Correlate of Longitudinal Acceleration Comfort for Rapid Transit Trains
Guo, Beiyuan; Gan, Weide; Fang, Weining
2015-01-01
Longitudinal acceleration comfort is one of the essential metrics used to evaluate the ride comfort of train. The aim of this study was to investigate the effectiveness of using hand grip force as a correlate of longitudinal acceleration comfort of rapid transit trains. In the paper, a motion simulation system was set up and a two-stage experiment was designed to investigate the role of the grip force on the longitudinal comfort of rapid transit trains. The results of the experiment show that the incremental grip force was linearly correlated with the longitudinal acceleration value, while the incremental grip force had no correlation with the direction of the longitudinal acceleration vector. The results also show that the effects of incremental grip force and acceleration duration on the longitudinal comfort of rapid transit trains were significant. Based on multiple regression analysis, a step function model was established to predict the longitudinal comfort of rapid transit trains using the incremental grip force and the acceleration duration. The feasibility and practicably of the model was verified by a field test. Furthermore, a comparative analysis shows that the motion simulation system and the grip force based model were valid to support the laboratory studies on the longitudinal comfort of rapid transit trains. PMID:26147730
Alkoshi, Salem; Maimaiti, Namaitijiang; Dahlui, Maznah
2014-01-01
Background Rotavirus infection is a major cause of childhood diarrhea in Libya. The objective of this study is to evaluate the cost-effectiveness of rotavirus vaccination in that country. Methods We used a published decision tree model that has been adapted to the Libyan situation to analyze a birth cohort of 160,000 children. The evaluation of diarrhea events in three public hospitals helped to estimate the rotavirus burden. The economic analysis was done from two perspectives: health care provider and societal. Univariate sensitivity analyses were conducted to assess uncertainty in some values of the variables selected. Results The three hospitals received 545 diarrhea patients aged≤5 with 311 (57%) rotavirus positive test results during a 9-month period. The societal cost for treatment of a case of rotavirus diarrhea was estimated at US$ 661/event. The incremental cost-effectiveness ratio with a vaccine price of US$ 27 per course was US$ 8,972 per quality-adjusted life year gained from the health care perspective. From a societal perspective, the analysis shows cost savings of around US$ 16 per child. Conclusion The model shows that rotavirus vaccination could be economically a very attractive intervention in Libya. PMID:25499622
Developing Effective and Efficient care pathways in chronic Pain: DEEP study protocol.
Durham, Justin; Breckons, Matthew; Araujo-Soares, Vera; Exley, Catherine; Steele, Jimmy; Vale, Luke
2014-01-21
Pain affecting the face or mouth and lasting longer than three months ("chronic orofacial pain", COFP) is relatively common in the UK. This study aims to describe and model current care pathways for COFP patients, identify areas where current pathways could be modified, and model whether these changes would improve outcomes for patients and use resources more efficiently. The study takes a prospective operations research approach. A cohort of primary and secondary care COFP patients (n = 240) will be recruited at differing stages of their care in order to follow and analyse their journey through care. The cohort will be followed for two years with data collected at baseline 6, 12, 18, and 24 months on: 1) experiences of the care pathway and its impacts; 2) quality of life; 3) pain; 4) use of health services and costs incurred; 5) illness perceptions. Qualitative in-depth interviews will be used to collect data on patient experiences from a purposive sub-sample of the total cohort (n = 30) at baseline, 12 and 24 months. Four separate appraisal groups (public, patient, clincian, service manager/commissioning) will then be given data from the pathway analysis and asked to determine their priority areas for change. The proposals from appraisal groups will inform an economic modelling exercise. Findings from the economic modelling will be presented as incremental costs, Quality Adjusted Life Years (QALYs), and the incremental cost per QALY gained. At the end of the modelling a series of recommendations for service change will be available for implementation or further trial if necessary. The recent white paper on health and the report from the NHS Forum identified chronic conditions as priority areas and whilst technology can improve outcomes, so can simple, appropriate and well-defined clinical care pathways. Understanding the opportunity cost related to care pathways benefits the wider NHS. This research develops a method to help design efficient systems built around one condition (COFP), but the principles should be applicable to a wide range of other chronic and long-term conditions.
Westerhout, K Y; Verheggen, B G; Schreder, C H; Augustin, M
2012-01-01
An economic evaluation was conducted to assess the outcomes and costs as well as cost-effectiveness of the following grass-pollen immunotherapies: OA (Oralair; Stallergenes S.A., Antony, France) vs GRZ (Grazax; ALK-Abelló, Hørsholm, Denmark), and ALD (Alk Depot SQ; ALK-Abelló) (immunotherapy agents alongside symptomatic medication) and symptomatic treatment alone for grass pollen allergic rhinoconjunctivitis. The costs and outcomes of 3-year treatment were assessed for a period of 9 years using a Markov model. Treatment efficacy was estimated using an indirect comparison of available clinical trials with placebo as a common comparator. Estimates for immunotherapy discontinuation, occurrence of asthma, health state utilities, drug costs, resource use, and healthcare costs were derived from published sources. The analysis was conducted from the insurant's perspective including public and private health insurance payments and co-payments by insurants. Outcomes were reported as quality-adjusted life years (QALYs) and symptom-free days. The uncertainty around incremental model results was tested by means of extensive deterministic univariate and probabilistic multivariate sensitivity analyses. In the base case analysis the model predicted a cost-utility ratio of OA vs symptomatic treatment of €14,728 per QALY; incremental costs were €1356 (95%CI: €1230; €1484) and incremental QALYs 0.092 (95%CI: 0.052; 0.140). OA was the dominant strategy compared to GRZ and ALD, with estimated incremental costs of -€1142 (95%CI: -€1255; -€1038) and -€54 (95%CI: -€188; €85) and incremental QALYs of 0.015 (95%CI: -0.025; 0.056) and 0.027 (95%CI: -0.022; 0.075), respectively. At a willingness-to-pay threshold of €20,000, the probability of OA being the most cost-effective treatment was predicted to be 79%. Univariate sensitivity analyses show that incremental outcomes were moderately sensitive to changes in efficacy estimates. The main study limitation was the requirement of an indirect comparison involving several steps to assess relative treatment effects. The analysis suggests OA to be cost-effective compared to GRZ and ALD, and a symptomatic treatment. Sensitivity analyses showed that uncertainty surrounding treatment efficacy estimates affected the model outcomes.
Statistically Controlling for Confounding Constructs Is Harder than You Think
Westfall, Jacob; Yarkoni, Tal
2016-01-01
Social scientists often seek to demonstrate that a construct has incremental validity over and above other related constructs. However, these claims are typically supported by measurement-level models that fail to consider the effects of measurement (un)reliability. We use intuitive examples, Monte Carlo simulations, and a novel analytical framework to demonstrate that common strategies for establishing incremental construct validity using multiple regression analysis exhibit extremely high Type I error rates under parameter regimes common in many psychological domains. Counterintuitively, we find that error rates are highest—in some cases approaching 100%—when sample sizes are large and reliability is moderate. Our findings suggest that a potentially large proportion of incremental validity claims made in the literature are spurious. We present a web application (http://jakewestfall.org/ivy/) that readers can use to explore the statistical properties of these and other incremental validity arguments. We conclude by reviewing SEM-based statistical approaches that appropriately control the Type I error rate when attempting to establish incremental validity. PMID:27031707
Kerr, Kathleen F; Meisner, Allison; Thiessen-Philbrook, Heather; Coca, Steven G; Parikh, Chirag R
2014-08-07
The field of nephrology is actively involved in developing biomarkers and improving models for predicting patients' risks of AKI and CKD and their outcomes. However, some important aspects of evaluating biomarkers and risk models are not widely appreciated, and statistical methods are still evolving. This review describes some of the most important statistical concepts for this area of research and identifies common pitfalls. Particular attention is paid to metrics proposed within the last 5 years for quantifying the incremental predictive value of a new biomarker. Copyright © 2014 by the American Society of Nephrology.
Automated single-slide staining device
NASA Technical Reports Server (NTRS)
Wilkins, J. R.; Mills, S. M. (Inventor)
1977-01-01
A simple apparatus and method is disclosed for making individual single Gram stains on bacteria inoculated slides to assist in classifying bacteria in the laboratory as Gram-positive or Gram-negative. The apparatus involves positioning a single inoculated slide in a stationary position and thereafter automatically and sequentially flooding the slide with increments of a primary stain, a mordant, a decolorizer, a counterstain and a wash solution in a sequential manner without the individual lab technician touching the slide and with minimum danger of contamination thereof from other slides.
Incremental online learning in high dimensions.
Vijayakumar, Sethu; D'Souza, Aaron; Schaal, Stefan
2005-12-01
Locally weighted projection regression (LWPR) is a new algorithm for incremental nonlinear function approximation in high-dimensional spaces with redundant and irrelevant input dimensions. At its core, it employs nonparametric regression with locally linear models. In order to stay computationally efficient and numerically robust, each local model performs the regression analysis with a small number of univariate regressions in selected directions in input space in the spirit of partial least squares regression. We discuss when and how local learning techniques can successfully work in high-dimensional spaces and review the various techniques for local dimensionality reduction before finally deriving the LWPR algorithm. The properties of LWPR are that it (1) learns rapidly with second-order learning methods based on incremental training, (2) uses statistically sound stochastic leave-one-out cross validation for learning without the need to memorize training data, (3) adjusts its weighting kernels based on only local information in order to minimize the danger of negative interference of incremental learning, (4) has a computational complexity that is linear in the number of inputs, and (5) can deal with a large number of-possibly redundant-inputs, as shown in various empirical evaluations with up to 90 dimensional data sets. For a probabilistic interpretation, predictive variance and confidence intervals are derived. To our knowledge, LWPR is the first truly incremental spatially localized learning method that can successfully and efficiently operate in very high-dimensional spaces.
Health Risk Assessments for Alumina Refineries
Coffey, Patrick S.
2014-01-01
Objective: To describe contemporary air dispersion modeling and health risk assessment methodologies applied to alumina refineries and to summarize recent results. Methods: Air dispersion models using emission source and meteorological data have been used to assess ground-level concentrations (GLCs) of refinery emissions. Short-term (1-hour and 24-hour average) GLCs and annual average GLCs have been used to assess acute health, chronic health, and incremental carcinogenic risks. Results: The acute hazard index can exceed 1 close to refineries, but it is typically less than 1 at neighboring residential locations. The chronic hazard index is typically substantially less than 1. The incremental carcinogenic risk is typically less than 10−6. Conclusions: The risks of acute health effects are adequately controlled, and the risks of chronic health effects and incremental carcinogenic risks are negligible around referenced alumina refineries. PMID:24806721
Thermomechanical simulations and experimental validation for high speed incremental forming
NASA Astrophysics Data System (ADS)
Ambrogio, Giuseppina; Gagliardi, Francesco; Filice, Luigino; Romero, Natalia
2016-10-01
Incremental sheet forming (ISF) consists in deforming only a small region of the workspace through a punch driven by a NC machine. The drawback of this process is its slowness. In this study, a high speed variant has been investigated from both numerical and experimental points of view. The aim has been the design of a FEM model able to perform the material behavior during the high speed process by defining a thermomechanical model. An experimental campaign has been performed by a CNC lathe with high speed to test process feasibility. The first results have shown how the material presents the same performance than in conventional speed ISF and, in some cases, better material behavior due to the temperature increment. An accurate numerical simulation has been performed to investigate the material behavior during the high speed process confirming substantially experimental evidence.
A comparison of simple shear characterization methods for composite laminates
NASA Technical Reports Server (NTRS)
Yeow, Y. T.; Brinson, H. F.
1978-01-01
Various methods for the shear stress/strain characterization of composite laminates are examined and their advantages and limitations are briefly discussed. Experimental results and the necessary accompanying analysis are then presented and compared for three simple shear characterization procedures. These are the off-axis tensile test method, the (+/- 45 deg)s tensile test method and the (0/90 deg)s symmetric rail shear test method. It is shown that the first technique indicates the shear properties of the graphite/epoxy laminates investigated are fundamentally brittle in nature while the latter two methods tend to indicate that these laminates are fundamentally ductile in nature. Finally, predictions of incrementally determined tensile stress/strain curves utilizing the various different shear behaviour methods as input information are presented and discussed.
A comparison of simple shear characterization methods for composite laminates
NASA Technical Reports Server (NTRS)
Yeow, Y. T.; Brinson, H. F.
1977-01-01
Various methods for the shear stress-strain characterization of composite laminates are examined, and their advantages and limitations are briefly discussed. Experimental results and the necessary accompanying analysis are then presented and compared for three simple shear characterization procedures. These are the off-axis tensile test method, the + or - 45 degs tensile test method and the 0 deg/90 degs symmetric rail shear test method. It is shown that the first technique indicates that the shear properties of the G/E laminates investigated are fundamentally brittle in nature while the latter two methods tend to indicate that the G/E laminates are fundamentally ductile in nature. Finally, predictions of incrementally determined tensile stress-strain curves utilizing the various different shear behavior methods as input information are presented and discussed.
Modeling individual tree growth by fusing diameter tape and increment core data
Erin M. Schliep; Tracy Qi Dong; Alan E. Gelfand; Fan. Li
2014-01-01
Tree growth estimation is a challenging task as difficulties associated with data collection and inference often result in inaccurate estimates. Two main methods for tree growth estimation are diameter tape measurements and increment cores. The former involves repeatedly measuring tree diameters with a cloth or metal tape whose scale has been adjusted to give diameter...
High Spatial Resolution 40Ar/39Ar Geochronology of Lunar Impact Melt Rocks
NASA Astrophysics Data System (ADS)
Mercer, Cameron Mark
Impact cratering has played a key role in the evolution of the solid surfaces of Solar System bodies. While much of Earth’s impact record has been erased, its Moon preserves an extensive history of bombardment. Quantifying the timing of lunar impact events is crucial to understanding how impacts have shaped the evolution of early Earth, and provides the basis for estimating the ages of other cratered surfaces in the Solar System. Many lunar impact melt rocks are complex mixtures of glassy and crystalline “melt” materials and inherited clasts of pre-impact minerals and rocks. If analyzed in bulk, these samples can yield complicated incremental release 40Ar/39Ar spectra, making it challenging to uniquely interpret impact ages. Here, I have used a combination of high-spatial resolution 40Ar/39Ar geochronology and thermal-kinetic modeling to gain new insights into the impact histories recorded by such lunar samples. To compare my data to those of previous studies, I developed a software tool to account for differences in the decay, isotopic, and monitor age parameters used for different published 40Ar/39Ar datasets. Using an ultraviolet laser ablation microprobe (UVLAMP) system I selectively dated melt and clast components of impact melt rocks collected during the Apollo 16 and 17 missions. UVLAMP 40Ar/39Ar data for samples 77135, 60315, 61015, and 63355 show evidence of open-system behavior, and provide new insights into how to interpret some complexities of published incremental heating 40Ar/39Ar spectra. Samples 77115, 63525, 63549, and 65015 have relatively simple thermal histories, and UVLAMP 40Ar/39Ar data for the melt components of these rocks indicate the timing of impact events—spanning hundreds of millions of years—that influenced the Apollo 16 and 17 sites. My modeling and UVLAMP 40Ar/39Ar data for sample 73217 indicate that some impact melt rocks can quantitatively retain evidence for multiple melt-producing impact events, and imply that such polygenetic rocks should be regarded as high-value sampling opportunities during future exploration missions to cratered planetary surfaces. Collectively, my results complement previous incremental heating 40Ar/39Ar studies, and support interpretations that the Moon experienced a prolonged period of heavy bombardment early in its history.
Gao, Yaozong; Zhan, Yiqiang
2015-01-01
Image-guided radiotherapy (IGRT) requires fast and accurate localization of the prostate in 3-D treatment-guided radiotherapy, which is challenging due to low tissue contrast and large anatomical variation across patients. On the other hand, the IGRT workflow involves collecting a series of computed tomography (CT) images from the same patient under treatment. These images contain valuable patient-specific information yet are often neglected by previous works. In this paper, we propose a novel learning framework, namely incremental learning with selective memory (ILSM), to effectively learn the patient-specific appearance characteristics from these patient-specific images. Specifically, starting with a population-based discriminative appearance model, ILSM aims to “personalize” the model to fit patient-specific appearance characteristics. The model is personalized with two steps: backward pruning that discards obsolete population-based knowledge and forward learning that incorporates patient-specific characteristics. By effectively combining the patient-specific characteristics with the general population statistics, the incrementally learned appearance model can localize the prostate of a specific patient much more accurately. This work has three contributions: 1) the proposed incremental learning framework can capture patient-specific characteristics more effectively, compared to traditional learning schemes, such as pure patient-specific learning, population-based learning, and mixture learning with patient-specific and population data; 2) this learning framework does not have any parametric model assumption, hence, allowing the adoption of any discriminative classifier; and 3) using ILSM, we can localize the prostate in treatment CTs accurately (DSC ∼0.89) and fast (∼4 s), which satisfies the real-world clinical requirements of IGRT. PMID:24495983
Fine-tuning gene networks using simple sequence repeats
Egbert, Robert G.; Klavins, Eric
2012-01-01
The parameters in a complex synthetic gene network must be extensively tuned before the network functions as designed. Here, we introduce a simple and general approach to rapidly tune gene networks in Escherichia coli using hypermutable simple sequence repeats embedded in the spacer region of the ribosome binding site. By varying repeat length, we generated expression libraries that incrementally and predictably sample gene expression levels over a 1,000-fold range. We demonstrate the utility of the approach by creating a bistable switch library that programmatically samples the expression space to balance the two states of the switch, and we illustrate the need for tuning by showing that the switch’s behavior is sensitive to host context. Further, we show that mutation rates of the repeats are controllable in vivo for stability or for targeted mutagenesis—suggesting a new approach to optimizing gene networks via directed evolution. This tuning methodology should accelerate the process of engineering functionally complex gene networks. PMID:22927382
A Kp-based model of auroral boundaries
NASA Astrophysics Data System (ADS)
Carbary, James F.
2005-10-01
The auroral oval can serve as both a representation and a prediction of space weather on a global scale, so a competent model of the oval as a function of a geomagnetic index could conveniently appraise space weather itself. A simple model of the auroral boundaries is constructed by binning several months of images from the Polar Ultraviolet Imager by Kp index. The pixel intensities are first averaged into magnetic latitude-magnetic local time (MLT-MLAT) and local time bins, and intensity profiles are then derived for each Kp level at 1 hour intervals of MLT. After background correction, the boundary latitudes of each profile are determined at a threshold of 4 photons cm-2 s1. The peak locations and peak intensities are also found. The boundary and peak locations vary linearly with Kp index, and the coefficients of the linear fits are tabulated for each MLT. As a general rule of thumb, the UV intensity peak shifts 1° in magnetic latitude for each increment in Kp. The fits are surprisingly good for Kp < 6 but begin to deteriorate at high Kp because of auroral boundary irregularities and poor statistics. The statistical model allows calculation of the auroral boundaries at most MLTs as a function of Kp and can serve as an approximation to the shape and extent of the statistical oval.
NASA Astrophysics Data System (ADS)
Grimaldi, S.; Petroselli, A.; Romano, N.
2012-04-01
The Soil Conservation Service - Curve Number (SCS-CN) method is a popular rainfall-runoff model that is widely used to estimate direct runoff from small and ungauged basins. The SCS-CN is a simple and valuable approach to estimate the total stream-flow volume generated by a storm rainfall, but it was developed to be used with daily rainfall data. To overcome this drawback, we propose to include the Green-Ampt (GA) infiltration model into a mixed procedure, which is referred to as CN4GA (Curve Number for Green-Ampt), aiming to distribute in time the information provided by the SCS-CN method so as to provide estimation of sub-daily incremental rainfall excess. For a given storm, the computed SCS-CN total net rainfall amount is used to calibrate the soil hydraulic conductivity parameter of the Green-Ampt model. The proposed procedure was evaluated by analyzing 100 rainfall-runoff events observed in four small catchments of varying size. CN4GA appears an encouraging tool for predicting the net rainfall peak and duration values and has shown, at least for the test cases considered in this study, a better agreement with observed hydrographs than that of the classic SCS-CN method.
Incremental Validity of the Trait Emotional Intelligence Questionnaire-Short Form (TEIQue-SF).
Siegling, A B; Vesely, Ashley K; Petrides, K V; Saklofske, Donald H
2015-01-01
This study examined the incremental validity of the adult short form of the Trait Emotional Intelligence Questionnaire (TEIQue-SF) in predicting 7 construct-relevant criteria beyond the variance explained by the Five-factor model and coping strategies. Additionally, the relative contributions of the questionnaire's 4 subscales were assessed. Two samples of Canadian university students completed the TEIQue-SF, along with measures of the Big Five, coping strategies (Sample 1 only), and emotion-laden criteria. The TEIQue-SF showed consistent incremental effects beyond the Big Five or the Big Five and coping strategies, predicting all 7 criteria examined across the 2 samples. Furthermore, 2 of the 4 TEIQue-SF subscales accounted for the measure's incremental validity. Although the findings provide good support for the validity and utility of the TEIQue-SF, directions for further research are emphasized.
Analysis of Tube Hydroforming by means of an Inverse Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nguyen, Ba Nghiep; Johnson, Kenneth I.; Khaleel, Mohammad A.
2003-05-01
This paper presents a computational tool for the analysis of freely hydroformed tubes by means of an inverse approach. The formulation of the inverse method developed by Guo et al. is adopted and extended to the tube hydrofoming problems in which the initial geometry is a round tube submitted to hydraulic pressure and axial feed at the tube ends (end-feed). A simple criterion based on a forming limit diagram is used to predict the necking regions in the deformed workpiece. Although the developed computational tool is a stand-alone code, it has been linked to the Marc finite element code formore » meshing and visualization of results. The application of the inverse approach to tube hydroforming is illustrated through the analyses of the aluminum alloy AA6061-T4 seamless tubes under free hydroforming conditions. The results obtained are in good agreement with those issued from a direct incremental approach. However, the computational time in the inverse procedure is much less than that in the incremental method.« less
Effect of Tail Surfaces on the Base Drag of a Body of Revolution at Mach Numbers of 1.5 and 2.0
NASA Technical Reports Server (NTRS)
Spahr, J Richard; Dickey, Robert R
1951-01-01
Wind-tunnel tests were performed at Mach numbers of 1.5 and 2.0 to investigate the influence of tail surfaces on the base drag of a body of revolution without boattailing and having a turbulent boundary layer. The tail surfaces were of rectangular plan form of aspect ratio 2.33 and has symmetrical, circular-arc airfoil section. The results of the investigation showed that the addition of these tail surfaces with the trailing edges at or near the body base incurred a large increase in the base-drag coefficient. For a cruciform tail having a 10-percent-thick airfoil section, this increase was about 70 percent at a Mach number of 1.5 and 35 percent at a Mach number of 2.0. As the trailing edge of the tail was moved forward or rearward of the base by about one tail-chord length, the base-drag increment was reduced to nearly zero. The increments in base-drag coefficient due to the presence of 10-percent-thick tail surfaces were generally twice those for 5-percent-thick surfaces. The base-drag increments due to the presence of a cruciform tail were less than twice those for a plane tail. An estimate of the change in base pressure due to the tail surfaces was made, based on a simple superposition of the airfoil-pressure field onto the base-pressure field behind the body. A comparison of the results with the experimental values indicated that in most cases the trend in the variation of the base-drag increment with changes in tail position could be predicted by this approximate method but that the quantitative agreement at most tail locations was poor.
The impact of therapeutic reference pricing on innovation in cardiovascular medicine.
Sheridan, Desmond; Attridge, Jim
2006-12-01
Therapeutic reference pricing (TRP) places medicines to treat the same medical condition into groups or 'clusters' with a single common reimbursed price. Underpinning this economic measure is an implicit assumption that the products included in the cluster have an equivalent effect on a typical patient with this disease. 'Truly innovative' products can be exempt from inclusion in the cluster. This increasingly common approach to cost containment allocates products into one of two categories - truly innovative or therapeutically equivalent. This study examines the implications of TRP against the step-wise evolution of drugs for cardiovascular conditions over the past 50 years. It illustrates the complex interactions between advances in understanding of cellular and molecular disease mechanisms, diagnostic techniques, treatment concepts, and the synthesis, testing and commercialisation of products. It confirms the highly unpredictable and incremental nature of the innovation process. Medical progress in terms of improvement in patient outcomes over the long-term depends on the cumulative effect of year after year of painstaking incremental advances. It shows that the parallel processes of advances in scientific knowledge and the industrial 'investment-innovative cycle' involve highly developed sets of complementary capabilities and resources. A framework is developed to assess the impact of TRP upon research and development investment decisions and the development of therapeutic classes. We conclude that a simple categorisation of products as either 'truly innovative' or 'therapeutically equivalent' is inconsistent with the incremental processes of innovation and the resulting differentiated product streams revealed by our analysis. Widespread introduction of TRP would probably have prematurely curtailed development of many incremental innovations that became the preferred 'product of choice' by physicians for some indications and patients in managing the incidence of cardiovascular disease.
[Economic impact of nosocomial bacteraemia. A comparison of three calculation methods].
Riu, Marta; Chiarello, Pietro; Terradas, Roser; Sala, Maria; Castells, Xavier; Knobel, Hernando; Cots, Francesc
2016-12-01
The excess cost associated with nosocomial bacteraemia (NB) is used as a measurement of the impact of these infections. However, some authors have suggested that traditional methods overestimate the incremental cost due to the presence of various types of bias. The aim of this study was to compare three assessment methods of NB incremental cost to correct biases in previous analyses. Patients who experienced an episode of NB between 2005 and 2007 were compared with patients grouped within the same All Patient Refined-Diagnosis-Related Group (APR-DRG) without NB. The causative organisms were grouped according to the Gram stain, and whether bacteraemia was caused by a single or multiple microorganisms, or by a fungus. Three assessment methods are compared: stratification by disease; econometric multivariate adjustment using a generalised linear model (GLM); and propensity score matching (PSM) was performed to control for biases in the econometric model. The analysis included 640 admissions with NB and 28,459 without NB. The observed mean cost was €24,515 for admissions with NB and €4,851.6 for controls (without NB). Mean incremental cost was estimated at €14,735 in stratified analysis. Gram positive microorganism had the lowest mean incremental cost, €10,051. In the GLM, mean incremental cost was estimated as €20,922, and adjusting with PSM, the mean incremental cost was €11,916. The three estimates showed important differences between groups of microorganisms. Using enhanced methodologies improves the adjustment in this type of study and increases the value of the results. Copyright © 2015 Elsevier España, S.L.U. and Sociedad Española de Enfermedades Infecciosas y Microbiología Clínica. All rights reserved.
Sugar-sweetened beverages and dental caries in adults: a 4-year prospective study.
Bernabé, Eduardo; Vehkalahti, Miira M; Sheiham, Aubrey; Aromaa, Arpo; Suominen, Anna L
2014-08-01
To explore the association between frequency of consumption of sugar-sweetened beverages (SSB) and caries increment over 4 years in adults. A second objective was to explore whether the association between frequency of SSB consumption and caries increment varied by socio-demographic characteristics and use of fluoride toothpaste. Data from 939 dentate adults who participated in both the Health 2000 Survey and the Follow-Up Study of Finnish Adults' Oral Health were analysed. At baseline, participants provided information on demographic characteristics, education and dental behaviours, including two questions on frequency of SSB consumption. The 4-year net DMFT increment was calculated using data from baseline and follow-up clinical oral examinations. The association was tested in negative binomial regression models and the moderating role of sex, age, education and use of fluoride toothpaste was examined by adding their two-way interaction with SSB consumption to the main effects model. A positive association was found between frequency of SBS consumption and 4-year net DMFT increment, regardless of participants' socio-demographic and behavioural characteristics. Adults drinking 1-2 and 3+ SSB daily had, respectively, 31% (Incidence Rate Ratio: 1.31; 95%CI: 1.02-1.67) and 33% (IRR: 1.33; 95%CI; 1.03-1.72) greater net DMFT increments than those not drinking any SSB. None of the four two-way interaction terms was significant (all p>0.05). There seems to be a dose-response relationship between frequency of SSB consumption and caries increment in adults. That association was consistent across socio-demographic characteristics, and more importantly, use of fluoride toothpaste. Drinking sugar-sweetened beverages on a daily basis is related to greater caries risk in adults. Copyright © 2014 Elsevier Ltd. All rights reserved.
Goldstein, Daniel A.; Chen, Qiushi; Ayer, Turgay; Howard, David H.; Lipscomb, Joseph; El-Rayes, Bassel F.; Flowers, Christopher R.
2015-01-01
Purpose The addition of bevacizumab to fluorouracil-based chemotherapy is a standard of care for previously untreated metastatic colorectal cancer. Continuation of bevacizumab beyond progression is an accepted standard of care based on a 1.4-month increase in median overall survival observed in a randomized trial. No United States–based cost-effectiveness modeling analyses are currently available addressing the use of bevacizumab in metastatic colorectal cancer. Our objective was to determine the cost effectiveness of bevacizumab in the first-line setting and when continued beyond progression from the perspective of US payers. Methods We developed two Markov models to compare the cost and effectiveness of fluorouracil, leucovorin, and oxaliplatin with or without bevacizumab in the first-line treatment and subsequent fluorouracil, leucovorin, and irinotecan with or without bevacizumab in the second-line treatment of metastatic colorectal cancer. Model robustness was addressed by univariable and probabilistic sensitivity analyses. Health outcomes were measured in life-years and quality-adjusted life-years (QALYs). Results Using bevacizumab in first-line therapy provided an additional 0.10 QALYs (0.14 life-years) at a cost of $59,361. The incremental cost-effectiveness ratio was $571,240 per QALY. Continuing bevacizumab beyond progression provided an additional 0.11 QALYs (0.16 life-years) at a cost of $39,209. The incremental cost-effectiveness ratio was $364,083 per QALY. In univariable sensitivity analyses, the variables with the greatest influence on the incremental cost-effectiveness ratio were bevacizumab cost, overall survival, and utility. Conclusion Bevacizumab provides minimal incremental benefit at high incremental cost per QALY in both the first- and second-line settings of metastatic colorectal cancer treatment. PMID:25691669
Kawasaki, Ryo; Akune, Yoko; Hiratsuka, Yoshimune; Fukuhara, Shunichi; Yamada, Masakazu
2015-02-01
To evaluate the cost-effectiveness for a screening interval longer than 1 year detecting diabetic retinopathy (DR) through the estimation of incremental costs per quality-adjusted life year (QALY) based on the best available clinical data in Japan. A Markov model with a probabilistic cohort analysis was framed to calculate incremental costs per QALY gained by implementing a screening program detecting DR in Japan. A 1-year cycle length and population size of 50,000 with a 50-year time horizon (age 40-90 years) was used. Best available clinical data from publications and national surveillance data was used, and a model was designed including current diagnosis and management of DR with corresponding visual outcomes. One-way and probabilistic sensitivity analyses were performed considering uncertainties in the parameters. In the base-case analysis, the strategy with a screening program resulted in an incremental cost of 5,147 Japanese yen (¥; US$64.6) and incremental effectiveness of 0.0054 QALYs per person screened. The incremental cost-effectiveness ratio was ¥944,981 (US$11,857) per QALY. The simulation suggested that screening would result in a significant reduction in blindness in people aged 40 years or over (-16%). Sensitivity analyses suggested that in order to achieve both reductions in blindness and cost-effectiveness in Japan, the screening program should screen those aged 53-84 years, at intervals of 3 years or less. An eye screening program in Japan would be cost-effective in detecting DR and preventing blindness from DR, even allowing for the uncertainties in estimates of costs, utility, and current management of DR.
As-Built documentation of programs to implement the Robertson and Doraiswamy/Thompson models
NASA Technical Reports Server (NTRS)
Valenziano, D. J. (Principal Investigator)
1981-01-01
The software which implements two spring wheat phenology models is described. The main program routines for the Doraiswamy/Thompson crop phenology model and the basic Robertson crop phenology model are DTMAIN and BRMAIN. These routines read meteorological data files and coefficient files, accept the planting date information and other information from the user, and initiate processing. Daily processing for the basic Robertson program consists only of calculation of the basic Robertson increment of crop development. Additional processing in the Doraiswamy/Thompson program includes the calculation of a moisture stress index and correction of the basic increment of development. Output for both consists of listings of the daily results.
Modeling nonstructural carbohydrate reserve dynamics in forest trees
NASA Astrophysics Data System (ADS)
Richardson, A. D.; Keenan, T. F.; Carbone, M. S.; Czimczik, C. I.; Hollinger, D. Y.; Murakami, P.; Schaberg, P.; Xu, X.
2012-12-01
Understanding the factors influencing the availability of nonstructural carbohydrate (NSC) reserves is essential for predicting the resilience of forests to climate change and environmental stress. However, carbon allocation processes remain poorly understood and many models either ignore NSC reserves, or use simple and untested representations of NSC allocation and pool dynamics. Using model-data fusion techniques, we combined a parsimonious model of forest ecosystem carbon cycling with novel field sampling and laboratory analyses of NSCs. Simulations were conducted for an evergreen conifer forest and a deciduous broadleaf forest in New England. We used radiocarbon methods based on the 14C "bomb spike" to estimate the age of NSC reserves, and used this to constrain the mean residence time of modeled NSCs. We used additional data, including tower-measured fluxes of CO2, soil and biomass carbon stocks, woody biomass increment, and leaf area index and litterfall, to further constrain the model's parameters and initial conditions. Three years of field measurements indicate that stemwood NSCs are highly dynamic on seasonal time scales. The modeled seasonal dynamics conform to expectations (accumulated in the growing season, depleted in the dormant season) but are inconsistent with the observational data (total stemwood NSC concentrations higher in March than November, lower in August than June). We interpret this contradiction to suggest that stemwood concentrations provide an incomplete picture of the whole-tree NSC budget. A two-pool model structure that accounted for both "fast" (active pool, MRT ≈1 y) and "slow" (passive pool, MRT ≥ 20 y) cycling reserves (1) gives reasonable estimates of the size and MRT of the total NSC pool; (2) greatly improves model predictions of interannual variability in woody biomass increment, compared to zero- or one-pool structures used in the majority of existing models; (3) provides a mechanism by which observations of a one-year lag between carbon uptake and growth can be explained; (4) reconciles the apparent contradiction of a reserve pool that is both highly dynamic over time, and also a decade old on average; and (5) shows how younger reserves can be preferentially used to support growth and metabolism, but allows for the older reserves to be drawn on if the younger reserves are depleted. The improved performance and greater realism of our model is achieved without requiring a substantial increase in model complexity. From the perspective of modeling forest responses to climate change, we expect that models incorporating dynamic stored reserves should be better able to represent the lagged effects of climate extremes and disturbance on ecosystem C fluxes.
Space-time quantitative source apportionment of soil heavy metal concentration increments.
Yang, Yong; Christakos, George; Guo, Mingwu; Xiao, Lu; Huang, Wei
2017-04-01
Assessing the space-time trends and detecting the sources of heavy metal accumulation in soils have important consequences in the prevention and treatment of soil heavy metal pollution. In this study, we collected soil samples in the eastern part of the Qingshan district, Wuhan city, Hubei Province, China, during the period 2010-2014. The Cd, Cu, Pb and Zn concentrations in soils exhibited a significant accumulation during 2010-2014. The spatiotemporal Kriging technique, based on a quantitative characterization of soil heavy metal concentration variations in terms of non-separable variogram models, was employed to estimate the spatiotemporal soil heavy metal distribution in the study region. Our findings showed that the Cd, Cu, and Zn concentrations have an obvious incremental tendency from the southwestern to the central part of the study region. However, the Pb concentrations exhibited an obvious tendency from the northern part to the central part of the region. Then, spatial overlay analysis was used to obtain absolute and relative concentration increments of adjacent 1- or 5-year periods during 2010-2014. The spatial distribution of soil heavy metal concentration increments showed that the larger increments occurred in the center of the study region. Lastly, the principal component analysis combined with the multiple linear regression method were employed to quantify the source apportionment of the soil heavy metal concentration increments in the region. Our results led to the conclusion that the sources of soil heavy metal concentration increments should be ascribed to industry, agriculture and traffic. In particular, 82.5% of soil heavy metal concentration increment during 2010-2014 was ascribed to industrial/agricultural activities sources. Using STK and SOA to obtain the spatial distribution of heavy metal concentration increments in soils. Using PCA-MLR to quantify the source apportionment of soil heavy metal concentration increments. Copyright © 2017 Elsevier Ltd. All rights reserved.
The Crucial Role of Error Correlation for Uncertainty Modeling of CFD-Based Aerodynamics Increments
NASA Technical Reports Server (NTRS)
Hemsch, Michael J.; Walker, Eric L.
2011-01-01
The Ares I ascent aerodynamics database for Design Cycle 3 (DAC-3) was built from wind-tunnel test results and CFD solutions. The wind tunnel results were used to build the baseline response surfaces for wind-tunnel Reynolds numbers at power-off conditions. The CFD solutions were used to build increments to account for Reynolds number effects. We calculate the validation errors for the primary CFD code results at wind tunnel Reynolds number power-off conditions and would like to be able to use those errors to predict the validation errors for the CFD increments. However, the validation errors are large compared to the increments. We suggest a way forward that is consistent with common practice in wind tunnel testing which is to assume that systematic errors in the measurement process and/or the environment will subtract out when increments are calculated, thus making increments more reliable with smaller uncertainty than absolute values of the aerodynamic coefficients. A similar practice has arisen for the use of CFD to generate aerodynamic database increments. The basis of this practice is the assumption of strong correlation of the systematic errors inherent in each of the results used to generate an increment. The assumption of strong correlation is the inferential link between the observed validation uncertainties at wind-tunnel Reynolds numbers and the uncertainties to be predicted for flight. In this paper, we suggest a way to estimate the correlation coefficient and demonstrate the approach using code-to-code differences that were obtained for quality control purposes during the Ares I CFD campaign. Finally, since we can expect the increments to be relatively small compared to the baseline response surface and to be typically of the order of the baseline uncertainty, we find that it is necessary to be able to show that the correlation coefficients are close to unity to avoid overinflating the overall database uncertainty with the addition of the increments.
The impact of cancer drug wastage on economic evaluations.
Truong, Judy; Cheung, Matthew C; Mai, Helen; Letargo, Jessa; Chambers, Alexandra; Sabharwal, Mona; Trudeau, Maureen E; Chan, Kelvin K W
2017-09-15
The objective of this study was to determine the impact of modeling cancer drug wastage in economic evaluations because wastage can result from single-dose vials on account of body surface area- or weight-based dosing. Intravenous chemotherapy drugs were identified from the pan-Canadian Oncology Drug Review (pCODR) program as of January 2015. Economic evaluations performed by drug manufacturers and pCODR were reviewed. Cost-effectiveness analyses and budget impact analyses were conducted for no-wastage and maximum-wastage scenarios (ie, the entire unused portion of the vial was discarded at each infusion). Sensitivity analyses were performed for a range of body surface areas and weights. Twelve drugs used for 17 indications were analyzed. Wastage was reported (ie, assumptions were explicit) in 71% of the models and was incorporated into 53% by manufacturers; this resulted in a mean incremental cost-effectiveness ratio increase of 6.1% (range, 1.3%-14.6%). pCODR reported and incorporated wastage for 59% of the models, and this resulted in a mean incremental cost-effectiveness ratio increase of 15.0% (range, 2.6%-48.2%). In the maximum-wastage scenario, there was a mean increase in the incremental cost-effectiveness ratio of 24.0% (range, 0.0%-97.2%), a mean increase in the 3-year total incremental budget costs of 26.0% (range, 0.0%-83.1%), and an increase in the 3-year total incremental drug budget cost of approximately CaD $102 million nationally. Changing the mean body surface area or body weight caused 45% of the drugs to have a change in the vial size and/or quantity, and this resulted in increased drug costs. Cancer drug wastage can increase drug costs but is not uniformly modeled in economic evaluations. Cancer 2017;123:3583-90. © 2017 American Cancer Society. © 2017 American Cancer Society.
Comparative effectiveness and cost-effectiveness of the implantable miniature telescope.
Brown, Gary C; Brown, Melissa M; Lieske, Heidi B; Lieske, Philip A; Brown, Kathryn S; Lane, Stephen S
2011-09-01
To assess the preference-based comparative effectiveness (human value gain) and the cost-utility (cost-effectiveness) of a telescope prosthesis (implantable miniature telescope) for the treatment of end-stage, age-related macular degeneration (AMD). A value-based medicine, second-eye model, cost-utility analysis was performed to quantify the comparative effectiveness and cost-effectiveness of therapy with the telescope prosthesis. Published, evidence-based data from the IMT002 Study Group clinical trial. Ophthalmic utilities were obtained from a validated cohort of >1000 patients with ocular diseases. Comparative effectiveness data were converted from visual acuity to utility (value-based) format. The incremental costs (Medicare) of therapy versus no therapy were integrated with the value gain conferred by the telescope prosthesis to assess its average cost-utility. The incremental value gains and incremental costs of therapy referent to (1) a fellow eye cohort and (2) a fellow eye cohort of those who underwent intra-study cataract surgery were integrated in incremental cost-utility analyses. All value outcomes and costs were discounted at a 3% annual rate, as per the Panel on Cost-Effectiveness in Health and Medicine. Comparative effectiveness was quantified using the (1) quality-adjusted life-year (QALY) gain and (2) percent human value gain (improvement in quality of life). The QALY gain was integrated with incremental costs into the cost-utility ratio ($/QALY, or US dollars expended per QALY gained). The mean, discounted QALY gain associated with use of the telescope prosthesis over 12 years was 0.7577. When the QALY loss of 0.0004 attributable to the adverse events was factored into the model, the final QALY gain was 0.7573. This resulted in a 12.5% quality of life gain for the average patient during the 12 years of the model. The average cost-utility versus no therapy for use of the telescope prosthesis was $14389/QALY. The incremental cost-utility referent to control fellow eyes was $14063/QALY, whereas the incremental cost-utility referent to fellow eyes that underwent intra-study cataract surgery was $11805/QALY. Therapy with the telescope prosthesis considerably improves quality of life and at the same time is cost-effective by conventional standards. Copyright © 2011 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
McGill, Ryan J.; Spurgin, Angelia R.
2016-01-01
The current study examined the incremental validity of the Luria interpretive scheme for the Kaufman Assessment Battery for Children-Second Edition (KABC-II) for predicting scores on the Kaufman Test of Educational Achievement-Second Edition (KTEA-II). All participants were children and adolescents (N = 2,025) drawn from the nationally…
Nasir, Hina; Javaid, Nadeem; Sher, Muhammad; Qasim, Umar; Khan, Zahoor Ali; Alrajeh, Nabil; Niaz, Iftikhar Azim
2016-01-01
This paper embeds a bi-fold contribution for Underwater Wireless Sensor Networks (UWSNs); performance analysis of incremental relaying in terms of outage and error probability, and based on the analysis proposition of two new cooperative routing protocols. Subject to the first contribution, a three step procedure is carried out; a system model is presented, the number of available relays are determined, and based on cooperative incremental retransmission methodology, closed-form expressions for outage and error probability are derived. Subject to the second contribution, Adaptive Cooperation in Energy (ACE) efficient depth based routing and Enhanced-ACE (E-ACE) are presented. In the proposed model, feedback mechanism indicates success or failure of data transmission. If direct transmission is successful, there is no need for relaying by cooperative relay nodes. In case of failure, all the available relays retransmit the data one by one till the desired signal quality is achieved at destination. Simulation results show that the ACE and E-ACE significantly improves network performance, i.e., throughput, when compared with other incremental relaying protocols like Cooperative Automatic Repeat reQuest (CARQ). E-ACE and ACE achieve 69% and 63% more throughput respectively as compared to CARQ in hard underwater environment. PMID:27420061
Analytical and Experimental Investigation of Process Loads on Incremental Severe Plastic Deformation
NASA Astrophysics Data System (ADS)
Okan Görtan, Mehmet
2017-05-01
From the processing point of view, friction is a major problem in the severe plastic deformation (SPD) using equal channel angular pressing (ECAP) process. Incremental ECAP can be used in order to optimize frictional effects during SPD. A new incremental ECAP has been proposed recently. This new process called as equal channel angular swaging (ECAS) combines the conventional ECAP and the incremental bulk metal forming method rotary swaging. ECAS tool system consists of two dies with an angled channel that contains two shear zones. During ECAS process, two forming tool halves, which are concentrically arranged around the workpiece, perform high frequency radial movements with short strokes, while samples are pushed through these. The oscillation direction nearly coincides with the shearing direction in the workpiece. The most important advantages in comparison to conventional ECAP are a significant reduction in the forces in material feeding direction plus the potential to be extended to continuous processing. In the current study, the mechanics of the ECAS process is investigated using slip line field approach. An analytical model is developed to predict process loads. The proposed model is validated using experiments and FE simulations.
Herler, Jürgen; Dirnwöber, Markus
2011-10-31
Estimating the impacts of global and local threats on coral reefs requires monitoring reef health and measuring coral growth and calcification rates at different time scales. This has traditionally been mostly performed in short-term experimental studies in which coral fragments were grown in the laboratory or in the field but measured ex situ. Practical techniques in which growth and measurements are performed over the long term in situ are rare. Apart from photographic approaches, weight increment measurements have also been applied. Past buoyant weight measurements under water involved a complicated and little-used apparatus. We introduce a new method that combines previous field and laboratory techniques to measure the buoyant weight of entire, transplanted corals under water. This method uses an electronic balance fitted into an acrylic glass underwater housing and placed atop of an acrylic glass cube. Within this cube, corals transplanted onto artificial bases can be attached to the balance and weighed at predetermined intervals while they continue growth in the field. We also provide a set of simple equations for the volume and weight determinations required to calculate net growth rates. The new technique is highly accurate: low error of weight determinations due to variation of coral density (< 0.08%) and low standard error (< 0.01%) for repeated measurements of the same corals. We outline a transplantation technique for properly preparing corals for such long-term in situ experiments and measurements.
Hu, Jing; Zhang, Xiaolong; Liu, Xiaoming; Tang, Jinshan
2015-06-01
Discovering hot regions in protein-protein interaction is important for drug and protein design, while experimental identification of hot regions is a time-consuming and labor-intensive effort; thus, the development of predictive models can be very helpful. In hot region prediction research, some models are based on structure information, and others are based on a protein interaction network. However, the prediction accuracy of these methods can still be improved. In this paper, a new method is proposed for hot region prediction, which combines density-based incremental clustering with feature-based classification. The method uses density-based incremental clustering to obtain rough hot regions, and uses feature-based classification to remove the non-hot spot residues from the rough hot regions. Experimental results show that the proposed method significantly improves the prediction performance of hot regions. Copyright © 2015 Elsevier Ltd. All rights reserved.
Murphy, Brett; Lilienfeld, Scott; Skeem, Jennifer; Edens, John
2016-01-01
Researchers are vigorously debating whether psychopathic personality includes seemingly adaptive traits, especially social and physical boldness. In a large sample (N=1565) of adult offenders, we examined the incremental validity of two operationalizations of boldness (Fearless Dominance traits in the Psychopathy Personality Inventory, Lilienfeld & Andrews, 1996; Boldness traits in the Triarchic Model of Psychopathy, Patrick et al, 2009), above and beyond other characteristics of psychopathy, in statistically predicting scores on four psychopathy-related measures, including the Psychopathy Checklist-Revised (PCL-R). The incremental validity added by boldness traits in predicting the PCL-R’s representation of psychopathy was especially pronounced for interpersonal traits (e.g., superficial charm, deceitfulness). Our analyses, however, revealed unexpected sex differences in the relevance of these traits to psychopathy, with boldness traits exhibiting reduced importance for psychopathy in women. We discuss the implications of these findings for measurement models of psychopathy. PMID:26866795
Cross, Zachariah R.; Kohler, Mark J.; Schlesewsky, Matthias; Gaskell, M. G.; Bornkessel-Schlesewsky, Ina
2018-01-01
We hypothesize a beneficial influence of sleep on the consolidation of the combinatorial mechanisms underlying incremental sentence comprehension. These predictions are grounded in recent work examining the effect of sleep on the consolidation of linguistic information, which demonstrate that sleep-dependent neurophysiological activity consolidates the meaning of novel words and simple grammatical rules. However, the sleep-dependent consolidation of sentence-level combinatorics has not been studied to date. Here, we propose that dissociable aspects of sleep neurophysiology consolidate two different types of combinatory mechanisms in human language: sequence-based (order-sensitive) and dependency-based (order-insensitive) combinatorics. The distinction between the two types of combinatorics is motivated both by cross-linguistic considerations and the neurobiological underpinnings of human language. Unifying this perspective with principles of sleep-dependent memory consolidation, we posit that a function of sleep is to optimize the consolidation of sequence-based knowledge (the when) and the establishment of semantic schemas of unordered items (the what) that underpin cross-linguistic variations in sentence comprehension. This hypothesis builds on the proposal that sleep is involved in the construction of predictive codes, a unified principle of brain function that supports incremental sentence comprehension. Finally, we discuss neurophysiological measures (EEG/MEG) that could be used to test these claims, such as the quantification of neuronal oscillations, which reflect basic mechanisms of information processing in the brain. PMID:29445333
Payne, Brennan R.; Lee, Chia-Lin; Federmeier, Kara D.
2015-01-01
The amplitude of the N400— an event-related potential (ERP) component linked to meaning processing and initial access to semantic memory— is inversely related to the incremental build-up of semantic context over the course of a sentence. We revisited the nature and scope of this incremental context effect, adopting a word-level linear mixed-effects modeling approach, with the goal of probing the continuous and incremental effects of semantic and syntactic context on multiple aspects of lexical processing during sentence comprehension (i.e., effects of word frequency and orthographic neighborhood). First, we replicated the classic word position effect at the single-word level: open-class words showed reductions in N400 amplitude with increasing word position in semantically congruent sentences only. Importantly, we found that accruing sentence context had separable influences on the effects of frequency and neighborhood on the N400. Word frequency effects were reduced with accumulating semantic context. However, orthographic neighborhood was unaffected by accumulating context, showing robust effects on the N400 across all words, even within congruent sentences. Additionally, we found that N400 amplitudes to closed-class words were reduced with incrementally constraining syntactic context in sentences that provided only syntactic constraints. Taken together, our findings indicate that modeling word-level variability in ERPs reveals mechanisms by which different sources of information simultaneously contribute to the unfolding neural dynamics of comprehension. PMID:26311477
Payne, Brennan R; Lee, Chia-Lin; Federmeier, Kara D
2015-11-01
The amplitude of the N400-an event-related potential (ERP) component linked to meaning processing and initial access to semantic memory-is inversely related to the incremental buildup of semantic context over the course of a sentence. We revisited the nature and scope of this incremental context effect, adopting a word-level linear mixed-effects modeling approach, with the goal of probing the continuous and incremental effects of semantic and syntactic context on multiple aspects of lexical processing during sentence comprehension (i.e., effects of word frequency and orthographic neighborhood). First, we replicated the classic word-position effect at the single-word level: Open-class words showed reductions in N400 amplitude with increasing word position in semantically congruent sentences only. Importantly, we found that accruing sentence context had separable influences on the effects of frequency and neighborhood on the N400. Word frequency effects were reduced with accumulating semantic context. However, orthographic neighborhood was unaffected by accumulating context, showing robust effects on the N400 across all words, even within congruent sentences. Additionally, we found that N400 amplitudes to closed-class words were reduced with incrementally constraining syntactic context in sentences that provided only syntactic constraints. Taken together, our findings indicate that modeling word-level variability in ERPs reveals mechanisms by which different sources of information simultaneously contribute to the unfolding neural dynamics of comprehension. © 2015 Society for Psychophysiological Research.
On double shearing in frictional materials
NASA Astrophysics Data System (ADS)
Teunissen, J. A. M.
2007-01-01
This paper evaluates the mechanical behaviour of yielding frictional geomaterials. The general Double Shearing model describes this behaviour. Non-coaxiality of stress and plastic strain increments for plane strain conditions forms an important part of this model. The model is based on a micro-mechanical and macro-mechanical formulation. The stress-dilatancy theory in the model combines the mechanical behaviour on both scales.It is shown that the general Double Shearing formulation comprises other Double Shearing models. These models differ in the relation between the mobilized friction and dilatancy and in non-coaxiality. In order to describe reversible and irreversible deformations the general Double Shearing model is extended with elasticity.The failure of soil masses is controlled by shear mechanisms. These shear mechanisms are determined by the conditions along the shear band. The shear stress ratio of a shear band depends on the orientation of the stress in the shear band. There is a difference between the peak strength and the residual strength in the shear band. While peak stress depends on strength properties only, the residual strength depends upon the yield conditions and the plastic deformation mechanisms and is generally considerably lower than the maximum strength. It is shown that non-coaxial models give non-unique solutions for the shear stress ratio on the shear band. The Double Shearing model is applied to various failure problems of soils such as the direct simple shear test, the biaxial test, infinite slopes, interfaces and for the calculation of the undrained shear strength. Copyright
eFSM--a novel online neural-fuzzy semantic memory model.
Tung, Whye Loon; Quek, Chai
2010-01-01
Fuzzy rule-based systems (FRBSs) have been successfully applied to many areas. However, traditional fuzzy systems are often manually crafted, and their rule bases that represent the acquired knowledge are static and cannot be trained to improve the modeling performance. This subsequently leads to intensive research on the autonomous construction and tuning of a fuzzy system directly from the observed training data to address the knowledge acquisition bottleneck, resulting in well-established hybrids such as neural-fuzzy systems (NFSs) and genetic fuzzy systems (GFSs). However, the complex and dynamic nature of real-world problems demands that fuzzy rule-based systems and models be able to adapt their parameters and ultimately evolve their rule bases to address the nonstationary (time-varying) characteristics of their operating environments. Recently, considerable research efforts have been directed to the study of evolving Tagaki-Sugeno (T-S)-type NFSs based on the concept of incremental learning. In contrast, there are very few incremental learning Mamdani-type NFSs reported in the literature. Hence, this paper presents the evolving neural-fuzzy semantic memory (eFSM) model, a neural-fuzzy Mamdani architecture with a data-driven progressively adaptive structure (i.e., rule base) based on incremental learning. Issues related to the incremental learning of the eFSM rule base are carefully investigated, and a novel parameter learning approach is proposed for the tuning of the fuzzy set parameters in eFSM. The proposed eFSM model elicits highly interpretable semantic knowledge in the form of Mamdani-type if-then fuzzy rules from low-level numeric training data. These Mamdani fuzzy rules define the computing structure of eFSM and are incrementally learned with the arrival of each training data sample. New rules are constructed from the emergence of novel training data and obsolete fuzzy rules that no longer describe the recently observed data trends are pruned. This enables eFSM to maintain a current and compact set of Mamdani-type if-then fuzzy rules that collectively generalizes and describes the salient associative mappings between the inputs and outputs of the underlying process being modeled. The learning and modeling performances of the proposed eFSM are evaluated using several benchmark applications and the results are encouraging.
Chen, Ingrid T; Aung, Tin; Thant, Hnin Nwe Nwe; Sudhinaraset, May; Kahn, James G
2015-02-05
The emergence of artemisinin-resistant Plasmodium falciparum parasites in Southeast Asia threatens global malaria control efforts. One strategy to counter this problem is a subsidy of malaria rapid diagnostic tests (RDTs) and artemisinin-based combination therapy (ACT) within the informal private sector, where the majority of malaria care in Myanmar is provided. A study in Myanmar evaluated the effectiveness of financial incentives vs information, education and counselling (IEC) in driving the proper use of subsidized malaria RDTs among informal private providers. This cost-effectiveness analysis compares intervention options. A decision tree was constructed in a spreadsheet to estimate the incremental cost-effectiveness ratios (ICERs) among four strategies: no intervention, simple subsidy, subsidy with financial incentives, and subsidy with IEC. Model inputs included programmatic costs (in dollars), malaria epidemiology and observed study outcomes. Data sources included expenditure records, study data and scientific literature. Model outcomes included the proportion of properly and improperly treated individuals with and without P. falciparum malaria, and associated disability-adjusted life years (DALYs). Results are reported as ICERs in US dollars per DALY averted. One-way sensitivity analysis assessed how outcomes depend on uncertainty in inputs. ICERs from the least to most expensive intervention are: $1,169/DALY averted for simple subsidy vs no intervention, $185/DALY averted for subsidy with financial incentives vs simple subsidy, and $200/DALY averted for a subsidy with IEC vs subsidy with financial incentives. Due to decreasing ICERs, each strategy was also compared to no intervention. The subsidy with IEC was the most favourable, costing $639/DALY averted compared with no intervention. One-way sensitivity analysis shows that ICERs are most affected by programme costs, RDT uptake, treatment-seeking behaviour, and the prevalence and virulence of non-malarial fevers. In conclusion, private provider subsidies with IEC or a combination of IEC and financial incentives may be a good investment for malaria control.
NASA Astrophysics Data System (ADS)
Yokoi, S.
2014-12-01
This study conducts a comparison of three reanalysis products (JRA-55, JRA-25, and ERA-Interim) in representation of Madden-Julian Oscillation (MJO), focusing on column-integrated water vapor (CWV) that is considered as an essential variable for discussing MJO dynamics. Besides the analysis fields of CWV, which exhibit spatio-temporal distributions that are quite similar to satellite observations, CWV tendency simulated by forecast models and analysis increment calculated by data assimilation are examined. For JRA-55, it is revealed that, while its forecast model is able to simulate eastward propagation of the CWV anomaly, it tends to weaken the amplitude, and data assimilation process sustains the amplitude. The multi-reanalysis comparison of the analysis increment further reveals that this weakening bias is probably caused by excessively weak cloud-radiative feedback represented by the model. This bias in the feedback strength makes anomalous moisture supply by the vertical advection term in the CWV budget equation too insensitive to precipitation anomaly, resulting in reduction of the amplitude of CWV anomaly. ERA-Interim has a nearly opposite feature; the forecast model represents excessively strong feedback and unrealistically strengthens the amplitude, while the data assimilation weakens it. These results imply the necessity of accurate representation of the cloud-radiative feedback strength for a short-term MJO forecast, and may be evidence to support the argument that this feedback is essential for the existence of MJO. Furthermore, this study demonstrates that the multi-reanalysis comparison of the analysis increment will provide useful information for identifying model biases and, potentially, for estimating parameters that are difficult to estimate solely from observation data, such as gross moist stability.
Mittmann, Nicole; Chan, Brian C; Craven, B Cathy; Isogai, Pierre K; Houghton, Pamela
2011-06-01
To evaluate the incremental cost-effectiveness of electrical stimulation (ES) plus standard wound care (SWC) as compared with SWC only in a spinal cord injury (SCI) population with grade III/IV pressure ulcers (PUs) from the public payer perspective. A decision analytic model was constructed for a 1-year time horizon to determine the incremental cost-effectiveness of ES plus SWC to SWC in a cohort of participants with SCI and grade III/IV PUs. Model inputs for clinical probabilities were based on published literature. Model inputs, namely clinical probabilities and direct health system and medical resources were based on a randomized controlled trial of ES plus SWC versus SWC. Costs (Can $) included outpatient (clinic, home care, health professional) and inpatient management (surgery, complications). One way and probabilistic sensitivity (1000 Monte Carlo iterations) analyses were conducted. The perspective of this analysis is from a Canadian public health system payer. Model target population was an SCI cohort with grade III/IV PUs. Not applicable. Incremental cost per PU healed. ES plus SWC were associated with better outcomes and lower costs. There was a 16.4% increase in the PUs healed and a cost savings of $224 at 1 year. ES plus SWC were thus considered a dominant economic comparator. Probabilistic sensitivity analysis resulted in economic dominance for ES plus SWC in 62%, with another 35% having incremental cost-effectiveness ratios of $50,000 or less per PU healed. The largest driver of the economic model was the percentage of PU healed with ES plus SWC. The addition of ES to SWC improved healing in grade III/IV PU and reduced costs in an SCI population. Copyright © 2011 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Borisenko, Oleg; Mann, Oliver; Duprée, Anna
2017-08-03
The objective was to evaluate cost-utility of bariatric surgery in Germany for a lifetime and 10-year horizon from a health care payer perspective. State-transition Markov model provided absolute and incremental clinical and monetary results. In the model, obese patients could undergo surgery, develop post-surgery complications, experience diabetes type II, cardiovascular diseases or die. German Quality Assurance in Bariatric Surgery Registry and literature sources provided data on clinical effectiveness and safety. The model considered three types of surgeries: gastric bypass, sleeve gastrectomy, and adjustable gastric banding. The model was extensively validated, and deterministic and probabilistic sensitivity analyses were performed to evaluate uncertainty. Cost data were obtained from German sources and presented in 2012 euros (€). Over 10 years, bariatric surgery led to the incremental cost of €2909, generated additional 0.03 years of life and 1.2 quality-adjusted life years (QALYs). Bariatric surgery was cost-effective at 10 years with an incremental cost-effectiveness ratio of €2457 per QALY. Over a lifetime, surgery led to savings of €8522 and generated an increment of 0.7 years of life or 3.2 QALYs. The analysis also depicted an association between surgery and a reduction of obesity-related adverse events (diabetes, cardiovascular disorders). Delaying surgery for up to 3 years, resulted in a reduction of life years and QALYs gained, in addition to a moderate reduction in associated healthcare costs. Bariatric surgery is cost-effective at 10 years post-surgery and may result in a substantial reduction in the financial burden on the healthcare system over the lifetime of the treated individuals. It is also observed that delays in the provision of surgery may lead to a significant loss of clinical benefits.
NASA Technical Reports Server (NTRS)
Smith, Mark S.; Bui, Trong T.; Garcia, Christian A.; Cumming, Stephen B.
2016-01-01
A pair of compliant trailing edge flaps was flown on a modified GIII airplane. Prior to flight test, multiple analysis tools of various levels of complexity were used to predict the aerodynamic effects of the flaps. Vortex lattice, full potential flow, and full Navier-Stokes aerodynamic analysis software programs were used for prediction, in addition to another program that used empirical data. After the flight-test series, lift and pitching moment coefficient increments due to the flaps were estimated from flight data and compared to the results of the predictive tools. The predicted lift increments matched flight data well for all predictive tools for small flap deflections. All tools over-predicted lift increments for large flap deflections. The potential flow and Navier-Stokes programs predicted pitching moment coefficient increments better than the other tools.
Li, Meina; Kwak, Keun-Chang; Kim, Youn Tae
2016-01-01
Conventionally, indirect calorimetry has been used to estimate oxygen consumption in an effort to accurately measure human body energy expenditure. However, calorimetry requires the subject to wear a mask that is neither convenient nor comfortable. The purpose of our study is to develop a patch-type sensor module with an embedded incremental radial basis function neural network (RBFNN) for estimating the energy expenditure. The sensor module contains one ECG electrode and a three-axis accelerometer, and can perform real-time heart rate (HR) and movement index (MI) monitoring. The embedded incremental network includes linear regression (LR) and RBFNN based on context-based fuzzy c-means (CFCM) clustering. This incremental network is constructed by building a collection of information granules through CFCM clustering that is guided by the distribution of error of the linear part of the LR model. PMID:27669249
Lakdawalla, Darius N; Chou, Jacquelyn W; Linthicum, Mark T; MacEwan, Joanna P; Zhang, Jie; Goldman, Dana P
2015-05-01
Surrogate end points may be used as proxy for more robust clinical end points. One prominent example is the use of progression-free survival (PFS) as a surrogate for overall survival (OS) in trials for oncologic treatments. Decisions based on surrogate end points may expedite regulatory approval but may not accurately reflect drug efficacy. Payers and clinicians must balance the potential benefits of earlier treatment access based on surrogate end points against the risks of clinical uncertainty. To present a framework for evaluating the expected net benefit or cost of providing early access to new treatments on the basis of evidence of PFS benefits before OS results are available, using non-small-cell lung cancer (NSCLC) as an example. A probabilistic decision model was used to estimate expected incremental social value of the decision to grant access to a new treatment on the basis of PFS evidence. The model analyzed a hypothetical population of patients with NSCLC who could be treated during the period between PFS and OS evidence publication. Estimates for delay in publication of OS evidence following publication of PFS evidence, expected OS benefit given PFS benefit, incremental cost of new treatment, and other parameters were drawn from the literature on treatment of NSCLC. Incremental social value of early access for each additional patient per month (in 2014 US dollars). For "medium-value" model parameters, early reimbursement of drugs with any PFS benefit yields an incremental social cost of more than $170,000 per newly treated patient per month. In contrast, granting early access on the basis of PFS benefit between 1 and 3.5 months produces more than $73,000 in incremental social value. Across the full range of model parameter values, granting access for drugs with PFS benefit between 3 and 3.5 months is robustly beneficial, generating incremental social value ranging from $38,000 to more than $1 million per newly treated patient per month, whereas access for all drugs with any PFS benefit is usually not beneficial. The value of providing access to new treatments on the basis of surrogate end points, and PFS in particular, likely varies considerably. Payers and clinicians should carefully consider how to use PFS data in balancing potential benefits against costs in each particular disease.
Araya, Ricardo; Flynn, Terry; Rojas, Graciela; Fritsch, Rosemarie; Simon, Greg
2006-08-01
The authors compared the incremental cost-effectiveness of a stepped-care, multicomponent program with usual care for the treatment of depressed women in primary care in Santiago, Chile. A cost-effectiveness study was conducted of a previous randomized controlled trial involving 240 eligible women with DSM-IV major depression who were selected from a consecutive sample of adult women attending primary care clinics. The patients were randomly allocated to usual care or a multicomponent stepped-care program led by a nonmedical health care worker. Depression-free days and health care costs derived from local sources were assessed after 3 and 6 months. A health service perspective was used in the economic analysis. Complete data were determined for 80% of the randomly assigned patients. After we adjusted for initial severity, women receiving the stepped-care program had a mean of 50 additional depression-free days over 6 months relative to patients allocated to usual care. The stepped-care program was marginally more expensive than usual care (an extra 216 Chilean pesos per depression-free day). There was a 90% probability that the incremental cost of obtaining an extra depression-free day with the intervention would not exceed 300 pesos (1.04 US dollars). The stepped-care program was significantly more effective and marginally more expensive than usual care for the treatment of depressed women in primary care. Small investments to improve depression appear to yield larger gains in poorer environments. Simple and inexpensive treatment programs tested in developing countries might provide good study models for developed countries.
A moist Boussinesq shallow water equations set for testing atmospheric models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zerroukat, M., E-mail: mohamed.zerroukat@metoffice.gov.uk; Allen, T.
The shallow water equations have long been used as an initial test for numerical methods applied to atmospheric models with the test suite of Williamson et al. being used extensively for validating new schemes and assessing their accuracy. However the lack of physics forcing within this simplified framework often requires numerical techniques to be reworked when applied to fully three dimensional models. In this paper a novel two-dimensional shallow water equations system that retains moist processes is derived. This system is derived from three-dimensional Boussinesq approximation of the hydrostatic Euler equations where, unlike the classical shallow water set, we allowmore » the density to vary slightly with temperature. This results in extra (or buoyancy) terms for the momentum equations, through which a two-way moist-physics dynamics feedback is achieved. The temperature and moisture variables are advected as separate tracers with sources that interact with the mean-flow through a simplified yet realistic bulk moist-thermodynamic phase-change model. This moist shallow water system provides a unique tool to assess the usually complex and highly non-linear dynamics–physics interactions in atmospheric models in a simple yet realistic way. The full non-linear shallow water equations are solved numerically on several case studies and the results suggest quite realistic interaction between the dynamics and physics and in particular the generation of cloud and rain. - Highlights: • Novel shallow water equations which retains moist processes are derived from the three-dimensional hydrostatic Boussinesq equations. • The new shallow water set can be seen as a more general one, where the classical equations are a special case of these equations. • This moist shallow water system naturally allows a feedback mechanism from the moist physics increments to the momentum via buoyancy. • Like full models, temperature and moistures are advected as tracers that interact through a simplified yet realistic phase-change model. • This model is a unique tool to test numerical methods for atmospheric models, and physics–dynamics coupling, in a very realistic and simple way.« less
Transport of Internetwork Magnetic Flux Elements in the Solar Photosphere
NASA Astrophysics Data System (ADS)
Agrawal, Piyush; Rast, Mark P.; Gošić, Milan; Bellot Rubio, Luis R.; Rempel, Matthias
2018-02-01
The motions of small-scale magnetic flux elements in the solar photosphere can provide some measure of the Lagrangian properties of the convective flow. Measurements of these motions have been critical in estimating the turbulent diffusion coefficient in flux-transport dynamo models and in determining the Alfvén wave excitation spectrum for coronal heating models. We examine the motions of internetwork flux elements in Hinode/Narrowband Filter Imager magnetograms and study the scaling of their mean squared displacement and the shape of their displacement probability distribution as a function of time. We find that the mean squared displacement scales super-diffusively with a slope of about 1.48. Super-diffusive scaling has been observed in other studies for temporal increments as small as 5 s, increments over which ballistic scaling would be expected. Using high-cadence MURaM simulations, we show that the observed super-diffusive scaling at short increments is a consequence of random changes in barycenter positions due to flux evolution. We also find that for long temporal increments, beyond granular lifetimes, the observed displacement distribution deviates from that expected for a diffusive process, evolving from Rayleigh to Gaussian. This change in distribution can be modeled analytically by accounting for supergranular advection along with granular motions. These results complicate the interpretation of magnetic element motions as strictly advective or diffusive on short and long timescales and suggest that measurements of magnetic element motions must be used with caution in turbulent diffusion or wave excitation models. We propose that passive tracer motions in measured photospheric flows may yield more robust transport statistics.
Power-Stepped HF Cross-Modulation Experiments: Simulations and Experimental Observations
NASA Astrophysics Data System (ADS)
Greene, S.; Moore, R. C.
2014-12-01
High frequency (HF) cross modulation experiments are a well established means for probing the HF-modified characteristics of the D-region ionosphere. The interaction between the heating wave and the probing pulse depends on the ambient and modified conditions of the D-region ionosphere. Cross-modulation observations are employed as a measure of the HF-modified refractive index. We employ an optimized version of Fejer's method that we developed during previous experiments. Experiments were performed in March 2013 at the High Frequency Active Auroral Research Program (HAARP) observatory in Gakona, Alaska. During these experiments, the power of the HF heating signal incrementally increased in order to determine the dependence of cross-modulation on HF power. We found that a simple power law relationship does not hold at high power levels, similar to previous ELF/VLF wave generation experiments. In this paper, we critically compare these experimental observations with the predictions of a numerical ionospheric HF heating model and demonstrate close agreement.
Effect of structural changes of lignocelluloses material upon pre-treatment using green solvents
NASA Astrophysics Data System (ADS)
Gunny, Ahmad Anas Nagoor; Arbain, Dachyar; Jamal, Parveen
2017-04-01
The Malaysia Biomass strategy 2020 stated that the key step of biofuel production from biomass lies on the pretreatment process. Conventional `pre-treatment' methods are `non-green" and costly. The recent green and cost-effective biomass pretreatment is using new generation of Ionic Liquids also known as Deep Eutectic Solvents (DESs). DESs are made of renewable components are cheaper, greener and the process synthesis are easier. Thus, the present paper concerns with the preparation of various combination of DES and to study the effect of DESs pretreatment process on microcrystalline cellulose (MCC), a model substrate. The crystalline structural changes were studied using using X-ray Diffraction Methods, Fourier Transformed Infrared Spectroscopy (FTIR) and surface area and pore size analysis. Results showed reduction of crystalline structure of MCC treated with the DESs and increment of surface area and pore size of MCC after pre-treatment process. These results indicated the DES has successfully converted the lignocelluloses material in the form suitable for hydrolysis and conversion to simple sugar.
Why has the tropical lower stratosphere stopped cooling since 1997?
NASA Astrophysics Data System (ADS)
Polvani, Lorenzo; Wang, Lei; Aquila, Valentina; Waugh, Darryn
2017-04-01
The impact of ozone depleting substances on global lower stratospheric temperature trends is widely recognized. In the tropics, however, understanding lower stratospheric temperature trends has proven more challenging. While the tropical lower stratospheric cooling observed from 1979 to 1997 has been linked to tropical ozone decreases, those ozone trends cannot be of chemical origin, as active chlorine is not abundant in the tropical lower stratosphere. The 1979-1997 tropical ozone trends are believed to originate from enhanced upwelling which, it is often stated, would be driven by increasing concentrations of well-mixed greenhouse gases. Using simple arguments based on observational evidence after 1997, combined with model integrations with incrementally added single forcings, we argue that ozone depleting substances, not well-mixed greenhouse gases, have been the primary driver of temperature and ozone trends in the tropical lower stratosphere until 1997, and this has occurred because ozone depleting substances are key drivers of tropical upwelling and of the entire Brewer-Dobson circulation.
Design of Low Complexity Model Reference Adaptive Controllers
NASA Technical Reports Server (NTRS)
Hanson, Curt; Schaefer, Jacob; Johnson, Marcus; Nguyen, Nhan
2012-01-01
Flight research experiments have demonstrated that adaptive flight controls can be an effective technology for improving aircraft safety in the event of failures or damage. However, the nonlinear, timevarying nature of adaptive algorithms continues to challenge traditional methods for the verification and validation testing of safety-critical flight control systems. Increasingly complex adaptive control theories and designs are emerging, but only make testing challenges more difficult. A potential first step toward the acceptance of adaptive flight controllers by aircraft manufacturers, operators, and certification authorities is a very simple design that operates as an augmentation to a non-adaptive baseline controller. Three such controllers were developed as part of a National Aeronautics and Space Administration flight research experiment to determine the appropriate level of complexity required to restore acceptable handling qualities to an aircraft that has suffered failures or damage. The controllers consist of the same basic design, but incorporate incrementally-increasing levels of complexity. Derivations of the controllers and their adaptive parameter update laws are presented along with details of the controllers implementations.
Contingency Planning for Planetary Rovers
NASA Technical Reports Server (NTRS)
Dearden, Richard; Meuleau, Nicolas; Ramakrishnan, Sailesh; Smith, David; Washington, Rich; Clancy, Daniel (Technical Monitor)
2002-01-01
There has been considerable work in AI on planning under uncertainty. But this work generally assumes an extremely simple model of action that does not consider continuous time and resources. These assumptions are not reasonable for a Mars rover, which must cope with uncertainty about the duration of tasks, the power required, the data storage necessary, along with its position and orientation. In this paper, we outline an approach to generating contingency plans when the sources of uncertainty involve continuous quantities such as time and resources. The approach involves first constructing a "seed" plan, and then incrementally adding contingent branches to this plan in order to improve utility. The challenge is to figure out the best places to insert contingency branches. This requires an estimate of how much utility could be gained by building a contingent branch at any given place in the seed plan. Computing this utility exactly is intractable, but we outline an approximation method that back propagates utility distributions through a graph structure similar to that of a plan graph.
NASA Technical Reports Server (NTRS)
Kohl, F. J.
1982-01-01
The methodology to predict deposit evolution (deposition rate and subsequent flow of liquid deposits) as a function of fuel and air impurity content and relevant aerodynamic parameters for turbine airfoils is developed in this research. The spectrum of deposition conditions encountered in gas turbine operations includes the mechanisms of vapor deposition, small particle deposition with thermophoresis, and larger particle deposition with inertial effects. The focus is on using a simplified version of the comprehensive multicomponent vapor diffusion formalism to make deposition predictions for: (1) simple geometry collectors; and (2) gas turbine blade shapes, including both developing laminar and turbulent boundary layers. For the gas turbine blade the insights developed in previous programs are being combined with heat and mass transfer coefficient calculations using the STAN 5 boundary layer code to predict vapor deposition rates and corresponding liquid layer thicknesses on turbine blades. A computer program is being written which utilizes the local values of the calculated deposition rate and skin friction to calculate the increment in liquid condensate layer growth along a collector surface.
Impacts of snow cover fraction data assimilation on modeled energy and moisture budgets
NASA Astrophysics Data System (ADS)
Arsenault, Kristi R.; Houser, Paul R.; De Lannoy, Gabriëlle J. M.; Dirmeyer, Paul A.
2013-07-01
Two data assimilation (DA) methods, a simple rule-based direct insertion (DI) approach and a one-dimensional ensemble Kalman filter (EnKF) method, are evaluated by assimilating snow cover fraction observations into the Community Land surface Model. The ensemble perturbation needed for the EnKF resulted in negative snowpack biases. Therefore, a correction is made to the ensemble bias using an approach that constrains the ensemble forecasts with a single unperturbed deterministic LSM run. This is shown to improve the final snow state analyses. The EnKF method produces slightly better results in higher elevation locations, whereas results indicate that the DI method has a performance advantage in lower elevation regions. In addition, the two DA methods are evaluated in terms of their overall impacts on the other land surface state variables (e.g., soil moisture) and fluxes (e.g., latent heat flux). The EnKF method is shown to have less impact overall than the DI method and causes less distortion of the hydrological budget. However, the land surface model adjusts more slowly to the smaller EnKF increments, which leads to smaller but slightly more persistent moisture budget errors than found with the DI updates. The DI method can remove almost instantly much of the modeled snowpack, but this also allows the model system to quickly revert to hydrological balance for nonsnowpack conditions.
Hoyle, Martin; Cresswell, James E
2007-09-07
We present a spatially implicit analytical model of forager movement, designed to address a simple scenario common in nature. We assume minimal depression of patch resources, and discrete foraging bouts, during which foragers fill to capacity. The model is particularly suitable for foragers that search systematically, foragers that deplete resources in a patch only incrementally, and for sit-and-wait foragers, where harvesting does not affect the rate of arrival of forage. Drawing on the theory of job search from microeconomics, we estimate the expected number of patches visited as a function of just two variables: the coefficient of variation of the rate of energy gain among patches, and the ratio of the expected time exploiting a randomly chosen patch and the expected time travelling between patches. We then consider the forager as a pollinator and apply our model to estimate gene flow. Under model assumptions, an upper bound for animal-mediated gene flow between natural plant populations is approximately proportional to the probability that the animal rejects a plant population. In addition, an upper bound for animal-mediated gene flow in any animal-pollinated agricultural crop from a genetically modified (GM) to a non-GM field is approximately proportional to the proportion of fields that are GM and the probability that the animal rejects a field.
Basal area increment and growth efficiency as functions of canopy dynamics and stem mechanics
Thomas J. Dean
2004-01-01
Crown and canopy structurecorrelate with growth efficiency and also determine stem size and taper as described by the uniform stress principle of stem formation. A regression model was derived from this principle that expresses basal area increment in terms of the amount and vertical distribution of leaf area and change in these variables during a growth period. This...
A Model Based Approach to Increase the Part Accuracy in Robot Based Incremental Sheet Metal Forming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meier, Horst; Laurischkat, Roman; Zhu Junhong
One main influence on the dimensional accuracy in robot based incremental sheet metal forming results from the compliance of the involved robot structures. Compared to conventional machine tools the low stiffness of the robot's kinematic results in a significant deviation of the planned tool path and therefore in a shape of insufficient quality. To predict and compensate these deviations offline, a model based approach, consisting of a finite element approach, to simulate the sheet forming, and a multi body system, modeling the compliant robot structure, has been developed. This paper describes the implementation and experimental verification of the multi bodymore » system model and its included compensation method.« less
NASA Astrophysics Data System (ADS)
Hemmat Esfe, Mohammad; Nadooshan, Afshin Ahmadi; Arshi, Ali; Alirezaie, Ali
2018-03-01
In this study, experimental data related to the Nusselt number and pressure drop of aqueous nanofluids of Titania is modeled and estimated by using ANN with 2 hidden layers and 8 neurons in each layer. Also in this study the effect of various effective variables in the Nusselt number and pressure drop is surveyed. This study indicated that the neural network modeling has been able to model experimental data with great accuracy. The modeling regression coefficient for the data of Nusselt number and relative pressure drop is 99.94% and 99.97% respectively. Besides, it represented that the increment of the Reynolds number and concentration made the increment of Nusselt number and pressure drop of aqueous nanofluid.
An Incremental Weighted Least Squares Approach to Surface Lights Fields
NASA Astrophysics Data System (ADS)
Coombe, Greg; Lastra, Anselmo
An Image-Based Rendering (IBR) approach to appearance modelling enables the capture of a wide variety of real physical surfaces with complex reflectance behaviour. The challenges with this approach are handling the large amount of data, rendering the data efficiently, and previewing the model as it is being constructed. In this paper, we introduce the Incremental Weighted Least Squares approach to the representation and rendering of spatially and directionally varying illumination. Each surface patch consists of a set of Weighted Least Squares (WLS) node centers, which are low-degree polynomial representations of the anisotropic exitant radiance. During rendering, the representations are combined in a non-linear fashion to generate a full reconstruction of the exitant radiance. The rendering algorithm is fast, efficient, and implemented entirely on the GPU. The construction algorithm is incremental, which means that images are processed as they arrive instead of in the traditional batch fashion. This human-in-the-loop process enables the user to preview the model as it is being constructed and to adapt to over-sampling and under-sampling of the surface appearance.
Gong, Yaping; Wu, Junfeng; Song, Lynda Jiwen; Zhang, Zhen
2017-05-01
Intrinsic and extrinsic motivational orientations often coexist and can serve important functions. We develop and test a model in which intrinsic and extrinsic motivational orientations interact positively to influence personal creativity goal. Personal creativity goal, in turn, has a positive relationship with incremental creativity and an inverted U-shaped relationship with radical creativity. In a pilot study, we validated the personal creativity goal measure using 180 (Sample 1) and 69 (Sample 2) employees from a consulting firm. In the primary study, we tested the overall model using a sample of 657 research and development employees and their direct supervisors from an automobile firm. The results support the hypothesized model and yield several new insights. Intrinsic and extrinsic motivational orientations synergize with each other to strengthen personal creativity goal. Personal creativity goal in turn benefits incremental and radical creativity, but only up to a certain point for the latter. In addition to its linear indirect relationship with incremental creativity, intrinsic motivational orientation has an inverted U-shaped indirect relationship with radical creativity via personal creativity goal. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Two models of minimalist, incremental syntactic analysis.
Stabler, Edward P
2013-07-01
Minimalist grammars (MGs) and multiple context-free grammars (MCFGs) are weakly equivalent in the sense that they define the same languages, a large mildly context-sensitive class that properly includes context-free languages. But in addition, for each MG, there is an MCFG which is strongly equivalent in the sense that it defines the same language with isomorphic derivations. However, the structure-building rules of MGs but not MCFGs are defined in a way that generalizes across categories. Consequently, MGs can be exponentially more succinct than their MCFG equivalents, and this difference shows in parsing models too. An incremental, top-down beam parser for MGs is defined here, sound and complete for all MGs, and hence also capable of parsing all MCFG languages. But since the parser represents its grammar transparently, the relative succinctness of MGs is again evident. Although the determinants of MG structure are narrowly and discretely defined, probabilistic influences from a much broader domain can influence even the earliest analytic steps, allowing frequency and context effects to come early and from almost anywhere, as expected in incremental models. Copyright © 2013 Cognitive Science Society, Inc.
NASA Astrophysics Data System (ADS)
Kacarab, Mary; Li, Lijie; Carter, William P. L.; Cocker, David R., III
2016-04-01
Two different surrogate mixtures of anthropogenic and biogenic volatile organic compounds (VOCs) were developed to study secondary organic aerosol (SOA) formation at atmospheric reactivities similar to urban regions with varying biogenic influence levels. Environmental chamber simulations were designed to enable the study of the incremental aerosol formation from select anthropogenic (m-Xylene, 1,2,4-Trimethylbenzene, and 1-Methylnaphthalene) and biogenic (α-pinene) precursors under the chemical reactivity set by the two different surrogate mixtures. The surrogate reactive organic gas (ROG) mixtures were based on that used to develop the maximum incremental reactivity (MIR) factors for evaluation of O3 forming potential. Multiple incremental aerosol formation experiments were performed in the University of California Riverside (UCR) College of Engineering Center for Environmental Research and Technology (CE-CERT) dual 90m3 environmental chambers. Incremental aerosol yields were determined for each of the VOCs studied and compared to yields found from single precursor studies. Aerosol physical properties of density, volatility, and hygroscopicity were monitored throughout experiments. Bulk elemental chemical composition from high-resolution time of flight aerosol mass spectrometer (HR-ToF-AMS) data will also be presented. Incremental yields and SOA chemical and physical characteristics will be compared with data from previous single VOC studies conducted for these aerosol precursors following traditional VOC/NOx chamber experiments. Evaluation of the incremental effects of VOCs on SOA formation and properties are paramount in evaluating how to best extrapolate environmental chamber observations to the ambient atmosphere and provides useful insights into current SOA formation models. Further, the comparison of incremental SOA from VOCs in varying surrogate urban atmospheres (with and without strong biogenic influence) allows for a unique perspective on the impacts different compounds have on aerosol formation in different urban regions.
A new pattern associative memory model for image recognition based on Hebb rules and dot product
NASA Astrophysics Data System (ADS)
Gao, Mingyue; Deng, Limiao; Wang, Yanjiang
2018-04-01
A great number of associative memory models have been proposed to realize information storage and retrieval inspired by human brain in the last few years. However, there is still much room for improvement for those models. In this paper, we extend a binary pattern associative memory model to accomplish real-world image recognition. The learning process is based on the fundamental Hebb rules and the retrieval is implemented by a normalized dot product operation. Our proposed model can not only fulfill rapid memory storage and retrieval for visual information but also have the ability on incremental learning without destroying the previous learned information. Experimental results demonstrate that our model outperforms the existing Self-Organizing Incremental Neural Network (SOINN) and Back Propagation Neuron Network (BPNN) on recognition accuracy and time efficiency.
Chang, Sheng; Bi, Yunlong; Meng, Xiangwei; Qu, Lin; Cao, Yang
2018-03-21
The blood-spinal cord barrier (BSCB) plays a key role in maintaining the microenvironment and is primarily composed of tight junction proteins and nonfenestrated capillary endothelial cells. After injury, BSCB damage results in increasing capillary permeability and release of inflammatory factors. Recent studies have reported that haem oxygenase-1 (HO-1) fragments lacking 23 amino acids at the C-terminus (HO-1C[INCREMENT]23) exert novel anti-inflammatory and antioxidative effects in vitro. However, no study has identified the role of HO-1C[INCREMENT]23 in vivo. We aimed to investigate the protective effects of HO-1C[INCREMENT]23 on the BSCB after spinal cord injury (SCI) in a rat model. Here, adenoviral HO-1C[INCREMENT]23 (Ad-GFP-HO-1C[INCREMENT]23) was intrathecally injected into the 10th thoracic spinal cord segment (T10) 7 days before SCI. In addition, nuclear and cytoplasmic extraction and immunofluorescence staining of HO-1 were used to examine the effect of Ad-GFP-HO-1C[INCREMENT]23 on HO-1 nuclear translocation. Evan's blue staining served as an index of capillary permeability and was detected by fluorescence microscopy at 633 nm. Western blotting was also performed to detect tight junction protein expression. The Basso, Beattie and Bresnahan score was used to evaluate kinematic functional recovery through the 28th day after SCI. In this study, the Ad-GFP-HO-1C[INCREMENT]23 group showed better kinematic functional recovery after SCI than the Ad-GFP and Vehicle groups, as well as smaller reductions in TJ proteins and capillary permeability compared with those in the Ad-GFP and Vehicle groups. These findings indicated that Ad-GFP-HO-1C[INCREMENT]23 might have a potential therapeutic effect that is mediated by its protection of BSCB integrity.
Arnold, Elizabeth; Yuan, Yong; Iloeje, Uchenna; Cook, Greg
2008-01-01
Chronic hepatitis B (CHB) virus infection is a major global healthcare problem. The recent introduction of entecavir in Australia for the treatment of CHB patients in the naive treatment setting has triggered significant optimism with regards to improved clinical outcomes for CHB patients. To estimate, from an Australian healthcare perspective, the cost effectiveness of entecavir 0.5 mg/day versus lamivudine 100 mg/day in the treatment of CHB patients naive to nucleos(t)ide therapy. A cost-utility analysis to project the clinical and economic outcomes associated with CHB disease and treatment was conducted by developing two decision-tree models specific to hepatitis B e antigen-positive (HBeAg+ve) and HBeAg-ve CHB patient subsets. This analysis was constructed using the Australian payer perspective of direct costs and outcomes, with indirect medical costs and lost productivity not being included. The study population comprised a hypothetical cohort of 1000 antiviral treatment-naive CHB patients who received either entecavir 0.5 mg/day or lamivudine 100 mg/day at model entry. The population of patients used in this analysis was representative of those patients likely to receive initial antiviral therapy in clinical practice in Australia. The long-term cost effectiveness of entecavir compared with lamivudine in the first-line treatment of CHB patients was expressed as an incremental cost per life-year gained (LYG) or QALY gained. Results revealed that the availability of entecavir 0.5 mg/day as part of the Australian hepatologist's treatment armamentarium should result in significantly lower future rates of compensated cirrhosis (CC), decompensated cirrhosis (DC), and hepatocellular carcinoma (HCC) events (i.e. 54 fewer cases of CC, seven fewer cases of DC, and 20 fewer cases of HCC over the model's timeframe for HBeAg+ve CHB patients, and 69 fewer cases of CC, eight fewer cases of DC and 25 fewer cases of HCC over the model's timeframe for HBeAg-ve CHB patients). Compared with lamivudine 100 mg/day, entecavir 0.5 mg/day generated an estimated incremental cost per LYG of Australian dollars ($A, year 2006 values) 5046 and an estimated incremental cost per QALY of $A5952 in the HBeAg+ve CHB patient population, an estimated incremental cost per LYG of $A7063 and an estimated incremental cost per QALY of $A8003 in the HBeAg-ve CHB patient population, and an overall estimated incremental cost per LYG of $A5853 and an estimated incremental cost per QALY of $A6772 in the general CHB population. The availability of entecavir in Australian clinical practice should make long-term suppression of hepatitis B virus replication increasingly attainable, resulting in fewer CHB sequelae, at an acceptable financial cost.
Ruffing, Stephanie; Wach, F. -Sophie; Spinath, Frank M.; Brünken, Roland; Karbach, Julia
2015-01-01
Recent research has revealed that learning behavior is associated with academic achievement at the college level, but the impact of specific learning strategies on academic success as well as gender differences therein are still not clear. Therefore, the aim of this study was to investigate gender differences in the incremental contribution of learning strategies over general cognitive ability in the prediction of academic achievement. The relationship between these variables was examined by correlation analyses. A set of t-tests was used to test for gender differences in learning strategies, whereas structural equation modeling as well as multi-group analyses were applied to investigate the incremental contribution of learning strategies for male and female students’ academic performance. The sample consisted of 461 students (mean age = 21.2 years, SD = 3.2). Correlation analyses revealed that general cognitive ability as well as the learning strategies effort, attention, and learning environment were positively correlated with academic achievement. Gender differences were found in the reported application of many learning strategies. Importantly, the prediction of achievement in structural equation modeling revealed that only effort explained incremental variance (10%) over general cognitive ability. Results of multi-group analyses showed no gender differences in this prediction model. This finding provides further knowledge regarding gender differences in learning research and the specific role of learning strategies for academic achievement. The incremental assessment of learning strategy use as well as gender-differences in their predictive value contributes to the understanding and improvement of successful academic development. PMID:26347698
Ruffing, Stephanie; Wach, F-Sophie; Spinath, Frank M; Brünken, Roland; Karbach, Julia
2015-01-01
Recent research has revealed that learning behavior is associated with academic achievement at the college level, but the impact of specific learning strategies on academic success as well as gender differences therein are still not clear. Therefore, the aim of this study was to investigate gender differences in the incremental contribution of learning strategies over general cognitive ability in the prediction of academic achievement. The relationship between these variables was examined by correlation analyses. A set of t-tests was used to test for gender differences in learning strategies, whereas structural equation modeling as well as multi-group analyses were applied to investigate the incremental contribution of learning strategies for male and female students' academic performance. The sample consisted of 461 students (mean age = 21.2 years, SD = 3.2). Correlation analyses revealed that general cognitive ability as well as the learning strategies effort, attention, and learning environment were positively correlated with academic achievement. Gender differences were found in the reported application of many learning strategies. Importantly, the prediction of achievement in structural equation modeling revealed that only effort explained incremental variance (10%) over general cognitive ability. Results of multi-group analyses showed no gender differences in this prediction model. This finding provides further knowledge regarding gender differences in learning research and the specific role of learning strategies for academic achievement. The incremental assessment of learning strategy use as well as gender-differences in their predictive value contributes to the understanding and improvement of successful academic development.
Gupte-Singh, Komal; Singh, Rakesh R; Lawson, Kenneth A
2017-04-01
To determine the adjusted incremental total costs (direct and indirect) for patients (aged 3-17 years) with attention-deficit/hyperactivity disorder (ADHD) and the differences in the adjusted incremental direct expenditures with respect to age groups (preschoolers, 0-5 years; children, 6-11 years; and adolescents, 12-17 years). The 2011 Medical Expenditure Panel Survey was used as the data source. The ADHD cohort consisted of patients aged 0 to 17 years with a diagnosis of ADHD, whereas the non-ADHD cohort consisted of subjects in the same age range without a diagnosis of ADHD. The annual incremental total cost of ADHD is composed of the incremental direct expenditures and indirect costs. A two-part model with a logistic regression (first part) and a generalized linear model (second part) was used to estimate the incremental costs of ADHD while controlling for patient characteristics and access-to-care variables. The 2011 Medical Expenditure Panel Survey database included 9108 individuals aged 0 to 17 years, with 458 (5.0%) having an ADHD diagnosis. The ADHD cohort was 4.90 times more likely (95% confidence interval [CI] 2.97-8.08; P < 0.001) than the non-ADHD cohort to have an expenditure of at least $1, and among those with positive expenditures, the ADHD cohort had 58.4% higher expenditures than the non-ADHD cohort (P < 0.001). The estimated adjusted annual total incremental cost of ADHD was $949.24 (95% CI $593.30-$1305.18; P < 0.001). The adjusted annual incremental total direct expenditure for ADHD was higher among preschoolers ($989.34; 95% CI $402.70-$1575.98; P = 0.001) than among adolescents ($894.94; 95% CI $428.16-$1361.71; P < 0.001) or children ($682.71; 95% CI $347.94-$1017.48; P < 0.001). Early diagnosis and use of evidence-based treatments may address the substantial burden of ADHD. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Hammer, Joseph H; Brenner, Rachel E
2017-07-14
This study extended our theoretical and applied understanding of gratitude through a psychometric examination of the most popular multidimensional measure of gratitude, the Gratitude, Resentment, and Appreciation Test-Revised Short form (GRAT-RS). Namely, the dimensionality of the GRAT-RS, the model-based reliability of the GRAT-RS total score and 3 subscale scores, and the incremental evidence of validity for its latent factors were assessed. Dimensionality measures (e.g., explained common variance) and confirmatory factor analysis results with 426 community adults indicated that the GRAT-RS conformed to a multidimensional (bifactor) structure. Model-based reliability measures (e.g., omega hierarchical) provided support for the future use of the Lack of a Sense of Deprivation raw subscale score, but not for the raw GRAT-RS total score, Simple Appreciation subscale score, or Appreciation of Others subscale score. Structural equation modeling results indicated that only the general gratitude factor and the lack of a sense of deprivation specific factor accounted for significant variance in life satisfaction, positive affect, and distress. These findings support the 3 pillars of gratitude conceptualization of gratitude over competing conceptualizations, the position that the specific forms of gratitude are theoretically distinct, and the argument that appreciation is distinct from the superordinate construct of gratitude.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dalimunthe, Amty Ma’rufah Ardhiyah; Mindara, Jajat Yuda; Panatarani, Camellia
Smart grid and distributed generation should be the solution of the global climate change and the crisis energy of the main source of electrical power generation which is fossil fuel. In order to meet the rising electrical power demand and increasing service quality demands, as well as reduce pollution, the existing power grid infrastructure should be developed into a smart grid and distributed power generation which provide a great opportunity to address issues related to energy efficiency, energy security, power quality and aging infrastructure systems. The conventional of the existing distributed generation system is an AC grid while for amore » renewable resources requires a DC grid system. This paper explores the model of smart DC grid by introducing a model of smart DC grid with the stable power generation give a minimal and compressed circuitry that can be implemented very cost-effectively with simple components. The PC based application software for controlling was developed to show the condition of the grid and to control the grid become ‘smart’. The model is then subjected to a severe system perturbation, such as incremental change in loads to test the performance of the system again stability. It is concluded that the system able to detect and controlled the voltage stability which indicating the ability of power system to maintain steady voltage within permissible rangers in normal condition.« less
Assimilation of ZDR Columns for Improving the Spin-Up and Forecasts of Convective Storms
NASA Astrophysics Data System (ADS)
Carlin, J.; Gao, J.; Snyder, J.; Ryzhkov, A.
2017-12-01
A primary motivation for assimilating radar reflectivity data is the reduction of spin-up time for modeled convection. To accomplish this, cloud analysis techniques seek to induce and sustain convective updrafts in storm-scale models by inserting temperature and moisture increments and hydrometeor mixing ratios into the model analysis from simple relations with reflectivity. Polarimetric radar data provide additional insight into the microphysical and dynamic structure of convection. In particular, the radar meteorology community has known for decades that convective updrafts cause, and are typically co-located with, differential reflectivity (ZDR) columns - vertical protrusions of enhanced ZDR above the environmental 0˚C level. Despite these benefits, limited work has been done thus far to assimilate dual-polarization radar data into numerical weather prediction models. In this study, we explore the utility of assimilating ZDR columns to improve storm-scale model analyses and forecasts of convection. We modify the existing Advanced Regional Prediction System's (ARPS) cloud analysis routine to adjust model temperature and moisture state variables using detected ZDR columns as proxies for convective updrafts, and compare the resultant cycled analyses and forecasts with those from the original reflectivity-based cloud analysis formulation. Results indicate qualitative and quantitative improvements from assimilating ZDR columns, including more coherent analyzed updrafts, forecast updraft helicity swaths that better match radar-derived rotation tracks, more realistic forecast reflectivity fields, and larger equitable threat scores. These findings support the use of dual-polarization radar signatures to improve storm-scale model analyses and forecasts.
NASA Astrophysics Data System (ADS)
Molz, F. J.; Kozubowski, T. J.; Miller, R. S.; Podgorski, K.
2005-12-01
The theory of non-stationary stochastic processes with stationary increments gives rise to stochastic fractals. When such fractals are used to represent measurements of (assumed stationary) physical properties, such as ln(k) increments in sediments or velocity increments "delta(v)" in turbulent flows, the resulting measurements exhibit scaling, either spatial, temporal or both. (In the present context, such scaling refers to systematic changes in the statistical properties of the increment distributions, such as variance, with the lag size over which the increments are determined.) Depending on the class of probability density functions (PDFs) that describe the increment distributions, the resulting stochastic fractals will display different properties. Until recently, the stationary increment process was represented using mainly Gaussian, Gamma or Levy PDFs. However, measurements in both sediments and fluid turbulence indicate that these PDFs are not commonly observed. Based on recent data and previous studies referenced and discussed in Meerschaert et al. (2004) and Molz et al. (2005), the measured increment PDFs display an approximate double exponential (Laplace) shape at smaller lags, and this shape evolves towards Gaussian at larger lags. A model for this behavior based on the Generalized Laplace PDF family called fractional Laplace motion, in analogy with its Gaussian counterpart - fractional Brownian motion, has been suggested (Meerschaert et al., 2004) and the necessary mathematics elaborated (Kozubowski et al., 2005). The resulting stochastic fractal is not a typical self-affine monofractal, but it does exhibit monofractal-like scaling in certain lag size ranges. To date, it has been shown that the Generalized Laplace family fits ln(k) increment distributions and reproduces the original 1941 theory of Kolmogorov when applied to Eulerian turbulent velocity increments. However, to make a physically self-consistent application to turbulence, one must adopt a Lagrangian viewpoint, and the details of this approach are still being developed. The potential analogy between turbulent delta(v) and sediment delta[ln(k)] is intriguing, and perhaps offers insight into the underlying chaotic processes that constitute turbulence and may result also in the pervasive heterogeneity observed in most natural sediments. Properties of the new Laplace fractal are presented, and potential applications to both sediments and fluid turbulence are discussed.
Equilibrium Configurations of a Fiber in a Flow
NASA Astrophysics Data System (ADS)
Guerron, Pamela; Berghout, Christopher; Nita, Bogdan; Vaidya, Ashwin
2013-11-01
The aim of this study is to understand the coupled dynamics of flexible fibers in a fluid flow. In particular, we examine the equilibrium configurations of the fiber with changing Reynolds numbers, orientations and lengths of the fiber. Our study is motivated by biological phenomena such as ciliary bending, flexing of plants and trees in winds etc. Our approach to resolving this problem has been threefold: experimental, numerical and theoretical. In our experiments we create physical models of variable length fibers inserted into a basal body structure, which is then suspended in a flow tank and positioned at different angles. The structure (fibers) are subjected to different velocities of water flow, ranging from 0m/s to 0.53 m/s in increments of 0.038 m/s. The results of the experiment were analyzed using Adobe Photoshop and the effect of the above mentioned parameters upon the shape of the fiber is analyzed. In addition, we also simulate this problem using the software Comsol and also create a simple, toy mathematical model incorporating the competing effects of tension and fluid drag on the fiber to obtain a closed form expression. Our various approaches point to consistent results.
NASA Astrophysics Data System (ADS)
Zulkifli, Muhammad Nubli; Ilias, Izzudin; Abas, Amir; Muhamad, Wan Mansor Wan
2017-09-01
Thermoelectric generator (TEG) is the solid state device that converts the thermal gradient into electrical energy. TEG is widely used as the renewable energy source especially for the electronic equipment that operates with the small amount of electrical power. In the present analysis, the finite element analysis (FEA) using ANSYS is conducted on a model of the TEG attached with the aluminium, Al plate on the hot side of the TEG. This simple construction of TEG model was built in order to be used in the waste heat recovery of solar application. It was shown that the changes of the area and thickness of the Al plate increased the temperature gradient between hot and cold sides of TEG. This directly increase the voltage produced by the TEG based on the Seeback effect. The increase of the thermal gradient due to the increment of thickness and width of Al plate might be because of the increase of thermal resistance of Al plate. This finding provides a valuable data in design process to build a good TEG attached with Al plate for the waste heat recovery of solar application.
NASA Technical Reports Server (NTRS)
Avouac, Jean-Philippe; Peltzer, Gilles
1993-01-01
The northern piedmont of the western Kunlun mountains (Xinjiang, China) is marked at its easternmost extremity, south of the Hotan-Qira oases, by a set of normal faults trending N50E for nearly 70 km. Conspicuous on Landsat and SPOT images, these faults follow the southeastern border of a deep flexural basin and may be related to the subsidence of the Tarim platform loaded by the western Kunlun northward overthrust. The Hotan-Qira normal fault system vertically offsets the piedmont slope by 70 m. Highest fault scarps reach 20 m and often display evidence for recent reactivations about 2 m high. Successive stream entrenchments in uplifted footwallls have formed inset terraces. We have leveled topographic profiles across fault scarps and transverse abandoned terrace risers. The state of degradation of each terrace edge has been characterized by a degradation coefficient tau, derived by comparison with analytical erosion models. Edges of highest abandoned terraces yield a degradation coefficient of 33 +/- 4 sq.m. Profiles of cumulative fault scarps have been analyzed in a similar way using synthetic profiles generated with a simple incremental fault scarp model.
An, R; Xue, H; Wang, L; Wang, Y
2017-09-22
This study aimed to project the societal cost and benefit of an expansion of a water access intervention that promotes lunchtime plain water consumption by placing water dispensers in New York school cafeterias to all schools nationwide. A decision model was constructed to simulate two events under Markov chain processes - placing water dispensers at lunchtimes in school cafeterias nationwide vs. no action. The incremental cost pertained to water dispenser purchase and maintenance, whereas the incremental benefit was resulted from cases of childhood overweight/obesity prevented and corresponding lifetime direct (medical) and indirect costs saved. Based on the decision model, the estimated incremental cost of the school-based water access intervention is $18 per student, and the corresponding incremental benefit is $192, resulting in a net benefit of $174 per student. Subgroup analysis estimates the net benefit per student to be $199 and $149 among boys and girls, respectively. Nationwide adoption of the intervention would prevent 0.57 million cases of childhood overweight, resulting in a lifetime cost saving totalling $13.1 billion. The estimated total cost saved per dollar spent was $14.5. The New York school-based water access intervention, if adopted nationwide, may have a considerably favourable benefit-cost portfolio. © 2017 World Obesity Federation.
Real-time model learning using Incremental Sparse Spectrum Gaussian Process Regression.
Gijsberts, Arjan; Metta, Giorgio
2013-05-01
Novel applications in unstructured and non-stationary human environments require robots that learn from experience and adapt autonomously to changing conditions. Predictive models therefore not only need to be accurate, but should also be updated incrementally in real-time and require minimal human intervention. Incremental Sparse Spectrum Gaussian Process Regression is an algorithm that is targeted specifically for use in this context. Rather than developing a novel algorithm from the ground up, the method is based on the thoroughly studied Gaussian Process Regression algorithm, therefore ensuring a solid theoretical foundation. Non-linearity and a bounded update complexity are achieved simultaneously by means of a finite dimensional random feature mapping that approximates a kernel function. As a result, the computational cost for each update remains constant over time. Finally, algorithmic simplicity and support for automated hyperparameter optimization ensures convenience when employed in practice. Empirical validation on a number of synthetic and real-life learning problems confirms that the performance of Incremental Sparse Spectrum Gaussian Process Regression is superior with respect to the popular Locally Weighted Projection Regression, while computational requirements are found to be significantly lower. The method is therefore particularly suited for learning with real-time constraints or when computational resources are limited. Copyright © 2012 Elsevier Ltd. All rights reserved.
Fergus, Thomas A; Kelley, Lance P; Griggs, Jackson O
2017-10-01
There is growing support for a bifactor conceptualization of the Anxiety Sensitivity Index-3 (ASI-3; Taylor et al., 2007), consisting of a General factor and 3 domain-specific factors (i.e., Physical, Cognitive, Social). Earlier studies supporting a bifactor model of the ASI-3 used samples that consisted of predominantly White respondents. In addition, extant research has yet to support the incremental validity of the Physical domain-specific factor while controlling for the General factor. The present study is an examination of a bifactor model of the ASI-3 and the measurement invariance of that model among an ethnoracially diverse sample of primary-care patients (N = 533). Results from multiple-group confirmatory factor analysis supported the configural and metric/scalar invariance of the bifactor model of the ASI-3 across self-identifying Black, Latino, and White respondents. The Physical domain-specific factor accounted for unique variance in an index of health anxiety beyond the General factor. These results provide support for the generalizability of a bifactor model of the ASI-3 across 3 ethnoracial groups, as well as indication of the incremental explanatory power of the Physical domain-specific factor. Study implications are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Thermal performance modeling of NASA s scientific balloons
NASA Astrophysics Data System (ADS)
Franco, H.; Cathey, H.
The flight performance of a scientific balloon is highly dependant on the interaction between the balloon and its environment. The balloon is a thermal vehicle. Modeling a scientific balloon's thermal performance has proven to be a difficult analytical task. Most previous thermal models have attempted these analyses by using either a bulk thermal model approach, or by simplified representations of the balloon. These approaches to date have provided reasonable, but not very accurate results. Improvements have been made in recent years using thermal analysis tools developed for the thermal modeling of spacecraft and other sophisticated heat transfer problems. These tools, which now allow for accurate modeling of highly transmissive materials, have been applied to the thermal analysis of NASA's scientific balloons. A research effort has been started that utilizes the "Thermal Desktop" addition to AUTO CAD. This paper will discuss the development of thermal models for both conventional and Ultra Long Duration super-pressure balloons. This research effort has focused on incremental analysis stages of development to assess the accuracy of the tool and the required model resolution to produce usable data. The first stage balloon thermal analyses started with simple spherical balloon models with a limited number of nodes, and expanded the number of nodes to determine required model resolution. These models were then modified to include additional details such as load tapes. The second stage analyses looked at natural shaped Zero Pressure balloons. Load tapes were then added to these shapes, again with the goal of determining the required modeling accuracy by varying the number of gores. The third stage, following the same steps as the Zero Pressure balloon efforts, was directed at modeling super-pressure pumpkin shaped balloons. The results were then used to develop analysis guidelines and an approach for modeling balloons for both simple first order estimates and detailed full models. The development of the radiative environment and program input files, the development of the modeling techniques for balloons, and the development of appropriate data output handling techniques for both the raw data and data plots will be discussed. A general guideline to match predicted balloon performance with known flight data will also be presented. One long-term goal of this effort is to develop simplified approaches and techniques to include results in performance codes being developed.
Incremental dynamical downscaling for probabilistic analysis based on multiple GCM projections
NASA Astrophysics Data System (ADS)
Wakazuki, Y.
2015-12-01
A dynamical downscaling method for probabilistic regional scale climate change projections was developed to cover an uncertainty of multiple general circulation model (GCM) climate simulations. The climatological increments (future minus present climate states) estimated by GCM simulation results were statistically analyzed using the singular vector decomposition. Both positive and negative perturbations from the ensemble mean with the magnitudes of their standard deviations were extracted and were added to the ensemble mean of the climatological increments. The analyzed multiple modal increments were utilized to create multiple modal lateral boundary conditions for the future climate regional climate model (RCM) simulations by adding to an objective analysis data. This data handling is regarded to be an advanced method of the pseudo-global-warming (PGW) method previously developed by Kimura and Kitoh (2007). The incremental handling for GCM simulations realized approximated probabilistic climate change projections with the smaller number of RCM simulations. Three values of a climatological variable simulated by RCMs for a mode were used to estimate the response to the perturbation of the mode. For the probabilistic analysis, climatological variables of RCMs were assumed to show linear response to the multiple modal perturbations, although the non-linearity was seen for local scale rainfall. Probability of temperature was able to be estimated within two modes perturbation simulations, where the number of RCM simulations for the future climate is five. On the other hand, local scale rainfalls needed four modes simulations, where the number of the RCM simulations is nine. The probabilistic method is expected to be used for regional scale climate change impact assessment in the future.
Wagner, Monika; Lavoie, Louis; Goetghebeur, Mireille
2014-03-01
Clostridium difficile infection (CDI) represents a public health problem with increasing incidence and severity. To evaluate the clinical and economic consequences of vancomycin compared with fidaxomicin in the treatment of CDI from the Canadian health care system perspective. A decision-tree model was developed to compare vancomycin and fidaxomicin for the treatment of severe CDI. The model assumed identical initial cure rates and included first recurrent episodes of CDI (base case). Treatment of patients presenting with recurrent CDI was examined as an alternative analysis. Costs included were for study medication, physician services and hospitalization. Cost effectiveness was measured as incremental cost per recurrence avoided. Sensitivity analyses of key input parameters were performed. In a cohort of 1000 patients with an initial episode of severe CDI, treatment with fidaxomicin led to 137 fewer recurrences at an incremental cost of $1.81 million, resulting in an incremental cost of $13,202 per recurrence avoided. Among 1000 patients with recurrent CDI, 113 second recurrences were avoided at an incremental cost of $18,190 per second recurrence avoided. Incremental costs per recurrence avoided increased with increasing proportion of cases caused by the NAP1/B1/027 strain. Results were sensitive to variations in recurrence rates and treatment duration but were robust to variations in other parameters. The use of fidaxomicin is associated with a cost increase for the Canadian health care system. Clinical benefits of fidaxomicin compared with vancomycin depend on the proportion of cases caused by the NAP1/B1/027 strain in patients with severe CDI.
Quist, M.C.; Guy, C.S.; Schultz, R.D.; Stephen, J.L.
2003-01-01
We compared the growth of walleyes Stizostedion vitreum in Kansas to that of other populations throughout North America and determined the effects of the abundance of gizzard shad Dorosoma cepedianum and temperature on the growth of walleyes in Kansas reservoirs. Age was estimated from scales and otoliths collected from walleyes (N = 2,072) sampled with gill nets from eight Kansas reservoirs during fall in 1991-1999. Age-0 gizzard shad abundance was indexed based on summer seining information, and temperature data were obtained from the National Oceanic and Atmospheric Administration. Parameter estimates of von Bertalanffy growth models indicated that the growth of walleyes in Kansas was more similar to that of southern latitude populations (e.g., Mississippi and Texas) than to that of northern (e.g., Manitoba, Minnesota and South Dakota) or middle latitude (e.g., Colorado and Iowa) populations. Northern and middle latitude populations had lower mean back-calculated lengths at age 1, lower growth coefficients, and greater longevity than southern and Kansas populations. A relative growth index (RGI; [Lt/Ls ] ?? 100, where Lt is the observed length at age and Ls is the age-specific standard length derived from a pooled von Bertalanffy growth model) and standardized percentile values (percentile values of mean back-calculated lengths at age) indicated that the growth of walleyes in Kansas was above average compared with that of other populations in North America. The annual growth increments of Kansas walleyes were more variable among years than among reservoirs. The growth increments of age-0 and age-1 walleyes were positively related to the catch rates of gizzard shad smaller than 80 mm, whereas the growth of age-2 and age-3 walleyes was inversely related to mean summer air temperature. Our results provide a framework for comparing North American walleye populations, and our proposed RGI provides a simple, easily interpreted index of growth.
Mandavia, Amar D; Bonanno, George A
2018-04-29
To determine whether there were incremental mental health impacts, specifically on depression trajectories, as a result of the 2008 economic crisis (the Great Recession) and subsequent Hurricane Sandy. Using latent growth mixture modeling and the ORANJ BOWL dataset, we examined prospective trajectories of depression among older adults (mean age, 60.67; SD, 6.86) who were exposed to the 2 events. We also collected community economic and criminal justice data to examine their impact upon depression trajectories. Participants (N=1172) were assessed at 3 times for affect, successful aging, and symptoms of depression. We additionally assessed posttraumatic stress disorder (PTSD) symptomology after Hurricane Sandy. We identified 3 prospective trajectories of depression. The majority (83.6%) had no significant change in depression from before to after these events (resilience), while 7.2% of the sample increased in depression incrementally after each event (incremental depression). A third group (9.2%) went from high to low depression symptomology following the 2 events (depressive-improving). Only those in the incremental depression group had significant PTSD symptoms following Hurricane Sandy. We identified a small group of individuals for whom the experience of multiple stressful events had an incremental negative effect on mental health outcomes. These results highlight the importance of understanding the perseveration of depression symptomology from one event to another. (Disaster Med Public Health Preparedness. 2018;page 1 of 10).
A Numerical Process Control Method for Circular-Tube Hydroforming Prediction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Kenneth I.; Nguyen, Ba Nghiep; Davies, Richard W.
2004-03-01
This paper describes the development of a solution control method that tracks the stresses, strains and mechanical behavior of a tube during hydroforming to estimate the proper axial feed (end-feed) and internal pressure loads through time. The analysis uses the deformation theory of plasticity and Hill?s criterion to describe the plastic flow. Before yielding, the pressure and end-feed increments are estimated based on the initial tube geometry, elastic properties and yield stress. After yielding, the pressure increment is calculated based on the tube geometry at the previous solution increment and the current hoop stress increment. The end-feed increment is computedmore » from the increment of the axial plastic strain. Limiting conditions such as column buckling (of long tubes), local axi-symmetric wrinkling of shorter tubes, and bursting due to localized wall thinning are considered. The process control method has been implemented in the Marc finite element code. Hydroforming simulations using this process control method were conducted to predict the load histories for controlled expansion of 6061-T4 aluminum tubes within a conical die shape and under free hydroforming conditions. The predicted loading paths were transferred to the hydroforming equipment to form the conical and free-formed tube shapes. The model predictions and experimental results are compared for deformed shape, strains and the extent of forming at rupture.« less
Air pollution and health risks due to vehicle traffic.
Zhang, Kai; Batterman, Stuart
2013-04-15
Traffic congestion increases vehicle emissions and degrades ambient air quality, and recent studies have shown excess morbidity and mortality for drivers, commuters and individuals living near major roadways. Presently, our understanding of the air pollution impacts from congestion on roads is very limited. This study demonstrates an approach to characterize risks of traffic for on- and near-road populations. Simulation modeling was used to estimate on- and near-road NO2 concentrations and health risks for freeway and arterial scenarios attributable to traffic for different traffic volumes during rush hour periods. The modeling used emission factors from two different models (Comprehensive Modal Emissions Model and Motor Vehicle Emissions Factor Model version 6.2), an empirical traffic speed-volume relationship, the California Line Source Dispersion Model, an empirical NO2-NOx relationship, estimated travel time changes during congestion, and concentration-response relationships from the literature, which give emergency doctor visits, hospital admissions and mortality attributed to NO2 exposure. An incremental analysis, which expresses the change in health risks for small increases in traffic volume, showed non-linear effects. For a freeway, "U" shaped trends of incremental risks were predicted for on-road populations, and incremental risks are flat at low traffic volumes for near-road populations. For an arterial road, incremental risks increased sharply for both on- and near-road populations as traffic increased. These patterns result from changes in emission factors, the NO2-NOx relationship, the travel delay for the on-road population, and the extended duration of rush hour for the near-road population. This study suggests that health risks from congestion are potentially significant, and that additional traffic can significantly increase risks, depending on the type of road and other factors. Further, evaluations of risk associated with congestion must consider travel time, the duration of rush-hour, congestion-specific emission estimates, and uncertainties. Copyright © 2013 Elsevier B.V. All rights reserved.
Air pollution and health risks due to vehicle traffic
Zhang, Kai; Batterman, Stuart
2014-01-01
Traffic congestion increases vehicle emissions and degrades ambient air quality, and recent studies have shown excess morbidity and mortality for drivers, commuters and individuals living near major roadways. Presently, our understanding of the air pollution impacts from congestion on roads is very limited. This study demonstrates an approach to characterize risks of traffic for on- and near-road populations. Simulation modeling was used to estimate on- and near-road NO2 concentrations and health risks for freeway and arterial scenarios attributable to traffic for different traffic volumes during rush hour periods. The modeling used emission factors from two different models (Comprehensive Modal Emissions Model and Motor Vehicle Emissions Factor Model version 6.2), an empirical traffic speed–volume relationship, the California Line Source Dispersion Model, an empirical NO2–NOx relationship, estimated travel time changes during congestion, and concentration–response relationships from the literature, which give emergency doctor visits, hospital admissions and mortality attributed to NO2 exposure. An incremental analysis, which expresses the change in health risks for small increases in traffic volume, showed non-linear effects. For a freeway, “U” shaped trends of incremental risks were predicted for on-road populations, and incremental risks are flat at low traffic volumes for near-road populations. For an arterial road, incremental risks increased sharply for both on- and near-road populations as traffic increased. These patterns result from changes in emission factors, the NO2–NOx relationship, the travel delay for the on-road population, and the extended duration of rush hour for the near-road population. This study suggests that health risks from congestion are potentially significant, and that additional traffic can significantly increase risks, depending on the type of road and other factors. Further, evaluations of risk associated with congestion must consider travel time, the duration of rush-hour, congestion-specific emission estimates, and uncertainties. PMID:23500830
Contrast discrimination, non-uniform patterns and change blindness.
Scott-Brown, K C; Orbach, H S
1998-01-01
Change blindness--our inability to detect large changes in natural scenes when saccades, blinks and other transients interrupt visual input--seems to contradict psychophysical evidence for our exquisite sensitivity to contrast changes. Can the type of effects described as 'change blindness' be observed with simple, multi-element stimuli, amenable to psychophysical analysis? Such stimuli, composed of five mixed contrast elements, elicited a striking increase in contrast increment thresholds compared to those for an isolated element. Cue presentation prior to the stimulus substantially reduced thresholds, as for change blindness with natural scenes. On one hand, explanations for change blindness based on abstract and sketchy representations in short-term visual memory seem inappropriate for this low-level image property of contrast where there is ample evidence for exquisite performance on memory tasks. On the other hand, the highly increased thresholds for mixed contrast elements, and the decreased thresholds when a cue is present, argue against any simple early attentional or sensory explanation for change blindness. Thus, psychophysical results for very simple patterns cannot straightforwardly predict results even for the slightly more complicated patterns studied here. PMID:9872004
NASA Astrophysics Data System (ADS)
Kiesewetter, G.; Borken-Kleefeld, J.; Schöpp, W.; Heyes, C.; Thunis, P.; Bessagnet, B.; Gsella, A.; Amann, M.
2013-08-01
NO2 concentrations at the street level are a major concern for urban air quality in Europe and have been regulated under the EU Thematic Strategy on Air Pollution. Despite the legal requirements, limit values are exceeded at many monitoring stations with little or no improvement during recent years. In order to assess the effects of future emission control regulations on roadside NO2 concentrations, a downscaling module has been implemented in the GAINS integrated assessment model. The module follows a hybrid approach based on atmospheric dispersion calculations and observations from the AirBase European air quality data base that are used to estimate site-specific parameters. Pollutant concentrations at every monitoring site with sufficient data coverage are disaggregated into contributions from regional background, urban increment, and local roadside increment. The future evolution of each contribution is assessed with a model of the appropriate scale - 28 × 28 km grid based on the EMEP Model for the regional background, 7 × 7 km urban increment based on the CHIMERE Chemistry Transport Model, and a chemical box model for the roadside increment. Thus, different emission scenarios and control options for long-range transport, regional and local emissions can be analysed. Observed concentrations and historical trends are well captured, in particular the differing NO2 and total NOx = NO + NO2 trends. Altogether, more than 1950 air quality monitoring stations in the EU are covered by the model, including more than 400 traffic stations and 70% of the critical stations. Together with its well-established bottom-up emission and dispersion calculation scheme, GAINS is thus able to bridge the scales from European-wide policies to impacts in street canyons. As an application of the model, we assess the evolution of attainment of NO2 limit values under current legislation until 2030. Strong improvements are expected with the introduction of the Euro 6 emission standard for light duty vehicles; however, for some major European cities, further measures may be required, in particular if aiming to achieve compliance at an earlier time.
NASA Astrophysics Data System (ADS)
Kiesewetter, G.; Borken-Kleefeld, J.; Schöpp, W.; Heyes, C.; Thunis, P.; Bessagnet, B.; Terrenoire, E.; Gsella, A.; Amann, M.
2014-01-01
NO2 concentrations at the street level are a major concern for urban air quality in Europe and have been regulated under the EU Thematic Strategy on Air Pollution. Despite the legal requirements, limit values are exceeded at many monitoring stations with little or no improvement in recent years. In order to assess the effects of future emission control regulations on roadside NO2 concentrations, a downscaling module has been implemented in the GAINS integrated assessment model. The module follows a hybrid approach based on atmospheric dispersion calculations and observations from the AirBase European air quality database that are used to estimate site-specific parameters. Pollutant concentrations at every monitoring site with sufficient data coverage are disaggregated into contributions from regional background, urban increment, and local roadside increment. The future evolution of each contribution is assessed with a model of the appropriate scale: 28 × 28 km grid based on the EMEP Model for the regional background, 7 × 7 km urban increment based on the CHIMERE Chemistry Transport Model, and a chemical box model for the roadside increment. Thus, different emission scenarios and control options for long-range transport as well as regional and local emissions can be analysed. Observed concentrations and historical trends are well captured, in particular the differing NO2 and total NOx = NO + NO2 trends. Altogether, more than 1950 air quality monitoring stations in the EU are covered by the model, including more than 400 traffic stations and 70% of the critical stations. Together with its well-established bottom-up emission and dispersion calculation scheme, GAINS is thus able to bridge the scales from European-wide policies to impacts in street canyons. As an application of the model, we assess the evolution of attainment of NO2 limit values under current legislation until 2030. Strong improvements are expected with the introduction of the Euro 6 emission standard for light duty vehicles; however, for some major European cities, further measures may be required, in particular if aiming to achieve compliance at an earlier time.
Numerical study of aero-excitation of steam-turbine rotor blade self-oscillations
NASA Astrophysics Data System (ADS)
Galaev, S. A.; Makhnov, V. Yu.; Ris, V. V.; Smirnov, E. M.
2018-05-01
Blade aero-excitation increment is evaluated by numerical solution of the full 3D unsteady Reynolds-averaged Navier-Stokes equations governing wet steam flow in a powerful steam-turbine last stage. The equilibrium wet steam model was adopted. Blade surfaces oscillations are defined by eigen-modes of a row of blades bounded by a shroud. Grid dependency study was performed with a reduced model being a set of blades multiple an eigen-mode nodal diameter. All other computations were carried out for the entire blade row. Two cases are considered, with an original-blade row and with a row of modified (reinforced) blades. Influence of eigen-mode nodal diameter and blade reinforcing on aero-excitation increment is analyzed. It has been established, in particular, that maximum value of the aero-excitation increment for the reinforced-blade row is two times less as compared with the original-blade row. Generally, results of the study point definitely to less probability of occurrence of blade self-oscillations in case of the reinforced blade-row.
SfM with MRFs: discrete-continuous optimization for large-scale structure from motion.
Crandall, David J; Owens, Andrew; Snavely, Noah; Huttenlocher, Daniel P
2013-12-01
Recent work in structure from motion (SfM) has built 3D models from large collections of images downloaded from the Internet. Many approaches to this problem use incremental algorithms that solve progressively larger bundle adjustment problems. These incremental techniques scale poorly as the image collection grows, and can suffer from drift or local minima. We present an alternative framework for SfM based on finding a coarse initial solution using hybrid discrete-continuous optimization and then improving that solution using bundle adjustment. The initial optimization step uses a discrete Markov random field (MRF) formulation, coupled with a continuous Levenberg-Marquardt refinement. The formulation naturally incorporates various sources of information about both the cameras and points, including noisy geotags and vanishing point (VP) estimates. We test our method on several large-scale photo collections, including one with measured camera positions, and show that it produces models that are similar to or better than those produced by incremental bundle adjustment, but more robustly and in a fraction of the time.
How to set the stage for a full-fledged clinical trial testing 'incremental haemodialysis'.
Casino, Francesco Gaetano; Basile, Carlo
2017-07-21
Most people who make the transition to maintenance haemodialysis (HD) therapy are treated with a fixed dose of thrice-weekly HD (3HD/week) regimen without consideration of their residual kidney function (RKF). The RKF provides an effective and naturally continuous clearance of both small and middle molecules, plays a major role in metabolic homeostasis, nutritional status and cardiovascular health, and aids in fluid management. The RKF is associated with better patient survival and greater health-related quality of life. Its preservation is instrumental to the prescription of incremental (1HD/week to 2HD/week) HD. The recently heightened interest in incremental HD has been hindered by the current limitations of the urea kinetic model (UKM), which tend to overestimate the needed dialysis dose in the presence of a substantial RKF. A recent paper by Casino and Basile suggested a variable target model (VTM), which gives more clinical weight to the RKF and allows less frequent HD treatments at lower RKF as opposed to the fixed target model, based on the wrong concept of the clinical equivalence between renal and dialysis clearance. A randomized controlled trial (RCT) enrolling incident patients and comparing incremental HD (prescribed according to the VTM) with the standard 3HD/week schedule and focused on hard outcomes, such as survival and health-related quality of life of patients, is urgently needed. The first step in designing such a study is to compute the 'adequacy lines' and the associated fitting equations necessary for the most appropriate allocation of the patients in the two arms and their correct and safe follow-up. In conclusion, the potentially important clinical and financial implications of the incremental HD render it highly promising and warrant RCTs. The UKM is the keystone for conducting such studies. © The Author 2017. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
AskIT Service Desk Support Value Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ashcraft, Phillip Lynn; Cummings, Susan M.; Fogle, Blythe G.
The value model discussed herein provides an accurate and simple calculation of the funding required to adequately staff the AskIT Service Desk (SD). The model is incremental – only technical labor cost is considered. All other costs, such as management, equipment, buildings, HVAC, and training are considered common elements of providing any labor related IT Service. Depending on the amount of productivity loss and the number of hours the defect was unresolved, the value of resolving work from the SD is unquestionably an economic winner; the average cost of $16 per SD resolution can commonly translate to cost avoidance exceeding well overmore » $100. Attempting to extract too much from the SD will likely create a significant downside. The analysis used to develop the value model indicates that the utilization of the SD is very high (approximately 90%). As a benchmark, consider a comment from a manager at Vitalyst (a commercial IT service desk) that their utilization target is approximately 60%. While high SD utilization is impressive, over the long term it is likely to cause unwanted consequences to staff such as higher turnover, illness, or burnout. A better solution is to staff the SD so that analysts have time to improve skills through training, develop knowledge, improve processes, collaborate with peers, and improve customer relationship skills.« less
Is incremental hemodialysis ready to return on the scene? From empiricism to kinetic modelling.
Basile, Carlo; Casino, Francesco Gaetano; Kalantar-Zadeh, Kamyar
2017-08-01
Most people who make the transition to maintenance dialysis therapy are treated with a fixed dose thrice-weekly hemodialysis regimen without considering their residual kidney function (RKF). The RKF provides effective and naturally continuous clearance of both small and middle molecules, plays a major role in metabolic homeostasis, nutritional status, and cardiovascular health, and aids in fluid management. The RKF is associated with better patient survival and greater health-related quality of life, although these effects may be confounded by patient comorbidities. Preservation of the RKF requires a careful approach, including regular monitoring, avoidance of nephrotoxins, gentle control of blood pressure to avoid intradialytic hypotension, and an individualized dialysis prescription including the consideration of incremental hemodialysis. There is currently no standardized method for applying incremental hemodialysis in practice. Infrequent (once- to twice-weekly) hemodialysis regimens are often used arbitrarily, without knowing which patients would benefit the most from them or how to escalate the dialysis dose as RKF declines over time. The recently heightened interest in incremental hemodialysis has been hindered by the current limitations of the urea kinetic models (UKM) which tend to overestimate the dialysis dose required in the presence of substantial RKF. This is due to an erroneous extrapolation of the equivalence between renal urea clearance (Kru) and dialyser urea clearance (Kd), correctly assumed by the UKM, to the clinical domain. In this context, each ml/min of Kd clears the urea from the blood just as 1 ml/min of Kru does. By no means should such kinetic equivalence imply that 1 ml/min of Kd is clinically equivalent to 1 ml/min of urea clearance provided by the native kidneys. A recent paper by Casino and Basile suggested a variable target model (VTM) as opposed to the fixed model, because the VTM gives more clinical weight to the RKF and allows less frequent hemodialysis treatments at lower RKF. The potentially important clinical and financial implications of incremental hemodialysis render it highly promising and warrant randomized controlled trials.
A Group Increment Scheme for Infrared Absorption Intensities of Greenhouse Gases
NASA Technical Reports Server (NTRS)
Kokkila, Sara I.; Bera, Partha P.; Francisco, Joseph S.; Lee, Timothy J.
2012-01-01
A molecule's absorption in the atmospheric infrared (IR) window (IRW) is an indicator of its efficiency as a greenhouse gas. A model for estimating the absorption of a fluorinated molecule within the IRW was developed to assess its radiative impact. This model will be useful in comparing different hydrofluorocarbons and hydrofluoroethers contribution to global warming. The absorption of radiation by greenhouse gases, in particular hydrofluoroethers and hydrofluorocarbons, was investigated using ab initio quantum mechanical methods. Least squares regression techniques were used to create a model based on this data. The placement and number of fluorines in the molecule were found to affect the absorption in the IR window and were incorporated into the model. Several group increment models are discussed. An additive model based on one-carbon groups is found to work satisfactorily in predicting the ab initio calculated vibrational intensities.
Numerical Simulation of Current Artillery Charges Using the TDNOVA Code.
1986-06-01
behavior was occasionally observed, particularly near the ends of the charge and particularly at increment-to-increment interfaces . Rather than expanding...between the charge sidewalls and the tube, had been observed at an early date by Kent. 3 The influence of axial ullage. or spaces between the ends of...subsided to within a user -selectable tolerance, the model is converted to a quasi-two-dimensional representation based on coupled regions of coaxial one
Cost-effectiveness analysis of interventions for migraine in four low- and middle-income countries.
Linde, Mattias; Steiner, Timothy J; Chisholm, Dan
2015-02-18
Evidence of the cost and effects of interventions for reducing the global burden of migraine remains scarce. Our objective was to estimate the population-level cost-effectiveness of evidence-based migraine interventions and their contributions towards reducing current burden in low- and middle-income countries. Using a standard WHO approach to cost-effectiveness analysis (CHOICE), we modelled core set intervention strategies for migraine, taking account of coverage and efficacy as well as non-adherence. The setting was primary health care including pharmacies. We modelled 26 intervention strategies implemented during 10 years. These included first-line acute and prophylactic drugs, and the expected consequences of adding consumer-education and provider-training. Total population-level costs and effectiveness (healthy life years [HLY] gained) were combined to form average and incremental cost-effectiveness ratios. We executed runs of the model for the general populations of China, India, Russia and Zambia. Of the strategies considered, acute treatment of attacks with acetylsalicylic acid (ASA) was by far the most cost-effective and generated a HLY for less than US$ 100. Adding educational actions increased annual costs by 1-2 US cents per capita of the population. Cost-effectiveness ratios then became slightly less favourable but still less than US$ 100 per HLY gained for ASA. An incremental cost of > US$ 10,000 would have to be paid per extra HLY by adding a triptan in a stepped-care treatment paradigm. For prophylaxis, amitriptyline was more cost-effective than propranolol or topiramate. Self-management with simple analgesics was by far the most cost-effective strategy for migraine treatment in low- and middle-income countries and represents a highly efficient use of health resources. Consumer education and provider training are expected to accelerate progress towards desired levels of coverage and adherence, cost relatively little to implement, and can therefore be considered also economically attractive. Evidence-based interventions for migraine should have as much a claim on scarce health resources as those for other chronic, non-communicable conditions that impose a significant burden on societies.
Gao, Xiang; Ouyang, Wei; Hao, Zengchao; Shi, Yandan; Wei, Peng; Hao, Fanghua
2017-02-01
Although climate warming and agricultural land use changes are two of the primary instigators of increased diffuse pollution, they are usually considered separately or additively. This likely lead to poor decisions regarding climate adaptation. Climate warming and farmland responses have synergistic consequences for diffuse nitrogen pollution, which are hypothesized to present different spatio-temporal patterns. In this study, we propose a modeling framework to simulate the synergistic impacts of climate warming and warming-induced farmland shifts on diffuse pollution. Active accumulated temperature response for latitudinal and altitudinal directions was predicted based on a simple agro-climate model under different temperature increments (△T 0 is from 0.8°C to 1.4°C at an interval of 0.2°C). Spatial distributions of dryland shift to paddy land were determined by considering accumulated temperature. Different temperature increments and crop distributions were inserted into Soil and Water Assessment Tool model, which quantified the spatio-temporal changes of nitrogen. Warming led to a decrease of the annual total nitrogen loading (2.6%-14.2%) in the low latitudes compared with baseline, which was larger than the decrease (0.8%-6.2%) in the high latitudes. The synergistic impacts amplified the decrease of the loading in the low and high latitudes at the sub-basin scale. Warming led to a decrease of the loading at a rate of 0.35kg/ha/°C, which was lower than the synergistic impacts (3.67kg/ha/°C) at the watershed level. However, warming led to the slight increase of the annual averaged NO3 (LAT) (0.16kg/ha/°C), which was amplified by the synergistic impacts (0.22kg/ha/°C). Expansion of paddy fields led to a decrease in the monthly total nitrogen loading throughout the year, but amplified an increase in the loading in August and September. The decreased response in spatio-temporal nitrogen patterns is substantially amplified by farmland-atmosphere feedbacks associated with farmland shifts in response to warming. Copyright © 2016 Elsevier B.V. All rights reserved.
Rolling Maneuver Load Alleviation using active controls
NASA Technical Reports Server (NTRS)
Woods-Vedeler, Jessica A.; Pototzky, Anthony S.
1992-01-01
Rolling Maneuver Load Alleviation (RMLA) has been demonstrated on the Active Flexible Wing (AFW) wind tunnel model in the NASA Langley Transonic Dynamics Tunnel. The design objective was to develop a systematic approach for developing active control laws to alleviate wing incremental loads during roll maneuvers. Using linear load models for the AFW wind-tunnel model which were based on experimental measurements, two RMLA control laws were developed based on a single-degree-of-freedom roll model. The RMLA control laws utilized actuation of outboard control surface pairs to counteract incremental loads generated during rolling maneuvers and actuation of the trailing edge inboard control surface pairs to maintain roll performance. To evaluate the RMLA control laws, roll maneuvers were performed in the wind tunnel at dynamic pressures of 150, 200, and 250 psf and Mach numbers of 0.33, .38 and .44, respectively. Loads obtained during these maneuvers were compared to baseline maneuver loads. For both RMLA controllers, the incremental torsion moments were reduced by up to 60 percent at all dynamic pressures and performance times. Results for bending moment load reductions during roll maneuvers varied. In addition, in a multiple function test, RMLA and flutter suppression system control laws were operated simultaneously during roll maneuvers at dynamic pressures 11 percent above the open-loop flutter dynamic pressure.
López-Haro, S A; Gutiérrez, M I; Vera, A; Leija, L
2015-10-01
To evaluate the effects of thermal dependence of speed of sound (SOS) and acoustic absorption of biological tissues during noninvasive focused ultrasound (US) hyperthermia therapy. A finite element (FE) model was used to simulate hyperthermia therapy in the liver by noninvasive focused US. The model consisted of an ultrasonic focused transducer radiating a four-layer biological medium composed of skin, fat, muscle, and liver. The acoustic field and temperature distribution along the layers were obtained after 15 s of hyperthermia therapy using the bio-heat equation. The model solution was found with and without the thermal dependence of SOS and acoustic absorption of biological tissues. The inclusion of the thermal dependence of the SOS generated an increment of 0.4 mm in the longitudinal focus axis of the acoustic field. Moreover, results indicate an increment of the hyperthermia area (zone with temperature above 43 °C), and a maximum temperature difference of almost 3.5 °C when the thermal dependence of absorption was taken into account. The increment of the achieved temperatures at the treatment zone indicated that the effects produced by the thermal dependence of SOS and absorption must be accounted for when planning hyperthermia treatment in order to avoid overheating undesired regions.
iSentenizer-μ: multilingual sentence boundary detection model.
Wong, Derek F; Chao, Lidia S; Zeng, Xiaodong
2014-01-01
Sentence boundary detection (SBD) system is normally quite sensitive to genres of data that the system is trained on. The genres of data are often referred to the shifts of text topics and new languages domains. Although new detection models can be retrained for different languages or new text genres, previous model has to be thrown away and the creation process has to be restarted from scratch. In this paper, we present a multilingual sentence boundary detection system (iSentenizer-μ) for Danish, German, English, Spanish, Dutch, French, Italian, Portuguese, Greek, Finnish, and Swedish languages. The proposed system is able to detect the sentence boundaries of a mixture of different text genres and languages with high accuracy. We employ i (+)Learning algorithm, an incremental tree learning architecture, for constructing the system. iSentenizer-μ, under the incremental learning framework, is adaptable to text of different topics and Roman-alphabet languages, by merging new data into existing model to learn the new knowledge incrementally by revision instead of retraining. The system has been extensively evaluated on different languages and text genres and has been compared against two state-of-the-art SBD systems, Punkt and MaxEnt. The experimental results show that the proposed system outperforms the other systems on all datasets.
An Approach to Verification and Validation of a Reliable Multicasting Protocol
NASA Technical Reports Server (NTRS)
Callahan, John R.; Montgomery, Todd L.
1994-01-01
This paper describes the process of implementing a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally using a combination of formal and informal techniques in an attempt to ensure the correctness of its implementation. Our development process involved three concurrent activities: (1) the initial construction and incremental enhancement of a formal state model of the protocol machine; (2) the initial coding and incremental enhancement of the implementation; and (3) model-based testing of iterative implementations of the protocol. These activities were carried out by two separate teams: a design team and a V&V team. The design team built the first version of RMP with limited functionality to handle only nominal requirements of data delivery. In a series of iterative steps, the design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation. This was done by generating test cases based on suspected errant or offnominal behaviors predicted by the current model. If the execution of a test was different between the model and implementation, then the differences helped identify inconsistencies between the model and implementation. The dialogue between both teams drove the co-evolution of the model and implementation. Testing served as the vehicle for keeping the model and implementation in fidelity with each other. This paper describes (1) our experiences in developing our process model; and (2) three example problems found during the development of RMP.
An approach to verification and validation of a reliable multicasting protocol
NASA Technical Reports Server (NTRS)
Callahan, John R.; Montgomery, Todd L.
1995-01-01
This paper describes the process of implementing a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally using a combination of formal and informal techniques in an attempt to ensure the correctness of its implementation. Our development process involved three concurrent activities: (1) the initial construction and incremental enhancement of a formal state model of the protocol machine; (2) the initial coding and incremental enhancement of the implementation; and (3) model-based testing of iterative implementations of the protocol. These activities were carried out by two separate teams: a design team and a V&V team. The design team built the first version of RMP with limited functionality to handle only nominal requirements of data delivery. In a series of iterative steps, the design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation. This was done by generating test cases based on suspected errant or off-nominal behaviors predicted by the current model. If the execution of a test was different between the model and implementation, then the differences helped identify inconsistencies between the model and implementation. The dialogue between both teams drove the co-evolution of the model and implementation. Testing served as the vehicle for keeping the model and implementation in fidelity with each other. This paper describes (1) our experiences in developing our process model; and (2) three example problems found during the development of RMP.
Denaturation process of laccase in various media by refractive index measurements.
Saoudi, O; Ghaouar, N; Ben Salah, S; Othman, T
2017-09-01
In this work, we are interested in the denaturation process of a laccase from Tramates versicolor via the determination of the refractive index, the refractive index increment and the specific volume in various media. The measurements were carried out using an Abbe refractometer. We have shown that the refractive index increment values obtained from the slope of the variation of the refractive index vs. Concentration are outside the range refractive index increments of proteins. To correct the results, we have followed the theoretical predictions based on the knowledge of the protein refractive index from its amino acids composition. The denaturation process was studied by calculating the specific volume variation where its determination was related to the Gladstone-Dale and the Lorentz-Lorentz models.
An empiric estimate of the value of life: updating the renal dialysis cost-effectiveness standard.
Lee, Chris P; Chertow, Glenn M; Zenios, Stefanos A
2009-01-01
Proposals to make decisions about coverage of new technology by comparing the technology's incremental cost-effectiveness with the traditional benchmark of dialysis imply that the incremental cost-effectiveness ratio of dialysis is seen a proxy for the value of a statistical year of life. The frequently used ratio for dialysis has, however, not been updated to reflect more recently available data on dialysis. We developed a computer simulation model for the end-stage renal disease population and compared cost, life expectancy, and quality adjusted life expectancy of current dialysis practice relative to three less costly alternatives and to no dialysis. We estimated incremental cost-effectiveness ratios for these alternatives relative to the next least costly alternative and no dialysis and analyzed the population distribution of the ratios. Model parameters and costs were estimated using data from the Medicare population and a large integrated health-care delivery system between 1996 and 2003. The sensitivity of results to model assumptions was tested using 38 scenarios of one-way sensitivity analysis, where parameters informing the cost, utility, mortality and morbidity, etc. components of the model were by perturbed +/-50%. The incremental cost-effectiveness ratio of dialysis of current practice relative to the next least costly alternative is on average $129,090 per quality-adjusted life-year (QALY) ($61,294 per year), but its distribution within the population is wide; the interquartile range is $71,890 per QALY, while the 1st and 99th percentiles are $65,496 and $488,360 per QALY, respectively. Higher incremental cost-effectiveness ratios were associated with older age and more comorbid conditions. Sensitivity to model parameters was comparatively small, with most of the scenarios leading to a change of less than 10% in the ratio. The value of a statistical year of life implied by dialysis practice currently averages $129,090 per QALY ($61,294 per year), but is distributed widely within the dialysis population. The spread suggests that coverage decisions using dialysis as the benchmark may need to incorporate percentile values (which are higher than the average) to be consistent with the Rawlsian principles of justice of preserving the rights and interests of society's most vulnerable patient groups.
Cho, Iksung; Al'Aref, Subhi J; Berger, Adam; Ó Hartaigh, Bríain; Gransar, Heidi; Valenti, Valentina; Lin, Fay Y; Achenbach, Stephan; Berman, Daniel S; Budoff, Matthew J; Callister, Tracy Q; Al-Mallah, Mouaz H; Cademartiri, Filippo; Chinnaiyan, Kavitha; Chow, Benjamin J W; DeLago, Augustin; Villines, Todd C; Hadamitzky, Martin; Hausleiter, Joerg; Leipsic, Jonathon; Shaw, Leslee J; Kaufmann, Philipp A; Feuchtner, Gudrun; Kim, Yong-Jin; Maffei, Erica; Raff, Gilbert; Pontone, Gianluca; Andreini, Daniele; Marques, Hugo; Rubinshtein, Ronen; Chang, Hyuk-Jae; Min, James K
2018-03-14
The long-term prognostic benefit of coronary computed tomographic angiography (CCTA) findings of coronary artery disease (CAD) in asymptomatic populations is unknown. From the prospective multicentre international CONFIRM long-term study, we evaluated asymptomatic subjects without known CAD who underwent both coronary artery calcium scoring (CACS) and CCTA (n = 1226). Coronary computed tomographic angiography findings included the severity of coronary artery stenosis, plaque composition, and coronary segment location. Using the C-statistic and likelihood ratio tests, we evaluated the incremental prognostic utility of CCTA findings over a base model that included a panel of traditional risk factors (RFs) as well as CACS to predict long-term all-cause mortality. During a mean follow-up of 5.9 ± 1.2 years, 78 deaths occurred. Compared with the traditional RF alone (C-statistic 0.64), CCTA findings including coronary stenosis severity, plaque composition, and coronary segment location demonstrated improved incremental prognostic utility beyond traditional RF alone (C-statistics range 0.71-0.73, all P < 0.05; incremental χ2 range 20.7-25.5, all P < 0.001). However, no added prognostic benefit was offered by CCTA findings when added to a base model containing both traditional RF and CACS (C-statistics P > 0.05, for all). Coronary computed tomographic angiography improved prognostication of 6-year all-cause mortality beyond a set of conventional RF alone, although, no further incremental value was offered by CCTA when CCTA findings were added to a model incorporating RF and CACS.
Charokopou, M; McEwan, P; Lister, S; Callan, L; Bergenheim, K; Tolley, K; Postema, R; Townsend, R; Roudaut, M
2015-07-01
To assess the cost-effectiveness of dapagliflozin, a sodium-glucose co-transporter-2 (SGLT-2) inhibitor, compared with a sulfonylurea, when added to metformin for treatment of UK people with Type 2 diabetes mellitus inadequately controlled on metformin alone. Clinical inputs sourced from a head-to-head randomized controlled trial (RCT) informed the Cardiff diabetes decision model. Risk equations developed from the United Kingdom Prospective Diabetes Study (UKPDS) were used in conjunction with the clinical inputs to predict disease progression and the incidence of micro- and macrovascular complications over a lifetime horizon. Cost and utility data were generated to present the incremental cost-effectiveness ratio (ICER) for both treatment arms, and sensitivity and scenario analyses were conducted to assess the impact of uncertainty on the final model results. The dapagliflozin treatment arm was associated with a mean incremental benefit of 0.467 quality-adjusted life years (QALYs) [95% confidence interval (CI): 0.420; 0.665], with an incremental cost of £1246 (95% CI: £613; £1637). This resulted in an ICER point estimate of £2671 per QALY gained. Incremental costs were shown to be insensitive to parameter variation, with only treatment-related weight change having a significant impact on the incremental QALYs. Probabilistic sensitivity analysis determined that dapagliflozin had a 100% probability of being cost-effective at a willingness-to-pay threshold of £20,000 per QALY. Dapagliflozin in combination with metformin was shown to be a cost-effective treatment option compared with sulfonylurea from a UK healthcare perspective for people with Type 2 diabetes mellitus who are inadequately controlled on metformin monotherapy. © 2015 The Authors. Diabetic Medicine © 2015 Diabetes UK.
Moriwaki, K; Mouri, M; Hagino, H
2017-06-01
Model-based economic evaluation was performed to assess the cost-effectiveness of zoledronic acid. Although zoledronic acid was dominated by alendronate, the incremental quality-adjusted life year (QALY) was quite small in extent. Considering the advantage of once-yearly injection of zoledronic acid in persistence, zoledronic acid might be a cost-effective treatment option compared to once-weekly oral alendronate. The purpose of this study was to estimate the cost-effectiveness of once-yearly injection of zoledronic acid for the treatment of osteoporosis in Japan. A patient-level state-transition model was developed to predict the outcome of patients with osteoporosis who have experienced a previous vertebral fracture. The efficacy of zoledronic acid was derived from a published network meta-analysis. Lifetime cost and QALYs were estimated for patients who had received zoledronic acid, alendronate, or basic treatment alone. The incremental cost-effectiveness ratio (ICER) of zoledronic acid was estimated. For patients 70 years of age, zoledronic acid was dominated by alendronate with incremental QALY of -0.004 to -0.000 and incremental cost of 430 USD to 493 USD. Deterministic sensitivity analysis indicated that the relative risk of hip fracture and drug cost strongly affected the cost-effectiveness of zoledronic acid compared to alendronate. Scenario analysis considering treatment persistence showed that the ICER of zoledronic acid compared to alendronate was estimated to be 47,435 USD, 27,018 USD, and 10,749 USD per QALY gained for patients with a T-score of -2.0, -2.5, or -3.0, respectively. Although zoledronic acid is dominated by alendronate, the incremental QALY is quite small in extent. Considering the advantage of annual zoledronic acid treatment in compliance and persistence, zoledronic acid may be a cost-effective treatment option compared to alendronate.
Tso, Peggy; Walker, Kevin; Mahomed, Nizar; Coyte, Peter C.; Rampersaud, Y. Raja
2012-01-01
Background Demand for surgery to treat osteoarthritis (OA) of the hip, knee and spine has risen dramatically. Whereas total hip (THA) and total knee arthroplasty (TKA) have been widely accepted as cost-effective, spine surgeries (decompression, decompression with fusion) to treat degenerative conditions remain underfunded compared with other surgeries. Methods An incremental cost–utility analysis comparing decompression and decompression with fusion to THA and TKA, from the perspective of the provincial health insurance system, was based on an observational matched-cohort study of prospectively collected outcomes and retrospectively collected costs. Patient outcomes were measured using short-form (SF)-36 surveys over a 2-year follow-up period. Utility was modelled over the lifetime, and quality-adjusted life years (QALYs) were determined. We calculated the incremental cost per QALY gained by estimating mean incremental lifetime costs and QALYs of surgery compared with medical management of each diagnosis group after discounting costs and QALYs at 3%. Sensitivity analyses were also conducted. Results The lifetime incremental cost:utility ratios (ICURs) discounted at 3% were $5321 per QALY for THA, $11 275 per QALY for TKA, $2307 per QALY for spinal decompression and $7153 per QALY for spinal decompression with fusion. The sensitivity analyses did not alter the ranking of the lifetime ICURs. Conclusion In appropriately selected patients with leg-dominant symptoms secondary to focal lumbar spinal stenosis who have failed medical management, the lifetime ICUR for surgical treatment of lumbar spinal stenosis is similar to those of THA and TKA for the treatment of OA. PMID:22630061
Recent advances in the modelling of crack growth under fatigue loading conditions
NASA Technical Reports Server (NTRS)
Dekoning, A. U.; Tenhoeve, H. J.; Henriksen, T. K.
1994-01-01
Fatigue crack growth associated with cyclic (secondary) plastic flow near a crack front is modelled using an incremental formulation. A new description of threshold behaviour under small load cycles is included. Quasi-static crack extension under high load excursions is described using an incremental formulation of the R-(crack growth resistance)- curve concept. The integration of the equations is discussed. For constant amplitude load cycles the results will be compared with existing crack growth laws. It will be shown that the model also properly describes interaction effects of fatigue crack growth and quasi-static crack extension. To evaluate the more general applicability the model is included in the NASGRO computer code for damage tolerance analysis. For this purpose the NASGRO program was provided with the CORPUS and the STRIP-YIELD models for computation of the crack opening load levels. The implementation is discussed and recent results of the verification are presented.
A new approach to impulsive rendezvous near circular orbit
NASA Astrophysics Data System (ADS)
Carter, Thomas; Humi, Mayer
2012-04-01
A new approach is presented for the problem of planar optimal impulsive rendezvous of a spacecraft in an inertial frame near a circular orbit in a Newtonian gravitational field. The total characteristic velocity to be minimized is replaced by a related characteristic-value function and this related optimization problem can be solved in closed form. The solution of this problem is shown to approach the solution of the original problem in the limit as the boundary conditions approach those of a circular orbit. Using a form of primer-vector theory the problem is formulated in a way that leads to relatively easy calculation of the optimal velocity increments. A certain vector that can easily be calculated from the boundary conditions determines the number of impulses required for solution of the optimization problem and also is useful in the computation of these velocity increments. Necessary and sufficient conditions for boundary conditions to require exactly three nonsingular non-degenerate impulses for solution of the related optimal rendezvous problem, and a means of calculating these velocity increments are presented. A simple example of a three-impulse rendezvous problem is solved and the resulting trajectory is depicted. Optimal non-degenerate nonsingular two-impulse rendezvous for the related problem is found to consist of four categories of solutions depending on the four ways the primer vector locus intersects the unit circle. Necessary and sufficient conditions for each category of solutions are presented. The region of the boundary values that admit each category of solutions of the related problem are found, and in each case a closed-form solution of the optimal velocity increments is presented. Similar results are presented for the simpler optimal rendezvous that require only one-impulse. For brevity degenerate and singular solutions are not discussed in detail, but should be presented in a following study. Although this approach is thought to provide simpler computations than existing methods, its main contribution may be in establishing a new approach to the more general problem.
NASA Astrophysics Data System (ADS)
Yan, Y.; Barth, A.; Beckers, J. M.; Brankart, J. M.; Brasseur, P.; Candille, G.
2017-07-01
In this paper, three incremental analysis update schemes (IAU 0, IAU 50 and IAU 100) are compared in the same assimilation experiments with a realistic eddy permitting primitive equation model of the North Atlantic Ocean using the Ensemble Kalman Filter. The difference between the three IAU schemes lies on the position of the increment update window. The relevance of each IAU scheme is evaluated through analyses on both thermohaline and dynamical variables. The validation of the assimilation results is performed according to both deterministic and probabilistic metrics against different sources of observations. For deterministic validation, the ensemble mean and the ensemble spread are compared to the observations. For probabilistic validation, the continuous ranked probability score (CRPS) is used to evaluate the ensemble forecast system according to reliability and resolution. The reliability is further decomposed into bias and dispersion by the reduced centred random variable (RCRV) score. The obtained results show that 1) the IAU 50 scheme has the same performance as the IAU 100 scheme 2) the IAU 50/100 schemes outperform the IAU 0 scheme in error covariance propagation for thermohaline variables in relatively stable region, while the IAU 0 scheme outperforms the IAU 50/100 schemes in dynamical variables estimation in dynamically active region 3) in case with sufficient number of observations and good error specification, the impact of IAU schemes is negligible. The differences between the IAU 0 scheme and the IAU 50/100 schemes are mainly due to different model integration time and different instability (density inversion, large vertical velocity, etc.) induced by the increment update. The longer model integration time with the IAU 50/100 schemes, especially the free model integration, on one hand, allows for better re-establishment of the equilibrium model state, on the other hand, smooths the strong gradients in dynamically active region.
Pouwels, Xavier Ghislain Léon Victor; Ramaekers, Bram L T; Joore, Manuela A
2017-10-01
To provide an overview of model characteristics and outcomes of model-based economic evaluations concerning chemotherapy and targeted therapy (TT) for metastatic breast cancer (MBC); to assess the quality of the studies; to analyse the association between model characteristics and study quality and outcomes. PubMED and NHS EED were systematically searched. Inclusion criteria were as follows: English or Dutch language, model-based economic evaluation, chemotherapy or TT as intervention, population diagnosed with MBC, published between 2000 and 2014, reporting life years (LY) or quality-adjusted life-year (QALY) and an incremental cost-effectiveness ratio. General characteristics, model characteristics and outcomes of the studies were extracted. Quality of the studies was assessed through a checklist. 24 studies were included, considering 50 comparisons (20 concerning chemotherapy and 30 TT). Seven comparisons were represented in multiple studies. A health state-transition model including the following health states: stable/progression-free disease, progression and death was used in 18 studies. Studies fulfilled on average 14 out of the 26 items of the quality checklist, mostly due to a lack of transparency in reporting. Thirty-one per cent of the incremental net monetary benefit was positive. TT led to higher iQALY gained, and industry-sponsored studies reported more favourable cost-effectiveness outcomes. The development of a disease-specific reference model would improve the transparency and quality of model-based cost-effectiveness assessments for MBC treatments. Incremental health benefits increased over time, but were outweighed by the increased treatment costs. Consequently, increased health benefits led to lower value for money.
Analytical Finite Element Simulation Model for Structural Crashworthiness Prediction
DOT National Transportation Integrated Search
1974-02-01
The analytical development and appropriate derivations are presented for a simulation model of vehicle crashworthiness prediction. Incremental equations governing the nonlinear elasto-plastic dynamic response of three-dimensional frame structures are...
NASA Technical Reports Server (NTRS)
Marr, W. A., Jr.
1972-01-01
The behavior of finite element models employing different constitutive relations to describe the stress-strain behavior of soils is investigated. Three models, which assume small strain theory is applicable, include a nondilatant, a dilatant and a strain hardening constitutive relation. Two models are formulated using large strain theory and include a hyperbolic and a Tresca elastic perfectly plastic constitutive relation. These finite element models are used to analyze retaining walls and footings. Methods of improving the finite element solutions are investigated. For nonlinear problems better solutions can be obtained by using smaller load increment sizes and more iterations per load increment than by increasing the number of elements. Suitable methods of treating tension stresses and stresses which exceed the yield criteria are discussed.
Abstraction and Assume-Guarantee Reasoning for Automated Software Verification
NASA Technical Reports Server (NTRS)
Chaki, S.; Clarke, E.; Giannakopoulou, D.; Pasareanu, C. S.
2004-01-01
Compositional verification and abstraction are the key techniques to address the state explosion problem associated with model checking of concurrent software. A promising compositional approach is to prove properties of a system by checking properties of its components in an assume-guarantee style. This article proposes a framework for performing abstraction and assume-guarantee reasoning of concurrent C code in an incremental and fully automated fashion. The framework uses predicate abstraction to extract and refine finite state models of software and it uses an automata learning algorithm to incrementally construct assumptions for the compositional verification of the abstract models. The framework can be instantiated with different assume-guarantee rules. We have implemented our approach in the COMFORT reasoning framework and we show how COMFORT out-performs several previous software model checking approaches when checking safety properties of non-trivial concurrent programs.
Numerical modeling of axi-symmetrical cold forging process by ``Pseudo Inverse Approach''
NASA Astrophysics Data System (ADS)
Halouani, A.; Li, Y. M.; Abbes, B.; Guo, Y. Q.
2011-05-01
The incremental approach is widely used for the forging process modeling, it gives good strain and stress estimation, but it is time consuming. A fast Inverse Approach (IA) has been developed for the axi-symmetric cold forging modeling [1-2]. This approach exploits maximum the knowledge of the final part's shape and the assumptions of proportional loading and simplified tool actions make the IA simulation very fast. The IA is proved very useful for the tool design and optimization because of its rapidity and good strain estimation. However, the assumptions mentioned above cannot provide good stress estimation because of neglecting the loading history. A new approach called "Pseudo Inverse Approach" (PIA) was proposed by Batoz, Guo et al.. [3] for the sheet forming modeling, which keeps the IA's advantages but gives good stress estimation by taking into consideration the loading history. Our aim is to adapt the PIA for the cold forging modeling in this paper. The main developments in PIA are resumed as follows: A few intermediate configurations are generated for the given tools' positions to consider the deformation history; the strain increment is calculated by the inverse method between the previous and actual configurations. An incremental algorithm of the plastic integration is used in PIA instead of the total constitutive law used in the IA. An example is used to show the effectiveness and limitations of the PIA for the cold forging process modeling.
Incremental Support Vector Machine Framework for Visual Sensor Networks
NASA Astrophysics Data System (ADS)
Awad, Mariette; Jiang, Xianhua; Motai, Yuichi
2006-12-01
Motivated by the emerging requirements of surveillance networks, we present in this paper an incremental multiclassification support vector machine (SVM) technique as a new framework for action classification based on real-time multivideo collected by homogeneous sites. The technique is based on an adaptation of least square SVM (LS-SVM) formulation but extends beyond the static image-based learning of current SVM methodologies. In applying the technique, an initial supervised offline learning phase is followed by a visual behavior data acquisition and an online learning phase during which the cluster head performs an ensemble of model aggregations based on the sensor nodes inputs. The cluster head then selectively switches on designated sensor nodes for future incremental learning. Combining sensor data offers an improvement over single camera sensing especially when the latter has an occluded view of the target object. The optimization involved alleviates the burdens of power consumption and communication bandwidth requirements. The resulting misclassification error rate, the iterative error reduction rate of the proposed incremental learning, and the decision fusion technique prove its validity when applied to visual sensor networks. Furthermore, the enabled online learning allows an adaptive domain knowledge insertion and offers the advantage of reducing both the model training time and the information storage requirements of the overall system which makes it even more attractive for distributed sensor networks communication.
Quantum-dot based nanothermometry in optical plasmonic recording media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maestro, Laura Martinez; Centre for Micro-Photonics, Faculty of Science, Engineering and Technology, Swinburne University of Technology, Hawthorn, Victoria 3122; Zhang, Qiming
2014-11-03
We report on the direct experimental determination of the temperature increment caused by laser irradiation in a optical recording media constituted by a polymeric film in which gold nanorods have been incorporated. The incorporation of CdSe quantum dots in the recording media allowed for single beam thermal reading of the on-focus temperature from a simple analysis of the two-photon excited fluorescence of quantum dots. Experimental results have been compared with numerical simulations revealing an excellent agreement and opening a promising avenue for further understanding and optimization of optical writing processes and media.
A scalable and practical one-pass clustering algorithm for recommender system
NASA Astrophysics Data System (ADS)
Khalid, Asra; Ghazanfar, Mustansar Ali; Azam, Awais; Alahmari, Saad Ali
2015-12-01
KMeans clustering-based recommendation algorithms have been proposed claiming to increase the scalability of recommender systems. One potential drawback of these algorithms is that they perform training offline and hence cannot accommodate the incremental updates with the arrival of new data, making them unsuitable for the dynamic environments. From this line of research, a new clustering algorithm called One-Pass is proposed, which is a simple, fast, and accurate. We show empirically that the proposed algorithm outperforms K-Means in terms of recommendation and training time while maintaining a good level of accuracy.
Bennett, Iain; Paracha, Noman; Abrams, Keith; Ray, Joshua
2018-01-01
Rank Preserving Structural Failure Time models are one of the most commonly used statistical methods to adjust for treatment switching in oncology clinical trials. The method is often applied in a decision analytic model without appropriately accounting for additional uncertainty when determining the allocation of health care resources. The aim of the study is to describe novel approaches to adequately account for uncertainty when using a Rank Preserving Structural Failure Time model in a decision analytic model. Using two examples, we tested and compared the performance of the novel Test-based method with the resampling bootstrap method and with the conventional approach of no adjustment. In the first example, we simulated life expectancy using a simple decision analytic model based on a hypothetical oncology trial with treatment switching. In the second example, we applied the adjustment method on published data when no individual patient data were available. Mean estimates of overall and incremental life expectancy were similar across methods. However, the bootstrapped and test-based estimates consistently produced greater estimates of uncertainty compared with the estimate without any adjustment applied. Similar results were observed when using the test based approach on a published data showing that failing to adjust for uncertainty led to smaller confidence intervals. Both the bootstrapping and test-based approaches provide a solution to appropriately incorporate uncertainty, with the benefit that the latter can implemented by researchers in the absence of individual patient data. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Modeling rate sensitivity of exercise transient responses to limb motion.
Yamashiro, Stanley M; Kato, Takahide
2014-10-01
Transient responses of ventilation (V̇e) to limb motion can exhibit predictive characteristics. In response to a change in limb motion, a rapid change in V̇e is commonly observed with characteristics different than during a change in workload. This rapid change has been attributed to a feed-forward or adaptive response. Rate sensitivity was explored as a specific hypothesis to explain predictive V̇e responses to limb motion. A simple model assuming an additive feed-forward summation of V̇e proportional to the rate of change of limb motion was studied. This model was able to successfully account for the adaptive phase correction observed during human sinusoidal changes in limb motion. Adaptation of rate sensitivity might also explain the reduction of the fast component of V̇e responses previously reported following sudden exercise termination. Adaptation of the fast component of V̇e response could occur by reduction of rate sensitivity. Rate sensitivity of limb motion was predicted by the model to reduce the phase delay between limb motion and V̇e response without changing the steady-state response to exercise load. In this way, V̇e can respond more quickly to an exercise change without interfering with overall feedback control. The asymmetry between responses to an incremental and decremental ramp change in exercise can also be accounted for by the proposed model. Rate sensitivity leads to predicted behavior, which resembles responses observed in exercise tied to expiratory reserve volume. Copyright © 2014 the American Physiological Society.
Frequency Response of Synthetic Vocal Fold Models with Linear and Nonlinear Material Properties
Shaw, Stephanie M.; Thomson, Scott L.; Dromey, Christopher; Smith, Simeon
2014-01-01
Purpose The purpose of this study was to create synthetic vocal fold models with nonlinear stress-strain properties and to investigate the effect of linear versus nonlinear material properties on fundamental frequency during anterior-posterior stretching. Method Three materially linear and three materially nonlinear models were created and stretched up to 10 mm in 1 mm increments. Phonation onset pressure (Pon) and fundamental frequency (F0) at Pon were recorded for each length. Measurements were repeated as the models were relaxed in 1 mm increments back to their resting lengths, and tensile tests were conducted to determine the stress-strain responses of linear versus nonlinear models. Results Nonlinear models demonstrated a more substantial frequency response than did linear models and a more predictable pattern of F0 increase with respect to increasing length (although range was inconsistent across models). Pon generally increased with increasing vocal fold length for nonlinear models, whereas for linear models, Pon decreased with increasing length. Conclusions Nonlinear synthetic models appear to more accurately represent the human vocal folds than linear models, especially with respect to F0 response. PMID:22271874
Frequency response of synthetic vocal fold models with linear and nonlinear material properties.
Shaw, Stephanie M; Thomson, Scott L; Dromey, Christopher; Smith, Simeon
2012-10-01
The purpose of this study was to create synthetic vocal fold models with nonlinear stress-strain properties and to investigate the effect of linear versus nonlinear material properties on fundamental frequency (F0) during anterior-posterior stretching. Three materially linear and 3 materially nonlinear models were created and stretched up to 10 mm in 1-mm increments. Phonation onset pressure (Pon) and F0 at Pon were recorded for each length. Measurements were repeated as the models were relaxed in 1-mm increments back to their resting lengths, and tensile tests were conducted to determine the stress-strain responses of linear versus nonlinear models. Nonlinear models demonstrated a more substantial frequency response than did linear models and a more predictable pattern of F0 increase with respect to increasing length (although range was inconsistent across models). Pon generally increased with increasing vocal fold length for nonlinear models, whereas for linear models, Pon decreased with increasing length. Nonlinear synthetic models appear to more accurately represent the human vocal folds than do linear models, especially with respect to F0 response.
Helicity statistics in homogeneous and isotropic turbulence and turbulence models
NASA Astrophysics Data System (ADS)
Sahoo, Ganapati; De Pietro, Massimo; Biferale, Luca
2017-02-01
We study the statistical properties of helicity in direct numerical simulations of fully developed homogeneous and isotropic turbulence and in a class of turbulence shell models. We consider correlation functions based on combinations of vorticity and velocity increments that are not invariant under mirror symmetry. We also study the scaling properties of high-order structure functions based on the moments of the velocity increments projected on a subset of modes with either positive or negative helicity (chirality). We show that mirror symmetry is recovered at small scales, i.e., chiral terms are subleading and they are well captured by a dimensional argument plus anomalous corrections. These findings are also supported by a high Reynolds numbers study of helical shell models with the same chiral symmetry of Navier-Stokes equations.
On the statistics of increments in strong Alfvenic turbulence
NASA Astrophysics Data System (ADS)
Palacios, J. C.; Perez, J. C.
2017-12-01
In-situ measurements have shown that the solar wind is dominated by non-compressive Alfvén-like fluctuations of plasma velocity and magnetic field over a broad range of scales. In this work, we present recent progress in understanding intermittency in Alfvenic turbulence by investigating the statistics of Elsasser increments from simulations of steadily driven Reduced MHD with numerical resolutions up to 2048^3. The nature of these statistics guards a close relation to the fundamental properties of small-scale structures in which the turbulence is ultimately dissipated and therefore has profound implications in the possible contribution of turbulence to the heating of the solar wind. We extensively investigate the properties and three-dimensional structure of probability density functions (PDFs) of increments and compare with recent phenomenological models of intermittency in MHD turbulence.
Harrison, D; Muskett, H; Harvey, S; Grieve, R; Shahin, J; Patel, K; Sadique, Z; Allen, E; Dybowski, R; Jit, M; Edgeworth, J; Kibbler, C; Barnes, R; Soni, N; Rowan, K
2013-02-01
There is increasing evidence that invasive fungal disease (IFD) is more likely to occur in non-neutropenic patients in critical care units. A number of randomised controlled trials (RCTs) have evaluated antifungal prophylaxis in non-neutropenic, critically ill patients, demonstrating a reduction in the risk of proven IFD and suggesting a reduction in mortality. It is necessary to establish a method to identify and target antifungal prophylaxis at those patients at highest risk of IFD, who stand to benefit most from any antifungal prophylaxis strategy. To develop and validate risk models to identify non-neutropenic, critically ill adult patients at high risk of invasive Candida infection, who would benefit from antifungal prophylaxis, and to assess the cost-effectiveness of targeting antifungal prophylaxis to high-risk patients based on these models. Systematic review, prospective data collection, statistical modelling, economic decision modelling and value of information analysis. Ninety-six UK adult general critical care units. Consecutive admissions to participating critical care units. None. Invasive fungal disease, defined as a blood culture or sample from a normally sterile site showing yeast/mould cells in a microbiological or histopathological report. For statistical and economic modelling, the primary outcome was invasive Candida infection, defined as IFD-positive for Candida species. Systematic review: Thirteen articles exploring risk factors, risk models or clinical decision rules for IFD in critically ill adult patients were identified. Risk factors reported to be significantly associated with IFD were included in the final data set for the prospective data collection. Data were collected on 60,778 admissions between July 2009 and March 2011. Overall, 383 patients (0.6%) were admitted with or developed IFD. The majority of IFD patients (94%) were positive for Candida species. The most common site of infection was blood (55%). The incidence of IFD identified in unit was 4.7 cases per 1000 admissions, and for unit-acquired IFD was 3.2 cases per 1000 admissions. Statistical modelling: Risk models were developed at admission to the critical care unit, 24 hours and the end of calendar day 3. The risk model at admission had fair discrimination (c-index 0.705). Discrimination improved at 24 hours (c-index 0.823) and this was maintained at the end of calendar day 3 (c-index 0.835). There was a drop in model performance in the validation sample. Economic decision model: Irrespective of risk threshold, incremental quality-adjusted life-years of prophylaxis strategies compared with current practice were positive but small compared with the incremental costs. Incremental net benefits of each prophylaxis strategy compared with current practice were all negative. Cost-effectiveness acceptability curves showed that current practice was the strategy most likely to be cost-effective. Across all parameters in the decision model, results indicated that the value of further research for the whole population of interest might be high relative to the research costs. The results of the Fungal Infection Risk Evaluation (FIRE) Study, derived from a highly representative sample of adult general critical care units across the UK, indicated a low incidence of IFD among non-neutropenic, critically ill adult patients. IFD was associated with substantially higher mortality, more intensive organ support and longer length of stay. Risk modelling produced simple risk models that provided acceptable discrimination for identifying patients at 'high risk' of invasive Candida infection. Results of the economic model suggested that the current most cost-effective treatment strategy for prophylactic use of systemic antifungal agents among non-neutropenic, critically ill adult patients admitted to NHS adult general critical care units is a strategy of no risk assessment and no antifungal prophylaxis. Funding for this study was provided by the Health Technology Assessment programme of the National Institute for Health Research.
Numerical Analysis of Laminated, Orthotropic Composite Structures
1975-11-01
the meridian plane. In the first model , a nine degree-of-freedom, straight sided, tri- angular element was used. In this element, the three...E ■ 13.79 GPa v«. ■ «25» 6.. ■ 4.82 GPa ns its V . « .25, G. « 4.82 GPa nt nt vst * ,4S» 6st * 1*379 6P...means zero values of axial accelera- tion, and angular acceleration and velocity for each load increment) NLINC (Number of load increments with time
Moving Up the CMMI Capability and Maturity Levels Using Simulation
2008-01-01
Alternative Process Tools, Including NPV and ROI 6 Figure 3: Top-Level View of the Full Life-Cycle Version of the IEEE 12207 PSIM, Including IV&V Layer 19...Figure 4: Screenshot of the Incremental Version Model 19 Figure 5: IEEE 12207 PSIM Showing the Top-Level Life-Cycle Phases 22 Figure 6: IEEE 12207 ...Software Detailed Design for the IEEE 12207 Life- Cycle Process 24 Figure 8: Incremental Life Cycle PSIM Configured for a Specific Project Using SEPG
Beyond Incrementalism? SCHIP and the politics of health reform.
Oberlander, Jonathan B; Lyons, Barbara
2009-01-01
When Congress enacted the State Children's Health Insurance Program (SCHIP) in 1997, it was heralded as a model of bipartisan, incremental health policy. However, despite the program's achievements in the ensuing decade, SCHIP's reauthorization triggered political conflict, and efforts to expand the program stalemated in 2007. The 2008 elections broke that stalemate, and in 2009 the new Congress passed, and President Barack Obama signed, legislation reauthorizing SCHIP. Now that attention is turning to comprehensive health reform, what lessons can reformers learn from SCHIP's political adventures?
Nuclear cycler: An incremental approach to the deflection of asteroids
NASA Astrophysics Data System (ADS)
Vasile, Massimiliano; Thiry, Nicolas
2016-04-01
This paper introduces a novel deflection approach based on nuclear explosions: the nuclear cycler. The idea is to combine the effectiveness of nuclear explosions with the controllability and redundancy offered by slow push methods within an incremental deflection strategy. The paper will present an extended model for single nuclear stand-off explosions in the proximity of elongated ellipsoidal asteroids, and a family of natural formation orbits that allows the spacecraft to deploy multiple bombs while being shielded by the asteroid during the detonation.
Galizio, Mark; April, Brooke; Deal, Melissa; Hawkey, Andrew; Panoz-Brown, Danielle; Prichard, Ashley; Bruce, Katherine
2018-01-01
The Odor Span Task is an incrementing non-matching-to-sample procedure that permits the study of behavior under the control of multiple stimuli. Rats are exposed to a series of odor stimuli and selection of new stimuli is reinforced. Successful performance thus requires remembering which stimuli have previously been presented during a given session. This procedure has been frequently used in neurobiological studies as a rodent model of working memory; however, only a few studies have examined the effects of drugs on performance in this task. The present experiments explored the behavioral pharmacology of a modified version of the Odor Span Task by determining the effects of stimulant drugs methylphenidate and methamphetamine, NMDA antagonist ketamine, and positive GABAA modulator flunitrazepam. All four drugs produced dose-dependent impairment of performances on the Odor Span Task, but for methylphenidate and methamphetamine, these occurred only at doses that had similar effects on performance of a simple odor discrimination. Generally, these disruptions were based on omission of responding at the effective doses. The effects of ketamine and flunitrazepam were more selective in some rats. That is, some rats tested under flunitrazepam and ketamine showed decreases in accuracy on the Odor Span Task at doses that did not affect simple discrimination performance. These selective effects indicate disruption of within-session stimulus control. Overall, these findings support the potential of the Odor Span Task as a baseline for the behavioral pharmacological analysis of remembering. PMID:27747877
Business Model Innovation: A Blueprint for Higher Education
ERIC Educational Resources Information Center
Flanagan, Christine
2012-01-01
Business model innovation is one of the most challenging components of 21st-century leadership. Making incremental improvements to a business model--creating new efficiencies, expanding into adjacent markets--is hard enough. Developing and experimenting with new business models that truly transform how an institution delivers value (while…
An Improved Incremental Learning Approach for KPI Prognosis of Dynamic Fuel Cell System.
Yin, Shen; Xie, Xiaochen; Lam, James; Cheung, Kie Chung; Gao, Huijun
2016-12-01
The key performance indicator (KPI) has an important practical value with respect to the product quality and economic benefits for modern industry. To cope with the KPI prognosis issue under nonlinear conditions, this paper presents an improved incremental learning approach based on available process measurements. The proposed approach takes advantage of the algorithm overlapping of locally weighted projection regression (LWPR) and partial least squares (PLS), implementing the PLS-based prognosis in each locally linear model produced by the incremental learning process of LWPR. The global prognosis results including KPI prediction and process monitoring are obtained from the corresponding normalized weighted means of all the local models. The statistical indicators for prognosis are enhanced as well by the design of novel KPI-related and KPI-unrelated statistics with suitable control limits for non-Gaussian data. For application-oriented purpose, the process measurements from real datasets of a proton exchange membrane fuel cell system are employed to demonstrate the effectiveness of KPI prognosis. The proposed approach is finally extended to a long-term voltage prediction for potential reference of further fuel cell applications.
NASA Technical Reports Server (NTRS)
Kennedy, Thomas L.
1956-01-01
A flight investigation was conducted to determine the effect of jet exhaust on the drag, trim characteristics, and afterbody pressures on a 0.125-scale rocket model of the McDonnell F-101A airplance. Power-off data were obtained over a Mach number range of 1.04 to 1.9 and power-on data were obtained at a Mach number of about 1.5. The data indicated that with power-on the change in external drag coefficient was within the data accuracy and there was a decrease in trim angle of attack of 1.27 degrees with a corresponding decrease of 0.07 in lift coefficient. Correspondingly, pressure coefficients on the side and bottom of the fuselage indicated a positive increment near the jet exit. As the distance downstream of the jet exit increased, the increment on the bottom of the fuselage increased, whereas the increments on the side decreased to a negative peak.
Analyzing the posting behaviors in news forums with incremental inter-event time
NASA Astrophysics Data System (ADS)
Sun, Zhi; Peng, Qinke; Lv, Jia; Zhong, Tao
2017-08-01
Online human behaviors are widely discussed in various fields. Three key factors, named priority, interest and memory are found crucial in human behaviors. Existing research mainly focuses on the identified and active users. However, the anonymous users and the inactive ones exist widely in news forums, whose behaviors do not receive enough attention. They cannot offer abundant postings like the others. It requires us to study posting behaviors of all the users including anonymous ones, identified ones, active ones and inactive ones in news forums only at the collective level. In this paper, the memory effects of the posting behaviors in news forums are investigated at the collective level. On the basis of the incremental inter-event time, a new model is proposed to describe the posting behaviors at the collective level. The results on twelve actual news events demonstrate the good performance of our model to describe the posting behaviors at the collective level in news forums. In addition, we find the symmetric incremental inter-event time distribution and the similar posting patterns in different durations.
Whitlock, R E; Walli, A; Cermeño, P; Rodriguez, L E; Farwell, C; Block, B A
2013-11-01
Using implanted archival tags, we examined the effects of meal caloric value, food type (sardine or squid) and ambient temperature on the magnitude and duration of the heat increment of feeding in three captive juvenile Pacific bluefin tuna. The objective of our study was to develop a model that can be used to estimate energy intake in wild fish of similar body mass. Both the magnitude and duration of the heat increment of feeding (measured by visceral warming) showed a strong positive correlation with the caloric value of the ingested meal. Controlling for meal caloric value, the extent of visceral warming was significantly greater at lower ambient temperature. The extent of visceral warming was also significantly higher for squid meals compared with sardine meals. By using a hierarchical Bayesian model to analyze our data and treating individuals as random effects, we demonstrate how increases in visceral temperature can be used to estimate the energy intake of wild Pacific bluefin tuna of similar body mass to the individuals used in our study.
NASA Technical Reports Server (NTRS)
Griffin, Roy N., Jr.; Holzhauser, Curt A.; Weiberg, James A.
1958-01-01
An investigation was made to determine the lifting effectiveness and flow requirements of blowing over the trailing-edge flaps and ailerons on a large-scale model of a twin-engine, propeller-driven airplane having a high-aspect-ratio, thick, straight wing. With sufficient blowing jet momentum to prevent flow separation on the flap, the lift increment increased for flap deflections up to 80 deg (the maximum tested). This lift increment also increased with increasing propeller thrust coefficient. The blowing jet momentum coefficient required for attached flow on the flaps was not significantly affected by thrust coefficient, angle of attack, or blowing nozzle height.
Flexible Environments for Grand-Challenge Simulation in Climate Science
NASA Astrophysics Data System (ADS)
Pierrehumbert, R.; Tobis, M.; Lin, J.; Dieterich, C.; Caballero, R.
2004-12-01
Current climate models are monolithic codes, generally in Fortran, aimed at high-performance simulation of the modern climate. Though they adequately serve their designated purpose, they present major barriers to application in other problems. Tailoring them to paleoclimate of planetary simulations, for instance, takes months of work. Theoretical studies, where one may want to remove selected processes or break feedback loops, are similarly hindered. Further, current climate models are of little value in education, since the implementation of textbook concepts and equations in the code is obscured by technical detail. The Climate Systems Center at the University of Chicago seeks to overcome these limitations by bringing modern object-oriented design into the business of climate modeling. Our ultimate goal is to produce an end-to-end modeling environment capable of configuring anything from a simple single-column radiative-convective model to a full 3-D coupled climate model using a uniform, flexible interface. Technically, the modeling environment is implemented as a Python-based software component toolkit: key number-crunching procedures are implemented as discrete, compiled-language components 'glued' together and co-ordinated by Python, combining the high performance of compiled languages and the flexibility and extensibility of Python. We are incrementally working towards this final objective following a series of distinct, complementary lines. We will present an overview of these activities, including PyOM, a Python-based finite-difference ocean model allowing run-time selection of different Arakawa grids and physical parameterizations; CliMT, an atmospheric modeling toolkit providing a library of 'legacy' radiative, convective and dynamical modules which can be knitted into dynamical models, and PyCCSM, a version of NCAR's Community Climate System Model in which the coupler and run-control architecture are re-implemented in Python, augmenting its flexibility and adaptability.
Impact of Incremental Perfusion Loss on Oxygen Transport in a Capillary Network Mathematical Model.
Fraser, Graham M; Sharpe, Michael D; Goldman, Daniel; Ellis, Christopher G
2015-07-01
To quantify how incremental capillary PL, such as that seen in experimental models of sepsis, affects tissue oxygenation using a computation model of oxygen transport. A computational model was applied to capillary networks with dimensions 84 × 168 × 342 (NI) and 70 × 157 × 268 (NII) μm, reconstructed in vivo from rat skeletal muscle. FCD loss was applied incrementally up to ~40% and combined with high tissue oxygen consumption to simulate severe sepsis. A loss of ~40% FCD loss decreased median tissue PO2 to 22.9 and 20.1 mmHg in NI and NII compared to 28.1 and 27.5 mmHg under resting conditions. Increasing RBC SR to baseline levels returned tissue PO2 to within 5% of baseline. HC combined with a 40% FCD loss, resulted in tissue anoxia in both network volumes and median tissue PO2 of 11.5 and 8.9 mmHg in NI and NII respectively; median tissue PO2 was recovered to baseline levels by increasing total SR 3-4 fold. These results suggest a substantial increase in total SR is required in order to compensate for impaired oxygen delivery as a result of loss of capillary perfusion and increased oxygen consumption during sepsis. © 2015 John Wiley & Sons Ltd.
Wang, Heng; Sang, Yuanjun
2017-10-01
The mechanical behavior modeling of human soft biological tissues is a key issue for a large number of medical applications, such as surgery simulation, surgery planning, diagnosis, etc. To develop a biomechanical model of human soft tissues under large deformation for surgery simulation, the adaptive quasi-linear viscoelastic (AQLV) model was proposed and applied in human forearm soft tissues by indentation tests. An incremental ramp-and-hold test was carried out to calibrate the model parameters. To verify the predictive ability of the AQLV model, the incremental ramp-and-hold test, a single large amplitude ramp-and-hold test and a sinusoidal cyclic test at large strain amplitude were adopted in this study. Results showed that the AQLV model could predict the test results under the three kinds of load conditions. It is concluded that the AQLV model is feasible to describe the nonlinear viscoelastic properties of in vivo soft tissues under large deformation. It is promising that this model can be selected as one of the soft tissues models in the software design for surgery simulation or diagnosis.
How Fast Do Europa's Ridges Grow?
NASA Astrophysics Data System (ADS)
Melosh, H. J.; Turtle, E. P.; Freed, A. M.
2017-11-01
We demonstrate with our incremental wedging model of ridge formation that ridges must grow in 5000 years or less to prevent their material flowing down an underlying warm ice channel. This conclusion holds for other models as well.
An incremental knowledge assimilation system (IKAS) for mine detection
NASA Astrophysics Data System (ADS)
Porway, Jake; Raju, Chaitanya; Varadarajan, Karthik Mahesh; Nguyen, Hieu; Yadegar, Joseph
2010-04-01
In this paper we present an adaptive incremental learning system for underwater mine detection and classification that utilizes statistical models of seabed texture and an adaptive nearest-neighbor classifier to identify varied underwater targets in many different environments. The first stage of processing uses our Background Adaptive ANomaly detector (BAAN), which identifies statistically likely target regions using Gabor filter responses over the image. Using this information, BAAN classifies the background type and updates its detection using background-specific parameters. To perform classification, a Fully Adaptive Nearest Neighbor (FAAN) determines the best label for each detection. FAAN uses an extremely fast version of Nearest Neighbor to find the most likely label for the target. The classifier perpetually assimilates new and relevant information into its existing knowledge database in an incremental fashion, allowing improved classification accuracy and capturing concept drift in the target classes. Experiments show that the system achieves >90% classification accuracy on underwater mine detection tasks performed on synthesized datasets provided by the Office of Naval Research. We have also demonstrated that the system can incrementally improve its detection accuracy by constantly learning from new samples.
McLaughlin, Samuel B; Wullschleger, Stan D; Nosal, Miloslav
2003-11-01
To evaluate indicators of whole-tree physiological responses to climate stress, we determined seasonal, daily and diurnal patterns of growth and water use in 10 yellow poplar (Liriodendron tulipifera L.) trees in a stand recently released from competition. Precise measurements of stem increment and sap flow made with automated electronic dendrometers and thermal dissipation probes, respectively, indicated close temporal linkages between water use and patterns of stem shrinkage and swelling during daily cycles of water depletion and recharge of extensible outer-stem tissues. These cycles also determined net daily basal area increment. Multivariate regression models based on a 123-day data series showed that daily diameter increments were related negatively to vapor pressure deficit (VPD), but positively to precipitation and temperature. The same model form with slight changes in coefficients yielded coefficients of determination of about 0.62 (0.57-0.66) across data subsets that included widely variable growth rates and VPDs. Model R2 was improved to 0.75 by using 3-day running mean daily growth data. Rapid recovery of stem diameter growth following short-term, diurnal reductions in VPD indicated that water stored in extensible stem tissues was part of a fast recharge system that limited hydration changes in the cambial zone during periods of water stress. There were substantial differences in the seasonal dynamics of growth among individual trees, and analyses indicated that faster-growing trees were more positively affected by precipitation, solar irradiance and temperature and more negatively affected by high VPD than slower-growing trees. There were no negative effects of ozone on daily growth rates in a year of low ozone concentrations.
Byrnes, Joshua; Carrington, Melinda; Chan, Yih-Kai; Pollicino, Christine; Dubrowin, Natalie; Stewart, Simon; Scuffham, Paul A.
2015-01-01
The aim of this study is to consider the cost-effectiveness of a nurse-led, home-based intervention (HBI) in cardiac patients with private health insurance compared to usual post-discharge care. A within trial analysis of the Young @ Heart multicentre, randomized controlled trial along with a micro-simulation decision analytical model was conducted to estimate the incremental costs and quality adjusted life years associated with the home based intervention compared to usual care. For the micro-simulation model, future costs, from the perspective of the funder, and effects are estimated over a twenty-year time horizon. An Incremental Cost-Effectiveness Ratio, along with Incremental Net Monetary Benefit, is evaluated using a willingness to pay threshold of $50,000 per quality adjusted life year. Sub-group analyses are conducted for men and women across three age groups separately. Costs and benefits that arise in the future are discounted at five percent per annum. Overall, home based intervention for secondary prevention in patients with chronic heart disease identified in the Australian private health care sector is not cost-effective. The estimated within trial incremental net monetary benefit is -$3,116 [95%CI: -11,145, $4,914]; indicating that the costs outweigh the benefits. However, for males and in particular males aged 75 years and above, home based intervention indicated a potential to reduce health care costs when compared to usual care (within trial: -$10,416 [95%CI: -$26,745, $5,913]; modelled analysis: -$1,980 [95%CI: -$22,843, $14,863]). This work provides a crucial impetus for future research to understand for whom disease management programs are likely to benefit most. PMID:26657844
Baudrexel, Simon; Nöth, Ulrike; Schüre, Jan-Rüdiger; Deichmann, Ralf
2018-06-01
The variable flip angle method derives T 1 maps from radiofrequency-spoiled gradient-echo data sets, acquired with different flip angles α. Because the method assumes validity of the Ernst equation, insufficient spoiling of transverse magnetization yields errors in T 1 estimation, depending on the chosen radiofrequency-spoiling phase increment (Δϕ). This paper presents a versatile correction method that uses modified flip angles α' to restore the validity of the Ernst equation. Spoiled gradient-echo signals were simulated for three commonly used phase increments Δϕ (50°/117°/150°), different values of α, repetition time (TR), T 1 , and a T 2 of 85 ms. For each parameter combination, α' (for which the Ernst equation yielded the same signal) and a correction factor C Δϕ (α, TR, T 1 ) = α'/α were determined. C Δϕ was found to be independent of T 1 and fitted as polynomial C Δϕ (α, TR), allowing to calculate α' for any protocol using this Δϕ. The accuracy of the correction method for T 2 values deviating from 85 ms was also determined. The method was tested in vitro and in vivo for variable flip angle scans with different acquisition parameters. The technique considerably improved the accuracy of variable flip angle-based T 1 maps in vitro and in vivo. The proposed method allows for a simple correction of insufficient spoiling in gradient-echo data. The required polynomial parameters are supplied for three common Δϕ. Magn Reson Med 79:3082-3092, 2018. © 2017 International Society for Magnetic Resonance in Medicine. © 2017 International Society for Magnetic Resonance in Medicine.
NASA Astrophysics Data System (ADS)
Chan, Kwai S.; Enright, Michael P.; Moody, Jonathan; Fitch, Simeon H. K.
2014-01-01
The objective of this investigation was to develop an innovative methodology for life and reliability prediction of hot-section components in advanced turbopropulsion systems. A set of generic microstructure-based time-dependent crack growth (TDCG) models was developed and used to assess the sources of material variability due to microstructure and material parameters such as grain size, activation energy, and crack growth threshold for TDCG. A comparison of model predictions and experimental data obtained in air and in vacuum suggests that oxidation is responsible for higher crack growth rates at high temperatures, low frequencies, and long dwell times, but oxidation can also induce higher crack growth thresholds (Δ K th or K th) under certain conditions. Using the enhanced risk analysis tool and material constants calibrated to IN 718 data, the effect of TDCG on the risk of fracture in turboengine components was demonstrated for a generic rotor design and a realistic mission profile using the DARWIN® probabilistic life-prediction code. The results of this investigation confirmed that TDCG and cycle-dependent crack growth in IN 718 can be treated by a simple summation of the crack increments over a mission. For the temperatures considered, TDCG in IN 718 can be considered as a K-controlled or a diffusion-controlled oxidation-induced degradation process. This methodology provides a pathway for evaluating microstructural effects on multiple damage modes in hot-section components.
NASA Astrophysics Data System (ADS)
Polvani, L. M.; Wang, L.; Aquila, V.; Waugh, D.
2016-12-01
The impact of ozone depleting substances on global lower stratospheric temperature trends is widely recognized. In the tropics, however, understanding lower stratospheric temperature trends has proven more challenging. While the tropical lower stratospheric cooling observed from 1979 to 1997 has also been shown to result almost entirely from ozone decreases, those ozone trends cannot be of chemical origin, as active chlorine is not abundant in the tropical lower stratosphere. The 1979-1997 tropical ozone trends are believed to originate from enhanced upwelling which, it is often stated, would be driven by increasing concentrations of well mixed greenhouse gases. In this study, using simple arguments based on observational evidence after 1997, combined with model integrations with incrementally added single forcings, we argue that ozone depleting substances, not well mixed greenhouse gases, have been the primary driver of temperature and ozone trends in the tropical lower stratosphere until 1997, and this has occurred because ozone depleting substances affect tropical upwelling and the entire Brewer-Dobson circulation.
NASA Technical Reports Server (NTRS)
Miller, C. G., III
1982-01-01
Pressure distributions, aerodynamic coefficients, and shock shapes were measured on blunt bodies of revolution in Mach 6 CF4 and in Mach 6 and Mach 10 air. The angle of attack was varied from 0 deg to 20 deg in 4 deg increments. Configurations tested were a hyperboloid with an asymptotic angle of 45 deg, a sonic-corner paraboloid, a paraboloid with an angle of 27.6 deg at the base, a Viking aeroshell generated in a generalized orthogonal coordinate system, and a family of cones having a 45 deg half-angle with spherical, flattened, concave, and cusp nose shapes. Real-gas effects were simulated for the hperboloid and paraboloid models at Mach 6 by testing at a normal-shock density ratio of 5.3 in air and 12 CF4. Predictions from simple theories and numerical flow field programs are compared with measurement. It is anticipated that the data presented in this report will be useful for verification of analytical methods for predicting hypersonic flow fields about blunt bodies at incidence.
NASA Astrophysics Data System (ADS)
Onnela, Jukka-Pekka; Töyli, Juuso; Kaski, Kimmo
2009-02-01
Tick size is an important aspect of the micro-structural level organization of financial markets. It is the smallest institutionally allowed price increment, has a direct bearing on the bid-ask spread, influences the strategy of trading order placement in electronic markets, affects the price formation mechanism, and appears to be related to the long-term memory of volatility clustering. In this paper we investigate the impact of tick size on stock returns. We start with a simple simulation to demonstrate how continuous returns become distorted after confining the price to a discrete grid governed by the tick size. We then move on to a novel experimental set-up that combines decimalization pilot programs and cross-listed stocks in New York and Toronto. This allows us to observe a set of stocks traded simultaneously under two different ticks while holding all security-specific characteristics fixed. We then study the normality of the return distributions and carry out fits to the chosen distribution models. Our empirical findings are somewhat mixed and in some cases appear to challenge the simulation results.
NASA Astrophysics Data System (ADS)
Pena, Rodrigo F. O.; Ceballos, Cesar C.; Lima, Vinicius; Roque, Antonio C.
2018-04-01
In a neuron with hyperpolarization activated current (Ih), the correct input frequency leads to an enhancement of the output response. This behavior is known as resonance and is well described by the neuronal impedance. In a simple neuron model we derive equations for the neuron's resonance and we link its frequency and existence with the biophysical properties of Ih. For a small voltage change, the component of the ratio of current change to voltage change (d I /d V ) due to the voltage-dependent conductance change (d g /d V ) is known as derivative conductance (GhDer). We show that both GhDer and the current activation kinetics (characterized by the activation time constant τh) are mainly responsible for controlling the frequency and existence of resonance. The increment of both factors (GhDer and τh) greatly contributes to the appearance of resonance. We also demonstrate that resonance is voltage dependent due to the voltage dependence of GhDer. Our results have important implications and can be used to predict and explain resonance properties of neurons with the Ih current.
NASA Astrophysics Data System (ADS)
Forsythe, N.; Blenkinsop, S.; Fowler, H. J.
2015-05-01
A three-step climate classification was applied to a spatial domain covering the Himalayan arc and adjacent plains regions using input data from four global meteorological reanalyses. Input variables were selected based on an understanding of the climatic drivers of regional water resource variability and crop yields. Principal component analysis (PCA) of those variables and k-means clustering on the PCA outputs revealed a reanalysis ensemble consensus for eight macro-climate zones. Spatial statistics of input variables for each zone revealed consistent, distinct climatologies. This climate classification approach has potential for enhancing assessment of climatic influences on water resources and food security as well as for characterising the skill and bias of gridded data sets, both meteorological reanalyses and climate models, for reproducing subregional climatologies. Through their spatial descriptors (area, geographic centroid, elevation mean range), climate classifications also provide metrics, beyond simple changes in individual variables, with which to assess the magnitude of projected climate change. Such sophisticated metrics are of particular interest for regions, including mountainous areas, where natural and anthropogenic systems are expected to be sensitive to incremental climate shifts.
Magmatism at different crustal levels in the ancient North Cascades magmatic arc
NASA Astrophysics Data System (ADS)
Shea, E. K.; Bowring, S. A.; Miller, R. B.; Miller, J. S.
2013-12-01
The mechanisms of magma ascent and emplacement inferred from study of intrusive complexes have long been the subject of intense debate. Current models favor incremental construction based on integration of field, geochemical, geochronologic, and modeling studies. Much of this work has been focused on a single crustal level. However, study of magmatism throughout the crust is critical for understanding how magma ascends through and intrudes surrounding crustal material. Here, we present new geochronologic and geochemical work from intrusive complexes emplaced at a range of crustal depths in the Cretaceous North Cascades magmatic arc. These complexes were intruded between 92 and 87 Ma at depths of at ≤5 -10 km, ~20 km, and ~25 km during this time. U-Pb CA-TIMS geochronology in zircon can resolve <0.1% differences in zircon dates and when combined with detailed field relationships allow new insights into how magmatic systems are assembled. We can demonstrate highly variable rates of intrusion at different crustal levels: the shallow-crustal (5-10 km) Black Peak intrusive complex was assembled semi-continuously over ~5 My, while the deep-crustal (25-30 km) Tenpeak intrusive complex was assembled in brief, high-flux events over ~2.6 My. Between these bodies is the Seven-Fingered Jack-Entiat intrusive complex, a highly elongate amalgamation of intrusions recording two episodes of magmatism between~92-88 Ma and ~80-77 Ma. Each of these complexes provides a window into crustal processes that occur at different depths. Our data suggest assembly of the Black Peak intrusive complex occurred via a series of small (0.5-2 km2) magmatic increments from ~92 Ma to ~87 Ma. Field relations and zircon trace element geochemistry indicate each of these increments were emplaced and crystallized as closed systems-we find no evidence for mixing between magmas in the complex. However, zircon inheritance becomes more common in younger intrusions, indicating assimilation of older plutonic material, possibly during magma production or transport. The Seven-Fingered Jack intrusive complex, emplaced around 15-20 km, preserves a much more discontinuous record of intrusion than the Black Peak. Our data indicate major magmatism in the complex occurred between ~92.1-91.1 Ma. Inheritance in the Seven-Fingered Jack is common, particularly along contacts between intrusions. The Tenpeak intrusive complex, assembled between ~92 Ma and 89 Ma, represents one of the deepest exhumed complexes in the North Cascades. Our geochronology indicates that plutons comprising the complex were intruded rapidly (<200 ka) and followed by periods of magmatic quiescence. Contact relations between contemporaneous intrusions are often mixed, further supporting rapid assembly. Zircon systematics in the Tenpeak are relatively simple, showing no evidence for inheritance from the surrounding host rock or from earlier intrusions. However, zircon oxygen isotope data indicates many magmas contain significant crustal input. The Black Peak, Seven-Fingered Jack, and Tenpeak intrusions illustrate the complicated nature of magmatism at different crustal levels in the 92-87 Ma North Cascades magmatic arc. Our data support incremental assembly of these complexes, but show that many features, such as style of emplacement, zircon chemical and temporal systematics, and magma composition vary between these intrusions.
Bohlouli, Babak; Jackson, Terri; Tonelli, Marcello; Hemmelgarn, Brenda; Klarenbach, Scott
2017-12-28
Patients with CKD are at increased risk of potentially preventable hospital acquired complications (HACs). Understanding the economic consequences of preventable HACs, may define the scope and investment of initiatives aimed at prevention. Adult patients hospitalized from April, 2003 to March, 2008 in Alberta, Canada comprised the study cohort. Healthcare costs were determined and categorized into 'index hospitalization' including hospital cost and in-hospital physician claims, and 'post discharge' including ambulatory care cost, physician claims, and readmission costs from discharge to 90 days. Multivariable regression was used to estimate the incremental healthcare costs associated with potentially preventable HACs. In fully adjusted models, the median incremental index hospitalization cost was CAN-$6169 (95% CI; 6003-6336) in CKD patients with ≥1 potentially preventable HACs, compared with those without. Post-discharge incremental costs were 1471(95% CI; 844-2099) in those patients with CKD who developed potentially preventable HACs within 90 days after discharge compared with patients without potentially preventable HACs. Additionally, the incremental costs associated with ≥1 potentially preventable HACs within 90 days from admission in patients with CKD were $7522 (95% CI; 7219-7824). A graded relation of the incremental costs was noted with the increasing number of complications. In patients without CKD but with ≥1 preventable HACs incremental costs within 90 days from hospital admission was $6688 (95% CI: 6612-6723). Potentially preventable HACs are associated with substantial increases in healthcare costs in people with CKD. Investment in implementing targeted strategies to reduce HACs may have a significant benefit for patient and health system outcomes.
Assimilation of gridded terrestrial water storage observations from GRACE into a land surface model
NASA Astrophysics Data System (ADS)
Girotto, Manuela; De Lannoy, Gabriëlle J. M.; Reichle, Rolf H.; Rodell, Matthew
2016-05-01
Observations of terrestrial water storage (TWS) from the Gravity Recovery and Climate Experiment (GRACE) satellite mission have a coarse resolution in time (monthly) and space (roughly 150,000 km2 at midlatitudes) and vertically integrate all water storage components over land, including soil moisture and groundwater. Data assimilation can be used to horizontally downscale and vertically partition GRACE-TWS observations. This work proposes a variant of existing ensemble-based GRACE-TWS data assimilation schemes. The new algorithm differs in how the analysis increments are computed and applied. Existing schemes correlate the uncertainty in the modeled monthly TWS estimates with errors in the soil moisture profile state variables at a single instant in the month and then apply the increment either at the end of the month or gradually throughout the month. The proposed new scheme first computes increments for each day of the month and then applies the average of those increments at the beginning of the month. The new scheme therefore better reflects submonthly variations in TWS errors. The new and existing schemes are investigated here using gridded GRACE-TWS observations. The assimilation results are validated at the monthly time scale, using in situ measurements of groundwater depth and soil moisture across the U.S. The new assimilation scheme yields improved (although not in a statistically significant sense) skill metrics for groundwater compared to the open-loop (no assimilation) simulations and compared to the existing assimilation schemes. A smaller impact is seen for surface and root-zone soil moisture, which have a shorter memory and receive smaller increments from TWS assimilation than groundwater. These results motivate future efforts to combine GRACE-TWS observations with observations that are more sensitive to surface soil moisture, such as L-band brightness temperature observations from Soil Moisture Ocean Salinity (SMOS) or Soil Moisture Active Passive (SMAP). Finally, we demonstrate that the scaling parameters that are applied to the GRACE observations prior to assimilation should be consistent with the land surface model that is used within the assimilation system.
The value of atorvastatin over the product life cycle in the United States.
Grabner, Michael; Johnson, Wallace; Abdulhalim, Abdulla M; Kuznik, Andreas; Mullins, C Daniel
2011-10-01
US health care reform mandates the reduction of wasteful health care spending while maintaining quality of care. Introducing new drugs into crowded therapeutic classes may be viewed as offering "me-too" (new drugs with a similar mechanism of action compared to existing drugs) drugs without incremental benefit. This article presents an analysis of the incremental costs and benefits of atorvastatin, a lipid-lowering agent. This analysis models the cost-effectiveness of atorvastatin over the product life cycle. The yearly cost-effectiveness of atorvastatin compared to simvastatin was modeled from 1997 to 2030 from the point of view of a US third-party payer. Estimates for incremental costs (in US $) and effects (in quality-adjusted life-years [QALYs]) for the primary and secondary prevention of cardiovascular events were taken from previously published literature and adjusted for changes in drug prices over time. Estimates of total statin use were derived using the National Health and Nutrition Examination Survey. Sensitivity analyses were conducted to examine variations in study parameters, including drug prices, indications, and discount rates. Assuming increasing statin use over time (with a mean of 1.07 million new users per year) and a 3% discount rate, the cumulative incremental cost-effectiveness ratio (ICER) of atorvastatin versus simvastatin ranged from cost-savings at release to a maximum of $45,066/QALY after 6 years of generic simvastatin use in 2012. Over the full modeled life cycle (1997-2030), the cumulative ICER of atorvastatin was $20,331/QALY. The incremental value of atorvastatin to US payers (after subtracting costs) was estimated at $44.57 to $194.78 billion, depending on willingness to pay. Findings from the sensitivity analyses were similar. A hypothetical situation in which atorvastatin did not exist was associated with a reduction in total expenditures but also a loss of QALYs gained. The cumulative ICER of atorvastatin varied across the product life cycle, increasing during the period between generic simvastatin entry and generic atorvastatin entry, and decreasing thereafter. Copyright © 2011 Elsevier HS Journals, Inc. All rights reserved.
Assimilation of Gridded Terrestrial Water Storage Observations from GRACE into a Land Surface Model
NASA Technical Reports Server (NTRS)
Girotto, Manuela; De Lannoy, Gabrielle J. M.; Reichle, Rolf H.; Rodell, Matthew
2016-01-01
Observations of terrestrial water storage (TWS) from the Gravity Recovery and Climate Experiment (GRACE) satellite mission have a coarse resolution in time (monthly) and space (roughly 150,000 km(sup 2) at midlatitudes) and vertically integrate all water storage components over land, including soil moisture and groundwater. Data assimilation can be used to horizontally downscale and vertically partition GRACE-TWS observations. This work proposes a variant of existing ensemble-based GRACE-TWS data assimilation schemes. The new algorithm differs in how the analysis increments are computed and applied. Existing schemes correlate the uncertainty in the modeled monthly TWS estimates with errors in the soil moisture profile state variables at a single instant in the month and then apply the increment either at the end of the month or gradually throughout the month. The proposed new scheme first computes increments for each day of the month and then applies the average of those increments at the beginning of the month. The new scheme therefore better reflects submonthly variations in TWS errors. The new and existing schemes are investigated here using gridded GRACE-TWS observations. The assimilation results are validated at the monthly time scale, using in situ measurements of groundwater depth and soil moisture across the U.S. The new assimilation scheme yields improved (although not in a statistically significant sense) skill metrics for groundwater compared to the open-loop (no assimilation) simulations and compared to the existing assimilation schemes. A smaller impact is seen for surface and root-zone soil moisture, which have a shorter memory and receive smaller increments from TWS assimilation than groundwater. These results motivate future efforts to combine GRACE-TWS observations with observations that are more sensitive to surface soil moisture, such as L-band brightness temperature observations from Soil Moisture Ocean Salinity (SMOS) or Soil Moisture Active Passive (SMAP). Finally, we demonstrate that the scaling parameters that are applied to the GRACE observations prior to assimilation should be consistent with the land surface model that is used within the assimilation system.
Foundations for Streaming Model Transformations by Complex Event Processing.
Dávid, István; Ráth, István; Varró, Dániel
2018-01-01
Streaming model transformations represent a novel class of transformations to manipulate models whose elements are continuously produced or modified in high volume and with rapid rate of change. Executing streaming transformations requires efficient techniques to recognize activated transformation rules over a live model and a potentially infinite stream of events. In this paper, we propose foundations of streaming model transformations by innovatively integrating incremental model query, complex event processing (CEP) and reactive (event-driven) transformation techniques. Complex event processing allows to identify relevant patterns and sequences of events over an event stream. Our approach enables event streams to include model change events which are automatically and continuously populated by incremental model queries. Furthermore, a reactive rule engine carries out transformations on identified complex event patterns. We provide an integrated domain-specific language with precise semantics for capturing complex event patterns and streaming transformations together with an execution engine, all of which is now part of the Viatra reactive transformation framework. We demonstrate the feasibility of our approach with two case studies: one in an advanced model engineering workflow; and one in the context of on-the-fly gesture recognition.
NASA Astrophysics Data System (ADS)
Wu, Hongjie; Yuan, Shifei; Zhang, Xi; Yin, Chengliang; Ma, Xuerui
2015-08-01
To improve the suitability of lithium-ion battery model under varying scenarios, such as fluctuating temperature and SoC variation, dynamic model with parameters updated realtime should be developed. In this paper, an incremental analysis-based auto regressive exogenous (I-ARX) modeling method is proposed to eliminate the modeling error caused by the OCV effect and improve the accuracy of parameter estimation. Then, its numerical stability, modeling error, and parametric sensitivity are analyzed at different sampling rates (0.02, 0.1, 0.5 and 1 s). To identify the model parameters recursively, a bias-correction recursive least squares (CRLS) algorithm is applied. Finally, the pseudo random binary sequence (PRBS) and urban dynamic driving sequences (UDDSs) profiles are performed to verify the realtime performance and robustness of the newly proposed model and algorithm. Different sampling rates (1 Hz and 10 Hz) and multiple temperature points (5, 25, and 45 °C) are covered in our experiments. The experimental and simulation results indicate that the proposed I-ARX model can present high accuracy and suitability for parameter identification without using open circuit voltage.
Vallecilla, Carolina; Khiabani, Reza H; Sandoval, Néstor; Fogel, Mark; Briceño, Juan Carlos; Yoganathan, Ajit P
2014-06-03
The considerable blood mixing in the bidirectional Glenn (BDG) physiology further limits the capacity of the single working ventricle to pump enough oxygenated blood to the circulatory system. This condition is exacerbated under severe conditions such as physical activity or high altitude. In this study, the effect of high altitude exposure on hemodynamics and ventricular function of the BDG physiology is investigated. For this purpose, a mathematical approach based on a lumped parameter model was developed to model the BDG circulation. Catheterization data from 39 BDG patients at stabilized oxygen conditions was used to determine baseline flows and pressures for the model. The effect of high altitude exposure was modeled by increasing the pulmonary vascular resistance (PVR) and heart rate (HR) in increments up to 80% and 40%, respectively. The resulting differences in vascular flows, pressures and ventricular function parameters were analyzed. By simultaneously increasing PVR and HR, significant changes (p <0.05) were observed in cardiac index (11% increase at an 80% PVR and 40% HR increase) and pulmonary flow (26% decrease at an 80% PVR and 40% HR increase). Significant increase in mean systemic pressure (9%) was observed at 80% PVR (40% HR) increase. The results show that the poor ventricular function fails to overcome the increased preload and implied low oxygenation in BDG patients at higher altitudes, especially for those with high baseline PVRs. The presented mathematical model provides a framework to estimate the hemodynamic performance of BDG patients at different PVR increments. Copyright © 2014 Elsevier Ltd. All rights reserved.
Svendsen, Erik R; Gonzales, Melissa; Mukerjee, Shaibal; Smith, Luther; Ross, Mary; Walsh, Debra; Rhoney, Scott; Andrews, Gina; Ozkaynak, Halûk; Neas, Lucas M
2012-10-01
Investigators examined 5,654 children enrolled in the El Paso, Texas, public school district by questionnaire in 2001. Exposure measurements were first collected in the late fall of 1999. School-level and residence-level exposures to traffic-related air pollutants were estimated using a land use regression model. For 1,529 children with spirometry, overall geographic information system (GIS)-modeled residential levels of traffic-related ambient air pollution (calibrated to a 10-ppb increment in nitrogen dioxide levels) were associated with a 2.4% decrement in forced vital capacity (95% confidence interval (CI): -4.0, -0.7) after adjustment for demographic, anthropomorphic, and socioeconomic factors and spirometer/technician effects. After adjustment for these potential covariates, overall GIS-modeled residential levels of traffic-related ambient air pollution (calibrated to a 10-ppb increment in nitrogen dioxide levels) were associated with pulmonary function levels below 85% of those predicted for both forced vital capacity (odds ratio (OR) = 3.10, 95% CI: 1.65, 5.78) and forced expiratory volume in 1 second (OR = 2.35, 95% CI: 1.38, 4.01). For children attending schools at elevations above 1,170 m, a 10-ppb increment in modeled nitrogen dioxide levels was associated with current asthma (OR = 1.56, 95% CI: 1.08, 2.50) after adjustment for demographic, socioeconomic, and parental factors and random school effects. These results are consistent with previous studies in Europe and California that found adverse health outcomes in children associated with modeled traffic-related air pollutants.
Spatiotemporal analysis of black spruce forest soils and implications for the fate of C
Harden, Jennifer W.; Manies, Kristen L.; O'Donnell, Jonathan; Johnson, Kristofer; Frolking, Steve; Fan, Zhaosheng
2012-01-01
Post-fire storage of carbon (C) in organic-soil horizons was measured in one Canadian and three Alaskan chronosequences in black spruce forests, together spanning stand ages of nearly 200 yrs. We used a simple mass balance model to derive estimates of inputs, losses, and accumulation rates of C on timescales of years to centuries. The model performed well for the surface and total organic soil layers and presented questions for resolving the dynamics of deeper organic soils. C accumulation in all study areas is on the order of 20–40 gC/m2/yr for stand ages up to ∼200 yrs. Much larger fluxes, both positive and negative, are detected using incremental changes in soil C stocks and by other studies using eddy covariance methods for CO2. This difference suggests that over the course of stand replacement, about 80% of all net primary production (NPP) is returned to the atmosphere within a fire cycle, while about 20% of NPP enters the organic soil layers and becomes available for stabilization or loss via decomposition, leaching, or combustion. Shifts toward more frequent and more severe burning and degradation of deep organic horizons would likely result in an acceleration of the carbon cycle, with greater CO2 emissions from these systems overall.
Use of Ventricular Assist Device in Univentricular Physiology: The Role of Lumped Parameter Models.
Di Molfetta, Arianna; Ferrari, Gianfranco; Filippelli, Sergio; Fresiello, Libera; Iacobelli, Roberta; Gagliardi, Maria G; Amodeo, Antonio
2016-05-01
Failing single-ventricle (SV) patients might benefit from ventricular assist devices (VADs) as a bridge to heart transplantation. Considering the complex physiopathology of SV patients and the lack of established experience, the aim of this work was to realize and test a lumped parameter model of the cardiovascular system, able to simulate SV hemodynamics and VAD implantation effects. Data of 30 SV patients (10 Norwood, 10 Glenn, and 10 Fontan) were retrospectively collected and used to simulate patients' baseline. Then, the effects of VAD implantation were simulated. Additionally, both the effects of ventricular assistance and cavopulmonary assistance were simulated in different pathologic conditions on Fontan patients, including systolic dysfunction, diastolic dysfunction, and pulmonary vascular resistance increment. The model can reproduce patients' baseline well. Simulation results suggest that the implantation of VAD: (i) increases the cardiac output (CO) in all the three palliation conditions (Norwood 77.2%, Glenn 38.6%, and Fontan 17.2%); (ii) decreases the SV external work (SVEW) (Norwood 55%, Glenn 35.6%, and Fontan 41%); (iii) increases the mean pulmonary arterial pressure (Pap) (Norwood 39.7%, Glenn 12.1%, and Fontan 3%). In Fontan circulation, with systolic dysfunction, the left VAD (LVAD) increases CO (35%), while the right VAD (RVAD) determines a decrement of inferior vena cava pressure (Pvci) (39%) with 34% increment of CO. With diastolic dysfunction, the LVAD increases CO (42%) and the RVAD decreases the Pvci. With pulmonary vascular resistance increment, the RVAD allows the highest CO (50%) increment with the highest decrement of Pvci (53%). The single ventricular external work (SVEW) increases (decreases) increasing the VAD speed in cavopulmonary (ventricular) assistance. Numeric models could be helpful in this challenging and innovative field to support patients and VAD selection to optimize the clinical outcome and personalize the therapy. Copyright © 2015 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
Borg, Sixten; Nahi, Hareth; Hansson, Markus; Lee, Dawn; Elvidge, Jamie; Persson, Ulf
2016-05-01
Multiple myeloma (MM) patients who have progressed following treatment with both bortezomib and lenalidomide have a poor prognosis. In this late stage, other effective alternatives are limited, and patients in Sweden are often left with best supportive care. Pomalidomide is a new anti-angiogenic and immunomodulatory drug for the treatment of MM. Our objective was to evaluate the cost effectiveness of pomalidomide as an add-on to best supportive care in patients with relapsed and refractory MM in Sweden. We developed a health-economic discrete event simulation model of a patient's course through stable disease and progressive disease, until death. It estimates life expectancy, quality-adjusted life years (QALYs) and costs from a societal perspective. Effectiveness data and utilities were taken from the MM-003 trial comparing pomalidomide plus low-dose dexamethasone with high-dose dexamethasone (HIDEX). Cost data were taken from official Swedish price lists, government sources and literature. The model estimates that, if a patient is treated with HIDEX, life expectancy is 1.12 years and the total cost is SEK 179 976 (€19 100), mainly indirect costs. With pomalidomide plus low-dose dexamethasone, life expectancy is 2.33 years, with a total cost of SEK 767 064 (€81 500), mainly in drug and indirect costs. Compared to HIDEX, pomalidomide treatment gives a QALY gain of 0.7351 and an incremental cost of SEK 587 088 (€62 400) consisting of increased drug costs (59%), incremental indirect costs (33%) and other healthcare costs (8%). The incremental cost-effectiveness ratio is SEK 798 613 (€84 900) per QALY gained. In a model of late-stage MM patients with a poor prognosis in the Swedish setting, pomalidomide is associated with a relatively high incremental cost per QALY gained. This model was accepted by the national Swedish reimbursement authority TLV, and pomalidomide was granted reimbursement in Sweden.
Contribution For Arc Temperature Affected By Current Increment Ratio At Peak Current In Pulsed Arc
NASA Astrophysics Data System (ADS)
Kano, Ryota; Mitubori, Hironori; Iwao, Toru
2015-11-01
Tungsten Inert Gas (TIG) Welding is one of the high quality welding. However, parameters of the pulsed arc welding are many and complicated. if the welding parameters are not appropriate, the welding pool shape becomes wide and shallow.the convection of driving force contributes to the welding pool shape. However, in the case of changing current waveform as the pulse high frequency TIG welding, the arc temperature does not follow the change of the current. Other result of the calculation, in particular, the arc temperature at the reaching time of peak current is based on these considerations. Thus, the accurate measurement of the temperature at the time is required. Therefore, the objective of this research is the elucidation of contribution for arc temperature affected by current increment ratio at peak current in pulsed arc. It should obtain a detail knowledge of the welding model in pulsed arc. The temperature in the case of increment of the peak current from the base current is measured by using spectroscopy. As a result, when the arc current increases from 100 A to 150 A at 120 ms, the transient response of the temperature didn't occur during increasing current. Thus, during the current rise, it has been verified by measuring. Therefore, the contribution for arc temperature affected by current increment ratio at peak current in pulsed arc was elucidated in order to obtain more knowledge of welding model of pulsed arc.
Incremental Refinement of FAÇADE Models with Attribute Grammar from 3d Point Clouds
NASA Astrophysics Data System (ADS)
Dehbi, Y.; Staat, C.; Mandtler, L.; Pl¨umer, L.
2016-06-01
Data acquisition using unmanned aerial vehicles (UAVs) has gotten more and more attention over the last years. Especially in the field of building reconstruction the incremental interpretation of such data is a demanding task. In this context formal grammars play an important role for the top-down identification and reconstruction of building objects. Up to now, the available approaches expect offline data in order to parse an a-priori known grammar. For mapping on demand an on the fly reconstruction based on UAV data is required. An incremental interpretation of the data stream is inevitable. This paper presents an incremental parser of grammar rules for an automatic 3D building reconstruction. The parser enables a model refinement based on new observations with respect to a weighted attribute context-free grammar (WACFG). The falsification or rejection of hypotheses is supported as well. The parser can deal with and adapt available parse trees acquired from previous interpretations or predictions. Parse trees derived so far are updated in an iterative way using transformation rules. A diagnostic step searches for mismatches between current and new nodes. Prior knowledge on façades is incorporated. It is given by probability densities as well as architectural patterns. Since we cannot always assume normal distributions, the derivation of location and shape parameters of building objects is based on a kernel density estimation (KDE). While the level of detail is continuously improved, the geometrical, semantic and topological consistency is ensured.
Health level seven interoperability strategy: big data, incrementally structured.
Dolin, R H; Rogers, B; Jaffe, C
2015-01-01
Describe how the HL7 Clinical Document Architecture (CDA), a foundational standard in US Meaningful Use, contributes to a "big data, incrementally structured" interoperability strategy, whereby data structured incrementally gets large amounts of data flowing faster. We present cases showing how this approach is leveraged for big data analysis. To support the assertion that semi-structured narrative in CDA format can be a useful adjunct in an overall big data analytic approach, we present two case studies. The first assesses an organization's ability to generate clinical quality reports using coded data alone vs. coded data supplemented by CDA narrative. The second leverages CDA to construct a network model for referral management, from which additional observations can be gleaned. The first case shows that coded data supplemented by CDA narrative resulted in significant variances in calculated performance scores. In the second case, we found that the constructed network model enables the identification of differences in patient characteristics among different referral work flows. The CDA approach goes after data indirectly, by focusing first on the flow of narrative, which is then incrementally structured. A quantitative assessment of whether this approach will lead to a greater flow of data and ultimately a greater flow of structured data vs. other approaches is planned as a future exercise. Along with growing adoption of CDA, we are now seeing the big data community explore the standard, particularly given its potential to supply analytic en- gines with volumes of data previously not possible.
Ostensson, Ellinor; Fröberg, Maria; Hjerpe, Anders; Zethraeus, Niklas; Andersson, Sonia
2010-10-01
To assess the cost-effectiveness of using human papillomavirus testing (HPV triage) in the management of women with minor cytological abnormalities in Sweden. An economic analysis based on a clinical trial, complemented with data from published meta-analyses on accuracy of HPV triage. The study takes perspective of the Swedish healthcare system. The Swedish population-based cervical cancer screening program. A decision analytic model was constructed to evaluate cost-effectiveness of HPV triage compared to repeat cytology and immediate colposcopy with biopsy, stratifying by index cytology (ASCUS = atypical squamous cells of undetermined significance, and LSIL = low-grade squamous intraepithelial lesion) and age (23-60 years, <30 years and ≥30 years). Costs, incremental cost, incremental effectiveness and incremental cost per additional high-grade lesion (CIN2+) detected. For women with ASCUS ≥30 years, HPV triage is the least costly alternative, whereas immediate colposcopy with biopsy provides the most effective option at an incremental cost-effectiveness ratio (ICER) of SEK 2,056 per additional case of CIN2+ detected. For LSIL (all age groups) and ASCUS (23-60 years and <30 years), HPV triage is dominated by immediate colposcopy and biopsy. Model results were sensitive to HPV test cost changes. With improved HPV testing techniques at lower costs, HPV triage can become a cost-effective alternative for follow-up of minor cytological abnormalities. Today, immediate colposcopy with biopsy is a cost-effective alternative compared to HPV triage and repeat cytology.
Liana infestation impacts tree growth in a lowland tropical moist forest
NASA Astrophysics Data System (ADS)
van der Heijden, G. M. F.; Phillips, O. L.
2009-10-01
Ecosystem-level estimates of the effect of lianas on tree growth in mature tropical forests are needed to evaluate the functional impact of lianas and their potential to affect the ability of tropical forests to sequester carbon, but these are currently lacking. Using data collected on tree growth rates, local growing conditions and liana competition in five permanent sampling plots in Amazonian Peru, we present the first ecosystem-level estimates of the effect of lianas on above-ground productivity of trees. By first constructing a multi-level linear mixed effect model to predict individual-tree diameter growth model using individual-tree growth conditions, we were able to then estimate stand-level above-ground biomass (AGB) increment in the absence of lianas. We show that lianas, mainly by competing above-ground with trees, reduce tree annual above-ground stand-level biomass increment by ~10%, equivalent to 0.51 Mg dry weight ha-1 yr-1 or 0.25 Mg C ha-1 yr-1. AGB increment of lianas themselves was estimated to be 0.15 Mg dry weight ha-1 yr-1 or 0.07 Mg C ha-1 yr-1, thus only compensating ~29% of the liana-induced reduction in ecosystem AGB increment. Increasing liana pressure on tropical forests will therefore not only tend to reduce their carbon storage capacity, by indirectly promoting tree species with low-density wood, but also their rate of carbon uptake, with potential consequences for the rate of increase in atmospheric carbon dioxide.
A New Wet Deposition Module in SILAM Chemical Transport Model
NASA Astrophysics Data System (ADS)
Kouznetsov, R.; Sofiev, M.
2013-12-01
The System for Integrated modeLling of Atmopsheric coMposition SILAM (http://silam.fmi.fi/) is a CTM model of FMI air-quality research unit. SILAM is used for research, operational and emergency-response assessments and forecasting of the atmospheric composition within the scope of European and Finnish national projects. Characteristic scales of the SILAM applications vary from -mesoscale (grid spacing 1 km) up to the globe with characteristic resolution of 1 degree. Till recently, a simple approach based on scavenging coefficients and their species-dependent scaling was used in SILAM. Due to the lack of information on the vertical structure of precipitation in older meteorological datasets, it was prescribed. The new scheme uses a mechanistic description of the scavenging process and utilizes the vertical profiles of cloud water content. A simple model for dissociation of H2SO3 accounts for saturation of SO2 scavenging. As the vertical profiles of precipitation rates are rarely available from meteorological models, they are reconstructed from the profiles of cloud water and surface precipitation fields. The rain/snow increment in a 3D model grid cell is taken as a fraction of surface precipitation intensity equal to the cell's fraction of total cloud water column. The phase of precipitation (liquid/solid) is a function of air temperature. The fall speed is derived from the size of water drops given by a function of rain/snow intensity. In-cloud scavenging is considered as an equilibrium process: . the concentrations in cloud water are assumed to be in equilibrium with ambient air. The sub-cloud scavenging is driven by the precipitation that comes from above the cell. The scavenging by a single droplet is considered as a two-way equilibration process of in-water and in-air concentrations, controlled by the hydrometeors size, cross-section and a time the droplet falls through a cell, effective solubility and amount of already dissolved pollutant. The solubility for most species is given by their effective Henry factors as functions of temperature. An exception is SO2 since the in-water amount of [S(IV)] is not a linear function of SO2 partial pressure in the air. The effective Henry factor for SO2 is then calculated from a dissociation equation after all other species in a cell are processed and their in-water concentrations are known. The new scheme results in substantially more realistic vertical patterns for scavenging. The consideration of equilibration rather than one-way scavenging allows modelling the vertical redistribution of pollutants by precipitation. The scheme provides a simple and well-grounded means to account for saturation of scavenging for SO2.
Electron correlation contribution to the physisorption of CO on MgF2(110).
Hammerschmidt, Lukas; Müller, Carsten; Paulus, Beate
2012-03-28
We have performed CCSD(T), MP2, and DF-LMP2 calculations of the interaction energy of CO on the MgF(2)(110) surface by applying the method of increments and an embedded cluster model. In addition, we performed periodic HF, B3LYP, and DF-LMP2 calculations and compare them to the cluster results. The incremental CCSD(T) calculations predict an interaction energy of E(int) = -0.37 eV with a C-down orientation of CO above a Mg(2+) ion at the surface with a basis set of VTZ quality. We find that electron correlation constitutes about 50% of the binding energy and a detailed evaluation of the increments shows that the largest contribution to the correlation energy originates from the CO interaction with the closest F ions on the second layer.
A LATIN-based model reduction approach for the simulation of cycling damage
NASA Astrophysics Data System (ADS)
Bhattacharyya, Mainak; Fau, Amelie; Nackenhorst, Udo; Néron, David; Ladevèze, Pierre
2017-11-01
The objective of this article is to introduce a new method including model order reduction for the life prediction of structures subjected to cycling damage. Contrary to classical incremental schemes for damage computation, a non-incremental technique, the LATIN method, is used herein as a solution framework. This approach allows to introduce a PGD model reduction technique which leads to a drastic reduction of the computational cost. The proposed framework is exemplified for structures subjected to cyclic loading, where damage is considered to be isotropic and micro-defect closure effects are taken into account. A difficulty herein for the use of the LATIN method comes from the state laws which can not be transformed into linear relations through an internal variable transformation. A specific treatment of this issue is introduced in this work.
A constitutive law for finite element contact problems with unclassical friction
NASA Technical Reports Server (NTRS)
Plesha, M. E.; Steinetz, B. M.
1986-01-01
Techniques for modeling complex, unclassical contact-friction problems arising in solid and structural mechanics are discussed. A constitutive modeling concept is employed whereby analytic relations between increments of contact surface stress (i.e., traction) and contact surface deformation (i.e., relative displacement) are developed. Because of the incremental form of these relations, they are valid for arbitrary load-deformation histories. The motivation for the development of such a constitutive law is that more realistic friction idealizations can be implemented in finite element analysis software in a consistent, straightforward manner. Of particular interest is modeling of two-body (i.e., unlubricated) metal-metal, ceramic-ceramic, and metal-ceramic contact. Interfaces involving ceramics are of engineering importance and are being considered for advanced turbine engines in which higher temperature materials offer potential for higher engine fuel efficiency.
A New Method for Incremental Testing of Finite State Machines
NASA Technical Reports Server (NTRS)
Pedrosa, Lehilton Lelis Chaves; Moura, Arnaldo Vieira
2010-01-01
The automatic generation of test cases is an important issue for conformance testing of several critical systems. We present a new method for the derivation of test suites when the specification is modeled as a combined Finite State Machine (FSM). A combined FSM is obtained conjoining previously tested submachines with newly added states. This new concept is used to describe a fault model suitable for incremental testing of new systems, or for retesting modified implementations. For this fault model, only the newly added or modified states need to be tested, thereby considerably reducing the size of the test suites. The new method is a generalization of the well-known W-method and the G-method, but is scalable, and so it can be used to test FSMs with an arbitrarily large number of states.
ERIC Educational Resources Information Center
Perry, Conrad; Ziegler, Johannes C.; Zorzi, Marco
2007-01-01
At least 3 different types of computational model have been shown to account for various facets of both normal and impaired single word reading: (a) the connectionist triangle model, (b) the dual-route cascaded model, and (c) the connectionist dual process model. Major strengths and weaknesses of these models are identified. In the spirit of…
NASA Astrophysics Data System (ADS)
Ruiz-Pérez, G.
2015-12-01
Drylands are extensive, covering 30% of the Earth's land surface and 50% of Africa. Projections of the IPCC (Intergovernmental Panel on Climate Change, 2007) indicate that the extent of these regions have high probability to increase with a considerable additional impact on water resources, which should be taken into account by water management plans. In these water-controlled areas, vegetation plays a key role in the water cycle. Ecohydrological models provide a tool to investigate the relationships between vegetation and water resources. However, studies in Africa often face the problem that many ecohydrological models have quite extensive parametrical requirements, while available data are scarce. Therefore, there is a need for assessments using models whose requirements match the data availability. In that context, parsimonious models, together with available remote sensing information, can be valuable tools for ecohydrological studies. For this reason, we have focused on the use of a parsimonious model based on the amount of photosynthetically active radiation absorbed by green vegetation (APAR) and the Light Use Efficiency index (the efficiency by which that radiation is converted to plant biomass increment) in order to compute the gross primary production (GPP).This model has been calibrated using only remote sensing data (particularly, NDVI data from Modis products) in order to explore the potential of satellite information in implementing a simple distributed model. The model has been subsequently validated against stream flow data with the aim to define a tool able to account for landuse characteristics in describing water budget. Results are promising for studies aimed at the description of the consequences of ongoing land use changes on water resources.
NASA Astrophysics Data System (ADS)
Haverd, V.; Smith, B.; Nieradzik, L. P.; Briggs, P. R.
2014-02-01
Poorly constrained rates of biomass turnover are a key limitation of Earth system models (ESM). In light of this, we recently proposed a new approach encoded in a model called Populations-Order-Physiology (POP), for the simulation of woody ecosystem stand dynamics, demography and disturbance-mediated heterogeneity. POP is suitable for continental to global applications and designed for coupling to the terrestrial ecosystem component of any ESM. POP bridges the gap between first generation Dynamic Vegetation Models (DVMs) with simple large-area parameterisations of woody biomass (typically used in current ESMs) and complex second generation DVMs, that explicitly simulate demographic processes and landscape heterogeneity of forests. The key simplification in the POP approach, compared with second-generation DVMs, is to compute physiological processes such as assimilation at grid-scale (with CABLE or a similar land surface model), but to partition the grid-scale biomass increment among age classes defined at sub grid-scale, each subject to its own dynamics. POP was successfully demonstrated along a savanna transect in northern Australia, replicating the effects of strong rainfall and fire disturbance gradients on observed stand productivity and structure. Here, we extend the application of POP to a range of forest types around the globe, employing paired observations of stem biomass and density from forest inventory data to calibrate model parameters governing stand demography and biomass evolution. The calibrated POP model is then coupled to the CABLE land surface model and the combined model (CABLE-POP) is evaluated against leaf-stem allometry observations from forest stands ranging in age from 3 to 200 yr. Results indicate that simulated biomass pools conform well with observed allometry. We conclude that POP represents a preferable alternative to large-area parameterisations of woody biomass turnover, typically used in current ESMs.
BMI and BMI SDS in childhood: annual increments and conditional change.
Brannsether, Bente; Eide, Geir Egil; Roelants, Mathieu; Bjerknes, Robert; Júlíusson, Pétur Benedikt
2017-02-01
Background Early detection of abnormal weight gain in childhood may be important for preventive purposes. It is still debated which annual changes in BMI should warrant attention. Aim To analyse 1-year increments of Body Mass Index (BMI) and standardised BMI (BMI SDS) in childhood and explore conditional change in BMI SDS as an alternative method to evaluate 1-year changes in BMI. Subjects and methods The distributions of 1-year increments of BMI (kg/m 2 ) and BMI SDS are summarised by percentiles. Differences according to sex, age, height, weight, initial BMI and weight status on the BMI and BMI SDS increments were assessed with multiple linear regression. Conditional change in BMI SDS was based on the correlation between annual BMI measurements converted to SDS. Results BMI increments depended significantly on sex, height, weight and initial BMI. Changes in BMI SDS depended significantly only on the initial BMI SDS. The distribution of conditional change in BMI SDS using a two-correlation model was close to normal (mean = 0.11, SD = 1.02, n = 1167), with 3.2% (2.3-4.4%) of the observations below -2 SD and 2.8% (2.0-4.0%) above +2 SD. Conclusion Conditional change in BMI SDS can be used to detect unexpected large changes in BMI SDS. Although this method requires the use of a computer, it may be clinically useful to detect aberrant weight development.
Impact of chemical plant start-up emissions on ambient ozone concentration
NASA Astrophysics Data System (ADS)
Ge, Sijie; Wang, Sujing; Xu, Qiang; Ho, Thomas
2017-09-01
Flare emissions, especially start-up flare emissions, during chemical plant operations generate large amounts of ozone precursors that may cause highly localized and transient ground-level ozone increment. Such an adverse ozone impact could be aggravated by the synergies of multiple plant start-ups in an industrial zone. In this paper, a systematic study on ozone increment superposition due to chemical plant start-up emissions has been performed. It employs dynamic flaring profiles of two olefin plants' start-ups to investigate the superposition of the regional 1-hr ozone increment. It also summaries the superposition trend by manipulating the starting time (00:00-10:00) of plant start-up operations and the plant distance (4-32 km). The study indicates that the ozone increment induced by simultaneous start-up emissions from multiple chemical plants generally does not follow the linear superposition of the ozone increment induced by individual plant start-ups. Meanwhile, the trend of such nonlinear superposition related to the temporal (starting time and operating hours of plant start-ups) and spatial (plant distance) factors is also disclosed. This paper couples dynamic simulations of chemical plant start-up operations with air-quality modeling and statistical methods to examine the regional ozone impact. It could be helpful for technical decision support for cost-effective air-quality and industrial flare emission controls.
An incremental DPMM-based method for trajectory clustering, modeling, and retrieval.
Hu, Weiming; Li, Xi; Tian, Guodong; Maybank, Stephen; Zhang, Zhongfei
2013-05-01
Trajectory analysis is the basis for many applications, such as indexing of motion events in videos, activity recognition, and surveillance. In this paper, the Dirichlet process mixture model (DPMM) is applied to trajectory clustering, modeling, and retrieval. We propose an incremental version of a DPMM-based clustering algorithm and apply it to cluster trajectories. An appropriate number of trajectory clusters is determined automatically. When trajectories belonging to new clusters arrive, the new clusters can be identified online and added to the model without any retraining using the previous data. A time-sensitive Dirichlet process mixture model (tDPMM) is applied to each trajectory cluster for learning the trajectory pattern which represents the time-series characteristics of the trajectories in the cluster. Then, a parameterized index is constructed for each cluster. A novel likelihood estimation algorithm for the tDPMM is proposed, and a trajectory-based video retrieval model is developed. The tDPMM-based probabilistic matching method and the DPMM-based model growing method are combined to make the retrieval model scalable and adaptable. Experimental comparisons with state-of-the-art algorithms demonstrate the effectiveness of our algorithm.
ERIC Educational Resources Information Center
Back, Par-Erik; Lane, Jan-Erik
To analyze organizational development of Swedish universities and colleges, decision theory and implementation theory were examined. Attention was directed to the following models of decision-making: the demographic model, the incremental model, the garbage-can model, and the political model. The focus was on system decision-making, and empirical…
Role of investment heterogeneity in the cooperation on spatial public goods game.
Yuan, Wu-Jie; Xia, Cheng-Yi
2014-01-01
Public cooperation plays a significant role in the survival and maintenance of biological species, to elucidate its origin thus becomes an interesting question from various disciplines. Through long-term development, the public goods game has proven to be a useful tool, where cooperator making contribution can beat again the free-rides. Differentiating from the traditional homogeneous investment, individual trend of making contribution is more likely affected by the investment level of his neighborhood. Based on this fact, we here investigate the impact of heterogeneous investment on public cooperation, where the investment sum is mapped to the proportion of cooperators determined by parameter α. Interestingly, we find, irrespective of interaction networks, that the increment of α (increment of heterogeneous investment) is beneficial for promoting cooperation and even guarantees the complete cooperation dominance under weak replication factor. While this promotion effect can be attributed to the formation of more robust cooperator clusters and shortening END period. Moreover, we find that this simple mechanism can change the potential interaction network, which results in the change of phase diagrams. We hope that our work may shed light on the understanding of the cooperative behavior in other social dilemmas.
Response rate and reinforcement rate in Pavlovian conditioning.
Harris, Justin A; Carpenter, Joanne S
2011-10-01
Four experiments used delay conditioning of magazine approach in rats to investigate the relationship between the rate of responding, R, to a conditioned stimulus (CS) and the rate, r, at which the CS is reinforced with the unconditioned stimulus (US). Rats were concurrently trained with four variable-duration CSs with different rs, either as a result of differences in the mean CS-US interval or in the proportion of CS presentations that ended with the US. In each case, R was systematically related to r, and the relationship was very accurately characterized by a hyperbolic function, R = Ar/(r +c). Accordingly, the reciprocal of these two variables-response interval, I (= 1/R), and CS-US interval, i (= 1/r) - were related by a simple affine (straight line) transformation, I = mi+b. This latter relationship shows that each increment in the time that the rats had to wait for food produced a linear increment in the time they waited between magazine entries. We discuss the close agreement between our findings and the Matching Law (Herrnstein, 1970) and consider their implications for both associative theories (e.g., Rescorla & Wagner, 1972) and nonassociative theories (Gallistel & Gibbon, 2000) of conditioning. (PsycINFO Database Record (c) 2011 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Li, Yi; Abdel-Monem, Mohamed; Gopalakrishnan, Rahul; Berecibar, Maitane; Nanini-Maury, Elise; Omar, Noshin; van den Bossche, Peter; Van Mierlo, Joeri
2018-01-01
This paper proposes an advanced state of health (SoH) estimation method for high energy NMC lithium-ion batteries based on the incremental capacity (IC) analysis. IC curves are used due to their ability of detect and quantify battery degradation mechanism. A simple and robust smoothing method is proposed based on Gaussian filter to reduce the noise on IC curves, the signatures associated with battery ageing can therefore be accurately identified. A linear regression relationship is found between the battery capacity with the positions of features of interest (FOIs) on IC curves. Results show that the developed SoH estimation function from one single battery cell is able to evaluate the SoH of other batteries cycled under different cycling depth with less than 2.5% maximum errors, which proves the robustness of the proposed method on SoH estimation. With this technique, partial charging voltage curves can be used for SoH estimation and the testing time can be therefore largely reduced. This method shows great potential to be applied in reality, as it only requires static charging curves and can be easily implemented in battery management system (BMS).
Kutlak, Roman; van Deemter, Kees; Mellish, Chris
2016-01-01
This article presents a computational model of the production of referring expressions under uncertainty over the hearer's knowledge. Although situations where the hearer's knowledge is uncertain have seldom been addressed in the computational literature, they are common in ordinary communication, for example when a writer addresses an unknown audience, or when a speaker addresses a stranger. We propose a computational model composed of three complimentary heuristics based on, respectively, an estimation of the recipient's knowledge, an estimation of the extent to which a property is unexpected, and the question of what is the optimum number of properties in a given situation. The model was tested in an experiment with human readers, in which it was compared against the Incremental Algorithm and human-produced descriptions. The results suggest that the new model outperforms the Incremental Algorithm in terms of the proportion of correctly identified entities and in terms of the perceived quality of the generated descriptions. PMID:27630592
Using Predictability for Lexical Segmentation.
Çöltekin, Çağrı
2017-09-01
This study investigates a strategy based on predictability of consecutive sub-lexical units in learning to segment a continuous speech stream into lexical units using computational modeling and simulations. Lexical segmentation is one of the early challenges during language acquisition, and it has been studied extensively through psycholinguistic experiments as well as computational methods. However, despite strong empirical evidence, the explicit use of predictability of basic sub-lexical units in models of segmentation is underexplored. This paper presents an incremental computational model of lexical segmentation for exploring the usefulness of predictability for lexical segmentation. We show that the predictability cue is a strong cue for segmentation. Contrary to earlier reports in the literature, the strategy yields state-of-the-art segmentation performance with an incremental computational model that uses only this particular cue in a cognitively plausible setting. The paper also reports an in-depth analysis of the model, investigating the conditions affecting the usefulness of the strategy. Copyright © 2016 Cognitive Science Society, Inc.
Effects of Taxing Sugar-Sweetened Beverages on Caries and Treatment Costs.
Schwendicke, F; Thomson, W M; Broadbent, J M; Stolpe, M
2016-11-01
Caries increment is affected by sugar-sweetened beverage (SSB) consumption. Taxing SSBs could reduce sugar consumption and caries increment. The authors aimed to estimate the impact of a 20% SSB sales tax on caries increment and associated treatment costs (as well as the resulting tax revenue) in the context of Germany. A model-based approach was taken, estimating the effects for the German population aged 14 to 79 y over a 10-y period. Taxation was assumed to affect beverage-associated sugar consumption via empirical demand elasticities. Altered consumption affected caries increments and treatment costs, with cost estimates being calculated under the perspective of the statutory health insurance. National representative consumption and price data were used to estimate tax revenue. Microsimulations were performed to estimate health outcomes, costs, and revenue impact in different age, sex, and income groups. Implementing a 20% SSB sales tax reduced sugar consumption in nearly all male groups but in fewer female groups. The reduction was larger among younger than older individuals and among those with low income. Taxation reduced caries increment and treatment costs especially in younger (rather than older) individuals and those with low income. Over 10 y, mean (SD) net caries increments at the population level were 82.27 (1.15) million and 83.02 (1.08) million teeth at 20% and 0% SSB tax, respectively. These generated treatment costs of 2.64 (0.39) billion and 2.72 (0.35) billion euro, respectively. Additional tax revenue was 37.99 (3.41) billion euro over the 10 y. In conclusion and within the limitations of this study's perspective, database, and underlying assumptions, implementing a 20% sales tax on SSBs is likely to reduce caries increment, especially in young low-income males, thereby also reducing inequalities in the distribution of caries experience. Taxation would also reduce treatment costs. However, these reductions might be limited in the total population.
The Evaluation and Selection of Adequate Causal Models: A Compensatory Education Example.
ERIC Educational Resources Information Center
Tanaka, Jeffrey S.
1982-01-01
Implications of model evaluation (using traditional chi square goodness of fit statistics, incremental fit indices for covariance structure models, and latent variable coefficients of determination) on substantive conclusions are illustrated with an example examining the effects of participation in a compensatory education program on posttreatment…
Incremental Testing of the Community Multiscale Air Quality (CMAQ) Modeling System Version 4.7
This paper describes the scientific and structural updates to the latest release of the Community Multiscale Air Quality (CMAQ) modeling system version 4.7 (v4.7) and points the reader to additional resources for further details. The model updates were evaluated relative to obse...
Scratching the surface of ice: Interfacial phase transitions and their kinetic implications
NASA Astrophysics Data System (ADS)
Limmer, David
The surface structure of ice maintains a high degree of disorder down to surprisingly low temperatures. This is due to a number of underlying interfacial phase transitions that are associated with incremental changes in broken symmetry relative to the bulk crystal. In this talk I summarize recent work attempting to establish the nature and locations of these different phase transitions as well as how they depend on external conditions and nonequilibrium driving. The implications of this surface disorder is discussed in the context of simple kinetic processes that occur at these interfaces. Recent experimental work on the roughening transition is highlighted.
Overview of software development at the parabolic dish test site
NASA Technical Reports Server (NTRS)
Miyazono, C. K.
1985-01-01
The development history of the data acquisition and data analysis software is discussed. The software development occurred between 1978 and 1984 in support of solar energy module testing at the Jet Propulsion Laboratory's Parabolic Dish Test Site, located within Edwards Test Station. The development went through incremental stages, starting with a simple single-user BASIC set of programs, and progressing to the relative complex multi-user FORTRAN system that was used until the termination of the project. Additional software in support of testing is discussed including software in support of a meteorological subsystem and the Test Bed Concentrator Control Console interface. Conclusions and recommendations for further development are discussed.
A computational procedure for multibody systems including flexible beam dynamics
NASA Technical Reports Server (NTRS)
Downer, J. D.; Park, K. C.; Chiou, J. C.
1990-01-01
A computational procedure suitable for the solution of equations of motions for flexible multibody systems has been developed. The flexible beams are modeled using a fully nonlinear theory which accounts for both finite rotations and large deformations. The present formulation incorporates physical measures of conjugate Cauchy stress and covariant strain increments. As a consequence, the beam model can easily be interfaced with real-time strain measurements and feedback control systems. A distinct feature of the present work is the computational preservation of total energy for undamped systems; this is obtained via an objective strain increment/stress update procedure combined with an energy-conserving time integration algorithm which contains an accurate update of angular orientations. The procedure is demonstrated via several example problems.
Shinde, Pragati A; Lokhande, Vaibhav C; Chodankar, Nilesh R; Ji, Taeksoo; Kim, Jin Hyeok; Lokhande, Chandrakant D
2016-12-01
To achieve the highest electrochemical performance for supercapacitor, it is very essential to find out a suitable pair of an active electrode material and an electrolyte. In the present work, a simple approach is employed to enhance the supercapacitor performance of WO3 thin film. The WO3 thin film is prepared by a simple and cost effective chemical bath deposition method and its electrochemical performance is tested in conventional (H2SO4) and redox additive [H2SO4+hydroquinone (HQ)] electrolytes. Two-fold increment in electrochemical performance for WO3 thin film is observed in redox additive aqueous electrolyte compared to conventional electrolyte. WO3 thin film showed maximum specific capacitance of 725Fg(-1), energy density of 25.18Whkg(-1) at current density of 7mAcm(-2) with better cycling stability in redox electrolyte. This strategy provides the versatile way for designing the high performance energy storage devices. Copyright © 2016 Elsevier Inc. All rights reserved.
Effects of practice on the Wechsler Adult Intelligence Scale-IV across 3- and 6-month intervals.
Estevis, Eduardo; Basso, Michael R; Combs, Dennis
2012-01-01
A total of 54 participants (age M = 20.9; education M = 14.9; initial Full Scale IQ M = 111.6) were administered the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) at baseline and again either 3 or 6 months later. Scores on the Full Scale IQ, Verbal Comprehension, Working Memory, Perceptual Reasoning, Processing Speed, and General Ability Indices improved approximately 7, 5, 4, 5, 9, and 6 points, respectively, and increases were similar regardless of whether the re-examination occurred over 3- or 6-month intervals. Reliable change indices (RCI) were computed using the simple difference and bivariate regression methods, providing estimated base rates of change across time. The regression method provided more accurate estimates of reliable change than did the simple difference between baseline and follow-up scores. These findings suggest that prior exposure to the WAIS-IV results in significant score increments. These gains reflect practice effects instead of genuine intellectual changes, which may lead to errors in clinical judgment.
Smith, Jennifer A; Sharma, Monisha; Levin, Carol; Baeten, Jared M; van Rooyen, Heidi; Celum, Connie; Hallett, Timothy B; Barnabas, Ruanne V
2015-04-01
Home HIV counselling and testing (HTC) achieves high coverage of testing and linkage to care compared with existing facility-based approaches, particularly among asymptomatic individuals. In a modelling analysis we aimed to assess the effect on population-level health and cost-effectiveness of a community-based package of home HTC in KwaZulu-Natal, South Africa. We parameterised an individual-based model with data from home HTC and linkage field studies that achieved high coverage (91%) and linkage to antiretroviral therapy (80%) in rural KwaZulu-Natal, South Africa. Costs were derived from a linked microcosting study. The model simulated 10,000 individuals over 10 years and incremental cost-effectiveness ratios were calculated for the intervention relative to the existing status quo of facility-based testing, with costs discounted at 3% annually. The model predicted implementation of home HTC in addition to current practice to decrease HIV-associated morbidity by 10–22% and HIV infections by 9–48% with increasing CD4 cell count thresholds for antiretroviral therapy initiation. Incremental programme costs were US$2·7 million to $4·4 million higher in the intervention scenarios than at baseline, and costs increased with higher CD4 cell count thresholds for antiretroviral therapy initiation; antiretroviral therapy accounted for 48–87% of total costs. Incremental cost-effectiveness ratios per disability-adjusted life-year averted were $1340 at an antiretroviral therapy threshold of CD4 count lower than 200 cells per μL, $1090 at lower than 350 cells per μL, $1150 at lower than 500 cells per μL, and $1360 at universal access to antiretroviral therapy. Community-based HTC with enhanced linkage to care can result in increased HIV testing coverage and treatment uptake, decreasing the population burden of HIV-associated morbidity and mortality. The incremental cost-effectiveness ratios are less than 20% of South Africa's gross domestic product per person, and are therefore classed as very cost effective. Home HTC can be a viable means to achieve UNAIDS' ambitious new targets for HIV treatment coverage. National Institutes of Health, Bill & Melinda Gates Foundation, Wellcome Trust.
Towards a Decision Support System for Space Flight Operations
NASA Technical Reports Server (NTRS)
Meshkat, Leila; Hogle, Charles; Ruszkowski, James
2013-01-01
The Mission Operations Directorate (MOD) at the Johnson Space Center (JSC) has put in place a Model Based Systems Engineering (MBSE) technological framework for the development and execution of the Flight Production Process (FPP). This framework has provided much added value and return on investment to date. This paper describes a vision for a model based Decision Support System (DSS) for the development and execution of the FPP and its design and development process. The envisioned system extends the existing MBSE methodology and technological framework which is currently in use. The MBSE technological framework currently in place enables the systematic collection and integration of data required for building an FPP model for a diverse set of missions. This framework includes the technology, people and processes required for rapid development of architectural artifacts. It is used to build a feasible FPP model for the first flight of spacecraft and for recurrent flights throughout the life of the program. This model greatly enhances our ability to effectively engage with a new customer. It provides a preliminary work breakdown structure, data flow information and a master schedule based on its existing knowledge base. These artifacts are then refined and iterated upon with the customer for the development of a robust end-to-end, high-level integrated master schedule and its associated dependencies. The vision is to enhance this framework to enable its application for uncertainty management, decision support and optimization of the design and execution of the FPP by the program. Furthermore, this enhanced framework will enable the agile response and redesign of the FPP based on observed system behavior. The discrepancy of the anticipated system behavior and the observed behavior may be due to the processing of tasks internally, or due to external factors such as changes in program requirements or conditions associated with other organizations that are outside of MOD. The paper provides a roadmap for the three increments of this vision. These increments include (1) hardware and software system components and interfaces with the NASA ground system, (2) uncertainty management and (3) re-planning and automated execution. Each of these increments provide value independently; but some may also enable building of a subsequent increment.
NASA Astrophysics Data System (ADS)
Timmermans, R.; Denier van der Gon, H.; Segers, A.; Honore, C.; Perrussel, O.; Builtjes, P.; Schaap, M.
2012-04-01
Since a major part of the Earth's population lives in cities, it is of great importance to correctly characterise the air pollution levels over these urban areas. Many studies in the past have already been dedicated to this subject and have determined so-called urban increments: the impact of large cities on the air pollution levels. The impact of large cities on air pollution levels usually is determined with models driven by so-called downscaled emission inventories. In these inventories official country total emissions are gridded using information on for example population density and location of industries and roads. The question is how accurate are the downscaled inventories over cities or large urban areas. Within the EU FP 7 project MEGAPOLI project a new emission inventory has been produced including refined local emission data for two European megacities (Paris, London) and two urban conglomerations (the Po valley, Italy and the Rhine-Ruhr region, Germany) based on a bottom-up approach. The inventory has comparable national totals but remarkable difference at the city scale. Such a bottom up inventory is thought to be more accurate as it contains local knowledge. Within this study we compared modelled nitrogen dioxide (NO2) and particulate matter (PM) concentrations from the LOTOS-EUROS chemistry transport model driven by a conventional downscaled emission inventory (TNO-MACC inventory) with the concentrations from the same model driven by the new MEGAPOLI 'bottom-up' emission inventory focusing on the Paris region. Model predictions for Paris significantly improve using the new Megapoli inventory. Both the emissions as well as the simulated average concentrations of PM over urban sites in Paris are much lower due to the different spatial distribution of the anthropogenic emissions. The difference for the nearby rural stations is small implicating that also the urban increment for PM simulated using the bottom-up emission inventory is much smaller than for the downscaled emission inventory. Urban increments for PM calculated with downscaled emissions, as is common practice, might therefore be overestimated. This finding is likely to apply to other European Megacities as well.
Oosterman, Joukje M; Heringa, Sophie M; Kessels, Roy P C; Biessels, Geert Jan; Koek, Huiberdina L; Maes, Joseph H R; van den Berg, Esther
2017-04-01
Rule induction tests such as the Wisconsin Card Sorting Test require executive control processes, but also the learning and memorization of simple stimulus-response rules. In this study, we examined the contribution of diminished learning and memorization of simple rules to complex rule induction test performance in patients with amnestic mild cognitive impairment (aMCI) or Alzheimer's dementia (AD). Twenty-six aMCI patients, 39 AD patients, and 32 control participants were included. A task was used in which the memory load and the complexity of the rules were independently manipulated. This task consisted of three conditions: a simple two-rule learning condition (Condition 1), a simple four-rule learning condition (inducing an increase in memory load, Condition 2), and a complex biconditional four-rule learning condition-inducing an increase in complexity and, hence, executive control load (Condition 3). Performance of AD patients declined disproportionately when the number of simple rules that had to be memorized increased (from Condition 1 to 2). An additional increment in complexity (from Condition 2 to 3) did not, however, disproportionately affect performance of the patients. Performance of the aMCI patients did not differ from that of the control participants. In the patient group, correlation analysis showed that memory performance correlated with Condition 1 performance, whereas executive task performance correlated with Condition 2 performance. These results indicate that the reduced learning and memorization of underlying task rules explains a significant part of the diminished complex rule induction performance commonly reported in AD, although results from the correlation analysis suggest involvement of executive control functions as well. Taken together, these findings suggest that care is needed when interpreting rule induction task performance in terms of executive function deficits in these patients.
Riu, Marta; Chiarello, Pietro; Terradas, Roser; Sala, Maria; Garcia-Alzorriz, Enric; Castells, Xavier; Grau, Santiago; Cots, Francesc
2016-01-01
To calculate the incremental cost of nosocomial bacteremia caused by the most common organisms, classified by their antimicrobial susceptibility. We selected patients who developed nosocomial bacteremia caused by Staphylococcus aureus, Escherichia coli, Klebsiella pneumoniae, or Pseudomonas aeruginosa. These microorganisms were analyzed because of their high prevalence and they frequently present multidrug resistance. A control group consisted of patients classified within the same all-patient refined-diagnosis related group without bacteremia. Our hospital has an established cost accounting system (full-costing) that uses activity-based criteria to analyze cost distribution. A logistic regression model was fitted to estimate the probability of developing bacteremia for each admission (propensity score) and was used for propensity score matching adjustment. Subsequently, the propensity score was included in an econometric model to adjust the incremental cost of patients who developed bacteremia, as well as differences in this cost, depending on whether the microorganism was multidrug-resistant or multidrug-sensitive. A total of 571 admissions with bacteremia matched the inclusion criteria and 82,022 were included in the control group. The mean cost was € 25,891 for admissions with bacteremia and € 6,750 for those without bacteremia. The mean incremental cost was estimated at € 15,151 (CI, € 11,570 to € 18,733). Multidrug-resistant P. aeruginosa bacteremia had the highest mean incremental cost, € 44,709 (CI, € 34,559 to € 54,859). Antimicrobial-susceptible E. coli nosocomial bacteremia had the lowest mean incremental cost, € 10,481 (CI, € 8,752 to € 12,210). Despite their lower cost, episodes of antimicrobial-susceptible E. coli nosocomial bacteremia had a major impact due to their high frequency. Adjustment of hospital cost according to the organism causing bacteremia and antibiotic sensitivity could improve prevention strategies and allow their prioritization according to their overall impact and costs. Infection reduction is a strategy to reduce resistance.
Riu, Marta; Chiarello, Pietro; Terradas, Roser; Sala, Maria; Garcia-Alzorriz, Enric; Castells, Xavier; Grau, Santiago; Cots, Francesc
2016-01-01
Aim To calculate the incremental cost of nosocomial bacteremia caused by the most common organisms, classified by their antimicrobial susceptibility. Methods We selected patients who developed nosocomial bacteremia caused by Staphylococcus aureus, Escherichia coli, Klebsiella pneumoniae, or Pseudomonas aeruginosa. These microorganisms were analyzed because of their high prevalence and they frequently present multidrug resistance. A control group consisted of patients classified within the same all-patient refined-diagnosis related group without bacteremia. Our hospital has an established cost accounting system (full-costing) that uses activity-based criteria to analyze cost distribution. A logistic regression model was fitted to estimate the probability of developing bacteremia for each admission (propensity score) and was used for propensity score matching adjustment. Subsequently, the propensity score was included in an econometric model to adjust the incremental cost of patients who developed bacteremia, as well as differences in this cost, depending on whether the microorganism was multidrug-resistant or multidrug-sensitive. Results A total of 571 admissions with bacteremia matched the inclusion criteria and 82,022 were included in the control group. The mean cost was € 25,891 for admissions with bacteremia and € 6,750 for those without bacteremia. The mean incremental cost was estimated at € 15,151 (CI, € 11,570 to € 18,733). Multidrug-resistant P. aeruginosa bacteremia had the highest mean incremental cost, € 44,709 (CI, € 34,559 to € 54,859). Antimicrobial-susceptible E. coli nosocomial bacteremia had the lowest mean incremental cost, € 10,481 (CI, € 8,752 to € 12,210). Despite their lower cost, episodes of antimicrobial-susceptible E. coli nosocomial bacteremia had a major impact due to their high frequency. Conclusions Adjustment of hospital cost according to the organism causing bacteremia and antibiotic sensitivity could improve prevention strategies and allow their prioritization according to their overall impact and costs. Infection reduction is a strategy to reduce resistance. PMID:27055117
Johnson, T S; Andriacchi, T P; Erdman, A G
2004-01-01
Various uses of the screw or helical axis have previously been reported in the literature in an attempt to quantify the complex displacements and coupled rotations of in vivo human knee kinematics. Multiple methods have been used by previous authors to calculate the axis parameters, and it has been theorized that the mathematical stability and accuracy of the finite helical axis (FHA) is highly dependent on experimental variability and rotation increment spacing between axis calculations. Previous research has not addressed the sensitivity of the FHA for true in vivo data collection, as required for gait laboratory analysis. This research presents a controlled series of experiments simulating continuous data collection as utilized in gait analysis to investigate the sensitivity of the three-dimensional finite screw axis parameters of rotation, displacement, orientation and location with regard to time step increment spacing, utilizing two different methods for spatial location. Six-degree-of-freedom motion parameters are measured for an idealized rigid body knee model that is constrained to a planar motion profile for the purposes of error analysis. The kinematic data are collected using a multicamera optoelectronic system combined with an error minimization algorithm known as the point cluster method. Rotation about the screw axis is seen to be repeatable, accurate and time step increment insensitive. Displacement along the axis is highly dependent on time step increment sizing, with smaller rotation angles between calculations producing more accuracy. Orientation of the axis in space is accurate with only a slight filtering effect noticed during motion reversal. Locating the screw axis by a projected point onto the screw axis from the mid-point of the finite displacement is found to be less sensitive to motion reversal than finding the intersection of the axis with a reference plane. A filtering effect of the spatial location parameters was noted for larger time step increments during periods of little or no rotation.
Are financial incentives cost-effective to support smoking cessation during pregnancy?
Boyd, Kathleen A; Briggs, Andrew H; Bauld, Linda; Sinclair, Lesley; Tappin, David
2016-02-01
To investigate the cost-effectiveness of up to £400 worth of financial incentives for smoking cessation in pregnancy as an adjunct to routine health care. Cost-effectiveness analysis based on a Phase II randomized controlled trial (RCT) and a cost-utility analysis using a life-time Markov model. The RCT was undertaken in Glasgow, Scotland. The economic analysis was undertaken from the UK National Health Service (NHS) perspective. A total of 612 pregnant women randomized to receive usual cessation support plus or minus financial incentives of up to £400 vouchers (US $609), contingent upon smoking cessation. Comparison of usual support and incentive interventions in terms of cotinine-validated quitters, quality-adjusted life years (QALYs) and direct costs to the NHS. The incremental cost per quitter at 34-38 weeks pregnant was £1127 ($1716).This is similar to the standard look-up value derived from Stapleton & West's published ICER tables, £1390 per quitter, by looking up the Cessation in Pregnancy Incentives Trial (CIPT) incremental cost (£157) and incremental 6-month quit outcome (0.14). The life-time model resulted in an incremental cost of £17 [95% confidence interval (CI) = -£93, £107] and a gain of 0.04 QALYs (95% CI = -0.058, 0.145), giving an ICER of £482/QALY ($734/QALY). Probabilistic sensitivity analysis indicates uncertainty in these results, particularly regarding relapse after birth. The expected value of perfect information was £30 million (at a willingness to pay of £30 000/QALY), so given current uncertainty, additional research is potentially worthwhile. Financial incentives for smoking cessation in pregnancy are highly cost-effective, with an incremental cost per quality-adjusted life years of £482, which is well below recommended decision thresholds. © 2015 Society for the Study of Addiction.
NASA Astrophysics Data System (ADS)
Fang, F. J.
2017-12-01
Reconciling observations at fundamentally different scales is central in understanding the global carbon cycle. This study investigates a model-based melding of forest inventory data, remote-sensing data and micrometeorological-station data ("flux towers" estimating forest heat, CO2 and H2O fluxes). The individual tree-based model FORCCHN was used to evaluate the tree DBH increment and forest carbon fluxes. These are the first simultaneous simulations of the forest carbon budgets from flux towers and individual-tree growth estimates of forest carbon budgets using the continuous forest inventory data — under circumstances in which both predictions can be tested. Along with the global implications of such findings, this also improves the capacity for forest sustainable management and the comprehensive understanding of forest ecosystems. In forest ecology, diameter at breast height (DBH) of a tree significantly determines an individual tree's cross-sectional sapwood area, its biomass and carbon storage. Evaluation the annual DBH increment (ΔDBH) of an individual tree is central to understanding tree growth and forest ecology. Ecosystem Carbon flux is a consequence of key ecosystem processes in the forest-ecosystem carbon cycle, Gross and Net Primary Production (GPP and NPP, respectively) and Net Ecosystem Respiration (NEP). All of these closely relate with tree DBH changes and tree death. Despite advances in evaluating forest carbon fluxes with flux towers and forest inventories for individual tree ΔDBH, few current ecological models can simultaneously quantify and predict the tree ΔDBH and forest carbon flux.
Iron overload induces hypogonadism in male mice via extrahypothalamic mechanisms.
Macchi, Chiara; Steffani, Liliana; Oleari, Roberto; Lettieri, Antonella; Valenti, Luca; Dongiovanni, Paola; Romero-Ruiz, Antonio; Tena-Sempere, Manuel; Cariboni, Anna; Magni, Paolo; Ruscica, Massimiliano
2017-10-15
Iron overload leads to multiple organ damage including endocrine organ dysfunctions. Hypogonadism is the most common non-diabetic endocrinopathy in primary and secondary iron overload syndromes. To explore the molecular determinants of iron overload-induced hypogonadism with specific focus on hypothalamic derangements. A dysmetabolic male murine model fed iron-enriched diet (IED) and cell-based models of gonadotropin-releasing hormone (GnRH) neurons were used. Mice fed IED showed severe hypogonadism with a significant reduction of serum levels of testosterone (-83%) and of luteinizing hormone (-86%), as well as reduced body weight gain, body fat and plasma leptin. IED mice had a significant increment in iron concentration in testes and in the pituitary. Even if iron challenge of in vitro neuronal models (GN-11 and GT1-7 GnRH cells) resulted in 10- and 5-fold iron content increments, respectively, no iron content changes were found in vivo in hypothalamus of IED mice. Conversely, mice placed on IED showed a significant increment in hypothalamic GnRH gene expression (+34%) and in the intensity of GnRH-neuron innervation of the median eminence (+1.5-fold); similar changes were found in the murine model HFE -/- , resembling human hemochromatosis. IED-fed adult male mice show severe impairment of hypothalamus-pituitary-gonadal axis without a relevant contribution of the hypothalamic compartment, which thus appears sufficiently protected from systemic iron overload. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Silaev, Mihail; Winyard, Thomas; Babaev, Egor
2018-05-01
The London model describes strongly type-2 superconductors as massive vector field theories, where the magnetic field decays exponentially at the length scale of the London penetration length. This also holds for isotropic multiband extensions, where the presence of multiple bands merely renormalizes the London penetration length. We show that, by contrast, the magnetic properties of anisotropic multiband London models are not this simple, and the anisotropy leads to the interband phase differences becoming coupled to the magnetic field. This results in the magnetic field in such systems having N +1 penetration lengths, where N is the number of field components or bands. That is, in a given direction, the magnetic field decay is described by N +1 modes with different amplitudes and different decay length scales. For certain anisotropies we obtain magnetic modes with complex masses. That means that magnetic field decay is not described by a monotonic exponential increment set by a real penetration length but instead is oscillating. Some of the penetration lengths are shown to diverge away from the superconducting phase transition when the mass of the phase-difference mode vanishes. Finally the anisotropy-driven hybridization of the London mode with the Leggett modes can provide an effectively nonlocal magnetic response in the nominally local London model. Focusing on the two-component model, we discuss the magnetic field inversion that results from the effective nonlocality, both near the surface of the superconductor and around vortices. In the regime where the magnetic field decay becomes nonmonotonic, the multiband London superconductor is shown to form weakly-bound states of vortices.
NASA Astrophysics Data System (ADS)
Dewaele, Hélène; Munier, Simon; Albergel, Clément; Planque, Carole; Laanaia, Nabil; Carrer, Dominique; Calvet, Jean-Christophe
2017-09-01
Soil maximum available water content (MaxAWC) is a key parameter in land surface models (LSMs). However, being difficult to measure, this parameter is usually uncertain. This study assesses the feasibility of using a 15-year (1999-2013) time series of satellite-derived low-resolution observations of leaf area index (LAI) to estimate MaxAWC for rainfed croplands over France. LAI interannual variability is simulated using the CO2-responsive version of the Interactions between Soil, Biosphere and Atmosphere (ISBA) LSM for various values of MaxAWC. Optimal value is then selected by using (1) a simple inverse modelling technique, comparing simulated and observed LAI and (2) a more complex method consisting in integrating observed LAI in ISBA through a land data assimilation system (LDAS) and minimising LAI analysis increments. The evaluation of the MaxAWC estimates from both methods is done using simulated annual maximum above-ground biomass (Bag) and straw cereal grain yield (GY) values from the Agreste French agricultural statistics portal, for 45 administrative units presenting a high proportion of straw cereals. Significant correlations (p value < 0.01) between Bag and GY are found for up to 36 and 53 % of the administrative units for the inverse modelling and LDAS tuning methods, respectively. It is found that the LDAS tuning experiment gives more realistic values of MaxAWC and maximum Bag than the inverse modelling experiment. Using undisaggregated LAI observations leads to an underestimation of MaxAWC and maximum Bag in both experiments. Median annual maximum values of disaggregated LAI observations are found to correlate very well with MaxAWC.
The Dynamics of Perceptual Learning: An Incremental Reweighting Model
ERIC Educational Resources Information Center
Petrov, Alexander A.; Dosher, Barbara Anne; Lu, Zhong-Lin
2005-01-01
The mechanisms of perceptual learning are analyzed theoretically, probed in an orientation-discrimination experiment involving a novel nonstationary context manipulation, and instantiated in a detailed computational model. Two hypotheses are examined: modification of early cortical representations versus task-specific selective reweighting.…
A meteorological distribution system for high-resolution terrestrial modeling (MicroMet)
Glen E. Liston; Kelly Elder
2006-01-01
An intermediate-complexity, quasi-physically based, meteorological model (MicroMet) has been developed to produce high-resolution (e.g., 30-m to 1-km horizontal grid increment) atmospheric forcings required to run spatially distributed terrestrial models over a wide variety of landscapes. The following eight variables, required to run most terrestrial models, are...
USDA-ARS?s Scientific Manuscript database
Materials and Methods The simulation exercise and model improvement were implemented in phase-wise. In the first modelling activities, the model sensitivities were evaluated to given CO2 concentrations varying from 360 to 720 'mol mol-1 at an interval of 90 'mol mol-1 and air temperature increments...
Mutual-Information-Based Incremental Relaying Communications for Wireless Biomedical Implant Systems
Liao, Yangzhe; Cai, Qing; Ai, Qingsong; Liu, Quan
2018-01-01
Network lifetime maximization of wireless biomedical implant systems is one of the major research challenges of wireless body area networks (WBANs). In this paper, a mutual information (MI)-based incremental relaying communication protocol is presented where several on-body relay nodes and one coordinator are attached to the clothes of a patient. Firstly, a comprehensive analysis of a system model is investigated in terms of channel path loss, energy consumption, and the outage probability from the network perspective. Secondly, only when the MI value becomes smaller than the predetermined threshold is data transmission allowed. The communication path selection can be either from the implanted sensor to the on-body relay then forwards to the coordinator or from the implanted sensor to the coordinator directly, depending on the communication distance. Moreover, mathematical models of quality of service (QoS) metrics are derived along with the related subjective functions. The results show that the MI-based incremental relaying technique achieves better performance in comparison to our previous proposed protocol techniques regarding several selected performance metrics. The outcome of this paper can be applied to intra-body continuous physiological signal monitoring, artificial biofeedback-oriented WBANs, and telemedicine system design. PMID:29419784
Liao, Yangzhe; Leeson, Mark S; Cai, Qing; Ai, Qingsong; Liu, Quan
2018-02-08
Network lifetime maximization of wireless biomedical implant systems is one of the major research challenges of wireless body area networks (WBANs). In this paper, a mutual information (MI)-based incremental relaying communication protocol is presented where several on-body relay nodes and one coordinator are attached to the clothes of a patient. Firstly, a comprehensive analysis of a system model is investigated in terms of channel path loss, energy consumption, and the outage probability from the network perspective. Secondly, only when the MI value becomes smaller than the predetermined threshold is data transmission allowed. The communication path selection can be either from the implanted sensor to the on-body relay then forwards to the coordinator or from the implanted sensor to the coordinator directly, depending on the communication distance. Moreover, mathematical models of quality of service (QoS) metrics are derived along with the related subjective functions. The results show that the MI-based incremental relaying technique achieves better performance in comparison to our previous proposed protocol techniques regarding several selected performance metrics. The outcome of this paper can be applied to intra-body continuous physiological signal monitoring, artificial biofeedback-oriented WBANs, and telemedicine system design.
Yeager, David S; Lee, Hae Yeon; Jamieson, Jeremy P
2016-08-01
This research integrated implicit theories of personality and the biopsychosocial model of challenge and threat, hypothesizing that adolescents would be more likely to conclude that they can meet the demands of an evaluative social situation when they were taught that people have the potential to change their socially relevant traits. In Study 1 (N = 60), high school students were assigned to an incremental-theory-of-personality or a control condition and then given a social-stress task. Relative to control participants, incremental-theory participants exhibited improved stress appraisals, more adaptive neuroendocrine and cardiovascular responses, and better performance outcomes. In Study 2 (N = 205), we used a daily-diary intervention to test high school students' stress reactivity outside the laboratory. Threat appraisals (Days 5-9 after intervention) and neuroendocrine responses (Days 8 and 9 after intervention only) were unrelated to the intensity of daily stressors when adolescents received the incremental-theory intervention. Students who received the intervention also had better grades over freshman year than those who did not. These findings offer new avenues for improving theories of adolescent stress and coping. © The Author(s) 2016.
Influence of tropospheric ozone control on exposure to ultraviolet radiation at the surface.
Madronich, Sasha; Wagner, Mark; Groth, Philip
2011-08-15
Improving air quality by reducing ambient ozone (O(3)) will likely lower O(3) concentrations throughout the troposphere and increase the transmission of solar ultraviolet (UV) radiation to the surface. The changes in surface UV radiation between two control scenarios (nominally 84 and 70 ppb O(3) for summer 2020) in the Eastern two-thirds of the contiguous U.S. are estimated, using tropospheric O(3) profiles calculated with a chemistry-transport model (Community Multi-Scale Air Quality, CMAQ) as inputs to a detailed model of the transfer of solar radiation through the atmosphere (tropospheric ultraviolet-visible, TUV) for clear skies, weighed for the wavelengths known to induce sunburn and skin cancer. Because the incremental emission controls differ according to region, strong spatial variability in O(3) reductions and in corresponding UV radiation increments is seen. The geographically averaged UV increase is 0.11 ± 0.03%, whereas the population-weighted increase is larger, 0.19 ± 0.06%, because O(3) reductions are greater in more densely populated regions. These relative increments in exposure are non-negligible given the already high incidence of UV-related health effects, but are lower by an order of magnitude or more than previous estimates.
NASA Astrophysics Data System (ADS)
Wang, Yongbo; Sheng, Yehua; Lu, Guonian; Tian, Peng; Zhang, Kai
2008-04-01
Surface reconstruction is an important task in the field of 3d-GIS, computer aided design and computer graphics (CAD & CG), virtual simulation and so on. Based on available incremental surface reconstruction methods, a feature-constrained surface reconstruction approach for point cloud is presented. Firstly features are extracted from point cloud under the rules of curvature extremes and minimum spanning tree. By projecting local sample points to the fitted tangent planes and using extracted features to guide and constrain the process of local triangulation and surface propagation, topological relationship among sample points can be achieved. For the constructed models, a process named consistent normal adjustment and regularization is adopted to adjust normal of each face so that the correct surface model is achieved. Experiments show that the presented approach inherits the convenient implementation and high efficiency of traditional incremental surface reconstruction method, meanwhile, it avoids improper propagation of normal across sharp edges, which means the applicability of incremental surface reconstruction is greatly improved. Above all, appropriate k-neighborhood can help to recognize un-sufficient sampled areas and boundary parts, the presented approach can be used to reconstruct both open and close surfaces without additional interference.
On nonstationarity and antipersistency in global temperature series
NASA Astrophysics Data System (ADS)
KäRner, O.
2002-10-01
Statistical analysis is carried out for satellite-based global daily tropospheric and stratospheric temperature anomaly and solar irradiance data sets. Behavior of the series appears to be nonstationary with stationary daily increments. Estimating long-range dependence between the increments reveals a remarkable difference between the two temperature series. Global average tropospheric temperature anomaly behaves similarly to the solar irradiance anomaly. Their daily increments show antipersistency for scales longer than 2 months. The property points at a cumulative negative feedback in the Earth climate system governing the tropospheric variability during the last 22 years. The result emphasizes a dominating role of the solar irradiance variability in variations of the tropospheric temperature and gives no support to the theory of anthropogenic climate change. The global average stratospheric temperature anomaly proceeds like a 1-dim random walk at least up to 11 years, allowing good presentation by means of the autoregressive integrated moving average (ARIMA) models for monthly series.
Systems Engineering and Integration (SE and I)
NASA Technical Reports Server (NTRS)
Chevers, ED; Haley, Sam
1990-01-01
The issue of technology advancement and future space transportation vehicles is addressed. The challenge is to develop systems which can be evolved and improved in small incremental steps where each increment reduces present cost, improves, reliability, or does neither but sets the stage for a second incremental upgrade that does. Future requirements are interface standards for commercial off the shelf products to aid in the development of integrated facilities; enhanced automated code generation system slightly coupled to specification and design documentation; modeling tools that support data flow analysis; and shared project data bases consisting of technical characteristics cast information, measurement parameters, and reusable software programs. Topics addressed include: advanced avionics development strategy; risk analysis and management; tool quality management; low cost avionics; cost estimation and benefits; computer aided software engineering; computer systems and software safety; system testability; and advanced avionics laboratories - and rapid prototyping. This presentation is represented by viewgraphs only.
El-Alayli, Amani
2006-12-01
Previous research has shown that matching person variables with achievement contexts can produce the best motivational outcomes. The current study examines whether this is also true when matching entity and incremental beliefs with the appropriate motivational climate. Participants were led to believe that a personal attribute was fixed (entity belief) or malleable (incremental belief). After thinking that they failed a test that assessed the attribute, participants performed a second (related) task in a context that facilitated the pursuit of either performance or learning goals. Participants were expected to exhibit greater effort on the second task in the congruent conditions (entity belief plus performance goal climate and incremental belief plus learning goal climate) than in the incongruent conditions. These results were obtained, but only for participants who either valued competence on the attribute or had high achievement motivation. Results are discussed in terms of developing strategies for optimizing motivation in achievement settings.
Nelson, Jason M; Canivez, Gary L; Watkins, Marley W
2013-06-01
Structural and incremental validity of the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV; Wechsler, 2008a) was examined with a sample of 300 individuals referred for evaluation at a university-based clinic. Confirmatory factor analysis indicated that the WAIS-IV structure was best represented by 4 first-order factors as well as a general intelligence factor in a direct hierarchical model. The general intelligence factor accounted for the most common and total variance among the subtests. Incremental validity analyses indicated that the Full Scale IQ (FSIQ) generally accounted for medium to large portions of academic achievement variance. For all measures of academic achievement, the first-order factors combined accounted for significant achievement variance beyond that accounted for by the FSIQ, but individual factor index scores contributed trivial amounts of achievement variance. Implications for interpreting WAIS-IV results are discussed. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
Willan, Andrew R; Eckermann, Simon
2012-10-01
Previous applications of value of information methods for determining optimal sample size in randomized clinical trials have assumed no between-study variation in mean incremental net benefit. By adopting a hierarchical model, we provide a solution for determining optimal sample size with this assumption relaxed. The solution is illustrated with two examples from the literature. Expected net gain increases with increasing between-study variation, reflecting the increased uncertainty in incremental net benefit and reduced extent to which data are borrowed from previous evidence. Hence, a trial can become optimal where current evidence is sufficient assuming no between-study variation. However, despite the expected net gain increasing, the optimal sample size in the illustrated examples is relatively insensitive to the amount of between-study variation. Further percentage losses in expected net gain were small even when choosing sample sizes that reflected widely different between-study variation. Copyright © 2011 John Wiley & Sons, Ltd.