Sample records for incremental reaction modeling

  1. On the search for an appropriate metric for reaction time to suprathreshold increments and decrements.

    PubMed

    Vassilev, Angel; Murzac, Adrian; Zlatkova, Margarita B; Anderson, Roger S

    2009-03-01

    Weber contrast, DeltaL/L, is a widely used contrast metric for aperiodic stimuli. Zele, Cao & Pokorny [Zele, A. J., Cao, D., & Pokorny, J. (2007). Threshold units: A correct metric for reaction time? Vision Research, 47, 608-611] found that neither Weber contrast nor its transform to detection-threshold units equates human reaction times in response to luminance increments and decrements under selective rod stimulation. Here we show that their rod reaction times are equated when plotted against the spatial luminance ratio between the stimulus and its background (L(max)/L(min), where L(max) and L(min) are the larger and smaller of the stimulus and background luminances). Similarly, reaction times to parafoveal S-cone selective increments and decrements from our previous studies [Murzac, A. (2004). A comparative study of the temporal characteristics of processing of S-cone incremental and decremental signals. PhD thesis, New Bulgarian University, Sofia; Murzac, A., & Vassilev, A. (2004). Reaction time to S-cone increments and decrements. In: 7th European conference on visual perception, Budapest, August 22-26. Perception, 33, 180 (Abstract)] are better described by the spatial luminance ratio than by Weber contrast. We assume that the type of stimulus detection, by temporal (successive) luminance discrimination, by spatial (simultaneous) luminance discrimination, or by both [Sperling, G., & Sondhi, M. M. (1968). Model for visual luminance discrimination and flicker detection. Journal of the Optical Society of America, 58, 1133-1145], determines the appropriateness of one or the other contrast metric for reaction time.
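
    To see why the two metrics can disagree, here is a minimal Python sketch (the luminance values are hypothetical, not from the study): for an increment and a decrement of equal absolute Weber contrast, the spatial luminance ratio differs, so the two metrics order stimuli differently.

```python
def weber_contrast(l_stim, l_bg):
    # Weber contrast DeltaL/L for an aperiodic stimulus on a background
    return (l_stim - l_bg) / l_bg

def spatial_luminance_ratio(l_stim, l_bg):
    # L(max)/L(min): larger over smaller of stimulus and background luminance
    hi, lo = max(l_stim, l_bg), min(l_stim, l_bg)
    return hi / lo

bg = 100.0                 # background luminance, cd/m^2 (hypothetical)
inc, dec = 150.0, 50.0     # increment and decrement of equal |DeltaL|

print(weber_contrast(inc, bg), weber_contrast(dec, bg))                    # 0.5 -0.5
print(spatial_luminance_ratio(inc, bg), spatial_luminance_ratio(dec, bg))  # 1.5 2.0
```

    Matched |Weber contrast| thus does not imply a matched luminance ratio, consistent with the abstract's finding that the ratio, not the contrast, equates increment and decrement reaction times.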

  2. Functional data analysis on ground reaction force of military load carriage increment

    NASA Astrophysics Data System (ADS)

    Din, Wan Rozita Wan; Rambely, Azmin Sham

    2014-06-01

    Analysis of ground reaction force (GRF) on military load carriage was done through the functional data analysis (FDA) statistical technique. The main objective of the research was to investigate the effect of 10% load increments and to find the maximum suitable load for the Malaysian military. Ten soldiers (age 31 ± 6.2 years, weight 71.6 ± 10.4 kg, height 166.3 ± 5.9 cm) carrying military loads ranging from 0% body weight (BW) up to 40% BW participated in an experiment to gather GRF and kinematic data using a Vicon Motion Analysis System, Kistler force plates, and thirty-nine body markers. The analysis was conducted in the sagittal, medial-lateral, and anterior-posterior planes. The results show that a 10% BW load increment has an effect at heel strike and toe-off in all three planes analyzed, with P-values less than 0.001 at the 0.05 significance level. FDA proves to be one of the best statistical techniques for analyzing functional data: it can handle filtering, smoothing, and curve alignment according to curve features and points of interest.

  3. [Individual tree diameter increment model for natural Betula platyphylla forests based on meteorological factors].

    PubMed

    Zhang, Hai Ping; Li, Feng Ri; Dong, Li Hu; Liu, Qiang

    2017-06-18

    Based on 212 re-measured permanent plots in natural Betula platyphylla forests in the Daxing'an Mountains and Xiaoxing'an Mountains and data from 30 meteorological stations, an individual tree growth model based on meteorological factors was constructed. The differences in stand and meteorological factors between the Daxing'an Mountains and Xiaoxing'an Mountains were analyzed, and a diameter increment model including regional effects was developed by a dummy-variable approach. The results showed that the minimum temperature (T_gmin) and mean precipitation (P_gm) in the growing season were the main meteorological factors affecting diameter increment in the two study areas. T_gmin and P_gm were positively correlated with diameter increment, but the strength of the influence of T_gmin differed markedly between the two research areas. The adjusted coefficient of determination (Ra2) of the diameter increment model with meteorological factors was 0.56, an 11% increase compared to the model without meteorological factors. It was concluded that meteorological factors could well explain the diameter increment of B. platyphylla. Ra2 of the model with regional effects was 0.59, an 18% increase compared to the model without regional effects, and the regional effects effectively solved the problem of incompatible parameters between the two research areas. The validation results showed that the individual tree diameter growth model with regional effects had the best prediction accuracy in estimating the diameter increment of B. platyphylla: the mean error, mean absolute error, mean error percent, and mean prediction error percent were 0.0086, 0.4476, 5.8%, and 20.0%, respectively. Overall, the dummy-variable model of individual tree diameter increment based on meteorological factors could well describe the diameter increment process of natural B. platyphylla in the Daxing'an Mountains and Xiaoxing'an Mountains.
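
    The dummy-variable device used in the abstract can be sketched in a few lines: region enters as a 0/1 indicator interacting with a climate predictor, so the climate slope may differ between regions. All variable names and numbers below are illustrative synthetic data, not the fitted model.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
t_min = rng.normal(10.0, 2.0, n)      # growing-season minimum temperature
p_mean = rng.normal(300.0, 40.0, n)   # growing-season mean precipitation
region = rng.integers(0, 2, n)        # 0 = region A, 1 = region B (dummy variable)

# Simulated increments: the temperature effect differs by region (interaction term)
incr = 0.5 + 0.04 * t_min + 0.002 * p_mean + 0.03 * region * t_min \
       + rng.normal(0.0, 0.01, n)

# Design matrix: intercept, T, P, and region-by-T interaction
X = np.column_stack([np.ones(n), t_min, p_mean, region * t_min])
beta, *_ = np.linalg.lstsq(X, incr, rcond=None)
print(np.round(beta, 3))   # recovers roughly [0.5, 0.04, 0.002, 0.03]
```

    The fitted interaction coefficient is what lets a single model carry region-specific climate sensitivity, which is the role the dummy variable plays in the abstract.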

  4. Modeling the temporal periodicity of growth increments based on harmonic functions

    PubMed Central

    Morales-Bojórquez, Enrique; González-Peláez, Sergio Scarry; Bautista-Romero, J. Jesús; Lluch-Cota, Daniel Bernardo

    2018-01-01

    Age estimation methods based on hard structures require a process of validation to confirm the periodical pattern of growth marks. Among such processes, one of the most used is the marginal increment ratio (MIR), which was stated to follow a sinusoidal cycle in a population. Despite its utility, in most cases, its implementation has lacked robust statistical analysis. Accordingly, we propose a modeling approach for the temporal periodicity of growth increments based on single and second order harmonic functions. For illustrative purposes, the MIR periodicities for two geoduck species (Panopea generosa and Panopea globosa) were modeled to identify the periodical pattern of growth increments in the shell. This model identified an annual periodicity for both species but described different temporal patterns. The proposed procedure can be broadly used to objectively define the timing of the peak, the degree of symmetry, and therefore, the synchrony of band deposition of different species on the basis of MIR data. PMID:29694381
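
    The single- and second-order harmonic regression proposed by the authors can be sketched with an ordinary least-squares fit of sine-cosine pairs. The MIR data below are synthetic and illustrative; the real model selection and geoduck data are in the paper.

```python
import numpy as np

def harmonic_design(months, order=2, period=12.0):
    # Columns: intercept plus one cos/sin pair per harmonic k = 1..order
    months = np.asarray(months, float)
    cols = [np.ones(len(months))]
    for k in range(1, order + 1):
        w = 2.0 * np.pi * k * months / period
        cols += [np.cos(w), np.sin(w)]
    return np.column_stack(cols)

def fit_mir(months, mir, order=2):
    X = harmonic_design(months, order)
    coef, *_ = np.linalg.lstsq(X, np.asarray(mir, float), rcond=None)
    return coef

# Synthetic annual MIR cycle peaking in month 3 (illustrative, not real data)
months = np.arange(60) % 12
mir = 0.5 + 0.2 * np.cos(2.0 * np.pi * (months - 3) / 12.0)

coef = fit_mir(months, mir, order=1)
amplitude = np.hypot(coef[1], coef[2])     # swing of the cycle about its mean
phase = np.arctan2(coef[2], coef[1])       # timing of the peak, in radians
print(round(float(amplitude), 3))          # 0.2
```

    The amplitude and phase recovered from the cos/sin coefficients are exactly the quantities the abstract mentions: the timing of the peak and, with a second harmonic added, the degree of asymmetry of band deposition.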

  5. Two models of minimalist, incremental syntactic analysis.

    PubMed

    Stabler, Edward P

    2013-07-01

    Minimalist grammars (MGs) and multiple context-free grammars (MCFGs) are weakly equivalent in the sense that they define the same languages, a large mildly context-sensitive class that properly includes context-free languages. But in addition, for each MG, there is an MCFG which is strongly equivalent in the sense that it defines the same language with isomorphic derivations. However, the structure-building rules of MGs but not MCFGs are defined in a way that generalizes across categories. Consequently, MGs can be exponentially more succinct than their MCFG equivalents, and this difference shows in parsing models too. An incremental, top-down beam parser for MGs is defined here, sound and complete for all MGs, and hence also capable of parsing all MCFG languages. But since the parser represents its grammar transparently, the relative succinctness of MGs is again evident. Although the determinants of MG structure are narrowly and discretely defined, probabilistic influences from a much broader domain can influence even the earliest analytic steps, allowing frequency and context effects to come early and from almost anywhere, as expected in incremental models. Copyright © 2013 Cognitive Science Society, Inc.

  6. Incremental principal component pursuit for video background modeling

    DOEpatents

    Rodriquez-Valderrama, Paul A.; Wohlberg, Brendt

    2017-03-14

    An incremental Principal Component Pursuit (PCP) algorithm for video background modeling that is able to process one frame at a time while adapting to changes in the background, with a computational complexity that allows for real-time processing, a low memory footprint, and robustness to translational and rotational jitter.

  7. A comparative study of velocity increment generation between the rigid body and flexible models of MMET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ismail, Norilmi Amilia, E-mail: aenorilmi@usm.my

    The motorized momentum exchange tether (MMET) is capable of generating useful velocity increments through spin-orbit coupling. This study presents a comparison of the velocity increments between the rigid body and flexible models of the MMET. The equations of motion of both models in the time domain are transformed into functions of true anomaly. The equations of motion are integrated, and the responses in terms of the velocity increment of the rigid body and flexible models are compared and analysed. Results show that the initial conditions, eccentricity, and flexibility of the tether have significant effects on the velocity increments of the tether.

  8. The balanced scorecard: an incremental approach model to health care management.

    PubMed

    Pineno, Charles J

    2002-01-01

    The balanced scorecard represents a technique used in strategic management to translate an organization's mission and strategy into a comprehensive set of performance measures that provide the framework for implementation of strategic management. This article develops an incremental approach for decision making by formulating a specific balanced scorecard model with an index of nonfinancial as well as financial measures. The incremental approach to costs, including profit contribution analysis and probabilities, allows decisionmakers to assess, for example, how their desire to meet different health care needs will cause changes in service design. This incremental approach to the balanced scorecard may prove to be useful in evaluating the existence of causality relationships between different objective and subjective measures to be included within the balanced scorecard.

  9. Integrating Incremental Learning and Episodic Memory Models of the Hippocampal Region

    ERIC Educational Resources Information Center

    Meeter, M.; Myers, C. E.; Gluck, M. A.

    2005-01-01

    By integrating previous computational models of corticohippocampal function, the authors develop and test a unified theory of the neural substrates of familiarity, recollection, and classical conditioning. This approach integrates models from 2 traditions of hippocampal modeling, those of episodic memory and incremental learning, by drawing on an…

  10. Modelling and Prediction of Spark-ignition Engine Power Performance Using Incremental Least Squares Support Vector Machines

    NASA Astrophysics Data System (ADS)

    Wong, Pak-kin; Vong, Chi-man; Wong, Hang-cheong; Li, Ke

    2010-05-01

    Modern automotive spark-ignition (SI) engine power performance usually refers to output power and torque, which are significantly affected by the setup of control parameters in the engine management system (EMS). EMS calibration is done empirically through tests on a dynamometer (dyno) because no exact mathematical engine model is yet available. With the emerging nonlinear function estimation technique of least squares support vector machines (LS-SVM), an approximate power performance model of an SI engine can be determined by training on sample data acquired from the dyno. A novel incremental algorithm based on the typical LS-SVM is also proposed in this paper, so that power performance models built from the incremental LS-SVM can be updated whenever new training data arrive. By updating the models, their accuracy can be continuously increased. The predictions of the models estimated by the incremental LS-SVM are in good agreement with the actual test results, with almost the same average accuracy as retraining the models from scratch, but the incremental algorithm can significantly shorten model construction time when new training data arrive.
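
    The core idea, updating a fitted model in place when new dyno samples arrive rather than retraining from scratch, can be illustrated with ordinary recursive least squares on a linear model. This is a simplified stand-in: the paper's method updates the LS-SVM linear system, and all data here are synthetic.

```python
import numpy as np

class IncrementalLeastSquares:
    """Recursive least squares: folds in one (x, y) sample at constant
    cost via the Sherman-Morrison identity, so the model never needs
    retraining from scratch when new data arrive."""
    def __init__(self, dim, reg=1e-3):
        self.w = np.zeros(dim)
        self.P = np.eye(dim) / reg          # running inverse of (X'X + reg*I)
    def update(self, x, y):
        x = np.asarray(x, float)
        Px = self.P @ x
        gain = Px / (1.0 + x @ Px)
        self.w = self.w + gain * (y - x @ self.w)   # correct by prediction error
        self.P = self.P - np.outer(gain, Px)        # rank-one downdate
    def predict(self, x):
        return np.asarray(x, float) @ self.w

rng = np.random.default_rng(0)
model = IncrementalLeastSquares(dim=2)
for _ in range(500):
    x = rng.normal(size=2)
    model.update(x, 2.0 * x[0] - 1.0 * x[1])   # noiseless synthetic target
print(np.round(model.w, 3))                    # approaches [2, -1]
```

    Each update costs the same regardless of how many samples have already been seen, which is the property that lets the calibration model track newly arriving dyno data.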

  11. Optimal tree increment models for the Northeastern United States

    Treesearch

    Don C. Bragg

    2003-01-01

    I used the potential relative increment (PRI) methodology to develop optimal tree diameter growth models for the Northeastern United States. Thirty species from the Eastwide Forest Inventory Database yielded 69,676 individuals, which were then reduced to fast-growing subsets for PRI analysis. For instance, only 14 individuals from the greater than 6,300-tree eastern...

  12. Evaluation of incremental reactivity and its uncertainty in Southern California.

    PubMed

    Martien, Philip T; Harley, Robert A; Milford, Jana B; Russell, Armistead G

    2003-04-15

    The incremental reactivity (IR) and relative incremental reactivity (RIR) of carbon monoxide and 30 individual volatile organic compounds (VOC) were estimated for the South Coast Air Basin using two photochemical air quality models: a 3-D, grid-based model and a vertically resolved trajectory model. Both models include an extended version of the SAPRC99 chemical mechanism. For the 3-D modeling, the decoupled direct method (DDM-3D) was used to assess reactivities. The trajectory model was applied to estimate uncertainties in reactivities due to uncertainties in chemical rate parameters, deposition parameters, and emission rates using Monte Carlo analysis with Latin hypercube sampling. For most VOC, RIRs were found to be consistent in rankings with those produced by Carter using a box model. However, 3-D simulations show that coastal regions, upwind of most of the emissions, have comparatively low IR but higher RIR than predicted by box models for C4-C5 alkenes and carbonyls that initiate the production of HOx radicals. Biogenic VOC emissions were found to have a lower RIR than predicted by box model estimates, because emissions of these VOC were mostly downwind of the areas of primary ozone production. Uncertainties in RIR of individual VOC were found to be dominated by uncertainties in the rate parameters of their primary oxidation reactions. The coefficient of variation (COV) of most RIR values ranged from 20% to 30%, whereas the COV of absolute incremental reactivity ranged from about 30% to 40%. In general, uncertainty and variability both decreased when relative rather than absolute reactivity metrics were used.
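
    Latin hypercube sampling, which the study uses to propagate rate-parameter, deposition, and emissions uncertainties through the trajectory model, can be sketched compactly. This is a generic implementation; the actual parameter distributions in the study are not reproduced here.

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """n samples in [0, 1)^d: each axis is cut into n equal bins and every
    bin is hit exactly once, so each 1-D projection is evenly covered with
    far fewer runs than plain Monte Carlo would need."""
    u = (np.arange(n)[:, None] + rng.random((n, d))) / n   # one point per bin
    for j in range(d):                                     # decouple the axes
        u[:, j] = u[rng.permutation(n), j]
    return u

rng = np.random.default_rng(42)
samples = latin_hypercube(100, 3, rng)
# every column hits each of the 100 bins exactly once
print(sorted(set((samples[:, 0] * 100).astype(int))) == list(range(100)))  # True
```

    Uniform samples would then be mapped through the inverse CDFs of the uncertain parameters before each model run, and the spread of the resulting RIR values gives the coefficients of variation reported in the abstract.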

  13. Optimal Tree Increment Models for the Northeastern United States

    Treesearch

    Don C. Bragg

    2005-01-01

    I used the potential relative increment (PRI) methodology to develop optimal tree diameter growth models for the Northeastern United States. Thirty species from the Eastwide Forest Inventory Database yielded 69,676 individuals, which were then reduced to fast-growing subsets for PRI analysis. For instance, only 14 individuals from the greater than 6,300-tree eastern...

  14. Quality and Growth Implications of Incremental Costing Models for Distance Education Units

    ERIC Educational Resources Information Center

    Crawford, C. B.; Gould, Lawrence V.; King, Dennis; Parker, Carl

    2010-01-01

    The purpose of this article is to explore quality and growth implications emergent from various incremental costing models applied to distance education units. Prior research relative to costing models and three competing costing models useful in the current distance education environment are discussed. Specifically, the simple costing model, unit…

  15. The efficacy of using inventory data to develop optimal diameter increment models

    Treesearch

    Don C. Bragg

    2002-01-01

    Most optimal tree diameter growth models have arisen through either the conceptualization of physiological processes or the adaptation of empirical increment models. However, surprisingly little effort has been invested in the melding of these approaches even though it is possible to develop theoretically sound, computationally efficient optimal tree growth models...

  16. Planning Through Incrementalism

    ERIC Educational Resources Information Center

    Lasserre, Ph.

    1974-01-01

    An incremental model of decisionmaking is discussed and compared with the Comprehensive Rational Approach. A model of reconciliation between the two approaches is proposed, and examples are given in the field of economic development and educational planning. (Author/DN)

  17. A System to Derive Optimal Tree Diameter Increment Models from the Eastwide Forest Inventory Data Base (EFIDB)

    Treesearch

    Don C. Bragg

    2002-01-01

    This article is an introduction to the computer software used by the Potential Relative Increment (PRI) approach to optimal tree diameter growth modeling. These DOS programs extract qualified tree and plot data from the Eastwide Forest Inventory Data Base (EFIDB), calculate relative tree increment, sort for the highest relative increments by diameter class, and...

  18. Situation Model Updating in Young and Older Adults: Global versus Incremental Mechanisms

    PubMed Central

    Bailey, Heather R.; Zacks, Jeffrey M.

    2015-01-01

    Readers construct mental models of situations described by text. Activity in narrative text is dynamic, so readers must frequently update their situation models when dimensions of the situation change. Updating can be incremental, such that a change leads to updating just the dimension that changed, or global, such that the entire model is updated. Here, we asked whether older and young adults make differential use of incremental and global updating. Participants read narratives containing changes in characters and spatial location and responded to recognition probes throughout the texts. Responses were slower when probes followed a change, suggesting that situation models were updated at changes. When either dimension changed, responses to probes for both dimensions were slowed; this provides evidence for global updating. Moreover, older adults showed stronger evidence of global updating than did young adults. One possibility is that older adults perform more global updating to offset reduced ability to manipulate information in working memory. PMID:25938248

  19. Constructing increment-decrement life tables.

    PubMed

    Schoen, R

    1975-05-01

    A life table model which can recognize increments (or entrants) as well as decrements has proven to be of considerable value in the analysis of marital status patterns, labor force participation patterns, and other areas of substantive interest. Nonetheless, relatively little work has been done on the methodology of increment-decrement (or combined) life tables. The present paper reviews the general, recursive solution of Schoen and Nelson (1974), develops explicit solutions for three cases of particular interest, and compares alternative approaches to the construction of increment-decrement tables.
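
    The increment-decrement idea, a state that people can both enter and leave, can be sketched as a discrete-time multistate projection. The transition probabilities below are hypothetical, and Schoen's paper works with continuous-age rates and closed-form solutions rather than this toy projection.

```python
def project_cohort(l0, P, n):
    """Project state counts through n intervals. P[i][j] is the probability
    of moving from state i to state j per interval; row sums below 1 leave
    room for mortality, which is a pure decrement from every state."""
    l, history = list(l0), [tuple(l0)]
    for _ in range(n):
        l = [sum(l[i] * P[i][j] for i in range(len(l))) for j in range(len(l))]
        history.append(tuple(l))
    return history

# States: 0 = unmarried, 1 = married (illustrative rates only)
P = [[0.88, 0.10],   # unmarried: 10% marry (an increment to state 1), 2% die
     [0.04, 0.94]]   # married: 4% return to unmarried (a decrement), 2% die
traj = project_cohort([100000.0, 0.0], P, 5)
print([round(x) for x in traj[-1]])   # survivors split across the two states
```

    Because state 1 gains entrants while losing people to both state 0 and death, its count is shaped by increments and decrements simultaneously, which is exactly what a single-decrement life table cannot represent.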

  20. Sensitivity to gaze-contingent contrast increments in naturalistic movies: An exploratory report and model comparison

    PubMed Central

    Wallis, Thomas S. A.; Dorr, Michael; Bex, Peter J.

    2015-01-01

    Sensitivity to luminance contrast is a prerequisite for all but the simplest visual systems. To examine contrast increment detection performance in a way that approximates the natural environmental input of the human visual system, we presented contrast increments gaze-contingently within naturalistic video freely viewed by observers. A band-limited contrast increment was applied to a local region of the video relative to the observer's current gaze point, and the observer made a forced-choice response to the location of the target (≈25,000 trials across five observers). We present exploratory analyses showing that performance improved as a function of the magnitude of the increment and depended on the direction of eye movements relative to the target location, the timing of eye movements relative to target presentation, and the spatiotemporal image structure at the target location. Contrast discrimination performance can be modeled by assuming that the underlying contrast response is an accelerating nonlinearity (arising from a nonlinear transducer or gain control). We implemented one such model and examined the posterior over model parameters, estimated using Markov-chain Monte Carlo methods. The parameters were poorly constrained by our data; parameters constrained using strong priors taken from previous research showed poor cross-validated prediction performance. Atheoretical logistic regression models were better constrained and provided similar prediction performance to the nonlinear transducer model. Finally, we explored the properties of an extended logistic regression that incorporates both eye movement and image content features. Models of contrast transduction may be better constrained by incorporating data from both artificial and natural contrast perception settings. PMID:26057546
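
    The accelerating contrast-response nonlinearity the authors fit can be written as a standard Legge-Foley-style transducer. The parameter values below are illustrative defaults from the contrast-discrimination literature, not the posteriors estimated in the paper.

```python
def transducer(c, p=2.4, q=2.0, z=0.1):
    # Contrast response r(c) = c^p / (z^q + c^q): accelerating at low
    # contrast, compressive at high contrast (illustrative parameters)
    return c ** p / (z ** q + c ** q)

def increment_threshold(c, delta_r=0.01, step=1e-5):
    # Smallest increment dc whose response change reaches the criterion delta_r
    dc = step
    while transducer(c + dc) - transducer(c) < delta_r:
        dc += step
    return dc

# The classic "dipper": thresholds first fall, then rise, with pedestal contrast
for pedestal in (0.0, 0.05, 0.2, 0.5):
    print(pedestal, round(increment_threshold(pedestal), 4))
```

    The dip arises because the accelerating region steepens the response just above detection threshold, so a small pedestal makes increments easier to see, the facilitation effect such transducer (or gain-control) models are invoked to explain.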

  1. Prediction of height increment for models of forest growth

    Treesearch

    Albert R. Stage

    1975-01-01

    Functional forms of equations were derived for predicting 10-year periodic height increment of forest trees from height, diameter, diameter increment, and habitat type. Crown ratio was considered as an additional variable for prediction, but its contribution was negligible. Coefficients of the function were estimated for 10 species of trees growing in 10 habitat types...

  2. Phonological priming in young children who stutter: holistic versus incremental processing.

    PubMed

    Byrd, Courtney T; Conture, Edward G; Ohde, Ralph N

    2007-02-01

    To investigate the holistic versus incremental phonological encoding processes of young children who stutter (CWS; N = 26) and age- and gender-matched children who do not stutter (CWNS; N = 26) via a picture-naming auditory priming paradigm. Children named pictures during 3 auditory priming conditions: neutral, holistic, and incremental. Speech reaction time (SRT) was measured from the onset of picture presentation to the onset of participant response. CWNS shifted from being significantly faster in the holistic priming condition to being significantly faster in the incremental priming condition from 3 to 5 years of age. In contrast, the majority of 3- and 5-year-old CWS continued to exhibit faster SRT in the holistic than the incremental condition. CWS are delayed in making the developmental shift in phonological encoding from holistic to incremental processing, a delay that may contribute to their difficulties establishing fluent speech.

  3. Real-time model learning using Incremental Sparse Spectrum Gaussian Process Regression.

    PubMed

    Gijsberts, Arjan; Metta, Giorgio

    2013-05-01

    Novel applications in unstructured and non-stationary human environments require robots that learn from experience and adapt autonomously to changing conditions. Predictive models therefore not only need to be accurate, but should also be updated incrementally in real-time and require minimal human intervention. Incremental Sparse Spectrum Gaussian Process Regression is an algorithm that is targeted specifically for use in this context. Rather than developing a novel algorithm from the ground up, the method is based on the thoroughly studied Gaussian Process Regression algorithm, therefore ensuring a solid theoretical foundation. Non-linearity and a bounded update complexity are achieved simultaneously by means of a finite dimensional random feature mapping that approximates a kernel function. As a result, the computational cost for each update remains constant over time. Finally, algorithmic simplicity and support for automated hyperparameter optimization ensures convenience when employed in practice. Empirical validation on a number of synthetic and real-life learning problems confirms that the performance of Incremental Sparse Spectrum Gaussian Process Regression is superior with respect to the popular Locally Weighted Projection Regression, while computational requirements are found to be significantly lower. The method is therefore particularly suited for learning with real-time constraints or when computational resources are limited. Copyright © 2012 Elsevier Ltd. All rights reserved.
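
    The finite-dimensional random feature mapping behind the algorithm's bounded update cost can be sketched directly: random Fourier features whose inner products approximate an RBF kernel (the Rahimi-Recht construction). The dimensions and lengthscale below are arbitrary choices for illustration.

```python
import numpy as np

def rff(x, W, b):
    # Random Fourier feature map z(x): z(x)·z(y) approximates the RBF
    # kernel k(x, y) = exp(-||x - y||^2 / (2 * ls^2))
    D = b.size
    return np.sqrt(2.0 / D) * np.cos(x @ W + b)

rng = np.random.default_rng(0)
d, D, ls = 3, 5000, 1.0
W = rng.normal(0.0, 1.0 / ls, size=(d, D))   # spectral frequencies
b = rng.uniform(0.0, 2.0 * np.pi, size=D)    # random phases

x, y = rng.normal(size=d), rng.normal(size=d)
approx = rff(x, W, b) @ rff(y, W, b)
exact = np.exp(-np.sum((x - y) ** 2) / (2.0 * ls ** 2))
print(round(float(approx), 2), round(float(exact), 2))   # close for large D
```

    Because z(x) has fixed length D, any incremental linear learner run on the features (e.g. recursive least squares) updates at constant per-sample cost, which is how the algorithm keeps its update complexity bounded as data accumulate.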

  4. Combining Accuracy and Efficiency: An Incremental Focal-Point Method Based on Pair Natural Orbitals.

    PubMed

    Fiedler, Benjamin; Schmitz, Gunnar; Hättig, Christof; Friedrich, Joachim

    2017-12-12

    In this work, we present a new pair natural orbitals (PNO)-based incremental scheme to calculate CCSD(T) and CCSD(T0) reaction, interaction, and binding energies. We perform an extensive analysis, which shows small incremental errors similar to previous non-PNO calculations. Furthermore, slight PNO errors are obtained by using T_PNO = T_TNO with appropriate values of 10^-7 to 10^-8 for reactions and 10^-8 for interaction or binding energies. The combination with the efficient MP2 focal-point approach yields chemical accuracy relative to the complete basis-set (CBS) limit. In this method, small basis sets (cc-pVDZ, def2-TZVP) for the CCSD(T) part are sufficient in the case of reactions or interactions, while somewhat larger ones (e.g., (aug)-cc-pVTZ) are necessary for molecular clusters. For these larger basis sets, we show the very high efficiency of our scheme. We obtain not only tremendous decreases in wall times (i.e., factors >10^2) due to the parallelization of the increment calculations, and in total times due to the application of PNOs (i.e., compared to the normal incremental scheme), but also smaller total times with respect to the standard PNO method. In this way, our new method combines excellent accuracy with very high efficiency and extends accessibility to larger systems due to the separation of the full computation into several small increments.

  5. Incremental checking of Master Data Management model based on contextual graphs

    NASA Astrophysics Data System (ADS)

    Lamolle, Myriam; Menet, Ludovic; Le Duc, Chan

    2015-10-01

    The validation of models is a crucial step in distributed heterogeneous systems. In this paper, an incremental validation method is proposed in the scope of a Model Driven Engineering (MDE) approach, which is used to develop the Master Data Management (MDM) field represented by XML Schema models. The MDE approach presented in this paper is based on the definition of an abstraction layer using UML class diagrams. The validation method aims to minimise model errors and to optimise the process of model checking. Therefore, the notion of validation contexts is introduced, allowing the verification of data model views. Description logics specify the constraints that the models have to satisfy. An experimentation of the approach is presented through an application developed in the ArgoUML IDE.

  6. Incremental generation of answers during the comprehension of questions with quantifiers.

    PubMed

    Bott, Oliver; Augurzky, Petra; Sternefeld, Wolfgang; Ulrich, Rolf

    2017-09-01

    The paper presents a study on the online interpretation of quantified questions involving complex domain restriction, for instance, are all triangles blue that are in the circle. Two probe reaction time (RT) task experiments were conducted to study the incremental nature of answer generation while manipulating visual contexts and response hand overlap between tasks. We manipulated the contexts in such a way that the incremental answer to the question changed from 'yes' to 'no' or remained the same before and after encountering the extraposed relative clause. The findings of both experiments provide evidence for incremental answer preparation but only if the context did not involve the risk of answer revision. Our results show that preliminary output from incremental semantic interpretation results in response priming that facilitates congruent responses in the probe RT task. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. An incremental anomaly detection model for virtual machines.

    PubMed

    Zhang, Hancui; Chen, Shuyu; Liu, Jun; Zhou, Zhen; Wu, Tianshu

    2017-01-01

    The Self-Organizing Map (SOM) algorithm, as an unsupervised learning method, has been applied to anomaly detection due to its capabilities of self-organization and automatic anomaly prediction. However, because the algorithm is initialized randomly, it takes a long time to train a detection model. Besides, cloud platforms with large-scale virtual machines are prone to performance anomalies due to their highly dynamic and resource-sharing character, which gives the algorithm low accuracy and poor scalability. To address these problems, an Improved Incremental Self-Organizing Map (IISOM) model is proposed for anomaly detection of virtual machines. In this model, a heuristic-based initialization algorithm and a Weighted Euclidean Distance (WED) algorithm are introduced into SOM to speed up the training process and improve model quality. Meanwhile, a neighborhood-based searching algorithm is presented to accelerate detection by taking into account the large scale and highly dynamic features of virtual machines on a cloud platform. To demonstrate the effectiveness, experiments on the common benchmark KDD Cup dataset and a real dataset have been performed. Results suggest that IISOM has advantages in accuracy and convergence velocity of anomaly detection for virtual machines on a cloud platform.
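
    The weighted Euclidean distance at the heart of the WED step can be sketched as the best-matching-unit search of a SOM. The grid below is random and the feature weights and anomaly threshold are illustrative; the paper's heuristic initialization and neighborhood search are not reproduced.

```python
import numpy as np

def weighted_bmu(grid, x, w):
    # Best-matching unit under a weighted Euclidean distance:
    # d(u, x)^2 = sum_k w_k * (u_k - x_k)^2, so informative features
    # dominate the match
    d2 = np.sum(w * (grid - x) ** 2, axis=-1)
    idx = np.unravel_index(np.argmin(d2), d2.shape)
    return idx, float(np.sqrt(d2[idx]))

rng = np.random.default_rng(3)
grid = rng.random((10, 10, 4))             # 10x10 map, 4 monitored VM metrics
weights = np.array([2.0, 1.0, 1.0, 0.5])   # e.g. CPU weighted over disk (hypothetical)
sample = rng.random(4)

bmu, dist = weighted_bmu(grid, sample, weights)
is_anomaly = dist > 0.8                    # threshold is illustrative
print(bmu, round(dist, 3), is_anomaly)
```

    A sample whose weighted distance to its best-matching unit exceeds the threshold is flagged as anomalous; the weights let operators emphasize the metrics most indicative of VM performance problems.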

  8. Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.

    PubMed

    Kwak, Nojun

    2016-05-20

    Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally because an ever-increasing kernel matrix must be treated as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed based on the observation that the centerization step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can directly be used in any incremental method to implement a kernel version of that method. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are applied to problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.

  9. A diameter increment model for Red Fir in California and Southern Oregon

    Treesearch

    K. Leroy Dolph

    1992-01-01

    Periodic (10-year) diameter increment of individual red fir trees in California and southern Oregon can be predicted from the initial diameter and crown ratio of each tree, site index, percent slope, and aspect of the site. The model actually predicts the natural logarithm of the change in squared diameter inside bark between the start and the end of a 10-year growth period....

  10. An incremental anomaly detection model for virtual machines

    PubMed Central

    Zhang, Hancui; Chen, Shuyu; Liu, Jun; Zhou, Zhen; Wu, Tianshu

    2017-01-01

    The Self-Organizing Map (SOM) algorithm, as an unsupervised learning method, has been applied to anomaly detection due to its capabilities of self-organization and automatic anomaly prediction. However, because the algorithm is initialized randomly, it takes a long time to train a detection model. Besides, cloud platforms with large-scale virtual machines are prone to performance anomalies due to their highly dynamic and resource-sharing character, which gives the algorithm low accuracy and poor scalability. To address these problems, an Improved Incremental Self-Organizing Map (IISOM) model is proposed for anomaly detection of virtual machines. In this model, a heuristic-based initialization algorithm and a Weighted Euclidean Distance (WED) algorithm are introduced into SOM to speed up the training process and improve model quality. Meanwhile, a neighborhood-based searching algorithm is presented to accelerate detection by taking into account the large scale and highly dynamic features of virtual machines on a cloud platform. To demonstrate the effectiveness, experiments on the common benchmark KDD Cup dataset and a real dataset have been performed. Results suggest that IISOM has advantages in accuracy and convergence velocity of anomaly detection for virtual machines on a cloud platform. PMID:29117245

  11. Is incremental hemodialysis ready to return on the scene? From empiricism to kinetic modelling.

    PubMed

    Basile, Carlo; Casino, Francesco Gaetano; Kalantar-Zadeh, Kamyar

    2017-08-01

    Most people who make the transition to maintenance dialysis therapy are treated with a fixed-dose thrice-weekly hemodialysis regimen without considering their residual kidney function (RKF). The RKF provides effective and naturally continuous clearance of both small and middle molecules, plays a major role in metabolic homeostasis, nutritional status and cardiovascular health, and aids in fluid management. The RKF is associated with better patient survival and greater health-related quality of life, although these effects may be confounded by patient comorbidities. Preservation of the RKF requires a careful approach, including regular monitoring, avoidance of nephrotoxins, gentle control of blood pressure to avoid intradialytic hypotension, and an individualized dialysis prescription, including the consideration of incremental hemodialysis. There is currently no standardized method for applying incremental hemodialysis in practice. Infrequent (once- to twice-weekly) hemodialysis regimens are often used arbitrarily, without knowing which patients would benefit the most from them or how to escalate the dialysis dose as RKF declines over time. The recently heightened interest in incremental hemodialysis has been hindered by the current limitations of urea kinetic models (UKM), which tend to overestimate the dialysis dose required in the presence of substantial RKF. This is due to an erroneous extrapolation, to the clinical domain, of the equivalence between renal urea clearance (Kru) and dialyser urea clearance (Kd) that is correctly assumed by the UKM: in kinetic terms, each ml/min of Kd clears urea from the blood just as 1 ml/min of Kru does. By no means should such kinetic equivalence imply that 1 ml/min of Kd is clinically equivalent to 1 ml/min of urea clearance provided by the native kidneys. A recent paper by Casino and Basile suggested a variable target model (VTM) as opposed to the fixed model, because the VTM gives more clinical weight to the RKF and allows

  12. The power induced effects module: A FORTRAN code which estimates lift increments due to power induced effects for V/STOL flight

    NASA Technical Reports Server (NTRS)

    Sandlin, Doral R.; Howard, Kipp E.

    1991-01-01

    A user friendly FORTRAN code that can be used for preliminary design of V/STOL aircraft is described. The program estimates lift increments, due to power induced effects, encountered by aircraft in V/STOL flight. These lift increments are calculated using empirical relations developed from wind tunnel tests and are due to suckdown, fountain, ground vortex, jet wake, and the reaction control system. The code can be used as a preliminary design tool along with NASA Ames' Aircraft Synthesis design code or as a stand-alone program for V/STOL aircraft designers. The Power Induced Effects (PIE) module was validated using experimental data and data computed from lift increment routines. Results are presented for many flat plate models along with the McDonnell Aircraft Company's MFVT (mixed flow vectored thrust) V/STOL preliminary design and a 15 percent scale model of the YAV-8B Harrier V/STOL aircraft. Trends and magnitudes of lift increments versus aircraft height above the ground were predicted well by the PIE module. The code also provided good predictions of the magnitudes of lift increments versus aircraft forward velocity. More experimental results are needed to determine how well the code predicts lift increments as they vary with jet deflection angle and angle of attack. The FORTRAN code is provided in the appendix.

  13. Reaction time for trimolecular reactions in compartment-based reaction-diffusion models

    NASA Astrophysics Data System (ADS)

    Li, Fei; Chen, Minghan; Erban, Radek; Cao, Yang

    2018-05-01

    Trimolecular reaction models are investigated in the compartment-based (lattice-based) framework for stochastic reaction-diffusion modeling. The formulae for the first collision time and the mean reaction time are derived for the case where three molecules are present in the solution under periodic boundary conditions. For the case of reflecting boundary conditions, similar formulae are obtained using a computer-assisted approach. The accuracy of these formulae is further verified through comparison with numerical results. The presented derivation is based on the first passage time analysis of Montroll [J. Math. Phys. 10, 753 (1969)]. Montroll's results for two-dimensional lattice-based random walks are adapted and applied to compartment-based models of trimolecular reactions, which are studied in one-dimensional or pseudo one-dimensional domains.
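
    A Monte Carlo sketch of the quantity being derived, the first collision time of three molecules on a periodic one-dimensional lattice of compartments, might look like this. It is a toy estimator under simplified assumptions (one molecule jumps per event, unit jump rate), not the paper's analytical formulae:

```python
import random

def first_collision_time(n_compartments=8, seed=None, max_steps=100_000):
    """Monte Carlo estimate of the first-collision time: three molecules
    perform random walks on a 1D periodic lattice, one randomly chosen
    molecule jumping left or right per time step, until all three occupy
    the same compartment."""
    rng = random.Random(seed)
    pos = [rng.randrange(n_compartments) for _ in range(3)]
    if pos[0] == pos[1] == pos[2]:
        return 0
    for step in range(1, max_steps + 1):
        i = rng.randrange(3)
        pos[i] = (pos[i] + rng.choice((-1, 1))) % n_compartments
        if pos[0] == pos[1] == pos[2]:
            return step
    return max_steps

# Averaging over independent runs approximates the mean reaction time
# (here measured in jump events).
mean_t = sum(first_collision_time(seed=s) for s in range(200)) / 200
```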

  14. Incremental online learning in high dimensions.

    PubMed

    Vijayakumar, Sethu; D'Souza, Aaron; Schaal, Stefan

    2005-12-01

    Locally weighted projection regression (LWPR) is a new algorithm for incremental nonlinear function approximation in high-dimensional spaces with redundant and irrelevant input dimensions. At its core, it employs nonparametric regression with locally linear models. In order to stay computationally efficient and numerically robust, each local model performs its regression analysis with a small number of univariate regressions along selected directions in input space, in the spirit of partial least squares regression. We discuss when and how local learning techniques can successfully work in high-dimensional spaces and review the various techniques for local dimensionality reduction before finally deriving the LWPR algorithm. The properties of LWPR are that it (1) learns rapidly with second-order learning methods based on incremental training, (2) uses statistically sound stochastic leave-one-out cross validation for learning without the need to memorize training data, (3) adjusts its weighting kernels based only on local information in order to minimize the danger of negative interference of incremental learning, (4) has a computational complexity that is linear in the number of inputs, and (5) can deal with a large number of (possibly redundant) inputs, as shown in various empirical evaluations with up to 90-dimensional data sets. For a probabilistic interpretation, predictive variance and confidence intervals are derived. To our knowledge, LWPR is the first truly incremental spatially localized learning method that can successfully and efficiently operate in very high-dimensional spaces.
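
    The prediction side of such a scheme can be sketched as a weight-normalized blend of local linear models with Gaussian receptive fields. The centers, distance metric and local fits below are illustrative only; the real LWPR learns all of these incrementally:

```python
import numpy as np

def lwr_predict(x, centers, D, betas):
    """LWPR-style prediction (simplified): each local model k has a
    Gaussian receptive field centered at centers[k] with distance metric D
    and a local linear fit betas[k] = (intercept, slopes...). The global
    prediction is the activation-weighted blend of local predictions."""
    preds, weights = [], []
    for c, beta in zip(centers, betas):
        d = x - c
        w = np.exp(-0.5 * d @ D @ d)          # receptive field activation
        preds.append(beta[0] + beta[1:] @ d)  # local linear model around c
        weights.append(w)
    weights = np.array(weights)
    return float(np.dot(weights, preds) / weights.sum())

centers = [np.array([0.0]), np.array([1.0])]
D = np.array([[25.0]])
betas = [np.array([0.0, 1.0]), np.array([1.0, 1.0])]  # both local fits of y = x
y_hat = lwr_predict(np.array([0.5]), centers, D, betas)
```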

  15. A table of intensity increments.

    DOT National Transportation Integrated Search

    1966-01-01

    Small intensity increments can be produced by adding larger intensity increments. A table is presented covering the range of small intensity increments from 0.008682 through 6.020 dB in 60 large intensity increments of 1 dB.
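
    One plausible reading of such a table, assuming the small increment is produced by adding an in-phase tone whose amplitude lies k dB below the pedestal so that amplitudes add, reproduces both quoted endpoints:

```python
import math

def level_increment_db(k):
    """Level increase (dB) from adding, in phase, a tone whose amplitude is
    k dB below the pedestal. Assumed mechanism: in-phase amplitudes add, so
    the level rises by 20*log10(1 + 10**(-k/20))."""
    return 20.0 * math.log10(1.0 + 10.0 ** (-k / 20.0))

# Reconstructing the table's range: k = 0..60 in 1-dB steps spans
# ~6.020 dB down to ~0.008682 dB, matching the quoted endpoints.
table = {k: level_increment_db(k) for k in range(0, 61)}
```

    Under this assumption, k = 0 gives 20·log10(2) ≈ 6.0206 dB and k = 60 gives ≈ 0.008682 dB, matching the stated range of the table.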

  16. Multilevel models for estimating incremental net benefits in multinational studies.

    PubMed

    Grieve, Richard; Nixon, Richard; Thompson, Simon G; Cairns, John

    2007-08-01

    Multilevel models (MLMs) have been recommended for estimating incremental net benefits (INBs) in multicentre cost-effectiveness analysis (CEA). However, these models have assumed that the INBs are exchangeable and that there is a common variance across all centres. This paper examines the plausibility of these assumptions by comparing various MLMs for estimating the mean INB in a multinational CEA. The results showed that the MLMs that assumed the INBs were exchangeable and had a common variance led to incorrect inferences. The MLMs that included covariates to allow for systematic differences across the centres, and estimated different variances in each centre, made more plausible assumptions, fitted the data better and led to more appropriate inferences. We conclude that the validity of the assumptions underlying MLMs used in CEA needs to be critically evaluated before reliable conclusions can be drawn. Copyright 2006 John Wiley & Sons, Ltd.

  17. Support vector machine incremental learning triggered by wrongly predicted samples

    NASA Astrophysics Data System (ADS)

    Tang, Ting-long; Guan, Qiu; Wu, Yi-rong

    2018-05-01

    According to the classic Karush-Kuhn-Tucker (KKT) theorem, at every step of incremental support vector machine (SVM) learning, a newly added sample that violates the KKT conditions becomes a new support vector (SV) and may migrate old samples between the SV set and the non-support-vector (NSV) set; at the same time, the learning model should be updated based on the SVs. However, it is not immediately clear which of the old samples will move between the SV and NSV sets. Additionally, the learning model may be updated unnecessarily, which does little to improve accuracy but slows down training. Therefore, how the new SVs are chosen from the old sets during the incremental stages, and when the incremental steps are processed, greatly influences the accuracy and efficiency of incremental SVM learning. In this work, a new algorithm is proposed to select candidate SVs and to use wrongly predicted samples to trigger the incremental processing. Experimental results show that the proposed algorithm achieves good performance with high efficiency, high speed and good accuracy.
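
    The triggering condition can be sketched as a KKT check on the current decision function; the toy kernel, support vectors and multipliers below are illustrative, not part of the paper's method:

```python
import numpy as np

def violates_kkt(x_new, y_new, svs, alphas, ys, b, kernel, tol=1e-3):
    """KKT check against the current decision function
    f(x) = sum_i alpha_i * y_i * K(sv_i, x) + b.
    A sample with y*f(x) >= 1 satisfies the KKT conditions and can be
    skipped; a violator becomes a candidate SV and triggers an update."""
    f = sum(a * yi * kernel(sv, x_new)
            for a, yi, sv in zip(alphas, ys, svs)) + b
    return y_new * f < 1.0 - tol

linear = lambda u, v: float(np.dot(u, v))
svs = [np.array([1.0, 1.0]), np.array([-1.0, -1.0])]
alphas, ys, b = [0.5, 0.5], [1, -1], 0.0

ok_sample = violates_kkt(np.array([2.0, 2.0]), 1, svs, alphas, ys, b, linear)
violator = violates_kkt(np.array([0.1, 0.1]), 1, svs, alphas, ys, b, linear)
```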

  18. Modelling Students' Visualisation of Chemical Reaction

    ERIC Educational Resources Information Center

    Cheng, Maurice M. W.; Gilbert, John K.

    2017-01-01

    This paper proposes a model-based notion of "submicro representations of chemical reactions". Based on three structural models of matter (the simple particle model, the atomic model and the free electron model of metals), we suggest there are two major models of reaction in school chemistry curricula: (a) reactions that are simple…

  19. Developing Risk Prediction Models for Kidney Injury and Assessing Incremental Value for Novel Biomarkers

    PubMed Central

    Kerr, Kathleen F.; Meisner, Allison; Thiessen-Philbrook, Heather; Coca, Steven G.

    2014-01-01

    The field of nephrology is actively involved in developing biomarkers and improving models for predicting patients’ risks of AKI and CKD and their outcomes. However, some important aspects of evaluating biomarkers and risk models are not widely appreciated, and statistical methods are still evolving. This review describes some of the most important statistical concepts for this area of research and identifies common pitfalls. Particular attention is paid to metrics proposed within the last 5 years for quantifying the incremental predictive value of a new biomarker. PMID:24855282

  20. The Crucial Role of Error Correlation for Uncertainty Modeling of CFD-Based Aerodynamics Increments

    NASA Technical Reports Server (NTRS)

    Hemsch, Michael J.; Walker, Eric L.

    2011-01-01

    The Ares I ascent aerodynamics database for Design Cycle 3 (DAC-3) was built from wind-tunnel test results and CFD solutions. The wind tunnel results were used to build the baseline response surfaces for wind-tunnel Reynolds numbers at power-off conditions. The CFD solutions were used to build increments to account for Reynolds number effects. We calculate the validation errors for the primary CFD code results at wind-tunnel Reynolds number power-off conditions and would like to be able to use those errors to predict the validation errors for the CFD increments. However, the validation errors are large compared to the increments. We suggest a way forward that is consistent with common practice in wind tunnel testing, which is to assume that systematic errors in the measurement process and/or the environment will subtract out when increments are calculated, thus making increments more reliable, with smaller uncertainty, than absolute values of the aerodynamic coefficients. A similar practice has arisen for the use of CFD to generate aerodynamic database increments. The basis of this practice is the assumption of strong correlation of the systematic errors inherent in each of the results used to generate an increment. The assumption of strong correlation is the inferential link between the observed validation uncertainties at wind-tunnel Reynolds numbers and the uncertainties to be predicted for flight. In this paper, we suggest a way to estimate the correlation coefficient and demonstrate the approach using code-to-code differences that were obtained for quality control purposes during the Ares I CFD campaign. Finally, since we can expect the increments to be relatively small compared to the baseline response surface and to be typically of the order of the baseline uncertainty, we find that it is necessary to be able to show that the correlation coefficients are close to unity to avoid overinflating the overall database uncertainty with the addition of the increments.
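
    The variance algebra behind the argument is compact. For an increment D = A - B whose component errors are correlated:

```python
import math

def increment_sigma(sigma_a, sigma_b, rho):
    """Standard uncertainty of an increment D = A - B when the systematic
    errors of A and B have correlation coefficient rho:
    var(D) = sA**2 + sB**2 - 2*rho*sA*sB."""
    return math.sqrt(sigma_a**2 + sigma_b**2 - 2.0 * rho * sigma_a * sigma_b)

# With strongly correlated errors, the increment is far more reliable than
# either absolute value; with rho = 1 and equal sigmas it is error-free.
s_uncorr = increment_sigma(1.0, 1.0, 0.0)
s_corr = increment_sigma(1.0, 1.0, 0.95)
```

    This is why the correlation coefficients must be shown to be close to unity: as rho falls, the increment inherits nearly the full combined uncertainty of the two absolute results.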

  1. An incremental DPMM-based method for trajectory clustering, modeling, and retrieval.

    PubMed

    Hu, Weiming; Li, Xi; Tian, Guodong; Maybank, Stephen; Zhang, Zhongfei

    2013-05-01

    Trajectory analysis is the basis for many applications, such as indexing of motion events in videos, activity recognition, and surveillance. In this paper, the Dirichlet process mixture model (DPMM) is applied to trajectory clustering, modeling, and retrieval. We propose an incremental version of a DPMM-based clustering algorithm and apply it to cluster trajectories. An appropriate number of trajectory clusters is determined automatically. When trajectories belonging to new clusters arrive, the new clusters can be identified online and added to the model without any retraining using the previous data. A time-sensitive Dirichlet process mixture model (tDPMM) is applied to each trajectory cluster for learning the trajectory pattern which represents the time-series characteristics of the trajectories in the cluster. Then, a parameterized index is constructed for each cluster. A novel likelihood estimation algorithm for the tDPMM is proposed, and a trajectory-based video retrieval model is developed. The tDPMM-based probabilistic matching method and the DPMM-based model growing method are combined to make the retrieval model scalable and adaptable. Experimental comparisons with state-of-the-art algorithms demonstrate the effectiveness of our algorithm.
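
    The incremental assignment step of a DPMM-style clusterer can be sketched with a Chinese-restaurant-process-like rule. This is a simplification of the paper's algorithm; the Gaussian likelihood, concentration parameter and MAP choice are assumptions for illustration:

```python
import numpy as np

def assign_trajectory(feat, clusters, alpha=1.0, sigma=1.0):
    """Incremental DPMM-style assignment (CRP sketch): score each existing
    cluster by its size times a Gaussian likelihood around the cluster
    mean, and reserve mass proportional to alpha for opening a new
    cluster. Returns the chosen index (len(clusters) means 'new')."""
    scores = []
    for members in clusters:
        mu = np.mean(members, axis=0)
        lik = np.exp(-0.5 * np.sum((feat - mu) ** 2) / sigma**2)
        scores.append(len(members) * lik)
    scores.append(alpha)              # mass for a brand-new cluster
    return int(np.argmax(scores))     # MAP choice; sampling also possible

clusters = [[np.array([0.0, 0.0]), np.array([0.2, 0.1])]]
near = assign_trajectory(np.array([0.1, 0.0]), clusters)  # joins cluster 0
far = assign_trajectory(np.array([9.0, 9.0]), clusters)   # opens a new cluster
```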

  2. Product Quality Modelling Based on Incremental Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Wang, J.; Zhang, W.; Qin, B.; Shi, W.

    2012-05-01

    Incremental support vector machine (ISVM) learning is a method developed in recent years on the foundations of statistical learning theory. It is suited to sequentially arriving field data and has been widely used for product quality prediction and production process optimization. However, traditional ISVM learning does not consider the quality of the incremental data, which may contain noise and redundant samples; this affects learning speed and accuracy to a great extent. In order to improve SVM training speed and accuracy, a modified incremental support vector machine (MISVM) is proposed in this paper. First, the margin vectors are extracted according to the Karush-Kuhn-Tucker (KKT) conditions; then the distance from each margin vector to the final decision hyperplane is calculated to evaluate its importance, and margin vectors whose distance exceeds a specified value are removed; finally, the original SVs and the remaining margin vectors are used to update the SVM. The proposed MISVM not only eliminates unimportant samples such as noise, but also preserves the important ones. The MISVM has been tested on two public datasets and one field dataset of zinc coating weight in strip hot-dip galvanizing, and the results show that the proposed method improves prediction accuracy and training speed effectively. Furthermore, it can provide the necessary decision support and analysis tools for automatic control of product quality, and can be extended to other process industries, such as chemical and manufacturing processes.
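
    The distance-based pruning of margin vectors can be sketched as follows for a linear SVM; the threshold, weights and data are illustrative:

```python
import numpy as np

def prune_margin_vectors(X_margin, w, b, max_dist):
    """MISVM-style filtering sketch: compute each margin vector's distance
    to the decision hyperplane w.x + b = 0 and keep only those within
    max_dist, discarding likely noise or redundant samples before the
    SVM update."""
    dists = np.abs(X_margin @ w + b) / np.linalg.norm(w)
    return X_margin[dists <= max_dist], dists

X_margin = np.array([[1.0, 1.0], [0.1, 0.0], [3.0, 4.0]])
kept, dists = prune_margin_vectors(
    X_margin, w=np.array([1.0, 1.0]), b=-1.0, max_dist=1.0)
```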

  3. Impact of Incremental Perfusion Loss on Oxygen Transport in a Capillary Network Mathematical Model.

    PubMed

    Fraser, Graham M; Sharpe, Michael D; Goldman, Daniel; Ellis, Christopher G

    2015-07-01

    To quantify how incremental capillary perfusion loss (PL), such as that seen in experimental models of sepsis, affects tissue oxygenation, a computational model of oxygen transport was applied to capillary networks with dimensions 84 × 168 × 342 (NI) and 70 × 157 × 268 (NII) μm, reconstructed in vivo from rat skeletal muscle. Functional capillary density (FCD) loss was applied incrementally up to ~40% and combined with high tissue oxygen consumption (HC) to simulate severe sepsis. A ~40% FCD loss decreased median tissue PO2 to 22.9 and 20.1 mmHg in NI and NII, compared with 28.1 and 27.5 mmHg under resting conditions. Increasing red blood cell (RBC) supply rate (SR) to baseline levels returned tissue PO2 to within 5% of baseline. HC combined with a 40% FCD loss resulted in tissue anoxia in both network volumes and median tissue PO2 of 11.5 and 8.9 mmHg in NI and NII, respectively; median tissue PO2 recovered to baseline levels when total SR was increased 3-4 fold. These results suggest that a substantial increase in total SR is required to compensate for the impaired oxygen delivery caused by loss of capillary perfusion and increased oxygen consumption during sepsis. © 2015 John Wiley & Sons Ltd.

  4. Developing risk prediction models for kidney injury and assessing incremental value for novel biomarkers.

    PubMed

    Kerr, Kathleen F; Meisner, Allison; Thiessen-Philbrook, Heather; Coca, Steven G; Parikh, Chirag R

    2014-08-07

    The field of nephrology is actively involved in developing biomarkers and improving models for predicting patients' risks of AKI and CKD and their outcomes. However, some important aspects of evaluating biomarkers and risk models are not widely appreciated, and statistical methods are still evolving. This review describes some of the most important statistical concepts for this area of research and identifies common pitfalls. Particular attention is paid to metrics proposed within the last 5 years for quantifying the incremental predictive value of a new biomarker. Copyright © 2014 by the American Society of Nephrology.

  5. Quasi-static incremental behavior of granular materials: Elastic-plastic coupling and micro-scale dissipation

    NASA Astrophysics Data System (ADS)

    Kuhn, Matthew R.; Daouadji, Ali

    2018-05-01

    The paper addresses a common assumption of elastoplastic modeling: that the recoverable, elastic strain increment is unaffected by alterations of the elastic moduli that accompany loading. This assumption is found to be false for a granular material, and discrete element (DEM) simulations demonstrate that granular materials are coupled materials at both micro- and macro-scales. Elasto-plastic coupling at the macro-scale is placed in the context of the thermomechanics framework of Tomasz Hueckel and Hans Ziegler, in which the elastic moduli are altered by irreversible processes during loading. This complex behavior is explored for multi-directional loading probes that follow an initial monotonic loading. An advanced DEM model is used in the study, with non-convex non-spherical particles and two different contact models: a conventional linear-frictional model and an exact implementation of the Hertz-like Cattaneo-Mindlin model. Orthotropic true-triaxial probes were used in the study (i.e., no direct shear strain), with tiny strain increments of 2 × 10⁻⁶. At the micro-scale, contact movements were monitored during small increments of loading and load-reversal, and the results show that these movements are not reversed by a reversal of strain direction, and some contacts that were sliding during a loading increment continue to slide during reversal. The probes show that the coupled part of a strain increment, the difference between the recoverable (elastic) increment and its reversible part, must be considered when partitioning strain increments into elastic and plastic parts. Small increments of irreversible (and plastic) strain and contact slipping and frictional dissipation occur for all directions of loading, and an elastic domain, if it exists at all, is smaller than the strain increment used in the simulations.

  6. Incremental Bayesian Category Learning From Natural Language.

    PubMed

    Frermann, Lea; Lapata, Mirella

    2016-08-01

    Models of category learning have been extensively studied in cognitive science and primarily tested on perceptual abstractions or artificial stimuli. In this paper, we focus on categories acquired from natural language stimuli, that is, words (e.g., chair is a member of the furniture category). We present a Bayesian model that, unlike previous work, learns both categories and their features in a single process. We model category induction as two interrelated subproblems: (a) the acquisition of features that discriminate among categories, and (b) the grouping of concepts into categories based on those features. Our model learns categories incrementally using particle filters, a sequential Monte Carlo method commonly used for approximate probabilistic inference that sequentially integrates newly observed data and can be viewed as a plausible mechanism for human learning. Experimental results show that our incremental learner obtains meaningful categories which yield a closer fit to behavioral data compared to related models while at the same time acquiring features which characterize the learned categories. (An earlier version of this work was published in Frermann and Lapata.) Copyright © 2015 Cognitive Science Society, Inc.
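
    A single particle-filter update of the kind described, reweight by the likelihood of the new observation and then resample when the particle set degenerates, can be sketched as below. The Gaussian likelihood and scalar "category mean" particles are illustrative assumptions, not the paper's model:

```python
import numpy as np

def particle_filter_step(particles, weights, loglik, rng):
    """One sequential Monte Carlo update: reweight particles by the
    likelihood of the newly observed datum, then resample when the
    effective sample size collapses. Particles are hypotheses (here,
    scalar category means)."""
    weights = weights * np.exp(loglik(particles))
    weights = weights / weights.sum()
    ess = 1.0 / np.sum(weights ** 2)          # effective sample size
    if ess < 0.5 * len(particles):            # degeneracy: resample
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

rng = np.random.default_rng(1)
particles = rng.normal(0.0, 2.0, size=100)    # hypothesized category means
weights = np.full(100, 0.01)
obs = 1.5                                     # newly observed feature value
particles, weights = particle_filter_step(
    particles, weights, lambda p: -0.5 * (p - obs) ** 2, rng)
```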

  7. Incremental Refinement of Façade Models with Attribute Grammar from 3D Point Clouds

    NASA Astrophysics Data System (ADS)

    Dehbi, Y.; Staat, C.; Mandtler, L.; Plümer, L.

    2016-06-01

    Data acquisition using unmanned aerial vehicles (UAVs) has attracted increasing attention over the last years. Especially in the field of building reconstruction, the incremental interpretation of such data is a demanding task. In this context formal grammars play an important role for the top-down identification and reconstruction of building objects. Up to now, the available approaches expect offline data in order to parse an a priori known grammar. For mapping on demand, an on-the-fly reconstruction based on UAV data is required, and an incremental interpretation of the data stream is inevitable. This paper presents an incremental parser of grammar rules for automatic 3D building reconstruction. The parser enables model refinement based on new observations with respect to a weighted attribute context-free grammar (WACFG). The falsification or rejection of hypotheses is supported as well. The parser can deal with, and adapt, available parse trees acquired from previous interpretations or predictions. Parse trees derived so far are updated iteratively using transformation rules. A diagnostic step searches for mismatches between current and new nodes. Prior knowledge on façades is incorporated; it is given by probability densities as well as architectural patterns. Since we cannot always assume normal distributions, the derivation of location and shape parameters of building objects is based on kernel density estimation (KDE). While the level of detail is continuously improved, geometrical, semantic and topological consistency is ensured.

  8. The effect of narrow-band noise maskers on increment detection

    PubMed Central

    Messersmith, Jessica J.; Patra, Harisadhan; Jesteadt, Walt

    2010-01-01

    It is often assumed that listeners detect an increment in the intensity of a pure tone by detecting an increase in the energy falling within the critical band centered on the signal frequency. A noise masker can be used to limit the use of signal energy falling outside of the critical band, but facets of the noise may impact increment detection beyond this intended purpose. The current study evaluated the impact of envelope fluctuation in a noise masker on thresholds for detection of an increment. Thresholds were obtained for detection of an increment in the intensity of a 0.25- or 4-kHz pedestal in quiet and in the presence of noise of varying bandwidth. Results indicate that thresholds for detection of an increment in the intensity of a pure tone increase with increasing bandwidth for an on-frequency noise masker, but are unchanged by an off-frequency noise masker. Neither a model that includes a modulation-filter-bank analysis of envelope modulation nor a model based on discrimination of spectral patterns can account for all aspects of the observed data. PMID:21110593

  9. Simulation tools for particle-based reaction-diffusion dynamics in continuous space

    PubMed Central

    2014-01-01

    Particle-based reaction-diffusion algorithms facilitate the modeling of the diffusional motion of individual molecules and the reactions between them in cellular environments. A physically realistic model, depending on the system at hand and the questions asked, may require different levels of modeling detail, such as particle diffusion, geometrical confinement, particle volume exclusion or particle-particle interaction potentials. Higher levels of detail usually correspond to a larger number of parameters and higher computational cost. Certain systems, however, require these investments to be modeled adequately. Here we present a review of the current field of particle-based reaction-diffusion software packages operating in continuous space. Four nested levels of modeling detail are identified that capture increasing amounts of detail. Their applicability to different biological questions is discussed, ranging from straight diffusion simulations to sophisticated and expensive models that bridge towards coarse-grained molecular dynamics. PMID:25737778
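
    At the simpler end of that spectrum, one step of a particle-based scheme (a Doi-type reaction model: diffuse, then let close pairs react with a rate-derived probability) can be sketched as below; the parameters and the A + B -> 0 reaction are illustrative assumptions:

```python
import numpy as np

def brownian_reaction_step(A, B, D, dt, react_radius, k_rate, rng):
    """One step of a particle-based reaction-diffusion sketch: diffuse A
    and B particles by Brownian displacements, then let each A-B pair
    closer than react_radius react (A + B -> 0) with probability
    1 - exp(-k_rate*dt). Returns the surviving positions."""
    A = A + rng.normal(0.0, np.sqrt(2 * D * dt), size=A.shape)
    B = B + rng.normal(0.0, np.sqrt(2 * D * dt), size=B.shape)
    p_react = 1.0 - np.exp(-k_rate * dt)
    gone_a, gone_b = set(), set()
    for i, a in enumerate(A):
        for j, b in enumerate(B):
            if i in gone_a or j in gone_b:
                continue
            if np.linalg.norm(a - b) < react_radius and rng.random() < p_react:
                gone_a.add(i)
                gone_b.add(j)
    keep_a = [i for i in range(len(A)) if i not in gone_a]
    keep_b = [j for j in range(len(B)) if j not in gone_b]
    return A[keep_a], B[keep_b]

rng = np.random.default_rng(0)
A = rng.random((50, 3))
B = rng.random((50, 3))     # toy unit-cube system
A, B = brownian_reaction_step(A, B, D=0.01, dt=0.01, react_radius=0.1,
                              k_rate=50.0, rng=rng)
```

    Geometrical confinement, volume exclusion and interaction potentials, the higher levels of detail the review discusses, would each add further steps to this loop.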

  10. Modelling Tethered Enzymatic Reactions

    NASA Astrophysics Data System (ADS)

    Solis Salas, Citlali; Goyette, Jesse; Coker-Gordon, Nicola; Bridge, Marcus; Isaacson, Samuel; Allard, Jun; Maini, Philip; Dushek, Omer

    Enzymatic reactions are key to cell functioning, and whilst much work has been done on protein interactions where diffusion is possible, the interactions of tethered proteins are poorly understood. Yet, because of the large role cell membranes play in enzymatic reactions, several reactions may take place where one of the proteins is bound to a fixed point in space. We develop a model of tethered signalling in which the phosphatase SHP-1 interacts with a tethered, phosphorylated protein. We compare our model to experimental data obtained using surface plasmon resonance (SPR), and show that a single SPR experiment recovers 5 independent biophysical/biochemical constants. We also compare the results of a three-dimensional model with those of a two-dimensional model. The work offers the opportunity to use known techniques to learn more about signalling processes, and provides new insights into how enzyme tethering alters cellular signalling. With support from the Mexican Council for Science and Technology (CONACyT), the Public Education Secretariat (SEP), and the Mexican National Autonomous University's Foundation (Fundacion UNAM).

  11. New scaling model for variables and increments with heavy-tailed distributions

    NASA Astrophysics Data System (ADS)

    Riva, Monica; Neuman, Shlomo P.; Guadagnini, Alberto

    2015-06-01

    Many hydrological (as well as diverse earth, environmental, ecological, biological, physical, social, financial and other) variables, Y, exhibit frequency distributions that are difficult to reconcile with those of their spatial or temporal increments, ΔY. Whereas distributions of Y (or its logarithm) are at times slightly asymmetric with relatively mild peaks and tails, those of ΔY tend to be symmetric with peaks that grow sharper, and tails that become heavier, as the separation distance (lag) between pairs of Y values decreases. No statistical model known to us captures these behaviors of Y and ΔY in a unified and consistent manner. We propose a new, generalized sub-Gaussian model that does so. We derive analytical expressions for probability distribution functions (pdfs) of Y and ΔY as well as the corresponding leading statistical moments. In our model the peak and tails of the ΔY pdf scale with lag in line with observed behavior. The model allows one to estimate, accurately and efficiently, all relevant parameters by analyzing jointly sample moments of Y and ΔY. We illustrate key features of our new model and method of inference on synthetically generated samples and neutron porosity data from a deep borehole.
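
    The qualitative signature described, increment pdfs whose peaks sharpen and tails grow heavier as the lag shrinks, can be probed with a simple moment statistic. The synthetic series below (heavy-tailed innovations) is only an illustration, not the paper's generalized sub-Gaussian model:

```python
import numpy as np

def increment_kurtosis(y, lag):
    """Excess kurtosis of the lag-increments dY = y[t+lag] - y[t]; for data
    of the kind the paper targets, this grows as the lag shrinks (the tails
    of the increment pdf get heavier at small separation distances)."""
    dy = y[lag:] - y[:-lag]
    dy = dy - dy.mean()
    m2, m4 = np.mean(dy**2), np.mean(dy**4)
    return m4 / m2**2 - 3.0

# Toy series whose small-lag increments are heavy-tailed while large-lag
# increments aggregate toward Gaussian behavior.
rng = np.random.default_rng(2)
y = np.cumsum(rng.standard_t(df=3, size=10_000))
k_small, k_large = increment_kurtosis(y, 1), increment_kurtosis(y, 100)
```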

  12. The Variance Reaction Time Model

    ERIC Educational Resources Information Center

    Sikstrom, Sverker

    2004-01-01

    The variance reaction time model (VRTM) is proposed to account for various recognition data on reaction time, the mirror effect, receiver-operating-characteristic (ROC) curves, etc. The model is based on simple and plausible assumptions within a neural network: VRTM is a two layer neural network where one layer represents items and one layer…

  13. Comparison of the Incremental Validity of the Old and New MCAT.

    ERIC Educational Resources Information Center

    Wolf, Fredric M.; And Others

    The predictive and incremental validity of both the Old and New Medical College Admission Test (MCAT) was examined and compared in a sample of over 300 medical students. Results of zero-order and incremental validity coefficients, as well as prediction models resulting from all-possible-subsets regression analyses using Mallows' Cp criterion,…

  14. Atmospheric response to Saharan dust deduced from ECMWF reanalysis increments

    NASA Astrophysics Data System (ADS)

    Kishcha, P.; Alpert, P.; Barkan, J.; Kirchner, I.; Machenhauer, B.

    2003-04-01

    This study focuses on the atmospheric temperature response to dust deduced from a new source of data - the European Reanalysis (ERA) increments. These increments are the systematic errors of global climate models, generated in the reanalysis procedure. The model errors result not only from the lack of desert dust but also from a complex combination of many kinds of model errors. Over the Sahara desert the dust radiative effect is believed to be a predominant model defect which should significantly affect the increments. This dust effect was examined by considering the correlation between the increments and remotely sensed dust. Comparisons were made between April temporal variations of the ERA analysis increments and the variations of the Total Ozone Mapping Spectrometer aerosol index (AI) between 1979 and 1993. A distinctive structure was identified in the distribution of correlation, composed of three nested areas with high positive correlation (> 0.5), low correlation, and high negative correlation (<-0.5). The innermost positive correlation area (PCA) is a large area near the center of the Sahara desert. For some local maxima inside this area the correlation even exceeds 0.8. The outermost negative correlation area (NCA) is not uniform. It consists of some areas over the eastern and western parts of North Africa with a relatively small amount of dust. Inside those areas both positive and negative high correlations exist at pressure levels ranging from 850 to 700 hPa, with the peak values near 775 hPa. Dust-forced heating (cooling) inside the PCA (NCA) is accompanied by changes in the static stability of the atmosphere above the dust layer. The reanalysis data of the European Centre for Medium-Range Weather Forecasts (ECMWF) suggests that the PCA (NCA) corresponds mainly to anticyclonic (cyclonic) flow, negative (positive) vorticity, and downward (upward) airflow. These facts indicate an interaction between dust-forced heating/cooling and atmospheric circulation. The

  15. 48 CFR 3452.232-71 - Incremental funding.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 7 2013-10-01 2012-10-01 true Incremental funding. 3452....232-71 Incremental funding. As prescribed in 3432.705-2, insert the following provision in solicitations if a cost-reimbursement contract using incremental funding is contemplated: Incremental Funding...

  16. 48 CFR 3452.232-71 - Incremental funding.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 7 2014-10-01 2014-10-01 false Incremental funding. 3452....232-71 Incremental funding. As prescribed in 3432.705-2, insert the following provision in solicitations if a cost-reimbursement contract using incremental funding is contemplated: Incremental Funding...

  17. 48 CFR 3452.232-71 - Incremental funding.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 7 2012-10-01 2012-10-01 false Incremental funding. 3452....232-71 Incremental funding. As prescribed in 3432.705-2, insert the following provision in solicitations if a cost-reimbursement contract using incremental funding is contemplated: Incremental Funding...

  18. 48 CFR 3452.232-71 - Incremental funding.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 7 2011-10-01 2011-10-01 false Incremental funding. 3452....232-71 Incremental funding. As prescribed in 3432.705-2, insert the following provision in solicitations if a cost-reimbursement contract using incremental funding is contemplated: Incremental Funding...

  19. 14 CFR 1260.53 - Incremental funding.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 5 2010-01-01 2010-01-01 false Incremental funding. 1260.53 Section 1260.53 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION GRANTS AND COOPERATIVE AGREEMENTS General Special Conditions § 1260.53 Incremental funding. Incremental Funding October 2000 (a...

  20. 14 CFR 1260.53 - Incremental funding.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 5 2013-01-01 2013-01-01 false Incremental funding. 1260.53 Section 1260.53 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION GRANTS AND COOPERATIVE AGREEMENTS General Special Conditions § 1260.53 Incremental funding. Incremental Funding October 2000 (a...

  1. 14 CFR 1260.53 - Incremental funding.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 5 2012-01-01 2012-01-01 false Incremental funding. 1260.53 Section 1260.53 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION GRANTS AND COOPERATIVE AGREEMENTS General Special Conditions § 1260.53 Incremental funding. Incremental Funding October 2000 (a...

  2. 39 CFR 3050.23 - Documentation supporting incremental cost estimates in the Postal Service's section 3652 report.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... incremental cost model shall be reported. ... 39 Postal Service 1 2010-07-01 2010-07-01 false Documentation supporting incremental cost... REGULATORY COMMISSION PERSONNEL PERIODIC REPORTING § 3050.23 Documentation supporting incremental cost...

  3. 14 CFR 1260.53 - Incremental funding.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 5 2011-01-01 2010-01-01 true Incremental funding. 1260.53 Section 1260.53 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION GRANTS AND COOPERATIVE AGREEMENTS General Special Conditions § 1260.53 Incremental funding. Incremental Funding October 2000 (a) Only $___ of the...

  4. 18 CFR 154.309 - Incremental expansions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Incremental expansions. 154.309 Section 154.309 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION... Changes § 154.309 Incremental expansions. (a) For every expansion for which incremental rates are charged...

  5. 18 CFR 154.309 - Incremental expansions.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 18 Conservation of Power and Water Resources 1 2014-04-01 2014-04-01 false Incremental expansions. 154.309 Section 154.309 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION... Changes § 154.309 Incremental expansions. (a) For every expansion for which incremental rates are charged...

  6. 18 CFR 154.309 - Incremental expansions.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 18 Conservation of Power and Water Resources 1 2011-04-01 2011-04-01 false Incremental expansions. 154.309 Section 154.309 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION... Changes § 154.309 Incremental expansions. (a) For every expansion for which incremental rates are charged...

  7. 18 CFR 154.309 - Incremental expansions.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 18 Conservation of Power and Water Resources 1 2013-04-01 2013-04-01 false Incremental expansions. 154.309 Section 154.309 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION... Changes § 154.309 Incremental expansions. (a) For every expansion for which incremental rates are charged...

  8. 18 CFR 154.309 - Incremental expansions.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 18 Conservation of Power and Water Resources 1 2012-04-01 2012-04-01 false Incremental expansions. 154.309 Section 154.309 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION... Changes § 154.309 Incremental expansions. (a) For every expansion for which incremental rates are charged...

  9. Modelling Chemical Reasoning to Predict and Invent Reactions.

    PubMed

    Segler, Marwin H S; Waller, Mark P

    2017-05-02

The ability to reason beyond established knowledge allows organic chemists to solve synthetic problems and invent novel transformations. Herein, we propose a model that mimics chemical reasoning, and formalises reaction prediction as finding missing links in a knowledge graph. We have constructed a knowledge graph containing 14.4 million molecules and 8.2 million binary reactions, which represents the bulk of all chemical reactions ever published in the scientific literature. Our model outperforms a rule-based expert system in the reaction prediction task for 180 000 randomly selected binary reactions. The data-driven model generalises even beyond known reaction types, and is thus capable of effectively (re-)discovering novel transformations (even including transition metal-catalysed reactions). Our model enables computers to infer hypotheses about reactivity and reactions by considering only the intrinsic local structure of the graph; because each single reaction prediction is typically achieved in a sub-second time frame, the model can be used as a high-throughput generator of reaction hypotheses for reaction discovery. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
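The paper's own model is not reproduced here; as a minimal, generic illustration of framing reaction prediction as finding missing links in a graph, the sketch below ranks candidate pairs with a common-neighbors score, a standard link-prediction heuristic and not the authors' method. All molecule IDs and edges are invented.

```python
# Minimal link-prediction sketch on a toy "reaction" graph.
# Nodes are molecule IDs; an edge A-B means A and B were reported to react.
# The common-neighbors score is a stock heuristic, NOT the paper's model.
from itertools import combinations

edges = [("m1", "m2"), ("m1", "m3"), ("m2", "m4"), ("m3", "m4"), ("m3", "m5")]
adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

def common_neighbors(u, v):
    """Score a missing link u-v by the number of shared reaction partners."""
    return len(adj.get(u, set()) & adj.get(v, set()))

# Rank all non-edges as candidate "undiscovered reactions".
nodes = sorted(adj)
candidates = [(u, v) for u, v in combinations(nodes, 2) if v not in adj[u]]
ranked = sorted(candidates, key=lambda p: -common_neighbors(*p))
print(ranked[0])  # the most plausible missing link under this heuristic
```

A real system would replace the heuristic with a learned scoring function over the graph, but the framing (score non-edges, rank, propose the top ones as hypotheses) is the same.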

  10. 14 CFR 1274.918 - Incremental funding.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 5 2013-01-01 2013-01-01 false Incremental funding. 1274.918 Section 1274... COMMERCIAL FIRMS Other Provisions and Special Conditions § 1274.918 Incremental funding. Incremental Funding... Agreement, as required, until it is fully funded. Any work beyond the funding limit will be at the recipient...

  11. 14 CFR 1274.918 - Incremental funding.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 5 2012-01-01 2012-01-01 false Incremental funding. 1274.918 Section 1274... COMMERCIAL FIRMS Other Provisions and Special Conditions § 1274.918 Incremental funding. Incremental Funding... Agreement, as required, until it is fully funded. Any work beyond the funding limit will be at the recipient...

  12. 14 CFR 1274.918 - Incremental funding.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 5 2010-01-01 2010-01-01 false Incremental funding. 1274.918 Section 1274... COMMERCIAL FIRMS Other Provisions and Special Conditions § 1274.918 Incremental funding. Incremental Funding... Agreement, as required, until it is fully funded. Any work beyond the funding limit will be at the recipient...

  13. 14 CFR 1274.918 - Incremental funding.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 5 2011-01-01 2010-01-01 true Incremental funding. 1274.918 Section 1274... COMMERCIAL FIRMS Other Provisions and Special Conditions § 1274.918 Incremental funding. Incremental Funding... Agreement, as required, until it is fully funded. Any work beyond the funding limit will be at the recipient...

  14. Hardiness scales in Iranian managers: evidence of incremental validity in relationships with the five factor model and with organizational and psychological adjustment.

    PubMed

    Ghorbani, Nima; Watson, P J

    2005-06-01

This study examined the incremental validity of Hardiness scales in a sample of Iranian managers. Along with measures of the Five Factor Model and of Organizational and Psychological Adjustment, Hardiness scales were administered to 159 male managers (M age = 39.9, SD = 7.5) who had worked in their organizations for 7.9 yr. (SD = 5.4). Hardiness predicted greater Job Satisfaction, higher Organization-based Self-esteem, and perceptions of the work environment as being less stressful and constraining. Hardiness also correlated positively with Assertiveness, Emotional Stability, Extraversion, Openness to Experience, Agreeableness, and Conscientiousness and negatively with Depression, Anxiety, Perceived Stress, Chance External Control, and Powerful Others External Control. Evidence of incremental validity was obtained when the Hardiness scales supplemented the Five Factor Model in predicting organizational and psychological adjustment. These data documented the incremental validity of the Hardiness scales in a non-Western sample and thus confirmed once again that Hardiness has a relevance that extends beyond the culture in which it was developed.

15. Applying an Individual-Based Model to Simultaneously Evaluate Net Ecosystem Production and Tree Diameter Increment

    NASA Astrophysics Data System (ADS)

    Fang, F. J.

    2017-12-01

Reconciling observations at fundamentally different scales is central in understanding the global carbon cycle. This study investigates a model-based melding of forest inventory data, remote-sensing data and micrometeorological-station data ("flux towers" estimating forest heat, CO2 and H2O fluxes). The individual tree-based model FORCCHN was used to evaluate the tree DBH increment and forest carbon fluxes. These are the first simultaneous simulations of forest carbon budgets from flux towers and of individual-tree growth from continuous forest inventory data, under circumstances in which both predictions can be tested. Along with the global implications of such findings, this also improves the capacity for sustainable forest management and the comprehensive understanding of forest ecosystems. In forest ecology, diameter at breast height (DBH) of a tree significantly determines an individual tree's cross-sectional sapwood area, its biomass and carbon storage. Evaluating the annual DBH increment (ΔDBH) of an individual tree is central to understanding tree growth and forest ecology. Ecosystem carbon flux is a consequence of key processes in the forest-ecosystem carbon cycle: Gross and Net Primary Production (GPP and NPP, respectively) and Net Ecosystem Production (NEP). All of these closely relate to tree DBH changes and tree death. Despite advances in evaluating forest carbon fluxes with flux towers and forest inventories for individual tree ΔDBH, few current ecological models can simultaneously quantify and predict the tree ΔDBH and forest carbon flux.

  16. International Space Station Increment Operations Services

    NASA Astrophysics Data System (ADS)

    Michaelis, Horst; Sielaff, Christian

    2002-01-01

The Industrial Operator (IO) has defined End-to-End services to perform efficiently all required operations tasks for the Manned Space Program (MSP) as agreed during the Ministerial Council in Edinburgh in November 2001. Those services are the result of a detailed task analysis based on the operations processes as derived from the Space Station Program Implementation Plans (SPIP) and defined in the Operations Processes Documents (OPD). These services are related to ISS Increment Operations and ATV Mission Operations. Each of these End-to-End services is typically characterised by the following properties: It has a clearly defined starting point, where all requirements on the end-product are fixed and associated performance metrics of the customer are well defined. It has a clearly defined ending point, when the product or service is delivered to the customer and accepted by him, according to the performance metrics defined at the start point. The implementation of the process might be restricted by external boundary conditions and constraints mutually agreed with the customer. As long as those are respected, the IO is free to select methods and means of implementation. The ISS Increment Operations Service (IOS) activities required for the MSP Exploitation program cover the complete increment-specific cycle, starting with the support to strategic planning and ending with the post-increment evaluation. These activities are divided into sub-services including the following tasks: - ISS Planning Support covering the support to strategic and tactical planning up to the generation - Development & Payload Integration Support - ISS Increment Preparation - ISS Increment Execution These processes are tied together by the Increment Integration Management, which provides the planning and scheduling of all activities as well as the technical management of the overall process. The paper describes the entire End-to-End ISS Increment Operations service and the

  17. On the validity of the incremental approach to estimate the impact of cities on air quality

    NASA Astrophysics Data System (ADS)

    Thunis, Philippe

    2018-01-01

The question of how much cities are the sources of their own air pollution is not only theoretical, as it is critical to the design of effective strategies for urban air quality planning. In this work, we assess the validity of the commonly used incremental approach to estimate the likely impact of cities on their air pollution. With the incremental approach, the city impact (i.e. the concentration change generated by the city emissions) is estimated as the concentration difference between a rural background and an urban background location, also known as the urban increment. We show that the city impact is in reality made up of the urban increment and two additional components, and consequently two assumptions need to be fulfilled for the urban increment to be representative of the urban impact. The first assumption is that the rural background location is not influenced by emissions from within the city, whereas the second requires that background concentration levels, obtained with zero city emissions, are equal at both locations. Because the urban impact is not measurable, the SHERPA modelling approach, based on a full air quality modelling system, is used in this work to assess the validity of these assumptions for some European cities. Results indicate that for PM2.5, these two assumptions are far from being fulfilled for many large or medium city sizes. For cities of this type, urban increments largely underestimate city impacts. Although results are in better agreement for NO2, similar issues arise. In many situations the incremental approach is therefore not an adequate estimate of the urban impact on air pollution. This poses issues in terms of interpretation when these increments are used to define strategic options in terms of air quality planning. Finally, we illustrate the value of comparing modelled and measured increments to improve our confidence in the model results.
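The decomposition described above can be written out in a few lines. In the toy numbers below (invented for illustration, not from the study), the city impact splits exactly into the measurable urban increment plus two correction terms: the city's influence at the rural site, and the difference in zero-city background levels between the two sites. The two assumptions amount to each correction term being zero.

```python
# Toy decomposition of the "urban increment" vs. the true city impact.
# All concentrations in ug/m3; the numbers are invented for illustration.
C_urban = 18.0   # measured urban background concentration
C_rural = 11.0   # measured rural background concentration
B_urban = 9.0    # urban concentration with city emissions set to zero (model-only)
B_rural = 10.0   # rural concentration with city emissions set to zero (model-only)

urban_increment = C_urban - C_rural   # what the incremental approach reports
city_impact     = C_urban - B_urban   # what we actually want

# Identity: impact = increment + (city influence at rural site) + (background difference)
city_at_rural   = C_rural - B_rural   # assumption 1 says this is ~0
background_diff = B_rural - B_urban   # assumption 2 says this is ~0

assert abs(city_impact - (urban_increment + city_at_rural + background_diff)) < 1e-12
print(urban_increment, city_impact)   # 7.0 vs 9.0: the increment underestimates the impact
```

With either correction term nonzero, the increment and the impact diverge, which is the paper's point for PM2.5.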

  18. Individual tree diameter increment model for managed even-aged stands of ponderosa pine throughout the western United States using a multilevel linear mixed effects model

    Treesearch

    Fabian C.C. Uzoh; William W. Oliver

    2008-01-01

A diameter increment model is developed and evaluated for individual trees of ponderosa pine throughout the species range in the United States using a multilevel linear mixed model. Stochastic variability is broken down among period, locale, plot, tree and within-tree components. Covariates acting at tree and stand level, such as breast height diameter, density, site index...

  19. Space-time quantitative source apportionment of soil heavy metal concentration increments.

    PubMed

    Yang, Yong; Christakos, George; Guo, Mingwu; Xiao, Lu; Huang, Wei

    2017-04-01

Assessing the space-time trends and detecting the sources of heavy metal accumulation in soils have important consequences in the prevention and treatment of soil heavy metal pollution. In this study, we collected soil samples in the eastern part of the Qingshan district, Wuhan city, Hubei Province, China, during the period 2010-2014. The Cd, Cu, Pb and Zn concentrations in soils exhibited a significant accumulation during 2010-2014. The spatiotemporal Kriging technique, based on a quantitative characterization of soil heavy metal concentration variations in terms of non-separable variogram models, was employed to estimate the spatiotemporal soil heavy metal distribution in the study region. Our findings showed that the Cd, Cu, and Zn concentrations have an obvious incremental tendency from the southwestern to the central part of the study region. However, the Pb concentrations exhibited an obvious tendency from the northern part to the central part of the region. Then, spatial overlay analysis was used to obtain absolute and relative concentration increments of adjacent 1- or 5-year periods during 2010-2014. The spatial distribution of soil heavy metal concentration increments showed that the larger increments occurred in the center of the study region. Lastly, principal component analysis combined with multiple linear regression was employed to quantify the source apportionment of the soil heavy metal concentration increments in the region. Our results led to the conclusion that the sources of soil heavy metal concentration increments should be ascribed to industry, agriculture and traffic. In particular, 82.5% of the soil heavy metal concentration increment during 2010-2014 was ascribed to industrial/agricultural sources. Using STK and SOA to obtain the spatial distribution of heavy metal concentration increments in soils. Using PCA-MLR to quantify the source apportionment of soil heavy metal concentration increments. Copyright © 2017
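The PCA-MLR step can be sketched generically: extract leading principal components from the increment data, then regress a total-increment target on the component scores. Everything below is synthetic; the two latent "sources" and the metal loadings are invented and only stand in for the study's industry/agriculture/traffic factors.

```python
# Sketch of PCA followed by multiple linear regression on synthetic data.
# The latent "sources" and loadings are invented, not from the study.
import numpy as np

rng = np.random.default_rng(0)
n = 200
industry = rng.normal(size=n)
traffic = rng.normal(size=n)
# Four "metal increment" variables driven by two latent sources plus noise.
X = np.column_stack([
    1.0 * industry + 0.1 * rng.normal(size=n),   # Cd
    0.9 * industry + 0.1 * rng.normal(size=n),   # Cu
    0.8 * traffic  + 0.1 * rng.normal(size=n),   # Pb
    0.7 * traffic  + 0.1 * rng.normal(size=n),   # Zn
])

# PCA via SVD of the standardized data.
Z = (X - X.mean(0)) / X.std(0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
scores = U[:, :2] * s[:2]            # two leading principal-component scores

# Regress a total-increment target on the PC scores (ordinary least squares).
y = X.sum(axis=1)
A = np.column_stack([np.ones(n), scores])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
explained = (s[:2] ** 2).sum() / (s ** 2).sum()
print(f"variance explained by 2 PCs: {explained:.2f}")
```

In an apportionment study the regression coefficients (scaled back to concentration units) give each factor's share of the total increment.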

  20. 40 CFR 60.2595 - What if I do not meet an increment of progress?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... or Before November 30, 1999 Model Rule-Increments of Progress § 60.2595 What if I do not meet an... Administrator postmarked within 10 business days after the date for that increment of progress in table 1 of...

  1. Greedy Sampling and Incremental Surrogate Model-Based Tailoring of Aeroservoelastic Model Database for Flexible Aircraft

    NASA Technical Reports Server (NTRS)

    Wang, Yi; Pant, Kapil; Brenner, Martin J.; Ouellette, Jeffrey A.

    2018-01-01

This paper presents a data analysis and modeling framework to tailor and develop a linear parameter-varying (LPV) aeroservoelastic (ASE) model database for flexible aircraft in a broad 2D flight parameter space. The Kriging surrogate model is constructed using ASE models at a fraction of grid points within the original model database, and then the ASE model at any flight condition can be obtained simply through surrogate model interpolation. The greedy sampling algorithm is developed to select the next sample point that carries the worst relative error between the surrogate model prediction and the benchmark model in the frequency domain among all input-output channels. The process is iterated to incrementally improve surrogate model accuracy until a pre-determined tolerance or iteration budget is met. The methodology is applied to the ASE model database of a flexible aircraft currently being tested at NASA/AFRC for flutter suppression and gust load alleviation. Our studies indicate that the proposed method can reduce the number of models in the original database by 67%. Even so, the ASE models obtained through Kriging interpolation match the models in the original database constructed directly from the physics-based tool with the worst relative error far below 1%. The interpolated ASE model exhibits continuously-varying gains along a set of prescribed flight conditions. More importantly, the selected grid points are distributed non-uniformly in the parameter space, a) capturing the distinctly different dynamic behavior and its dependence on flight parameters, and b) reiterating the need and utility for adaptive space sampling techniques for ASE model database compaction. The present framework is directly extendible to high-dimensional flight parameter space, and can be used to guide the ASE model development, model order reduction, robust control synthesis and novel vehicle design of flexible aircraft.
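The greedy loop described above (fit surrogate on a subset, find the grid point with the worst relative error against the benchmark, add it, repeat until tolerance) can be shown on a 1D toy problem. A simple sine function stands in for the model database, and plain linear interpolation stands in for Kriging; both are simplifications, not the paper's setup.

```python
# Greedy sampling sketch: iteratively add the grid point where the surrogate's
# relative error against the "benchmark" model is worst. A 1D toy function
# stands in for the ASE model database; np.interp stands in for Kriging.
import numpy as np

grid = np.linspace(0.0, 1.0, 101)          # flight-parameter grid (toy)
truth = np.sin(6 * grid) + 0.3 * grid      # benchmark "model database" responses

sampled = [0, len(grid) - 1]               # start from the domain corners
tol = 0.01
while True:
    xs, ys = grid[sampled], truth[sampled]
    order = np.argsort(xs)
    pred = np.interp(grid, xs[order], ys[order])       # surrogate prediction
    rel_err = np.abs(pred - truth) / (np.abs(truth).max() + 1e-12)
    worst = int(np.argmax(rel_err))
    if rel_err[worst] < tol:
        break
    sampled.append(worst)                  # greedily add the worst-error point

print(f"kept {len(sampled)} of {len(grid)} grid points")
```

Note that the kept points cluster where the response curves most, which mirrors the paper's observation that the selected grid points are distributed non-uniformly in the parameter space.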

  2. Broad Learning System: An Effective and Efficient Incremental Learning System Without the Need for Deep Architecture.

    PubMed

    Chen, C L Philip; Liu, Zhulin

    2018-01-01

The Broad Learning System (BLS), which aims to offer an alternative way of learning in deep structure, is proposed in this paper. Deep structure and learning suffer from a time-consuming training process because of a large number of connecting parameters in filters and layers. Moreover, they encounter a complete retraining process if the structure is not sufficient to model the system. The BLS is established in the form of a flat network, where the original inputs are transferred and placed as "mapped features" in feature nodes and the structure is expanded in a wide sense in the "enhancement nodes." Incremental learning algorithms are developed for fast remodeling in broad expansion without a retraining process when the network needs to be expanded. Two incremental learning algorithms are given, for the increment of the feature nodes (or filters in deep structure) and for the increment of the enhancement nodes. The designed model and algorithms are very versatile for selecting a model rapidly. In addition, a further incremental learning algorithm is developed for the case in which an already-modeled system encounters new incoming inputs. Specifically, the system can be remodeled in an incremental way without retraining entirely from the beginning. Model reduction using singular value decomposition is conducted to simplify the final structure, with satisfactory results. Compared with existing deep neural networks, experimental results on the Modified National Institute of Standards and Technology database and the NYU NORB object recognition benchmark dataset demonstrate the effectiveness of the proposed BLS.
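A heavily simplified BLS-flavored sketch: random "mapped feature" nodes feed random "enhancement" nodes, and only the output weights are solved, here by a pseudo-inverse. For clarity the weights are re-solved from scratch after the broad expansion; the paper's actual contribution is an incremental update that avoids exactly this recomputation. All sizes and data are invented.

```python
# Simplified broad-network sketch: mapped feature nodes + enhancement nodes,
# output weights via pseudo-inverse. Expansion re-solves the weights, which
# the paper's incremental algorithms avoid; this is only for illustration.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))
y = (X[:, 0] * X[:, 1] + np.sin(X[:, 2])).reshape(-1, 1)

def features(X, W_map, W_enh):
    Zf = np.tanh(X @ W_map)           # mapped feature nodes
    He = np.tanh(Zf @ W_enh)          # enhancement nodes
    return np.hstack([Zf, He])

W_map = rng.normal(size=(5, 20))
W_enh = rng.normal(size=(20, 10))
A = features(X, W_map, W_enh)
W_out = np.linalg.pinv(A) @ y
err_before = float(np.mean((A @ W_out - y) ** 2))

# "Broad expansion": append 30 more enhancement nodes, re-solve output weights.
W_enh2 = np.hstack([W_enh, rng.normal(size=(20, 30))])
A2 = features(X, W_map, W_enh2)
W_out2 = np.linalg.pinv(A2) @ y
err_after = float(np.mean((A2 @ W_out2 - y) ** 2))
print(err_before, err_after)   # expansion cannot increase the training error
```

Since the expanded feature matrix contains the original columns, the least-squares residual can only shrink, which is why widening the flat network is a safe remodeling step.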

  3. 48 CFR 3452.232-71 - Incremental funding.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 7 2010-10-01 2010-10-01 false Incremental funding. 3452... 3452.232-71 Incremental funding. As prescribed in 3452.771, insert the following provision in solicitations: Incremental Funding (AUG 1987) (a) Sufficient funds are not presently available to cover the...

  4. Anomalous Impact in Reaction-Diffusion Financial Models

    NASA Astrophysics Data System (ADS)

    Mastromatteo, I.; Tóth, B.; Bouchaud, J.-P.

    2014-12-01

We generalize the reaction-diffusion model A + B → ∅ in order to study the impact of an excess of A (or B) at the reaction front. We provide an exact solution of the model, which shows that the linear response breaks down: the average displacement of the reaction front grows as the square root of the imbalance. We argue that this model provides a highly simplified but generic framework to understand the square-root impact of large orders in financial markets.

  5. Propulsive Reaction Control System Model

    NASA Technical Reports Server (NTRS)

    Brugarolas, Paul; Phan, Linh H.; Serricchio, Frederick; San Martin, Alejandro M.

    2011-01-01

    This software models a propulsive reaction control system (RCS) for guidance, navigation, and control simulation purposes. The model includes the drive electronics, the electromechanical valve dynamics, the combustion dynamics, and thrust. This innovation follows the Mars Science Laboratory entry reaction control system design, and has been created to meet the Mars Science Laboratory (MSL) entry, descent, and landing simulation needs. It has been built to be plug-and-play on multiple MSL testbeds [analysis, Monte Carlo, flight software development, hardware-in-the-loop, and ATLO (assembly, test and launch operations) testbeds]. This RCS model is a C language program. It contains two main functions: the RCS electronics model function that models the RCS FPGA (field-programmable-gate-array) processing and commanding of the RCS valve, and the RCS dynamic model function that models the valve and combustion dynamics. In addition, this software provides support functions to initialize the model states, set parameters, access model telemetry, and access calculated thruster forces.

  6. 14 CFR § 1260.53 - Incremental funding.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 5 2014-01-01 2014-01-01 false Incremental funding. § 1260.53 Section § 1260.53 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION GRANTS AND COOPERATIVE AGREEMENTS General Special Conditions § 1260.53 Incremental funding. Incremental Funding October 2000 (a...

  7. A Model Based Approach to Increase the Part Accuracy in Robot Based Incremental Sheet Metal Forming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meier, Horst; Laurischkat, Roman; Zhu Junhong

One main influence on the dimensional accuracy in robot based incremental sheet metal forming results from the compliance of the involved robot structures. Compared to conventional machine tools, the low stiffness of the robot's kinematics results in a significant deviation of the planned tool path and therefore in a shape of insufficient quality. To predict and compensate for these deviations offline, a model based approach, consisting of a finite element approach to simulate the sheet forming and a multi body system modeling the compliant robot structure, has been developed. This paper describes the implementation and experimental verification of the multi body system model and its included compensation method.

  8. 14 CFR § 1274.918 - Incremental funding.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

... 14 Aeronautics and Space 5 2014-01-01 2014-01-01 false Incremental funding. § 1274.918 Section ... WITH COMMERCIAL FIRMS Other Provisions and Special Conditions § 1274.918 Incremental funding. Incremental Funding July 2002 (a) Of the award amount indicated on the cover page of this Agreement, only the...

  9. Thermodynamically Feasible Kinetic Models of Reaction Networks

    PubMed Central

    Ederer, Michael; Gilles, Ernst Dieter

    2007-01-01

The dynamics of biological reaction networks are strongly constrained by thermodynamics. A holistic understanding of their behavior and regulation requires mathematical models that observe these constraints. However, kinetic models may easily violate the constraints imposed by the principle of detailed balance, if no special care is taken. Detailed balance demands that in thermodynamic equilibrium all fluxes vanish. We introduce a thermodynamic-kinetic modeling (TKM) formalism that adapts the concepts of potentials and forces from irreversible thermodynamics to kinetic modeling. In the proposed formalism, the thermokinetic potential of a compound is proportional to its concentration. The proportionality factor is a compound-specific parameter called capacity. The thermokinetic force of a reaction is a function of the potentials. Every reaction has a resistance that is the ratio of thermokinetic force and reaction rate. For mass-action type kinetics, the resistances are constant. Since it relies on the thermodynamic concept of potentials and forces, the TKM formalism structurally observes detailed balance for all values of capacities and resistances. Thus, it provides an easy way to formulate physically feasible, kinetic models of biological reaction networks. The TKM formalism is useful for modeling large biological networks that are subject to many detailed balance relations. PMID:17208985
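The detailed-balance property can be illustrated numerically on a toy linear chain A ↔ B ↔ C: potentials proportional to concentrations (u = c / capacity), the force of each reaction taken as the potential difference, and rate = force / resistance. The capacities and resistances below are invented, not values from the paper; the point is only that every individual flux vanishes once the potentials equalize, for any choice of these parameters.

```python
# Toy thermodynamic-kinetic model of a linear chain A <-> B <-> C.
# Potentials u_i = c_i / C_i (capacity C_i); rate_j = force_j / resistance_j,
# with force = substrate potential - product potential. Capacities and
# resistances are invented for illustration.
import numpy as np

cap = np.array([2.0, 1.0, 4.0])        # capacities of A, B, C
res = np.array([0.5, 2.0])             # resistances of A<->B and B<->C

def rates(c):
    u = c / cap                        # thermokinetic potentials
    return np.array([(u[0] - u[1]) / res[0], (u[1] - u[2]) / res[1]])

# Integrate dc/dt = S @ r with simple explicit Euler steps.
S = np.array([[-1.0, 0.0], [1.0, -1.0], [0.0, 1.0]])
c = np.array([1.0, 1.0, 1.0])
for _ in range(100000):
    c = c + 0.001 * S @ rates(c)

# Detailed balance: at equilibrium every individual flux vanishes.
print(rates(c))   # both fluxes ~ 0
```

At equilibrium the potentials are equal, so concentrations settle in proportion to the capacities while total mass is conserved; no choice of resistances can produce a nonzero equilibrium flux.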

  10. Atmospheric response to Saharan dust deduced from ECMWF reanalysis (ERA) temperature increments

    NASA Astrophysics Data System (ADS)

    Kishcha, P.; Alpert, P.; Barkan, J.; Kirchner, I.; Machenhauer, B.

    2003-09-01

This study focuses on the atmospheric temperature response to dust deduced from a new source of data, the European Reanalysis (ERA) increments. These increments are the systematic errors of global climate models, generated in the reanalysis procedure. The model errors result not only from the lack of desert dust but also from a complex combination of many kinds of model errors. Over the Sahara desert the lack of dust radiative effect is believed to be a predominant model defect which should significantly affect the increments. This dust effect was examined by considering the correlation between the increments and remotely sensed dust. Comparisons were made between April temporal variations of the ERA analysis increments and the variations of the Total Ozone Mapping Spectrometer aerosol index (AI) between 1979 and 1993. The distinctive structure was identified in the distribution of correlation composed of three nested areas with high positive correlation (>0.5), low correlation and high negative correlation (<-0.5). The innermost positive correlation area (PCA) is a large area near the center of the Sahara desert. For some local maxima inside this area the correlation even exceeds 0.8. The outermost negative correlation area (NCA) is not uniform. It consists of some areas over the eastern and western parts of North Africa with a relatively small amount of dust. Inside those areas both positive and negative high correlations exist at pressure levels ranging from 850 to 700 hPa, with the peak values near 775 hPa. Dust-forced heating (cooling) inside the PCA (NCA) is accompanied by changes in the static instability of the atmosphere above the dust layer. The reanalysis data of the European Center for Medium Range Weather Forecast (ECMWF) suggest that the PCA (NCA) corresponds mainly to anticyclonic (cyclonic) flow, negative (positive) vorticity and downward (upward) airflow. These findings are associated with the interaction between dust-forced heating/cooling and
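The screening described above reduces, per grid cell, to a Pearson correlation between the April increment series and the AI series across years, thresholded at ±0.5 to label positive (PCA), negative (NCA), or low-correlation areas. A toy version with synthetic 15-year series (no real ERA or TOMS data):

```python
# Toy version of the increment-vs-aerosol-index correlation screening.
# Synthetic 15-year series stand in for ERA increments and TOMS AI.
import numpy as np

rng = np.random.default_rng(2)
years = 15
ai = rng.normal(size=years)            # aerosol index, one value per April

def classify(increments, ai, thresh=0.5):
    r = np.corrcoef(increments, ai)[0, 1]
    if r > thresh:
        return "PCA"                   # positive correlation area
    if r < -thresh:
        return "NCA"                   # negative correlation area
    return "low"

dusty_cell = 0.9 * ai + 0.2 * rng.normal(size=years)    # heating tracks dust
margin_cell = -0.9 * ai + 0.2 * rng.normal(size=years)  # cooling tracks dust
print(classify(dusty_cell, ai), classify(margin_cell, ai))
```

The study applies this per grid cell and pressure level, which is how the nested PCA/NCA spatial structure emerges.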

  11. Reduction of chemical reaction models

    NASA Technical Reports Server (NTRS)

    Frenklach, Michael

    1991-01-01

    An attempt is made to reconcile the different terminologies pertaining to reduction of chemical reaction models. The approaches considered include global modeling, response modeling, detailed reduction, chemical lumping, and statistical lumping. The advantages and drawbacks of each of these methods are pointed out.

  12. A dual-process model of reactions to perceived stigma.

    PubMed

    Pryor, John B; Reeder, Glenn D; Yeadon, Christopher; Hesson-McInnis, Matthew

    2004-10-01

    The authors propose a theoretical model of individual psychological reactions to perceived stigma. This model suggests that 2 psychological systems may be involved in reactions to stigma across a variety of social contexts. One system is primarily reflexive, or associative, whereas the other is rule based, or reflective. This model assumes a temporal pattern of reactions to the stigmatized, such that initial reactions are governed by the reflexive system, whereas subsequent reactions or "adjustments" are governed by the rule-based system. Support for this model was found in 2 studies. Both studies examined participants' moment-by-moment approach-avoidance reactions to the stigmatized. The 1st involved participants' reactions to persons with HIV/AIDS, and the 2nd, participants' reactions to 15 different stigmatizing conditions. (c) 2004 APA, all rights reserved

  13. Modeling of uncertainties in biochemical reactions.

    PubMed

    Mišković, Ljubiša; Hatzimanikatis, Vassily

    2011-02-01

    Mathematical modeling is an indispensable tool for research and development in biotechnology and bioengineering. The formulation of kinetic models of biochemical networks depends on knowledge of the kinetic properties of the enzymes of the individual reactions. However, kinetic data acquired from experimental observations bring along uncertainties due to various experimental conditions and measurement methods. In this contribution, we propose a novel way to model the uncertainty in the enzyme kinetics and to predict quantitatively the responses of metabolic reactions to the changes in enzyme activities under uncertainty. The proposed methodology accounts explicitly for mechanistic properties of enzymes and physico-chemical and thermodynamic constraints, and is based on formalism from systems theory and metabolic control analysis. We achieve this by observing that kinetic responses of metabolic reactions depend: (i) on the distribution of the enzymes among their free form and all reactive states; (ii) on the equilibrium displacements of the overall reaction and that of the individual enzymatic steps; and (iii) on the net fluxes through the enzyme. Relying on this observation, we develop a novel, efficient Monte Carlo sampling procedure to generate all states within a metabolic reaction that satisfy imposed constraints. Thus, we derive the statistics of the expected responses of the metabolic reactions to changes in enzyme levels and activities, in the levels of metabolites, and in the values of the kinetic parameters. We present aspects of the proposed framework through an example of the fundamental three-step reversible enzymatic reaction mechanism. We demonstrate that the equilibrium displacements of the individual enzymatic steps have an important influence on kinetic responses of the enzyme. Furthermore, we derive the conditions that must be satisfied by a reversible three-step enzymatic reaction operating far away from the equilibrium in order to respond to
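The constrained-sampling idea can be sketched in a few lines. This is a minimal illustration, not the authors' procedure: it assumes a hypothetical three-state enzyme (free, substrate-bound, product-bound), draws state distributions uniformly from the simplex, and keeps only samples satisfying an invented displacement constraint.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_enzyme_states(n_samples, gamma_max=0.9):
    """Rejection-sample fractional enzyme-state distributions for a
    hypothetical three-state mechanism (E, ES, EP), keeping only samples
    whose invented displacement measure gamma stays below gamma_max."""
    kept = []
    while len(kept) < n_samples:
        # fractions of enzyme in free, substrate-bound, product-bound form;
        # Dirichlet(1,1,1) is uniform on the simplex, so fractions sum to 1
        fractions = rng.dirichlet([1.0, 1.0, 1.0])
        # hypothetical displacement: product-bound share of the bound enzyme
        gamma = fractions[2] / (fractions[1] + fractions[2])
        if gamma < gamma_max:
            kept.append(fractions)
    return np.array(kept)

states = sample_enzyme_states(1000)
```

Statistics of any kinetic response expressed in terms of these state fractions can then be accumulated over the accepted samples.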

  14. Model Experiment of Thermal Runaway Reactions Using the Aluminum-Hydrochloric Acid Reaction

    ERIC Educational Resources Information Center

    Kitabayashi, Suguru; Nakano, Masayoshi; Nishikawa, Kazuyuki; Koga, Nobuyoshi

    2016-01-01

    A laboratory exercise for the education of students about thermal runaway reactions based on the reaction between aluminum and hydrochloric acid as a model reaction is proposed. In the introductory part of the exercise, the induction period and subsequent thermal runaway behavior are evaluated via a simple observation of hydrogen gas evolution and…

  15. Individual tree height increment model for managed even-aged stands of ponderosa pine throughout the western United States using linear mixed effects models

    Treesearch

    Fabian Uzoh; William W. Oliver

    2006-01-01

    A height increment model is developed and evaluated for individual trees of ponderosa pine throughout the species range in western United States. The data set used in this study came from long-term permanent research plots in even-aged, pure stands both planted and of natural origin. The data base consists of six levels-of-growing stock studies supplemented by initial...

  16. Mechanism of multinucleon transfer reaction based on the GRAZING model and DNS model

    NASA Astrophysics Data System (ADS)

    Wen, Pei-wei; Li, Cheng; Zhu, Long; Lin, Cheng-jian; Zhang, Feng-shou

    2017-11-01

    Multinucleon transfer (MNT) reactions have previously been studied with either the GRAZING model or the dinuclear system (DNS) model. MNT reactions in the grazing regime are described quite well by the GRAZING model, while the DNS model is able to deal with MNT reactions that happen in the more closely overlapped regime after contact of the two colliding nuclei. Since MNT reactions can happen in both regimes and the two cannot be distinguished experimentally, it is beneficial to compare these two models to clarify the mechanism of MNT reactions. In this study, the mechanism of the MNT reaction has been studied by comparing the GRAZING model and the DNS model for the first time. The reaction systems 136Xe+208Pb at Ec.m. = 450 MeV and 64Ni+238U at Ec.m. = 307 MeV are taken as examples. It is found that the gradients of the transfer cross sections with respect to the impact parameter for the GRAZING model and the DNS model are mainly concentrated in two different areas, which represent two different transfer mechanisms. The theoretical frameworks of the two models are mutually exclusive according to whether capture happens, which guarantees that the results calculated by the two models have no overlap and can be added up. Results indicate that the description of experimental MNT reaction cross sections can be significantly improved if calculations from both the GRAZING model and the DNS model are considered.
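The additivity argument can be illustrated numerically. The transfer probabilities below are invented placeholders, not GRAZING or DNS output; the sketch only shows how contributions from non-overlapping impact-parameter regions combine into a total cross section via sigma = integral of 2*pi*b*P(b) db.

```python
import numpy as np

# hypothetical transfer probabilities vs impact parameter b (fm): a DNS-like
# mechanism dominating small b (capture regime) and a GRAZING-like mechanism
# peaked in the grazing region; the two regions do not overlap
b = np.linspace(0.0, 14.0, 1401)
p_dns = np.where(b < 9.0, 0.3, 0.0)                        # overlapped regime
p_grazing = 0.2 * np.exp(-0.5 * ((b - 11.0) / 1.0) ** 2)   # grazing peak

def cross_section_mb(b_fm, prob):
    """sigma = integral 2*pi*b*P(b) db, converted from fm^2 to mb
    (1 fm^2 = 10 mb), via a simple Riemann sum."""
    db = b_fm[1] - b_fm[0]
    return 10.0 * float(np.sum(2.0 * np.pi * b_fm * prob) * db)

# because the mechanisms are exclusive in b, their cross sections add up
sigma_total = cross_section_mb(b, p_dns) + cross_section_mb(b, p_grazing)
```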

  17. Mathematical model to predict drivers' reaction speeds.

    PubMed

    Long, Benjamin L; Gillespie, A Isabella; Tanaka, Martin L

    2012-02-01

    Mental distractions and physical impairments can increase the risk of accidents by affecting a driver's ability to control the vehicle. In this article, we developed a linear mathematical model that can be used to quantitatively predict drivers' performance over a variety of possible driving conditions. Predictions were not limited only to the conditions tested, but also included linear combinations of these test conditions. Two groups of 12 participants were evaluated using a custom drivers' reaction speed testing device to evaluate the effect of cell phone talking, texting, and a fixed knee brace on the components of drivers' reaction speed. Cognitive reaction time was found to increase by 24% for cell phone talking and 74% for texting. The fixed knee brace increased musculoskeletal reaction time by 24%. These experimental data were used to develop a mathematical model to predict reaction speed for an untested condition: talking on a cell phone with a fixed knee brace. The model was verified by comparing the predicted reaction speed to measured experimental values from an independent test. The model predicted full braking time within 3% of the measured value. Although only a few influential conditions were evaluated, we present a general approach that can be expanded to include other types of distractions, impairments, and environmental conditions.
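The linear combination of component slowdowns can be sketched as follows. The baseline component times are hypothetical placeholders; only the fractional increases (24%, 74%, 24%) come from the abstract, and the additive structure is an illustrative reading of the linear model, not the authors' exact formulation.

```python
# hypothetical baseline component times in seconds (placeholders)
BASE_COGNITIVE = 0.50
BASE_MUSCULOSKELETAL = 0.30

# fractional increases reported in the abstract, mapped to components
EFFECTS = {
    "talking": ("cognitive", 0.24),
    "texting": ("cognitive", 0.74),
    "knee_brace": ("musculoskeletal", 0.24),
}

def predict_reaction_time(conditions):
    """Linearly combine component-wise slowdowns for a set of conditions,
    e.g. the untested combination ['talking', 'knee_brace']."""
    cognitive = BASE_COGNITIVE
    musculoskeletal = BASE_MUSCULOSKELETAL
    for name in conditions:
        component, frac = EFFECTS[name]
        if component == "cognitive":
            cognitive += BASE_COGNITIVE * frac
        else:
            musculoskeletal += BASE_MUSCULOSKELETAL * frac
    return cognitive + musculoskeletal

t = predict_reaction_time(["talking", "knee_brace"])
```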

  18. Robot-based additive manufacturing for flexible die-modelling in incremental sheet forming

    NASA Astrophysics Data System (ADS)

    Rieger, Michael; Störkle, Denis Daniel; Thyssen, Lars; Kuhlenkötter, Bernd

    2017-10-01

    The paper describes the application concept of additively manufactured dies to support the robot-based incremental sheet metal forming process ("Roboforming") for the production of sheet metal components in small batch sizes. Compared to the dieless kinematic-based generation of a shape by means of two cooperating industrial robots, the supporting robot models a die on the back of the metal sheet by using the robot-based fused layer manufacturing process (FLM). This tool chain is software-defined and preserves the high geometrical form flexibility of Roboforming while flexibly generating support structures adapted to the final part's geometry. Test series serve to confirm the feasibility of the concept by investigating the process challenges of adhesion to the sheet surface and general stability, as well as the influence on geometric accuracy compared to the well-known forming strategies.

  19. 40 CFR 60.1615 - How do I comply with the increment of progress for awarding contracts?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emission..., 1999 Model Rule-Increments of Progress § 60.1615 How do I comply with the increment of progress for...

  20. Information-processing under incremental levels of physical loads: comparing racquet to combat sports.

    PubMed

    Mouelhi Guizani, S; Tenenbaum, G; Bouzaouach, I; Ben Kheder, A; Feki, Y; Bouaziz, M

    2006-06-01

    Skillful performance in combat and racquet sports consists of proficient technique accompanied by efficient information-processing while engaged in moderate to high physical effort. This study examined information processing and decision-making using simple reaction time (SRT) and choice reaction time (CRT) paradigms in athletes of combat and racquet sports while undergoing incrementally increasing physical effort ranging from low to high intensities. Forty experienced national-level athletes in the sports of tennis, table tennis, fencing, and boxing were selected for this study. Each subject performed both simple (SRT) and four-choice reaction time (4-CRT) tasks at rest and while pedaling on a cycle ergometer at 20%, 40%, 60%, and 80% of their own maximal aerobic power (Pmax). RM MANCOVA revealed a significant sport-type by physical-load interaction effect, mainly on CRT. Least significant difference (LSD) post hoc contrasts indicated that fencers and tennis players process information faster with incrementally increasing workload, while different patterns were obtained for boxers and table-tennis players. The error rate remained stable for each sport type over all conditions. Between-sport differences in SRT and CRT among the athletes were also noted. Findings provide evidence that the 4-CRT is a task that more closely corresponds to the original tasks athletes are familiar with and utilize in their practices and competitions. However, additional tests that mimic the real-world experiences of each sport must be developed and used to capture the nature of information processing and response selection in specific sports.

  1. Numerical simulation of pseudoelastic shape memory alloys using the large time increment method

    NASA Astrophysics Data System (ADS)

    Gu, Xiaojun; Zhang, Weihong; Zaki, Wael; Moumni, Ziad

    2017-04-01

    The paper presents a numerical implementation of the large time increment (LATIN) method for the simulation of shape memory alloys (SMAs) in the pseudoelastic range. The method was initially proposed as an alternative to the conventional incremental approach for the integration of nonlinear constitutive models. It is adapted here for the simulation of pseudoelastic SMA behavior using the Zaki-Moumni model and is shown to be especially useful in situations where the phase transformation process presents little or no hardening. In these situations, a slight stress variation in a load increment can result in large variations of strain and local state variables, which may lead to difficulties in numerical convergence. In contrast to the conventional incremental method, the LATIN method solves the global equilibrium and local consistency conditions sequentially for the entire loading path. The achieved solution must satisfy the conditions of static and kinematic admissibility and consistency simultaneously after several iterations. A 3D numerical implementation is accomplished using an implicit algorithm and is then used for finite element simulation with the software Abaqus. Computational tests demonstrate the ability of this approach to simulate SMAs presenting flat phase transformation plateaus and subjected to complex loading cases, such as the quasi-static behavior of a stent structure. Some numerical results are contrasted with those obtained using step-by-step incremental integration.

  2. Incremental Support Vector Machine Framework for Visual Sensor Networks

    NASA Astrophysics Data System (ADS)

    Awad, Mariette; Jiang, Xianhua; Motai, Yuichi

    2006-12-01

    Motivated by the emerging requirements of surveillance networks, we present in this paper an incremental multiclassification support vector machine (SVM) technique as a new framework for action classification based on real-time multivideo collected by homogeneous sites. The technique is based on an adaptation of the least squares SVM (LS-SVM) formulation but extends beyond the static image-based learning of current SVM methodologies. In applying the technique, an initial supervised offline learning phase is followed by visual behavior data acquisition and an online learning phase, during which the cluster head performs an ensemble of model aggregations based on the sensor node inputs. The cluster head then selectively switches on designated sensor nodes for future incremental learning. Combining sensor data offers an improvement over single-camera sensing, especially when the latter has an occluded view of the target object. The optimization involved alleviates the burdens of power consumption and communication bandwidth requirements. The resulting misclassification error rate, the iterative error reduction rate of the proposed incremental learning, and the decision fusion technique prove its validity when applied to visual sensor networks. Furthermore, the enabled online learning allows adaptive domain knowledge insertion and offers the advantage of reducing both the model training time and the information storage requirements of the overall system, which makes it even more attractive for distributed sensor network communication.

  3. 40 CFR 60.2590 - When must I submit the notifications of achievement of increments of progress?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Commenced Construction On or Before November 30, 1999 Model Rule-Increments of Progress § 60.2590 When must... increments of progress must be postmarked no later than 10 business days after the compliance date for the...

  4. 40 CFR 60.2575 - What are my requirements for meeting increments of progress and achieving final compliance?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY... Units Model Rule-Increments of Progress § 60.2575 What are my requirements for meeting increments of...

  5. 40 CFR 60.2575 - What are my requirements for meeting increments of progress and achieving final compliance?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY... Units Model Rule-Increments of Progress § 60.2575 What are my requirements for meeting increments of...

  6. 40 CFR 60.2575 - What are my requirements for meeting increments of progress and achieving final compliance?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY... Units Model Rule-Increments of Progress § 60.2575 What are my requirements for meeting increments of...

  7. Study of Reaction Mechanism in Tracer Munitions

    DTIC Science & Technology

    1974-12-01

    Effect of Fuel Particle Size on Reaction Zone Thickness 39 10 Temperature Distribution in Solid 41 11 Computed Reaction Rates as Function of Heat Flux...dissociation (cal/g) R = gas constant (cal/mole K) r = radius of fuel droplet (cm) s or x = distance increments in solid phase (cm) T = surface temperature...of solid (°K) T = average temperature in the reaction zone (°K) t = time (sec) tb = evaporation time for droplet (sec) v = regression or burning

  8. Multiscale Characterization of the Probability Density Functions of Velocity and Temperature Increment Fields

    NASA Astrophysics Data System (ADS)

    DeMarco, Adam Ward

    The turbulent motions within the atmospheric boundary layer exist over a wide range of spatial and temporal scales and are very difficult to characterize. Thus, to explore the behavior of such complex flow environments, it is customary to examine their properties from a statistical perspective. Utilizing the probability density functions of velocity and temperature increments, deltau and deltaT, respectively, this work investigates their multiscale behavior to uncover unique traits that have yet to be thoroughly studied. Drawing on diverse datasets, including idealized wind tunnel experiments, atmospheric turbulence field measurements, multi-year ABL tower observations, and mesoscale model simulations, this study reveals remarkable similarities (and some differences) between the small- and larger-scale components of the increment fields' probability density functions. This comprehensive analysis also utilizes a set of statistical distributions to showcase their ability to capture features of the velocity and temperature increments' probability density functions (pdfs) across multiscale atmospheric motions. An approach is proposed for estimating these pdfs utilizing the maximum likelihood estimation (MLE) technique, which had not previously been applied to atmospheric data of this kind. Using this technique, we reveal the ability to estimate higher-order moments accurately with a limited sample size, which has been a persistent concern for atmospheric turbulence research. With the use of robust goodness-of-fit (GoF) metrics, we quantitatively assess the accuracy of the distributions across the diverse datasets. Through this analysis, it is shown that the normal inverse Gaussian (NIG) distribution is a prime candidate for estimating the increment pdf fields. Therefore, using the NIG model and its parameters, we display the variations in the increments over a range of scales, revealing some unique scale-dependent qualities under various stability and flow conditions. This
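The MLE fitting step can be sketched with SciPy's implementation of the NIG distribution (`scipy.stats.norminvgauss`); the synthetic sample below merely stands in for measured velocity increments, so the recovered parameters are illustrative.

```python
import numpy as np
from scipy.stats import norminvgauss

rng = np.random.default_rng(42)

# stand-in for observed velocity increments: a synthetic heavy-tailed,
# symmetric NIG sample (shape a=1.5, skew b=0)
increments = norminvgauss.rvs(a=1.5, b=0.0, loc=0.0, scale=1.0,
                              size=5000, random_state=rng)

# maximum likelihood fit of the NIG parameters, as proposed in the study;
# scipy.stats distributions expose MLE through the generic .fit() method
a_hat, b_hat, loc_hat, scale_hat = norminvgauss.fit(increments)
```

Higher-order moments of the increments can then be computed analytically from the fitted parameters rather than from noisy sample estimates.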

  9. Effects of reaction-kinetic parameters on modeling reaction pathways in GaN MOVPE growth

    NASA Astrophysics Data System (ADS)

    Zhang, Hong; Zuo, Ran; Zhang, Guoyi

    2017-11-01

    In the modeling of the reaction-transport process in GaN MOVPE growth, the selection of kinetic parameters (activation energy Ea and pre-exponential factor A) for gas reactions is quite uncertain, which causes uncertainties in both the gas reaction path and the growth rate. In this study, numerical modeling of the reaction-transport process for GaN MOVPE growth in a vertical rotating-disk reactor is conducted with varying kinetic parameters for the main reaction paths. By comparing the molar concentrations of the major Ga-containing species and the growth rates, the effects of the kinetic parameters on the gas reaction paths are determined. The results show that, depending on the values of the kinetic parameters, the gas reaction path may be dominated by the adduct/amide formation path, by the TMG pyrolysis path, or by both. Although the reaction path varies with different kinetic parameters, the predicted growth rates change only slightly, because the total transport rate of Ga-containing species to the substrate changes only slightly with the reaction path. This explains why previous authors using different chemical models predicted growth rates close to the experimental values. By varying the pre-exponential factor for the amide trimerization, it is found that the more trimers are formed, the further the growth rates fall below the experimental value, which indicates that trimers are poor growth precursors because of the thermal diffusion effect caused by the high temperature gradient. The effective order of the contribution of major species to the growth rate is found to be: pyrolysis species > amides > trimers. The study also shows that radical reactions have little effect on the gas reaction path because of the generation and depletion of H radicals in the chain reactions when NH2 is considered as the end species.

  10. Generalized Pseudo-Reaction Zone Model for Non-Ideal Explosives

    NASA Astrophysics Data System (ADS)

    Wescott, Bradley

    2007-06-01

    The pseudo-reaction zone model was proposed to improve engineering-scale simulations using Detonation Shock Dynamics with high explosives that have a slow reaction component. In this work, an extension of the pseudo-reaction zone model is developed for non-ideal explosives that propagate well below their steady planar Chapman-Jouguet velocity. A programmed-burn method utilizing Detonation Shock Dynamics and a detonation-velocity-dependent pseudo-reaction rate has been developed for non-ideal explosives and applied to the explosive mixture of ammonium nitrate and fuel oil (ANFO). The pseudo-reaction rate is calibrated to the experimentally obtained normal detonation velocity vs. shock curvature relation. The generalized pseudo-reaction zone model proposed here predicts the cylinder expansion to within 1% by accounting for the slow reaction in ANFO.

  11. Kinetic modeling of electro-Fenton reaction in aqueous solution.

    PubMed

    Liu, H; Li, X Z; Leng, Y J; Wang, C

    2007-03-01

    To better describe the electro-Fenton (E-Fenton) reaction in aqueous solution, a new kinetic model was established according to the generally accepted mechanism of the E-Fenton reaction. The model gives special consideration to the rates of hydrogen peroxide (H(2)O(2)) generation and consumption in the reaction solution. The model also embraces three key operating factors affecting organic degradation in the E-Fenton reaction: current density, dissolved oxygen concentration, and initial ferrous ion concentration. This analytical model was then validated by experiments of phenol degradation in aqueous solution. The experiments demonstrated that the H(2)O(2) gradually built up with time and eventually approached its maximum value in the reaction solution. The experiments also showed that phenol was degraded at a slow rate at the early stage of the reaction, a faster rate during the middle stage, and a slow rate again at the final stage. It was confirmed in all experiments that the curves of phenol degradation (concentration vs. time) appeared to be an inverted "S" shape. The experimental data were fitted using both the normal first-order model and our new model, respectively. The goodness of fit demonstrated that the new model fits the experimental data appreciably better than the first-order model, which indicates that this analytical model can better describe the kinetics of the E-Fenton reaction both mathematically and chemically.
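The qualitative behavior described above, H2O2 building toward a steady state while the pollutant is oxidized at a rate proportional to [H2O2][C], can be sketched with a minimal two-equation model. This is not the authors' model: the rate laws are simplified and all rate constants are invented for illustration.

```python
import numpy as np

def simulate_e_fenton(k_gen=1e-3, k_cons=0.05, k_ox=2.0,
                      c0=1.0, dt=0.1, t_end=600.0):
    """Euler integration of a minimal E-Fenton sketch:
        d[H2O2]/dt = k_gen - k_cons*[H2O2]   (buildup toward k_gen/k_cons)
        d[C]/dt    = -k_ox*[H2O2]*[C]        (oxidation of the pollutant)
    The coupling reproduces the slow-fast-slow, inverted-S decay curve."""
    n = int(t_end / dt)
    h2o2 = np.empty(n + 1)
    c = np.empty(n + 1)
    h2o2[0], c[0] = 0.0, c0
    for i in range(n):
        h2o2[i + 1] = h2o2[i] + dt * (k_gen - k_cons * h2o2[i])
        c[i + 1] = c[i] + dt * (-k_ox * h2o2[i] * c[i])
    return h2o2, c

h2o2, phenol = simulate_e_fenton()
```

With these placeholder constants, degradation is slow at first (little H2O2 present), accelerates as H2O2 approaches its plateau, and slows again as the pollutant depletes.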

  12. Development of the Nonstationary Incremental Analysis Update Algorithm for Sequential Data Assimilation System

    NASA Astrophysics Data System (ADS)

    Ham, Yoo-Geun; Song, Hyo-Jong; Jung, Jaehee; Lim, Gyu-Ho

    2017-04-01

    This study introduces an altered version of the incremental analysis updates (IAU), called the nonstationary IAU (NIAU) method, to enhance the assimilation accuracy of the IAU while retaining the continuity of the analysis. Analogous to the IAU, the NIAU is designed to add analysis increments at every model time step to improve the continuity of the intermittent data assimilation. Unlike the IAU, however, the NIAU method applies time-evolved forcing, computed with the forward operator, as corrections to the model. In terms of the accuracy of the analysis field, the NIAU solution is better than that of the IAU, whose analysis is performed at the start of the time window over which the IAU forcing is added. This is because, in linear systems, the NIAU solution equals that of an intermittent data assimilation method at the end of the assimilation interval. To retain the filtering property in the NIAU, the forward operator used to propagate the increment is reconstructed with only the dominant singular vectors. An illustration of these advantages of the NIAU is given using the simple 40-variable Lorenz model.
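The core IAU idea, adding 1/N of the analysis increment at each of N model steps rather than all at once, can be sketched on a toy scalar model. The dynamics and numbers here are illustrative only, not the Lorenz setup of the study.

```python
import numpy as np

def forecast_step(x, dt=0.1):
    """Toy linear decay dynamics standing in for the forecast model."""
    return x + dt * (-0.5 * x)

def iau_window(x0, increment, n_steps):
    """Classic IAU: spread the analysis increment evenly across the
    assimilation window, adding increment/N at every model step, which
    keeps the trajectory continuous (no analysis jump)."""
    x = x0
    for _ in range(n_steps):
        x = forecast_step(x) + increment / n_steps
    return x

x_iau = iau_window(np.array([1.0]), np.array([0.2]), 10)
```

The NIAU variant would instead propagate each partial increment with (a reduced-rank version of) the forward operator before adding it, so that the end-of-window state matches the intermittent analysis in the linear case.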

  13. 26 CFR 1.41-8 - Alternative incremental credit.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 1 2010-04-01 2010-04-01 true Alternative incremental credit. 1.41-8 Section 1... Credits Against Tax § 1.41-8 Alternative incremental credit. (a) Determination of credit. At the election... alternative incremental research credit (AIRC) in section 41(c)(4) for any taxable year of the taxpayer...

  14. 40 CFR 60.1630 - How do I comply with the increment of progress for achieving final compliance?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES... Before August 30, 1999 Model Rule-Increments of Progress § 60.1630 How do I comply with the increment of...

  15. Thermomechanical simulations and experimental validation for high speed incremental forming

    NASA Astrophysics Data System (ADS)

    Ambrogio, Giuseppina; Gagliardi, Francesco; Filice, Luigino; Romero, Natalia

    2016-10-01

    Incremental sheet forming (ISF) consists in deforming only a small region of the workspace through a punch driven by a NC machine. The drawback of this process is its slowness. In this study, a high speed variant has been investigated from both numerical and experimental points of view. The aim has been the design of a FEM model able to perform the material behavior during the high speed process by defining a thermomechanical model. An experimental campaign has been performed by a CNC lathe with high speed to test process feasibility. The first results have shown how the material presents the same performance than in conventional speed ISF and, in some cases, better material behavior due to the temperature increment. An accurate numerical simulation has been performed to investigate the material behavior during the high speed process confirming substantially experimental evidence.

  16. Incremental social learning in particle swarms.

    PubMed

    de Oca, Marco A Montes; Stutzle, Thomas; Van den Enden, Ken; Dorigo, Marco

    2011-04-01

    Incremental social learning (ISL) was proposed as a way to improve the scalability of systems composed of multiple learning agents. In this paper, we show that ISL can be very useful to improve the performance of population-based optimization algorithms. Our study focuses on two particle swarm optimization (PSO) algorithms: a) the incremental particle swarm optimizer (IPSO), which is a PSO algorithm with a growing population size in which the initial position of new particles is biased toward the best-so-far solution, and b) the incremental particle swarm optimizer with local search (IPSOLS), in which solutions are further improved through a local search procedure. We first derive analytically the probability density function induced by the proposed initialization rule applied to new particles. Then, we compare the performance of IPSO and IPSOLS on a set of benchmark functions with that of other PSO algorithms (with and without local search) and a random restart local search algorithm. Finally, we measure the benefits of using incremental social learning on PSO algorithms by running IPSO and IPSOLS on problems with different fitness distance correlations.
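The ISL initialization rule for newcomers can be sketched as follows. This is a toy IPSO on the sphere function with standard constriction coefficients, not the authors' benchmarked implementation; the key line is the one pulling each new particle from a random position toward the best-so-far solution by a uniform random factor.

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):
    """Benchmark objective: f(x) = sum(x_i^2), minimum 0 at the origin."""
    return float(np.sum(x ** 2))

def ipso(dim=5, max_pop=20, iters_per_growth=10, bounds=(-5.0, 5.0)):
    """Minimal IPSO sketch: start with one particle and, after each growth
    phase, add a particle initialized with the incremental-social-learning
    rule x_new = x_rand + u * (gbest - x_rand), u ~ U(0, 1)."""
    lo, hi = bounds
    pos = [rng.uniform(lo, hi, dim)]
    vel = [np.zeros(dim)]
    pbest = [pos[0].copy()]
    gbest = pos[0].copy()
    while len(pos) <= max_pop:
        for _ in range(iters_per_growth):
            for i in range(len(pos)):
                r1, r2 = rng.random(dim), rng.random(dim)
                vel[i] = (0.72 * vel[i]
                          + 1.49 * r1 * (pbest[i] - pos[i])
                          + 1.49 * r2 * (gbest - pos[i]))
                pos[i] = np.clip(pos[i] + vel[i], lo, hi)
                if sphere(pos[i]) < sphere(pbest[i]):
                    pbest[i] = pos[i].copy()
                if sphere(pos[i]) < sphere(gbest):
                    gbest = pos[i].copy()
        # incremental social learning: bias the newcomer toward gbest
        x_rand = rng.uniform(lo, hi, dim)
        newcomer = x_rand + rng.random(dim) * (gbest - x_rand)
        pos.append(newcomer)
        vel.append(np.zeros(dim))
        pbest.append(newcomer.copy())
    return gbest

best = ipso()
```

A local-search step applied to each newcomer would turn this sketch into the IPSOLS variant.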

  17. Reduced Order Models for Reactions of Energetic Materials

    NASA Astrophysics Data System (ADS)

    Kober, Edward

    The formulation of reduced-order models for the reaction chemistry of energetic materials under high pressures is needed for the development of mesoscale models in the areas of initiation, deflagration, and detonation. Phenomenologically, 4-8-step models have been formulated from cook-off data by analyzing the temperature rise of heated samples. Reactive molecular dynamics simulations have been used to simulate many of these processes, but reducing the results of those simulations to simple models has not been achieved. Typically, these efforts have focused on identifying molecular species and detailing specific chemical reactions. An alternative approach is presented here that is based on identifying the coordination geometry of each atom in the simulation and tracking classes of reactions by correlated changes in these geometries. Here, every atom and type of reaction is documented for every time step; no information is lost to unsuccessful molecular identification. Principal component analysis methods can then be used to map out the effective chemical reaction steps. For HMX and TATB decompositions simulated with ReaxFF, 90% of the data can be explained by 4-6 steps, generating models similar to those from the cook-off analysis. By performing these simulations at a variety of temperatures and pressures, both the activation and reaction energies and volumes can then be extracted.
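The variance-explained analysis can be sketched with plain SVD-based PCA. The reaction-class count matrix below is synthetic low-rank data standing in for the tracked coordination changes; it is not ReaxFF output.

```python
import numpy as np

rng = np.random.default_rng(7)

# hypothetical stand-in data: counts of 12 coordination-change reaction
# classes over 200 time steps, driven by 2 latent decomposition steps
latent = rng.random((200, 2))
loadings = rng.random((2, 12))
counts = latent @ loadings + 0.01 * rng.standard_normal((200, 12))

def pca_explained(data, k):
    """Fraction of total variance captured by the first k principal
    components, computed from the singular values of the centered data."""
    centered = data - data.mean(axis=0)
    s = np.linalg.svd(centered, compute_uv=False)
    var = s ** 2
    return float(var[:k].sum() / var.sum())

frac = pca_explained(counts, 2)
```

When a few components explain most of the variance, as here, the correlated reaction classes they group together are the candidates for effective reduced-order reaction steps.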

  18. Health level seven interoperability strategy: big data, incrementally structured.

    PubMed

    Dolin, R H; Rogers, B; Jaffe, C

    2015-01-01

    Describe how the HL7 Clinical Document Architecture (CDA), a foundational standard in US Meaningful Use, contributes to a "big data, incrementally structured" interoperability strategy, whereby incrementally structuring data gets large amounts of data flowing faster. We present cases showing how this approach is leveraged for big data analysis. To support the assertion that semi-structured narrative in CDA format can be a useful adjunct in an overall big data analytic approach, we present two case studies. The first assesses an organization's ability to generate clinical quality reports using coded data alone vs. coded data supplemented by CDA narrative. The second leverages CDA to construct a network model for referral management, from which additional observations can be gleaned. The first case shows that coded data supplemented by CDA narrative resulted in significant variances in calculated performance scores. In the second case, we found that the constructed network model enables the identification of differences in patient characteristics among different referral workflows. The CDA approach goes after data indirectly, by focusing first on the flow of narrative, which is then incrementally structured. A quantitative assessment of whether this approach will lead to a greater flow of data, and ultimately a greater flow of structured data, vs. other approaches is planned as a future exercise. Along with growing adoption of CDA, we are now seeing the big data community explore the standard, particularly given its potential to supply analytic engines with volumes of data previously not possible.

  19. Chemical reactions simulated by ground-water-quality models

    USGS Publications Warehouse

    Grove, David B.; Stollenwerk, Kenneth G.

    1987-01-01

    Recent literature concerning the modeling of chemical reactions during transport in ground water is examined with emphasis on sorption reactions. The theory of transport and reactions in porous media has been well documented. Numerous equations have been developed from this theory, to provide both continuous and sequential or multistep models, with the water phase considered for both mobile and immobile phases. Chemical reactions can be either equilibrium or non-equilibrium, and can be quantified in linear or non-linear mathematical forms. Non-equilibrium reactions can be separated into kinetic and diffusional rate-limiting mechanisms. Solutions to the equations are available by either analytical expressions or numerical techniques. Saturated and unsaturated batch, column, and field studies are discussed with one-dimensional, laboratory-column experiments predominating. A summary table is presented that references the various kinds of models studied and their applications in predicting chemical concentrations in ground waters.
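One of the simplest models in this family, 1D advection-dispersion with linear equilibrium sorption folded into a retardation factor R, can be sketched with explicit finite differences. The grid and parameters are illustrative, and the scheme is a basic upwind/central discretization rather than any particular code from the reviewed literature.

```python
import numpy as np

def transport_1d(n=200, dx=0.5, dt=0.05, steps=2000,
                 v=1.0, D=0.5, R=2.0, c_in=1.0):
    """Explicit finite differences for 1D advection-dispersion with linear
    equilibrium sorption (retardation factor R):
        R * dC/dt = D * d2C/dx2 - v * dC/dx
    Continuous injection at the inlet; free outflow at the far boundary.
    Sorption slows the solute front by the factor R (effective velocity v/R)."""
    c = np.zeros(n)
    for _ in range(steps):
        adv = -v * (c[1:-1] - c[:-2]) / dx                      # upwind advection
        disp = D * (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx ** 2   # dispersion
        c[1:-1] += dt * (adv + disp) / R
        c[0] = c_in       # fixed-concentration inlet
        c[-1] = c[-2]     # zero-gradient outflow
    return c

profile = transport_1d()
```

With R = 2 the front travels at half the pore-water velocity, so after t = 100 it sits near x = 50 rather than x = 100, the classic signature of linear sorption.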

  20. Evaluation of maillard reaction variables and their effect on heterocyclic amine formation in chemical model systems.

    PubMed

    Dennis, Cara; Karim, Faris; Smith, J Scott

    2015-02-01

    Heterocyclic amines (HCAs), highly mutagenic and potentially carcinogenic by-products, form during Maillard browning reactions, specifically in muscle-rich foods. Chemical model systems allow examination of in vitro formation of HCAs while eliminating the complex matrices of meat. Limited research has evaluated the effects of Maillard reaction parameters on HCA formation. Therefore, 4 essential Maillard variables (precursor molar concentrations, water amount, sugar type, and sugar amount) were evaluated to optimize a model system for the study of 4 HCAs: 2-amino-3-methylimidazo[4,5-f]quinoline, 2-amino-3-methylimidazo[4,5-f]quinoxaline, 2-amino-3,8-dimethylimidazo[4,5-f]quinoxaline, and 2-amino-3,4,8-trimethylimidazo[4,5-f]quinoxaline. Model systems were dissolved in diethylene glycol, heated at 175 °C for 40 min, and separated using reversed-phase liquid chromatography. To define the model system, precursor amounts (threonine and creatinine) were adjusted in molar increments (0.2/0.2, 0.4/0.4, 0.6/0.6, and 0.8/0.8 mmol) and water amounts by percentage (0%, 5%, 10%, and 15%). Sugars (lactose, glucose, galactose, and fructose) were evaluated at several molar amounts proportional to threonine and creatinine (quarter, half, equi, and double). Precursor levels and sugar amounts had significant effects (P < 0.05) on total HCA formation, with 0.6/0.6/1.2 mmol producing higher levels. Water concentration and sugar type also had significant effects (P < 0.05), with 5% water and lactose producing higher total HCA amounts. A model system containing threonine (0.6 mmol), creatinine (0.6 mmol), and glucose (1.2 mmol) with 15% water was determined to be optimal, glucose and 15% water being a better representation of meat systems. © 2015 Institute of Food Technologists®

  1. A Geostatistical Scaling Approach for the Generation of Non Gaussian Random Variables and Increments

    NASA Astrophysics Data System (ADS)

    Guadagnini, Alberto; Neuman, Shlomo P.; Riva, Monica; Panzeri, Marco

    2016-04-01

    We address manifestations of non-Gaussian statistical scaling displayed by many variables, Y, and their (spatial or temporal) increments. Evidence of such behavior includes symmetry of increment distributions at all separation distances (or lags) with sharp peaks and heavy tails which tend to decay asymptotically as lag increases. Variables reported to exhibit such distributions include quantities of direct relevance to hydrogeological sciences, e.g. porosity, log permeability, electrical resistivity, soil and sediment texture, sediment transport rate, rainfall, measured and simulated turbulent fluid velocity, and others. No model known to us captures all of the documented statistical scaling behaviors in a unique and consistent manner. We recently proposed a generalized sub-Gaussian model (GSG) which reconciles within a unique theoretical framework the probability distributions of a target variable and its increments. We presented an algorithm to generate unconditional random realizations of statistically isotropic or anisotropic GSG functions and illustrated it in two dimensions. In this context, we demonstrated the feasibility of estimating all key parameters of a GSG model underlying a single realization of Y by jointly analyzing spatial moments of Y data and corresponding increments. Here, we extend our GSG model to account for noisy measurements of Y at a discrete set of points in space (or time), present an algorithm to generate conditional realizations of the corresponding isotropic or anisotropic random fields, and explore them on one- and two-dimensional synthetic test cases.

  2. 40 CFR 60.2580 - When must I complete each increment of progress?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emissions Guidelines and Compliance Times for Commercial and Industrial Solid Waste Incineration Units Model Rule-Increments of...

  3. 40 CFR 60.2580 - When must I complete each increment of progress?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emissions Guidelines and Compliance Times for Commercial and Industrial Solid Waste Incineration Units Model Rule-Increments of...

  4. 40 CFR 60.2580 - When must I complete each increment of progress?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emissions Guidelines and Compliance Times for Commercial and Industrial Solid Waste Incineration Units Model Rule-Increments of...

  5. An incremental knowledge assimilation system (IKAS) for mine detection

    NASA Astrophysics Data System (ADS)

    Porway, Jake; Raju, Chaitanya; Varadarajan, Karthik Mahesh; Nguyen, Hieu; Yadegar, Joseph

    2010-04-01

    In this paper we present an adaptive incremental learning system for underwater mine detection and classification that utilizes statistical models of seabed texture and an adaptive nearest-neighbor classifier to identify varied underwater targets in many different environments. The first stage of processing uses our Background Adaptive ANomaly detector (BAAN), which identifies statistically likely target regions using Gabor filter responses over the image. Using this information, BAAN classifies the background type and updates its detection using background-specific parameters. To perform classification, a Fully Adaptive Nearest Neighbor (FAAN) determines the best label for each detection. FAAN uses an extremely fast version of Nearest Neighbor to find the most likely label for the target. The classifier perpetually assimilates new and relevant information into its existing knowledge database in an incremental fashion, allowing improved classification accuracy and capturing concept drift in the target classes. Experiments show that the system achieves >90% classification accuracy on underwater mine detection tasks performed on synthesized datasets provided by the Office of Naval Research. We have also demonstrated that the system can incrementally improve its detection accuracy by constantly learning from new samples.
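    The assimilation step of an incremental nearest-neighbor classifier can be illustrated with a toy sketch. This is a plain Euclidean nearest-neighbor on invented 2-D features, not the paper's BAAN/FAAN pipeline; the class name, feature values, and labels are hypothetical:

    ```python
    import math

    class IncrementalNN:
        """Minimal incremental nearest-neighbor classifier: each confirmed
        detection is assimilated into the knowledge base, so later queries
        benefit from everything seen so far (no batch retraining pass)."""

        def __init__(self):
            self.samples = []  # list of (feature_vector, label)

        def assimilate(self, features, label):
            # Incremental update: simply append the new labeled sample.
            self.samples.append((features, label))

        def classify(self, features):
            # Return the label of the closest stored sample (Euclidean).
            best = min(self.samples, key=lambda s: math.dist(s[0], features))
            return best[1]

    nn = IncrementalNN()
    nn.assimilate((0.1, 0.2), "background")
    nn.assimilate((0.9, 0.8), "mine")
    print(nn.classify((0.85, 0.75)))  # nearest stored sample is the "mine" one
    ```

    Assimilating further samples shifts later decisions, which is how such a scheme can track concept drift in the target classes.
    
    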

  6. Calibrating reaction rates for the CREST model

    NASA Astrophysics Data System (ADS)

    Handley, Caroline A.; Christie, Michael A.

    2017-01-01

    The CREST reactive-burn model uses entropy-dependent reaction rates that, until now, have been manually tuned to fit shock-initiation and detonation data in hydrocode simulations. This paper describes the initial development of an automatic method for calibrating CREST reaction-rate coefficients, using particle swarm optimisation. The automatic method is applied to EDC32, to help develop the first CREST model for this conventional high explosive.

  7. Incremental passivity and output regulation for switched nonlinear systems

    NASA Astrophysics Data System (ADS)

    Pang, Hongbo; Zhao, Jun

    2017-10-01

    This paper studies incremental passivity and global output regulation for switched nonlinear systems, whose subsystems are not required to be incrementally passive. A concept of incremental passivity for switched systems is put forward. First, a switched system is rendered incrementally passive by the design of a state-dependent switching law. Second, the feedback incremental passification is achieved by the design of a state-dependent switching law and a set of state feedback controllers. Finally, we show that once the incremental passivity for switched nonlinear systems is assured, the output regulation problem is solved by the design of global nonlinear regulator controllers comprising two components: the steady-state control and the linear output feedback stabilising controllers, even though the problem for none of subsystems is solvable. Two examples are presented to illustrate the effectiveness of the proposed approach.

  8. Incremental terrain processing for large digital elevation models

    NASA Astrophysics Data System (ADS)

    Ye, Z.

    2012-12-01

    Incremental terrain processing for large digital elevation models Zichuan Ye, Dean Djokic, Lori Armstrong Esri, 380 New York Street, Redlands, CA 92373, USA (E-mail: zye@esri.com, ddjokic@esri.com, larmstrong@esri.com) Efficient analyses of large digital elevation models (DEM) require generation of additional DEM artifacts such as flow direction, flow accumulation and other DEM derivatives. When the DEMs to analyze have a large number of grid cells (usually > 1,000,000,000) the generation of these DEM derivatives is either impractical (it takes too long) or impossible (software is incapable of processing such a large number of cells). Different strategies and algorithms can be put in place to alleviate this situation. This paper describes an approach where the overall DEM is partitioned into smaller processing units that can be efficiently processed. The processed DEM derivatives for each partition can then be either mosaicked back into a single large entity or managed at the partition level. For dendritic terrain morphologies, the way in which partitions are to be derived and the order in which they are to be processed depend on the river and catchment patterns. These patterns are not available until the flow pattern of the whole region is created, which in turn cannot be established upfront due to the size issues. This paper describes a procedure that solves this problem: (1) Resample the original large DEM grid so that the total number of cells is reduced to a level for which the drainage pattern can be established. (2) Run standard terrain preprocessing operations on the resampled DEM to generate the river and catchment system. (3) Define the processing units and their processing order based on the river and catchment system created in step (2). (4) Based on the processing order, apply the analysis, i.e., the flow accumulation operation, to each of the processing units at the full-resolution DEM. (5) As each processing unit is processed based on the processing order defined
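    Step (3), deriving a processing order from the catchment system, amounts to a topological sort of the drainage graph: every partition must be processed after everything that drains into it. A minimal sketch on a hypothetical four-partition graph (the partition names and drainage links are invented):

    ```python
    # Hypothetical catchment graph: each partition drains to at most one
    # downstream partition (None marks the outlet).
    downstream = {"A": "C", "B": "C", "C": "D", "D": None}

    def processing_order(downstream):
        """Topological order of partitions: upstream units come before the
        partitions they drain into, so flow accumulation can be chained."""
        order, seen = [], set()

        def visit(p):
            if p is None or p in seen:
                return
            # Ensure every partition draining INTO p is processed first.
            for q, d in downstream.items():
                if d == p:
                    visit(q)
            seen.add(p)
            order.append(p)

        for p in downstream:
            visit(p)
        return order

    print(processing_order(downstream))  # upstream partitions first, outlet last
    ```
    
    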

  9. Modeling chemical reactions for drug design.

    PubMed

    Gasteiger, Johann

    2007-01-01

    Chemical reactions are involved at many stages of the drug design process. This starts with the analysis of biochemical pathways that are controlled by enzymes that might be downregulated in certain diseases. In the lead discovery and lead optimization process compounds have to be synthesized in order to test them for their biological activity. And finally, the metabolism of a drug has to be established. A better understanding of chemical reactions could strongly help in making the drug design process more efficient. We have developed methods for quantifying the concepts an organic chemist is using in rationalizing reaction mechanisms. These methods allow a comprehensive modeling of chemical reactivity and thus are applicable to a wide variety of chemical reactions, from gas phase reactions to biochemical pathways. They are empirical in nature and therefore allow the rapid processing of large sets of structures and reactions. We will show here how methods have been developed for the prediction of acidity values and of the regioselectivity in organic reactions, for designing the synthesis of organic molecules and of combinatorial libraries, and for furthering our understanding of enzyme-catalyzed reactions and of the metabolism of drugs.

  10. Modeling individual tree growth by fusing diameter tape and increment core data

    Treesearch

    Erin M. Schliep; Tracy Qi Dong; Alan E. Gelfand; Fan. Li

    2014-01-01

    Tree growth estimation is a challenging task as difficulties associated with data collection and inference often result in inaccurate estimates. Two main methods for tree growth estimation are diameter tape measurements and increment cores. The former involves repeatedly measuring tree diameters with a cloth or metal tape whose scale has been adjusted to give diameter...

  11. An Incremental Weighted Least Squares Approach to Surface Lights Fields

    NASA Astrophysics Data System (ADS)

    Coombe, Greg; Lastra, Anselmo

    An Image-Based Rendering (IBR) approach to appearance modelling enables the capture of a wide variety of real physical surfaces with complex reflectance behaviour. The challenges with this approach are handling the large amount of data, rendering the data efficiently, and previewing the model as it is being constructed. In this paper, we introduce the Incremental Weighted Least Squares approach to the representation and rendering of spatially and directionally varying illumination. Each surface patch consists of a set of Weighted Least Squares (WLS) node centers, which are low-degree polynomial representations of the anisotropic exitant radiance. During rendering, the representations are combined in a non-linear fashion to generate a full reconstruction of the exitant radiance. The rendering algorithm is fast, efficient, and implemented entirely on the GPU. The construction algorithm is incremental, which means that images are processed as they arrive instead of in the traditional batch fashion. This human-in-the-loop process enables the user to preview the model as it is being constructed and to adapt to over-sampling and under-sampling of the surface appearance.
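    The per-node fitting and blended reconstruction can be sketched in one dimension. This is a hedged toy version: quadratic polynomials with Gaussian weights on synthetic "radiance" samples; the node spacing, bandwidth, and data are invented, and the paper's GPU rendering and incremental image handling are omitted:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(0, 1, 200)        # sample "directions" (1-D stand-in)
    y = np.sin(2 * np.pi * x)         # synthetic exitant-radiance samples

    centers = np.linspace(0.1, 0.9, 5)  # WLS node centers
    sigma = 0.15                        # Gaussian weighting bandwidth

    def fit_node(c):
        """Low-degree (quadratic) polynomial fit, weighted by distance to c."""
        w = np.exp(-((x - c) ** 2) / (2 * sigma ** 2))
        A = np.vander(x - c, 3)         # columns: (x-c)^2, (x-c), 1
        sw = np.sqrt(w)
        coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
        return coef

    coefs = [fit_node(c) for c in centers]

    def reconstruct(q):
        """Blend the node polynomials non-linearly (by Gaussian weight)."""
        w = np.exp(-((centers - q) ** 2) / (2 * sigma ** 2))
        vals = np.array([np.polyval(cf, q - c) for cf, c in zip(coefs, centers)])
        return float(np.sum(w * vals) / np.sum(w))

    print(round(reconstruct(0.25), 3))  # should be close to sin(2*pi*0.25) = 1
    ```

    Because each node fit only depends on nearby samples, new images can update the affected nodes as they arrive, which is the incremental aspect the paper exploits.
    
    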

  12. Spallation reactions: A successful interplay between modeling and applications

    NASA Astrophysics Data System (ADS)

    David, J.-C.

    2015-06-01

    Spallation reactions are a type of nuclear reaction which occurs in space through the interaction of cosmic rays with interstellar bodies. The first spallation reactions induced with an accelerator took place in 1947 at the Berkeley cyclotron (University of California) with 200 MeV deuteron and 400 MeV alpha beams. They highlighted the multiple emission of neutrons and charged particles and the production of a large number of residual nuclei far different from the target nuclei. In the same year, R. Serber described the reaction in two steps: a first, fast one with high-energy particle emission leading to an excited remnant nucleus, and a second, much slower one, the de-excitation of the remnant. In 2010 the IAEA organized a workshop to present the results of the most widely used spallation codes within a benchmark of spallation models. While one of the goals was to understand the deficiencies, if any, in each code, one remarkable outcome was the overall high quality of some models and the great improvements achieved since Serber. Particle transport codes can then rely on such spallation models to treat reactions between a light particle and an atomic nucleus at energies spanning from a few tens of MeV up to some GeV. An overview of spallation reaction modeling is presented in order to point out the incomparable contribution of models based on basic physics to the numerous applications where such reactions occur. Validations and benchmarks, which are necessary steps in the improvement process, are also addressed, as well as potential future domains of development. Spallation reaction modeling is a representative case of continuous studies aiming at understanding a reaction mechanism which end up in a powerful tool.

  13. Students' Visualisation of Chemical Reactions--Insights into the Particle Model and the Atomic Model

    ERIC Educational Resources Information Center

    Cheng, Maurice M. W.

    2018-01-01

    This paper reports on an interview study of 18 Grade 10-12 students' model-based reasoning of a chemical reaction: the reaction of magnesium and oxygen at the submicro level. It has been proposed that chemical reactions can be conceptualised using two models: (i) the "particle model," in which a reaction is regarded as the simple…

  14. On the statistics of increments in strong Alfvenic turbulence

    NASA Astrophysics Data System (ADS)

    Palacios, J. C.; Perez, J. C.

    2017-12-01

    In-situ measurements have shown that the solar wind is dominated by non-compressive Alfvén-like fluctuations of plasma velocity and magnetic field over a broad range of scales. In this work, we present recent progress in understanding intermittency in Alfvenic turbulence by investigating the statistics of Elsasser increments from simulations of steadily driven Reduced MHD with numerical resolutions up to 2048^3. The nature of these statistics bears a close relation to the fundamental properties of the small-scale structures in which the turbulence is ultimately dissipated, and therefore has profound implications for the possible contribution of turbulence to the heating of the solar wind. We extensively investigate the properties and three-dimensional structure of probability density functions (PDFs) of increments and compare them with recent phenomenological models of intermittency in MHD turbulence.

  15. A Networks Approach to Modeling Enzymatic Reactions.

    PubMed

    Imhof, P

    2016-01-01

    Modeling enzymatic reactions is a demanding task due to the complexity of the system, the many degrees of freedom involved and the complex, chemical, and conformational transitions associated with the reaction. Consequently, enzymatic reactions are not determined by precisely one reaction pathway. Hence, it is beneficial to obtain a comprehensive picture of possible reaction paths and competing mechanisms. By combining individually generated intermediate states and chemical transition steps a network of such pathways can be constructed. Transition networks are a discretized representation of a potential energy landscape consisting of a multitude of reaction pathways connecting the end states of the reaction. The graph structure of the network allows an easy identification of the energetically most favorable pathways as well as a number of alternative routes. © 2016 Elsevier Inc. All rights reserved.

  16. 40 CFR 60.5090 - When must I complete each increment of progress?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emission Guidelines and Compliance Times for Existing Sewage Sludge Incineration Units Model Rule-Increments of Progress § 60.5090 When...

  17. Incremental k-core decomposition: Algorithms and evaluation

    DOE PAGES

    Sariyuce, Ahmet Erdem; Gedik, Bugra; Jacques-SIlva, Gabriela; ...

    2016-02-01

    A k-core of a graph is a maximal connected subgraph in which every vertex is connected to at least k vertices in the subgraph. k-core decomposition is often used in large-scale network analysis, such as community detection, protein function prediction, visualization, and solving NP-hard problems on real networks efficiently, like maximal clique finding. In many real-world applications, networks change over time. As a result, it is essential to develop efficient incremental algorithms for dynamic graph data. In this paper, we propose a suite of incremental k-core decomposition algorithms for dynamic graph data. These algorithms locate a small subgraph that is guaranteed to contain the list of vertices whose maximum k-core values have changed and efficiently process this subgraph to update the k-core decomposition. We present incremental algorithms for both insertion and deletion operations, and propose auxiliary vertex state maintenance techniques that can further accelerate these operations. Our results show a significant reduction in runtime compared to non-incremental alternatives. We illustrate the efficiency of our algorithms on different types of real and synthetic graphs, at varying scales. Furthermore, for a graph of 16 million vertices, we observe relative throughputs reaching a million times that of the non-incremental algorithms.
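    For orientation, the quantity maintained incrementally is each vertex's core number. The batch (non-incremental) baseline is the classic peeling algorithm, sketched here on a toy graph; the paper's contribution is updating these values after edge insertions and deletions without recomputing from scratch:

    ```python
    from collections import defaultdict

    def core_numbers(edges):
        """Batch k-core decomposition by repeated peeling: remove every
        vertex of degree <= k, letting k grow as peeling stalls."""
        adj = defaultdict(set)
        for u, v in edges:
            adj[u].add(v)
            adj[v].add(u)
        deg = {v: len(ns) for v, ns in adj.items()}
        core, remaining, k = {}, set(adj), 0
        while remaining:
            peel = [v for v in remaining if deg[v] <= k]
            if not peel:
                k += 1          # nothing peelable at this k; raise it
                continue
            for v in peel:
                core[v] = k     # v leaves the graph with core number k
                remaining.discard(v)
                for u in adj[v]:
                    if u in remaining:
                        deg[u] -= 1
        return core

    # A triangle {1, 2, 3} (a 2-core) plus a pendant vertex 4.
    print(core_numbers([(1, 2), (2, 3), (1, 3), (3, 4)]))
    ```
    
    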

  18. Efficiency of Oral Incremental Rehearsal versus Written Incremental Rehearsal on Students' Rate, Retention, and Generalization of Spelling Words

    ERIC Educational Resources Information Center

    Garcia, Dru; Joseph, Laurice M.; Alber-Morgan, Sheila; Konrad, Moira

    2014-01-01

    The purpose of this study was to examine the efficiency of an incremental rehearsal oral versus an incremental rehearsal written procedure on a sample of primary grade children's weekly spelling performance. Participants included five second and one first grader who were in need of help with their spelling according to their teachers. An…

  19. Incremental Validity of the Trait Emotional Intelligence Questionnaire-Short Form (TEIQue-SF).

    PubMed

    Siegling, A B; Vesely, Ashley K; Petrides, K V; Saklofske, Donald H

    2015-01-01

    This study examined the incremental validity of the adult short form of the Trait Emotional Intelligence Questionnaire (TEIQue-SF) in predicting 7 construct-relevant criteria beyond the variance explained by the Five-factor model and coping strategies. Additionally, the relative contributions of the questionnaire's 4 subscales were assessed. Two samples of Canadian university students completed the TEIQue-SF, along with measures of the Big Five, coping strategies (Sample 1 only), and emotion-laden criteria. The TEIQue-SF showed consistent incremental effects beyond the Big Five or the Big Five and coping strategies, predicting all 7 criteria examined across the 2 samples. Furthermore, 2 of the 4 TEIQue-SF subscales accounted for the measure's incremental validity. Although the findings provide good support for the validity and utility of the TEIQue-SF, directions for further research are emphasized.

  20. 17 CFR 242.612 - Minimum pricing increment.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 17 Commodity and Securities Exchanges 3 2010-04-01 2010-04-01 false Minimum pricing increment. 242.612 Section 242.612 Commodity and Securities Exchanges SECURITIES AND EXCHANGE COMMISSION (CONTINUED...-Regulation of the National Market System § 242.612 Minimum pricing increment. (a) No national securities...

  1. Logistics Modernization Program Increment 2 (LMP Inc 2)

    DTIC Science & Technology

    2016-03-01

    Executive DoD - Department of Defense DoDAF - DoD Architecture Framework FD - Full Deployment FDD - Full Deployment Decision FY - Fiscal Year IA...Documentation within the LMP Increment 2 MS C ADM, the LMP Increment 2 Business Case was updated for the FDD using change pages to remove information...following approval of the Army Cost Position being developed for the FDD . The LMP Increment 2 Business Case Change Pages were approved and signed by the

  2. A chain reaction approach to modelling gene pathways.

    PubMed

    Cheng, Gary C; Chen, Dung-Tsa; Chen, James J; Soong, Seng-Jaw; Lamartiniere, Coral; Barnes, Stephen

    2012-08-01

    BACKGROUND: Of great interest in cancer prevention is how nutrient components affect gene pathways associated with the physiological events of puberty. Nutrient-gene interactions may cause changes in breast or prostate cells and, therefore, may result in cancer risk later in life. Analysis of gene pathways can lead to insights about nutrient-gene interactions and the development of more effective prevention approaches to reduce cancer risk. To date, researchers have relied heavily upon experimental assays (such as microarray analysis) to identify genes and their associated pathways that are affected by nutrients and diets. However, the vast number of genes and combinations of gene pathways, coupled with the expense of the experimental analyses, has delayed the progress of gene-pathway research. The development of an analytical approach based on available test data could greatly benefit the evaluation of gene pathways, and thus advance the study of nutrient-gene interactions in cancer prevention. In the present study, we propose a chain reaction model to simulate gene pathways, in which the gene expression changes through the pathway are represented by species undergoing a set of chemical reactions. We have also developed a numerical tool to solve for the species changes due to the chain reactions over time. Through this approach we can examine the impact of nutrient-containing diets on the gene pathway; moreover, the transformation of genes over time with a nutrient treatment can be observed numerically, which is very difficult to achieve experimentally. We apply this approach to microarray analysis data from an experiment that involved the effects of three polyphenols (nutrient treatments), epigallocatechin-3-O-gallate (EGCG), genistein, and resveratrol, in a study of nutrient-gene interaction in the estrogen synthesis pathway during puberty.
RESULTS: In this preliminary study, the estrogen synthesis pathway was simulated by a chain reaction model. By
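    The chain-reaction idea, expression changes represented as species in first-order reactions, can be sketched with a three-step chain integrated by forward Euler. The chain A -> B -> C and its rate constants are invented for illustration, not taken from the estrogen synthesis pathway of the study:

    ```python
    # Hypothetical three-step chain A -> B -> C with first-order kinetics;
    # species concentrations stand in for gene-expression levels.
    k1, k2 = 0.5, 0.3        # assumed rate constants
    A, B, C = 1.0, 0.0, 0.0  # initial state: all "signal" in species A
    dt, steps = 0.01, 1000   # forward-Euler integration to t = 10

    for _ in range(steps):
        r1 = k1 * A          # rate of A -> B
        r2 = k2 * B          # rate of B -> C
        A += dt * (-r1)
        B += dt * (r1 - r2)
        C += dt * r2

    print(round(A + B + C, 6))  # the chain conserves total mass
    print(A < B < C)            # by t = 10 most of the signal has reached C
    ```

    Solving such systems numerically is what lets the transformation over time be observed, which, as the abstract notes, is difficult to achieve experimentally.
    
    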

  3. External Device to Incrementally Skid the Habitat (E-DISH)

    NASA Technical Reports Server (NTRS)

    Brazell, J. W.; Introne, Steve; Bedell, Lisa; Credle, Ben; Holp, Graham; Ly, Siao; Tait, Terry

    1994-01-01

    A Mars habitat transport system was designed as part of the NASA Mars exploration program. The transport system, the External Device to Incrementally Skid the Habitat (E-DISH), will be used to transport Mars habitats from their landing sites to the colony base and will be detached after unloading. The system requirements for Mars were calculated and scaled for model purposes. Specific model materials are commonly found and recommendations for materials for the Mars design are included.

  4. A Generic Microdisturbance Transmissibility Model For Reaction Wheels

    NASA Astrophysics Data System (ADS)

    Penate Castro, Jose; Seiler, Rene

    2012-07-01

    The increasing demand for space missions with high-precision pointing requirements for their payload instruments underlines the importance of studying the impact of micro-level disturbances on the overall performance of spacecraft. For example, a satellite with an optical telescope taking high-resolution images might be very sensitive to perturbations generated by moving equipment and amplified by the structure of the equipment itself as well as that of the host spacecraft, which accommodates both the sources of mechanical disturbances and the sensitive payload instruments. One of the major sources of mechanical disturbances inside a satellite may be found with reaction wheels. For investigation of their disturbance generation and propagation characteristics, a finite element model with parametric geometry definition has been developed. The model covers the main structural features of typical reaction wheel assemblies and can be used for a transmissibility representation of the equipment. With the parametric geometry definition approach, a wide range of reaction wheel types and sizes can be analysed without the need to (re-)define an individual reaction wheel configuration from scratch. The reaction wheel model can be combined with a finite element model of the spacecraft structure and the payload for end-to-end modelling and simulation of microdisturbance generation and propagation. The finite element model has been generated in Patran® Command Language (PCL), which provides a powerful and time-efficient way to change parameters in the model for creating a new or modifying an existing geometry, without requiring comprehensive manual interactions in the modelling pre-processor. As part of the overall modelling approach, a tailored structural model of the mechanical ball bearings has been implemented, which is one of the more complex problems to deal with, among others due to the anisotropic stiffness and damping characteristics.

  5. Incremental dynamical downscaling for probabilistic analysis based on multiple GCM projections

    NASA Astrophysics Data System (ADS)

    Wakazuki, Y.

    2015-12-01

    A dynamical downscaling method for probabilistic regional-scale climate change projections was developed to cover the uncertainty of multiple general circulation model (GCM) climate simulations. The climatological increments (future minus present climate states) estimated from the GCM simulation results were statistically analyzed using singular vector decomposition. Both positive and negative perturbations from the ensemble mean, with magnitudes of one standard deviation, were extracted and added to the ensemble mean of the climatological increments. The resulting multiple modal increments were used to create multiple modal lateral boundary conditions for the future-climate regional climate model (RCM) simulations by adding them to an objective analysis data set. This treatment can be regarded as an extension of the pseudo-global-warming (PGW) method previously developed by Kimura and Kitoh (2007). The incremental handling of the GCM simulations yields approximate probabilistic climate change projections with a smaller number of RCM simulations. Three values of a climatological variable simulated by the RCMs for a mode were used to estimate the response to the perturbation of that mode. For the probabilistic analysis, climatological variables of the RCMs were assumed to respond linearly to the multiple modal perturbations, although non-linearity was seen for local-scale rainfall. The probability distribution of temperature could be estimated with two-mode perturbation simulations, where the number of RCM simulations for the future climate is five. On the other hand, local-scale rainfall needed four-mode simulations, where the number of RCM simulations is nine. The probabilistic method is expected to be used for regional-scale climate change impact assessment in the future.
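    The statistical handling of the increments can be sketched with synthetic data: compute the ensemble-mean increment, take the SVD of the anomalies, and form mean-plus/minus-one-standard-deviation modal perturbations. The ensemble size, grid, and numbers below are invented; only the procedure follows the abstract:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    # Hypothetical ensemble: 8 GCMs x 50 grid points of climatological
    # increments (future minus present climate states).
    inc = rng.normal(1.5, 0.4, size=(8, 50))

    mean = inc.mean(axis=0)
    anom = inc - mean                       # deviations from the ensemble mean

    # Singular vector decomposition of the anomalies -> leading modes.
    U, s, Vt = np.linalg.svd(anom, full_matrices=False)

    # Modal perturbation scaled to one standard deviation of its principal
    # component (s[0]/sqrt(n-1) is the std of PC 1).
    n = inc.shape[0]
    mode1 = s[0] / np.sqrt(n - 1) * Vt[0]

    # PGW-style boundary-condition increments: ensemble mean, mean +/- mode 1.
    bc_plus = mean + mode1
    bc_minus = mean - mode1
    print(bc_plus.shape, np.allclose((bc_plus + bc_minus) / 2, mean))
    ```

    Each perturbed increment field would then be added to an objective analysis to drive one RCM run, so two modes cost five future-climate simulations (mean, ±mode 1, ±mode 2), as the abstract describes.
    
    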

  6. Increment cores : how to collect, handle and use them

    Treesearch

    Robert R. Maeglin

    1979-01-01

    This paper describes increment cores (a useful tool in forestry and wood technology) and their uses which include age determination, growth increment, specific gravity determination, fiber length measurements, fibril angle measurements, cell measurements, and pathological investigations. Also described is the use and care of the increment borer which is essential in...

  7. Analytical and Experimental Investigation of Process Loads on Incremental Severe Plastic Deformation

    NASA Astrophysics Data System (ADS)

    Okan Görtan, Mehmet

    2017-05-01

    From the processing point of view, friction is a major problem in severe plastic deformation (SPD) using the equal channel angular pressing (ECAP) process. Incremental ECAP can be used to optimize frictional effects during SPD. A new incremental ECAP process has been proposed recently. This new process, called equal channel angular swaging (ECAS), combines conventional ECAP with the incremental bulk metal forming method of rotary swaging. The ECAS tool system consists of two dies with an angled channel that contains two shear zones. During the ECAS process, two forming tool halves, concentrically arranged around the workpiece, perform high-frequency radial movements with short strokes while the samples are pushed through. The oscillation direction nearly coincides with the shearing direction in the workpiece. The most important advantages in comparison to conventional ECAP are a significant reduction in the forces in the material feeding direction, plus the potential to be extended to continuous processing. In the current study, the mechanics of the ECAS process is investigated using a slip line field approach. An analytical model is developed to predict process loads. The proposed model is validated using experiments and FE simulations.

  8. Effect of task load and task load increment on performance and workload

    NASA Technical Reports Server (NTRS)

    Hancock, P. A.; Williams, G.

    1993-01-01

The goal of adaptive automated task allocation is the 'seamless' transfer of work demand between human and machine. Clearly, at the present time, we are far from this objective. One of the barriers to achieving effortless human-machine symbiosis is an inadequate understanding of the way in which operators themselves seek to reallocate demand among their own personal 'resources.' The paper addresses this through an examination of workload response, which scales an individual's reaction to common levels of experienced external demand. The results indicate that the primary driver of performance is the absolute level of task demand rather than the increment in that demand.

  9. Model parameter estimation approach based on incremental analysis for lithium-ion batteries without using open circuit voltage

    NASA Astrophysics Data System (ADS)

    Wu, Hongjie; Yuan, Shifei; Zhang, Xi; Yin, Chengliang; Ma, Xuerui

    2015-08-01

To improve the suitability of a lithium-ion battery model under varying scenarios, such as fluctuating temperature and SoC variation, a dynamic model whose parameters are updated in real time should be developed. In this paper, an incremental analysis-based auto-regressive exogenous (I-ARX) modeling method is proposed to eliminate the modeling error caused by the OCV effect and improve the accuracy of parameter estimation. Then, its numerical stability, modeling error, and parametric sensitivity are analyzed at different sampling periods (0.02, 0.1, 0.5 and 1 s). To identify the model parameters recursively, a bias-correction recursive least squares (CRLS) algorithm is applied. Finally, pseudo-random binary sequence (PRBS) and urban dynamic driving sequence (UDDS) profiles are used to verify the real-time performance and robustness of the newly proposed model and algorithm. Different sampling rates (1 Hz and 10 Hz) and multiple temperature points (5, 25, and 45 °C) are covered in our experiments. The experimental and simulation results indicate that the proposed I-ARX model achieves high accuracy and is well suited to parameter identification without using open circuit voltage.
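The recursive identification step described above can be sketched with a generic recursive least squares (RLS) loop. This is an illustrative stand-in, not the paper's bias-correction CRLS; the first-order ARX structure, forgetting factor, and parameter values are assumptions.

```python
import numpy as np

# Sketch of recursive parameter identification for a first-order ARX model
#   y[k] = a*y[k-1] + b*u[k]
# using plain recursive least squares (illustrative; the paper applies a
# bias-correction variant, CRLS). Forgetting factor and true parameters
# are assumed values.
def rls_identify(u, y, lam=0.99):
    theta = np.zeros(2)                    # estimate of [a, b]
    P = np.eye(2) * 1e3                    # large initial covariance
    for k in range(1, len(y)):
        phi = np.array([y[k - 1], u[k]])   # regressor
        K = P @ phi / (lam + phi @ P @ phi)        # gain
        theta = theta + K * (y[k] - phi @ theta)   # correct with innovation
        P = (P - np.outer(K, phi @ P)) / lam       # covariance update
    return theta

# Synthetic noiseless data generated with a = 0.9, b = 0.5
rng = np.random.default_rng(0)
u = rng.standard_normal(500)
y = np.zeros(500)
for k in range(1, 500):
    y[k] = 0.9 * y[k - 1] + 0.5 * u[k]

a_hat, b_hat = rls_identify(u, y)
print(round(a_hat, 3), round(b_hat, 3))  # recovers ~0.9 and ~0.5
```

On noiseless data the estimate converges to the true parameters; the forgetting factor lam < 1 is what lets such a scheme track slowly drifting parameters (e.g., with temperature or SoC).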

  10. An incremental approach to genetic-algorithms-based classification.

    PubMed

    Guan, Sheng-Uei; Zhu, Fangming

    2005-04-01

    Incremental learning has been widely addressed in the machine learning literature to cope with learning tasks where the learning environment is ever changing or training samples become available over time. However, most research work explores incremental learning with statistical algorithms or neural networks, rather than evolutionary algorithms. The work in this paper employs genetic algorithms (GAs) as basic learning algorithms for incremental learning within one or more classifier agents in a multiagent environment. Four new approaches with different initialization schemes are proposed. They keep the old solutions and use an "integration" operation to integrate them with new elements to accommodate new attributes, while biased mutation and crossover operations are adopted to further evolve a reinforced solution. The simulation results on benchmark classification data sets show that the proposed approaches can deal with the arrival of new input attributes and integrate them with the original input space. It is also shown that the proposed approaches can be successfully used for incremental learning and improve classification rates as compared to the retraining GA. Possible applications for continuous incremental training and feature selection are also discussed.
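The "integration" idea above (keeping old solutions while extending them to cover a new attribute) can be illustrated with a minimal sketch; the interval-rule chromosome encoding and the full-range "don't care" padding are assumptions, not the paper's exact operators.

```python
# Minimal sketch of integrating a new input attribute into existing
# interval-rule chromosomes: old solutions are kept and padded with a
# "don't care" interval spanning the new attribute's full range, so they
# still match all previously learned cases before further evolution.
# (Illustrative encoding, not the paper's exact operators.)
def integrate_new_attribute(population, new_min, new_max):
    return [rules + [(new_min, new_max)] for rules in population]

old_pop = [
    [(0.1, 0.4), (0.5, 0.9)],   # one chromosome: an interval per attribute
    [(0.2, 0.6), (0.0, 0.3)],
]
new_pop = integrate_new_attribute(old_pop, 0.0, 1.0)
print(new_pop[0])  # [(0.1, 0.4), (0.5, 0.9), (0.0, 1.0)]
```

Biased mutation and crossover would then evolve the padded intervals so the new attribute becomes discriminative.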

  11. Analytic description of the frictionally engaged in-plane bending process incremental swivel bending (ISB)

    NASA Astrophysics Data System (ADS)

    Frohn, Peter; Engel, Bernd; Groth, Sebastian

    2018-05-01

Kinematic forming processes shape geometries through the process parameters, enabling more universal process utilization across geometric configurations. The kinematic forming process Incremental Swivel Bending (ISB) bends sheet metal strips or profiles in plane. The sequence for bending an arc increment is composed of the steps clamping, bending, force release, and feed. The bending moment is frictionally engaged by two clamping units in a laterally adjustable bending pivot. A minimum clamping force preventing the material from slipping through the clamping units is a crucial criterion for achieving a well-defined incremental arc. Therefore, an analytic description of a single bent increment is developed in this paper. The bending moment is calculated from the uniaxial stress distribution over the profile's width, depending on the bending pivot's position. Using a Coulomb-based friction model, the necessary clamping force is described as a function of friction, offset, clamping tool dimensions, strip thickness, and material parameters. Boundaries for the uniaxial stress calculation are given as functions of friction, tool dimensions, and strip thickness. The results indicate that moving the bending pivot to an eccentric position significantly affects the process' bending moment and, hence, the clamping force, which is given as a function of yield stress and hardening exponent. FE simulations validate the model with satisfactory agreement.

  12. Using Analysis Increments (AI) to Estimate and Correct Systematic Errors in the Global Forecast System (GFS) Online

    NASA Astrophysics Data System (ADS)

    Bhargava, K.; Kalnay, E.; Carton, J.; Yang, F.

    2017-12-01

    Systematic forecast errors, arising from model deficiencies, form a significant portion of the total forecast error in weather prediction models like the Global Forecast System (GFS). While much effort has been expended to improve models, substantial model error remains. The aim here is to (i) estimate the model deficiencies in the GFS that lead to systematic forecast errors, (ii) implement an online correction (i.e., within the model) scheme to correct GFS following the methodology of Danforth et al. [2007] and Danforth and Kalnay [2008, GRL]. Analysis Increments represent the corrections that new observations make on, in this case, the 6-hr forecast in the analysis cycle. Model bias corrections are estimated from the time average of the analysis increments divided by 6-hr, assuming that initial model errors grow linearly and first ignoring the impact of observation bias. During 2012-2016, seasonal means of the 6-hr model bias are generally robust despite changes in model resolution and data assimilation systems, and their broad continental scales explain their insensitivity to model resolution. The daily bias dominates the sub-monthly analysis increments and consists primarily of diurnal and semidiurnal components, also requiring a low dimensional correction. Analysis increments in 2015 and 2016 are reduced over oceans, which is attributed to improvements in the specification of the SSTs. These results encourage application of online correction, as suggested by Danforth and Kalnay, for mean, seasonal and diurnal and semidiurnal model biases in GFS to reduce both systematic and random errors. As the error growth in the short-term is still linear, estimated model bias corrections can be added as a forcing term in the model tendency equation to correct online. Preliminary experiments with GFS, correcting temperature and specific humidity online show reduction in model bias in 6-hr forecast. This approach can then be used to guide and optimize the design of sub
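The core arithmetic of the online correction (time-averaging the analysis increments and adding the result back as a forcing term) can be sketched with a toy scalar "model"; the bias value, noise level, and cycle count are assumptions.

```python
import numpy as np

# Toy sketch of bias estimation from analysis increments: a scalar model
# drifts by a constant bias each 6-h cycle, observations pull the analysis
# back toward the truth, and the time-mean increment divided by 6 h
# estimates the bias tendency used for online correction. All numbers
# are assumptions.
rng = np.random.default_rng(1)
truth = 10.0
model_bias = -0.2          # model drifts -0.2 K per 6-h forecast
increments = []

state = truth
for _ in range(100):                       # 100 analysis cycles
    forecast = state + model_bias          # biased 6-h forecast
    analysis = truth + 0.01 * rng.standard_normal()  # analysis near truth
    increments.append(analysis - forecast) # analysis increment
    state = analysis

bias_tendency = np.mean(increments) / 6.0  # estimated bias per hour
corrected = state + model_bias + 6.0 * bias_tendency  # corrected forecast
print(round(corrected, 1))  # ~10.0: the drift is cancelled
```

In the real system the same average, computed per grid point and per season or time of day, would be added to the model tendency equation rather than applied after the fact.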

  13. Generalized Pseudo-Reaction Zone Model for Non-Ideal Explosives

    NASA Astrophysics Data System (ADS)

    Wescott, B. L.

    2007-12-01

    The pseudo-reaction zone model was proposed to improve engineering scale simulations with high explosives that have a slow reaction component. In this work an extension of the pseudo-reaction zone model is developed for non-ideal explosives that propagate well below the steady-planar Chapman-Jouguet velocity. A programmed burn method utilizing Detonation Shock Dynamics (DSD) and a detonation velocity dependent pseudo-reaction rate has been developed for non-ideal explosives and applied to the explosive mixture of ammonium nitrate and fuel oil (ANFO). The pseudo-reaction rate is calibrated to the experimentally obtained normal detonation velocity—shock curvature relation. Cylinder test simulations predict the proper expansion to within 1% even though significant reaction occurs as the cylinder expands.

  14. Collecting, preparing, crossdating, and measuring tree increment cores

    USGS Publications Warehouse

    Phipps, R.L.

    1985-01-01

    Techniques for collecting and handling increment tree cores are described. Procedures include those for cleaning and caring for increment borers, extracting the sample from a tree, core surfacing, crossdating, and measuring. (USGS)

  15. Incremental short daily home hemodialysis: a case series.

    PubMed

    Toth-Manikowski, Stephanie M; Mullangi, Surekha; Hwang, Seungyoung; Shafi, Tariq

    2017-07-05

    Patients starting dialysis often have substantial residual kidney function. Incremental hemodialysis provides a hemodialysis prescription that supplements patients' residual kidney function while maintaining total (residual + dialysis) urea clearance (standard Kt/Vurea) targets. We describe our experience with incremental hemodialysis in patients using NxStage System One for home hemodialysis. From 2011 to 2015, we initiated 5 incident hemodialysis patients on an incremental home hemodialysis regimen. The biochemical parameters of all patients remained stable on the incremental hemodialysis regimen and they consistently achieved standard Kt/Vurea targets. Of the two patients with follow-up >6 months, residual kidney function was preserved for ≥2 years. Importantly, the patients were able to transition to home hemodialysis without automatically requiring 5 sessions per week at the outset and gradually increased the number of treatments and/or dialysate volume as the residual kidney function declined. An incremental home hemodialysis regimen can be safely prescribed and may improve acceptability of home hemodialysis. Reducing hemodialysis frequency by even one treatment per week can reduce the number of fistula or graft cannulations or catheter connections by >100 per year, an important consideration for patient well-being, access longevity, and access-related infections. The incremental hemodialysis approach, supported by national guidelines, can be considered for all home hemodialysis patients with residual kidney function.
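The dose arithmetic behind the incremental approach can be illustrated with deliberately simplified numbers; this is not a clinical dosing formula (real prescriptions target standard Kt/Vurea), and every value below is assumed for illustration.

```python
import math

# Simplified illustration of incremental dosing (NOT a clinical formula):
# weekly urea clearance is supplemented by residual kidney function, so
# fewer sessions are needed while that function persists. All numbers
# are assumptions.
target_weekly = 100.0      # assumed weekly urea clearance target, L/week
residual = 40.0            # assumed residual kidney clearance, L/week
per_session = 15.0         # assumed clearance per dialysis session, L

sessions_incremental = math.ceil((target_weekly - residual) / per_session)
sessions_full = math.ceil(target_weekly / per_session)  # residual = 0
print(sessions_incremental, sessions_full)  # 4 vs 7 sessions per week
```

As residual clearance declines, the same arithmetic gradually raises the session count or dialysate volume, which is the trajectory the case series describes.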

  16. Top-down attention based on object representation and incremental memory for knowledge building and inference.

    PubMed

    Kim, Bumhwi; Ban, Sang-Woo; Lee, Minho

    2013-10-01

    Humans can efficiently perceive arbitrary visual objects based on an incremental learning mechanism with selective attention. This paper proposes a new task specific top-down attention model to locate a target object based on its form and color representation along with a bottom-up saliency based on relativity of primitive visual features and some memory modules. In the proposed model top-down bias signals corresponding to the target form and color features are generated, which draw the preferential attention to the desired object by the proposed selective attention model in concomitance with the bottom-up saliency process. The object form and color representation and memory modules have an incremental learning mechanism together with a proper object feature representation scheme. The proposed model includes a Growing Fuzzy Topology Adaptive Resonance Theory (GFTART) network which plays two important roles in object color and form biased attention; one is to incrementally learn and memorize color and form features of various objects, and the other is to generate a top-down bias signal to localize a target object by focusing on the candidate local areas. Moreover, the GFTART network can be utilized for knowledge inference which enables the perception of new unknown objects on the basis of the object form and color features stored in the memory during training. Experimental results show that the proposed model is successful in focusing on the specified target objects, in addition to the incremental representation and memorization of various objects in natural scenes. In addition, the proposed model properly infers new unknown objects based on the form and color features of previously trained objects. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. Multi-criteria comparative evaluation of spallation reaction models

    NASA Astrophysics Data System (ADS)

    Andrianov, Andrey; Andrianova, Olga; Konobeev, Alexandr; Korovin, Yury; Kuptsov, Ilya

    2017-09-01

This paper presents an approach to the comparative evaluation of the predictive ability of spallation reaction models based on widely used, well-proven multiple-criteria decision analysis methods (MAVT/MAUT, AHP, TOPSIS, PROMETHEE), together with the results of such a comparison for 17 spallation reaction models for the interaction of high-energy protons with natPb.

  18. Modeling human behaviors and reactions under dangerous environment.

    PubMed

    Kang, J; Wright, D K; Qin, S F; Zhao, Y

    2005-01-01

This paper describes the framework of a real-time simulation system to model human behavior and reactions in dangerous environments. The system utilizes the latest 3D computer animation techniques, combined with artificial intelligence, robotics and psychology, to model human behavior, reactions and decision making under expected/unexpected dangers in real-time in virtual environments. The development of the system includes: classification of the conscious/subconscious behaviors and reactions of different people; capturing different motion postures with the Eagle Digital System; establishing 3D character animation models; establishing 3D models for the scene; planning the scenario and the contents; and programming within Virtools Dev. Programming within Virtools Dev is subdivided into modeling dangerous events, modeling the character's perceptions, modeling the character's decision making, modeling the character's movements, modeling the character's interaction with the environment, and setting up the virtual cameras. The real-time simulation of human reactions in hazardous environments is invaluable in military defense, fire escape, rescue operation planning, traffic safety studies, and safety planning in chemical factories and in the design of buildings, airplanes, ships and trains. Currently, human motion modeling can be realized through established technology, whereas integrating perception and intelligence into a virtual human's motion is still a huge undertaking. The challenges here are the synchronization of motion and intelligence; the accurate modeling of human vision, smell, touch and hearing; and the diversity and effects of emotion and personality in decision making. There are three types of software platforms that could be employed to realize motion and intelligence within one system, and their advantages and disadvantages are discussed.

  19. 40 CFR 60.1610 - How do I comply with the increment of progress for submittal of a control plan?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES... Before August 30, 1999 Model Rule-Increments of Progress § 60.1610 How do I comply with the increment of...

  20. Quasi-brittle damage modeling based on incremental energy relaxation combined with a viscous-type regularization

    NASA Astrophysics Data System (ADS)

    Langenfeld, K.; Junker, P.; Mosler, J.

    2018-05-01

    This paper deals with a constitutive model suitable for the analysis of quasi-brittle damage in structures. The model is based on incremental energy relaxation combined with a viscous-type regularization. A similar approach—which also represents the inspiration for the improved model presented in this paper—was recently proposed in Junker et al. (Contin Mech Thermodyn 29(1):291-310, 2017). Within this work, the model introduced in Junker et al. (2017) is critically analyzed first. This analysis leads to an improved model which shows the same features as that in Junker et al. (2017), but which (i) eliminates unnecessary model parameters, (ii) can be better interpreted from a physics point of view, (iii) can capture a fully softened state (zero stresses), and (iv) is characterized by a very simple evolution equation. In contrast to the cited work, this evolution equation is (v) integrated fully implicitly and (vi) the resulting time-discrete evolution equation can be solved analytically providing a numerically efficient closed-form solution. It is shown that the final model is indeed well-posed (i.e., its tangent is positive definite). Explicit conditions guaranteeing this well-posedness are derived. Furthermore, by additively decomposing the stress rate into deformation- and purely time-dependent terms, the functionality of the model is explained. Illustrative numerical examples confirm the theoretical findings.

  1. Structure-reactivity modeling using mixture-based representation of chemical reactions.

    PubMed

    Polishchuk, Pavel; Madzhidov, Timur; Gimadiev, Timur; Bodrov, Andrey; Nugmanov, Ramil; Varnek, Alexandre

    2017-09-01

We describe a novel approach of representing a reaction as a combination of two mixtures: a mixture of reactants and a mixture of products. In turn, each mixture can be encoded using an earlier reported approach involving simplex descriptors (SiRMS). The feature vector representing these two mixtures results from either concatenated product and reactant descriptors or the difference between descriptors of products and reactants. This reaction representation does not require explicit labeling of a reaction center. A rigorous "product-out" cross-validation (CV) strategy has been suggested. Unlike the naïve "reaction-out" CV approach based on a random selection of items, the proposed one provides a more realistic estimation of prediction accuracy for reactions resulting in novel products. The new methodology has been applied to model rate constants of E2 reactions. It has been demonstrated that the use of the fragment control domain applicability approach significantly increases the prediction accuracy of the models. The models obtained with the new "mixture" approach performed better than those requiring either explicit (Condensed Graph of Reaction) or implicit (reaction fingerprints) reaction center labeling.
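The two feature-vector constructions can be shown in a few lines; the four-component fragment counts below are made-up stand-ins for SiRMS descriptors.

```python
import numpy as np

# Toy stand-ins for SiRMS simplex descriptors of the two mixtures
# (made-up fragment counts, for illustration only).
reactant_desc = np.array([2, 0, 1, 3])   # mixture of reactants
product_desc = np.array([1, 1, 1, 2])    # mixture of products

# The two encodings described above; neither needs a labeled reaction center.
concatenated = np.concatenate([reactant_desc, product_desc])
difference = product_desc - reactant_desc

print(concatenated.tolist())  # [2, 0, 1, 3, 1, 1, 1, 2]
print(difference.tolist())    # [-1, 1, 0, -1]
```

The difference encoding makes fragments untouched by the reaction cancel out, which is what lets the representation localize chemistry without an explicit reaction-center label.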

  2. Quasiglobal reaction model for ethylene combustion

    NASA Technical Reports Server (NTRS)

    Singh, D. J.; Jachimowski, Casimir J.

    1994-01-01

    The objective of this study is to develop a reduced mechanism for ethylene oxidation. The authors are interested in a model with a minimum number of species and reactions that still models the chemistry with reasonable accuracy for the expected combustor conditions. The model will be validated by comparing the results to those calculated with a detailed kinetic model that has been validated against the experimental data.

  3. A reaction-based paradigm to model reactive chemical transport in groundwater with general kinetic and equilibrium reactions.

    PubMed

    Zhang, Fan; Yeh, Gour-Tsyh; Parker, Jack C; Brooks, Scott C; Pace, Molly N; Kim, Young-Jin; Jardine, Philip M; Watson, David B

    2007-06-16

    This paper presents a reaction-based water quality transport model in subsurface flow systems. Transport of chemical species with a variety of chemical and physical processes is mathematically described by M partial differential equations (PDEs). Decomposition via Gauss-Jordan column reduction of the reaction network transforms M species reactive transport equations into two sets of equations: a set of thermodynamic equilibrium equations representing N(E) equilibrium reactions and a set of reactive transport equations of M-N(E) kinetic-variables involving no equilibrium reactions (a kinetic-variable is a linear combination of species). The elimination of equilibrium reactions from reactive transport equations allows robust and efficient numerical integration. The model solves the PDEs of kinetic-variables rather than individual chemical species, which reduces the number of reactive transport equations and simplifies the reaction terms in the equations. A variety of numerical methods are investigated for solving the coupled transport and reaction equations. Simulation comparisons with exact solutions were performed to verify numerical accuracy and assess the effectiveness of various numerical strategies to deal with different application circumstances. Two validation examples involving simulations of uranium transport in soil columns are presented to evaluate the ability of the model to simulate reactive transport with complex reaction networks involving both kinetic and equilibrium reactions.
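The decomposition idea (kinetic-variables as combinations of species unaffected by the equilibrium reactions) can be sketched on a toy three-species network; the network itself, and the SVD route to the left null space standing in for Gauss-Jordan column reduction, are illustrative assumptions.

```python
import numpy as np

# Toy sketch: kinetic-variables are linear combinations of species lying
# in the left null space of the equilibrium columns of the stoichiometric
# matrix. The network (A <=> B equilibrium; B -> C kinetic) is
# illustrative, and SVD stands in for Gauss-Jordan column reduction.
S_eq = np.array([[-1.0],    # A   (column: A <=> B equilibrium reaction)
                 [ 1.0],    # B
                 [ 0.0]])   # C

def left_null_space(S, tol=1e-10):
    U, s, Vt = np.linalg.svd(S.T)  # null space of S^T = left null space of S
    rank = int(np.sum(s > tol))
    return Vt[rank:]               # rows span the left null space

kin_vars = left_null_space(S_eq)        # 2 kinetic-variables for 3 species
print(kin_vars.shape)                   # (2, 3)
print(np.allclose(kin_vars @ S_eq, 0))  # True: invariant under equilibrium
```

With M = 3 species and N(E) = 1 equilibrium reaction, transport PDEs are solved only for the M - N(E) = 2 kinetic-variables, while the equilibrium relation is enforced algebraically.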

  4. Incorporating reaction-rate dependence in reaction-front models of wellbore-cement/carbonated-brine systems

    DOE PAGES

    Iyer, Jaisree; Walsh, Stuart D. C.; Hao, Yue; ...

    2017-03-08

Contact between wellbore cement and carbonated brine produces reaction zones that alter the cement's chemical composition and its mechanical properties. The reaction zones have profound implications on the ability of wellbore cement to serve as a seal to prevent the flow of carbonated brine. Under certain circumstances, the reactions may cause resealing of leakage pathways within the cement or at cement-interfaces; either due to fracture closure in response to mechanical weakening or due to the precipitation of calcium carbonate within the fracture. In prior work, we showed how mechanical sealing can be simulated using a diffusion-controlled reaction-front model that links the growth of the cement reaction zones to the mechanical response of the fracture. Here, we describe how such models may be extended to account for the effects of the calcite reaction-rate. We discuss how the relative rates of reaction and diffusion within the cement affect the precipitation of calcium carbonate within narrow leakage pathways, and how such behavior relates to the formation of characteristic reaction modes in the direction of flow. In addition, we compare the relative impact of precipitation and mechanical deformation on fracture sealing for a range of flow conditions and fracture apertures. We conclude by considering how the prior leaching of calcium from cement may influence the sealing behavior of fractures, and the implication of prior leaching on the ability of laboratory tests to predict long-term sealing.

  5. Incorporating reaction-rate dependence in reaction-front models of wellbore-cement/carbonated-brine systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iyer, Jaisree; Walsh, Stuart D. C.; Hao, Yue

Contact between wellbore cement and carbonated brine produces reaction zones that alter the cement's chemical composition and its mechanical properties. The reaction zones have profound implications on the ability of wellbore cement to serve as a seal to prevent the flow of carbonated brine. Under certain circumstances, the reactions may cause resealing of leakage pathways within the cement or at cement-interfaces; either due to fracture closure in response to mechanical weakening or due to the precipitation of calcium carbonate within the fracture. In prior work, we showed how mechanical sealing can be simulated using a diffusion-controlled reaction-front model that links the growth of the cement reaction zones to the mechanical response of the fracture. Here, we describe how such models may be extended to account for the effects of the calcite reaction-rate. We discuss how the relative rates of reaction and diffusion within the cement affect the precipitation of calcium carbonate within narrow leakage pathways, and how such behavior relates to the formation of characteristic reaction modes in the direction of flow. In addition, we compare the relative impact of precipitation and mechanical deformation on fracture sealing for a range of flow conditions and fracture apertures. We conclude by considering how the prior leaching of calcium from cement may influence the sealing behavior of fractures, and the implication of prior leaching on the ability of laboratory tests to predict long-term sealing.

  6. The incremental impact of cardiac MRI on clinical decision-making.

    PubMed

    Rajwani, Adil; Stewart, Michael J; Richardson, James D; Child, Nicholas M; Maredia, Neil

    2016-01-01

    Despite a significant expansion in the use of cardiac MRI (CMR), there is inadequate evaluation of its incremental impact on clinical decision-making over and above other well-established modalities. We sought to determine the incremental utility of CMR in routine practice. 629 consecutive CMR studies referred by 44 clinicians from 9 institutions were evaluated. Pre-defined algorithms were used to determine the incremental influence on diagnostic thinking, influence on clinical management and thus the overall clinical utility. Studies were also subdivided and evaluated according to the indication for CMR. CMR provided incremental information to the clinician in 85% of cases, with incremental influence on diagnostic thinking in 85% of cases and incremental impact on management in 42% of cases. The overall incremental utility of CMR exceeded 90% in 7 out of the 13 indications, whereas in settings such as the evaluation of unexplained ventricular arrhythmia or mild left ventricular systolic dysfunction, this was <50%. CMR was frequently able to inform and influence decision-making in routine clinical practice, even with analyses that accepted only incremental clinical information and excluded a redundant duplication of imaging. Significant variations in yield were noted according to the indication for CMR. These data support a wider integration of CMR services into cardiac imaging departments. These data are the first to objectively evaluate the incremental value of a UK CMR service in clinical decision-making. Such data are essential when seeking justification for a CMR service.

  7. Incremental viscosity by non-equilibrium molecular dynamics and the Eyring model

    NASA Astrophysics Data System (ADS)

    Heyes, D. M.; Dini, D.; Smith, E. R.

    2018-05-01

The viscoelastic behavior of sheared fluids is calculated by Non-Equilibrium Molecular Dynamics (NEMD) simulation, and complementary analytic solutions of a time-dependent extension of Eyring's model (EM) for shear thinning are derived. It is argued that an "incremental viscosity," ηi or IV, which is the derivative of the steady-state stress with respect to the shear rate, is a better measure of the physical state of the system than the conventional definition of the shear-rate-dependent viscosity (i.e., the shear stress divided by the strain rate). The stress relaxation function, Ci(t), associated with ηi is consistent with Boltzmann's superposition principle and is computed by NEMD and the EM. The IV of the Eyring model is shown to be a special case of the Carreau formula for shear thinning. An analytic solution for the transient time correlation function of the EM is derived. An extension of the EM that allows for significant local shear stress fluctuations on a molecular level, represented by a Gaussian distribution, is shown to have the same analytic form as the original EM but with the EM stress replaced by its time and spatial average. Even at high shear rates and on small scales, the probability distribution function is almost Gaussian (apart from in the wings), with the peak shifted by the shear. The Eyring formula approximately satisfies the Fluctuation Theorem, which may in part explain its success in representing the shear-thinning curves of a wide range of different types of chemical systems.
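The claim that the Eyring model's incremental viscosity is a special case of the Carreau formula can be checked numerically; the stress amplitude and relaxation time below are illustrative.

```python
import numpy as np

# Numerical check that the Eyring incremental viscosity has Carreau form.
# For Eyring stress sigma = sigma0 * asinh(tau * gdot), differentiating:
#   eta_i = d(sigma)/d(gdot) = sigma0 * tau / sqrt(1 + (tau*gdot)**2),
# i.e. the Carreau formula eta_i = eta0 * (1 + (lam*gdot)**2)**((n - 1)/2)
# with eta0 = sigma0*tau, lam = tau, n = 0. Parameters are illustrative.
sigma0, tau = 2.0, 0.5
gdot = np.linspace(0.1, 50.0, 2000)

sigma = sigma0 * np.arcsinh(tau * gdot)
eta_i_numeric = np.gradient(sigma, gdot)        # d(sigma)/d(gdot)
eta_i_carreau = sigma0 * tau / np.sqrt(1.0 + (tau * gdot) ** 2)

max_err = float(np.max(np.abs(eta_i_numeric - eta_i_carreau)))
print(max_err)  # small; limited only by finite-difference error
```

Note that eta_i falls off as 1/gdot at high shear rates, faster than the conventional sigma/gdot definition, which is the sense in which it better tracks the changing state of the sheared fluid.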

  8. Effects of frequency and duration on psychometric functions for detection of increments and decrements in sinusoids in noise.

    PubMed

    Moore, B C; Peters, R W; Glasberg, B R

    1999-12-01

Psychometric functions for detecting increments or decrements in level of sinusoidal pedestals were measured for increment and decrement durations of 5, 10, 20, 50, 100, and 200 ms and for frequencies of 250, 1000, and 4000 Hz. The sinusoids were presented in background noise intended to mask spectral splatter. A three-interval, three-alternative procedure was used. The results indicated that, for increments, the detectability index d' was approximately proportional to delta I/I. For decrements, d' was approximately proportional to delta L. The slopes of the psychometric functions increased (indicating better performance) with increasing frequency for both increments and decrements. For increments, the slopes increased with increasing increment duration up to 200 ms at 250 and 1000 Hz, but at 4000 Hz they increased only up to 50 ms. For decrements, the slopes increased for durations up to 50 ms, and then remained roughly constant, for all frequencies. For a center frequency of 250 Hz, the slopes of the psychometric functions for increment detection increased with duration more rapidly than predicted by a "multiple-looks" hypothesis, i.e., more rapidly than the square root of duration, for durations up to 50 ms. For center frequencies of 1000 and 4000 Hz, the slopes increased less rapidly than predicted by a multiple-looks hypothesis, for durations greater than about 20 ms. The slopes of the psychometric functions for decrement detection increased with decrement duration at a rate slightly greater than the square root of duration, for durations up to 50 ms, at all three frequencies. For greater durations, the increase in slope was less than proportional to the square root of duration. The results were analyzed using a model incorporating a simulated auditory filter, a compressive nonlinearity, a sliding temporal integrator, and a decision device based on a template mechanism. The model took into account the effects of both the external noise and an assumed internal

  9. Power-law confusion: You say incremental, I say differential

    NASA Technical Reports Server (NTRS)

    Colwell, Joshua E.

    1993-01-01

Power-law distributions are commonly used to describe the frequency of occurrences of crater diameters, stellar masses, ring particle sizes, planetesimal sizes, and meteoroid masses, to name a few. The distributions are simple, and this simplicity has led to a number of misstatements in the literature about the kind of power-law that is being used: differential, cumulative, or incremental. Although differential and cumulative power-laws are mathematically trivial, it is a hybrid incremental distribution that is often used, and the relationship between the incremental distribution and the differential or cumulative distributions is not trivial. In many cases the slope of an incremental power-law will be nearly identical to the slope of the cumulative power-law of the same distribution, not the differential slope. The discussion that follows argues for a consistent usage of these terms and against the oft-made implicit claim that incremental and differential distributions are indistinguishable.
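The point about slopes can be verified in a few lines: binning an exact differential power law into logarithmic ("incremental") bins recovers the cumulative slope, not the differential one. The exponent and bin edges are illustrative.

```python
import numpy as np

# Verify the slope claim: for a differential power law n(D) = D**(-p),
# counts in logarithmically spaced bins (an incremental distribution)
# fall off with the cumulative slope -(p - 1), not the differential -p.
p = 3.5                                   # differential slope (illustrative)
edges = np.logspace(0, 3, 16)             # log-spaced bin edges
# exact per-bin counts from integrating D**(-p) across each bin
counts = (edges[:-1] ** (1 - p) - edges[1:] ** (1 - p)) / (p - 1)
centers = np.sqrt(edges[:-1] * edges[1:]) # geometric bin centers

slope = np.polyfit(np.log10(centers), np.log10(counts), 1)[0]
print(round(slope, 2))  # -2.5, i.e. -(p - 1): the cumulative slope
```

The result is exact here because each log bin spans a fixed ratio k of diameters, so the integrated count is proportional to D**(1 - p) with a constant prefactor (1 - k**(1 - p)) / (p - 1).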

  10. Geographic Variations in Incremental Costs of Heart Disease Among Medicare Beneficiaries, by Type of Service, 2012.

    PubMed

    Wakim, Rita; Ritchey, Matthew; Hockenberry, Jason; Casper, Michele

    2016-12-29

    Using 2012 data on fee-for-service Medicare claims, we documented regional and county variation in incremental standardized costs of heart disease (ie, comparing costs between beneficiaries with heart disease and beneficiaries without heart disease) by type of service (eg, inpatient, outpatient, post-acute care). Absolute incremental total costs varied by region. Although the largest absolute incremental total costs of heart disease were concentrated in southern and Appalachian counties, geographic patterns of costs varied by type of service. These data can be used to inform development of policies and payment models that address the observed geographic disparities.

  11. Validation of daily increments in otoliths of northern squawfish larvae

    USGS Publications Warehouse

    Wertheimer, R.H.; Barfoot, C.A.

    1998-01-01

    Otoliths from laboratory-reared northern squawfish, Ptychocheilus oregonensis, larvae were examined to determine the periodicity of increment deposition. Increment deposition began in both sagittae and lapilli after hatching. Reader counts indicated that increment formation was daily in sagittae of 1-29-day-old larvae. However, increment counts from lapilli were significantly less than the known ages of northern squawfish larvae, possibly because some increments were not detectable. Otolith readability and age agreement among readers were greatest for young (<11 days) northern squawfish larvae. This was primarily because a transitional zone of low-contrast material began forming in otoliths of 8-11-day-old larvae and persisted until approximately 20 days after hatching. Formation of the transition zone appeared to coincide with the onset of exogenous feeding and continued through yolk sac absorption. Our results indicate that aging wild-caught northern squawfish larvae using daily otolith increment counts is possible.

  12. Reaction Wheel Disturbance Model Extraction Software - RWDMES

    NASA Technical Reports Server (NTRS)

    Blaurock, Carl

    2009-01-01

The RWDMES is a tool for modeling the disturbances imparted on spacecraft by spinning reaction wheels. Reaction wheels are usually the largest disturbance source on a precision pointing spacecraft, and can be the dominating source of pointing error. Accurate knowledge of the disturbance environment is critical to accurate prediction of the pointing performance. In the past, it has been difficult to extract an accurate wheel disturbance model since the forcing mechanisms are difficult to model physically, and the forcing amplitudes are filtered by the dynamics of the reaction wheel. RWDMES captures the wheel-induced disturbances using a hybrid physical/empirical model that is extracted directly from measured forcing data. The empirical models capture the tonal forces that occur at harmonics of the spin rate, and the broadband forces that arise from random effects. The empirical forcing functions are filtered by a physical model of the wheel structure that includes spin-rate-dependent moments (gyroscopic terms). The resulting hybrid model creates a highly accurate prediction of wheel-induced forces. It accounts for variation in disturbance frequency, as well as the shifts in structural amplification by the whirl modes, as the spin rate changes. This software provides a point-and-click environment for producing accurate models with minimal user effort. Where conventional approaches may take weeks to produce a model of variable quality, RWDMES can create a demonstrably high accuracy model in two hours. The software consists of a graphical user interface (GUI) that enables the user to specify all analysis parameters, to evaluate analysis results and to iteratively refine the model. Underlying algorithms automatically extract disturbance harmonics, initialize and tune harmonic models, and initialize and tune broadband noise models. The component steps are described in the RWDMES user's guide and include converting time domain data to waterfall PSDs (power spectral densities).
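As a rough illustration of that first component step (time-domain force data to a PSD), here is a minimal numpy-only periodogram sketch; the function, signal, and parameters are invented for illustration and are not part of RWDMES. A waterfall PSD would stack such spectra across wheel spin rates.

```python
import numpy as np

def psd(x, fs):
    """One-sided Hann-windowed periodogram estimate of the PSD."""
    n = len(x)
    w = np.hanning(n)
    X = np.fft.rfft(x * w)
    scale = 1.0 / (fs * np.sum(w ** 2))   # standard periodogram scaling
    p = scale * np.abs(X) ** 2
    p[1:-1] *= 2.0                        # fold in negative frequencies
    return np.fft.rfftfreq(n, 1.0 / fs), p

# Synthetic force trace: one wheel harmonic at 50 Hz plus broadband noise.
fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 50.0 * t) + 0.1 * rng.standard_normal(t.size)

f, p = psd(x, fs)
peak = f[np.argmax(p)]                    # dominant tone
print(peak)
```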

  13. Incremental Contingency Planning

    NASA Technical Reports Server (NTRS)

    Dearden, Richard; Meuleau, Nicolas; Ramakrishnan, Sailesh; Smith, David E.; Washington, Rich

    2003-01-01

    There has been considerable work in AI on planning under uncertainty. However, this work generally assumes an extremely simple model of action that does not consider continuous time and resources. These assumptions are not reasonable for a Mars rover, which must cope with uncertainty about the duration of tasks, the energy required, the data storage necessary, and its current position and orientation. In this paper, we outline an approach to generating contingency plans when the sources of uncertainty involve continuous quantities such as time and resources. The approach involves first constructing a "seed" plan, and then incrementally adding contingent branches to this plan in order to improve utility. The challenge is to figure out the best places to insert contingency branches. This requires an estimate of how much utility could be gained by building a contingent branch at any given place in the seed plan. Computing this utility exactly is intractable, but we outline an approximation method that back propagates utility distributions through a graph structure similar to that of a plan graph.

  14. Longitudinal associations between dental caries increment and risk factors in late childhood and adolescence.

    PubMed

    Curtis, Alexandra M; VanBuren, John; Cavanaugh, Joseph E; Warren, John J; Marshall, Teresa A; Levy, Steven M

    2018-05-12

    To assess longitudinal associations between permanent tooth caries increment and both modifiable and non-modifiable risk factors, using best subsets model selection. The Iowa Fluoride Study has followed a birth cohort with standardized caries exams without radiographs of the permanent dentition conducted at about ages 9, 13, and 17 years. Questionnaires were sent semi-annually to assess fluoride exposures and intakes, select food and beverage intakes, and tooth brushing frequency. Exposure variables were averaged over ages 7-9, 11-13, and 15-17, reflecting exposure 2 years prior to the caries exam. Longitudinal models were used to relate period-specific averaged exposures and demographic variables to adjusted decayed and filled surface increments (ADJCI) (n = 392). The Akaike Information Criterion (AIC) was used to assess optimal explanatory variable combinations. From birth to age 9, 9-13, and 13-17 years, 24, 30, and 55 percent of subjects had positive permanent ADJCI, respectively. Ten models had AIC values within two units of the lowest AIC model and were deemed optimal based on AIC. Younger age, being male, higher mother's education, and higher brushing frequency were associated with lower caries increment in all 10 models, while milk intake was included in 3 of 10 models. Higher milk intakes were slightly associated with lower ADJCI. With the exception of brushing frequency, modifiable risk factors under study were not significantly associated with ADJCI. When possible, researchers should consider presenting multiple models if fit criteria cannot discern among a group of optimal models. © 2018 American Association of Public Health Dentistry.
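The AIC selection rule used above (models within two units of the minimum AIC deemed comparably supported) can be sketched directly; the candidate models and log-likelihood values below are hypothetical, not the study's.

```python
def aic(k, log_lik):
    """Akaike Information Criterion: 2k - 2 ln(L)."""
    return 2 * k - 2 * log_lik

# Hypothetical candidates: name -> (number of parameters, log-likelihood)
candidates = {
    "age+sex": (3, -412.2),
    "age+sex+brushing": (4, -408.9),
    "age+sex+brushing+milk": (5, -408.5),
}

scores = {name: aic(k, ll) for name, (k, ll) in candidates.items()}
best = min(scores.values())
# Retain every model within 2 AIC units of the minimum.
optimal_set = sorted(name for name, s in scores.items() if s - best <= 2.0)
print(optimal_set)
```

With these numbers the first model falls more than two units above the minimum and is dropped, while the remaining two are reported together, mirroring the paper's practice of presenting multiple optimal models.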

  15. Calibration of Complex Subsurface Reaction Models Using a Surrogate-Model Approach

    EPA Science Inventory

    Application of model assessment techniques to complex subsurface reaction models involves numerous difficulties, including non-trivial model selection, parameter non-uniqueness, and excessive computational burden. To overcome these difficulties, this study introduces SAMM (Simult...

  16. Modelling the Maillard reaction during the cooking of a model cheese.

    PubMed

    Bertrand, Emmanuel; Meyer, Xuân-Mi; Machado-Maturana, Elizabeth; Berdagué, Jean-Louis; Kondjoyan, Alain

    2015-10-01

During processing and storage of industrial processed cheese, odorous compounds are formed. Some of them are potentially unwanted for the flavour of the product. To reduce the appearance of these compounds, a methodological approach was employed. It consists of: (i) the identification of the key compounds or precursors responsible for the off-flavour observed, (ii) the monitoring of these markers during the heat treatments applied to the cheese medium, (iii) the establishment of an observable reaction scheme adapted from a literature survey to the compounds identified in the heated cheese medium, and (iv) the multi-response stoichiokinetic modelling of these reaction markers. Systematic two-dimensional gas chromatography time-of-flight mass spectrometry was used for the semi-quantitation of trace compounds. Precursors were quantitated by high-performance liquid chromatography. The experimental data obtained were fitted to the model with 14 elementary linked reactions forming a multi-response observable reaction scheme. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Tracking and recognition face in videos with incremental local sparse representation model

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Wang, Yunhong; Zhang, Zhaoxiang

    2013-10-01

This paper addresses the problem of tracking and recognizing faces via incremental local sparse representation. First, a robust face tracking algorithm is proposed, employing a local sparse appearance model and covariance pooling. In the subsequent face recognition stage, a novel template update strategy that incorporates incremental subspace learning allows the recognition algorithm to adapt the template to appearance changes and to reduce the influence of occlusion and illumination variation. This leads to robust video-based face tracking and recognition with desirable performance. In the experiments, we test the quality of face recognition in real-world noisy videos on the YouTube database, which includes 47 celebrities. Our proposed method achieves a high face recognition rate of 95% across all videos. The proposed face tracking and recognition algorithms are also tested on a set of noisy videos under heavy occlusion and illumination variation. The tracking results on challenging benchmark videos demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods. Our proposed method also consistently demonstrates a high recognition rate on the challenging dataset in which faces undergo occlusion and illumination variation, and in tracking and recognition experiments under significant pose variation on the University of California, San Diego (Honda/UCSD) database.

  18. EMPIRE: A Reaction Model Code for Nuclear Astrophysics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palumbo, A., E-mail: apalumbo@bnl.gov; Herman, M.; Capote, R.

The correct modeling of abundances requires knowledge of nuclear cross sections for a variety of neutron-, charged-particle- and γ-induced reactions. These involve targets far from stability and are therefore difficult (or currently impossible) to measure. Nuclear reaction theory provides the only way to estimate values of such cross sections. In this paper we present an application of the EMPIRE reaction code to nuclear astrophysics. Recent measurements are compared to the calculated cross sections, showing consistent agreement for n-, p- and α-induced reactions of astrophysical relevance.

  19. 40 CFR 60.2595 - What if I do not meet an increment of progress?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emissions Guidelines and Compliance Times for Commercial and Industrial Solid Waste Incineration Units Model Rule-Increments of...

  20. 40 CFR 60.2595 - What if I do not meet an increment of progress?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emissions Guidelines and Compliance Times for Commercial and Industrial Solid Waste Incineration Units Model Rule-Increments of...

  1. 40 CFR 60.2595 - What if I do not meet an increment of progress?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emissions Guidelines and Compliance Times for Commercial and Industrial Solid Waste Incineration Units Model Rule-Increments of...

  2. Retrosynthetic Reaction Prediction Using Neural Sequence-to-Sequence Models

    PubMed Central

    2017-01-01

    We describe a fully data driven model that learns to perform a retrosynthetic reaction prediction task, which is treated as a sequence-to-sequence mapping problem. The end-to-end trained model has an encoder–decoder architecture that consists of two recurrent neural networks, which has previously shown great success in solving other sequence-to-sequence prediction tasks such as machine translation. The model is trained on 50,000 experimental reaction examples from the United States patent literature, which span 10 broad reaction types that are commonly used by medicinal chemists. We find that our model performs comparably with a rule-based expert system baseline model, and also overcomes certain limitations associated with rule-based expert systems and with any machine learning approach that contains a rule-based expert system component. Our model provides an important first step toward solving the challenging problem of computational retrosynthetic analysis. PMID:29104927

  3. Effect of high altitude exposure on the hemodynamics of the bidirectional Glenn physiology: modeling incremented pulmonary vascular resistance and heart rate.

    PubMed

    Vallecilla, Carolina; Khiabani, Reza H; Sandoval, Néstor; Fogel, Mark; Briceño, Juan Carlos; Yoganathan, Ajit P

    2014-06-03

The considerable blood mixing in the bidirectional Glenn (BDG) physiology further limits the capacity of the single working ventricle to pump enough oxygenated blood to the circulatory system. This condition is exacerbated under severe conditions such as physical activity or high altitude. In this study, the effect of high altitude exposure on the hemodynamics and ventricular function of the BDG physiology is investigated. For this purpose, a mathematical approach based on a lumped parameter model was developed to model the BDG circulation. Catheterization data from 39 BDG patients at stabilized oxygen conditions were used to determine baseline flows and pressures for the model. The effect of high altitude exposure was modeled by increasing the pulmonary vascular resistance (PVR) and heart rate (HR) in increments up to 80% and 40%, respectively. The resulting differences in vascular flows, pressures and ventricular function parameters were analyzed. By simultaneously increasing PVR and HR, significant changes (p < 0.05) were observed in cardiac index (11% increase at an 80% PVR and 40% HR increase) and pulmonary flow (26% decrease at an 80% PVR and 40% HR increase). A significant increase in mean systemic pressure (9%) was observed at an 80% PVR (40% HR) increase. The results show that the poor ventricular function fails to overcome the increased preload and the implied low oxygenation in BDG patients at higher altitudes, especially for those with high baseline PVRs. The presented mathematical model provides a framework to estimate the hemodynamic performance of BDG patients at different PVR increments. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. BMI and BMI SDS in childhood: annual increments and conditional change.

    PubMed

    Brannsether, Bente; Eide, Geir Egil; Roelants, Mathieu; Bjerknes, Robert; Júlíusson, Pétur Benedikt

    2017-02-01

Background: Early detection of abnormal weight gain in childhood may be important for preventive purposes. It is still debated which annual changes in BMI should warrant attention. Aim: To analyse 1-year increments of Body Mass Index (BMI) and standardised BMI (BMI SDS) in childhood and explore conditional change in BMI SDS as an alternative method to evaluate 1-year changes in BMI. Subjects and methods: The distributions of 1-year increments of BMI (kg/m²) and BMI SDS are summarised by percentiles. Differences according to sex, age, height, weight, initial BMI and weight status on the BMI and BMI SDS increments were assessed with multiple linear regression. Conditional change in BMI SDS was based on the correlation between annual BMI measurements converted to SDS. Results: BMI increments depended significantly on sex, height, weight and initial BMI. Changes in BMI SDS depended significantly only on the initial BMI SDS. The distribution of conditional change in BMI SDS using a two-correlation model was close to normal (mean = 0.11, SD = 1.02, n = 1167), with 3.2% (2.3-4.4%) of the observations below -2 SD and 2.8% (2.0-4.0%) above +2 SD. Conclusion: Conditional change in BMI SDS can be used to detect unexpectedly large changes in BMI SDS. Although this method requires the use of a computer, it may be clinically useful to detect aberrant weight development.
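The abstract does not give the formula, but conditional change in SDS is conventionally computed as the observed follow-up z-score minus the value predicted from the baseline z-score and the inter-age correlation r, rescaled to unit variance: (z2 − r·z1)/√(1 − r²). A minimal sketch under that assumption (the r value is illustrative):

```python
import math

def conditional_change(z1, z2, r):
    """Conditional change in SDS: observed z2 minus the value expected
    from baseline z1 given inter-age correlation r, rescaled so the
    result is itself approximately a standard normal deviate."""
    return (z2 - r * z1) / math.sqrt(1.0 - r ** 2)

# A child at +1 SD whose BMI SDS is unchanged a year later (r assumed 0.8):
print(conditional_change(1.0, 1.0, 0.8))   # 0.2 / 0.6 = 0.333...
```

Values beyond about ±2 on this conditional scale would flag the unexpectedly large annual changes the authors describe.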

  5. EMPIRE: Nuclear Reaction Model Code System for Data Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herman, M.; Capote, R.; Carlson, B.V.

EMPIRE is a modular system of nuclear reaction codes, comprising various nuclear models, and designed for calculations over a broad range of energies and incident particles. A projectile can be a neutron, proton, any ion (including heavy ions) or a photon. The energy range extends from the beginning of the unresolved resonance region for neutron-induced reactions (~keV) and goes up to several hundred MeV for heavy-ion induced reactions. The code accounts for the major nuclear reaction mechanisms, including direct, pre-equilibrium and compound nucleus ones. Direct reactions are described by a generalized optical model (ECIS03) or by the simplified coupled-channels approach (CCFUS). The pre-equilibrium mechanism can be treated by a deformation-dependent multi-step direct (ORION + TRISTAN) model, by a NVWY multi-step compound one, by a pre-equilibrium exciton model with cluster emission (PCROSS), or by another with full angular momentum coupling (DEGAS). Finally, the compound nucleus decay is described by the full-featured Hauser-Feshbach model with γ-cascade and width fluctuations. Advanced treatment of the fission channel takes into account transmission through a multiple-humped fission barrier with absorption in the wells. The fission probability is derived in the WKB approximation within the optical model of fission. Several options for nuclear level densities include the EMPIRE-specific approach, which accounts for the effects of the dynamic deformation of a fast rotating nucleus, the classical Gilbert-Cameron approach, and pre-calculated tables obtained with a microscopic model based on HFB single-particle level schemes with collective enhancement. A comprehensive library of input parameters covers nuclear masses, optical model parameters, ground state deformations, discrete levels and decay schemes, level densities, fission barriers, moments of inertia and γ-ray strength functions. The results can be converted into ENDF-6 format.

  6. Pornographic image recognition and filtering using incremental learning in compressed domain

    NASA Astrophysics Data System (ADS)

    Zhang, Jing; Wang, Chao; Zhuo, Li; Geng, Wenhao

    2015-11-01

With the rapid development and popularity of the network, the openness, anonymity, and interactivity of networks have led to the spread and proliferation of pornographic images on the Internet, which do great harm to adolescents' physical and mental health. With the establishment of image compression standards, pornographic images are mainly stored in compressed formats. Therefore, how to efficiently filter pornographic images is one of the challenging issues for information security. A pornographic image recognition and filtering method in the compressed domain is proposed using incremental learning, which includes the following steps: (1) low-resolution (LR) images are first reconstructed from the compressed stream of pornographic images; (2) visual words are created from the LR image to represent the pornographic image; and (3) after the covering algorithm is utilized to train and recognize the visual words and build the initial classification model, incremental learning is adopted to continuously adjust the classification rules to recognize new pornographic image samples. The experimental results show that the proposed method achieves a higher recognition rate while requiring less recognition time in the compressed domain.
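The covering algorithm itself is not reproduced here; as a stand-in, the sketch below shows the generic shape of incremental learning in step (3): a nearest-centroid classifier whose decision rule is adjusted by each new labelled sample, with all names and data invented for illustration.

```python
import numpy as np

class IncrementalCentroid:
    """Nearest-centroid classifier with running-mean updates: each new
    labelled sample shifts its class centroid, so the decision rule
    adapts without retraining from scratch (a simple stand-in for the
    paper's covering-algorithm-based incremental learner)."""

    def __init__(self):
        self.sums, self.counts = {}, {}

    def learn(self, x, label):
        self.sums[label] = self.sums.get(label, np.zeros_like(x)) + x
        self.counts[label] = self.counts.get(label, 0) + 1

    def predict(self, x):
        return min(self.sums, key=lambda c: np.linalg.norm(
            x - self.sums[c] / self.counts[c]))

rng = np.random.default_rng(0)
clf = IncrementalCentroid()
for _ in range(50):                        # stream of labelled feature vectors
    clf.learn(rng.normal(0.0, 0.5, 8), "benign")
    clf.learn(rng.normal(2.0, 0.5, 8), "porn")
print(clf.predict(np.full(8, 1.9)))
```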

  7. The variable target model: a paradigm shift in the incremental haemodialysis prescription.

    PubMed

    Casino, Francesco Gaetano; Basile, Carlo

    2017-01-01

The recent interest in incremental haemodialysis (HD) is hindered by the current prescription based on a fixed target model (FTM) for the total (dialytic + renal) equivalent continuous clearance (ECC). The latter is expressed either as standard Kt/V (stdKt/V), i.e. the pre-dialysis averaged concentration of urea-based ECC, or EKRc, i.e. the time-averaged concentration-based ECC, corrected for volume (V) = 40 L. Accordingly, there are two different targets: stdKt/V = 2.3 volumes per week (v/wk) and EKRc = 13 mL/min/40 L. However, fixing the total ECC necessarily implies perfect equivalence of its components, the residual renal urea clearance (Kru) and dialysis clearance (Kd). This assumption is wrong because Kru has much greater clinical weight than Kd. Here we propose that the ECC target varies as an inverse function of Kru, from a maximum value in anuria to a minimum value at Kru levels not yet requiring dialysis. The aim of the present study was to compare the current FTM with the proposed variable target model (VTM). The double pool urea kinetic model was used to model dialysis sessions for 360 virtual patients and establish equations predicting the ECC as a function of Kd, Kru and the number of sessions per week. An end-dialysis urea distribution V of 35 L (corresponding to a body surface area of 1.73 m²) was used, so that the current EKRc target of 13 mL/min/40 L could be recalculated as an EKRc35 value of 12 mL/min/35 L, equal to 12 mL/min/1.73 m². The latter also coincides with the maximum value of the EKRc35 variable target in anuria. The minimum target value of EKRc35 was assumed to coincide with Kru corrected for V = 35 L (i.e. Krc35 = 6 mL/min/1.73 m²). The corresponding target for stdKt/V was assumed to vary from 2.3 v/wk at Krc35 = 0 to 1.7 v/wk at Krc35 = 6 mL/min/1.73 m². On this basis, the variable target values can be obtained from the following linear equations: target EKRc35 = 12 − Krc35; target stdKt/V = 2.3 − 0.1 × Krc35.
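The two linear target equations at the end of the abstract translate directly into code; a minimal sketch, with units as stated in the abstract:

```python
def variable_targets(krc35):
    """Variable-target ECC prescriptions from the abstract's linear
    equations, for Krc35 (residual renal urea clearance corrected to
    1.73 m^2, in mL/min) between 0 (anuria) and 6."""
    assert 0.0 <= krc35 <= 6.0
    ekrc35 = 12.0 - krc35               # mL/min/1.73 m^2
    std_ktv = 2.3 - 0.1 * krc35         # volumes per week
    return ekrc35, std_ktv

print(variable_targets(0.0))   # anuric patient: maximal targets
print(variable_targets(6.0))   # maximal residual function: minimal targets
```

At Krc35 = 0 this reproduces the fixed targets (12 mL/min/1.73 m², 2.3 v/wk); at Krc35 = 6 it relaxes them to the proposed minimum (6 mL/min/1.73 m², 1.7 v/wk).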

  8. Incremental change or initial differences? Testing two models of marital deterioration.

    PubMed

    Lavner, Justin A; Bradbury, Thomas N; Karney, Benjamin R

    2012-08-01

    Most couples begin marriage intent on maintaining a fulfilling relationship, but some newlyweds soon struggle, and others continue to experience high levels of satisfaction. Do these diverse outcomes result from an incremental process that unfolds over time, as prevailing models suggest, or are they a manifestation of initial differences that are largely evident at the start of the marriage? Using 8 waves of data collected over the first 4 years of marriage (N = 502 spouses, or 251 newlywed marriages), we tested these competing perspectives first by identifying 3 qualitatively distinct relationship satisfaction trajectory groups and then by determining the extent to which spouses in these groups were differentiated on the basis of (a) initial scores and (b) 4-year changes in a set of established predictor variables, including relationship problems, aggression, attributions, stress, and self-esteem. The majority of spouses exhibited high, stable satisfaction over the first 4 years of marriage, whereas declining satisfaction was isolated among couples with relatively low initial satisfaction. Across all predictor variables, initial values afforded stronger discrimination of outcome groups than did rates of change in these variables. Thus, readily measured initial differences are potent antecedents of relationship deterioration, and studies are now needed to clarify the specific ways in which initial indices of risk come to influence changes in spouses' judgments of relationship satisfaction. PsycINFO Database Record (c) 2012 APA, all rights reserved.

  9. Incremental Change or Initial Differences? Testing Two Models of Marital Deterioration

    PubMed Central

    Lavner, Justin A.; Bradbury, Thomas N.; Karney, Benjamin R.

    2012-01-01

Most couples begin marriage intent on maintaining a fulfilling relationship, but some newlyweds soon struggle while others continue to experience high levels of satisfaction. Do these diverse outcomes result from an incremental process that unfolds over time, as prevailing models suggest, or are they a manifestation of initial differences that are largely evident at the start of the marriage? Using eight waves of data collected over the first 4 years of marriage (N = 502 spouses, or 251 newlywed marriages), we tested these competing perspectives first by identifying three qualitatively distinct relationship satisfaction trajectory groups and then by determining the extent to which spouses in these groups were differentiated on the basis of (a) initial scores and (b) 4-year changes in a set of established predictor variables, including relationship problems, aggression, attributions, stress, and self-esteem. The majority of spouses exhibited high, stable satisfaction over the first four years of marriage, whereas declining satisfaction was isolated among couples with relatively low initial satisfaction. Across all predictor variables, initial values afforded stronger discrimination of outcome groups than did rates of change in these variables. Thus, readily measured initial differences are potent antecedents of relationship deterioration, and studies are now needed to clarify the specific ways in which initial indices of risk come to influence changes in spouses’ judgments of relationship satisfaction. PMID:22709260

  10. DEPENDENCE OF X-RAY BURST MODELS ON NUCLEAR REACTION RATES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cyburt, R. H.; Keek, L.; Schatz, H.

    2016-10-20

X-ray bursts are thermonuclear flashes on the surface of accreting neutron stars, and reliable burst models are needed to interpret observations in terms of properties of the neutron star and the binary system. We investigate the dependence of X-ray burst models on uncertainties in (p, γ), (α, γ), and (α, p) nuclear reaction rates using fully self-consistent burst models that account for the feedbacks between changes in nuclear energy generation and changes in astrophysical conditions. A two-step approach first identified sensitive nuclear reaction rates in a single-zone model with ignition conditions chosen to match calculations with a state-of-the-art 1D multi-zone model based on the Kepler stellar evolution code. All relevant reaction rates on neutron-deficient isotopes up to mass 106 were individually varied by a factor of 100 up and down. Calculations of the 84 changes in reaction rate with the highest impact were then repeated in the 1D multi-zone model. We find a number of uncertain reaction rates that affect predictions of light curves and burst ashes significantly. The results provide insights into the nuclear processes that shape observables from X-ray bursts, and guidance for future nuclear physics work to reduce nuclear uncertainties in X-ray burst models.

  11. Resonances in the cumulative reaction probability for a model electronically nonadiabatic reaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qi, J.; Bowman, J.M.

    1996-05-01

The cumulative reaction probability, flux–flux correlation function, and rate constant are calculated for a model, two-state, electronically nonadiabatic reaction, given by Shin and Light [S. Shin and J. C. Light, J. Chem. Phys. 101, 2836 (1994)]. We apply straightforward generalizations of the flux matrix/absorbing boundary condition approach of Miller and co-workers to obtain these quantities. The upper adiabatic electronic potential supports bound states, and these manifest themselves as "recrossing" resonances in the cumulative reaction probability, at total energies above the barrier to reaction on the lower adiabatic potential. At energies below the barrier, the cumulative reaction probability for the coupled system is shifted to higher energies relative to the one obtained for the ground state potential. This is due to the effect of an additional effective barrier caused by the nuclear kinetic operator acting on the ground state, adiabatic electronic wave function, as discussed earlier by Shin and Light. Calculations are reported for five sets of electronically nonadiabatic coupling parameters. © 1996 American Institute of Physics.

  12. A mathematical model for foreign body reactions in 2D.

    PubMed

    Su, Jianzhong; Gonzales, Humberto Perez; Todorov, Michail; Kojouharov, Hristo; Tang, Liping

    2011-02-01

Foreign body reactions are commonly understood as the network of immune and inflammatory reactions of humans or animals to foreign objects placed in their tissues. They are basic biological processes, and are also highly relevant to bioengineering applications in implants, as fibrotic tissue formation surrounding medical implants has been found to substantially reduce the effectiveness of devices. Despite intensive research on determining the mechanisms governing such complex responses, few mechanistic mathematical models have been developed to study such foreign body reactions. This study focuses on a kinetics-based predictive tool in order to analyze the outcomes of multiple interactive complex reactions of various cells/proteins and biochemical processes, and to understand transient behavior during the entire period (up to several months). A computational model in two spatial dimensions is constructed to investigate the time dynamics as well as the spatial variation of foreign body reaction kinetics. The simulation results are consistent with experimental data, and the model can provide quantitative insights for the study of the foreign body reaction process in general.

  13. 40 CFR 60.5105 - What if I do not meet an increment of progress?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emission Guidelines and Compliance Times for Existing Sewage Sludge Incineration Units Model Rule-Increments of Progress § 60.5105 What if...

  14. Invariant characteristics of self-organization modes in Belousov reaction modeling

    NASA Astrophysics Data System (ADS)

    Glyzin, S. D.; Goryunov, V. E.; Kolesov, A. Yu

    2018-01-01

We consider the problem of mathematically modelling oxidation-reduction oscillatory chemical reactions based on the mechanism of the Belousov reaction. The interaction of the main components in such a reaction can be interpreted via a phenomenologically similar "predator-prey" model. We therefore consider a parabolic boundary value problem consisting of three Volterra-type equations, which is a mathematical model of this reaction. We carry out a local study of the neighborhood of the system's non-trivial equilibrium state and construct the normal form of the system under consideration. Finally, we perform a numerical analysis of the coexisting chaotic oscillatory modes of the boundary value problem in a flat domain, which have a different nature and occur as the diffusion coefficient decreases.
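As a loose illustration only: ignoring diffusion, a three-component Volterra-type chain of the "predator-prey" kind mentioned above can be integrated with a standard RK4 stepper. The coefficients and initial state below are invented for the sketch and are not the paper's boundary value problem.

```python
import numpy as np

def rhs(s):
    """Three-component Volterra-type chain (illustrative coefficients)."""
    x, y, z = s
    return np.array([
        x * (1.0 - x) - x * y,      # "prey" / substrate
        x * y - 0.3 * y - y * z,    # intermediate component
        y * z - 0.2 * z,            # "top predator" component
    ])

def rk4(s, dt, steps):
    """Classic fourth-order Runge-Kutta integration of s' = rhs(s)."""
    traj = [s]
    for _ in range(steps):
        k1 = rhs(s)
        k2 = rhs(s + 0.5 * dt * k1)
        k3 = rhs(s + 0.5 * dt * k2)
        k4 = rhs(s + dt * k3)
        s = s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        traj.append(s)
    return np.array(traj)

traj = rk4(np.array([0.5, 0.5, 0.5]), 0.01, 5000)
print(traj[-1])   # concentrations stay positive and bounded
```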

  15. Modeling and simulation of pressure waves generated by nano-thermite reactions

    NASA Astrophysics Data System (ADS)

    Martirosyan, Karen S.; Zyskin, Maxim; Jenkins, Charles M.; (Yuki) Horie, Yasuyuki

    2012-11-01

    This paper reports the modeling of pressure waves from the explosive reaction of nano-thermites consisting of mixtures of nanosized aluminum and oxidizer granules. Such nanostructured thermites have higher energy density (up to 26 kJ/cm3) and can generate a transient pressure pulse four times larger than that from trinitrotoluene (TNT) based on volume equivalence. A plausible explanation for the high pressure generation is that the reaction times are much shorter than the time for a shock wave to propagate away from the reagents region so that all the reaction energy is dumped into the gaseous products almost instantaneously and thereby a strong shock wave is generated. The goal of the modeling is to characterize the gas dynamic behavior for thermite reactions in a cylindrical reaction chamber and to model the experimentally measured pressure histories. To simplify the details of the initial stage of the explosive reaction, it is assumed that the reaction generates a one dimensional shock wave into an air-filled cylinder and propagates down the tube in a self-similar mode. Experimental data for Al/Bi2O3 mixtures were used to validate the model with attention focused on the ratio of specific heats and the drag coefficient. Model predictions are in good agreement with the measured pressure histories.
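For orientation, the pressure jump across a one-dimensional normal shock in an ideal gas is given by the standard Rankine-Hugoniot relation, which depends on the ratio of specific heats the authors tuned; this is a textbook formula, not the authors' full reaction-chamber model.

```python
def shock_pressure_ratio(mach, gamma=1.4):
    """Static pressure ratio p2/p1 across a normal shock in an ideal gas
    (Rankine-Hugoniot); gamma = ratio of specific heats (1.4 for air)."""
    if mach < 1.0:
        raise ValueError("no shock below Mach 1")
    return 1.0 + 2.0 * gamma / (gamma + 1.0) * (mach ** 2 - 1.0)

print(shock_pressure_ratio(2.0))   # 4.5 for air
```

Stronger shocks (higher Mach numbers from faster energy release) give rapidly growing pressure ratios, consistent with the large transient pulses reported for nano-thermites.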

  16. Impulse processing: A dynamical systems model of incremental eye movements in the visual world paradigm

    PubMed Central

    Kukona, Anuenue; Tabor, Whitney

    2011-01-01

    The visual world paradigm presents listeners with a challenging problem: they must integrate two disparate signals, the spoken language and the visual context, in support of action (e.g., complex movements of the eyes across a scene). We present Impulse Processing, a dynamical systems approach to incremental eye movements in the visual world that suggests a framework for integrating language, vision, and action generally. Our approach assumes that impulses driven by the language and the visual context impinge minutely on a dynamical landscape of attractors corresponding to the potential eye-movement behaviors of the system. We test three unique predictions of our approach in an empirical study in the visual world paradigm, and describe an implementation in an artificial neural network. We discuss the Impulse Processing framework in relation to other models of the visual world paradigm. PMID:21609355

  17. A heuristic approach to incremental and reactive scheduling

    NASA Technical Reports Server (NTRS)

    Odubiyi, Jide B.; Zoch, David R.

    1989-01-01

    A heuristic approach to incremental and reactive scheduling is described. Incremental scheduling is the process of modifying an existing schedule if the initial schedule does not meet its stated initial goals. Reactive scheduling occurs in near real-time in response to changes in available resources or the occurrence of targets of opportunity. Only minor changes are made during both incremental and reactive scheduling because a goal of re-scheduling procedures is to minimize the impact on the schedule. The heuristic search techniques described here, which are employed by the Request Oriented Scheduling Engine (ROSE), a prototype generic scheduler, efficiently approximate the cost of reaching a goal from a given state and provide effective mechanisms for controlling the search.

  18. Binary counting with chemical reactions.

    PubMed

    Kharam, Aleksandra; Jiang, Hua; Riedel, Marc; Parhi, Keshab

    2011-01-01

    This paper describes a scheme for implementing a binary counter with chemical reactions. The value of the counter is encoded by logical values of "0" and "1" that correspond to the absence and presence of specific molecular types, respectively. It is incremented when molecules of a trigger type are injected. Synchronization is achieved with reactions that produce a sustained three-phase oscillation. This oscillation plays a role analogous to a clock signal in digital electronics. Quantities are transferred between molecular types in different phases of the oscillation. Unlike all previous schemes for chemical computation, this scheme is dependent only on coarse rate categories for the reactions ("fast" and "slow"). Given such categories, the computation is exact and independent of the specific reaction rates. Although conceptual for the time being, the methodology has potential applications in domains of synthetic biology such as biochemical sensing and drug delivery. We are exploring DNA-based computation via strand displacement as a possible experimental chassis.

  19. Conservation of wildlife populations: factoring in incremental disturbance.

    PubMed

    Stewart, Abbie; Komers, Petr E

    2017-06-01

    Progressive anthropogenic disturbance can alter ecosystem organization, potentially causing shifts from one stable state to another. This potential for ecosystem shifts must be considered when establishing targets and objectives for conservation. We ask whether a predator-prey system's response to incremental anthropogenic disturbance might shift along a disturbance gradient and, if it does, whether any disturbance thresholds are evident for this system. Development of linear corridors in forested areas increases wolf predation effectiveness, while high density of development provides a safe haven for their prey. If wolves limit moose population growth, then wolves and moose should respond inversely to land cover disturbance. Using general linear model analysis, we test how the rate of change in moose (Alces alces) density and wolf (Canis lupus) harvest density are influenced by the rate of change in land cover and the proportion of land cover disturbed within a 300,000 km² area in the boreal forest of Alberta, Canada. Using logistic regression, we test how the direction of change in moose density is influenced by measures of land cover change. In response to incremental land cover disturbance, moose declines occurred where <43% of land cover was disturbed; in such landscapes, there were high rates of increase in linear disturbance and wolf density increased. By contrast, moose increases occurred where >43% of land cover was disturbed and wolf density declined. Wolves and moose appeared to respond inversely to incremental disturbance, with the balance between moose decline and wolf increase shifting at about 43% of land cover disturbed. Conservation decisions require quantification of disturbance rates and their relationships to predator-prey systems because ecosystem responses to anthropogenic disturbance shift across disturbance gradients.

  20. Strong correlation in incremental full configuration interaction

    NASA Astrophysics Data System (ADS)

    Zimmerman, Paul M.

    2017-06-01

    Incremental Full Configuration Interaction (iFCI) reaches high accuracy electronic energies via a many-body expansion of the correlation energy. In this work, the Perfect Pairing (PP) ansatz replaces the Hartree-Fock reference of the original iFCI method. This substitution captures a large amount of correlation at zero-order, which allows iFCI to recover the remaining correlation energy with low-order increments. The resulting approach, PP-iFCI, is size consistent, size extensive, and systematically improvable with increasing order of incremental expansion. Tests on multiple single bond, multiple double bond, and triple bond dissociations of main group polyatomics using double and triple zeta basis sets demonstrate the power of the method for handling strong correlation. The smooth dissociation profiles that result from PP-iFCI show that FCI-quality ground state computations are now within reach for systems with up to about 10 heavy atoms.

  1. Incremental analysis of large elastic deformation of a rotating cylinder

    NASA Technical Reports Server (NTRS)

    Buchanan, G. R.

    1976-01-01

    The effect of finite deformation upon a rotating, orthotropic cylinder was investigated using a general incremental theory. The incremental equations of motion are developed using the variational principle. The governing equations are derived using the principle of virtual work for a body with initial stress. The governing equations are reduced to those for the title problem and a numerical solution is obtained using finite difference approximations. Since the problem is defined in terms of one independent space coordinate, the finite difference grid can be modified as the incremental deformation occurs without serious numerical difficulties. The nonlinear problem is solved incrementally by totaling a series of linear solutions.

  2. 40 CFR 60.1605 - What if I do not meet an increment of progress?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Times for Small Municipal Waste Combustion Units Constructed on or Before August 30, 1999 Model Rule... increment of progress, you must submit a notification to the Administrator postmarked within 10 business...

  3. Incremental Learning With Selective Memory (ILSM): Towards Fast Prostate Localization for Image Guided Radiotherapy

    PubMed Central

    Gao, Yaozong; Zhan, Yiqiang

    2015-01-01

    Image-guided radiotherapy (IGRT) requires fast and accurate localization of the prostate in 3-D treatment-guided radiotherapy, which is challenging due to low tissue contrast and large anatomical variation across patients. On the other hand, the IGRT workflow involves collecting a series of computed tomography (CT) images from the same patient under treatment. These images contain valuable patient-specific information yet are often neglected by previous works. In this paper, we propose a novel learning framework, namely incremental learning with selective memory (ILSM), to effectively learn the patient-specific appearance characteristics from these patient-specific images. Specifically, starting with a population-based discriminative appearance model, ILSM aims to “personalize” the model to fit patient-specific appearance characteristics. The model is personalized with two steps: backward pruning that discards obsolete population-based knowledge and forward learning that incorporates patient-specific characteristics. By effectively combining the patient-specific characteristics with the general population statistics, the incrementally learned appearance model can localize the prostate of a specific patient much more accurately. This work has three contributions: 1) the proposed incremental learning framework can capture patient-specific characteristics more effectively, compared to traditional learning schemes, such as pure patient-specific learning, population-based learning, and mixture learning with patient-specific and population data; 2) this learning framework does not have any parametric model assumption, hence, allowing the adoption of any discriminative classifier; and 3) using ILSM, we can localize the prostate in treatment CTs accurately (DSC ∼0.89) and fast (∼4 s), which satisfies the real-world clinical requirements of IGRT. PMID:24495983

  4. How to set the stage for a full-fledged clinical trial testing 'incremental haemodialysis'.

    PubMed

    Casino, Francesco Gaetano; Basile, Carlo

    2017-07-21

    Most people who make the transition to maintenance haemodialysis (HD) therapy are treated with a fixed-dose, thrice-weekly HD (3HD/week) regimen without consideration of their residual kidney function (RKF). The RKF provides an effective and naturally continuous clearance of both small and middle molecules, plays a major role in metabolic homeostasis, nutritional status and cardiovascular health, and aids in fluid management. The RKF is associated with better patient survival and greater health-related quality of life. Its preservation is instrumental to the prescription of incremental (1HD/week to 2HD/week) HD. The recently heightened interest in incremental HD has been hindered by the current limitations of the urea kinetic model (UKM), which tends to overestimate the needed dialysis dose in the presence of a substantial RKF. A recent paper by Casino and Basile suggested a variable target model (VTM), which gives more clinical weight to the RKF and allows less frequent HD treatments at lower RKF, as opposed to the fixed target model, which rests on the incorrect assumption of clinical equivalence between renal and dialysis clearance. A randomized controlled trial (RCT) enrolling incident patients, comparing incremental HD (prescribed according to the VTM) with the standard 3HD/week schedule, and focused on hard outcomes, such as survival and health-related quality of life, is urgently needed. The first step in designing such a study is to compute the 'adequacy lines' and the associated fitting equations necessary for the most appropriate allocation of the patients to the two arms and their correct and safe follow-up. In conclusion, the potentially important clinical and financial implications of incremental HD render it highly promising and warrant RCTs. The UKM is the keystone for conducting such studies. © The Author 2017. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.

  5. Biometrics Enabling Capability Increment 1 (BEC Inc 1)

    DTIC Science & Technology

    2016-03-01

    2016 Major Automated Information System Annual Report Biometrics Enabling Capability Increment 1 (BEC Inc 1) Defense Acquisition Management...Phone: 227-3119 DSN Fax: Date Assigned: July 15, 2015 Program Information Program Name Biometrics Enabling Capability Increment 1 (BEC Inc 1) DoD...therefore, no Original Estimate has been established. BEC Inc 1 2016 MAR UNCLASSIFIED 4 Program Description The Biometrics Enabling Capability (BEC

  6. A unifying kinetic framework for modeling oxidoreductase-catalyzed reactions.

    PubMed

    Chang, Ivan; Baldi, Pierre

    2013-05-15

    Oxidoreductases are a fundamental class of enzymes responsible for the catalysis of oxidation-reduction reactions, crucial in most bioenergetic metabolic pathways. From their common root in the ancient prebiotic environment, oxidoreductases have evolved into diverse and elaborate protein structures with specific kinetic properties and mechanisms adapted to their individual functional roles and environmental conditions. While accurate kinetic modeling of oxidoreductases is thus important, current models suffer from limitations to the steady-state domain, lack empirical validation or are too specialized to a single system or set of conditions. To address these limitations, we introduce a novel unifying modeling framework for kinetic descriptions of oxidoreductases. The framework is based on a set of seven elementary reactions that (i) form the basis for 69 pairs of enzyme state transitions for encoding various specific microscopic intra-enzyme reaction networks (micro-models), and (ii) lead to various specific macroscopic steady-state kinetic equations (macro-models) via thermodynamic assumptions. Thus, a synergistic bridge between the micro and macro kinetics can be achieved, enabling us to extract unitary rate constants, simulate reaction variance and validate the micro-models using steady-state empirical data. To help facilitate the application of this framework, we make available RedoxMech: a Mathematica™ software package that automates the generation and customization of micro-models. The Mathematica™ source code for RedoxMech, the documentation and the experimental datasets are all available from: http://www.igb.uci.edu/tools/sb/metabolic-modeling. pfbaldi@ics.uci.edu Supplementary data are available at Bioinformatics online.
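    The micro-to-macro bridge that the framework formalizes can be illustrated on the simplest enzyme scheme. The sketch below is an assumption for demonstration (it is not part of RedoxMech and uses invented rate constants): it integrates the elementary mass-action steps E + S ⇌ ES → E + P and checks the instantaneous rate against the macroscopic Michaelis-Menten equation obtained under the quasi-steady-state assumption:

```python
# Micro-model: elementary mass-action kinetics, integrated explicitly.
# Macro-model: the steady-state Michaelis-Menten rate law.
# All rate constants and concentrations are illustrative assumptions.

k1, km1, k2 = 10.0, 1.0, 1.0      # binding, unbinding, catalysis
E0, S0 = 1.0, 100.0               # total enzyme, initial substrate
Km = (km1 + k2) / k1              # Michaelis constant from the micro rates

def simulate(t_end=1.0, dt=1e-5):
    """Explicit Euler integration of E + S <-> ES -> E + P."""
    E, S, ES, P = E0, S0, 0.0, 0.0
    t = 0.0
    while t < t_end:
        v_bind = k1 * E * S
        v_unbind = km1 * ES
        v_cat = k2 * ES
        E += dt * (v_unbind + v_cat - v_bind)
        S += dt * (v_unbind - v_bind)
        ES += dt * (v_bind - v_unbind - v_cat)
        P += dt * v_cat
        t += dt
    return S, k2 * ES             # remaining substrate, instantaneous rate

S, rate_micro = simulate()
rate_mm = k2 * E0 * S / (Km + S)  # macroscopic Michaelis-Menten prediction
print(rate_micro, rate_mm)
```

With substrate well above Km, the integrated micro-model settles onto the macroscopic rate law, which is the kind of micro/macro consistency the framework exploits to extract unitary rate constants.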

  7. A simple reaction-rate model for turbulent diffusion flames

    NASA Technical Reports Server (NTRS)

    Bangert, L. H.

    1975-01-01

    A simple reaction rate model is proposed for turbulent diffusion flames in which the reaction rate is proportional to the turbulence mixing rate. The reaction rate is also dependent on the mean mass fraction and the mean square fluctuation of mass fraction of each reactant. Calculations are compared with experimental data and are generally successful in predicting the measured quantities.

  8. Toward disentangling the effect of hydrologic and nitrogen source changes from 1992 to 2001 on incremental nitrogen yield in the contiguous United States

    NASA Astrophysics Data System (ADS)

    Alam, Md Jahangir; Goodall, Jonathan L.

    2012-04-01

    The goal of this research was to quantify the relative impact of hydrologic and nitrogen source changes on incremental nitrogen yield in the contiguous United States. Using nitrogen source estimates from various federal databases, remotely sensed land use data from the National Land Cover Data program, and observed instream loadings from the United States Geological Survey National Stream Quality Accounting Network program, we calibrated and applied the spatially referenced regression model SPARROW to estimate incremental nitrogen yield for the contiguous United States. We ran different model scenarios to separate the effects of changes in source contributions from hydrologic changes for the years 1992 and 2001, assuming that only state conditions changed and that model coefficients describing the stream water-quality response to changes in state conditions remained constant between 1992 and 2001. Model results show a decrease of 8.2% in the median incremental nitrogen yield over the period of analysis with the vast majority of this decrease due to changes in hydrologic conditions rather than decreases in nitrogen sources. For example, when we changed the 1992 version of the model to have nitrogen source data from 2001, the model results showed only a small increase in median incremental nitrogen yield (0.12%). However, when we changed the 1992 version of the model to have hydrologic conditions from 2001, model results showed a decrease of approximately 8.7% in median incremental nitrogen yield. We did, however, find notable differences in incremental yield estimates for different sources of nitrogen after controlling for hydrologic changes, particularly for population related sources. For example, the median incremental yield for population related sources increased by 8.4% after controlling for hydrologic changes. This is in contrast to a 2.8% decrease in population related sources when hydrologic changes are included in the analysis. Likewise we found that

  9. CARIBIAM: constrained Association Rules using Interactive Biological IncrementAl Mining.

    PubMed

    Rahal, Imad; Rahhal, Riad; Wang, Baoying; Perrizo, William

    2008-01-01

    This paper analyses annotated genome data by applying a very central data-mining technique known as Association Rule Mining (ARM) with the aim of discovering rules and hypotheses capable of yielding deeper insights into this type of data. In the literature, ARM has been noted for producing an overwhelming number of rules. This work proposes a new technique capable of using domain knowledge in the form of queries in order to efficiently mine only the subset of the associations that are of interest to investigators in an incremental and interactive manner.

  10. Incremental Improvement of Career Education in Utah. Final Report.

    ERIC Educational Resources Information Center

    Utah State Board of Education, Salt Lake City.

    This is a project report on Utah's plans to effect "incremental improvements" in career education implementation in seven school districts. Project objectives are formulated as follow: effect incremental improvements in attendance area cones, strengthen career education leadership capabilities, develop staff competence to diffuse the…

  11. Haemoglobin saturation during incremental arm and leg exercise.

    PubMed Central

    Powers, S. K.; Dodd, S.; Woodyard, J.; Beadle, R. E.; Church, G.

    1984-01-01

    There are few reports concerning the alterations in the percent of haemoglobin saturated with oxygen (%SO2) during non-steady state incremental exercise. Further, no data exist to describe the %SO2 changes during arm exercise. Therefore, the purpose of this study was to assess the dynamic changes in %SO2 during incremental arm and leg work. Nine trained subjects (7 males and 2 females) performed incremental arm and leg exercise to exhaustion on an arm crank ergometer and a cycle ergometer, respectively. Ventilation and gas exchange measurements were obtained minute by minute via open circuit spirometry and changes in %SO2 were recorded via an ear oximeter. No significant difference (p > 0.05) existed between arm and leg work in end-tidal oxygen (PETO2), end-tidal carbon dioxide (PETCO2), or %SO2 when compared as a function of percent VO2 max. These results provide evidence that arterial O2 desaturation occurs in a similar fashion in both incremental arm and leg work, with the greatest changes in %SO2 occurring at work rates greater than 70% VO2 max. PMID:6435715

  12. The dark side of incremental learning: a model of cumulative semantic interference during lexical access in speech production.

    PubMed

    Oppenheim, Gary M; Dell, Gary S; Schwartz, Myrna F

    2010-02-01

    Naming a picture of a dog primes the subsequent naming of a picture of a dog (repetition priming) and interferes with the subsequent naming of a picture of a cat (semantic interference). Behavioral studies suggest that these effects derive from persistent changes in the way that words are activated and selected for production, and some have claimed that the findings are only understandable by positing a competitive mechanism for lexical selection. We present a simple model of lexical retrieval in speech production that applies error-driven learning to its lexical activation network. This model naturally produces repetition priming and semantic interference effects. It predicts the major findings from several published experiments, demonstrating that these effects may arise from incremental learning. Furthermore, analysis of the model suggests that competition during lexical selection is not necessary for semantic interference if the learning process is itself competitive. Copyright 2009 Elsevier B.V. All rights reserved.
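    The core mechanism, error-driven (delta-rule) updating of word-to-feature weights, can be sketched in a few lines. The features, weights, and learning rate below are illustrative assumptions, not the published model's architecture or parameters:

```python
import numpy as np

# semantic features: [animal, pet, barks, meows] (illustrative)
features = {"dog": np.array([1.0, 1.0, 1.0, 0.0]),
            "cat": np.array([1.0, 1.0, 0.0, 1.0])}
words = ["dog", "cat"]
W = np.full((2, 4), 0.1)  # word-by-feature connection weights (assumed)
rate = 0.4                # learning rate (assumed)

def activations(item):
    """Activation of every word given the pictured item's features."""
    return W @ features[item]

def name(item):
    """Name a picture, then apply an error-driven update: strengthen
    the target word's weights, weaken those of co-activated competitors."""
    global W
    act = activations(item)
    target = np.array([1.0 if w == item else 0.0 for w in words])
    W += rate * np.outer(target - act, features[item])

dog_before = activations("dog")[0]  # how strongly "dog" responds to a dog
cat_before = activations("cat")[1]  # how strongly "cat" responds to a cat
name("dog")
dog_after = activations("dog")[0]
cat_after = activations("cat")[1]
print(dog_after > dog_before)  # repetition priming: dog is now easier
print(cat_after < cat_before)  # semantic interference: cat is now harder
```

A single naming event both strengthens the named word (priming) and weakens semantically related competitors that were co-activated (interference), without any competitive selection rule: the competition lives in the learning step itself.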

  13. Key Management Infrastructure Increment 2 (KMI Inc 2)

    DTIC Science & Technology

    2016-03-01

    2016 Major Automated Information System Annual Report Key Management Infrastructure Increment 2 (KMI Inc 2) Defense Acquisition Management...PB - President’s Budget RDT&E - Research, Development, Test, and Evaluation SAE - Service Acquisition Executive TBD - To Be Determined TY - Then...Assigned: April 6, 2015 Program Information Program Name Key Management Infrastructure Increment 2 (KMI Inc 2) DoD Component DoD The acquiring DoD

  14. Volatilities, Traded Volumes, and Price Increments in Derivative Securities

    NASA Astrophysics Data System (ADS)

    Kim, Kyungsik; Lim, Gyuchang; Kim, Soo Yong; Scalas, Enrico

    2007-03-01

    We apply detrended fluctuation analysis (DFA) to the statistics of Korean treasury bond (KTB) futures, from which the logarithmic increments, volatilities, and traded volumes are estimated over a specific time lag. In our case, the logarithmic increment of futures prices has no long-memory property, while the volatility and the traded volume exhibit long-memory properties. To determine whether the volatility clustering is due to inherent higher-order correlations not detected by applying the DFA directly to the logarithmic increments of the KTB futures, we shuffle the original tick data of futures prices and generate a geometric Brownian random walk with the same mean and standard deviation. Comparison of the three tick data sets shows that the higher-order correlation inherent in the logarithmic increments produces the volatility clustering. In particular, the result of the DFA on volatilities and traded volumes may support the hypothesis of price changes.
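    For readers unfamiliar with the method, a minimal DFA implementation is sketched below; the window sizes and the white-noise test series are illustrative choices (uncorrelated noise should give a scaling exponent near 0.5, while long memory shows up as a larger exponent):

```python
import numpy as np

def dfa(x, windows):
    """Minimal detrended fluctuation analysis (first-order detrending)."""
    y = np.cumsum(x - np.mean(x))         # profile (integrated series)
    F = []
    for n in windows:
        m = len(y) // n
        segs = y[:m * n].reshape(m, n)
        t = np.arange(n)
        sq = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)  # local linear trend
            sq.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(sq)))
    # slope of log F(n) vs log n is the DFA scaling exponent alpha
    alpha, _ = np.polyfit(np.log(windows), np.log(F), 1)
    return alpha

rng = np.random.default_rng(1)
alpha = dfa(rng.standard_normal(20000), [16, 32, 64, 128, 256])
print(alpha)  # expected near 0.5 for uncorrelated noise
```

Shuffling a real series, as the record describes, destroys temporal correlations while keeping the marginal distribution, so comparing DFA exponents before and after shuffling isolates the memory effect.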

  15. Supercritical water oxidation of quinazoline: Reaction kinetics and modeling.

    PubMed

    Gong, Yanmeng; Guo, Yang; Wang, Shuzhong; Song, Wenhan; Xu, Donghai

    2017-03-01

    This paper presents a first quantitative kinetic model for supercritical water oxidation (SCWO) of quinazoline that describes the formation and interconversion of intermediates and final products at 673-873 K. The set of 11 reaction pathways for phenol, pyrimidine, naphthalene, NH3, etc., involved in the simplified reaction network proved sufficient for fitting the experimental results satisfactorily. We validated the model prediction ability on CO2 yields at initial quinazoline loadings not used in the parameter estimation. Reaction rate analysis and sensitivity analysis indicate that nearly all reactions reach their thermodynamic equilibrium within 300 s. The pyrimidine yielded from quinazoline is the dominant ring-opening pathway and provides a significant contribution to CO2 formation. The low sensitivity of the NH3 decomposition rate to concentration confirms its refractory nature in SCWO. Nitrogen content in liquid products decreases whereas that in the gaseous phase increases as reaction time is prolonged. The nitrogen predicted by the model in the gaseous phase, combined with the experimental nitrogen in liquid products, gives an accurate nitrogen balance of the conversion process. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Implementation of a vibrationally linked chemical reaction model for DSMC

    NASA Technical Reports Server (NTRS)

    Carlson, A. B.; Bird, Graeme A.

    1994-01-01

    A new procedure closely linking dissociation and exchange reactions in air to the vibrational levels of the diatomic molecules has been implemented in both one- and two-dimensional versions of Direct Simulation Monte Carlo (DSMC) programs. The previous modeling of chemical reactions with DSMC was based on the continuum reaction rates for the various possible reactions. The new method is more closely related to the actual physics of dissociation and is more appropriate to the particle nature of DSMC. Two cases are presented: the relaxation to equilibrium of undissociated air initially at 10,000 K, and the axisymmetric calculation of shuttle forebody heating during reentry at 92.35 km and 7500 m/s. Although reaction rates are not used in determining the dissociations or exchange reactions, the new method produces rates which agree astonishingly well with the published rates derived from experiment. The results for gas properties and surface properties also agree well with the results produced by earlier DSMC models, equilibrium air calculations, and experiment.

  17. Are Fearless Dominance Traits Superfluous in Operationalizing Psychopathy? Incremental Validity and Sex Differences

    PubMed Central

    Murphy, Brett; Lilienfeld, Scott; Skeem, Jennifer; Edens, John

    2016-01-01

    Researchers are vigorously debating whether psychopathic personality includes seemingly adaptive traits, especially social and physical boldness. In a large sample (N=1565) of adult offenders, we examined the incremental validity of two operationalizations of boldness (Fearless Dominance traits in the Psychopathy Personality Inventory, Lilienfeld & Andrews, 1996; Boldness traits in the Triarchic Model of Psychopathy, Patrick et al, 2009), above and beyond other characteristics of psychopathy, in statistically predicting scores on four psychopathy-related measures, including the Psychopathy Checklist-Revised (PCL-R). The incremental validity added by boldness traits in predicting the PCL-R’s representation of psychopathy was especially pronounced for interpersonal traits (e.g., superficial charm, deceitfulness). Our analyses, however, revealed unexpected sex differences in the relevance of these traits to psychopathy, with boldness traits exhibiting reduced importance for psychopathy in women. We discuss the implications of these findings for measurement models of psychopathy. PMID:26866795

  18. Simulating the effects of climatic variation on stem carbon accumulation of a ponderosa pine stand: comparison with annual growth increment data.

    PubMed

    Hunt, E R; Martin, F C; Running, S W

    1991-01-01

    Simulation models of ecosystem processes may be necessary to separate the long-term effects of climate change on forest productivity from the effects of year-to-year variations in climate. The objective of this study was to compare simulated annual stem growth with measured annual stem growth from 1930 to 1982 for a uniform stand of ponderosa pine (Pinus ponderosa Dougl.) in Montana, USA. The model, FOREST-BGC, was used to simulate growth assuming leaf area index (LAI) was either constant or increasing. The measured stem annual growth increased exponentially over time; the differences between the simulated and measured stem carbon accumulations were not large. Growth trends were removed from both the measured and simulated annual increments of stem carbon to enhance the year-to-year variations in growth resulting from climate. The detrended increments from the increasing LAI simulation fit the detrended increments of the stand data over time with an R² of 0.47; the R² increased to 0.65 when the previous year's simulated detrended increment was included with the current year's simulated increment to account for autocorrelation. Stepwise multiple linear regression of the detrended increments of the stand data versus monthly meteorological variables had an R² of 0.37, and the R² increased to 0.47 when the previous year's meteorological data were included to account for autocorrelation. Thus, FOREST-BGC was more sensitive to the effects of year-to-year climate variation on annual stem growth than were multiple linear regression models.

  19. Analyzing Reaction Rates with the Distortion/Interaction‐Activation Strain Model

    PubMed Central

    2017-01-01

    Abstract The activation strain or distortion/interaction model is a tool to analyze activation barriers that determine reaction rates. For bimolecular reactions, the activation energies are the sum of the energies to distort the reactants into geometries they have in transition states plus the interaction energies between the two distorted molecules. The energy required to distort the molecules is called the activation strain or distortion energy. This energy is the principal contributor to the activation barrier. The transition state occurs when this activation strain is overcome by the stabilizing interaction energy. Following the changes in these energies along the reaction coordinate gives insights into the factors controlling reactivity. This model has been applied to reactions of all types in both organic and inorganic chemistry, including substitutions and eliminations, cycloadditions, and several types of organometallic reactions. PMID:28447369

  20. Exploiting Outage and Error Probability of Cooperative Incremental Relaying in Underwater Wireless Sensor Networks

    PubMed Central

    Nasir, Hina; Javaid, Nadeem; Sher, Muhammad; Qasim, Umar; Khan, Zahoor Ali; Alrajeh, Nabil; Niaz, Iftikhar Azim

    2016-01-01

    This paper makes a two-fold contribution to Underwater Wireless Sensor Networks (UWSNs): a performance analysis of incremental relaying in terms of outage and error probability, and, based on that analysis, the proposition of two new cooperative routing protocols. For the first contribution, a three-step procedure is carried out: a system model is presented, the number of available relays is determined, and, based on the cooperative incremental retransmission methodology, closed-form expressions for outage and error probability are derived. For the second contribution, Adaptive Cooperation in Energy (ACE) efficient depth based routing and Enhanced-ACE (E-ACE) are presented. In the proposed model, a feedback mechanism indicates the success or failure of data transmission. If direct transmission is successful, there is no need for relaying by cooperative relay nodes. In case of failure, all the available relays retransmit the data one by one until the desired signal quality is achieved at the destination. Simulation results show that ACE and E-ACE significantly improve network performance, i.e., throughput, when compared with other incremental relaying protocols such as Cooperative Automatic Repeat reQuest (CARQ). E-ACE and ACE achieve 69% and 63% more throughput, respectively, as compared to CARQ in a hard underwater environment. PMID:27420061

  1. MESOSCOPIC MODELING OF STOCHASTIC REACTION-DIFFUSION KINETICS IN THE SUBDIFFUSIVE REGIME

    PubMed Central

    BLANC, EMILIE; ENGBLOM, STEFAN; HELLANDER, ANDREAS; LÖTSTEDT, PER

    2017-01-01

    Subdiffusion has been proposed as an explanation of various kinetic phenomena inside living cells. In order to facilitate large-scale computational studies of subdiffusive chemical processes, we extend a recently suggested mesoscopic model of subdiffusion into an accurate and consistent reaction-subdiffusion computational framework. Two different possible models of chemical reaction are revealed and some basic dynamic properties are derived. In certain cases those mesoscopic models have a direct interpretation at the macroscopic level as fractional partial differential equations in a bounded time interval. Through analysis and numerical experiments we estimate the macroscopic effects of reactions under subdiffusive mixing. The models display properties observed also in experiments: for a short time interval the behavior of the diffusion and the reaction is ordinary, in an intermediate interval the behavior is anomalous, and at long times the behavior is ordinary again. PMID:29046618

  2. 40 CFR 60.2835 - What if I do not meet an increment of progress?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... or Before November 30, 1999 Model Rule-Air Curtain Incinerators § 60.2835 What if I do not meet an... Administrator postmarked within 10 business days after the date for that increment of progress in table 1 of...

  3. Martingales, nonstationary increments, and the efficient market hypothesis

    NASA Astrophysics Data System (ADS)

    McCauley, Joseph L.; Bassler, Kevin E.; Gunaratne, Gemunu H.

    2008-06-01

    We discuss the deep connection between nonstationary increments, martingales, and the efficient market hypothesis for stochastic processes x(t) with arbitrary diffusion coefficients D(x,t). We explain why a test for a martingale is generally a test for uncorrelated increments. We explain why martingales look Markovian at the level of both simple averages and 2-point correlations. But while a Markovian market has no memory to exploit and cannot be beaten systematically, a martingale admits memory that might be exploitable in higher order correlations. We also use the analysis of this paper to correct a misstatement of the ‘fair game’ condition in terms of serial correlations in Fama’s paper on the EMH. We emphasize that the use of the log increment as a variable in data analysis generates spurious fat tails and spurious Hurst exponents.
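The point that "a test for a martingale is generally a test for uncorrelated increments" can be illustrated with a minimal statistic: the lag-1 autocorrelation of the increments of a series. This is a necessary but not sufficient martingale check, shown here on a synthetic Gaussian random walk; the function and parameters are illustrative, not from the paper.

```python
import random

def increment_autocorr(x, lag=1):
    """Lag autocorrelation of the increments of a series x(t).
    For a martingale the increments are uncorrelated, so this
    statistic should be near zero (necessary, not sufficient)."""
    inc = [b - a for a, b in zip(x, x[1:])]
    n = len(inc) - lag
    mean = sum(inc) / len(inc)
    var = sum((u - mean) ** 2 for u in inc) / len(inc)
    cov = sum((inc[i] - mean) * (inc[i + lag] - mean) for i in range(n)) / n
    return cov / var

# Synthetic random walk with independent Gaussian increments.
rng = random.Random(42)
walk = [0.0]
for _ in range(5000):
    walk.append(walk[-1] + rng.gauss(0.0, 1.0))

rho = increment_autocorr(walk)  # near zero for independent increments
```

A near-zero value here rules out exploitable serial correlation at this lag, but, as the abstract stresses, says nothing about memory hiding in higher-order correlations.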

  4. Single point incremental forming: Formability of PC sheets

    NASA Astrophysics Data System (ADS)

    Formisano, A.; Boccarusso, L.; Carrino, L.; Lambiase, F.; Minutolo, F. Memola Capece

    2018-05-01

    Recent research on Single Point Incremental Forming of polymers has only briefly addressed the possibility of expanding the material capability window of this flexible forming process beyond metals, by demonstrating the workability of thermoplastic polymers at room temperature. Given the different behaviour of polymers compared to metals, several aspects need to be examined in greater depth to better understand how these materials behave when incrementally formed. Thus, the aim of this work is to investigate the formability of incrementally formed polycarbonate thin sheets. To this end, an experimental investigation at room temperature was conducted involving formability tests; varying wall angle cone and pyramid frusta were manufactured by processing polycarbonate sheets of different thicknesses using tools with different diameters, in order to draw conclusions on the formability of polymer sheets through the evaluation of the forming angles and the observation of the failure mechanisms.

  5. Incremental Implicit Learning of Bundles of Statistical Patterns

    PubMed Central

    Qian, Ting; Jaeger, T. Florian; Aslin, Richard N.

    2016-01-01

    Forming an accurate representation of a task environment often takes place incrementally as the information relevant to learning the representation only unfolds over time. This incremental nature of learning poses an important problem: it is usually unclear whether a sequence of stimuli consists of only a single pattern, or multiple patterns that are spliced together. In the former case, the learner can directly use each observed stimulus to continuously revise its representation of the task environment. In the latter case, however, the learner must first parse the sequence of stimuli into different bundles, so as to not conflate the multiple patterns. We created a video-game statistical learning paradigm and investigated 1) whether learners without prior knowledge of the existence of multiple “stimulus bundles” — subsequences of stimuli that define locally coherent statistical patterns — could detect their presence in the input, and 2) whether learners are capable of constructing a rich representation that encodes the various statistical patterns associated with bundles. By comparing human learning behavior to the predictions of three computational models, we find evidence that learners can handle both tasks successfully. In addition, we discuss the underlying reasons for why the learning of stimulus bundles occurs even when such behavior may seem irrational. PMID:27639552

  6. Consistent post-reaction vibrational energy redistribution in DSMC simulations using TCE model

    NASA Astrophysics Data System (ADS)

    Borges Sebastião, Israel; Alexeenko, Alina

    2016-10-01

    The direct simulation Monte Carlo (DSMC) method has been widely applied to study shockwaves, hypersonic reentry flows, and other nonequilibrium flow phenomena. Although there is currently active research on high-fidelity models based on ab initio data, the total collision energy (TCE) and Larsen-Borgnakke (LB) models remain the most often used chemistry and relaxation models in DSMC simulations, respectively. The conventional implementation of the discrete LB model, however, may not satisfy detailed balance when recombination and exchange reactions play an important role in the flow energy balance. This issue can become even more critical in reacting mixtures involving polyatomic molecules, such as in combustion. In this work, this important shortcoming is addressed and an empirical approach to consistently specify the post-reaction vibrational states close to thermochemical equilibrium conditions is proposed within the TCE framework. Following Bird's quantum-kinetic (QK) methodology for populating post-reaction states, the new TCE-based approach involves two main steps. The state-specific TCE reaction probabilities for a forward reaction are first pre-computed from equilibrium 0-D simulations. These probabilities are then employed to populate the post-reaction vibrational states of the corresponding reverse reaction. The new approach is illustrated by application to exchange and recombination reactions relevant to H2-O2 combustion processes.

  7. Observers for Systems with Nonlinearities Satisfying an Incremental Quadratic Inequality

    NASA Technical Reports Server (NTRS)

    Acikmese, Ahmet Behcet; Corless, Martin

    2004-01-01

    We consider the problem of state estimation for nonlinear time-varying systems whose nonlinearities satisfy an incremental quadratic inequality. These results unify earlier results in the literature and extend them to some additional classes of nonlinearities. Observers are presented which guarantee that the state estimation error converges exponentially to zero. Observer design involves solving linear matrix inequalities for the observer gain matrices. Results are illustrated by application to a simple model of an underwater vehicle.

  8. Mutual-Information-Based Incremental Relaying Communications for Wireless Biomedical Implant Systems

    PubMed Central

    Liao, Yangzhe; Cai, Qing; Ai, Qingsong; Liu, Quan

    2018-01-01

    Network lifetime maximization of wireless biomedical implant systems is one of the major research challenges of wireless body area networks (WBANs). In this paper, a mutual information (MI)-based incremental relaying communication protocol is presented where several on-body relay nodes and one coordinator are attached to the clothes of a patient. Firstly, a comprehensive analysis of a system model is investigated in terms of channel path loss, energy consumption, and the outage probability from the network perspective. Secondly, only when the MI value becomes smaller than the predetermined threshold is data transmission allowed. The communication path selection can be either from the implanted sensor to the on-body relay then forwards to the coordinator or from the implanted sensor to the coordinator directly, depending on the communication distance. Moreover, mathematical models of quality of service (QoS) metrics are derived along with the related subjective functions. The results show that the MI-based incremental relaying technique achieves better performance in comparison to our previous proposed protocol techniques regarding several selected performance metrics. The outcome of this paper can be applied to intra-body continuous physiological signal monitoring, artificial biofeedback-oriented WBANs, and telemedicine system design. PMID:29419784

  10. Bigger is Better, but at What Cost? Estimating the Economic Value of Incremental Data Assets.

    PubMed

    Dalessandro, Brian; Perlich, Claudia; Raeder, Troy

    2014-06-01

    Many firms depend on third-party vendors to supply data for commercial predictive modeling applications. An issue that has received very little attention in the prior research literature is the estimation of a fair price for purchased data. In this work we present a methodology for estimating the economic value of adding incremental data to predictive modeling applications and present two case studies. The methodology starts with estimating the effect that incremental data has on model performance in terms of common classification evaluation metrics. This effect is then translated into economic units, which gives an expected economic value that the firm might realize with the acquisition of a particular data asset. With this estimate, a firm can then set a data acquisition price that targets a particular return on investment. This article presents the methodology in full detail and illustrates it in the context of two marketing case studies.
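The two-step logic of the methodology (translate a model-performance lift into economic units, then back a price out of a target return on investment) can be illustrated with a deliberately simplified calculation. The functions and all numbers below are hypothetical, not the paper's case-study figures.

```python
def incremental_value(base_rate, lift, population, value_per_conversion):
    """Expected extra profit from a data asset that lifts the model's
    conversion (capture) rate from base_rate to base_rate * (1 + lift)
    over a targeted population.  Illustrative, not the paper's method."""
    extra_conversions = population * base_rate * lift
    return extra_conversions * value_per_conversion

def max_price(value, target_roi):
    """Highest acquisition price that still meets the target ROI,
    from ROI = (value - price) / price."""
    return value / (1.0 + target_roi)

# Hypothetical numbers: a 10% lift on a 2% base conversion rate over
# one million prospects, at $5 of value per conversion.
v = incremental_value(base_rate=0.02, lift=0.10, population=1_000_000,
                      value_per_conversion=5.0)   # -> 10000.0
p = max_price(v, target_roi=0.25)                 # -> 8000.0
```

Under these assumptions the data asset is worth $10,000 in expectation, so a firm targeting a 25% return on investment would bid at most $8,000 for it.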

  11. Beyond Incrementalism? SCHIP and the politics of health reform.

    PubMed

    Oberlander, Jonathan B; Lyons, Barbara

    2009-01-01

    When Congress enacted the State Children's Health Insurance Program (SCHIP) in 1997, it was heralded as a model of bipartisan, incremental health policy. However, despite the program's achievements in the ensuing decade, SCHIP's reauthorization triggered political conflict, and efforts to expand the program stalemated in 2007. The 2008 elections broke that stalemate, and in 2009 the new Congress passed, and President Barack Obama signed, legislation reauthorizing SCHIP. Now that attention is turning to comprehensive health reform, what lessons can reformers learn from SCHIP's political adventures?

  12. Analyzing the posting behaviors in news forums with incremental inter-event time

    NASA Astrophysics Data System (ADS)

    Sun, Zhi; Peng, Qinke; Lv, Jia; Zhong, Tao

    2017-08-01

    Online human behaviors are widely discussed in various fields. Three key factors, namely priority, interest and memory, are found to be crucial in human behaviors. Existing research mainly focuses on identified and active users. However, anonymous users and inactive ones exist widely in news forums, and their behaviors have not received enough attention. Since they cannot offer abundant postings like the others, the posting behaviors of all users, including anonymous, identified, active and inactive ones, must be studied in news forums at the collective level. In this paper, the memory effects of posting behaviors in news forums are investigated at the collective level. On the basis of the incremental inter-event time, a new model is proposed to describe the posting behaviors at the collective level. The results on twelve actual news events demonstrate the good performance of our model in describing posting behaviors at the collective level in news forums. In addition, we find a symmetric incremental inter-event time distribution and similar posting patterns across different durations.

  13. Evaluation and linking of effective parameters in particle-based models and continuum models for mixing-limited bimolecular reactions

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Papelis, Charalambos; Sun, Pengtao; Yu, Zhongbo

    2013-08-01

    Particle-based models and continuum models have been developed to quantify mixing-limited bimolecular reactions for decades. Effective model parameters control reaction kinetics, but the relationship between the particle-based model parameter (such as the interaction radius R) and the continuum model parameter (i.e., the effective rate coefficient Kf) remains obscure. This study attempts to evaluate and link R and Kf for the second-order bimolecular reaction in both the bulk and the sharp-concentration-gradient (SCG) systems. First, in the bulk system, the agent-based method reveals that R remains constant for irreversible reactions and decreases nonlinearly in time for a reversible reaction, while mathematical analysis shows that Kf transitions from an exponential to a power-law function. A qualitative link between R and Kf can then be built for the irreversible reaction with equal initial reactant concentrations. Second, in the SCG system with a reaction interface, numerical experiments show that when R and Kf decline as t-1/2 (for example, to account for the reactant front expansion), the two models capture the transient power-law growth of product mass, and their effective parameters have the same functional form. Finally, a revisit of laboratory experiments further shows that the best-fit factor in R and Kf is of the same order, and both models can efficiently describe the chemical kinetics observed in the SCG system. Effective model parameters used to describe reaction kinetics may therefore be linked directly, where the exact linkage may depend on the chemical and physical properties of the system.
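The particle-based side of this comparison can be caricatured in one dimension: A and B particles react pairwise whenever they fall within the interaction radius R, so enlarging R plays the role of enlarging the effective rate coefficient Kf. This sketch is illustrative only; it omits diffusion steps and any calibration to the systems studied in the paper.

```python
import random

def react_step(a, b, R):
    """One sweep of a particle-based A + B -> C reaction: an A-B pair
    reacts (both particles are removed) when the two particle positions
    lie within the interaction radius R of each other."""
    removed_a, removed_b = set(), set()
    for i, xa in enumerate(a):
        for j, xb in enumerate(b):
            if j in removed_b:
                continue
            if abs(xa - xb) < R:
                removed_a.add(i)
                removed_b.add(j)
                break  # each A reacts at most once per sweep
    a_left = [x for i, x in enumerate(a) if i not in removed_a]
    b_left = [x for j, x in enumerate(b) if j not in removed_b]
    return a_left, b_left

rng = random.Random(1)
A = [rng.uniform(0.0, 1.0) for _ in range(200)]
B = [rng.uniform(0.0, 1.0) for _ in range(200)]

# A larger interaction radius consumes more reactant pairs in one sweep,
# mimicking a larger effective rate coefficient.
A1, B1 = react_step(A, B, R=0.001)
A2, B2 = react_step(A, B, R=0.01)
```

Because particles are removed strictly in pairs, the surviving A and B counts stay equal, and the larger radius leaves fewer survivors.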

  14. Optimization of Angular-Momentum Biases of Reaction Wheels

    NASA Technical Reports Server (NTRS)

    Lee, Clifford; Lee, Allan

    2008-01-01

    RBOT [RWA Bias Optimization Tool (wherein RWA signifies Reaction Wheel Assembly)] is a computer program designed for computing angular-momentum biases for reaction wheels used to provide spacecraft pointing in the various directions required for scientific observations. RBOT is currently deployed to support the Cassini mission, to prevent operation of reaction wheels at unsafely high speeds while minimizing time in the undesirable low-speed range, where elasto-hydrodynamic lubrication films in bearings become ineffective, leading to premature bearing failure. The problem is formulated as a constrained optimization problem in which the maximum wheel-speed limit is a hard constraint and the objective is a cost functional that increases as speed decreases below a low-speed threshold. The optimization problem is solved using a parametric search routine known as the Nelder-Mead simplex algorithm. To increase computational efficiency for extended operation involving large quantities of data, the algorithm is designed to (1) use large time increments during intervals when spacecraft attitudes or rates of rotation are nearly stationary, (2) use sinusoidal-approximation sampling to model repeated long periods of Earth-point rolling maneuvers to reduce computational loads, and (3) utilize an efficient equation to obtain wheel-rate profiles as functions of initial wheel biases based on conservation of angular momentum (in an inertial frame) using pre-computed terms.
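The shape of the optimization problem can be sketched as follows; this is a toy reconstruction, not flight code. The speed model (wheel speed as bias plus a momentum term), the numerical limits, and the coarse grid search standing in for the Nelder-Mead routine are all assumptions made for illustration.

```python
def wheel_cost(bias, momentum_profile, max_speed=2000.0, low_thresh=300.0):
    """Cost of one wheel's bias over a momentum profile (wheel speed is
    modeled as bias + momentum term, in rpm-equivalent units).  Exceeding
    the maximum speed is a hard constraint (infinite cost); time spent
    below the low-speed threshold, where bearing lubrication films
    become ineffective, is penalized quadratically."""
    cost = 0.0
    for h in momentum_profile:
        speed = abs(bias + h)
        if speed > max_speed:
            return float("inf")          # hard constraint violated
        if speed < low_thresh:
            cost += (low_thresh - speed) ** 2
    return cost

# Hypothetical momentum profile sweeping from -400 to +400 units.
profile = [800.0 * i / 100 - 400.0 for i in range(101)]

# RBOT uses the Nelder-Mead simplex; a coarse grid search shows the idea:
# find a bias that keeps the wheel fast enough throughout the profile.
best_bias = min(range(-1500, 1501, 10),
                key=lambda b: wheel_cost(float(b), profile))
```

Any bias whose magnitude exceeds the profile swing plus the low-speed threshold achieves zero cost here, which mirrors the intent of biasing wheels away from the low-speed dwell region.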

  15. Incremental Reactivity Effects on Secondary Organic Aerosol Formation in Urban Atmospheres with and without Biogenic Influence

    NASA Astrophysics Data System (ADS)

    Kacarab, Mary; Li, Lijie; Carter, William P. L.; Cocker, David R., III

    2016-04-01

    Two different surrogate mixtures of anthropogenic and biogenic volatile organic compounds (VOCs) were developed to study secondary organic aerosol (SOA) formation at atmospheric reactivities similar to urban regions with varying biogenic influence levels. Environmental chamber simulations were designed to enable the study of the incremental aerosol formation from select anthropogenic (m-Xylene, 1,2,4-Trimethylbenzene, and 1-Methylnaphthalene) and biogenic (α-pinene) precursors under the chemical reactivity set by the two different surrogate mixtures. The surrogate reactive organic gas (ROG) mixtures were based on that used to develop the maximum incremental reactivity (MIR) factors for evaluation of O3 forming potential. Multiple incremental aerosol formation experiments were performed in the University of California Riverside (UCR) College of Engineering Center for Environmental Research and Technology (CE-CERT) dual 90 m³ environmental chambers. Incremental aerosol yields were determined for each of the VOCs studied and compared to yields found from single precursor studies. Aerosol physical properties of density, volatility, and hygroscopicity were monitored throughout experiments. Bulk elemental chemical composition from high-resolution time of flight aerosol mass spectrometer (HR-ToF-AMS) data will also be presented. Incremental yields and SOA chemical and physical characteristics will be compared with data from previous single VOC studies conducted for these aerosol precursors following traditional VOC/NOx chamber experiments. Evaluation of the incremental effects of VOCs on SOA formation and properties are paramount in evaluating how to best extrapolate environmental chamber observations to the ambient atmosphere and provides useful insights into current SOA formation models. Further, the comparison of incremental SOA from VOCs in varying surrogate urban atmospheres (with and without strong biogenic influence) allows for a unique perspective on the impacts

  16. SurfKin: an ab initio kinetic code for modeling surface reactions.

    PubMed

    Le, Thong Nguyen-Minh; Liu, Bin; Huynh, Lam K

    2014-10-05

    In this article, we describe a C/C++ program called SurfKin (Surface Kinetics) to construct microkinetic mechanisms for modeling gas-surface reactions. Thermodynamic properties of reaction species are estimated based on density functional theory calculations and statistical mechanics. Rate constants for elementary steps (including adsorption, desorption, and chemical reactions on surfaces) are calculated using the classical collision theory and transition state theory. Methane decomposition and water-gas shift reaction on Ni(111) surface were chosen as test cases to validate the code implementations. The good agreement with literature data suggests this is a powerful tool to facilitate the analysis of complex reactions on surfaces, and thus it helps to effectively construct detailed microkinetic mechanisms for such surface reactions. SurfKin also opens a possibility for designing nanoscale model catalysts. Copyright © 2014 Wiley Periodicals, Inc.

  17. A computational study of pyrolysis reactions of lignin model compounds

    Treesearch

    Thomas Elder

    2010-01-01

    Enthalpies of reaction for the initial steps in the pyrolysis of lignin have been evaluated at the CBS-4m level of theory using fully substituted β-O-4 dilignols. Values for competing unimolecular decomposition reactions are consistent with results previously published for phenethyl phenyl ether models, but with lowered selectivity. Chain propagating reactions of free...

  18. Effect of homogeneous-heterogeneous reactions on MHD Prandtl fluid flow over a stretching sheet

    NASA Astrophysics Data System (ADS)

    Khan, Imad; Malik, M. Y.; Hussain, Arif; Salahuddin, T.

    An analysis is performed to explore the effects of homogeneous-heterogeneous reactions on the two-dimensional flow of a Prandtl fluid over a stretching sheet. The present analysis uses the developed model of homogeneous-heterogeneous reactions in boundary layer flow. The mathematical formulation of the flow phenomenon yields nonlinear partial differential equations. Using scaling transformations, the governing partial differential equations (the momentum equation and the homogeneous-heterogeneous reaction equations) are transformed into nonlinear ordinary differential equations (ODEs). The resulting nonlinear ODEs are solved by a computational scheme known as the shooting method. The quantitative and qualitative behavior of the physical quantities of interest (velocity, concentration and drag force coefficient) is examined under the prescribed physical constraints through figures and tables. It is observed that the velocity profile increases with the fluid parameters α and β, while the Hartmann number reduces it. The homogeneous and heterogeneous reaction parameters have opposite effects on the concentration profile. The concentration profile shows retarding behavior for large values of the Schmidt number. The skin friction coefficient increases with increments in the Hartmann number H and the fluid parameter α.

  19. Cost of Incremental Expansion of an Existing Family Medicine Residency Program.

    PubMed

    Ashkin, Evan A; Newton, Warren P; Toomey, Brian; Lingley, Ronald; Page, Cristen P

    2017-07-01

    Expanding residency training programs to address shortages in the primary care workforce is challenged by the present graduate medical education (GME) environment. The Medicare funding cap on new GME positions and reductions in the Health Resources and Services Administration (HRSA) Teaching Health Center (THC) GME program require innovative solutions to support primary care residency expansion. Sparse literature exists to assist in predicting the actual cost of incremental expansion of a family medicine residency program without federal or state GME support. In 2011, a collaboration to develop a community health center (CHC) academic medical partnership (CHAMP) was formed and created a THC as a training site for expansion of an existing family medicine residency program. The cost of expansion was a critical factor, as no federal GME funding or HRSA THC GME program support was available. Initial start-up costs were supported by a federal grant and local foundations. Careful financial analysis of the expansion has provided actual costs per resident of the incremental expansion of the residency. RESULTS: The CHAMP created a new THC and expanded the residency from eight to ten residents per year. The cost of expansion was approximately $72,000 per resident per year. The cost of incremental expansion of our residency program in the CHAMP model was more than 50% less than the recently reported cost of training in the HRSA THC GME program.

  20. Modeling the mechanism of glycosylation reactions between ethanol, 1,2-ethanediol and methoxymethanol.

    PubMed

    Azofra, Luis Miguel; Alkorta, Ibon; Toro-Labbé, Alejandro; Elguero, José

    2013-09-07

    The mechanism of the S(N)2 model glycosylation reaction between ethanol, 1,2-ethanediol and methoxymethanol has been studied theoretically at the B3LYP/6-311+G(d,p) computational level. Three different types of reactions have been explored: (i) the exchange of hydroxyl groups between these model systems; (ii) the basic catalysis reactions by combination of the substrates as glycosyl donors (neutral species) and acceptors (enolate species); and (iii) the effect on the reaction profile of an explicit H2O molecule in the reactions considered in (ii). The reaction force, the electronic chemical potential and the reaction electronic flux have been characterized for the reaction path in each case. Energy calculations show that methoxymethanol is the worst glycosyl donor model among the ones studied here, while 1,2-ethanediol is the best, having the lowest activation barrier of 74.7 kJ mol(-1) for the reaction between this one and the ethanolate as the glycosyl acceptor model. In general, the presence of direct interactions between the atoms involved in the penta-coordinated TS increases the activation energies of the processes.

  1. Tinnitus as an alarm bell: stress reaction tinnitus model.

    PubMed

    Alpini, D; Cesarani, A

    2006-01-01

    Stress is a significant factor influencing the clinical course of tinnitus. Auditory system is particularly sensitive to the effects of different stress factors (chemical, oxidative, emotional, etc.). Different stages of reaction (alarm, resistance, exhaustion) lead to different characteristics of tinnitus and different therapeutic approaches. Individual characteristics of stress reaction may explain different aspects of tinnitus in various patients with different responses to treatment, despite similar audiological and etiological factors. A model based on individual reactions to stress factors (stress reaction tinnitus model) could explain tinnitus as an alarm signal, just like an 'alarm bell', informing the patient that something potentially dangerous for subject homeostasis is happening. Tinnitus could become a disabling symptom when the subject is chronically exposed to a stress factor and is unable to switch off the alarm. Stress signals, specific for each patient, have to be identified during the 'alarm' phase in order to prevent an evolution toward the 'resistance' and 'exhaustion' phases. In these phases, identification of stressor is no more sufficient, due to the organization of a 'paradoxical auditory memory' and a 'pathologically shifted attention to tinnitus'. Identification of stress reaction phase requires accurate otolaryngology and anamnesis combined with audiological matching tests (Feldman Masking Test, for example) and psychometric questionnaires (Tinnitus Reaction and Tinnitus Cognitive Questionnaires). Copyright (c) 2006 S. Karger AG, Basel.

  2. Site Preparation Guide: Increment 1 and Increment 2. Cargo Movement Operations System (CMOS). Revision

    DTIC Science & Technology

    1990-03-01

    Freight, and Air Freight workcenters. Increment II workcenters will also use these computers. All order processing, cargo information processing... 4. Work Clearance Permits; 5. Work Order Processing; 6. Validation... implementation. 5. Work Order Processing. a. After SSC/AQFT/AQAE have reviewed and approved the site PSA, the site will be notified to begin

  3. International Space Station Increment-4/5 Microgravity Environment Summary Report

    NASA Technical Reports Server (NTRS)

    Jules, Kenol; Hrovat, Kenneth; Kelly, Eric; McPherson, Kevin; Reckart, Timothy

    2003-01-01

    This summary report presents the results of some of the processed acceleration data measured aboard the International Space Station during the period of December 2001 to December 2002. Unlike the past two ISS Increment reports, which were increment specific, this summary report covers two increments: Increments 4 and 5, hereafter referred to as Increment-4/5. Two accelerometer systems were used to measure the acceleration levels for the activities that took place during Increment-4/5. Due to time constraints and lack of precise timeline information regarding some payload operations and station activities, not all of the activities were analyzed for this report. The National Aeronautics and Space Administration sponsors the Microgravity Acceleration Measurement System and the Space Acceleration Measurement System to support microgravity science experiments which require microgravity acceleration measurements. On April 19, 2001, both the Microgravity Acceleration Measurement System and the Space Acceleration Measurement System units were launched on STS-100 from the Kennedy Space Center for installation on the International Space Station. The Microgravity Acceleration Measurement System supports science experiments requiring quasi-steady acceleration measurements, while the Space Acceleration Measurement System unit supports experiments requiring vibratory acceleration measurement. The International Space Station Increment-4/5 reduced gravity environment analysis presented in this report uses acceleration data collected by both sets of accelerometer systems: the Microgravity Acceleration Measurement System, which consists of two sensors: the low-frequency Orbital Acceleration Research Experiment Sensor Subsystem and the higher frequency High Resolution Accelerometer Package. The low frequency sensor measures up to 1 Hz, but is routinely trimmean filtered to yield much lower frequency acceleration data up to 0.01 Hz. This filtered data can be mapped to arbitrary

  4. Examining the ethnoracial invariance of a bifactor model of anxiety sensitivity and the incremental validity of the physical domain-specific factor in a primary-care patient sample.

    PubMed

    Fergus, Thomas A; Kelley, Lance P; Griggs, Jackson O

    2017-10-01

    There is growing support for a bifactor conceptualization of the Anxiety Sensitivity Index-3 (ASI-3; Taylor et al., 2007), consisting of a General factor and 3 domain-specific factors (i.e., Physical, Cognitive, Social). Earlier studies supporting a bifactor model of the ASI-3 used samples that consisted of predominantly White respondents. In addition, extant research has yet to support the incremental validity of the Physical domain-specific factor while controlling for the General factor. The present study is an examination of a bifactor model of the ASI-3 and the measurement invariance of that model among an ethnoracially diverse sample of primary-care patients (N = 533). Results from multiple-group confirmatory factor analysis supported the configural and metric/scalar invariance of the bifactor model of the ASI-3 across self-identifying Black, Latino, and White respondents. The Physical domain-specific factor accounted for unique variance in an index of health anxiety beyond the General factor. These results provide support for the generalizability of a bifactor model of the ASI-3 across 3 ethnoracial groups, as well as indication of the incremental explanatory power of the Physical domain-specific factor. Study implications are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  5. One Step at a Time: SBM as an Incremental Process.

    ERIC Educational Resources Information Center

    Conrad, Mark

    1995-01-01

    Discusses incremental SBM budgeting and answers questions regarding resource equity, bookkeeping requirements, accountability, decision-making processes, and purchasing. Approaching site-based management as an incremental process recognizes that every school system engages in some level of site-based decisions. Implementation can be gradual and…

  6. DSMC modeling of flows with recombination reactions

    NASA Astrophysics Data System (ADS)

    Gimelshein, Sergey; Wysong, Ingrid

    2017-06-01

    An empirical microscopic recombination model is developed for the direct simulation Monte Carlo method that complements the extended weak vibrational bias model of dissociation. The model maintains the correct equilibrium reaction constant in a wide range of temperatures by using the collision theory to enforce the number of recombination events. It also strictly follows the detailed balance requirement for equilibrium gas. The model and its implementation are verified with oxygen and nitrogen heat bath relaxation and compared with available experimental data on atomic oxygen recombination in argon and molecular nitrogen.

  7. History Matters: Incremental Ontology Reasoning Using Modules

    NASA Astrophysics Data System (ADS)

    Cuenca Grau, Bernardo; Halaschek-Wiener, Christian; Kazakov, Yevgeny

    The development of ontologies involves continuous but relatively small modifications. Existing ontology reasoners, however, do not take advantage of the similarities between different versions of an ontology. In this paper, we propose a technique for incremental reasoning—that is, reasoning that reuses information obtained from previous versions of an ontology—based on the notion of a module. Our technique does not depend on a particular reasoning calculus and thus can be used in combination with any reasoner. We have applied our results to incremental classification of OWL DL ontologies and found significant improvement over regular classification time on a set of real-world ontologies.
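    The module-based reuse described above can be reduced to a small filtering step. The sketch below is illustrative only, not the authors' algorithm: the names `affected_modules`, `modules`, and `changed_axioms` are hypothetical, and modules are simplified to plain lists of axiom identifiers. Only modules overlapping the changed axioms are re-classified; all others reuse cached reasoning results.

```python
def affected_modules(modules, changed_axioms):
    """Keep only the modules touched by an ontology change; every other
    module can reuse the reasoning results cached for the previous
    version of the ontology."""
    changed = set(changed_axioms)
    return [m for m in modules if changed & set(m)]

# Three toy modules over axiom identifiers.
mods = [["a", "b"], ["c"], ["d", "e"]]
```

    Because a small edit usually touches few modules, most of the classification work from the previous version carries over unchanged.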

  8. Chemical reaction networks as a model to describe UVC- and radiolytically-induced reactions of simple compounds.

    PubMed

    Dondi, Daniele; Merli, Daniele; Albini, Angelo; Zeffiro, Alberto; Serpone, Nick

    2012-05-01

    When a chemical system is submitted to high energy sources (UV, ionizing radiation, plasma sparks, etc.), as is expected to be the case of prebiotic chemistry studies, a plethora of reactive intermediates could form. If oxygen is present in excess, carbon dioxide and water are the major products. More interesting is the case of reducing conditions where synthetic pathways are also possible. This article examines the theoretical modeling of such systems with random-generated chemical networks. Four types of random-generated chemical networks were considered that originated from a combination of two connection topologies (viz., Poisson and scale-free) with reversible and irreversible chemical reactions. The results were analyzed taking into account the number of the most abundant products required for reaching 50% of the total number of moles of compounds at equilibrium, as this may be related to an actual problem of complex mixture analysis. The model accounts for multi-component reaction systems with no a priori knowledge of reacting species and the intermediates involved if system components are sufficiently interconnected. The approach taken is relevant to an earlier study on reactions that may have occurred in prebiotic systems where only a few compounds were detected. A validation of the model was attained on the basis of results of UVC and radiolytic reactions of prebiotic mixtures of low molecular weight compounds likely present on the primeval Earth.
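    The abstract's analysis measure (how many of the most abundant products are needed to reach 50% of the total moles) and its "Poisson" connection topology can be sketched in a few lines. This is a hedged illustration, not the authors' code: `species_to_half` and `poisson_network` are hypothetical names, and reactions are simplified to randomly chosen A + B -> C triples.

```python
import random

def species_to_half(moles):
    """How many of the most abundant species are needed to account for
    50% of the total number of moles at equilibrium."""
    total = sum(moles)
    acc = 0.0
    for count, m in enumerate(sorted(moles, reverse=True), start=1):
        acc += m
        if acc >= 0.5 * total:
            return count
    return len(moles)

def poisson_network(n_species, n_reactions, rng):
    """Random A + B -> C reactions with uniformly chosen distinct
    participants (an Erdos-Renyi-like 'Poisson' connection topology)."""
    return [tuple(rng.sample(range(n_species), 3)) for _ in range(n_reactions)]

rng = random.Random(1)
net = poisson_network(20, 50, rng)
```

    A scale-free topology would replace the uniform participant choice with preferential attachment; the 50%-of-moles count then summarizes how concentrated the equilibrium mixture is.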

  9. 40 CFR 60.1600 - When must I submit the notifications of achievement of increments of progress?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES... Before August 30, 1999 Model Rule-Increments of Progress § 60.1600 When must I submit the notifications...

  10. VR-SCOSMO: A smooth conductor-like screening model with charge-dependent radii for modeling chemical reactions.

    PubMed

    Kuechler, Erich R; Giese, Timothy J; York, Darrin M

    2016-04-28

    To better represent the solvation effects observed along reaction pathways, and of ionic species in general, a charge-dependent variable-radii smooth conductor-like screening model (VR-SCOSMO) is developed. This model is implemented and parameterized with a third order density-functional tight binding quantum model, DFTB3/3OB-OPhyd, a quantum method which was developed for organic and biological compounds, utilizing a specific parameterization for phosphate hydrolysis reactions. Unlike most other applications with the DFTB3/3OB model, an auxiliary set of atomic multipoles is constructed from the underlying DFTB3 density matrix which is used to interact the solute with the solvent response surface. The resulting method is variational, produces smooth energies, and has analytic gradients. As a baseline, a conventional SCOSMO model with fixed radii is also parameterized. The SCOSMO and VR-SCOSMO models shown have comparable accuracy in reproducing neutral-molecule absolute solvation free energies; however, the VR-SCOSMO model is shown to reduce the mean unsigned errors (MUEs) of ionic compounds by half (about 2-3 kcal/mol). The VR-SCOSMO model presents similar accuracy as a charge-dependent Poisson-Boltzmann model introduced by Hou et al. [J. Chem. Theory Comput. 6, 2303 (2010)]. VR-SCOSMO is then used to examine the hydrolysis of trimethylphosphate and seven other phosphoryl transesterification reactions with different leaving groups. Two-dimensional energy landscapes are constructed for these reactions and calculated barriers are compared to those obtained from ab initio polarizable continuum calculations and experiment. Results of the VR-SCOSMO model are in good agreement in both cases, capturing the rate-limiting reaction barrier and the nature of the transition state.

  11. Modelling and simulating reaction-diffusion systems using coloured Petri nets.

    PubMed

    Liu, Fei; Blätke, Mary-Ann; Heiner, Monika; Yang, Ming

    2014-10-01

    Reaction-diffusion systems often play an important role in systems biology when developmental processes are involved. Traditional methods of modelling and simulating such systems require substantial prior knowledge of mathematics and/or simulation algorithms. Such skills may impose a challenge for biologists, when they are not equally well-trained in mathematics and computer science. Coloured Petri nets as a high-level and graphical language offer an attractive alternative, which is easily approachable. In this paper, we investigate a coloured Petri net framework integrating deterministic, stochastic and hybrid modelling formalisms and corresponding simulation algorithms for the modelling and simulation of reaction-diffusion processes that may be closely coupled with signalling pathways, metabolic reactions and/or gene expression. Such systems often manifest multiscaleness in time, space and/or concentration. We introduce our approach by means of some basic diffusion scenarios, and test it against an established case study, the Brusselator model. Copyright © 2014 Elsevier Ltd. All rights reserved.
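    As a point of reference for the Brusselator case study mentioned above, a minimal deterministic (well-stirred, no diffusion) simulation of its kinetics can be written directly from the rate equations. This is a generic explicit-Euler sketch, not the coloured-Petri-net implementation the paper describes.

```python
def brusselator_step(x, y, A=1.0, B=3.0, dt=1e-3):
    """One explicit-Euler step of the Brusselator rate equations
    dx/dt = A + x^2 y - (B + 1) x,   dy/dt = B x - x^2 y."""
    dx = A + x * x * y - (B + 1.0) * x
    dy = B * x - x * x * y
    return x + dt * dx, y + dt * dy

# For B > 1 + A^2 the fixed point (A, B/A) is unstable and the
# trajectory settles onto a limit cycle (sustained oscillations).
x, y = 1.5, 1.5
for _ in range(50000):
    x, y = brusselator_step(x, y)
```

    The coloured Petri net version additionally distributes such kinetics over a spatial grid of places, with diffusion represented as transitions between neighbouring places.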

  12. The effects on Titan atmospheric modeling by variable molecular reaction rates
  12. The effects on Titan atmospheric modeling by variable molecular reaction rates

    NASA Astrophysics Data System (ADS)

    Hamel, Mark D.

    The main effort of this thesis is to study the production and loss of molecular ions in the ionosphere of Saturn's largest moon Titan. Titan's atmosphere is subject to complex photochemical processes that can lead to the production of higher order hydrocarbons and nitriles. Ion-molecule chemistry plays an important role in this process but remains poorly understood. In particular, current models that simulate the photochemistry of Titan's atmosphere overpredict the abundance of the ionosphere's main ions, suggesting a flaw in the modeling process. The objective of this thesis is to determine which reactions are most important for the production and loss of the two primary ions, C2H5+ and HCNH+, and to assess the impact of uncertainty in the reaction rates on the production and loss of these ions. In reviewing the literature, there is contention about which reactions are really necessary to capture what is occurring in the atmosphere. Approximately seven hundred reactions are included in the model used in this discussion (INT16). This thesis studies which reactions are fundamental to the atmospheric processes in Titan's upper atmosphere, and also the reactions that occur at the lower boundary of the ionosphere, which are used to set a baseline molecular density for all species reflecting what is expected at those altitudes on Titan. This research was conducted by evaluating reaction rates and cross sections available in the scientific literature and by conducting model simulations of the photochemistry of Titan's atmosphere under a range of conditions constrained by those literature sources.

  13. Modeling Interfacial Glass-Water Reactions: Recent Advances and Current Limitations

    DOE PAGES

    Pierce, Eric M.; Frugier, Pierre; Criscenti, Louise J.; ...

    2014-07-12

    Describing the reactions that occur at the glass-water interface and control the development of the altered layer constitutes one of the main scientific challenges impeding existing models from providing accurate radionuclide release estimates. Radionuclide release estimates are a critical component of the safety basis for geologic repositories. The altered layer (i.e., amorphous hydrated surface layer and crystalline reaction products) represents a complex region, both physically and chemically, sandwiched between two distinct boundaries: the pristine glass surface at the innermost interface and aqueous solution at the outermost interface. Computational models, spanning different length and time scales, are currently being developed to improve our understanding of this complex and dynamic process, with the goal of accurately describing the pore-scale changes that occur as the system evolves. These modeling approaches include geochemical simulations [i.e., classical reaction path simulations and glass reactivity in allowance for alteration layer (GRAAL) simulations], Monte Carlo simulations, and Molecular Dynamics methods. Finally, in this manuscript, we discuss the advances and limitations of each modeling approach placed in the context of the glass-water reaction and how collectively these approaches provide insights into the mechanisms that control the formation and evolution of altered layers.

  14. The Time Course of Incremental Word Processing during Chinese Reading

    ERIC Educational Resources Information Center

    Zhou, Junyi; Ma, Guojie; Li, Xingshan; Taft, Marcus

    2018-01-01

    In the current study, we report two eye movement experiments investigating how Chinese readers process incremental words during reading. These are words where some of the component characters constitute another word (an embedded word). In two experiments, eye movements were monitored while the participants read sentences with incremental words…

  15. 26 CFR 1.41-8T - Alternative incremental credit (temporary).

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 1 2010-04-01 2010-04-01 true Alternative incremental credit (temporary). 1.41... INCOME TAXES Credits Against Tax § 1.41-8T Alternative incremental credit (temporary). (a) [Reserved] For... alternative simplified credit (ASC) and attaches the completed form to the taxpayer's timely filed (including...

  16. Graphical assessment of incremental value of novel markers in prediction models: From statistical to decision analytical perspectives.

    PubMed

    Steyerberg, Ewout W; Vedder, Moniek M; Leening, Maarten J G; Postmus, Douwe; D'Agostino, Ralph B; Van Calster, Ben; Pencina, Michael J

    2015-07-01

    New markers may improve prediction of diagnostic and prognostic outcomes. We aimed to review options for graphical display and summary measures to assess the predictive value of markers over standard, readily available predictors. We illustrated various approaches using previously published data on 3264 participants from the Framingham Heart Study, where 183 developed coronary heart disease (10-year risk 5.6%). We considered performance measures for the incremental value of adding HDL cholesterol to a prediction model. An initial assessment may consider statistical significance (HR = 0.65, 95% confidence interval 0.53 to 0.80; likelihood ratio p < 0.001), and distributions of predicted risks (densities or box plots) with various summary measures. A range of decision thresholds is considered in predictiveness and receiver operating characteristic curves, where the area under the curve (AUC) increased from 0.762 to 0.774 by adding HDL. We can furthermore focus on reclassification of participants with and without an event in a reclassification graph, with the continuous net reclassification improvement (NRI) as a summary measure. When we focus on one particular decision threshold, the changes in sensitivity and specificity are central. We propose a net reclassification risk graph, which allows us to focus on the number of reclassified persons and their event rates. Summary measures include the binary AUC, the two-category NRI, and decision analytic variants such as the net benefit (NB). Various graphs and summary measures can be used to assess the incremental predictive value of a marker. Important insights for impact on decision making are provided by a simple graph for the net reclassification risk. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
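    Two of the summary measures discussed (the binary AUC and the two-category NRI at a single decision threshold) are simple enough to compute directly. The functions below are textbook-style sketches, not the authors' software; the names and the convention that "reclassified up" means crossing the threshold from below are assumptions.

```python
def auc(scores_pos, scores_neg):
    """Binary AUC (c-statistic): probability that a random event
    scores higher than a random non-event, counting ties as 1/2."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

def two_category_nri(old_risk, new_risk, events, threshold):
    """Two-category NRI at one threshold: net proportion of events
    reclassified upward plus net proportion of non-events
    reclassified downward."""
    up_e = down_e = up_ne = down_ne = 0
    n_e = sum(events)
    n_ne = len(events) - n_e
    for o, n, e in zip(old_risk, new_risk, events):
        up = o < threshold <= n       # crossed the threshold upward
        down = n < threshold <= o     # crossed the threshold downward
        if e:
            up_e += up
            down_e += down
        else:
            up_ne += up
            down_ne += down
    return (up_e - down_e) / n_e + (down_ne - up_ne) / n_ne
```

    The net reclassification risk graph proposed in the article plots the counts behind these same terms (who moved across the threshold, and their event rates) rather than collapsing them into one number.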

  17. Soft tissue deformation modelling through neural dynamics-based reaction-diffusion mechanics.

    PubMed

    Zhang, Jinao; Zhong, Yongmin; Gu, Chengfan

    2018-05-30

    Soft tissue deformation modelling forms the basis of development of surgical simulation, surgical planning and robotic-assisted minimally invasive surgery. This paper presents a new methodology for modelling of soft tissue deformation based on reaction-diffusion mechanics via neural dynamics. The potential energy stored in soft tissues due to a mechanical load to deform tissues away from their rest state is treated as the equivalent transmembrane potential energy, and it is distributed in the tissue masses in the manner of reaction-diffusion propagation of nonlinear electrical waves. The reaction-diffusion propagation of mechanical potential energy and nonrigid mechanics of motion are combined to model soft tissue deformation and its dynamics, both of which are further formulated as the dynamics of cellular neural networks to achieve real-time computational performance. The proposed methodology is implemented with a haptic device for interactive soft tissue deformation with force feedback. Experimental results demonstrate that the proposed methodology exhibits nonlinear force-displacement relationship for nonlinear soft tissue deformation. Homogeneous, anisotropic and heterogeneous soft tissue material properties can be modelled through the inherent physical properties of mass points. Graphical abstract Soft tissue deformation modelling with haptic feedback via neural dynamics-based reaction-diffusion mechanics.

  18. Formal modeling of a system of chemical reactions under uncertainty.

    PubMed

    Ghosh, Krishnendu; Schlipf, John

    2014-10-01

    We describe a novel formalism representing a system of chemical reactions, with imprecise rates of reactions and concentrations of chemicals, and describe a model reduction method, pruning, based on the chemical properties. We present two algorithms, midpoint approximation and interval approximation, for construction of efficient model abstractions with uncertainty in data. We evaluate computational feasibility by posing queries in computation tree logic (CTL) on a prototype of extracellular-signal-regulated kinase (ERK) pathway.
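    The two approximation schemes named in the abstract, midpoint and interval approximation, can be illustrated on a single mass-action rate expression with an imprecise rate constant and imprecise concentrations. This is a generic interval-arithmetic sketch with hypothetical function names, not the authors' formalism.

```python
def imul(a, b):
    """Product of two intervals given as (lo, hi) pairs."""
    p = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(p), max(p))

def rate_interval(k, concs):
    """Interval approximation of a mass-action rate k*c1*c2*...,
    with the rate constant and every concentration imprecise."""
    out = k
    for c in concs:
        out = imul(out, c)
    return out

def rate_midpoint(k, concs):
    """Midpoint approximation: evaluate the same rate with every
    interval collapsed to its midpoint."""
    mid = lambda iv: 0.5 * (iv[0] + iv[1])
    r = mid(k)
    for c in concs:
        r *= mid(c)
    return r
```

    The interval form yields guaranteed bounds at the cost of width blow-up; the midpoint form yields a single cheap estimate, which is the trade-off the two algorithms in the paper navigate.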

  19. Nuclear cycler: An incremental approach to the deflection of asteroids

    NASA Astrophysics Data System (ADS)

    Vasile, Massimiliano; Thiry, Nicolas

    2016-04-01

    This paper introduces a novel deflection approach based on nuclear explosions: the nuclear cycler. The idea is to combine the effectiveness of nuclear explosions with the controllability and redundancy offered by slow push methods within an incremental deflection strategy. The paper will present an extended model for single nuclear stand-off explosions in the proximity of elongated ellipsoidal asteroids, and a family of natural formation orbits that allows the spacecraft to deploy multiple bombs while being shielded by the asteroid during the detonation.

  20. 21 CFR 874.1070 - Short increment sensitivity index (SISI) adapter.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Short increment sensitivity index (SISI) adapter. 874.1070 Section 874.1070 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN... short periodic sound pulses in specific small decibel increments that are intended to be superimposed on...

  1. 21 CFR 874.1070 - Short increment sensitivity index (SISI) adapter.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Short increment sensitivity index (SISI) adapter. 874.1070 Section 874.1070 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN... short periodic sound pulses in specific small decibel increments that are intended to be superimposed on...

  2. Boundary value problems with incremental plasticity in granular media

    NASA Technical Reports Server (NTRS)

    Chung, T. J.; Lee, J. K.; Costes, N. C.

    1974-01-01

    Discussion of the critical state concept in terms of an incremental theory of plasticity in granular (soil) media, and formulation of the governing equations which are convenient for a computational scheme using the finite element method. It is shown that the critical state concept with its representation by the classical incremental theory of plasticity can provide a powerful means for solving a wide variety of boundary value problems in soil media.

  3. Regulating recognition decisions through incremental reinforcement learning.

    PubMed

    Han, Sanghoon; Dobbins, Ian G

    2009-06-01

    Does incremental reinforcement learning influence recognition memory judgments? We examined this question by subtly altering the relative validity or availability of feedback in order to differentially reinforce old or new recognition judgments. Experiment 1 probabilistically and incorrectly indicated that either misses or false alarms were correct in the context of feedback that was otherwise accurate. Experiment 2 selectively withheld feedback for either misses or false alarms in the context of feedback that was otherwise present. Both manipulations caused prominent shifts of recognition memory decision criteria that remained for considerable periods even after feedback had been altogether removed. Overall, these data demonstrate that incremental reinforcement-learning mechanisms influence the degree of caution subjects exercise when evaluating explicit memories.
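    The criterion-shift account can be caricatured as an incremental update rule in which confirmed responses are reinforced. The sketch below is a deliberately minimal illustration under that assumption (the function name and learning rate are hypothetical, not from the study); by convention here, lowering the criterion makes "old" responses more likely.

```python
def update_criterion(criterion, said_old, feedback_correct, lr=0.05):
    """One incremental reinforcement step for a recognition criterion.
    A confirmed response shifts the criterion so that response becomes
    more likely next time (lower criterion -> more 'old' responses);
    a disconfirmed response shifts it the other way."""
    if feedback_correct:
        return criterion - lr if said_old else criterion + lr
    return criterion + lr if said_old else criterion - lr
```

    Biasing the feedback (e.g., falsely confirming "new" responses to old items) then produces exactly the kind of slow, persistent criterion drift the experiments report.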

  4. A mesoscopic reaction rate model for shock initiation of multi-component PBX explosives.

    PubMed

    Liu, Y R; Duan, Z P; Zhang, Z Y; Ou, Z C; Huang, F L

    2016-11-05

    The primary goal of this research is to develop a three-term mesoscopic reaction rate model that consists of a hot-spot ignition, a low-pressure slow burning and a high-pressure fast reaction terms for shock initiation of multi-component Plastic Bonded Explosives (PBX). Thereinto, based on the DZK hot-spot model for a single-component PBX explosive, the hot-spot ignition term as well as its reaction rate is obtained through a "mixing rule" of the explosive components; new expressions for both the low-pressure slow burning term and the high-pressure fast reaction term are also obtained by establishing the relationships between the reaction rate of the multi-component PBX explosive and that of its explosive components, based on the low-pressure slow burning term and the high-pressure fast reaction term of a mesoscopic reaction rate model. Furthermore, for verification, the new reaction rate model is incorporated into the DYNA2D code to simulate numerically the shock initiation process of the PBXC03 and the PBXC10 multi-component PBX explosives, and the numerical results of the pressure histories at different Lagrange locations in explosive are found to be in good agreements with previous experimental data. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Revisiting the Incremental Effects of Context on Word Processing: Evidence from Single-Word Event-Related Brain Potentials

    PubMed Central

    Payne, Brennan R.; Lee, Chia-Lin; Federmeier, Kara D.

    2015-01-01

    The amplitude of the N400— an event-related potential (ERP) component linked to meaning processing and initial access to semantic memory— is inversely related to the incremental build-up of semantic context over the course of a sentence. We revisited the nature and scope of this incremental context effect, adopting a word-level linear mixed-effects modeling approach, with the goal of probing the continuous and incremental effects of semantic and syntactic context on multiple aspects of lexical processing during sentence comprehension (i.e., effects of word frequency and orthographic neighborhood). First, we replicated the classic word position effect at the single-word level: open-class words showed reductions in N400 amplitude with increasing word position in semantically congruent sentences only. Importantly, we found that accruing sentence context had separable influences on the effects of frequency and neighborhood on the N400. Word frequency effects were reduced with accumulating semantic context. However, orthographic neighborhood was unaffected by accumulating context, showing robust effects on the N400 across all words, even within congruent sentences. Additionally, we found that N400 amplitudes to closed-class words were reduced with incrementally constraining syntactic context in sentences that provided only syntactic constraints. Taken together, our findings indicate that modeling word-level variability in ERPs reveals mechanisms by which different sources of information simultaneously contribute to the unfolding neural dynamics of comprehension. PMID:26311477

  6. Revisiting the incremental effects of context on word processing: Evidence from single-word event-related brain potentials.

    PubMed

    Payne, Brennan R; Lee, Chia-Lin; Federmeier, Kara D

    2015-11-01

    The amplitude of the N400-an event-related potential (ERP) component linked to meaning processing and initial access to semantic memory-is inversely related to the incremental buildup of semantic context over the course of a sentence. We revisited the nature and scope of this incremental context effect, adopting a word-level linear mixed-effects modeling approach, with the goal of probing the continuous and incremental effects of semantic and syntactic context on multiple aspects of lexical processing during sentence comprehension (i.e., effects of word frequency and orthographic neighborhood). First, we replicated the classic word-position effect at the single-word level: Open-class words showed reductions in N400 amplitude with increasing word position in semantically congruent sentences only. Importantly, we found that accruing sentence context had separable influences on the effects of frequency and neighborhood on the N400. Word frequency effects were reduced with accumulating semantic context. However, orthographic neighborhood was unaffected by accumulating context, showing robust effects on the N400 across all words, even within congruent sentences. Additionally, we found that N400 amplitudes to closed-class words were reduced with incrementally constraining syntactic context in sentences that provided only syntactic constraints. Taken together, our findings indicate that modeling word-level variability in ERPs reveals mechanisms by which different sources of information simultaneously contribute to the unfolding neural dynamics of comprehension. © 2015 Society for Psychophysiological Research.

  7. Diffusion-controlled reactions modeling in Geant4-DNA

    NASA Astrophysics Data System (ADS)

    Karamitros, M.; Luan, S.; Bernal, M. A.; Allison, J.; Baldacchino, G.; Davidkova, M.; Francis, Z.; Friedland, W.; Ivantchenko, V.; Ivantchenko, A.; Mantero, A.; Nieminem, P.; Santin, G.; Tran, H. N.; Stepan, V.; Incerti, S.

    2014-10-01

    Context Under irradiation, a biological system undergoes a cascade of chemical reactions that can lead to an alteration of its normal operation. There are different types of radiation and many competing reactions. As a result the kinetics of chemical species is extremely complex. The simulation becomes then a powerful tool which, by describing the basic principles of chemical reactions, can reveal the dynamics of the macroscopic system. To understand the dynamics of biological systems under radiation, since the 80s there have been on-going efforts carried out by several research groups to establish a mechanistic model that consists in describing all the physical, chemical and biological phenomena following the irradiation of single cells. This approach is generally divided into a succession of stages that follow each other in time: (1) the physical stage, where the ionizing particles interact directly with the biological material; (2) the physico-chemical stage, where the targeted molecules release their energy by dissociating, creating new chemical species; (3) the chemical stage, where the new chemical species interact with each other or with the biomolecules; (4) the biological stage, where the repairing mechanisms of the cell come into play. This article focuses on the modeling of the chemical stage. Method This article presents a general method of speeding-up chemical reaction simulations in fluids based on the Smoluchowski equation and Monte-Carlo methods, where all molecules are explicitly simulated and the solvent is treated as a continuum. The model describes diffusion-controlled reactions. This method has been implemented in Geant4-DNA. The keys to the new algorithm include: (1) the combination of a method to compute time steps dynamically with a Brownian bridge process to account for chemical reactions, which avoids costly fixed time step simulations; (2) a k-d tree data structure for quickly locating, for a given molecule, its closest reactants. The
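    Two of the algorithmic keys listed above, dynamic time steps and nearest-reactant lookup, can be sketched compactly. The code below is an illustration, not Geant4-DNA (which is C++ and uses a k-d tree where this sketch scans all pairs by brute force); all names and the safety-factor heuristic are assumptions.

```python
import math
import random

def nearest_pair_distance(positions):
    """Closest distance between any two molecules (brute force here;
    a k-d tree makes this lookup fast in a real implementation)."""
    best = float("inf")
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            best = min(best, math.dist(positions[i], positions[j]))
    return best

def dynamic_time_step(positions, D, safety=0.1):
    """Pick dt so the mean-square 3-D diffusion distance 6*D*dt is a
    small fraction of the squared closest inter-molecular distance."""
    d = nearest_pair_distance(positions)
    return safety * d * d / (6.0 * D)

def brownian_step(pos, D, dt, rng):
    """Free-diffusion displacement over dt: independent Gaussian moves
    with sigma = sqrt(2*D*dt) on each axis."""
    s = math.sqrt(2.0 * D * dt)
    return tuple(c + rng.gauss(0.0, s) for c in pos)

pts = [(0.0, 0.0, 0.0), (3.0, 4.0, 0.0), (10.0, 10.0, 10.0)]
dt = dynamic_time_step(pts, D=1.0)
moved = brownian_step(pts[0], 1.0, dt, random.Random(0))
```

    Large time steps when molecules are far apart, small ones when a reaction is imminent: that is what avoids the cost of fixed-time-step simulation, with the Brownian bridge correcting for encounters that occur within a step.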

  8. Diffusion-controlled reactions modeling in Geant4-DNA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karamitros, M. (CNRS, INCIA, UMR 5287, F-33400 Talence); Luan, S.

    2014-10-01

    Context Under irradiation, a biological system undergoes a cascade of chemical reactions that can lead to an alteration of its normal operation. There are different types of radiation and many competing reactions. As a result the kinetics of chemical species is extremely complex. The simulation becomes then a powerful tool which, by describing the basic principles of chemical reactions, can reveal the dynamics of the macroscopic system. To understand the dynamics of biological systems under radiation, since the 80s there have been on-going efforts carried out by several research groups to establish a mechanistic model that consists in describing all the physical, chemical and biological phenomena following the irradiation of single cells. This approach is generally divided into a succession of stages that follow each other in time: (1) the physical stage, where the ionizing particles interact directly with the biological material; (2) the physico-chemical stage, where the targeted molecules release their energy by dissociating, creating new chemical species; (3) the chemical stage, where the new chemical species interact with each other or with the biomolecules; (4) the biological stage, where the repairing mechanisms of the cell come into play. This article focuses on the modeling of the chemical stage. Method This article presents a general method of speeding-up chemical reaction simulations in fluids based on the Smoluchowski equation and Monte-Carlo methods, where all molecules are explicitly simulated and the solvent is treated as a continuum. The model describes diffusion-controlled reactions. This method has been implemented in Geant4-DNA. The keys to the new algorithm include: (1) the combination of a method to compute time steps dynamically with a Brownian bridge process to account for chemical reactions, which avoids costly fixed time step simulations; (2) a k-d tree data structure for quickly locating, for a given molecule, its closest

  9. An Improved Incremental Learning Approach for KPI Prognosis of Dynamic Fuel Cell System.

    PubMed

    Yin, Shen; Xie, Xiaochen; Lam, James; Cheung, Kie Chung; Gao, Huijun

    2016-12-01

    The key performance indicator (KPI) has an important practical value with respect to the product quality and economic benefits for modern industry. To cope with the KPI prognosis issue under nonlinear conditions, this paper presents an improved incremental learning approach based on available process measurements. The proposed approach takes advantage of the algorithm overlapping of locally weighted projection regression (LWPR) and partial least squares (PLS), implementing the PLS-based prognosis in each locally linear model produced by the incremental learning process of LWPR. The global prognosis results including KPI prediction and process monitoring are obtained from the corresponding normalized weighted means of all the local models. The statistical indicators for prognosis are enhanced as well by the design of novel KPI-related and KPI-unrelated statistics with suitable control limits for non-Gaussian data. For application-oriented purpose, the process measurements from real datasets of a proton exchange membrane fuel cell system are employed to demonstrate the effectiveness of KPI prognosis. The proposed approach is finally extended to a long-term voltage prediction for potential reference of further fuel cell applications.
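    The global prognosis step described above (a normalized weighted mean over local models) can be sketched for a one-dimensional input. This is an LWPR-style blend in miniature, not the authors' implementation; the names and the Gaussian receptive-field form are assumptions.

```python
import math

def rbf_weight(x, center, width):
    """Gaussian receptive-field activation of one local model."""
    return math.exp(-((x - center) ** 2) / (2.0 * width ** 2))

def global_prediction(x, local_models, width=1.0):
    """Normalized weighted mean of local linear predictions, the way an
    LWPR-style learner blends its receptive fields. Each local model is
    a (center, slope, intercept) triple fitted around its center."""
    num = den = 0.0
    for center, slope, intercept in local_models:
        w = rbf_weight(x, center, width)
        num += w * (slope * (x - center) + intercept)
        den += w
    return num / den

# Two local models that each locally fit y = x around their centers.
models = [(0.0, 1.0, 0.0), (2.0, 1.0, 2.0)]
```

    In the paper, each local model additionally carries a PLS regression for the KPI, and the same normalized weights combine both the predictions and the monitoring statistics.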

  10. Disrupting incrementalism in health care innovation.

    PubMed

    Soleimani, Farzad; Zenios, Stefanos

    2011-08-01

    To build enabling innovation frameworks for health care entrepreneurs to better identify, evaluate, and pursue entrepreneurial opportunities. Powerful frameworks have been developed to enable entrepreneurs and investors to identify which opportunity areas are worth pursuing and which start-up ideas have the potential to succeed. These frameworks, however, have not been clearly defined and interpreted for innovations in health care. Having a better understanding of the process of innovation in health care allows physician entrepreneurs to innovate more successfully. A review of academic literature was conducted. Concepts and frameworks related to technology innovation were analyzed. A new set of health care specific frameworks was developed. These frameworks were then applied to innovations in various health care subsectors. Health care entrepreneurs would greatly benefit from distinguishing between incremental and disruptive innovations. The US regulatory and reimbursement systems favor incrementalism with a greater chance of success for established players. Small companies and individual groups, however, are more likely to thrive if they adopt a disruptive strategy. Disruption in health care occurs through various mechanisms as detailed in this article. While the main mechanism of disruption might vary across different health care subsectors, it is shown that disruptive innovations consistently require a component of contrarian interpretation to guarantee considerable payoff. If health care entrepreneurs choose to adopt an incrementalist approach, they need to build the risk of disruption into their models and also ascertain that they have a very strong intellectual property (IP) position to weather competition from established players. On the contrary, if they choose to pursue disruption in the market, although the competition will be less severe, they need to recognize that the regulatory and reimbursement hurdles are going to be very high. Thus, they would benefit

  11. 40 CFR 60.2830 - When must I submit the notifications of achievement of increments of progress?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Commenced Construction On or Before November 30, 1999 Model Rule-Air Curtain Incinerators § 60.2830 When... increments of progress must be postmarked no later than 10 business days after the compliance date for the...

  12. Modeling shock-driven reaction in low density PMDI foam

    NASA Astrophysics Data System (ADS)

    Brundage, Aaron; Alexander, C. Scott; Reinhart, William; Peterson, David

    Shock experiments on low density polyurethane foams reveal evidence of reaction at low impact pressures. However, these reaction thresholds are not evident over the low pressures reported for historical Hugoniot data of highly distended polyurethane at densities below 0.1 g/cc. To fill this gap, impact data given in a companion paper for polymethylene diisocyanate (PMDI) foam with a density of 0.087 g/cc were acquired for model validation. An equation of state (EOS) was developed to predict the shock response of these highly distended materials over the full range of impact conditions representing compaction of the inert material, low-pressure decomposition, and compression of the reaction products. A tabular SESAME EOS of the reaction products was generated using the JCZS database in the TIGER equilibrium code. In particular, the Arrhenius Burn EOS, a two-state model which transitions from an unreacted to a reacted state using single step Arrhenius kinetics, as implemented in the shock physics code CTH, was modified to include a statistical distribution of states. Hence, a single EOS is presented that predicts the onset to reaction due to shock loading in PMDI-based polyurethane foams. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's NNSA under Contract DE-AC04-94AL85000.
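
The single-step Arrhenius kinetics underlying the Arrhenius Burn model described above can be sketched as a forward-Euler update of the reaction progress variable. This is a hedged illustration only: `Z` and `Ea` are placeholder values, not fitted PMDI parameters, and the statistical distribution of states added in the paper is omitted.

```python
import math

def arrhenius_progress(T, dt, steps, Z=1.0e6, Ea=120e3, R=8.314, lam0=0.0):
    """Integrate the single-step Arrhenius rate law
    dlam/dt = (1 - lam) * Z * exp(-Ea / (R * T))
    with forward Euler at fixed temperature T (illustrative constants)."""
    lam = lam0
    k = Z * math.exp(-Ea / (R * T))  # Arrhenius rate constant at T
    for _ in range(steps):
        lam += dt * (1.0 - lam) * k
        lam = min(lam, 1.0)          # progress variable is capped at 1
    return lam
```

As expected for Arrhenius kinetics, the progress after a fixed time rises steeply with temperature, which is what produces a reaction threshold under shock loading.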

  13. The Dark Side of Malleability: Incremental Theory Promotes Immoral Behaviors

    PubMed Central

    Huang, Niwen; Zuo, Shijiang; Wang, Fang; Cai, Pan; Wang, Fengxiang

    2017-01-01

    Implicit theories drastically affect an individual’s processing of social information, decision making, and action. The present research focuses on whether individuals who hold the implicit belief that people’s moral character is fixed (entity theorists) and individuals who hold the implicit belief that people’s moral character is malleable (incremental theorists) make different choices when facing a moral decision. Incremental theorists are less likely to make the fundamental attribution error (FAE), rarely make moral judgment based on traits and show more tolerance to immorality, relative to entity theorists, which might decrease the possibility of undermining the self-image when they engage in immoral behaviors, and thus we posit that incremental beliefs facilitate immorality. Four studies were conducted to explore the effect of these two types of implicit theories on immoral intention or practice. The association between implicit theories and immoral behavior was preliminarily examined from the observer perspective in Study 1, and the results showed that people tended to associate immoral behaviors (including everyday immoral intention and environmental destruction) with an incremental theorist rather than an entity theorist. Then, the relationship was further replicated from the actor perspective in Studies 2–4. In Study 2, implicit theories, which were measured, positively predicted the degree of discrimination against carriers of the hepatitis B virus. In Study 3, implicit theories were primed through reading articles, and the participants in the incremental condition showed more cheating than those in the entity condition. In Study 4, implicit theories were primed through a new manipulation, and the participants in the unstable condition (primed incremental theory) showed more discrimination than those in the other three conditions. Taken together, the results of our four studies were consistent with our hypotheses. PMID:28824517

  14. The Dark Side of Malleability: Incremental Theory Promotes Immoral Behaviors.

    PubMed

    Huang, Niwen; Zuo, Shijiang; Wang, Fang; Cai, Pan; Wang, Fengxiang

    2017-01-01

    Implicit theories drastically affect an individual's processing of social information, decision making, and action. The present research focuses on whether individuals who hold the implicit belief that people's moral character is fixed (entity theorists) and individuals who hold the implicit belief that people's moral character is malleable (incremental theorists) make different choices when facing a moral decision. Incremental theorists are less likely to make the fundamental attribution error (FAE), rarely make moral judgment based on traits and show more tolerance to immorality, relative to entity theorists, which might decrease the possibility of undermining the self-image when they engage in immoral behaviors, and thus we posit that incremental beliefs facilitate immorality. Four studies were conducted to explore the effect of these two types of implicit theories on immoral intention or practice. The association between implicit theories and immoral behavior was preliminarily examined from the observer perspective in Study 1, and the results showed that people tended to associate immoral behaviors (including everyday immoral intention and environmental destruction) with an incremental theorist rather than an entity theorist. Then, the relationship was further replicated from the actor perspective in Studies 2-4. In Study 2, implicit theories, which were measured, positively predicted the degree of discrimination against carriers of the hepatitis B virus. In Study 3, implicit theories were primed through reading articles, and the participants in the incremental condition showed more cheating than those in the entity condition. In Study 4, implicit theories were primed through a new manipulation, and the participants in the unstable condition (primed incremental theory) showed more discrimination than those in the other three conditions. Taken together, the results of our four studies were consistent with our hypotheses.

  15. Dental caries increments and related factors in children with type 1 diabetes mellitus.

    PubMed

    Siudikiene, J; Machiulskiene, V; Nyvad, B; Tenovuo, J; Nedzelskiene, I

    2008-01-01

    The aim of this study was to analyse possible associations between caries increments and selected caries determinants in children with type 1 diabetes mellitus and their age- and sex-matched non-diabetic controls, over 2 years. A total of 63 (10-15 years old) diabetic and non-diabetic pairs were examined for dental caries, oral hygiene and salivary factors. Salivary flow rates, buffer effect, concentrations of mutans streptococci, lactobacilli, yeasts, total IgA and IgG, protein, albumin, amylase and glucose were analysed. Means of 2-year decayed/missing/filled surface (DMFS) increments were similar in diabetics and their controls. Over the study period, both unstimulated and stimulated salivary flow rates remained significantly lower in diabetic children compared to controls. No differences were observed in the counts of lactobacilli, mutans streptococci or yeast growth during follow-up, whereas salivary IgA, protein and glucose concentrations were higher in diabetics than in controls throughout the 2-year period. Multivariable linear regression analysis showed that children with higher 2-year DMFS increments were older at baseline and had higher salivary glucose concentrations than children with lower 2-year DMFS increments. Likewise, higher 2-year DMFS increments in diabetics versus controls were associated with greater increments in salivary glucose concentrations in diabetics. Higher increments in active caries lesions in diabetics versus controls were associated with greater increments of dental plaque and greater increments of salivary albumin. Our results suggest that, in addition to dental plaque as a common caries risk factor, diabetes-induced changes in salivary glucose and albumin concentrations are indicative of caries development among diabetics. Copyright 2008 S. Karger AG, Basel.

  16. 40 CFR 60.2815 - What are my requirements for meeting increments of progress and achieving final compliance?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY... Units Model Rule-Air Curtain Incinerators § 60.2815 What are my requirements for meeting increments of...

  17. 40 CFR 60.2585 - What must I include in the notifications of achievement of increments of progress?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY... Units Model Rule-Increments of Progress § 60.2585 What must I include in the notifications of...

  18. 40 CFR 60.2585 - What must I include in the notifications of achievement of increments of progress?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY... Units Model Rule-Increments of Progress § 60.2585 What must I include in the notifications of...

  19. 40 CFR 60.2585 - What must I include in the notifications of achievement of increments of progress?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY... Units Model Rule-Increments of Progress § 60.2585 What must I include in the notifications of...

  20. Fluid dynamic modeling of nano-thermite reactions

    NASA Astrophysics Data System (ADS)

    Martirosyan, Karen S.; Zyskin, Maxim; Jenkins, Charles M.; Horie, Yasuyuki

    2014-03-01

    This paper presents a direct numerical method based on gas dynamic equations to predict pressure evolution during the discharge of nanoenergetic materials. The direct numerical method models reflections of the shock waves from the reactor walls, which generate pressure-time fluctuations. The gas pressure predictions are consistent with the experimental evidence and with estimates based on the self-similar solution. Artificial viscosity provides sufficient smoothing of the shock wave discontinuity for the numerical procedure. The direct numerical method is more computationally demanding but more flexible than the self-similar solution; in particular, it allows study of a shock wave in its early stage of reaction and investigation of "slower" reactions, which may produce weaker shock waves. Moreover, numerical results indicate that the peak pressure is not very sensitive to initial density and reaction time, provided that all the material reacts well before the shock wave arrives at the end of the reactor.
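
The artificial-viscosity smoothing mentioned above is commonly implemented in the von Neumann-Richtmyer form, with quadratic and linear terms active only in compression. The sketch below uses hypothetical coefficients `c_q` and `c_l`; it illustrates the standard technique, not the specific implementation used in the paper.

```python
def artificial_viscosity(rho, du, sound_speed, c_q=2.0, c_l=0.1):
    """Von Neumann-Richtmyer-style artificial viscosity pressure term.
    rho: local density; du: velocity jump across the cell; nonzero
    only in compression (du < 0). Coefficients are illustrative."""
    if du >= 0.0:
        return 0.0  # no dissipation in expansion
    return rho * (c_q * du * du + c_l * sound_speed * abs(du))
```

The term is added to the physical pressure in the momentum and energy updates, spreading the shock over a few cells so the finite-difference scheme remains stable.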

  1. Fluid dynamic modeling of nano-thermite reactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martirosyan, Karen S., E-mail: karen.martirosyan@utb.edu; Zyskin, Maxim; Jenkins, Charles M.

    2014-03-14

    This paper presents a direct numerical method based on gas dynamic equations to predict pressure evolution during the discharge of nanoenergetic materials. The direct numerical method models reflections of the shock waves from the reactor walls, which generate pressure-time fluctuations. The gas pressure predictions are consistent with the experimental evidence and with estimates based on the self-similar solution. Artificial viscosity provides sufficient smoothing of the shock wave discontinuity for the numerical procedure. The direct numerical method is more computationally demanding but more flexible than the self-similar solution; in particular, it allows study of a shock wave in its early stage of reaction and investigation of "slower" reactions, which may produce weaker shock waves. Moreover, numerical results indicate that the peak pressure is not very sensitive to initial density and reaction time, provided that all the material reacts well before the shock wave arrives at the end of the reactor.

  2. Increment contracts: southern experience and potential use in the Appalachians

    Treesearch

    Gary W. Zinn; Gary W. Miller

    1984-01-01

    Increment contracts are long-term timber management contracts in which landowners receive regular payments based on the average annual growth of wood their land is capable of producing. Increment contracts have been used on nearly 500,000 acres of private forests in the South. Southern experience suggests that several changes in the contract would improve its utility:...

  3. Testing the relations between impulsivity-related traits, suicidality, and nonsuicidal self-injury: a test of the incremental validity of the UPPS model.

    PubMed

    Lynam, Donald R; Miller, Joshua D; Miller, Drew J; Bornovalova, Marina A; Lejuez, C W

    2011-04-01

    Borderline personality disorder (BPD) has received significant attention as a predictor of suicidal behavior (SB) and nonsuicidal self-injury (NSSI). Despite significant promise, trait impulsivity has received less attention. Understanding the relations between impulsivity and SB and NSSI is confounded, unfortunately, by the heterogeneous nature of impulsivity. This study examined the relations among 4 personality pathways to impulsive behavior studied via the UPPS model of impulsivity and SB and NSSI in a residential sample of drug abusers (N = 76). In this study, we tested whether these 4 impulsivity-related traits (i.e., Negative Urgency, Sensation Seeking, Lack of Premeditation, and Lack of Perseverance) provide incremental validity in the statistical prediction of SB and NSSI above and beyond BPD; they do. We also tested whether BPD symptoms provide incremental validity in the prediction of SB and NSSI above and beyond these impulsivity-related traits; they do not. In addition to the main effects of Lack of Premeditation and Negative Urgency, we found evidence of a robust interaction between these 2 personality traits. The current results argue strongly for the consideration of these 2 impulsivity-related domains--alone and in interaction--when attempting to understand and predict SB and NSSI.
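
Incremental validity of the kind tested above is typically quantified as the gain in explained variance (ΔR²) when a predictor block is added to a baseline model. A minimal ordinary-least-squares sketch, with hypothetical helper names and toy data; this is the generic statistic, not the study's analysis code.

```python
import numpy as np

def r_squared(X, y):
    """OLS R-squared with an intercept column added (toy helper)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    sst = (y - y.mean()) @ (y - y.mean())
    return 1.0 - (resid @ resid) / sst

def incremental_r2(X_base, X_extra, y):
    """Delta R-squared when X_extra is added above and beyond X_base."""
    full = r_squared(np.column_stack([X_base, X_extra]), y)
    return full - r_squared(X_base, y)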

  4. A discrete model to study reaction-diffusion-mechanics systems.

    PubMed

    Weise, Louis D; Nash, Martyn P; Panfilov, Alexander V

    2011-01-01

    This article introduces a discrete reaction-diffusion-mechanics (dRDM) model to study the effects of deformation on reaction-diffusion (RD) processes. The dRDM framework employs a FitzHugh-Nagumo type RD model coupled to a mass-lattice model that undergoes finite deformations. The dRDM model describes a material whose elastic properties are described by a generalized Hooke's law for finite deformations (Seth material). Numerically, the dRDM approach combines a finite difference approach for the RD equations with a Verlet integration scheme for the equations of the mass-lattice system. Using this framework, results on self-organized pacemaking activity that had previously been found with a continuous RD-mechanics model were reproduced. Mechanisms that determine the period of pacemakers and its dependency on the medium size are identified. Finally, it is shown how the drift direction of pacemakers in RDM systems is related to the spatial distribution of deformation and curvature effects.
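
A minimal sketch of the two numerical ingredients named above, an explicit finite-difference FitzHugh-Nagumo step and a position-Verlet update. Parameters and the 1-D grid are illustrative assumptions; the paper's coupled lattice and Seth material law are not reproduced here.

```python
import numpy as np

def fhn_step(u, v, dt, dx, a=0.1, eps=0.01, D=1.0):
    """One explicit step of a FitzHugh-Nagumo RD system on a 1-D grid
    with crude no-flux boundaries (illustrative parameters)."""
    lap = (np.roll(u, 1) + np.roll(u, -1) - 2.0 * u) / dx**2
    lap[0] = (u[1] - u[0]) / dx**2
    lap[-1] = (u[-2] - u[-1]) / dx**2
    u_new = u + dt * (D * lap + u * (1.0 - u) * (u - a) - v)
    v_new = v + dt * eps * (u - v)
    return u_new, v_new

def verlet_step(x, x_prev, acc, dt):
    """Position-Verlet update for a mass-lattice node:
    x_{n+1} = 2 x_n - x_{n-1} + a * dt**2."""
    return 2.0 * x - x_prev + acc * dt**2
```

In the dRDM scheme these two updates alternate: the RD field sets active stresses on the lattice, and lattice deformation feeds back into the RD step.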

  5. A Discrete Model to Study Reaction-Diffusion-Mechanics Systems

    PubMed Central

    Weise, Louis D.; Nash, Martyn P.; Panfilov, Alexander V.

    2011-01-01

    This article introduces a discrete reaction-diffusion-mechanics (dRDM) model to study the effects of deformation on reaction-diffusion (RD) processes. The dRDM framework employs a FitzHugh-Nagumo type RD model coupled to a mass-lattice model that undergoes finite deformations. The dRDM model describes a material whose elastic properties are described by a generalized Hooke's law for finite deformations (Seth material). Numerically, the dRDM approach combines a finite difference approach for the RD equations with a Verlet integration scheme for the equations of the mass-lattice system. Using this framework, results on self-organized pacemaking activity that had previously been found with a continuous RD-mechanics model were reproduced. Mechanisms that determine the period of pacemakers and its dependency on the medium size are identified. Finally, it is shown how the drift direction of pacemakers in RDM systems is related to the spatial distribution of deformation and curvature effects. PMID:21804911

  6. Incrementally learning objects by touch: online discriminative and generative models for tactile-based recognition.

    PubMed

    Soh, Harold; Demiris, Yiannis

    2014-01-01

    Human beings not only possess the remarkable ability to distinguish objects through tactile feedback but are further able to improve upon recognition competence through experience. In this work, we explore tactile-based object recognition with learners capable of incremental learning. Using the sparse online infinite Echo-State Gaussian process (OIESGP), we propose and compare two novel discriminative and generative tactile learners that produce probability distributions over objects during object grasping/palpation. To enable iterative improvement, our online methods incorporate training samples as they become available. We also describe incremental unsupervised learning mechanisms, based on novelty scores and extreme value theory, when teacher labels are not available. We present experimental results for both supervised and unsupervised learning tasks using the iCub humanoid, with tactile sensors on its five-fingered anthropomorphic hand, and 10 different object classes. Our classifiers perform comparably to state-of-the-art methods (C4.5 and SVM classifiers) and findings indicate that tactile signals are highly relevant for making accurate object classifications. We also show that accurate "early" classifications are possible using only 20-30 percent of the grasp sequence. For unsupervised learning, our methods generate high quality clusterings relative to the widely-used sequential k-means and self-organising map (SOM), and we present analyses into the differences between the approaches.
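
The unsupervised novelty mechanism described above can be caricatured as a nearest-centroid learner that opens a new class whenever the best match is too far away. This is a toy sketch with a plain Euclidean threshold standing in for the OIESGP novelty score and extreme-value calibration; all names are hypothetical.

```python
def novelty_update(centroids, x, threshold):
    """Incremental unsupervised learner: return the index of the class
    assigned to sample x, creating a new class (appending a centroid)
    when the nearest squared distance exceeds the novelty threshold."""
    if centroids:
        d, j = min((sum((c - v) ** 2 for c, v in zip(cen, x)), k)
                   for k, cen in enumerate(centroids))
        if d <= threshold:
            return j  # familiar: reuse existing class
    centroids.append(list(x))  # novel: open a new class
    return len(centroids) - 1
```

The threshold plays the role of the teacher label: set too low, every grasp becomes its own class; set too high, distinct objects merge.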

  7. 40 CFR 60.1585 - What are my requirements for meeting increments of progress and achieving final compliance?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY... or Before August 30, 1999 Model Rule-Increments of Progress § 60.1585 What are my requirements for...

  8. Adenovirus-delivered GFP-HO-1CΔ23 attenuates blood-spinal cord barrier permeability after rat spinal cord contusion.

    PubMed

    Chang, Sheng; Bi, Yunlong; Meng, Xiangwei; Qu, Lin; Cao, Yang

    2018-03-21

    The blood-spinal cord barrier (BSCB) plays a key role in maintaining the microenvironment and is primarily composed of tight junction (TJ) proteins and nonfenestrated capillary endothelial cells. After injury, BSCB damage results in increasing capillary permeability and release of inflammatory factors. Recent studies have reported that haem oxygenase-1 (HO-1) fragments lacking 23 amino acids at the C-terminus (HO-1CΔ23) exert novel anti-inflammatory and antioxidative effects in vitro. However, no study has identified the role of HO-1CΔ23 in vivo. We aimed to investigate the protective effects of HO-1CΔ23 on the BSCB after spinal cord injury (SCI) in a rat model. Here, adenoviral HO-1CΔ23 (Ad-GFP-HO-1CΔ23) was intrathecally injected into the 10th thoracic spinal cord segment (T10) 7 days before SCI. In addition, nuclear and cytoplasmic extraction and immunofluorescence staining of HO-1 were used to examine the effect of Ad-GFP-HO-1CΔ23 on HO-1 nuclear translocation. Evans blue staining served as an index of capillary permeability and was detected by fluorescence microscopy at 633 nm. Western blotting was also performed to detect tight junction protein expression. The Basso, Beattie and Bresnahan score was used to evaluate kinematic functional recovery through the 28th day after SCI. In this study, the Ad-GFP-HO-1CΔ23 group showed better kinematic functional recovery after SCI than the Ad-GFP and Vehicle groups, as well as smaller reductions in TJ proteins and capillary permeability compared with those in the Ad-GFP and Vehicle groups. These findings indicated that Ad-GFP-HO-1CΔ23 might have a potential therapeutic effect that is mediated by its protection of BSCB integrity.

  9. Enabling Incremental Query Re-Optimization.

    PubMed

    Liu, Mengmeng; Ives, Zachary G; Loo, Boon Thau

    2016-01-01

    As declarative query processing techniques expand to the Web, data streams, network routers, and cloud platforms, there is an increasing need to re-plan execution in the presence of unanticipated performance changes. New runtime information may affect which query plan we prefer to run. Adaptive techniques require innovation both in terms of the algorithms used to estimate costs, and in terms of the search algorithm that finds the best plan. We investigate how to build a cost-based optimizer that recomputes the optimal plan incrementally given new cost information, much as a stream engine constantly updates its outputs given new data. Our implementation especially shows benefits for stream processing workloads. It lays the foundations upon which a variety of novel adaptive optimization algorithms can be built. We start by leveraging the recently proposed approach of formulating query plan enumeration as a set of recursive datalog queries; we develop a variety of novel optimization approaches to ensure effective pruning in both static and incremental cases. We further show that the lessons learned in the declarative implementation can be equally applied to more traditional optimizer implementations.
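
The idea of recomputing an optimal plan from cached sub-results can be illustrated with a toy Selinger-style dynamic program over relation subsets. This is a stand-in sketch, not the paper's datalog formulation; `join_cost` is a hypothetical constant cost per join.

```python
from itertools import combinations

def best_plan_costs(rels, scan_cost, join_cost=1.0):
    """Dynamic program over relation subsets: best[s] is the cheapest
    cost of joining the relations in s, built from cached sub-results.
    rels: list of relation names; scan_cost: {name: base access cost}."""
    best = {frozenset([r]): scan_cost[r] for r in rels}
    for size in range(2, len(rels) + 1):
        for combo in combinations(rels, size):
            s = frozenset(combo)
            best[s] = min(
                best[s - frozenset([r])] + best[frozenset([r])] + join_cost
                for r in combo)
    return best
```

The incremental angle: when one `scan_cost` entry changes at runtime, only the cached entries for subsets containing that relation can change, so re-optimization need not re-enumerate the whole space.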

  10. Enabling Incremental Query Re-Optimization

    PubMed Central

    Liu, Mengmeng; Ives, Zachary G.; Loo, Boon Thau

    2017-01-01

    As declarative query processing techniques expand to the Web, data streams, network routers, and cloud platforms, there is an increasing need to re-plan execution in the presence of unanticipated performance changes. New runtime information may affect which query plan we prefer to run. Adaptive techniques require innovation both in terms of the algorithms used to estimate costs, and in terms of the search algorithm that finds the best plan. We investigate how to build a cost-based optimizer that recomputes the optimal plan incrementally given new cost information, much as a stream engine constantly updates its outputs given new data. Our implementation especially shows benefits for stream processing workloads. It lays the foundations upon which a variety of novel adaptive optimization algorithms can be built. We start by leveraging the recently proposed approach of formulating query plan enumeration as a set of recursive datalog queries; we develop a variety of novel optimization approaches to ensure effective pruning in both static and incremental cases. We further show that the lessons learned in the declarative implementation can be equally applied to more traditional optimizer implementations. PMID:28659658

  11. 40 CFR 60.1620 - How do I comply with the increment of progress for initiating onsite construction?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY... or Before August 30, 1999 Model Rule-Increments of Progress § 60.1620 How do I comply with the...

  12. 40 CFR 60.1625 - How do I comply with the increment of progress for completing onsite construction?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY... or Before August 30, 1999 Model Rule-Increments of Progress § 60.1625 How do I comply with the...

  13. A New Method for Incremental Testing of Finite State Machines

    NASA Technical Reports Server (NTRS)

    Pedrosa, Lehilton Lelis Chaves; Moura, Arnaldo Vieira

    2010-01-01

    The automatic generation of test cases is an important issue for conformance testing of several critical systems. We present a new method for the derivation of test suites when the specification is modeled as a combined Finite State Machine (FSM). A combined FSM is obtained conjoining previously tested submachines with newly added states. This new concept is used to describe a fault model suitable for incremental testing of new systems, or for retesting modified implementations. For this fault model, only the newly added or modified states need to be tested, thereby considerably reducing the size of the test suites. The new method is a generalization of the well-known W-method and the G-method, but is scalable, and so it can be used to test FSMs with an arbitrarily large number of states.
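
One building block of such FSM test generation is computing shortest access sequences to the states under test. The BFS sketch below uses a hypothetical nested-dict encoding of the machine; it illustrates why, in the incremental setting above, only the newly added or modified states need fresh sequences.

```python
from collections import deque

def access_sequences(fsm, start, targets):
    """Shortest input sequences from `start` to each reachable target state.
    fsm: {state: {input_symbol: next_state}} (toy deterministic encoding)."""
    seen = {start: ()}  # state -> input sequence that reaches it
    queue = deque([start])
    while queue:
        s = queue.popleft()
        for inp, nxt in fsm.get(s, {}).items():
            if nxt not in seen:
                seen[nxt] = seen[s] + (inp,)
                queue.append(nxt)
    return {t: seen[t] for t in targets if t in seen}
```

In a W-method-style suite, these access sequences are concatenated with characterizing sequences; restricting `targets` to the new states is what shrinks the incremental test suite.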

  14. Incremental soil sampling root water uptake, or be great through others

    USDA-ARS?s Scientific Manuscript database

    Ray Allmaras pursued several research topics in relation to residue and tillage research. He looked for new tools to help explain soil responses to tillage, including disk permeameters and image analysis. The incremental sampler developed by Pikul and Allmaras allowed small-depth increment, volumetr...

  15. A Group Increment Scheme for Infrared Absorption Intensities of Greenhouse Gases

    NASA Technical Reports Server (NTRS)

    Kokkila, Sara I.; Bera, Partha P.; Francisco, Joseph S.; Lee, Timothy J.

    2012-01-01

    A molecule's absorption in the atmospheric infrared (IR) window (IRW) is an indicator of its efficiency as a greenhouse gas. A model for estimating the absorption of a fluorinated molecule within the IRW was developed to assess its radiative impact. This model will be useful in comparing the contributions of different hydrofluorocarbons and hydrofluoroethers to global warming. The absorption of radiation by greenhouse gases, in particular hydrofluoroethers and hydrofluorocarbons, was investigated using ab initio quantum mechanical methods. Least squares regression techniques were used to create a model based on these data. The placement and number of fluorines in the molecule were found to affect the absorption in the IR window and were incorporated into the model. Several group increment models are discussed. An additive model based on one-carbon groups is found to work satisfactorily in predicting the ab initio calculated vibrational intensities.
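
An additive group-increment model of the kind described above reduces to an ordinary least-squares fit of per-group weights. The sketch below assumes a hypothetical occurrence-count matrix (rows are molecules, columns are one-carbon groups), not the paper's ab initio data.

```python
import numpy as np

def fit_group_increments(counts, intensities):
    """Least-squares additive group-increment model: intensity ~ counts @ w.
    counts: (n_molecules, n_groups) group-occurrence matrix;
    intensities: per-molecule IR intensities. Returns per-group weights."""
    w, *_ = np.linalg.lstsq(np.asarray(counts, float),
                            np.asarray(intensities, float), rcond=None)
    return w
```

Predicting a new molecule's intensity is then just `counts_new @ w`, which is what makes the scheme cheap compared with a fresh ab initio calculation.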

  16. On the Accuracy and Parallelism of GPGPU-Powered Incremental Clustering Algorithms

    PubMed Central

    He, Li; Zheng, Hao; Wang, Lei

    2017-01-01

    Incremental clustering algorithms play a vital role in various applications such as massive data analysis and real-time data processing. Typical application scenarios of incremental clustering raise high demand on computing power of the hardware platform. Parallel computing is a common solution to meet this demand. Moreover, General Purpose Graphic Processing Unit (GPGPU) is a promising parallel computing device. Nevertheless, the incremental clustering algorithm is facing a dilemma between clustering accuracy and parallelism when they are powered by GPGPU. We formally analyzed the cause of this dilemma. First, we formalized concepts relevant to incremental clustering like evolving granularity. Second, we formally proved two theorems. The first theorem proves the relation between clustering accuracy and evolving granularity. Additionally, this theorem analyzes the upper and lower bounds of different-to-same mis-affiliation. Fewer occurrences of such mis-affiliation mean higher accuracy. The second theorem reveals the relation between parallelism and evolving granularity. Smaller work-depth means superior parallelism. Through the proofs, we conclude that accuracy of an incremental clustering algorithm is negatively related to evolving granularity while parallelism is positively related to the granularity. Thus the contradictory relations cause the dilemma. Finally, we validated the relations through a demo algorithm. Experiment results verified theoretical conclusions. PMID:29123546

  17. On the Accuracy and Parallelism of GPGPU-Powered Incremental Clustering Algorithms.

    PubMed

    Chen, Chunlei; He, Li; Zhang, Huixiang; Zheng, Hao; Wang, Lei

    2017-01-01

    Incremental clustering algorithms play a vital role in various applications such as massive data analysis and real-time data processing. Typical application scenarios of incremental clustering raise high demand on computing power of the hardware platform. Parallel computing is a common solution to meet this demand. Moreover, General Purpose Graphic Processing Unit (GPGPU) is a promising parallel computing device. Nevertheless, the incremental clustering algorithm is facing a dilemma between clustering accuracy and parallelism when they are powered by GPGPU. We formally analyzed the cause of this dilemma. First, we formalized concepts relevant to incremental clustering like evolving granularity. Second, we formally proved two theorems. The first theorem proves the relation between clustering accuracy and evolving granularity. Additionally, this theorem analyzes the upper and lower bounds of different-to-same mis-affiliation. Fewer occurrences of such mis-affiliation mean higher accuracy. The second theorem reveals the relation between parallelism and evolving granularity. Smaller work-depth means superior parallelism. Through the proofs, we conclude that accuracy of an incremental clustering algorithm is negatively related to evolving granularity while parallelism is positively related to the granularity. Thus the contradictory relations cause the dilemma. Finally, we validated the relations through a demo algorithm. Experiment results verified theoretical conclusions.
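
The accuracy side of the granularity trade-off discussed above can be illustrated with a sequential (point-at-a-time, i.e. finest-granularity) k-means update. This is a toy sketch, not the GPGPU algorithm the paper analyzes; coarser evolving granularity would batch many points before moving the centroids, gaining parallelism at the cost of more mis-affiliations.

```python
def online_kmeans_update(centroids, counts, point):
    """Sequential k-means step: assign `point` to the nearest centroid,
    then move that centroid toward it with step size 1/n. Mutates
    `centroids` (list of coordinate lists) and `counts` in place."""
    d2 = [sum((c - p) ** 2 for c, p in zip(cen, point)) for cen in centroids]
    j = d2.index(min(d2))
    counts[j] += 1
    eta = 1.0 / counts[j]  # shrinking step size
    centroids[j] = [c + eta * (p - c) for c, p in zip(centroids[j], point)]
    return j
```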

  18. The Sugar Model: Autocatalytic Activity of the Triose-Ammonia Reaction

    NASA Technical Reports Server (NTRS)

    Weber, Arthur L.

    2006-01-01

    Reaction of triose sugars with ammonia under anaerobic conditions yielded autocatalytic products. The autocatalytic behavior of the products was examined by measuring the effect of the crude triose-ammonia reaction product on the kinetics of a second identical triose-ammonia reaction. The reaction product showed autocatalytic activity by increasing both the rate of disappearance of triose and the rate of formation of pyruvaldehyde, the product of triose dehydration. This synthetic process is considered a reasonable model of origin-of-life chemistry because it uses plausible prebiotic substrates, and resembles modern biosynthesis by employing the energized carbon groups of sugars to drive the synthesis of autocatalytic molecules.
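
The autocatalytic kinetics described above can be caricatured by the minimal scheme S + P → 2P, in which the product accelerates its own formation. The Euler integration below uses arbitrary rate constants, not measured triose-ammonia values.

```python
def autocatalytic_course(s0, p0, k, dt, steps):
    """Euler integration of the minimal autocatalytic scheme S + P -> 2P:
    dS/dt = -k*S*P, dP/dt = +k*S*P. Returns final (S, P).
    Constants are illustrative, not fitted to the experiments."""
    s, p = s0, p0
    for _ in range(steps):
        r = k * s * p * dt  # reaction increment this step
        s, p = s - r, p + r
    return s, p
```

Seeding the reaction with product (larger `p0`) shortens the lag phase, which is exactly the signature measured when the crude product of a first reaction is added to a second one.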

  19. The Sugar Model: Autocatalytic Activity of the Triose Ammonia Reaction

    NASA Astrophysics Data System (ADS)

    Weber, Arthur L.

    2007-04-01

    Reaction of triose sugars with ammonia under anaerobic conditions yielded autocatalytic products. The autocatalytic behavior of the products was examined by measuring the effect of the crude triose ammonia reaction product on the kinetics of a second identical triose ammonia reaction. The reaction product showed autocatalytic activity by increasing both the rate of disappearance of triose and the rate of formation of pyruvaldehyde, the product of triose dehydration. This synthetic process is considered a reasonable model of origin-of-life chemistry because it uses plausible prebiotic substrates, and resembles modern biosynthesis by employing the energized carbon groups of sugars to drive the synthesis of autocatalytic molecules.

  20. A Joint Modeling Approach for Reaction Time and Accuracy in Psycholinguistic Experiments

    ERIC Educational Resources Information Center

    Loeys, T.; Rosseel, Y.; Baten, K.

    2011-01-01

    In the psycholinguistic literature, reaction times and accuracy can be analyzed separately using mixed (logistic) effects models with crossed random effects for item and subject. Given the potential correlation between these two outcomes, a joint model for the reaction time and accuracy may provide further insight. In this paper, a Bayesian…

  1. Convection induced by thermal gradients on thin reaction fronts

    NASA Astrophysics Data System (ADS)

    Ruelas Paredes, David R. A.; Vasquez, Desiderio A.

    2017-09-01

    We present a thin front model for the propagation of chemical reaction fronts in liquids inside a Hele-Shaw cell or porous media. In this model we take into account density gradients due to thermal and compositional changes across a thin interface. The front separating reacted from unreacted fluids evolves following an eikonal relation between the normal speed and the curvature. We carry out a linear stability analysis of convectionless flat fronts confined in a two-dimensional rectangular domain. We find that all fronts are stable to perturbations of short wavelength, but they become unstable for some wavelengths depending on the values of compositional and thermal gradients. If the effects of these gradients oppose each other, we observe a range of wavelengths that make the flat front unstable. Numerical solutions of the nonlinear model show curved fronts of steady shape with convection propagating faster than flat fronts. Exothermic fronts increase the temperature of the fluid as they propagate through the domain. This increment in temperature decreases with increasing speed.
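
    The eikonal relation invoked here can be stated explicitly as a normal front speed given by the flat-front speed plus a curvature correction, v_n = v0 - D*kappa (sign conventions vary between formulations; the numbers below are illustrative):

```python
def normal_speed(v0, D, curvature):
    """Eikonal relation: normal front speed as the flat-front speed minus a
    curvature correction (v_n = v0 - D*kappa; the sign convention here is
    illustrative)."""
    return v0 - D * curvature

# A front segment bulging into the unreacted fluid (positive curvature)
# advances more slowly than a flat front; a concave segment advances faster.
flat = normal_speed(v0=1.0, D=0.1, curvature=0.0)
convex = normal_speed(v0=1.0, D=0.1, curvature=2.0)
print(flat, convex)
```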

  2. International Space Station Increment-2 Quick Look Report

    NASA Technical Reports Server (NTRS)

    Jules, Kenol; Hrovat, Kenneth; Kelly, Eric

    2001-01-01

    The objective of this quick look report is to disseminate the International Space Station (ISS) Increment-2 reduced gravity environment preliminary analysis in a timely manner to the microgravity scientific community. This report is a quick look at the processed acceleration data collected by the Microgravity Acceleration Measurement System (MAMS) during the period of May 3 to June 8, 2001. The report is by no means an exhaustive examination of all the relevant activities that occurred during this time span, for two reasons. First, the time span considered in this report is rather short, and the MAMS was not active throughout it, which does not allow a detailed characterization. Second, as its name implies, it is a quick look at the acceleration data. Consequently, a more comprehensive report, the ISS Increment-2 report, will be published following the conclusion of the Increment-2 tour of duty. NASA sponsors the MAMS and the Space Acceleration Microgravity System (SAMS) to support microgravity science experiments that require microgravity acceleration measurements. On April 19, 2001, both the MAMS and the SAMS units were launched on STS-100 from the Kennedy Space Center for installation on the ISS. The MAMS unit was flown to the station in support of science experiments requiring quasi-steady acceleration data measurements, while the SAMS unit was flown to support experiments requiring vibratory acceleration data measurement. Both acceleration systems are also used in support of the vehicle microgravity requirements verification. The ISS reduced gravity environment analysis presented in this report uses mostly the MAMS acceleration data measurements (the Increment-2 report will cover both systems). The MAMS has two sensors. The MAMS Orbital Acceleration Research Experiment Sensor Subsystem, which is a low frequency range sensor (up to 1 Hz), is used to characterize the quasi-steady environment for payloads and

  3. Criterion and incremental validity of the emotion regulation questionnaire

    PubMed Central

    Ioannidis, Christos A.; Siegling, A. B.

    2015-01-01

    Although research on emotion regulation (ER) is developing, little attention has been paid to the predictive power of ER strategies beyond established constructs. The present study examined the incremental validity of the Emotion Regulation Questionnaire (ERQ; Gross and John, 2003), which measures cognitive reappraisal and expressive suppression, over and above the Big Five personality factors. It also extended the evidence for the measure's criterion validity to yet unexamined criteria. A university student sample (N = 203) completed the ERQ, a measure of the Big Five, and relevant cognitive and emotion-laden criteria. Cognitive reappraisal predicted positive affect beyond personality, as well as experiential flexibility and constructive self-assertion beyond personality and affect. Expressive suppression explained incremental variance in negative affect beyond personality and in experiential flexibility beyond personality and general affect. No incremental effects were found for worry, social anxiety, rumination, reflection, and preventing negative emotions. Implications for the construct validity and utility of the ERQ are discussed. PMID:25814967

  4. Incremental Beliefs of Ability, Achievement Emotions and Learning of Singapore Students

    ERIC Educational Resources Information Center

    Luo, Wenshu; Lee, Kerry; Ng, Pak Tee; Ong, Joanne Xiao Wei

    2014-01-01

    This study investigated the relationships of students' incremental beliefs of math ability to their achievement emotions, classroom engagement and math achievement. A sample of 273 secondary students in Singapore were administered measures of incremental beliefs of math ability, math enjoyment, pride, boredom and anxiety, as well as math classroom…

  5. Estimation of Energy Expenditure Using a Patch-Type Sensor Module with an Incremental Radial Basis Function Neural Network

    PubMed Central

    Li, Meina; Kwak, Keun-Chang; Kim, Youn Tae

    2016-01-01

    Conventionally, indirect calorimetry has been used to estimate oxygen consumption in an effort to accurately measure human body energy expenditure. However, calorimetry requires the subject to wear a mask that is neither convenient nor comfortable. The purpose of our study is to develop a patch-type sensor module with an embedded incremental radial basis function neural network (RBFNN) for estimating the energy expenditure. The sensor module contains one ECG electrode and a three-axis accelerometer, and can perform real-time heart rate (HR) and movement index (MI) monitoring. The embedded incremental network includes linear regression (LR) and RBFNN based on context-based fuzzy c-means (CFCM) clustering. This incremental network is constructed by building a collection of information granules through CFCM clustering that is guided by the distribution of error of the linear part of the LR model. PMID:27669249
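
    The structure of the embedded network, a linear regression whose residual error is corrected by radial basis units, can be sketched as follows. The per-point Gaussian units below are a crude stand-in for the granules that CFCM clustering would construct from the distribution of the linear model's error:

```python
import math

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b (1-D closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def add_rbf_layer(xs, residuals, width=0.1):
    """Gaussian units centred on the training inputs, weighted by the linear
    model's residuals -- a crude stand-in for CFCM-guided granules."""
    def correction(x):
        return sum(r * math.exp(-((x - c) / width) ** 2)
                   for c, r in zip(xs, residuals))
    return correction

xs, ys = [0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 4.0, 9.0]   # nonlinear target
a, b = fit_linear(xs, ys)
linear = lambda x: a * x + b
residuals = [y - linear(x) for x, y in zip(xs, ys)]
rbf = add_rbf_layer(xs, residuals)
model = lambda x: linear(x) + rbf(x)

err_lin = max(abs(y - linear(x)) for x, y in zip(xs, ys))
err_rbf = max(abs(y - model(x)) for x, y in zip(xs, ys))
print(err_rbf < err_lin)   # the RBF layer corrects the linear residual
```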

  6. The Cognitive Underpinnings of Incremental Rehearsal

    ERIC Educational Resources Information Center

    Varma, Sashank; Schleisman, Katrina B.

    2014-01-01

    Incremental rehearsal (IR) is a flashcard technique that has been developed and evaluated by school psychologists. We discuss potential learning and memory effects from cognitive psychology that may explain the observed superiority of IR over other flashcard techniques. First, we propose that IR is a form of "spaced practice" that…

  7. Comparison of different incremental analysis update schemes in a realistic assimilation system with Ensemble Kalman Filter

    NASA Astrophysics Data System (ADS)

    Yan, Y.; Barth, A.; Beckers, J. M.; Brankart, J. M.; Brasseur, P.; Candille, G.

    2017-07-01

    In this paper, three incremental analysis update schemes (IAU 0, IAU 50 and IAU 100) are compared in the same assimilation experiments with a realistic eddy-permitting primitive equation model of the North Atlantic Ocean using the Ensemble Kalman Filter. The difference between the three IAU schemes lies in the position of the increment update window. The relevance of each IAU scheme is evaluated through analyses of both thermohaline and dynamical variables. The assimilation results are validated according to both deterministic and probabilistic metrics against different sources of observations. For deterministic validation, the ensemble mean and the ensemble spread are compared to the observations. For probabilistic validation, the continuous ranked probability score (CRPS) is used to evaluate the reliability and resolution of the ensemble forecast system, with reliability further decomposed into bias and dispersion by the reduced centred random variable (RCRV) score. The results show that (1) the IAU 50 scheme has the same performance as the IAU 100 scheme; (2) the IAU 50/100 schemes outperform the IAU 0 scheme in error covariance propagation for thermohaline variables in relatively stable regions, while the IAU 0 scheme outperforms the IAU 50/100 schemes in estimating dynamical variables in dynamically active regions; and (3) given a sufficient number of observations and good error specification, the impact of the IAU scheme is negligible. The differences between the IAU 0 scheme and the IAU 50/100 schemes are mainly due to different model integration times and to the instabilities (density inversion, large vertical velocity, etc.) induced by the increment update. The longer model integration time of the IAU 50/100 schemes, especially the free model integration, on one hand allows better re-establishment of the equilibrium model state, and on the other hand smooths the strong gradients in dynamically active regions.
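
    The distinction between the schemes can be sketched abstractly: IAU 0 injects the whole analysis increment in a single step, whereas IAU 50/100 spread it in equal parts over an update window during the model integration. The toy decay model below only illustrates the windowing idea, not the ocean model:

```python
def integrate(x0, increment, spread_steps, total_steps=10, decay=0.1):
    """Run a toy model x' = -decay*x, injecting `increment` either at once
    (spread_steps=1, analogous to IAU 0) or in equal parts over a window
    (spread_steps=total_steps, analogous to IAU 100)."""
    x, traj = x0, []
    for k in range(total_steps):
        if k < spread_steps:
            x += increment / spread_steps   # incremental analysis update
        x -= decay * x                      # one model step
        traj.append(x)
    return traj

abrupt = integrate(1.0, 0.5, spread_steps=1)
smooth = integrate(1.0, 0.5, spread_steps=10)
# The abrupt scheme produces the larger single-step jump in the trajectory,
# the kind of shock that can excite instabilities in a full ocean model.
jump = lambda tr: max(abs(b - a) for a, b in zip([1.0] + tr, tr))
print(jump(abrupt) > jump(smooth))
```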

  8. PhreeqcRM: A reaction module for transport simulators based on the geochemical model PHREEQC

    USGS Publications Warehouse

    Parkhurst, David L.; Wissmeier, Laurin

    2015-01-01

    PhreeqcRM is a geochemical reaction module designed specifically to perform equilibrium and kinetic reaction calculations for reactive transport simulators that use an operator-splitting approach. The basic function of the reaction module is to take component concentrations from the model cells of the transport simulator, run geochemical reactions, and return updated component concentrations to the transport simulator. If multicomponent diffusion is modeled (e.g., Nernst–Planck equation), then aqueous species concentrations can be used instead of component concentrations. The reaction capabilities are a complete implementation of the reaction capabilities of PHREEQC. In each cell, the reaction module maintains the composition of all of the reactants, which may include minerals, exchangers, surface complexers, gas phases, solid solutions, and user-defined kinetic reactants. PhreeqcRM assigns initial and boundary conditions for model cells based on standard PHREEQC input definitions (files or strings) of chemical compositions of solutions and reactants. Additional PhreeqcRM capabilities include methods to eliminate reaction calculations for inactive parts of a model domain, transfer concentrations and other model properties, and retrieve selected results. The module demonstrates good scalability for parallel processing by using multiprocessing with MPI (message passing interface) on distributed memory systems, and limited scalability using multithreading with OpenMP on shared memory systems. PhreeqcRM is written in C++, but interfaces allow methods to be called from C or Fortran. By using the PhreeqcRM reaction module, an existing multicomponent transport simulator can be extended to simulate a wide range of geochemical reactions. Results of the implementation of PhreeqcRM as the reaction engine for transport simulators PHAST and FEFLOW are shown by using an analytical solution and the reactive transport benchmark of MoMaS.
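
    The operator-splitting pattern that PhreeqcRM serves can be sketched generically. Note this is not the PhreeqcRM API; the two step functions below are toy stand-ins for the transport simulator and the per-cell geochemical solver:

```python
def transport_step(conc, velocity=0.5):
    """Toy 1-D upwind advection of a concentration field (illustrative
    stand-in for the transport simulator's step)."""
    return [conc[0]] + [c + velocity * (up - c)
                        for up, c in zip(conc[:-1], conc[1:])]

def reaction_step(conc, k=0.2):
    """Toy first-order decay applied cell by cell (illustrative stand-in
    for the geochemical reaction module)."""
    return [c * (1.0 - k) for c in conc]

# Operator splitting: each time step, transport moves concentrations
# between cells, then the reaction module updates each cell independently.
conc = [1.0, 0.0, 0.0, 0.0]
for _ in range(3):
    conc = transport_step(conc)
    conc = reaction_step(conc)
print(conc)
```

    The per-cell independence of the reaction step is what makes the reaction half of the split embarrassingly parallel, which is the property PhreeqcRM exploits with MPI and OpenMP.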

  9. Abrasion-ablation model for neutron production in heavy ion reactions

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; Wilson, John W.; Townsend, Lawrence W.

    1995-01-01

    In heavy ion reactions, neutron production at forward angles is observed to occur with a Gaussian shape that is centered near the beam energy and extends to energies well above that of the beam. This paper presents an abrasion-ablation model for making quantitative predictions of the neutron spectrum. To describe neutrons produced from the abrasion step of the reaction where the projectile and target overlap, the authors use the Glauber model and include effects of final-state interactions. They then use the prefragment mass distribution from abrasion with a statistical evaporation model to estimate the neutron spectrum resulting from ablation. Measurements of neutron production from Ne and Nb beams are compared with calculations, and good agreement is found.

  10. A Luenberger observer for reaction-diffusion models with front position data

    NASA Astrophysics Data System (ADS)

    Collin, Annabelle; Chapelle, Dominique; Moireau, Philippe

    2015-11-01

    We propose a Luenberger observer for reaction-diffusion models with propagating front features, and for data associated with the location of the front over time. Such models are considered in various application fields, such as electrophysiology, wild-land fire propagation and tumor growth modeling. Drawing our inspiration from image processing methods, we start by proposing an observer for the eikonal-curvature equation that can be derived from the reaction-diffusion model by an asymptotic expansion. We then carry over this observer to the underlying reaction-diffusion equation by an "inverse asymptotic analysis", and we show that the associated correction in the dynamics has a stabilizing effect for the linearized estimation error. We also discuss the extension to joint state-parameter estimation by using the earlier-proposed ROUKF strategy. We then illustrate and assess our proposed observer method with test problems pertaining to electrophysiology modeling, including with a realistic model of cardiac atria. Our numerical trials show that state estimation is directly very effective with the proposed Luenberger observer, while specific strategies are needed to accurately perform parameter estimation - as is usual with Kalman filtering used in a nonlinear setting - and we demonstrate two such successful strategies.
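
    The observer principle is easiest to see on a scalar toy system: run a copy of the model driven by a correction proportional to the data misfit, d x_hat/dt = f(x_hat) + L*(y - x_hat). A minimal sketch with an illustrative gain, far simpler than the paper's reaction-diffusion setting:

```python
def simulate_observer(gain, x0_true=1.0, x0_hat=0.0, dt=0.01, steps=500):
    """Scalar Luenberger observer for dx/dt = -x with full-state measurement:
    the observer integrates the same model plus gain*(y - x_hat)."""
    x, x_hat = x0_true, x0_hat
    for _ in range(steps):
        y = x                                    # measurement of true state
        x += -x * dt                             # true dynamics
        x_hat += (-x_hat + gain * (y - x_hat)) * dt   # observer dynamics
    return abs(x - x_hat)

err_open = simulate_observer(gain=0.0)   # no correction: initial error persists
err_obs = simulate_observer(gain=2.0)    # Luenberger correction drives it down
print(err_obs < err_open)
```

    The correction term plays the same stabilizing role for the estimation error that the paper establishes for the linearized reaction-diffusion observer.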

  11. Coupled enzyme reactions performed in heterogeneous reaction media: experiments and modeling for glucose oxidase and horseradish peroxidase in a PEG/citrate aqueous two-phase system.

    PubMed

    Aumiller, William M; Davis, Bradley W; Hashemian, Negar; Maranas, Costas; Armaou, Antonios; Keating, Christine D

    2014-03-06

    The intracellular environment in which biological reactions occur is crowded with macromolecules and subdivided into microenvironments that differ in both physical properties and chemical composition. The work described here combines experimental and computational model systems to help understand the consequences of this heterogeneous reaction media on the outcome of coupled enzyme reactions. Our experimental model system for solution heterogeneity is a biphasic polyethylene glycol (PEG)/sodium citrate aqueous mixture that provides coexisting PEG-rich and citrate-rich phases. Reaction kinetics for the coupled enzyme reaction between glucose oxidase (GOX) and horseradish peroxidase (HRP) were measured in the PEG/citrate aqueous two-phase system (ATPS). Enzyme kinetics differed between the two phases, particularly for the HRP. Both enzymes, as well as the substrates glucose and H2O2, partitioned to the citrate-rich phase; however, the Amplex Red substrate necessary to complete the sequential reaction partitioned strongly to the PEG-rich phase. Reactions in ATPS were quantitatively described by a mathematical model that incorporated measured partitioning and kinetic parameters. The model was then extended to new reaction conditions, i.e., higher enzyme concentration. Both experimental and computational results suggest mass transfer across the interface is vital to maintain the observed rate of product formation, which may be a means of metabolic regulation in vivo. Although outcomes for a specific system will depend on the particulars of the enzyme reactions and the microenvironments, this work demonstrates how coupled enzymatic reactions in complex, heterogeneous media can be understood in terms of a mathematical model.
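
    The model structure, kinetics in each phase coupled by interfacial mass transfer of the intermediate, can be sketched with first-order toy kinetics (the real model uses measured Michaelis-Menten and partitioning parameters; everything below is illustrative):

```python
def simulate(k_transfer, k1=1.0, k2=1.0, dt=0.001, steps=5000):
    """Two-compartment sketch of a coupled enzyme reaction: phase A converts
    substrate s to intermediate i_a (first enzyme); the second enzyme acts
    only in phase B on i_b; interfacial transfer couples i_a <-> i_b."""
    s, i_a, i_b, p = 1.0, 0.0, 0.0, 0.0
    for _ in range(steps):
        r1 = k1 * s                       # first reaction, phase A
        r2 = k2 * i_b                     # second reaction, phase B
        flux = k_transfer * (i_a - i_b)   # interfacial mass transfer
        s += -r1 * dt
        i_a += (r1 - flux) * dt
        i_b += (flux - r2) * dt
        p += r2 * dt
    return p

p_fast = simulate(k_transfer=5.0)
p_slow = simulate(k_transfer=0.1)
print(p_fast > p_slow)   # slow interfacial transfer limits product formation
```

    The comparison mirrors the paper's conclusion that mass transfer across the interface is vital to maintaining the observed rate of product formation.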

  12. Incremental Transductive Learning Approaches to Schistosomiasis Vector Classification

    NASA Astrophysics Data System (ADS)

    Fusco, Terence; Bi, Yaxin; Wang, Haiying; Browne, Fiona

    2016-08-01

    Collecting epidemic disease data for analysis is a labour-intensive, time-consuming and expensive process, and therefore yields only sparse sample data from which to develop prediction models. To address this sparse data issue, we present novel Incremental Transductive methods that circumvent the data collection process by applying previously acquired data to provide consistent, confidence-based labelling alternatives to field survey research. We investigated various reasoning approaches for semi-supervised machine learning, including Bayesian models for labelling data. The results show that using the proposed methods, we can label instances of data with a class of vector density at a high level of confidence. By applying the Liberal and Strict Training Approaches, we provide a labelling and classification alternative to standalone algorithms. The methods in this paper are components in the process of reducing the proliferation of the Schistosomiasis disease and its effects.
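
    The confidence-based labelling idea can be sketched as iterative self-training: an unlabelled instance is labelled only when the classifier's confidence clears a threshold, a high threshold corresponding to a Strict approach and a low one to a Liberal approach. The nearest-neighbour confidence score below is a stand-in for the Bayesian models investigated:

```python
def confidence(point, labelled):
    """Nearest-neighbour vote whose confidence decays with distance
    (illustrative scoring rule, not the paper's Bayesian models)."""
    x_near, y_near = min(labelled, key=lambda xy: abs(xy[0] - point))
    return y_near, 1.0 / (1.0 + abs(x_near - point))

def self_train(labelled, unlabelled, threshold):
    """Add an unlabelled instance to the training set only if its
    predicted label clears the confidence threshold."""
    labelled = list(labelled)
    for x in unlabelled:
        label, conf = confidence(x, labelled)
        if conf >= threshold:      # Strict: high threshold, fewer labels
            labelled.append((x, label))
    return labelled

seed = [(0.0, "low"), (10.0, "high")]   # sparse field-survey labels
pool = [0.5, 9.5, 5.0]                  # unlabelled instances
strict = self_train(seed, pool, threshold=0.5)
liberal = self_train(seed, pool, threshold=0.1)
print(len(strict), len(liberal))
```

    The ambiguous mid-range instance is labelled only under the Liberal threshold, which is the trade-off between coverage and labelling confidence the two training approaches embody.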

  13. Does Stroke Volume Increase During an Incremental Exercise? A Systematic Review

    PubMed Central

    Vieira, Stella S.; Lemes, Brunno; de T. C. de Carvalho, Paulo; N. de Lima, Rafael; S. Bocalini, Danilo; A. S. Junior, José; Arsa, Gisela; A. Casarin, Cezar; L. Andrade, Erinaldo; J. Serra, Andrey

    2016-01-01

    Introduction: Cardiac output increases during incremental-load exercise to meet metabolic skeletal muscle demand. This response requires a fast adjustment in heart rate and stroke volume. Heart rate is well known to increase linearly with exercise load; however, data for stroke volume during incremental-load exercise are unclear. Our objectives were to (a) review studies that have investigated stroke volume during incremental-load exercise and (b) summarize the findings for stroke volume, primarily at maximal-exercise load. Methods: A comprehensive review of the Cochrane Library, Embase, Medline, SportDiscus, PubMed, and Web of Science databases was carried out for the years 1985 to the present. The search was performed between February and June 2014 to find studies evaluating changes in stroke volume during incremental-load exercise. Controlled and uncontrolled trials were evaluated for a quality score. Results: The stroke volume data at maximal-exercise load are inconsistent. There is evidence supporting the hypothesis that stroke volume increases at maximal-exercise load, but other lines of evidence indicate that stroke volume reaches a plateau under these circumstances, or even decreases. Conclusion: The stroke volume data are unclear and include contradictory evidence. Additional studies with standardized reporting for subjects (e.g., age, gender, physical fitness, and body position), exercise test protocols, and left ventricular function are required to clarify the characteristics of stroke volume during incremental maximal-exercise load. PMID:27347221

  14. Incremental Validity of the New MCAT.

    ERIC Educational Resources Information Center

    Friedman, Charles P.; Bakewell, William E., Jr.

    1980-01-01

    The ability of the new Medical College Admission Test (MCAT) to predict performance of first-year medical students at the University of North Carolina was studied. Its incremental validity, determined by computing the additional variance in performance explainable by the MCAT after the effects of other admissions variables were taken into account,…

  15. Next Generation Diagnostic System (NGDS) Increment 1 Early Fielding Report

    DTIC Science & Technology

    2017-06-07

    Next Generation Diagnostic System (NGDS) Increment 1 Early Fielding Report, June 2017. Summary: This report provides the Director, Operational Test and Evaluation's (DOT&E) operational assessment of the

  16. Prediction of adverse drug reactions using decision tree modeling.

    PubMed

    Hammann, F; Gutmann, H; Vogt, N; Helma, C; Drewe, J

    2010-07-01

    Drug safety is of great importance to public health. The detrimental effects of drugs not only limit their application but also cause suffering in individual patients and evoke distrust of pharmacotherapy. For the purpose of identifying drugs that could be suspected of causing adverse reactions, we present a structure-activity relationship analysis of adverse drug reactions (ADRs) in the central nervous system (CNS), liver, and kidney, and also of allergic reactions, for a broad variety of drugs (n = 507) from the Swiss drug registry. Using decision tree induction, a machine learning method, we determined the chemical, physical, and structural properties of compounds that predispose them to causing ADRs. The models had high predictive accuracies (78.9-90.2%) for allergic, renal, CNS, and hepatic ADRs. We show the feasibility of predicting complex end-organ effects using simple models that involve no expensive computations and that can be used (i) in the selection of the compound during the drug discovery stage, (ii) to understand how drugs interact with the target organ systems, and (iii) for generating alerts in postmarketing drug surveillance and pharmacovigilance.
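
    The core of decision tree induction, choosing property thresholds that best separate ADR-positive from ADR-negative compounds, can be sketched with a single-split "stump" on one descriptor. The descriptor values and ADR flags below are invented for illustration, not drawn from the Swiss registry:

```python
def best_stump(values, labels):
    """Choose the threshold on one descriptor that minimises
    misclassifications (label 1 is predicted above the threshold) --
    the greedy split criterion at the heart of tree induction."""
    best_t, best_err = None, float("inf")
    for t in sorted(set(values)):
        err = sum((v > t) != bool(y) for v, y in zip(values, labels))
        if err < best_err:
            best_t, best_err = t, err
    return best_t, best_err

# Hypothetical descriptor (e.g. a lipophilicity-like value) vs. an ADR flag.
descriptor = [0.5, 1.0, 1.2, 3.1, 3.5, 4.0]
adr        = [0,   0,   0,   1,   1,   1]
t, err = best_stump(descriptor, adr)
print(t, err)   # a threshold between the two groups separates them perfectly
```

    A full tree repeats this split recursively over many chemical, physical, and structural descriptors; the interpretability of the resulting thresholds is what makes such models usable for alerts in pharmacovigilance.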

  17. Global Combat Support System - Joint Increment 8 (GCSS-J Inc 8)

    DTIC Science & Technology

    2016-03-01

    Acronyms: Acquisition Executive; DoD - Department of Defense; DoDAF - DoD Architecture Framework; FD - Full Deployment; FDD - Full Deployment Decision; FY - Fiscal... Schedule, estimate (or actual): Milestone B, Mar 2014 (Mar 2014); Milestone C, Mar 2014 (Mar 2014); Increment 8 FDD, Dec 2018 (Dec 2018); Increment 8 FD, TBD (TBD).

  18. Contribution For Arc Temperature Affected By Current Increment Ratio At Peak Current In Pulsed Arc

    NASA Astrophysics Data System (ADS)

    Kano, Ryota; Mitubori, Hironori; Iwao, Toru

    2015-11-01

    Tungsten Inert Gas (TIG) welding is a high-quality welding process. However, the parameters of pulsed arc welding are many and complicated, and if they are not appropriate the weld pool becomes wide and shallow; the convection driving forces contribute to the weld pool shape. Moreover, when the current waveform changes, as in pulsed high-frequency TIG welding, the arc temperature does not follow the change of the current; in particular, earlier calculations suggest this for the arc temperature at the time the peak current is reached. Thus, an accurate measurement of the temperature at that time is required. The objective of this research is therefore to elucidate the contribution to arc temperature of the current increment ratio at peak current in a pulsed arc, and thereby obtain detailed knowledge of the welding model for pulsed arcs. The temperature during the increase from the base current to the peak current was measured using spectroscopy. As a result, when the arc current increased from 100 A to 150 A over 120 ms, no transient response of the temperature occurred during the current rise, as verified by measurement. The contribution of the current increment ratio at peak current to the arc temperature in a pulsed arc was thus elucidated, providing further knowledge of the welding model of the pulsed arc.

  19. Modeling of the oxygen reduction reaction for dense LSM thin films

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Tao; Liu, Jian; Yu, Yang

    In this study, the oxygen reduction reaction mechanism is investigated using numerical methods on a dense thin (La 1-xSr x) yMnO 3±δ film deposited on a YSZ substrate. This 1-D continuum model consists of defect chemistry and elementary oxygen reduction reaction steps coupled via reaction rates. The defect chemistry model contains eight species including cation vacancies on the A- and B-sites. The oxygen vacancy is calculated by solving species transportation equations in multiphysics simulations. Due to the simple geometry of a dense thin film, the oxygen reduction reaction was reduced to three elementary steps: surface adsorption and dissociation, incorporation on the surface, and charge transfer across the LSM/YSZ interface. The numerical simulations allow for calculation of the temperature- and oxygen partial pressure-dependent properties of LSM. The parameters of the model are calibrated with experimental impedance data for various oxygen partial pressures at different temperatures. The results indicate that surface adsorption and dissociation is the rate-determining step in the ORR of LSM thin films. With the fine-tuned parameters, further quantitative analysis is performed. The activation energy of the oxygen exchange reaction and the dependence of oxygen non-stoichiometry on oxygen partial pressure are also calculated and verified using the literature results.

  20. Modeling of the oxygen reduction reaction for dense LSM thin films

    DOE PAGES

    Yang, Tao; Liu, Jian; Yu, Yang; ...

    2017-10-17

    In this study, the oxygen reduction reaction mechanism is investigated using numerical methods on a dense thin (La 1-xSr x) yMnO 3±δ film deposited on a YSZ substrate. This 1-D continuum model consists of defect chemistry and elementary oxygen reduction reaction steps coupled via reaction rates. The defect chemistry model contains eight species including cation vacancies on the A- and B-sites. The oxygen vacancy is calculated by solving species transportation equations in multiphysics simulations. Due to the simple geometry of a dense thin film, the oxygen reduction reaction was reduced to three elementary steps: surface adsorption and dissociation, incorporation on the surface, and charge transfer across the LSM/YSZ interface. The numerical simulations allow for calculation of the temperature- and oxygen partial pressure-dependent properties of LSM. The parameters of the model are calibrated with experimental impedance data for various oxygen partial pressures at different temperatures. The results indicate that surface adsorption and dissociation is the rate-determining step in the ORR of LSM thin films. With the fine-tuned parameters, further quantitative analysis is performed. The activation energy of the oxygen exchange reaction and the dependence of oxygen non-stoichiometry on oxygen partial pressure are also calculated and verified using the literature results.

  1. Basal area increment and growth efficiency as functions of canopy dynamics and stem mechanics

    Treesearch

    Thomas J. Dean

    2004-01-01

    Crown and canopy structure correlate with growth efficiency and also determine stem size and taper, as described by the uniform stress principle of stem formation. A regression model was derived from this principle that expresses basal area increment in terms of the amount and vertical distribution of leaf area and the change in these variables during a growth period. This...

  2. The effects of intensity on V̇O2 kinetics during incremental free swimming.

    PubMed

    de Jesus, Kelly; Sousa, Ana; de Jesus, Karla; Ribeiro, João; Machado, Leandro; Rodríguez, Ferran; Keskinen, Kari; Vilas-Boas, João Paulo; Fernandes, Ricardo J

    2015-09-01

    Swimming and training are carried out with wide variability in distances and intensities. However, oxygen uptake kinetics for the intensities seen in swimming has not been reported. The purpose of this study was to assess and compare the oxygen uptake kinetics throughout low-moderate to severe intensities during incremental swimming exercise. We hypothesized that the oxygen uptake kinetic parameters would be affected by swimming intensity. Twenty male trained swimmers completed an incremental protocol of seven 200-m crawl swims to exhaustion (0.05 m·s(-1) increments and 30-s intervals). Oxygen uptake was continuously measured by a portable gas analyzer connected to a respiratory snorkel and valve system. Oxygen uptake kinetics was assessed using a double exponential regression model that yielded both fast and slow components of the response of oxygen uptake to exercise. From low-moderate to severe swimming intensities changes occurred for the first and second oxygen uptake amplitudes (P ≤ 0.04), time constants (P = 0.01), and time delays (P ≤ 0.02). At the heavy and severe intensities, a notable oxygen uptake slow component (>255 mL·min(-1)) occurred in all swimmers. Oxygen uptake kinetics whilst swimming at different intensities offers relevant information regarding cardiorespiratory and metabolic stress that might be useful for appropriate performance diagnosis and training prescription.
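
    The double exponential regression model referred to can be written out as a baseline plus a fast primary component and a delayed slow component, VO2(t) = A0 + A1*(1 - exp(-(t - d1)/tau1)) + A2*(1 - exp(-(t - d2)/tau2)), each term active only past its time delay. The parameters below are illustrative, not the swimmers' fitted values:

```python
import math

def vo2(t, a0=800.0, a1=2000.0, tau1=15.0, d1=10.0,
        a2=300.0, tau2=60.0, d2=120.0):
    """Double-exponential on-kinetics (mL/min): baseline a0, fast primary
    component (amplitude a1, time constant tau1, delay d1) and delayed slow
    component (a2, tau2, d2). All parameter values are illustrative."""
    fast = a1 * (1 - math.exp(-(t - d1) / tau1)) if t > d1 else 0.0
    slow = a2 * (1 - math.exp(-(t - d2) / tau2)) if t > d2 else 0.0
    return a0 + fast + slow

# Early in exercise only the fast component has risen; late in exercise the
# slow component adds the extra oxygen cost (>255 mL/min in the study).
print(vo2(0), vo2(90), vo2(600))
```

    In practice the six free parameters are obtained by nonlinear regression of this expression against breath-by-breath oxygen uptake data.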

  3. 40 CFR 60.1595 - What must I include in the notifications of achievement of my increments of progress?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY... or Before August 30, 1999 Model Rule-Increments of Progress § 60.1595 What must I include in the...

  4. A "desperation-reaction" model of medical diffusion.

    PubMed Central

    Warner, K E

    1975-01-01

    Knowledge about the adoption and diffusion of innovations is briefly reviewed. A model is then proposed to explain how certain innovations, intended to address dire medical problems, might diffuse in a manner not previously reported, with extensive diffusion occurring during what would be a period of small-scale experimentation and limited adoption in the conventional innovation-diffusion environment. The model is illustrated with findings from a case study of the diffusion of drug therapies for four types of leukemia. Possible implications of "desperation-reaction" diffusion are suggested. PMID:1065622

  5. Testing the Relations Between Impulsivity-Related Traits, Suicidality, and Nonsuicidal Self-Injury: A Test of the Incremental Validity of the UPPS Model

    PubMed Central

    Lynam, Donald R.; Miller, Joshua D.; Miller, Drew J.; Bornovalova, Marina A.; Lejuez, C. W.

    2011-01-01

    Borderline personality disorder (BPD) has received significant attention as a predictor of suicidal behavior (SB) and nonsuicidal self-injury (NSSI). Despite significant promise, trait impulsivity has received less attention. Understanding the relations between impulsivity and SB and NSSI is confounded, unfortunately, by the heterogeneous nature of impulsivity. This study examined the relations among 4 personality pathways to impulsive behavior studied via the UPPS model of impulsivity and SB and NSSI in a residential sample of drug abusers (N = 76). In this study, we tested whether these 4 impulsivity-related traits (i.e., Negative Urgency, Sensation Seeking, Lack of Premeditation, and Lack of Perseverance) provide incremental validity in the statistical prediction of SB and NSSI above and beyond BPD; they do. We also tested whether BPD symptoms provide incremental validity in the prediction of SB and NSSI above and beyond these impulsivity-related traits; they do not. In addition to the main effects of Lack of Premeditation and Negative Urgency, we found evidence of a robust interaction between these 2 personality traits. The current results argue strongly for the consideration of these 2 impulsivity-related domains—alone and in interaction—when attempting to understand and predict SB and NSSI. PMID:21833346

  6. Computational Study of a Model System of Enzyme-Mediated [4+2] Cycloaddition Reaction

    PubMed Central

    2015-01-01

    A possible mechanistic pathway related to an enzyme-catalyzed [4+2] cycloaddition reaction was studied by theoretical calculations at density functional (B3LYP, O3LYP, M062X) and semiempirical levels (PM6-DH2, PM6) performed on a model system. The calculations were carried out for the key [4+2] cycloaddition step considering enzyme-catalyzed biosynthesis of Spinosyn A in a model reaction, where a reliable example of a biological Diels-Alder reaction was reported experimentally. In the present study it was demonstrated that the [4+2] cycloaddition reaction may benefit from moving along the energetically balanced reaction coordinate, which enabled the catalytic rate enhancement of the [4+2] cycloaddition pathway involving a single transition state. Modeling of such a system with coordination of three amino acids indicated a substantial decrease in activation energy of ~18.0 kcal/mol as compared to a non-catalytic transformation. PMID:25853669

  7. Word Decoding Development in Incremental Phonics Instruction in a Transparent Orthography

    ERIC Educational Resources Information Center

    Schaars, Moniek M.; Segers, Eliane; Verhoeven, Ludo

    2017-01-01

    The present longitudinal study aimed to investigate the development of word decoding skills during incremental phonics instruction in Dutch as a transparent orthography. A representative sample of 973 Dutch children in the first grade (M_age = 6;1, SD = 0;5) was exposed to incremental subsets of Dutch grapheme-phoneme correspondences…

  8. A Self-Organizing Incremental Neural Network based on local distribution learning.

    PubMed

    Xing, Youlu; Shi, Xiaofeng; Shen, Furao; Zhou, Ke; Zhao, Jinxi

    2016-12-01

    In this paper, we propose an unsupervised incremental learning neural network based on local distribution learning, which is called Local Distribution Self-Organizing Incremental Neural Network (LD-SOINN). The LD-SOINN combines the advantages of incremental learning and matrix learning. It can automatically discover suitable nodes to fit the learning data in an incremental way without a priori knowledge such as the structure of the network. The nodes of the network store rich local information regarding the learning data. The adaptive vigilance parameter guarantees that LD-SOINN is able to add new nodes for new knowledge automatically and that the number of nodes will not grow without limit. While the learning process continues, nodes that are close to each other and have similar principal components are merged to obtain a concise local representation, which we call a relaxation data representation. A denoising process based on density is designed to reduce the influence of noise. Experiments show that the LD-SOINN performs well on both artificial and real-world data. Copyright © 2016 Elsevier Ltd. All rights reserved.
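
    The vigilance-driven node-insertion step at the core of such incremental learners can be sketched in a few lines. This is a minimal hypothetical illustration, not the published LD-SOINN algorithm (which additionally learns local distributions, merges nodes, and denoises):

```python
import math

def nearest(nodes, x):
    """Index and distance of the prototype node closest to sample x."""
    dists = [math.dist(w, x) for w in nodes]
    i = min(range(len(nodes)), key=dists.__getitem__)
    return i, dists[i]

def incremental_fit(stream, vigilance=1.0, lr=0.1):
    """Grow a set of prototype nodes from a data stream.

    A sample farther than `vigilance` from every existing node counts as
    new knowledge and becomes a node; otherwise the winning node moves a
    fraction `lr` toward the sample, so the node count stays bounded.
    """
    nodes = []
    for x in stream:
        if not nodes:
            nodes.append(list(x))
            continue
        i, d = nearest(nodes, x)
        if d > vigilance:
            nodes.append(list(x))            # insert node for new knowledge
        else:
            w = nodes[i]                     # refine the winning node
            for k in range(len(w)):
                w[k] += lr * (x[k] - w[k])
    return nodes

# Two well-separated clusters yield exactly two prototype nodes.
data = [(0.0, 0.0), (0.1, -0.1), (5.0, 5.0), (5.1, 4.9), (0.05, 0.0)]
nodes = incremental_fit(data, vigilance=1.0)
print(len(nodes), "nodes")
```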

  9. Modelling biochemical reaction systems by stochastic differential equations with reflection.

    PubMed

    Niu, Yuanling; Burrage, Kevin; Chen, Luonan

    2016-05-07

    In this paper, we give a new framework for modelling and simulating biochemical reaction systems by stochastic differential equations with reflection, derived in a mathematical rather than a heuristic way. The model is computationally efficient compared with the discrete-state Markov chain approach, and it ensures that both analytic and numerical solutions remain in a biologically plausible region. Specifically, our model mathematically ensures that species numbers lie in the domain D, which is a physical constraint for biochemical reactions, in contrast to the previous models. The domain D is actually obtained according to the structure of the corresponding chemical Langevin equations, i.e., the boundary is inherent in the biochemical reaction system. A variant of the projection method was employed to solve the reflected stochastic differential equation model; it consists of three simple steps: the Euler-Maruyama method is applied to the equations first, then we check whether or not the point lies within the domain D, and if not we perform an orthogonal projection. It is found that the projection onto the closure D¯ is the solution to a convex quadratic programming problem. Thus, existing methods for the convex quadratic programming problem can be employed for the orthogonal projection map. Numerical tests on several important problems in biological systems confirmed the efficiency and accuracy of this approach. Copyright © 2016 Elsevier Ltd. All rights reserved.
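
    The three-step scheme above (Euler-Maruyama step, domain check, orthogonal projection) can be sketched for a one-dimensional toy model with D = [0, ∞), where projection onto the closure reduces to a clip. An illustrative sketch under those assumptions, not the paper's implementation:

```python
import random, math

def reflected_em(x0, drift, diffusion, dt=1e-3, steps=5000, lo=0.0, seed=1):
    """Euler-Maruyama with projection onto the domain D = [lo, inf).

    Each step: (1) take an unconstrained EM step, (2) check whether the
    point lies in D, (3) if not, orthogonally project back (for a
    half-line this is simply max(x, lo)).
    """
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        dw = rng.gauss(0.0, math.sqrt(dt))
        x = x + drift(x) * dt + diffusion(x) * dw   # (1) EM step
        if x < lo:                                  # (2) outside D?
            x = lo                                  # (3) project
    return x

# Toy birth-death Langevin model: production rate 2, degradation rate 1;
# the diffusion term mimics a chemical-Langevin noise amplitude.
x_end = reflected_em(
    x0=0.0,
    drift=lambda x: 2.0 - 1.0 * x,
    diffusion=lambda x: math.sqrt(max(2.0 + x, 0.0)),
)
print(x_end)  # non-negative by construction
```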

  10. The Stochastic Early Reaction, Inhibition, and late Action (SERIA) model for antisaccades

    PubMed Central

    2017-01-01

    The antisaccade task is a classic paradigm used to study the voluntary control of eye movements. It requires participants to suppress a reactive eye movement to a visual target and to concurrently initiate a saccade in the opposite direction. Although several models have been proposed to explain error rates and reaction times in this task, no formal model comparison has yet been performed. Here, we describe a Bayesian modeling approach to the antisaccade task that allows us to formally compare different models on the basis of their evidence. First, we provide a formal likelihood function of actions (pro- and antisaccades) and reaction times based on previously published models. Second, we introduce the Stochastic Early Reaction, Inhibition, and late Action model (SERIA), a novel model postulating two different mechanisms that interact in the antisaccade task: an early GO/NO-GO race decision process and a late GO/GO decision process. Third, we apply these models to a data set from an experiment with three mixed blocks of pro- and antisaccade trials. Bayesian model comparison demonstrates that the SERIA model explains the data better than competing models that do not incorporate a late decision process. Moreover, we show that the early decision process postulated by the SERIA model is, to a large extent, insensitive to the cue presented in a single trial. Finally, we use parameter estimates to demonstrate that changes in reaction time and error rate due to the probability of a trial type (pro- or antisaccade) are best explained by faster or slower inhibition and the probability of generating late voluntary prosaccades. PMID:28767650

  11. The environmental zero-point problem in evolutionary reaction norm modeling.

    PubMed

    Ergon, Rolf

    2018-04-01

    There is a potential problem in present quantitative genetics evolutionary modeling based on reaction norms. Such models are state-space models, where the multivariate breeder's equation in some form is used as the state equation that propagates the population state forward in time. These models use the implicit assumption of a constant reference environment, in many cases set to zero. This zero-point is often the environment a population is adapted to, that is, where the expected geometric mean fitness is maximized. Such environmental reference values follow from the state of the population system, and they are thus population properties. The environment the population is adapted to, is, in other words, an internal population property, independent of the external environment. It is only when the external environment coincides with the internal reference environment, or vice versa, that the population is adapted to the current environment. This is formally a result of state-space modeling theory, which is an important theoretical basis for evolutionary modeling. The potential zero-point problem is present in all types of reaction norm models, parametrized as well as function-valued, and the problem does not disappear when the reference environment is set to zero. As the environmental reference values are population characteristics, they ought to be modeled as such. Whether such characteristics are evolvable is an open question, but considering the complexity of evolutionary processes, such evolvability cannot be excluded without good arguments. As a straightforward solution, I propose to model the reference values as evolvable mean traits in their own right, in addition to other reaction norm traits. However, solutions based on an evolvable G matrix are also possible.

  12. Complexity of the heart rhythm after heart transplantation by entropy of transition network for RR-increments of RR time intervals between heartbeats.

    PubMed

    Makowiec, Danuta; Struzik, Zbigniew; Graff, Beata; Wdowczyk-Szulc, Joanna; Zarczynska-Buchnowiecka, Marta; Gruchala, Marcin; Rynkiewicz, Andrzej

    2013-01-01

    Network models have been used to capture, represent and analyse characteristics of living organisms and general properties of complex systems. The use of network representations in the characterization of time series complexity is a relatively new but quickly developing branch of time series analysis. In particular, beat-to-beat heart rate variability can be mapped out in a network of RR-increments, which is a directed and weighted graph with vertices representing RR-increments and the edges of which correspond to subsequent increments. We evaluate entropy measures selected from these network representations in records of healthy subjects and heart transplant patients, and provide an interpretation of the results.
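
    The construction described above (quantized RR-increments as vertices, subsequent increments as weighted edges) together with a Shannon entropy over the edge weights can be sketched as follows; the bin width and the toy RR series are illustrative assumptions:

```python
import math
from collections import Counter

def increment_network_entropy(rr, bin_ms=8):
    """Shannon entropy of the transition network of RR-increments.

    Vertices are RR-increments quantized into `bin_ms` bins; each pair of
    subsequent increments contributes a directed edge, and edge weights
    are transition counts. Entropy is taken over normalized weights.
    """
    inc = [b - a for a, b in zip(rr, rr[1:])]        # RR-increments
    sym = [round(d / bin_ms) for d in inc]           # quantized vertices
    edges = Counter(zip(sym, sym[1:]))               # weighted directed edges
    total = sum(edges.values())
    return -sum((w / total) * math.log2(w / total) for w in edges.values())

# A constant rhythm collapses to a single self-loop (zero entropy); a
# variable rhythm spreads weight over many edges (higher entropy).
flat = [800] * 50
varied = [800, 820, 790, 830, 780, 840, 800, 815, 795, 825] * 5
print(increment_network_entropy(flat), increment_network_entropy(varied))
```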

  13. A user-friendly tool for incremental haemodialysis prescription.

    PubMed

    Casino, Francesco Gaetano; Basile, Carlo

    2018-01-05

    There is a recently heightened interest in incremental haemodialysis (IHD), the main advantage of which could likely be a better preservation of the residual kidney function of the patients. The implementation of IHD, however, is hindered by many factors, among them, the mathematical complexity of its prescription. The aim of our study was to design a user-friendly tool for IHD prescription, consisting of only a few rows of a common spreadsheet. The keystone of our spreadsheet was the following fundamental concept: the dialysis dose to be prescribed in IHD depends only on the normalized urea clearance provided by the native kidneys (KRUn) of the patient for each frequency of treatment, according to the variable target model recently proposed by Casino and Basile (The variable target model: a paradigm shift in the incremental haemodialysis prescription. Nephrol Dial Transplant 2017; 32: 182-190). The first step was to put in sequence a series of equations in order to calculate, firstly, KRUn and, then, the key parameters to be prescribed for an adequate IHD; the second step was to compare KRUn values obtained with our spreadsheet with KRUn values obtainable with the gold standard Solute-solver (Daugirdas JT et al., Solute-solver: a web-based tool for modeling urea kinetics for a broad range of hemodialysis schedules in multiple patients. Am J Kidney Dis 2009; 54: 798-809) in a sample of 40 incident haemodialysis patients. Our spreadsheet provided excellent results. The differences with Solute-solver were clinically negligible. This was confirmed by the Bland-Altman plot built to analyse the agreement between KRUn values obtained with the two methods: the difference was 0.07 ± 0.05 mL/min/35 L. Our spreadsheet is a user-friendly tool able to provide clinically acceptable results in IHD prescription. Two immediate consequences could derive: (i) a larger dissemination of IHD might occur; and (ii) our spreadsheet could represent a useful tool for an ineludibly
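
    The Bland-Altman agreement check used above reduces to a mean difference (bias) and 95% limits of agreement. A sketch with hypothetical KRUn values, not the study's data:

```python
import statistics

def bland_altman(a, b):
    """Bias (mean difference) and 95% limits of agreement between two
    methods measuring the same quantity on the same subjects."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical KRUn values (mL/min/35 L) from a spreadsheet and a
# reference tool; these illustrate the shape of the check only.
spreadsheet = [2.10, 1.85, 3.02, 2.55, 1.40, 2.75, 2.20, 1.95]
reference   = [2.05, 1.80, 2.95, 2.50, 1.33, 2.70, 2.12, 1.90]
bias, (lo, hi) = bland_altman(spreadsheet, reference)
print(f"bias={bias:.3f}, 95% limits of agreement=({lo:.3f}, {hi:.3f})")
```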

  14. Acceleration of incremental-pressure-correction incompressible flow computations using a coarse-grid projection method

    NASA Astrophysics Data System (ADS)

    Kashefi, Ali; Staples, Anne

    2016-11-01

    Coarse grid projection (CGP) methodology is a novel multigrid method for systems involving decoupled nonlinear evolution equations and linear elliptic equations. The nonlinear equations are solved on a fine grid and the linear equations are solved on a corresponding coarsened grid. Mapping functions transfer data between the two grids. Here we propose a version of CGP for incompressible flow computations using incremental pressure correction methods, called IFEi-CGP (implicit-time-integration, finite-element, incremental coarse grid projection). Incremental pressure correction schemes solve Poisson's equation for an intermediate variable and not the pressure itself. This fact contributes to IFEi-CGP's efficiency in two ways. First, IFEi-CGP preserves the velocity field accuracy even for a high level of pressure field grid coarsening and thus significant speedup is achieved. Second, because incremental schemes reduce the errors that arise from boundaries with artificial homogeneous Neumann conditions, CGP generates undamped flows for simulations with velocity Dirichlet boundary conditions. Comparisons of the data accuracy and CPU times for the incremental-CGP versus non-incremental-CGP computations are presented.

  15. Catalytic ignition model in a monolithic reactor with in-depth reaction

    NASA Technical Reports Server (NTRS)

    Tien, Ta-Ching; Tien, James S.

    1990-01-01

    Two transient models have been developed to study the catalytic ignition in a monolithic catalytic reactor. The special feature in these models is the inclusion of thermal and species structures in the porous catalytic layer. There are many time scales involved in the catalytic ignition problem, and these two models are developed with different time scales. In the full transient model, the equations are non-dimensionalized by the shortest time scale (mass diffusion across the catalytic layer). It is therefore accurate but is computationally costly. In the energy-integral model, only the slowest process (solid heat-up) is taken as nonsteady. It is approximate but computationally efficient. In the computations performed, the catalyst is platinum and the reactants are rich mixtures of hydrogen and oxygen. One-step global chemical reaction rates are used for both gas-phase homogeneous reaction and catalytic heterogeneous reaction. The computed results reveal the transient ignition processes in detail, including the structure variation with time in the reactive catalytic layer. An ignition map using reactor length and catalyst loading is constructed. The comparison of computed results between the two transient models verifies the applicability of the energy-integral model when the time is greater than the second largest time scale of the system. It also suggests that a proper combined use of the two models can capture all the transient phenomena while minimizing the computational cost.

  16. Unmanned Maritime Systems Incremental Acquisition Approach

    DTIC Science & Technology

    2016-12-01

    We find that current UMS acquisitions are utilizing previous acquisition reforms, but could benefit from additional contractor peer competition and peer review. Additional cost and schedule benefits could result from contractor competition during build processes in each incremental process.

  17. Delineating pMDI model reactions with loblolly pine via solution-state NMR spectroscopy. Part 1, Catalyzed reactions with wood models and wood polymers

    Treesearch

    Daniel J. Yelle; John Ralph; Charles R. Frihart

    2011-01-01

    To better understand adhesive interactions with wood, reactions between model compounds of wood and a model compound of polymeric methylene diphenyl diisocyanate (pMDI) were characterized by solution-state NMR spectroscopy. For comparison, finely ground loblolly pine sapwood, milled-wood lignin and holocellulose from the same wood were isolated and derivatized with...

  18. Computational comparison of quantum-mechanical models for multistep direct reactions

    NASA Astrophysics Data System (ADS)

    Koning, A. J.; Akkermans, J. M.

    1993-02-01

    We have carried out a computational comparison of all existing quantum-mechanical models for multistep direct (MSD) reactions. The various MSD models, including the so-called Feshbach-Kerman-Koonin, Tamura-Udagawa-Lenske and Nishioka-Yoshida-Weidenmüller models, have been implemented in a single computer system. All model calculations thus use the same set of parameters and the same numerical techniques; only one adjustable parameter is employed. The computational results have been compared with experimental energy spectra and angular distributions for several nuclear reactions, namely, 90Zr(p,p') at 80 MeV, 209Bi(p,p') at 62 MeV, and 93Nb(n,n') at 25.7 MeV. In addition, the results have been compared with the Kalbach systematics and with semiclassical exciton model calculations. All quantum MSD models provide a good fit to the experimental data. In addition, they reproduce the systematics very well and are clearly better than semiclassical model calculations. We furthermore show that the calculated predictions do not differ very strongly between the various quantum MSD models, leading to the conclusion that the simplest MSD model (the Feshbach-Kerman-Koonin model) is adequate for the analysis of experimental data.

  19. A comprehensive model to determine the effects of temperature and species fluctuations on reaction rates in turbulent reaction flows

    NASA Technical Reports Server (NTRS)

    Magnotti, F.; Diskin, G.; Matulaitis, J.; Chinitz, W.

    1984-01-01

    The use of silane (SiH4) as an effective ignitor and flame stabilizing pilot fuel is well documented. A reliable chemical kinetic mechanism for prediction of its behavior at the conditions encountered in the combustor of a SCRAMJET engine was calculated. The effects of hydrogen addition on hydrocarbon ignition and flame stabilization as a means for reduction of lengthy ignition delays and reaction times were studied. The ranges of applicability of chemical kinetic models of hydrogen-air combustors were also investigated. The CHARNAL computer code was applied to the turbulent reaction rate modeling.

  20. Intervertebral reaction force prediction using an enhanced assembly of OpenSim models.

    PubMed

    Senteler, Marco; Weisse, Bernhard; Rothenfluh, Dominique A; Snedeker, Jess G

    2016-01-01

    OpenSim offers a valuable approach to investigating otherwise difficult to assess yet important biomechanical parameters such as joint reaction forces. Although the range of available models in the public repository is continually increasing, there currently exists no OpenSim model for the computation of intervertebral joint reactions during flexion and lifting tasks. The current work combines and improves elements of existing models to develop an enhanced model of the upper body and lumbar spine. Models of the upper body with extremities, neck and head were combined with an improved version of a lumbar spine from the model repository. Translational motion was enabled for each lumbar vertebra with six controllable degrees of freedom. Motion segment stiffness was implemented at lumbar levels and mass properties were assigned throughout the model. Moreover, body coordinate frames of the spine were modified to allow straightforward variation of sagittal alignment and to simplify interpretation of results. Evaluation of model predictions for level L1-L2, L3-L4 and L4-L5 in various postures of forward flexion and moderate lifting (8 kg) revealed an agreement within 10% to experimental studies and model-based computational analyses. However, in an extended posture or during lifting of heavier loads (20 kg), computed joint reactions differed substantially from reported in vivo measures using instrumented implants. We conclude that agreement between the model and available experimental data was good in view of limitations of both the model and the validation datasets. The presented model is useful in that it permits computation of realistic lumbar spine joint reaction forces during flexion and moderate lifting tasks. The model and corresponding documentation are now available in the online OpenSim repository.

  1. Comparison of lifetime incremental cost:utility ratios of surgery relative to failed medical management for the treatment of hip, knee and spine osteoarthritis modelled using 2-year postsurgical values

    PubMed Central

    Tso, Peggy; Walker, Kevin; Mahomed, Nizar; Coyte, Peter C.; Rampersaud, Y. Raja

    2012-01-01

    Background Demand for surgery to treat osteoarthritis (OA) of the hip, knee and spine has risen dramatically. Whereas total hip (THA) and total knee arthroplasty (TKA) have been widely accepted as cost-effective, spine surgeries (decompression, decompression with fusion) to treat degenerative conditions remain underfunded compared with other surgeries. Methods An incremental cost–utility analysis comparing decompression and decompression with fusion to THA and TKA, from the perspective of the provincial health insurance system, was based on an observational matched-cohort study of prospectively collected outcomes and retrospectively collected costs. Patient outcomes were measured using short-form (SF)-36 surveys over a 2-year follow-up period. Utility was modelled over the lifetime, and quality-adjusted life years (QALYs) were determined. We calculated the incremental cost per QALY gained by estimating mean incremental lifetime costs and QALYs of surgery compared with medical management of each diagnosis group after discounting costs and QALYs at 3%. Sensitivity analyses were also conducted. Results The lifetime incremental cost:utility ratios (ICURs) discounted at 3% were $5321 per QALY for THA, $11 275 per QALY for TKA, $2307 per QALY for spinal decompression and $7153 per QALY for spinal decompression with fusion. The sensitivity analyses did not alter the ranking of the lifetime ICURs. Conclusion In appropriately selected patients with leg-dominant symptoms secondary to focal lumbar spinal stenosis who have failed medical management, the lifetime ICUR for surgical treatment of lumbar spinal stenosis is similar to those of THA and TKA for the treatment of OA. PMID:22630061
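
    The core computation, an incremental cost:utility ratio with costs and QALYs discounted at 3%, can be sketched as follows; the cost and utility streams are hypothetical illustrations, not the study's figures:

```python
def discounted(values, rate=0.03):
    """Present value of a yearly stream, discounted at `rate`."""
    return sum(v / (1 + rate) ** t for t, v in enumerate(values))

def icur(cost_surgery, cost_medical, qaly_surgery, qaly_medical, rate=0.03):
    """Incremental cost:utility ratio = (discounted ΔC) / (discounted ΔQALY)."""
    d_cost = discounted(cost_surgery, rate) - discounted(cost_medical, rate)
    d_qaly = discounted(qaly_surgery, rate) - discounted(qaly_medical, rate)
    return d_cost / d_qaly

# Hypothetical 3-year illustration: surgery costs more up front but
# yields higher utility each year than continued medical management.
ratio = icur(
    cost_surgery=[15000, 500, 500], cost_medical=[2000, 2000, 2000],
    qaly_surgery=[0.75, 0.80, 0.80], qaly_medical=[0.60, 0.58, 0.56],
)
print(f"ICUR = ${ratio:,.0f} per QALY")
```

The study applies the same arithmetic over lifetime horizons, with utilities modelled from 2-year SF-36 outcomes.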

  2. Finite element modeling of contaminant transport in soils including the effect of chemical reactions.

    PubMed

    Javadi, A A; Al-Najjar, M M

    2007-05-17

    The movement of chemicals through soils to the groundwater is a major cause of degradation of water resources. In many cases, serious human and stock health implications are associated with this form of pollution. Recent studies have shown that the current models and methods are not able to adequately describe the leaching of nutrients through soils, often underestimating the risk of groundwater contamination by surface-applied chemicals, and overestimating the concentration of resident solutes. Furthermore, the effect of chemical reactions on the fate and transport of contaminants is not included in many of the existing numerical models for contaminant transport. In this paper a numerical model is presented for simulation of the flow of water and air and contaminant transport through unsaturated soils with the main focus being on the effects of chemical reactions. The governing equations of miscible contaminant transport including advection, dispersion-diffusion and adsorption effects together with the effect of chemical reactions are presented. The mathematical framework and the numerical implementation of the model are described in detail. The model is validated by application to a number of test cases from the literature and is then applied to the simulation of a physical model test involving transport of contaminants in a block of soil with particular reference to the effects of chemical reactions. Comparison of the results of the numerical model with the experimental results shows that the model is capable of predicting the effects of chemical reactions with very high accuracy. The importance of consideration of the effects of chemical reactions is highlighted.
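
    The governing balance named above (advection, dispersion-diffusion, and adsorption) can be sketched in one dimension with an explicit upwind finite-difference scheme and a linear retardation factor R; this is a deliberate simplification of the paper's finite element model:

```python
def transport_1d(c0, v=1.0, D=0.5, R=2.0, dx=1.0, dt=0.1, steps=200):
    """Explicit finite-difference scheme for 1-D advection-dispersion
    with linear-adsorption retardation:  R dC/dt = -v dC/dx + D d2C/dx2.
    Upwind advection; zero-concentration boundaries.
    """
    c = list(c0)
    n = len(c)
    for _ in range(steps):
        new = c[:]
        for i in range(1, n - 1):
            adv = -v * (c[i] - c[i - 1]) / dx
            disp = D * (c[i + 1] - 2 * c[i] + c[i - 1]) / dx ** 2
            new[i] = c[i] + dt * (adv + disp) / R
        new[0] = new[-1] = 0.0
        c = new
    return c

# A pulse injected near the inlet moves downstream and spreads; with
# R = 2 its centre travels at half the pore-water velocity v.
c0 = [0.0] * 60
c0[5] = 1.0
c = transport_1d(c0)
peak = max(range(len(c)), key=c.__getitem__)
print("peak cell:", peak)
```

Retardation is why ignoring adsorption overestimates resident concentrations at the front: the reactive solute lags the water by the factor R.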

  3. Quantum chemical modeling of enzymatic reactions: the case of 4-oxalocrotonate tautomerase.

    PubMed

    Sevastik, Robin; Himo, Fahmi

    2007-12-01

    The reaction mechanism of 4-oxalocrotonate tautomerase (4-OT) is studied using the density functional theory method B3LYP. This enzyme catalyzes the isomerisation of unconjugated alpha-keto acids to their conjugated isomers. Two different quantum chemical models of the active site are devised and the potential energy curves for the reaction are computed. The calculations support the proposed reaction mechanism in which Pro-1 acts as a base to shuttle a proton from the C3 to the C5 position of the substrate. The first step (proton transfer from C3 to proline) is shown to be the rate-limiting step. The energy of the charge-separated intermediate (protonated proline-deprotonated substrate) is calculated to be quite low, in accordance with measured pKa values. The results of the two models are used to evaluate the methodology employed in modeling enzyme active sites using quantum chemical cluster models.

  4. Common Aviation Command and Control System Increment 1 (CAC2S Inc 1)

    DTIC Science & Technology

    2016-03-01

    Common Aviation Command and Control System Increment 1 (CAC2S Inc 1). DoD Component: Navy, United States Marine Corps. …facilities for planning and execution of Marine Aviation missions within the Marine Air Ground Task Force (MAGTF). CAC2S Increment I will eliminate…approved by ASN (RDA), the MDA, in a Program Decision Memorandum (PDM), "CAC2S Increment I," May 05, 2009. As the result of the PDM, the independent

  5. Examining the incremental and interactive effects of boldness with meanness and disinhibition within the triarchic model of psychopathy.

    PubMed

    Gatner, Dylan T; Douglas, Kevin S; Hart, Stephen D

    2016-07-01

    The triarchic model of psychopathy (Patrick, Fowles, & Krueger, 2009) comprises 3 phenotypic domains: Meanness, Disinhibition, and Boldness. Ongoing controversy surrounds the relevance of Boldness in the conceptualization and assessment of psychopathy. In the current study, undergraduate students (N = 439) completed the Triarchic Psychopathy Measure (Patrick, 2010) to examine the association between Boldness and a host of theoretically relevant external criteria. Boldness was generally unrelated to either prosocial or harmful criteria. Boldness rarely provided incremental value above or interacted with Meanness and Disinhibition with respect to external criteria. Curvilinear effects of Boldness rarely emerged. The findings suggest that Boldness might not be a central construct in the definition of psychopathic personality disorder. Implications for the 5th edition of the Diagnostic and Statistical Manual of Mental Disorders (American Psychiatric Association, 2013) psychopathic specifier are discussed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  6. Global Combat Support System - Army Increment 2 (GCSS-A Inc 2)

    DTIC Science & Technology

    2016-03-01

    2016 Major Automated Information System Annual Report. Global Combat Support System - Army Increment 2 (GCSS-A Inc 2). DoD Component: Army.

  7. Volatilities, traded volumes, and the hypothesis of price increments in derivative securities

    NASA Astrophysics Data System (ADS)

    Lim, Gyuchang; Kim, SooYong; Scalas, Enrico; Kim, Kyungsik

    2007-08-01

    A detrended fluctuation analysis (DFA) is applied to the statistics of Korean treasury bond (KTB) futures, from which the logarithmic increments, volatilities, and traded volumes are estimated over a specific time lag. In this study, the logarithmic increment of futures prices shows no long-memory property, while the volatility and the traded volume do exhibit long memory. To analyze whether the volatility clustering is due to an inherent higher-order correlation not detected by the direct application of the DFA to the logarithmic increments of KTB futures, it is important to shuffle the original tick data of futures prices and to generate a geometric Brownian random walk with the same mean and standard deviation. A comparison of the three tick data sets shows that the higher-order correlation inherent in the logarithmic increments leads to volatility clustering. In particular, the result of the DFA on volatilities and traded volumes supports the hypothesis of price changes.
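
    The DFA procedure itself is standard: integrate the mean-centered series, detrend it within windows at each scale, and fit the log-log slope of the fluctuation function. A sketch applied to white noise, for which the exponent should sit near 0.5 (no long memory), consistent with the finding for the logarithmic increments:

```python
import numpy as np

def dfa(x, scales=(4, 8, 16, 32, 64)):
    """Detrended fluctuation analysis: scaling exponent alpha from a
    log-log fit of the fluctuation function F(s) against scale s."""
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())                    # integrated series
    F = []
    for s in scales:
        n_win = len(profile) // s
        sq = []
        for w in range(n_win):
            seg = profile[w * s:(w + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear detrend
            sq.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(sq)))
    return np.polyfit(np.log(scales), np.log(F), 1)[0]

rng = np.random.default_rng(42)
alpha = dfa(rng.normal(size=4096))
print(f"alpha = {alpha:.2f}")  # ~0.5 for uncorrelated noise
```

Long-memory series such as the volatilities and traded volumes described above would instead yield alpha noticeably above 0.5.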

  8. Springback effects during single point incremental forming: Optimization of the tool path

    NASA Astrophysics Data System (ADS)

    Giraud-Moreau, Laurence; Belchior, Jérémy; Lafon, Pascal; Lotoing, Lionel; Cherouat, Abel; Courtielle, Eric; Guines, Dominique; Maurine, Patrick

    2018-05-01

    Incremental sheet forming is an emerging process to manufacture sheet metal parts. This process is more flexible than conventional processes and well suited for small batch production or prototyping. During the process, the sheet metal blank is clamped by a blank-holder and a small-size smooth-end hemispherical tool moves along a user-specified path to deform the sheet incrementally. Classical three-axis CNC milling machines, dedicated structures or serial robots can be used to perform the forming operation. Whatever the considered machine, large deviations between the theoretical shape and the real shape can be observed after unclamping the part. These deviations are due to both the lack of stiffness of the machine and residual stresses in the part at the end of the forming stage. In this paper, an optimization strategy of the tool path is proposed in order to minimize the elastic springback induced by residual stresses after unclamping. A finite element model of the SPIF process allowing accurate shape prediction of the formed part is defined. This model, based on appropriate assumptions, leads to calculation times which remain compatible with an optimization procedure. The proposed optimization method is based on an iterative correction of the tool path. The efficiency of the method is shown by an improvement of the final shape.
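
    The iterative tool-path correction can be sketched with a toy springback law standing in for the finite element simulation; the fixed springback fraction below is a hypothetical stand-in, not the paper's model:

```python
def simulate(path, k=0.1):
    """Toy stand-in for the FE simulation: each formed depth springs
    back by a fraction k of the commanded depth after unclamping."""
    return [(1 - k) * z for z in path]

def optimize_path(target, iters=10):
    """Iterative tool-path correction: over-form by the observed
    deviation until the unclamped shape matches the target."""
    path = list(target)
    for _ in range(iters):
        formed = simulate(path)
        path = [p + (t - f) for p, t, f in zip(path, target, formed)]
    return path

target = [0.0, 2.0, 5.0, 8.0, 5.0, 2.0, 0.0]   # desired depths (mm)
path = optimize_path(target)
formed = simulate(path)
err = max(abs(t - f) for t, f in zip(target, formed))
print(f"max deviation after correction: {err:.2e} mm")
```

With a linear springback law the loop converges geometrically toward the over-formed path target/(1 - k); in practice the FE model replaces `simulate` and each iteration is far more expensive, which is why the paper emphasizes keeping simulation times compatible with optimization.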

  9. A Parallel and Incremental Approach for Data-Intensive Learning of Bayesian Networks.

    PubMed

    Yue, Kun; Fang, Qiyu; Wang, Xiaoling; Li, Jin; Liu, Weiyi

    2015-12-01

    Bayesian network (BN) has been adopted as the underlying model for representing and inferring uncertain knowledge. As the basis of realistic applications centered on probabilistic inferences, learning a BN from data is a critical subject of machine learning, artificial intelligence, and big data paradigms. Currently, it is necessary to extend the classical methods for learning BNs with respect to data-intensive computing or in cloud environments. In this paper, we propose a parallel and incremental approach for data-intensive learning of BNs from massive, distributed, and dynamically changing data by extending the classical scoring and search algorithm and using MapReduce. First, we adopt the minimum description length as the scoring metric and give the two-pass MapReduce-based algorithms for computing the required marginal probabilities and scoring the candidate graphical model from sample data. Then, we give the corresponding strategy for extending the classical hill-climbing algorithm to obtain the optimal structure, as well as that for storing a BN by pairs. Further, in view of the dynamic characteristics of the changing data, we give the concept of influence degree to measure the coincidence of the current BN with new data, and then propose the corresponding two-pass MapReduce-based algorithms for BNs incremental learning. Experimental results show the efficiency, scalability, and effectiveness of our methods.

  10. Reaction Time Correlations during Eye–Hand Coordination: Behavior and Modeling

    PubMed Central

    Dean, Heather L.; Martí, Daniel; Tsui, Eva; Rinzel, John; Pesaran, Bijan

    2011-01-01

    During coordinated eye–hand movements, saccade reaction times (SRTs) and reach reaction times (RRTs) are correlated in humans and monkeys. Reaction times (RTs) measure the degree of movement preparation and can correlate with movement speed and accuracy. However, RTs can also reflect effector-nonspecific influences, such as motivation and arousal. We use a combination of behavioral psychophysics and computational modeling to identify plausible mechanisms for correlations in SRTs and RRTs. To disambiguate nonspecific mechanisms from mechanisms specific to movement coordination, we introduce a dual-task paradigm in which a reach and a saccade are cued with a stimulus onset asynchrony (SOA). We then develop several variants of integrate-to-threshold models of RT, which postulate that responses are initiated when the neural activity encoding effector-specific movement preparation reaches a threshold. The integrator models formalize hypotheses about RT correlations and make predictions for how each RT should vary with SOA. To test these hypotheses, we trained three monkeys to perform the eye–hand SOA task and analyzed their SRTs and RRTs. In all three subjects, RT correlations decreased with increasing SOA duration. Additionally, mean SRT decreased with decreasing SOA, revealing facilitation of saccades with simultaneous reaches, as predicted by the model. These results are not consistent with the predictions of the models with common modulation or common input but are compatible with the predictions of a model with mutual excitation between two effector-specific integrators. We propose that RT correlations are not simply attributable to motivation and arousal and are a signature of coordination. PMID:21325507
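    The mutual-excitation account lends itself to a small simulation. The sketch below is a toy discrete-time version of two coupled integrate-to-threshold units, not the authors' fitted model; all parameter values and names are invented for illustration.

```python
import math
import random

def rt_pair(soa, rng, drift=0.05, noise=0.15, coupling=0.02, thresh=1.0):
    """One trial of two mutually excitatory integrate-to-threshold units.

    The saccade unit starts accumulating at t=0, the reach unit at t=soa
    (in time steps). Each step adds drift + Gaussian noise + a small
    excitatory input proportional to the other unit's current level; a
    unit responds when it crosses `thresh`. Returns (SRT, RRT) in steps.
    """
    level = [0.0, 0.0]
    start = [0, soa]
    rt = [None, None]
    t = 0
    while None in rt and t < 100000:
        prev = level[:]
        for i in (0, 1):
            if t >= start[i] and rt[i] is None:
                level[i] += drift + noise * rng.gauss(0, 1) + coupling * max(prev[1 - i], 0.0)
                if level[i] >= thresh:
                    rt[i] = t - start[i]
        t += 1
    return rt[0], rt[1]

def rt_correlation(soa, trials=200, seed=1):
    """Pearson correlation of SRT and RRT across simulated trials."""
    rng = random.Random(seed)
    pairs = [rt_pair(soa, rng) for _ in range(trials)]
    xs, ys = [p[0] for p in pairs], [p[1] for p in pairs]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = math.sqrt(sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys))
    return cov / var
```

    Comparing `rt_correlation(0)` against a long SOA should qualitatively reproduce the pattern reported above: coupling during overlapping accumulation yields correlated RTs, and the correlation shrinks as the SOA grows.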

  11. Multi-Reanalysis Comparison of Variability in Analysis Increment of Column-Integrated Water Vapor Associated with Madden-Julian Oscillation

    NASA Astrophysics Data System (ADS)

    Yokoi, S.

    2014-12-01

    This study conducts a comparison of three reanalysis products (JRA-55, JRA-25, and ERA-Interim) in representation of Madden-Julian Oscillation (MJO), focusing on column-integrated water vapor (CWV), which is considered an essential variable for discussing MJO dynamics. Besides the analysis fields of CWV, which exhibit spatio-temporal distributions that are quite similar to satellite observations, CWV tendency simulated by forecast models and analysis increment calculated by data assimilation are examined. For JRA-55, it is revealed that, while its forecast model is able to simulate eastward propagation of the CWV anomaly, it tends to weaken the amplitude, and the data assimilation process sustains the amplitude. The multi-reanalysis comparison of the analysis increment further reveals that this weakening bias is probably caused by excessively weak cloud-radiative feedback represented by the model. This bias in the feedback strength makes anomalous moisture supply by the vertical advection term in the CWV budget equation too insensitive to the precipitation anomaly, resulting in reduction of the amplitude of the CWV anomaly. ERA-Interim has a nearly opposite feature: the forecast model represents excessively strong feedback and unrealistically strengthens the amplitude, while the data assimilation weakens it. These results imply the necessity of accurate representation of the cloud-radiative feedback strength for a short-term MJO forecast, and may be evidence to support the argument that this feedback is essential for the existence of MJO. Furthermore, this study demonstrates that the multi-reanalysis comparison of the analysis increment will provide useful information for identifying model biases and, potentially, for estimating parameters that are difficult to estimate solely from observation data, such as gross moist stability.

  12. Reaction rates for mesoscopic reaction-diffusion kinetics

    DOE PAGES

    Hellander, Stefan; Hellander, Andreas; Petzold, Linda

    2015-02-23

    The mesoscopic reaction-diffusion master equation (RDME) is a popular modeling framework frequently applied to stochastic reaction-diffusion kinetics in systems biology. The RDME is derived from assumptions about the underlying physical properties of the system, and it may produce unphysical results for models where those assumptions fail. In that case, other more comprehensive models are better suited, such as hard-sphere Brownian dynamics (BD). Although the RDME is a model in its own right, and not inferred from any specific microscale model, it proves useful to attempt to approximate a microscale model by a specific choice of mesoscopic reaction rates. In this paper we derive mesoscopic scale-dependent reaction rates by matching certain statistics of the RDME solution to statistics of the solution of a widely used microscopic BD model: the Smoluchowski model with a Robin boundary condition at the reaction radius of two molecules. We also establish fundamental limits on the range of mesh resolutions for which this approach yields accurate results and show both theoretically and in numerical examples that as we approach the lower fundamental limit, the mesoscopic dynamics approach the microscopic dynamics. Finally, we show that for mesh sizes below the fundamental lower limit, results are less accurate. Thus, the lower limit determines the mesh size for which we obtain the most accurate results.

  13. Reaction rates for mesoscopic reaction-diffusion kinetics

    PubMed Central

    Hellander, Stefan; Hellander, Andreas; Petzold, Linda

    2016-01-01

    The mesoscopic reaction-diffusion master equation (RDME) is a popular modeling framework frequently applied to stochastic reaction-diffusion kinetics in systems biology. The RDME is derived from assumptions about the underlying physical properties of the system, and it may produce unphysical results for models where those assumptions fail. In that case, other more comprehensive models are better suited, such as hard-sphere Brownian dynamics (BD). Although the RDME is a model in its own right, and not inferred from any specific microscale model, it proves useful to attempt to approximate a microscale model by a specific choice of mesoscopic reaction rates. In this paper we derive mesoscopic scale-dependent reaction rates by matching certain statistics of the RDME solution to statistics of the solution of a widely used microscopic BD model: the Smoluchowski model with a Robin boundary condition at the reaction radius of two molecules. We also establish fundamental limits on the range of mesh resolutions for which this approach yields accurate results and show both theoretically and in numerical examples that as we approach the lower fundamental limit, the mesoscopic dynamics approach the microscopic dynamics. We show that for mesh sizes below the fundamental lower limit, results are less accurate. Thus, the lower limit determines the mesh size for which we obtain the most accurate results. PMID:25768640

  14. Toward a reaction rate model of condensed-phase RDX decomposition under high temperatures

    NASA Astrophysics Data System (ADS)

    Schweigert, Igor

    2015-06-01

    Shock ignition of energetic molecular solids is driven by microstructural heterogeneities, at which even moderate stresses can result in sufficiently high temperatures to initiate material decomposition and chemical energy release. Mesoscale modeling of these "hot spots" requires a reaction rate model that describes the energy release with a sub-microsecond resolution and under a wide range of temperatures. No such model is available even for well-studied energetic materials such as RDX. In this presentation, I will describe an ongoing effort to develop a reaction rate model of condensed-phase RDX decomposition under high temperatures using first-principles molecular dynamics, transition-state theory, and reaction network analysis. This work was supported by the Naval Research Laboratory, by the Office of Naval Research, and by the DoD High Performance Computing Modernization Program Software Application Institute for Multiscale Reactive Modeling of Insensitive Munitions.

  15. Software designs of image processing tasks with incremental refinement of computation.

    PubMed

    Anastasia, Davide; Andreopoulos, Yiannis

    2010-08-01

    Software realizations of computationally-demanding image processing tasks (e.g., image transforms and convolution) do not currently provide graceful degradation when their clock-cycle budgets are reduced, e.g., when delay deadlines are imposed in a multitasking environment to meet throughput requirements. This is an important obstacle in the quest for full utilization of modern programmable platforms' capabilities, since worst-case considerations must be in place for reasonable quality of results. In this paper, we propose (and make available online) platform-independent software designs performing bitplane-based computation combined with an incremental packing framework in order to realize block transforms, 2-D convolution and frame-by-frame block matching. The proposed framework realizes incremental computation: progressive processing of input-source increments improves the output quality monotonically. Comparisons with the equivalent nonincremental software realization of each algorithm reveal that, for the same precision of the result, the proposed approach can lead to comparable or faster execution, while it can be arbitrarily terminated and provide the result up to the computed precision. Application examples with region-of-interest based incremental computation, task scheduling per frame, and energy-distortion scalability verify that our proposal provides significant performance scalability with graceful degradation.
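    The bitplane idea behind this incremental-refinement framework can be illustrated with a 1-D convolution: process the input's most significant bitplanes first, so that stopping early still returns a usable coarse result. This is an illustrative sketch, not the authors' released software.

```python
def bitplanes(signal, bits):
    """Split a non-negative integer signal into bitplanes, MSB first."""
    return [[(x >> b) & 1 for x in signal] for b in range(bits - 1, -1, -1)]

def incremental_convolve(signal, kernel, bits):
    """Convolution with incremental refinement.

    Accumulates per-bitplane partial results MSB-first and yields the
    running approximation after each plane, so the computation can be
    cut short for a coarser but usable output. The final yield equals
    the exact full-precision convolution.
    """
    out_len = len(signal) + len(kernel) - 1
    acc = [0] * out_len
    for b, plane in zip(range(bits - 1, -1, -1), bitplanes(signal, bits)):
        for i, bit in enumerate(plane):
            if bit:  # only set bits contribute at this plane
                for j, k in enumerate(kernel):
                    acc[i + j] += k << b
        yield list(acc)
```

    Terminating the generator after a few planes corresponds to the paper's graceful degradation under a reduced cycle budget: the result is exact in the bits processed so far.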

  16. Quantifying energy intake in Pacific bluefin tuna (Thunnus orientalis) using the heat increment of feeding.

    PubMed

    Whitlock, R E; Walli, A; Cermeño, P; Rodriguez, L E; Farwell, C; Block, B A

    2013-11-01

    Using implanted archival tags, we examined the effects of meal caloric value, food type (sardine or squid) and ambient temperature on the magnitude and duration of the heat increment of feeding in three captive juvenile Pacific bluefin tuna. The objective of our study was to develop a model that can be used to estimate energy intake in wild fish of similar body mass. Both the magnitude and duration of the heat increment of feeding (measured by visceral warming) showed a strong positive correlation with the caloric value of the ingested meal. Controlling for meal caloric value, the extent of visceral warming was significantly greater at lower ambient temperature. The extent of visceral warming was also significantly higher for squid meals compared with sardine meals. By using a hierarchical Bayesian model to analyze our data and treating individuals as random effects, we demonstrate how increases in visceral temperature can be used to estimate the energy intake of wild Pacific bluefin tuna of similar body mass to the individuals used in our study.

  17. Simple reaction time to the onset of time-varying sounds.

    PubMed

    Schlittenlacher, Josef; Ellermeier, Wolfgang

    2015-10-01

    Although auditory simple reaction time (RT) is usually defined as the time elapsing between the onset of a stimulus and a recorded reaction, a sound cannot be specified by a single point in time. Therefore, the present work investigates how the period of time immediately after onset affects RT. By varying the stimulus duration between 10 and 500 msec, this critical duration was determined to fall between 32 and 40 msec for a 1-kHz pure tone at 70 dB SPL. In a second experiment, the role of the buildup was further investigated by varying the rise time and its shape. The increment in RT for extending the rise time by a factor of ten was about 7 to 8 msec. There was no statistically significant difference in RT between a Gaussian and a linear rise shape. A third experiment varied the modulation frequency and point of onset of amplitude-modulated tones, producing onsets at different initial levels with differently rapid increases or decreases immediately afterwards. The results of all three experiments were explained very well by a straightforward extension of the parallel grains model (Miller and Ulrich, Cognitive Psychology, 46, 101-151, 2003), a probabilistic race model employing many parallel channels. The extension of the model to time-varying sounds made the activation of such a grain depend on intensity as a function of time rather than on a constant level. A second approach based on mechanisms known from loudness produced less accurate predictions.

  18. Estimating reaction rate coefficients within a travel-time modeling framework.

    PubMed

    Gong, R; Lu, C; Wu, W-M; Cheng, H; Gu, B; Watson, D; Jardine, P M; Brooks, S C; Criddle, C S; Kitanidis, P K; Luo, J

    2011-01-01

    A generalized, efficient, and practical approach based on the travel-time modeling framework is developed to estimate in situ reaction rate coefficients for groundwater remediation in heterogeneous aquifers. The required information for this approach can be obtained by conducting tracer tests with injection of a mixture of conservative and reactive tracers and measurements of both breakthrough curves (BTCs). The conservative BTC is used to infer the travel-time distribution from the injection point to the observation point. For advection-dominant reactive transport with well-mixed reactive species and a constant travel-time distribution, the reactive BTC is obtained by integrating the solutions to advective-reactive transport over the entire travel-time distribution, and then is used in optimization to determine the in situ reaction rate coefficients. By directly working on the conservative and reactive BTCs, this approach avoids costly aquifer characterization and improves the estimation for transport in heterogeneous aquifers which may not be sufficiently described by traditional mechanistic transport models with constant transport parameters. Simplified schemes are proposed for reactive transport with zero-, first-, nth-order, and Michaelis-Menten reactions. The proposed approach is validated by a reactive transport case in a two-dimensional synthetic heterogeneous aquifer and a field-scale bioremediation experiment conducted at Oak Ridge, Tennessee. The field application indicates that ethanol degradation for U(VI)-bioremediation is better approximated by zero-order reaction kinetics than first-order reaction kinetics. Copyright © 2010 The Author(s). Journal compilation © 2010 National Ground Water Association.
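    For the first-order case, the approach above reduces to attenuating each travel-time arrival by exp(-kt) and searching for the rate coefficient that best reproduces the observed reactive BTC. The sketch below uses a plain grid search; variable names and the toy data are invented for illustration.

```python
import math

def reactive_btc(conservative_btc, k):
    """First-order-decay reactive BTC predicted from a conservative BTC.

    Under the travel-time framework, mass arriving at time t has resided
    t units in the aquifer, so its concentration is attenuated by exp(-k*t).
    conservative_btc: list of (time, concentration) pairs.
    """
    return [(t, c * math.exp(-k * t)) for t, c in conservative_btc]

def fit_rate(conservative_btc, observed_reactive, k_grid):
    """Pick the rate coefficient whose prediction best matches the observed
    reactive BTC (simple grid search over sum-of-squares misfit)."""
    def sse(k):
        pred = reactive_btc(conservative_btc, k)
        return sum((cp - co) ** 2
                   for (_, cp), (_, co) in zip(pred, observed_reactive))
    return min(k_grid, key=sse)
```

    The same template extends to the paper's other schemes (zero-order, nth-order, Michaelis-Menten) by swapping the attenuation factor for the corresponding solution of the advective-reactive transport equation along a travel time.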

  19. Estimating Reaction Rate Coefficients Within a Travel-Time Modeling Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gong, R; Lu, C; Luo, Jian

    A generalized, efficient, and practical approach based on the travel-time modeling framework is developed to estimate in situ reaction rate coefficients for groundwater remediation in heterogeneous aquifers. The required information for this approach can be obtained by conducting tracer tests with injection of a mixture of conservative and reactive tracers and measurements of both breakthrough curves (BTCs). The conservative BTC is used to infer the travel-time distribution from the injection point to the observation point. For advection-dominant reactive transport with well-mixed reactive species and a constant travel-time distribution, the reactive BTC is obtained by integrating the solutions to advective-reactive transport over the entire travel-time distribution, and then is used in optimization to determine the in situ reaction rate coefficients. By directly working on the conservative and reactive BTCs, this approach avoids costly aquifer characterization and improves the estimation for transport in heterogeneous aquifers which may not be sufficiently described by traditional mechanistic transport models with constant transport parameters. Simplified schemes are proposed for reactive transport with zero-, first-, nth-order, and Michaelis-Menten reactions. The proposed approach is validated by a reactive transport case in a two-dimensional synthetic heterogeneous aquifer and a field-scale bioremediation experiment conducted at Oak Ridge, Tennessee. The field application indicates that ethanol degradation for U(VI)-bioremediation is better approximated by zero-order reaction kinetics than first-order reaction kinetics.

  20. Accounting for the Decreasing Reaction Potential of Heterogeneous Aquifers in a Stochastic Framework of Aquifer-Scale Reactive Transport

    NASA Astrophysics Data System (ADS)

    Loschko, Matthias; Wöhling, Thomas; Rudolph, David L.; Cirpka, Olaf A.

    2018-01-01

    Many groundwater contaminants react with components of the aquifer matrix, causing a depletion of the aquifer's reactivity with time. We discuss conceptual simplifications of reactive transport that allow the implementation of a decreasing reaction potential in reactive-transport simulations in chemically and hydraulically heterogeneous aquifers without relying on a fully explicit description. We replace spatial coordinates by travel times and use the concept of relative reactivity, which represents the reaction-partner supply from the matrix relative to a reference. Microorganisms facilitating the reactions are not explicitly modeled. Solute mixing is neglected. Streamlines, obtained by particle tracking, are discretized in travel-time increments with variable content of reaction partners in the matrix. As an exemplary reactive system, we consider aerobic respiration and denitrification with simplified reaction equations: dissolved oxygen undergoes conditional zero-order decay, nitrate follows first-order decay, which is inhibited in the presence of dissolved oxygen. Both reactions deplete the bioavailable organic carbon of the matrix, which in turn determines the relative reactivity. These simplifications reduce the computational effort, facilitating stochastic simulations of reactive transport on the aquifer scale. In a one-dimensional test case with a more detailed description of the reactions, we derive a potential relationship between the bioavailable organic-carbon content and the relative reactivity. In a three-dimensional steady-state test case, we use the simplified model to calculate the decreasing denitrification potential of an artificial aquifer over 200 years in an ensemble of 200 members. We demonstrate that the uncertainty in predicting the nitrate breakthrough in a heterogeneous aquifer decreases with increasing scale of observation.
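    The streamline discretization described above can be sketched as a simple march over travel-time increments. All rate constants, the inhibition threshold, and the carbon-consumption yield below are invented for illustration; the paper's actual parameterization differs.

```python
import math

def march_streamline(o2, no3, carbon, n_steps, dt=1.0,
                     k_o2=0.5, k_no3=0.3, o2_inhibit=0.1,
                     c_ref=1.0, yield_c=0.2):
    """March one streamline through travel-time increments (toy sketch).

    Per increment: O2 decays at zero order (conditional on availability),
    nitrate decays at first order but only once O2 drops below
    `o2_inhibit`; both reactions consume the increment's bioavailable
    organic carbon, whose remaining fraction relative to `c_ref` (the
    relative reactivity) scales the rates. Returns the (O2, NO3, carbon)
    history over the increments.
    """
    history = []
    for _ in range(n_steps):
        rel = max(carbon / c_ref, 0.0)      # relative reactivity
        d_o2 = min(k_o2 * rel * dt, o2)     # conditional zero-order decay
        o2 -= d_o2
        d_no3 = 0.0
        if o2 < o2_inhibit:                 # O2 inhibits denitrification
            d_no3 = no3 * (1.0 - math.exp(-k_no3 * rel * dt))
            no3 -= d_no3
        # both reactions deplete the matrix's bioavailable organic carbon
        carbon = max(carbon - yield_c * (d_o2 + d_no3), 0.0)
        history.append((o2, no3, carbon))
    return history
```

    Running this march over an ensemble of particle-tracked streamlines is what makes the stochastic aquifer-scale simulations in the record computationally cheap.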

  1. Incremental health care utilization and expenditures for chronic rhinosinusitis in the United States.

    PubMed

    Bhattacharyya, Neil

    2011-07-01

    I determined incremental increases in health care expenditures and utilization associated with chronic rhinosinusitis (CRS). Patients with a reported diagnosis of CRS were extracted from the 2007 Medical Expenditure Panel Survey medical conditions file and linked to the consolidated expenditures file. The patients with CRS were then compared to patients without CRS to determine differences in health care utilization (office visits, emergency facility visits, and prescriptions filled), as well as differences in health care expenditures (total health care costs, office visit costs, prescription medication costs, and self-expenditures) by use of demographically adjusted and comorbidity-adjusted multivariate models. An estimated 11.1+/-0.48 million adult patients reported having CRS in 2007 (4.9%+/-0.2% of the US population). The additional incremental health care utilizations associated with CRS relative to patients without CRS for office visits, emergency facility visits, and number of prescriptions filled were 3.45+/-0.42, 0.09+/-0.03, and 5.5+/-0.8, respectively (all statistically significant). CRS was associated with a significant incremental increase in health care utilization and expenditures due to increases in office-based and prescription expenditures. The national health care costs of CRS remain very high, at an estimated $8.6 billion per year.

  2. A reaction-diffusion model of CO2 influx into an oocyte

    PubMed Central

    Somersalo, Erkki; Occhipinti, Rossana; Boron, Walter F.; Calvetti, Daniela

    2012-01-01

    We have developed and implemented a novel mathematical model for simulating transients in surface pH (pHS) and intracellular pH (pHi) caused by the influx of carbon dioxide (CO2) into a Xenopus oocyte. These transients are important tools for studying gas channels. We assume that the oocyte is a sphere surrounded by a thin layer of unstirred fluid, the extracellular unconvected fluid (EUF), which is in turn surrounded by the well-stirred bulk extracellular fluid (BECF) that represents an infinite reservoir for all solutes. Here, we assume that the oocyte plasma membrane is permeable only to CO2. In both the EUF and intracellular space, solute concentrations can change because of diffusion and reactions. The reactions are the slow equilibration of the CO2 hydration-dehydration reactions and competing equilibria among carbonic acid (H2CO3)/bicarbonate (HCO3-) and a multitude of non-CO2/HCO3- buffers. Mathematically, the model is described by a coupled system of reaction-diffusion equations that—assuming spherical radial symmetry—we solved using the method of lines with appropriate stiff solvers. In agreement with experimental data (Musa-Aziz et al., PNAS 2009, 106:5406–5411), the model predicts that exposing the cell to extracellular 1.5% CO2/10 mM HCO3- (pH 7.50) causes pHi to fall and pHS to rise rapidly to a peak and then decay. Moreover, the model provides insights into the competition between diffusion and reaction processes when we change the width of the EUF, membrane permeability to CO2, native extra- and intracellular carbonic anhydrase-like activities, the non-CO2/HCO3- (intrinsic) intracellular buffering power, or mobility of intrinsic intracellular buffers. PMID:22728674
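    The method-of-lines treatment of radially symmetric diffusion can be sketched with a finite-volume discretization of spherical shells. For brevity this toy version uses explicit Euler and sealed (no-flux) boundaries rather than the stiff solvers, reaction terms, and membrane fluxes of the actual model; all names and parameters are illustrative.

```python
import math

def diffuse_sphere(u0, R, D, dt, n_steps):
    """Finite-volume method-of-lines sketch of radial diffusion in a sphere.

    u0: concentrations in n equal-thickness spherical shells, centre first.
    Inter-shell fluxes are Fickian through the spherical face areas; both
    boundary faces are sealed, so total mass is conserved exactly.
    Returns (final concentrations, shell volumes).
    """
    n = len(u0)
    dr = R / n
    vol = [4.0 / 3.0 * math.pi * (((i + 1) * dr) ** 3 - (i * dr) ** 3)
           for i in range(n)]
    u = list(u0)
    for _ in range(n_steps):
        flux = [0.0] * (n + 1)              # flux[f]: outward flow at face f
        for f in range(1, n):               # interior faces only
            area = 4.0 * math.pi * (f * dr) ** 2
            flux[f] = -D * area * (u[f] - u[f - 1]) / dr
        # faces 0 and n stay at zero flux (sealed sphere)
        u = [u[i] + dt * (flux[i] - flux[i + 1]) / vol[i] for i in range(n)]
    return u, vol
```

    The explicit step is only stable for small enough dt (roughly D*dt/dr**2 well below 1/2), which is exactly why the paper's stiff reaction terms call for implicit stiff solvers instead.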

  3. Damage From Increment Borings in Bottomland Hardwoods

    Treesearch

    E. Richard Toole; John L. Gammage

    1959-01-01

    THIS PAPER REPORTS a study of the amount of stain and decay that developed from increment-borer holes in five species of bottomland hardwoods. Though the 0.2-inch holes made by conventional borers are often considered insignificant, it appears that they may result in serious defect.

  4. A Film Depositional Model of Permeability for Mineral Reactions in Unsaturated Media.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freedman, Vicky L.; Saripalli, Prasad; Bacon, Diana H.

    2004-11-15

    A new modeling approach based on the biofilm models of Taylor et al. (1990, Water Resources Research, 26, 2153-2159) has been developed for modeling changes in porosity and permeability in saturated porous media and implemented in an inorganic reactive transport code. Application of the film depositional models to mineral precipitation and dissolution reactions requires that calculations of mineral films be dynamically changing as a function of time-dependent reaction processes. Since calculations of film thicknesses do not consider mineral density, results show that the film porosity model does not adequately describe volumetric changes in the porous medium. These effects can be included in permeability calculations by coupling the film permeability models (Mualem and Childs and Collis-George) to a volumetric model that incorporates both mineral density and reactive surface area. Model simulations demonstrate that an important difference between the biofilm and mineral film models is in the translation of changes in mineral radii to changes in pore space. Including the effect of tortuosity on pore radii changes improves the performance of the Mualem permeability model for both precipitation and dissolution. Results from simulation of simultaneous dissolution and secondary mineral precipitation provide reasonable estimates of porosity and permeability. Moreover, a comparison of experimental and simulated data shows that the model yields qualitatively reasonable results for permeability changes due to solid-aqueous phase reactions.

  5. Can Aerosol Direct Radiative Effects Account for Analysis Increments of Temperature in the Tropical Atlantic?

    NASA Technical Reports Server (NTRS)

    da Silva, Arlindo M.; Alpert, Pinhas

    2016-01-01

    In the late 1990's, prior to the launch of the Terra satellite, atmospheric general circulation models (GCMs) did not include aerosol processes because aerosols were not properly monitored on a global scale and their spatial distributions were not known well enough for their incorporation in operational GCMs. At the time of the first GEOS Reanalysis (Schubert et al. 1993), long time series of analysis increments (the corrections to the atmospheric state by all available meteorological observations) became readily available, enabling detailed analysis of the GEOS-1 errors on a global scale. Such analysis revealed that temperature biases were particularly pronounced in the Tropical Atlantic region, with patterns depicting a remarkable similarity to dust plumes emanating from the African continent as evidenced by TOMS aerosol index maps. Yoram Kaufman was instrumental in encouraging us to pursue this issue further, resulting in the study reported in Alpert et al. (1998), where we attempted to assess aerosol forcing by studying the errors of the GEOS-1 GCM without aerosol physics within a data assimilation system. Based on this analysis, Alpert et al. (1998) put forward that dust aerosols are an important source of inaccuracies in numerical weather-prediction models in the Tropical Atlantic region, although a direct verification of this hypothesis was not possible back then. Nearly 20 years later, numerical prediction models have increased in resolution and complexity of physical parameterizations, including the representation of aerosols and their interactions with the circulation. Moreover, with the advent of NASA's EOS program and subsequent satellites, atmospheric aerosols are now monitored globally on a routine basis, and their assimilation in global models is becoming well established. In this talk we will reexamine the Alpert et al. (1998) hypothesis using the most recent version of the GEOS-5 Data Assimilation System with assimilation of aerosols. We will

  6. CRITICAL THERMAL INCREMENTS FOR RHYTHMIC RESPIRATORY MOVEMENTS OF INSECTS

    PubMed Central

    Crozier, W. J.; Stier, T. B.

    1925-01-01

    The rhythm of abdominal respiratory movements in various insects, aquatic and terrestrial, is shown to possess critical increments 11,500± or 16,500± calories (Libellula, Dixippus, Anax). These are characteristic of processes involved in respiration, and definitely differ from the increment 12,200 calories which is found in a number of instances of (non-respiratory) rhythmic neuromuscular activities of insects and other arthropods. With grasshoppers (Melanoplus), normal or freshly decapitated, the critical increment is 7,900, again a value encountered in connection with some phenomena of gaseous exchange and agreeing well with the value obtained for CO2 output in Melanoplus. It is shown that by decapitation the temperature characteristic for abdominal rhythm, in Melanoplus, is changed to 16,500, then to 11,300—depending upon the time since decapitation; intermediate values do not appear. The frequency of the respiratory movements seems to be controlled by a metabolically distinct group of neurones. The bearing of these results upon the theory of functional analysis by means of temperature characteristics is discussed, and it is pointed out that a definite standpoint becomes available from which to attempt the specific control of vital processes. PMID:19872148

  7. CRITICAL THERMAL INCREMENTS FOR RHYTHMIC RESPIRATORY MOVEMENTS OF INSECTS.

    PubMed

    Crozier, W J; Stier, T B

    1925-01-20

    The rhythm of abdominal respiratory movements in various insects, aquatic and terrestrial, is shown to possess critical increments 11,500+/- or 16,500+/- calories (Libellula, Dixippus, Anax). These are characteristic of processes involved in respiration, and definitely differ from the increment 12,200 calories which is found in a number of instances of (non-respiratory) rhythmic neuromuscular activities of insects and other arthropods. With grasshoppers (Melanoplus), normal or freshly decapitated, the critical increment is 7,900, again a value encountered in connection with some phenomena of gaseous exchange and agreeing well with the value obtained for CO(2) output in Melanoplus. It is shown that by decapitation the temperature characteristic for abdominal rhythm, in Melanoplus, is changed to 16,500, then to 11,300, depending upon the time since decapitation; intermediate values do not appear. The frequency of the respiratory movements seems to be controlled by a metabolically distinct group of neurones. The bearing of these results upon the theory of functional analysis by means of temperature characteristics is discussed, and it is pointed out that a definite standpoint becomes available from which to attempt the specific control of vital processes.

  8. STEPS: efficient simulation of stochastic reaction-diffusion models in realistic morphologies.

    PubMed

    Hepburn, Iain; Chen, Weiliang; Wils, Stefan; De Schutter, Erik

    2012-05-10

    Models of cellular molecular systems are built from components such as biochemical reactions (including interactions between ligands and membrane-bound proteins), conformational changes and active and passive transport. A discrete, stochastic description of the kinetics is often essential to capture the behavior of the system accurately. Where spatial effects play a prominent role the complex morphology of cells may have to be represented, along with aspects such as chemical localization and diffusion. This high level of detail makes efficiency a particularly important consideration for software that is designed to simulate such systems. We describe STEPS, a stochastic reaction-diffusion simulator developed with an emphasis on simulating biochemical signaling pathways accurately and efficiently. STEPS supports all the above-mentioned features, and well-validated support for SBML allows many existing biochemical models to be imported reliably. Complex boundaries can be represented accurately in externally generated 3D tetrahedral meshes imported by STEPS. The powerful Python interface facilitates model construction and simulation control. STEPS implements the composition and rejection method, a variation of the Gillespie SSA, supporting diffusion between tetrahedral elements within an efficient search and update engine. Additional support for well-mixed conditions and for deterministic model solution is implemented. Solver accuracy is confirmed with an original and extensive validation set consisting of isolated reaction, diffusion and reaction-diffusion systems. Accuracy imposes upper and lower limits on tetrahedron sizes, which are described in detail. By comparing to Smoldyn, we show how the voxel-based approach in STEPS is often faster than particle-based methods, with increasing advantage in larger systems, and by comparing to MesoRD we show the efficiency of the STEPS implementation. STEPS simulates models of cellular reaction-diffusion systems with complex
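    STEPS builds on the Gillespie SSA (via the composition-and-rejection method mentioned above). The textbook direct method below shows the core logic the variants share: exponentially distributed waiting times and reaction selection proportional to propensity. The toy decay example is illustrative, not from the paper.

```python
import random

def gillespie(stoich, propensity_fns, state, t_end, rng=None):
    """Direct-method Gillespie SSA (STEPS itself uses the related
    composition-and-rejection variant for efficiency).

    stoich: list of state-change vectors, one per reaction channel;
    propensity_fns: matching list of functions of the state;
    state: list of molecule copy numbers.
    Returns the state at time t_end (or when no reaction can fire).
    """
    rng = rng or random.Random()
    t = 0.0
    state = list(state)
    while True:
        props = [f(state) for f in propensity_fns]
        a0 = sum(props)
        if a0 == 0.0:
            return state                    # no reaction can fire
        t += rng.expovariate(a0)            # exponential time to next event
        if t > t_end:
            return state
        r = rng.random() * a0               # pick a channel ∝ its propensity
        acc = 0.0
        for change, a in zip(stoich, props):
            acc += a
            if r < acc:
                state = [s + d for s, d in zip(state, change)]
                break
```

    In a spatial RDME setting such as STEPS, each tetrahedral voxel contributes its own reaction channels plus diffusion channels that move a molecule to a neighbouring voxel; the selection logic is unchanged.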

  9. Modelling reaction front formation and oscillatory behaviour in a contaminant plume

    NASA Astrophysics Data System (ADS)

    Cribbin, Laura; Fowler, Andrew; Mitchell, Sarah; Winstanley, Henry

    2013-04-01

    Groundwater contamination is a concern in all industrialised countries that suffer countless spills and leaks of various contaminants. Often, the contaminated groundwater forms a plume that, under the influences of regional groundwater flow, could eventually migrate to streams or wells. This can have catastrophic consequences for human health and local wildlife. The process known as bioremediation removes pollutants in the contaminated groundwater through bacterial reactions. Microorganisms can transform the contaminant into less harmful metabolic products. It is important to be able to predict whether such bioremediation will be sufficient for the safe clean-up of a plume before it reaches wells or lakes. Borehole data from a contaminant plume which resulted from spillage at a coal carbonisation plant in Mansfield, England is the motivation behind modelling the properties of a contaminant plume. In the upper part of the plume, oxygen is consumed and a nitrate spike forms. Deep inside the plume, nitrate is depleted and oscillations of organic carbon and ammonium concentration profiles are observed. While there are various numerical models that predict the evolution of a contaminant plume, we aim to create a simplified model that captures the fundamental characteristics of the plume while being comparable in accuracy to the detailed numerical models that currently exist. To model the transport of a contaminant, we consider the redox reactions that occur in groundwater systems. These reactions deplete the contaminant while creating zones of dominant terminal electron accepting processes throughout the plume. The contaminant is depleted by a series of terminal electron acceptors, the order of which is typically oxygen, nitrate, manganese, iron, sulphate and carbon dioxide. We describe a reaction front, characteristic of a redox zone, by means of rapid reaction and slow diffusion. This aids in describing the depletion of oxygen in the upper part of the plume. To

  10. A Gibbs Energy Minimization Approach for Modeling of Chemical Reactions in a Basic Oxygen Furnace

    NASA Astrophysics Data System (ADS)

    Kruskopf, Ari; Visuri, Ville-Valtteri

    2017-12-01

    In modern steelmaking, hot metal is decarburized and converted into steel primarily in converter processes, such as the basic oxygen furnace. The objective of this work was to develop a new mathematical model for a top-blown steel converter, which accounts for the complex reaction equilibria in the impact zone, also known as the hot spot, as well as the associated mass and heat transport. An in-house computer code of the model has been developed in Matlab. The main assumption of the model is that all reactions take place in a specified reaction zone. The mass transfer between the reaction volume, bulk slag, and metal determines the reaction rates for the species. The thermodynamic equilibrium is calculated using the partitioning of Gibbs energy (PGE) method. The activity model for the liquid metal is the unified interaction parameter model, and for the liquid slag it is the modified quasichemical model (MQM). The MQM was validated by calculating iso-activity lines for the liquid slag components. The PGE method together with the MQM was validated by calculating liquidus lines for solid components. The results were compared with measurements from the literature. The full chemical reaction model was validated by comparing the metal and slag compositions to measurements from an industrial-scale converter. The predictions were found to be in good agreement with the measured values. Furthermore, the accuracy of the model was found to compare favorably with the models proposed in the literature. The real-time capability of the proposed model was confirmed in test calculations.
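
    The Gibbs-minimization idea at the core of such models can be sketched on a toy isomerization A ⇌ B (one ideal mixture and a hypothetical ΔG°; not the paper's PGE method, unified interaction parameter model, or MQM): minimizing the mixture Gibbs energy over composition recovers the equilibrium ratio x_B/x_A = exp(−ΔG°/RT).

```python
import math

R, T = 8.314, 1873.0     # gas constant, J/(mol K); a steelmaking-like temperature, K
dG0 = -20000.0           # hypothetical standard Gibbs energy of A -> B, J/mol

def gibbs(x):
    """Dimensionless Gibbs energy of an ideal 1-mol A/B mixture; x = mole fraction of B."""
    a, b = 1.0 - x, x
    return a * math.log(a) + b * (dG0 / (R * T) + math.log(b))

# Ternary search for the minimum of the convex Gibbs-energy curve.
lo, hi = 1e-9, 1.0 - 1e-9
for _ in range(200):
    m1 = lo + (hi - lo) / 3.0
    m2 = hi - (hi - lo) / 3.0
    if gibbs(m1) < gibbs(m2):
        hi = m2
    else:
        lo = m1
x_eq = 0.5 * (lo + hi)

K = math.exp(-dG0 / (R * T))   # equilibrium constant from thermodynamics
```

At the minimum, x_eq/(1 − x_eq) equals K, i.e. the minimizer of the Gibbs energy reproduces the familiar equilibrium condition; the PGE method applies the same principle to a multicomponent metal/slag system.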

  11. Exact model reduction of combinatorial reaction networks

    PubMed Central

    Conzelmann, Holger; Fey, Dirk; Gilles, Ernst D

    2008-01-01

    Background Receptors and scaffold proteins usually possess a high number of distinct binding domains, inducing the formation of large multiprotein signaling complexes. For combinatorial reasons the number of distinguishable species grows exponentially with the number of binding domains and can easily reach several millions. Even models that include only a limited number of components and binding domains are very large and hardly manageable. A novel model reduction technique allows the significant reduction and modularization of these models. Results We introduce methods that extend and complete the previously introduced approach. For instance, we provide techniques to handle the formation of multi-scaffold complexes as well as receptor dimerization. Furthermore, we discuss a new modeling approach that allows the direct generation of exactly reduced model structures. The developed methods are used to reduce a model of EGF and insulin receptor crosstalk comprising 5,182 ordinary differential equations (ODEs) to a model with 87 ODEs. Conclusion The methods presented in this contribution significantly enhance the available methods for exactly reducing models of combinatorial reaction networks. PMID:18755034
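
    The combinatorial explosion quoted above is easy to reproduce (a toy scaffold with independent two-state binding domains; hypothetical, not the EGF/insulin crosstalk model):

```python
from itertools import product

n_domains = 10

# Full microscopic description: one species per pattern of empty/occupied domains.
microstates = list(product((0, 1), repeat=n_domains))
n_species = len(microstates)    # grows as 2**n_domains

# Exactly reduced description when domains are independent:
# one occupancy variable (ODE) per domain instead of one per microstate.
n_reduced = n_domains

# With 23 domains the full model already exceeds eight million species.
n_large = 2 ** 23
```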

  12. A numerical analysis on forming limits during spiral and concentric single point incremental forming

    NASA Astrophysics Data System (ADS)

    Gipiela, M. L.; Amauri, V.; Nikhare, C.; Marcondes, P. V. P.

    2017-01-01

    Sheet metal forming is one of the major manufacturing industries, producing numerous parts for the aerospace, automotive and medical industries. Driven by high demand in the vehicle industry on one hand and environmental regulations on fuel consumption on the other, researchers are developing energy-efficient sheet metal forming processes to produce lightweight parts, in place of the conventional punch-and-die approach. One of the most recognized manufacturing processes in this category is Single Point Incremental Forming (SPIF). SPIF is a die-less sheet metal forming process in which a single-point tool incrementally forces one point of the sheet metal at a time into the plastic deformation zone. In the present work, the finite element method (FEM) is applied to analyze the forming limits of a high-strength low-alloy steel formed by SPIF with spiral and concentric tool paths. SPIF numerical simulations were modeled with 24 and 29 mm cup depths, and the results were compared with Nakajima results obtained by experiments and FEM. It was found that the cup formed with the Nakajima tool failed at 24 mm, while the cups formed by SPIF surpassed this limit at both depths with both tool-path profiles. It was also noticed that the strains achieved with the concentric profile are lower than those with the spiral profile.

  13. Iterated reaction graphs: simulating complex Maillard reaction pathways.

    PubMed

    Patel, S; Rabone, J; Russell, S; Tissen, J; Klaffke, W

    2001-01-01

    This study investigates a new method of simulating a complex chemical system including feedback loops and parallel reactions. The practical purpose of this approach is to model the actual reactions that take place in the Maillard process, a set of food browning reactions, in sufficient detail to be able to predict the volatile composition of the Maillard products. The developed framework, called iterated reaction graphs, consists of two main elements: a soup of molecules and a reaction base of Maillard reactions. An iterative process loops through the reaction base, taking reactants from and feeding products back to the soup. This produces a reaction graph, with molecules as nodes and reactions as arcs. The iterated reaction graph is updated and validated by comparing output with the main products found by classical gas-chromatographic/mass spectrometric analysis. To ensure a realistic output and convergence to desired volatiles only, the approach contains a number of novel elements: rate kinetics are treated as reaction probabilities; only a subset of the true chemistry is modeled; and the reactions are blocked into groups.
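
    The soup-and-reaction-base loop can be sketched in miniature (the reaction base, species names and probabilities below are hypothetical toys, not the paper's Maillard reaction base or validated kinetics):

```python
import random

random.seed(0)   # reproducible run

# Toy reaction base: (reactants, products, reaction probability).
reaction_base = [
    (("glucose", "glycine"), ("amadori",), 0.9),
    (("amadori",), ("deoxyosone", "water"), 0.6),
    (("deoxyosone",), ("furfural", "water"), 0.5),
]

soup = {"glucose": 10, "glycine": 10}   # multiset of molecules
graph_arcs = []                         # arcs of the growing reaction graph

for _ in range(50):                     # iterate through the reaction base
    for reactants, products, p in reaction_base:
        available = all(soup.get(m, 0) > 0 for m in reactants)
        if available and random.random() < p:   # rate kinetics as probability
            for m in reactants:                 # take reactants from the soup
                soup[m] -= 1
            for m in products:                  # feed products back to the soup
                soup[m] = soup.get(m, 0) + 1
            graph_arcs.append((reactants, products))
```

Molecules are the nodes and the recorded arcs the reactions, so `graph_arcs` is a flat encoding of the iterated reaction graph; a real implementation would also restrict the reaction base so the soup converges to the desired volatiles only.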

  14. Charge-dependent non-bonded interaction methods for use in quantum mechanical modeling of condensed phase reactions

    NASA Astrophysics Data System (ADS)

    Kuechler, Erich R.

    Molecular modeling and computer simulation techniques can provide detailed insight into biochemical phenomena. This dissertation describes the development, implementation and parameterization of two methods for the accurate modeling of chemical reactions in aqueous environments, with a concerted scientific effort towards the inclusion of charge-dependent non-bonded non-electrostatic interactions into currently used computational frameworks. The first of these models, QXD, modifies interactions in a hybrid quantum mechanical/molecular mechanical (QM/MM) framework to overcome the current limitations of 'atom typing' QM atoms; an inaccurate and non-intuitive practice for chemically active species, as these static atom types are dictated by the local bonding and electrostatic environment of the atoms they represent, which will change over the course of the simulation. The efficacy of the QXD model is demonstrated using a specific reaction parameterization (SRP) of the Austin Model 1 (AM1) Hamiltonian by simultaneously capturing the reaction barrier for chloride ion attack on methyl chloride in solution and the solvation free energies of a series of compounds including the reagents of the reaction. The second, VRSCOSMO, is an implicit solvation model for use with the DFTB3/3OB Hamiltonian for biochemical reactions, allowing for accurate modeling of ionic compound solvation properties while overcoming the discontinuous nature of conventional PCM models along chemical reaction coordinates. The VRSCOSMO model is shown to accurately model the solvation properties of over 200 chemical compounds while also providing smooth, continuous reaction surfaces for a series of biologically motivated phosphoryl transesterification reactions. Both of these methods incorporate charge-dependent behavior into the non-bonded interactions variationally, allowing the 'size' of atoms to change in meaningful ways with respect to changes in local charge state, as to provide an accurate, predictive and

  15. Toward a reaction rate model of condensed-phase RDX decomposition under high temperatures

    NASA Astrophysics Data System (ADS)

    Schweigert, Igor

    2014-03-01

    Shock ignition of energetic molecular solids is driven by microstructural heterogeneities, at which even moderate stresses can result in sufficiently high temperatures to initiate material decomposition and the release of the chemical energy. Mesoscale modeling of these "hot spots" requires a chemical reaction rate model that describes the energy release with a sub-microsecond resolution and under a wide range of temperatures. No such model is available even for well-studied energetic materials such as RDX. In this presentation, I will describe an ongoing effort to develop a reaction rate model of condensed-phase RDX decomposition under high temperatures using first-principles molecular dynamics, transition-state theory, and reaction network analysis. This work was supported by the Naval Research Laboratory, by the Office of Naval Research, and by the DOD High Performance Computing Modernization Program Software Application Institute for Multiscale Reactive Modeling of Insensitive Munitions.

  16. Summary of the Science performed onboard the International Space Station during Increments 12 and 13

    NASA Technical Reports Server (NTRS)

    Jules, Kenol

    2007-01-01

    By September of 2007, continuous human presence on the International Space Station will reach a milestone of eighty months. The many astronauts and cosmonauts who lived onboard the station during the last fourteen Increments over that time span spent their time building the station as well as performing science on a daily basis. Over those eighty months, the U.S. astronaut crew members logged over 2954 hours of research time. Far more research time has been accumulated by experiments controlled by investigators on the ground. The U.S. astronauts conducted over one hundred and twenty-six (126) science investigations, many of which operated across multiple Increments. The crew also installed, activated and operated nine (9) science racks that supported six science disciplines ranging from material sciences to life sciences. By the end of Increment 14, a total of 5083 kg of research rack mass had been ferried to the station, as well as 5021 kg of research mass. The objectives of this paper are three-fold: (1) to briefly review the science conducted on the International Space Station during the previous eleven Increments; (2) to discuss in detail the science investigations conducted on the station during Increments 12 and 13, focusing mainly on the primary objectives of each investigation and the associated hypotheses investigated during these two Increments, with some preliminary science results discussed for each investigation as availability permits; and (3) to briefly compare the planned science complement with what was actually accomplished through real-time science implementation, illustrating the challenges of daily science activity while the science platform is under construction. Finally, the paper will briefly discuss the science research complements for the other two

  17. Existing School Buildings: Incremental Seismic Retrofit Opportunities.

    ERIC Educational Resources Information Center

    Federal Emergency Management Agency, Washington, DC.

    The intent of this document is to provide technical guidance to school district facility managers for linking specific incremental seismic retrofit opportunities to specific maintenance and capital improvement projects. The linkages are based on logical affinities, such as technical fit, location of the work within the building, cost saving…

  18. Reaction mechanism of WGS and PROX reactions catalyzed by Pt/oxide catalysts revealed by an FeO(111)/Pt(111) inverse model catalyst.

    PubMed

    Xu, Lingshun; Wu, Zongfang; Jin, Yuekang; Ma, Yunsheng; Huang, Weixin

    2013-08-07

    We have employed XPS and TDS to study the adsorption and surface reactions of H2O, CO and HCOOH on an FeO(111)/Pt(111) inverse model catalyst. The FeO(111)-Pt(111) interface of the inverse model catalyst exposes coordination-unsaturated Fe(II) cations (Fe(II)CUS), which are capable of modifying the reactivity of neighbouring Pt sites. Water facilely dissociates on the Fe(II)CUS cations at the FeO(111)-Pt(111) interface to form hydroxyls that react to form both water and H2 upon heating. Hydroxyls on the Fe(II)CUS cations can react with CO(a) on the neighbouring Pt(111) sites to produce CO2 at low temperatures. Hydroxyls act as the co-catalyst in the CO oxidation by hydroxyls to CO2 (PROX reaction), while they act as one of the reactants in the CO oxidation by hydroxyls to CO2 and H2 (WGS reaction), in which the recombinative reaction of hydroxyls to produce H2 is the rate-limiting step. A comparison of reaction behaviors between the interfacial CO(a) + OH reaction and the formate decomposition reaction suggests that formate is the likely surface intermediate of the CO(a) + OH reaction. These results provide solid experimental evidence for the associative reaction mechanism of WGS and PROX reactions catalyzed by Pt/oxide catalysts.

  19. Theater Medical Information Program Joint Increment 2 (TMIP J Inc 2)

    DTIC Science & Technology

    2016-03-01

    Acquisition Executive; DoD - Department of Defense; DoDAF - DoD Architecture Framework; FD - Full Deployment; FDD - Full Deployment Decision; FY…
    …the Full Deployment Decision (FDD), the TMIP-J Increment 2 Economic Analysis was approved on December 6, 2013. The USD(AT&L) signed an Acquisition Decision Memorandum (ADM) on December 23, 2013 approving FDD for TMIP-J Increment 2 and establishing the Full Deployment Objective and Threshold dates as

  20. Threshold units: A correct metric for reaction time?

    PubMed Central

    Zele, Andrew J.; Cao, Dingcai; Pokorny, Joel

    2007-01-01

    Purpose To compare reaction time (RT) to rod incremental and decremental stimuli expressed in physical contrast units or psychophysical threshold units. Methods Rod contrast detection thresholds and suprathreshold RTs were measured for Rapid-On and Rapid-Off ramp stimuli. Results Threshold sensitivity to Rapid-Off stimuli was higher than to Rapid-On stimuli. Suprathreshold RTs specified in Weber contrast for Rapid-Off stimuli were shorter than for Rapid-On stimuli. Reaction time data expressed in multiples of threshold reversed the outcomes: Reaction times for Rapid-On stimuli were shorter than those for Rapid-Off stimuli. The use of alternative contrast metrics also failed to equate RTs. Conclusions A case is made that the interpretation of RT data may be confounded when expressed in threshold units. Stimulus energy or contrast is the only metric common to the response characteristics of the cells underlying speeded responses. The use of threshold metrics for RT can confuse the interpretation of an underlying physiological process. PMID:17240416
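
    The contrast metrics at issue can be made concrete (luminance values below are hypothetical): Weber contrast is signed and asymmetric between increments and decrements, while the spatial luminance ratio proposed in related work is ≥ 1 for both polarities.

```python
def weber_contrast(L_stim, L_bg):
    """Weber contrast, (L_stim - L_bg) / L_bg: signed, polarity-dependent."""
    return (L_stim - L_bg) / L_bg

def luminance_ratio(L_stim, L_bg):
    """Spatial luminance ratio L_max / L_min: >= 1 for either polarity."""
    return max(L_stim, L_bg) / min(L_stim, L_bg)

L_bg = 10.0                                # background luminance (hypothetical units)
inc = weber_contrast(15.0, L_bg)           # increment: +0.5
dec = weber_contrast(5.0, L_bg)            # matched decrement: -0.5

ratio_inc = luminance_ratio(15.0, L_bg)    # 1.5
ratio_dec = luminance_ratio(5.0, L_bg)     # 2.0
```

Equal-magnitude Weber contrasts (±0.5) map to different luminance ratios (1.5 vs 2.0), which is why the choice of metric can reorder increment and decrement reaction times.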

  1. Exposure to water fluoridation and caries increment.

    PubMed

    Spencer, A J; Armfield, J M; Slade, G D

    2008-03-01

    The objective of this cohort study was to examine the association between exposure to water fluoridation and the increment of dental caries in two Australian states: Queensland (Qld)--5 per cent fluoridation coverage; and South Australia (SA)--70 per cent fluoridation coverage. Stratified random samples were drawn from fluoridated Adelaide and the largely non-fluoridated rest-of-state in SA, and fluoridated Townsville and non-fluoridated Brisbane in Qld. Children were enrolled between 1991 and 1992 (SA: 5-15 yrs old, n = 9,980; Qld: 5-12 yrs old, n = 10,695). Follow-up caries status data for 3 years (+/- 1/2 year) were available on 8,183 children in SA and 6,711 children in Qld. Baseline data on lifetime exposure to fluoridated water, use of other fluorides and socio-economic status (SES) were collected by questionnaire, and tooth surface caries status by dental examinations in school dental service clinics. Higher per cent lifetime exposure to fluoridated water (6 categories: 0;1-24; 25-49; 50-74; 75-99; 100 per cent) was a significant predictor (ANOVA, p < 0.01) of lower annualised Net Caries Increment (NCI) for the deciduous dentition in SA and Qld, but only for Qld in the permanent dentition. These associations persisted in multiple linear regression analyses controlling for age, gender, exposure to other fluorides and SES (p < 0.05). Water fluoridation was effective in reducing caries increment, even in the presence of a dilution effect from other fluorides. The effect of fluoridated water consumption was strongest in the deciduous dentition and where diffusion of food and beverages from fluoridated to non-fluoridated areas was less likely.

  2. Predictive and mechanistic multivariate linear regression models for reaction development

    PubMed Central

    Santiago, Celine B.; Guo, Jing-Yao

    2018-01-01

    Multivariate Linear Regression (MLR) models utilizing computationally-derived and empirically-derived physical organic molecular descriptors are described in this review. Several reports demonstrating the effectiveness of this methodological approach towards reaction optimization and mechanistic interrogation are discussed. A detailed protocol to access quantitative and predictive MLR models is provided as a guide for model development and parameter analysis. PMID:29719711
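
    The MLR workflow can be sketched end-to-end (the descriptors, data and coefficients below are synthetic illustrations, not the review's case studies): fit y = b0 + b1·x1 + b2·x2 by solving the normal equations.

```python
# Synthetic training set: each row holds two hypothetical physical organic
# descriptors (e.g., a steric and an electronic parameter) for one reaction.
X = [(1.0, 2.0), (2.0, 1.0), (3.0, 4.0), (4.0, 3.0), (5.0, 5.0)]
true_b = (0.5, 1.2, -0.8)                   # intercept, b1, b2 (known, for checking)
y = [true_b[0] + true_b[1] * x1 + true_b[2] * x2 for x1, x2 in X]

# Normal equations (A^T A) b = A^T y, with an intercept column prepended.
A = [(1.0, x1, x2) for x1, x2 in X]
n = 3
AtA = [[sum(row[i] * row[j] for row in A) for j in range(n)] for i in range(n)]
Aty = [sum(row[i] * yi for row, yi in zip(A, y)) for i in range(n)]

# Solve by Gaussian elimination with partial pivoting.
M = [list(r) + [rhs] for r, rhs in zip(AtA, Aty)]
for col in range(n):
    piv = max(range(col, n), key=lambda r: abs(M[r][col]))
    M[col], M[piv] = M[piv], M[col]
    for r in range(col + 1, n):
        f = M[r][col] / M[col][col]
        for c in range(col, n + 1):
            M[r][c] -= f * M[col][c]
b = [0.0] * n
for i in reversed(range(n)):
    b[i] = (M[i][n] - sum(M[i][j] * b[j] for j in range(i + 1, n))) / M[i][i]
```

In practice the descriptors come from computation or experiment, and the fitted signs and magnitudes of b1 and b2 are interrogated mechanistically, as the review describes.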

  3. Simple model for lambda-doublet propensities in bimolecular reactions

    NASA Technical Reports Server (NTRS)

    Bronikowski, Michael J.; Zare, Richard N.

    1990-01-01

    A simple geometric model is presented to account for lambda-doublet propensities in bimolecular reactions A + BC → AB + C. It applies to reactions in which AB is formed in a pi state, and in which the unpaired molecular orbital responsible for lambda-doubling arises from breaking the B-C bond. The lambda-doublet population ratio is predicted to be 2:1 provided that: (1) the motion of A in the transition state determines the plane of rotation of AB; (2) the unpaired pi orbital lying initially along the B-C bond may be resolved into a projection onto the AB plane of rotation and a projection perpendicular to this plane; (3) there is no preferred geometry for dissociation of ABC. The 2:1 lambda-doublet ratio is the 'unconstrained dynamics prior' lambda-doublet distribution for such reactions.

  4. Testing an explanatory model of nurses' intention to report adverse drug reactions in hospital settings.

    PubMed

    Angelis, Alessia De; Pancani, Luca; Steca, Patrizia; Colaceci, Sofia; Giusti, Angela; Tibaldi, Laura; Alvaro, Rosaria; Ausili, Davide; Vellone, Ercole

    2017-05-01

    To test an explanatory model of nurses' intention to report adverse drug reactions in hospital settings, based on the theory of planned behaviour. Under-reporting of adverse drug reactions is an important problem among nurses. A cross-sectional design was used. Data were collected with the adverse drug reporting nurses' questionnaire. Confirmatory factor analysis was performed to test the factor validity of the adverse drug reporting nurses' questionnaire, and structural equation modelling was used to test the explanatory model. The convenience sample comprised 500 Italian hospital nurses (mean age = 43.52). Confirmatory factor analysis supported the factor validity of the adverse drug reporting nurses' questionnaire. The structural equation modelling showed a good fit with the data. Nurses' intention to report adverse drug reactions was significantly predicted by attitudes, subjective norms and perceived behavioural control (R² = 0.16). The theory of planned behaviour effectively explained the mechanisms behind nurses' intention to report adverse drug reactions, showing how several factors come into play. In a scenario of organisational empowerment towards adverse drug reaction reporting, the major predictors of the intention to report are support for the decision to report adverse drug reactions from other health care practitioners, perceptions about the value of adverse drug reaction reporting and nurses' favourable self-assessment of their adverse drug reaction reporting skills. © 2017 John Wiley & Sons Ltd.

  5. Heterogeneous reactions in a stratospheric box model: A sensitivity study

    NASA Astrophysics Data System (ADS)

    Danilin, Michael Y.; McConnell, John C.

    1994-12-01

    Recent laboratory data concerning the reactions of HCl and HOx on/in sulfuric acid aerosol (Hanson et al., 1994), N2O5 and ClONO2 hydrolysis on frozen aerosol (Hanson and Ravishankara, 1993a) and the temperature dependence of the HNO3 absorption cross section (Burkholder et al., 1993) indicate that a reevaluation of the role of heterogeneous reactions in the chemical balance of the stratosphere is required. A chemical module prepared for a three-dimensional (3-D) global chemistry transport model (CTM) and a general circulation model (GCM) has been used to carry out a sensitivity study of the effects of heterogeneous reactions on/in the sulfate aerosol and on polar stratospheric cloud (PSC) particles. We present here results for the latitudes 60°S, 70°S and 75°S at the 50-mbar level. Our findings indicate that (1) the new values of the HNO3 cross sections result in lower mixing ratios for NOx and make ozone more vulnerable to catalytic destruction by ClOx; (2) the effects of the heterogeneous reactions OH + HNO3(a) → H2O + NO3 and HO2 + HO2(a) → H2O2 + O2 are small in comparison with the corresponding gas-phase reactions and play a negligible role in the ozone balance; (3) the HCl reactions in the sulfuric acid aerosol at 60°S and 70°S increase chlorine activation by up to 0.53 parts per billion by volume (ppbv) and 0.72 ppbv, respectively, for liquid aerosol, and by up to 0.87 ppbv for frozen aerosol at 70°S under volcanic conditions, resulting in considerable ozone depletion at these latitudes; (4) in studying the ozone "hole" phenomenon, we considered different initial ClONO2/HCl ratios, different amounts of N2O5, galactic cosmic rays (GCRs), and longer lifetimes for the PSC, and we speculate on the existence of the reaction N2O5 + HCl(a) → ClNO2 + HNO3.

  6. Modeling the Reaction of Fe Atoms with CCl4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Camaioni, Donald M.; Ginovska, Bojana; Dupuis, Michel

    2009-01-05

    The reaction of zero-valent iron with carbon tetrachloride (CCl4) in the gas phase was studied using density functional theory. Temperature programmed desorption experiments over a range of Fe and CCl4 coverages on an FeO(111) surface demonstrate a rich surface chemistry, with several reaction products (C2Cl4, C2Cl6, OCCl2, CO, FeCl2, FeCl3) observed. The reactivity of Fe and CCl4 was studied under three stoichiometries: one Fe with one CCl4, one Fe with two CCl4 molecules, and two Fe with one CCl4, modeling the environment of the experimental work. The electronic structure calculations give insight into the reactions leading to the experimentally observed products and suggest that novel Fe-C-Cl containing species are important intermediates in these reactions. The intermediate complexes are formed in highly exothermic reactions, in agreement with the experimentally observed reactivity with the surface at low temperature (30 K). This initial survey of the reactivity of Fe with CCl4 identifies some potential reaction pathways that are important in the effort to use Fe nano-particles to differentiate harmful pathways that lead to the formation of contaminants like chloroform (CHCl3) from harmless pathways that lead to products such as formate (HCO2-) or carbon oxides in water and soil. The Pacific Northwest National Laboratory is operated by Battelle for the U.S. Department of Energy.

  7. Incremental isometric embedding of high-dimensional data using connected neighborhood graphs.

    PubMed

    Zhao, Dongfang; Yang, Li

    2009-01-01

    Most nonlinear data embedding methods use bottom-up approaches for capturing the underlying structure of data distributed on a manifold in high dimensional space. These methods often share the first step, which defines the neighbor points of every data point by building a connected neighborhood graph so that all data points can be embedded into a single coordinate system. These methods are required to work incrementally for dimensionality reduction in many applications. Because the input data stream may be under-sampled or skewed from time to time, building a connected neighborhood graph is crucial to the success of incremental data embedding using these methods. This paper presents algorithms for updating k-edge-connected and k-connected neighborhood graphs after a new data point is added or an old data point is deleted. It further utilizes a simple algorithm for updating all-pair shortest distances on the neighborhood graph. Together with incremental classical multidimensional scaling using iterative subspace approximation, this paper devises an incremental version of Isomap with enhancements to deal with under-sampled or unevenly distributed data. Experiments on both synthetic and real-world data sets show that the algorithm is efficient and maintains low dimensional configurations of high dimensional data under various data distributions.
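
    The graph-update machinery can be illustrated with a deliberately simplified sketch (a plain k-nearest-neighbour graph and unit-weight shortest paths; not the paper's k-edge-connected construction or the incremental MDS step):

```python
import heapq

def knn_edges(points, k):
    """Undirected k-nearest-neighbour edges over points in the plane."""
    edges = set()
    for i, p in enumerate(points):
        dists = sorted((sum((a - b) ** 2 for a, b in zip(p, q)), j)
                       for j, q in enumerate(points) if j != i)
        for _, j in dists[:k]:
            edges.add((min(i, j), max(i, j)))
    return edges

def hop_distances(edges, n, src):
    """Dijkstra with unit edge weights from src over the neighborhood graph."""
    adj = {i: [] for i in range(n)}
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v in adj[u]:
            if d + 1 < dist.get(v, float("inf")):
                dist[v] = d + 1
                heapq.heappush(heap, (d + 1, v))
    return dist

points = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
d_before = hop_distances(knn_edges(points, 1), len(points), 0)

# "Incremental" step: a new point arrives; the graph is rebuilt here for
# brevity, whereas the paper updates neighbors and shortest paths locally.
points.append((1.5, 0.0))
d_after = hop_distances(knn_edges(points, 1), len(points), 0)
```

Inserting the new point rewires the neighbourhood graph and changes the geodesic (hop) distances, which is exactly the information an incremental Isomap must keep up to date.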

  8. Demonstration/Validation of Incremental Sampling at Two Diverse Military Ranges and Development of an Incremental Sampling Tool

    DTIC Science & Technology

    2010-06-01

    Sampling (MIS)?
    • Technique of combining many increments of soil from a number of points within exposure area
    • Developed by Enviro Stat (Trademarked) …
    Demonstrating a reliable soil sampling strategy to accurately characterize contaminant concentrations in spatially extreme and heterogeneous … into a set of decision (exposure) units
    • One or several discrete or small-scale composite soil samples collected to represent each decision unit

  9. Predictive ability of a comprehensive incremental test in mountain bike marathon.

    PubMed

    Ahrend, Marc-Daniel; Schneeweiss, Patrick; Martus, Peter; Niess, Andreas M; Krauss, Inga

    2018-01-01

    Traditional performance tests in mountain bike marathon (XCM) primarily quantify aerobic metabolism and may not describe the relevant capacities in XCM. We aimed to validate a comprehensive test protocol quantifying its intermittent demands. Forty-nine athletes (38.8±9.1 years; 38 male; 11 female) performed a laboratory performance test, including an incremental test, to determine individual anaerobic threshold (IAT), peak power output (PPO) and three maximal efforts (10 s all-out sprint, 1 min maximal effort and 5 min maximal effort). Within 2 weeks, the athletes participated in one of three XCM races (n=15, n=9 and n=25). Correlations between test variables and race times were calculated separately. In addition, multiple regression models of the predictive value of laboratory outcomes were calculated for race 3 and across all races (z-transformed data). All variables were correlated with race times 1, 2 and 3: 10 s all-out sprint (r=-0.72; r=-0.59; r=-0.61), 1 min maximal effort (r=-0.85; r=-0.84; r=-0.82), 5 min maximal effort (r=-0.57; r=-0.85; r=-0.76), PPO (r=-0.77; r=-0.73; r=-0.76) and IAT (r=-0.71; r=-0.67; r=-0.68). The best-fitting multiple regression models for race 3 (r2=0.868) and across all races (r2=0.757) comprised 1 min maximal effort, IAT and body weight. Aerobic and intermittent variables correlated least strongly with race times. Their use in a multiple regression model confirmed additional explanatory power to predict XCM performance. These findings underline the usefulness of the comprehensive incremental test to predict performance in that sport more precisely.

  10. Predictive ability of a comprehensive incremental test in mountain bike marathon

    PubMed Central

    Schneeweiss, Patrick; Martus, Peter; Niess, Andreas M; Krauss, Inga

    2018-01-01

    Objectives Traditional performance tests in mountain bike marathon (XCM) primarily quantify aerobic metabolism and may not describe the relevant capacities in XCM. We aimed to validate a comprehensive test protocol quantifying its intermittent demands. Methods Forty-nine athletes (38.8±9.1 years; 38 male; 11 female) performed a laboratory performance test, including an incremental test, to determine individual anaerobic threshold (IAT), peak power output (PPO) and three maximal efforts (10 s all-out sprint, 1 min maximal effort and 5 min maximal effort). Within 2 weeks, the athletes participated in one of three XCM races (n=15, n=9 and n=25). Correlations between test variables and race times were calculated separately. In addition, multiple regression models of the predictive value of laboratory outcomes were calculated for race 3 and across all races (z-transformed data). Results All variables were correlated with race times 1, 2 and 3: 10 s all-out sprint (r=−0.72; r=−0.59; r=−0.61), 1 min maximal effort (r=−0.85; r=−0.84; r=−0.82), 5 min maximal effort (r=−0.57; r=−0.85; r=−0.76), PPO (r=−0.77; r=−0.73; r=−0.76) and IAT (r=−0.71; r=−0.67; r=−0.68). The best-fitting multiple regression models for race 3 (r2=0.868) and across all races (r2=0.757) comprised 1 min maximal effort, IAT and body weight. Conclusion Aerobic and intermittent variables correlated least strongly with race times. Their use in a multiple regression model confirmed additional explanatory power to predict XCM performance. These findings underline the usefulness of the comprehensive incremental test to predict performance in that sport more precisely. PMID:29387445

  11. Numerical modeling of particle generation from ozone reactions with human-worn clothing in indoor environments

    NASA Astrophysics Data System (ADS)

    Rai, Aakash C.; Lin, Chao-Hsin; Chen, Qingyan

    2015-02-01

    Ozone-terpene reactions are important sources of indoor ultrafine particles (UFPs), a potential health hazard for human beings. Humans themselves act as possible sites for ozone-initiated particle generation through reactions with squalene (a terpene) that is present in their skin, hair, and clothing. This investigation developed a numerical model to probe particle generation from ozone reactions with clothing worn by humans. The model was based on particle generation measured in an environmental chamber as well as physical formulations of particle nucleation, condensational growth, and deposition. In five out of the six test cases, the model was able to predict particle size distributions reasonably well. The failure in the remaining case demonstrated the fundamental limitations of nucleation models. The model that was developed was used to predict particle generation under various building and airliner cabin conditions. These predictions indicate that ozone reactions with human-worn clothing could be an important source of UFPs in densely occupied classrooms and airliner cabins. Those reactions could account for about 40% of the total UFPs measured on a Boeing 737-700 flight. The model predictions at this stage are indicative and should be improved further.
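
    The competition between particle generation and removal in such a model can be reduced to a zero-dimensional number balance (rates below are hypothetical, not the chamber-fitted values): dN/dt = J − βN, which relaxes to the steady state N = J/β.

```python
# Zero-dimensional ultrafine-particle number balance: dN/dt = J - beta * N.
J = 50.0       # nucleation source strength, particles/(cm^3 s)  (hypothetical)
beta = 0.005   # first-order loss rate (deposition + air exchange), 1/s (hypothetical)

N = 0.0
dt = 1.0       # s, explicit Euler time step (stable since beta*dt << 1)
for _ in range(10_000):
    N += dt * (J - beta * N)

N_ss = J / beta   # analytic steady state
```

With these illustrative numbers the source sustains a steady concentration of about 10,000 particles per cm³; the full model replaces this scalar balance with size-resolved nucleation, condensational growth and deposition.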

  12. Assessment of PDF Micromixing Models Using DNS Data for a Two-Step Reaction

    NASA Astrophysics Data System (ADS)

    Tsai, Kuochen; Chakrabarti, Mitali; Fox, Rodney O.; Hill, James C.

    1996-11-01

    Although the probability density function (PDF) method is known to treat the chemical reaction terms exactly, its application to turbulent reacting flows have been overshadowed by the ability to model the molecular mixing terms satisfactorily. In this study, two PDF molecular mixing models, the linear-mean-square-estimation (LMSE or IEM) model and the generalized interaction-by-exchange-with-the-mean (GIEM) model, are compared with the DNS data in decaying turbulence with a two-step parallel-consecutive reaction and two segregated initial conditions: ``slabs" and ``blobs". Since the molecular mixing model is expected to have a strong effect on the mean values of chemical species under such initial conditions, the model evaluation is intended to answer the following questions: Can the PDF models predict the mean values of chemical species correctly with completely segregated initial conditions? (2) Is a single molecular mixing timescale sufficient for the PDF models to predict the mean values with different initial conditions? (3) Will the chemical reactions change the molecular mixing timescales of the reacting species enough to affect the accuracy of the model's prediction for the mean values of chemical species?

  13. Ethical leadership: meta-analytic evidence of criterion-related and incremental validity.

    PubMed

    Ng, Thomas W H; Feldman, Daniel C

    2015-05-01

    This study examines the criterion-related and incremental validity of ethical leadership (EL) with meta-analytic data. Across 101 samples published over the last 15 years (N = 29,620), we observed that EL demonstrated acceptable criterion-related validity with variables that tap followers' job attitudes, job performance, and evaluations of their leaders. Further, followers' trust in the leader mediated the relationships of EL with job attitudes and performance. In terms of incremental validity, we found that EL significantly, albeit weakly in some cases, predicted task performance, citizenship behavior, and counterproductive work behavior-even after controlling for the effects of such variables as transformational leadership, use of contingent rewards, management by exception, interactional fairness, and destructive leadership. The article concludes with a discussion of ways to strengthen the incremental validity of EL. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  14. A diffusion-reaction scheme for modeling ignition and self-propagating reactions in Al/CuO multilayered thin films

    NASA Astrophysics Data System (ADS)

    Lahiner, Guillaume; Nicollet, Andrea; Zapata, James; Marín, Lorena; Richard, Nicolas; Rouhani, Mehdi Djafari; Rossi, Carole; Estève, Alain

    2017-10-01

    Thermite multilayered films have the potential to be used as local high intensity heat sources for a variety of applications. Improving the ability of researchers to more rapidly develop Micro Electro Mechanical Systems devices based on thermite multilayer films requires predictive modeling in which an understanding of the relationship between the properties (ignition and flame propagation), the multilayer structure and composition (bilayer thicknesses, ratio of reactants, and nature of interfaces), and aspects related to integration (substrate conductivity and ignition apparatus) is achieved. Assembling all these aspects, this work proposes an original 2D diffusion-reaction modeling framework to predict the ignition threshold and reaction dynamics of Al/CuO multilayered thin films. This model takes into consideration that CuO first decomposes into Cu2O, and then, released oxygen diffuses across the Cu2O and Al2O3 layers before reacting with pure Al to form Al2O3. This model is experimentally validated from ignition and flame velocity data acquired on Al/CuO multilayers deposited on a Kapton layer. This paper discusses, for the first time, the importance of determining the ceiling temperature above which the multilayers disintegrate, possibly before their complete combustion, thus severely impacting the reaction front velocity and energy release. This work provides a set of heating surface areas to obtain the best ignition conditions, i.e., with minimal ignition power, as a function of the substrate type.

  15. Multi-purpose wind tunnel reaction control model block

    NASA Technical Reports Server (NTRS)

    Dresser, H. S.; Daileda, J. J. (Inventor)

    1978-01-01

    A reaction control system nozzle block is provided for testing the response characteristics of space vehicles to a variety of reaction control thruster configurations. A pressurized air system is connected with the supply lines which lead to the individual jet nozzles. Each supply line terminates in a compact cylindrical plenum volume, axially perpendicular and adjacent to the throat of the jet nozzle. The volume of the cylindrical plenum is sized to provide uniform thrust characteristics from each jet nozzle irrespective of the angle of approach of the supply line to the plenum. Each supply line may be plugged or capped to stop the air supply to selected jet nozzles, thereby enabling a variety of nozzle configurations to be obtained from a single model nozzle block.

  16. A measurement model for general noise reaction in response to aircraft noise.

    PubMed

    Kroesen, Maarten; Schreckenberg, Dirk

    2011-01-01

    In this paper a measurement model for general noise reaction (GNR) in response to aircraft noise is developed to assess the performance of aircraft noise annoyance and a direct measure of general reaction as indicators of this concept. For this purpose GNR is conceptualized as a superordinate latent construct underlying particular manifestations. This conceptualization is empirically tested through estimation of a second-order factor model. Data from a community survey at Frankfurt Airport are used for this purpose (N=2206). The data fit the hypothesized factor structure well and support the conceptualization of GNR as a superordinate construct. It is concluded that noise annoyance and a direct measure of general reaction to noise capture a large part of the negative feelings and emotions in response to aircraft noise but are unable to capture all relevant variance. The paper concludes with recommendations for the valid measurement of community reaction and several directions for further research.

  17. Accounting for between-study variation in incremental net benefit in value of information methodology.

    PubMed

    Willan, Andrew R; Eckermann, Simon

    2012-10-01

    Previous applications of value of information methods for determining optimal sample size in randomized clinical trials have assumed no between-study variation in mean incremental net benefit. By adopting a hierarchical model, we provide a solution for determining optimal sample size with this assumption relaxed. The solution is illustrated with two examples from the literature. Expected net gain increases with increasing between-study variation, reflecting the increased uncertainty in incremental net benefit and reduced extent to which data are borrowed from previous evidence. Hence, a trial can become optimal where current evidence is sufficient assuming no between-study variation. However, despite the expected net gain increasing, the optimal sample size in the illustrated examples is relatively insensitive to the amount of between-study variation. Further percentage losses in expected net gain were small even when choosing sample sizes that reflected widely different between-study variation. Copyright © 2011 John Wiley & Sons, Ltd.

  18. Recent developments of the quantum chemical cluster approach for modeling enzyme reactions.

    PubMed

    Siegbahn, Per E M; Himo, Fahmi

    2009-06-01

    The quantum chemical cluster approach for modeling enzyme reactions is reviewed. Recent applications have used cluster models much larger than before which have given new modeling insights. One important and rather surprising feature is the fast convergence with cluster size of the energetics of the reactions. Even for reactions with significant charge separation it has in some cases been possible to obtain full convergence in the sense that dielectric cavity effects from outside the cluster do not contribute to any significant extent. Direct comparisons between quantum mechanics (QM)-only and QM/molecular mechanics (MM) calculations for quite large clusters in a case where the results differ significantly have shown that care has to be taken when using the QM/MM approach where there is strong charge polarization. Insights from the methods used, generally hybrid density functional methods, have also led to possibilities to give reasonable error limits for the results. Examples are finally given from the most extensive study using the cluster model, the one of oxygen formation at the oxygen-evolving complex in photosystem II.

  19. Neuromorphic transistor achieved by redox reaction of WO3 thin film

    NASA Astrophysics Data System (ADS)

    Tsuchiya, Takashi; Jayabalan, Manikandan; Kawamura, Kinya; Takayanagi, Makoto; Higuchi, Tohru; Jayavel, Ramasamy; Terabe, Kazuya

    2018-04-01

    An all-solid-state neuromorphic transistor composed of a WO3 thin film and a proton-conducting electrolyte was fabricated for application to next-generation information and communication technology including artificial neural networks. The drain current exhibited a 4-order-of-magnitude increment by redox reaction of the WO3 thin film owing to proton migration. Learning and forgetting characteristics were well tuned by the gate control of WO3 redox reactions owing to the separation of the current reading path and pulse application path in the transistor structure. This technique should lead to the development of versatile and low-power-consumption neuromorphic devices.

  20. The Space Station decision - Incremental politics and technological choice

    NASA Technical Reports Server (NTRS)

    Mccurdy, Howard E.

    1990-01-01

    Using primary documents and interviews with participants, this book describes the events that led up to the 1984 decision that NASA should build a permanently occupied, international space station in low earth orbit. The role that civil servants in NASA played in initiating the program is highlighted. The trail of the Space Station proposal as its advocates devised strategies to push it through the White House policy review process is followed. The critical analysis focuses on the way in which 'incrementalism' (the tendency of policy makers to introduce incremental changes once projects are under way) operated in connection with the Space Station program. The book calls for a commitment to a long-range space policy.

  1. Reynolds number scaling of velocity increments in isotropic turbulence.

    PubMed

    Iyer, Kartik P; Sreenivasan, Katepalli R; Yeung, P K

    2017-02-01

    Using the largest database of isotropic turbulence available to date, generated by the direct numerical simulation (DNS) of the Navier-Stokes equations on an 8192^{3} periodic box, we show that the longitudinal and transverse velocity increments scale identically in the inertial range. By examining the DNS data at several Reynolds numbers, we infer that the contradictory results of the past on the inertial-range universality are artifacts of low Reynolds number and residual anisotropy. We further show that both longitudinal and transverse velocity increments scale on locally averaged dissipation rate, just as postulated by Kolmogorov's refined similarity hypothesis, and that, in isotropic turbulence, a single independent scaling adequately describes fluid turbulence in the inertial range.
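
    Longitudinal and transverse velocity increments of the kind analyzed above are simple to compute on gridded data; a toy sketch on a synthetic (non-turbulent) field, just to fix the definitions:

```python
import numpy as np

def structure_functions(u, v, r, p=2):
    """p-th order longitudinal and transverse structure functions at
    separation r (in grid units) for a 2D velocity field (u, v) with
    periodic boundaries. Longitudinal: increment of u along x;
    transverse: increment of v along x."""
    du_l = np.roll(u, -r, axis=1) - u   # longitudinal increment
    du_t = np.roll(v, -r, axis=1) - v   # transverse increment
    return np.mean(np.abs(du_l) ** p), np.mean(np.abs(du_t) ** p)

# Synthetic random field just to exercise the function (real analyses
# would use DNS velocity fields, as in the study above).
rng = np.random.default_rng(1)
n = 128
u = rng.standard_normal((n, n))
v = rng.standard_normal((n, n))
s2_l, s2_t = structure_functions(u, v, r=4)
print(s2_l, s2_t)  # both ≈ 2 for unit-variance white noise
```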

  2. Apparatus for electrical-assisted incremental forming and process thereof

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roth, John; Cao, Jian

    A process and apparatus for forming a sheet metal component using an electric current passing through the component. The process can include providing an incremental forming machine, the machine having at least one arcuate tipped tool and at least one electrode spaced a predetermined distance from the arcuate tipped tool. The machine is operable to perform a plurality of incremental deformations on the sheet metal component using the arcuate tipped tool. The machine is also operable to apply an electric direct current through the electrode into the sheet metal component at the predetermined distance from the arcuate tipped tool while the machine is forming the sheet metal component.

  3. A fully coupled 3D transport model in SPH for multi-species reaction-diffusion systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adami, Stefan; Hu, X. Y.; Adams, N. A.

    2011-08-23

    In this paper we present a fully generalized transport model for multiple species in complex two- and three-dimensional geometries. Based on previous work [1], we have extended our interfacial reaction-diffusion model to handle arbitrary numbers of species, allowing for coupled reaction models. Each species is tracked independently, and we consider the different physics of a species with respect to the bulk phases in contact. We use our SPH model to simulate the reaction-diffusion problem at the pore-scale level of a solid oxide fuel cell (SOFC), with special emphasis on the effect of surface diffusion.
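
    The coupled multi-species reaction-diffusion problem described above can be illustrated with a much simpler discretization; the sketch below uses an explicit 1D finite-difference grid (a stand-in, not the paper's SPH formulation) for two species consumed by a bimolecular reaction:

```python
import numpy as np

def step(a, b, Da, Db, k, dx, dt):
    """Explicit finite-difference step for the coupled system
    da/dt = Da * a_xx - k*a*b,  db/dt = Db * b_xx - k*a*b
    on a 1D periodic grid (grid-based stand-in for an SPH discretization)."""
    lap = lambda f: (np.roll(f, 1) - 2 * f + np.roll(f, -1)) / dx**2
    r = k * a * b                      # local bimolecular reaction rate
    return a + dt * (Da * lap(a) - r), b + dt * (Db * lap(b) - r)

n, dx, dt = 100, 1.0, 0.1              # dt*Da/dx^2 = 0.1 keeps it stable
a = np.zeros(n); a[:50] = 1.0          # species A on the left half
b = np.zeros(n); b[50:] = 1.0          # species B on the right half
for _ in range(500):
    a, b = step(a, b, Da=1.0, Db=0.5, k=0.1, dx=dx, dt=dt)

# The reaction consumes A and B in equal amounts, so the difference of
# their totals is conserved even though both totals decrease.
print(round(abs(a.sum() - b.sum()), 6))  # → 0.0
```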

  4. Molecular modeling of the reaction pathway and hydride transfer reactions of HMG-CoA reductase.

    PubMed

    Haines, Brandon E; Steussy, C Nicklaus; Stauffacher, Cynthia V; Wiest, Olaf

    2012-10-09

    HMG-CoA reductase catalyzes the four-electron reduction of HMG-CoA to mevalonate and is an enzyme of considerable biomedical relevance because of the impact of its statin inhibitors on public health. Although the reaction has been studied extensively using X-ray crystallography, there are surprisingly no computational studies that test the mechanistic hypotheses suggested for this complex reaction. Theozyme and quantum mechanical (QM)/molecular mechanical (MM) calculations up to the B3LYP/6-31g(d,p)//B3LYP/6-311++g(2d,2p) level of theory were employed to generate an atomistic description of the enzymatic reaction process and its energy profile. The models generated here predict that the catalytically important Glu83 is protonated prior to hydride transfer and that it acts as the general acid or base in the reaction. With Glu83 protonated, the activation energies calculated for the sequential hydride transfer reactions, 21.8 and 19.3 kcal/mol, are in qualitative agreement with the experimentally determined rate constant for the entire reaction (1 s(-1) to 1 min(-1)). When Glu83 is not protonated, the first hydride transfer reaction is predicted to be disfavored by >20 kcal/mol, and the activation energy is predicted to be higher by >10 kcal/mol. While not involved in the reaction as an acid or base, Lys267 is critical for stabilization of the transition state in forming an oxyanion hole with the protonated Glu83. Molecular dynamics simulations and MM/Poisson-Boltzmann surface area free energy calculations predict that the enzyme active site stabilizes the hemithioacetal intermediate better than the aldehyde intermediate. This suggests a mechanism in which cofactor exchange occurs before the breakdown of the hemithioacetal. Slowing the conversion to aldehyde would provide the enzyme with a mechanism to protect it from solvent and explain why the free aldehyde is not observed experimentally. Our results support the hypothesis that the pK(a) of an active site acidic

  5. Incremental comprehension of spoken quantifier sentences: Evidence from brain potentials.

    PubMed

    Freunberger, Dominik; Nieuwland, Mante S

    2016-09-01

    Do people incrementally incorporate the meaning of quantifier expressions to understand an unfolding sentence? Most previous studies concluded that quantifiers do not immediately influence how a sentence is understood, based on the observation that online N400-effects differed from offline plausibility judgments. Those studies, however, used serial visual presentation (SVP), which involves unnatural reading. In the current ERP experiment, we presented spoken positive and negative quantifier sentences ("Practically all/practically no postmen prefer delivering mail, when the weather is good/bad during the day"). Different from results obtained in a previously reported SVP study (Nieuwland, 2016), sentence truth-value N400 effects occurred in positive and negative quantifier sentences alike, reflecting fully incremental quantifier comprehension. This suggests that the prosodic information available during spoken language comprehension supports the generation of online predictions for upcoming words and that, at least for quantifier sentences, comprehension of spoken language may proceed more incrementally than comprehension during SVP reading. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  6. 7Li-induced reaction on natMo: A study of complete versus incomplete fusion

    NASA Astrophysics Data System (ADS)

    Kumar, Deepak; Maiti, Moumita; Lahiri, Susanta

    2017-07-01

    Background: Several investigations of the complete-incomplete fusion (CF-ICF) dynamics of well-bound α-cluster nuclei have been carried out above the Coulomb barrier (~4-7 MeV/nucleon) in recent years. It is therefore expected that significant ICF over CF will be observed in reactions induced by a weakly bound α-cluster nucleus at energies slightly above the barrier. Purpose: Study of the CF-ICF dynamics by measuring the populated residues in the weakly bound 7Li+natMo system at energies from slightly above the Coulomb barrier to well above it. Method: To investigate CF-ICF in the loosely bound system, a 7Li beam was bombarded on natMo foils, separated alternately by aluminium (Al) catcher foils, within ~3-6.5 MeV/nucleon. Evaporation residues produced in each foil were identified by off-line γ-ray spectrometry. Measured cross section data of the residues were compared with theoretical model calculations based on equilibrium (EQ) and pre-equilibrium (PEQ) reaction mechanisms. Results: The experimental cross sections of the 101m,100,99m,97Rh, 97,95Ru, 99m,96,95,94,93m+gTc, and 93mMo residues measured at various projectile energies were satisfactorily reproduced by the simplified coupled channel approach, in comparison to single barrier penetration model calculations. Significant cross section enhancement in the α-emitting channels was observed compared to EQ and PEQ model calculations throughout the observed energy region. The ICF process over CF was analyzed by comparison with EMPIRE. The incomplete fusion fraction was observed to increase with increasing projectile energy. Conclusions: Theoretical model calculations reveal that the compound reaction mechanism is the major contributor to the production of residues in the 7Li+natMo reaction. Theoretical evaluations substantiate the contribution of ICF over CF in α-emitting channels. EMPIRE estimations shed light

  7. Prediction of hot regions in protein-protein interaction by combining density-based incremental clustering with feature-based classification.

    PubMed

    Hu, Jing; Zhang, Xiaolong; Liu, Xiaoming; Tang, Jinshan

    2015-06-01

    Discovering hot regions in protein-protein interaction is important for drug and protein design, while experimental identification of hot regions is a time-consuming and labor-intensive effort; thus, the development of predictive models can be very helpful. In hot region prediction research, some models are based on structure information, and others are based on a protein interaction network. However, the prediction accuracy of these methods can still be improved. In this paper, a new method is proposed for hot region prediction, which combines density-based incremental clustering with feature-based classification. The method uses density-based incremental clustering to obtain rough hot regions, and uses feature-based classification to remove the non-hot spot residues from the rough hot regions. Experimental results show that the proposed method significantly improves the prediction performance of hot regions. Copyright © 2015 Elsevier Ltd. All rights reserved.
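
    The prediction pipeline above combines density-based incremental clustering with feature-based classification; the clustering stage can be sketched as follows (a toy, hypothetical simplification — the published method works on protein structural features and then prunes non-hot-spot residues with a classifier):

```python
import numpy as np

def incremental_density_cluster(points, eps=1.5, min_pts=3):
    """Toy density-based incremental clustering: each arriving point joins
    the first existing cluster that has a member within eps, otherwise it
    starts a new candidate cluster; sparse candidates are discarded at the
    end. A much-simplified stand-in for the published method."""
    clusters = []
    for p in points:
        for c in clusters:
            if any(np.linalg.norm(p - q) <= eps for q in c):
                c.append(p)      # point is density-reachable from cluster c
                break
        else:
            clusters.append([p])  # start a new candidate cluster
    return [c for c in clusters if len(c) >= min_pts]  # keep dense regions

# Two tight groups of residues (2D toy coordinates) plus one outlier.
region1 = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.5], [0.4, 0.4]])
region2 = region1 + 8.0
outlier = np.array([[4.0, 4.0]])
pts = np.vstack([region1, outlier, region2])
print(len(incremental_density_cluster(pts)))  # → 2
```

    The incremental formulation means new residues can be folded into existing rough hot regions without re-clustering from scratch, which is the property the paper exploits.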

  8. Anaerobic Threshold and Salivary α-amylase during Incremental Exercise.

    PubMed

    Akizuki, Kazunori; Yazaki, Syouichirou; Echizenya, Yuki; Ohashi, Yukari

    2014-07-01

    [Purpose] The purpose of this study was to clarify the validity of salivary α-amylase as a method of quickly estimating anaerobic threshold and to establish the relationship between salivary α-amylase and double-product breakpoint in order to create a way to adjust exercise intensity to a safe and effective range. [Subjects and Methods] Eleven healthy young adults performed an incremental exercise test using a cycle ergometer. During the incremental exercise test, oxygen consumption, carbon dioxide production, and ventilatory equivalent were measured using a breath-by-breath gas analyzer. Systolic blood pressure and heart rate were measured to calculate the double product, from which double-product breakpoint was determined. Salivary α-amylase was measured to calculate the salivary threshold. [Results] One-way ANOVA revealed no significant differences among workloads at the anaerobic threshold, double-product breakpoint, and salivary threshold. Significant correlations were found between anaerobic threshold and salivary threshold and between anaerobic threshold and double-product breakpoint. [Conclusion] As a method for estimating anaerobic threshold, salivary threshold was as good as or better than determination of double-product breakpoint because the correlation between anaerobic threshold and salivary threshold was higher than the correlation between anaerobic threshold and double-product breakpoint. Therefore, salivary threshold is a useful index of anaerobic threshold during an incremental workload.
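
    The double-product breakpoint used above is the workload at which the double product (systolic blood pressure × heart rate) changes slope; a common way to locate such a breakpoint is a two-segment least-squares fit. A sketch with synthetic data (the 120 W breakpoint and all values are illustrative, not from the study):

```python
import numpy as np

def fit_err(x, y):
    """Sum of squared residuals of a straight-line fit."""
    coef = np.polyfit(x, y, 1)
    return np.sum((np.polyval(coef, x) - y) ** 2)

def find_breakpoint(workload, dp):
    """Fit two line segments joined at each candidate interior point and
    return the workload where the total squared error is smallest."""
    best, best_err = None, np.inf
    for k in range(2, len(workload) - 2):
        err = fit_err(workload[:k + 1], dp[:k + 1]) + fit_err(workload[k:], dp[k:])
        if err < best_err:
            best, best_err = workload[k], err
    return best

# Synthetic incremental test: DP rises slowly, then steeply after 120 W.
w = np.arange(30, 211, 15)
dp = np.where(w <= 120, 8000 + 20 * (w - 30), 9800 + 80 * (w - 120))
print(find_breakpoint(w, dp))  # → 120
```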

  9. Estimation of incremental reactivities for multiple day scenarios: an application to ethane and dimethoxymethane

    NASA Astrophysics Data System (ADS)

    Stockwell, William R.; Geiger, Harald; Becker, Karl H.

    Single-day scenarios are by definition used to calculate incremental reactivities (Carter, J. Air Waste Management Assoc. 44 (1994) 881-899), but even unreactive organic compounds may have a non-negligible effect on ozone concentrations if multiple-day scenarios are considered. The concentrations of unreactive compounds and their products may build up over a multiple-day period, and the oxidation products may be highly reactive or highly unreactive, affecting the overall incremental reactivity of the organic compound. We have developed a method for calculating incremental reactivities for multiple days based on a standard scenario for polluted European conditions. This method was used to estimate maximum incremental reactivities (MIR) and maximum ozone incremental reactivities (MOIR) for ethane and dimethoxymethane for scenarios ranging from 1 to 6 days. It was found that the incremental reactivities increased as the length of the simulation period increased. The MIR of ethane increased faster than the value for dimethoxymethane as the scenarios became longer. The MOIRs of ethane and dimethoxymethane increased, but the change was more modest for scenarios longer than 3 days. MOIRs of both volatile organic compounds were equal within the uncertainties of their chemical mechanisms by the 5-day scenario. These results show that dimethoxymethane has an ozone-forming potential on a per-mass basis that is only somewhat greater than that of ethane if multiple-day scenarios are considered.
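
    Incremental reactivity is the change in peak ozone per unit of added VOC in a given scenario; extending it to multiple days just means evaluating the same finite difference over a longer simulation. A schematic sketch (the `run_model` interface, species names, and slope values are all hypothetical):

```python
def incremental_reactivity(run_model, base_voc_emissions, species, delta=1e-3):
    """IR = (O3(base + dVOC) - O3(base)) / dVOC, where run_model is a
    stand-in for a full photochemical scenario simulation
    (hypothetical interface)."""
    o3_base = run_model(base_voc_emissions)
    perturbed = dict(base_voc_emissions)
    perturbed[species] = perturbed.get(species, 0.0) + delta
    return (run_model(perturbed) - o3_base) / delta

# Toy "model": peak O3 responds linearly to each VOC with a fixed slope.
slopes = {"ethane": 0.28, "dimethoxymethane": 0.35}  # illustrative numbers
def toy_model(emissions):
    return 40.0 + sum(slopes.get(s, 0.0) * m for s, m in emissions.items())

ir = incremental_reactivity(toy_model, {"ethane": 10.0}, "dimethoxymethane")
print(round(ir, 3))  # → 0.35
```

    In a real multi-day MIR/MOIR calculation, `run_model` would integrate the chemical mechanism over the full scenario length, which is why the reactivities above grow with the number of simulated days.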

  10. [Spatiotemporal variation of Populus euphratica's radial increment at lower reaches of Tarim River after ecological water transfer].

    PubMed

    An, Hong-Yan; Xu, Hai-Liang; Ye, Mao; Yu, Pu-Ji; Gong, Jun-Jun

    2011-01-01

    Taking Populus euphratica at the lower reaches of the Tarim River as the test object and using dendrohydrological methods, this paper studied the spatiotemporal variation of P. euphratica's branch radial increment after ecological water transfer. There was a significant difference in the mean radial increment before and after ecological water transfer. The radial increment after the eco-water transfer was increased by 125%, compared with that before the water transfer. During the period of ecological water transfer, the radial increment increased with increasing water transfer quantity, and there was a positive correlation between the annual radial increment and the total water transfer quantity (R2 = 0.394), suggesting that the radial increment of P. euphratica could be taken as a performance indicator of ecological water transfer. After the ecological water transfer, the radial increment changed greatly with the distance to the river, i.e., it decreased significantly with increasing distance to the river (P = 0.007). The branch radial increment also differed with stream segment (P = 0.017), i.e., the closer to the head-water point (Daxihaizi Reservoir), the greater the branch radial increment. It was considered that the limited effect of the current ecological water transfer could scarcely change the continually deteriorating situation of the lower reaches of the Tarim River.

  11. Reaction-diffusion processes and metapopulation models on duplex networks

    NASA Astrophysics Data System (ADS)

    Xuan, Qi; Du, Fang; Yu, Li; Chen, Guanrong

    2013-03-01

    Reaction-diffusion processes, used to model various spatially distributed dynamics such as epidemics, have been studied mostly on regular lattices or complex networks with simplex links that are identical and invariant in transferring different kinds of particles. However, in many self-organized systems, different particles may have their own private channels to keep their purities. Such division of links often significantly influences the underlying reaction-diffusion dynamics and thus needs to be carefully investigated. This article studies a special reaction-diffusion process, named susceptible-infected-susceptible (SIS) dynamics, given by the reaction steps β→α and α+β→2β, on duplex networks where links are classified into two groups: α and β links used to transfer α and β particles, which, along with the corresponding nodes, consist of an α subnetwork and a β subnetwork, respectively. It is found that the critical point of particle density to sustain reaction activity is independent of the network topology if there is no correlation between the degree sequences of the two subnetworks, and this critical value is suppressed or extended if the two degree sequences are positively or negatively correlated, respectively. Based on the obtained results, it is predicted that epidemic spreading may be promoted on positive correlated traffic networks but may be suppressed on networks with modules composed of different types of diffusion links.
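
    The SIS dynamics described above (β→α and α+β→2β, with each species diffusing only along its own layer's links) can be sketched as a mean-field iteration on two row-normalized adjacency matrices (illustrative rates and random dense layers, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(2)

def duplex_sis_step(alpha, beta, A_alpha, A_beta, d=0.5, lam=0.2, mu=0.1):
    """One mean-field step of SIS (beta -> alpha, alpha + beta -> 2 beta)
    on a duplex network: alpha particles diffuse on A_alpha, beta on
    A_beta. alpha, beta are particle counts per node; A_* are
    row-normalized transition matrices (illustrative rates)."""
    # Diffusion: a fraction d of each species moves along its own layer.
    alpha = (1 - d) * alpha + d * A_alpha.T @ alpha
    beta = (1 - d) * beta + d * A_beta.T @ beta
    # Reactions within each node (mean-field rates); both steps conserve
    # the total particle number, only the alpha/beta split changes.
    infect = lam * alpha * beta
    recover = mu * beta
    return alpha - infect + recover, beta + infect - recover

n = 20
A1 = rng.random((n, n)); A2 = rng.random((n, n))
A1 /= A1.sum(axis=1, keepdims=True)   # row-normalize each layer
A2 /= A2.sum(axis=1, keepdims=True)
alpha = np.full(n, 5.0); beta = np.full(n, 1.0)
for _ in range(50):
    alpha, beta = duplex_sis_step(alpha, beta, A1, A2)
print(round(alpha.sum() + beta.sum(), 6))  # → 120.0 (particles conserved)
```

    Correlating the degree sequences of the two layers amounts to correlating the columns of `A1` and `A2`, which is how the critical-density shifts reported above would be probed.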

  12. Prediction of Enzyme Mutant Activity Using Computational Mutagenesis and Incremental Transduction

    PubMed Central

    Basit, Nada; Wechsler, Harry

    2011-01-01

    Wet laboratory mutagenesis to determine enzyme activity changes is expensive and time consuming. This paper expands on standard one-shot learning by proposing an incremental transductive method (T2bRF) for the prediction of enzyme mutant activity during mutagenesis using Delaunay tessellation and 4-body statistical potentials for representation. Incremental learning is in tune with both eScience and actual experimentation, as it accounts for cumulative annotation effects of enzyme mutant activity over time. The experimental results reported, using cross-validation, show that overall the incremental transductive method proposed, using random forest as base classifier, yields better results compared to one-shot learning methods. T2bRF is shown to yield 90% on T4 and LAC (and 86% on HIV-1). This is significantly better than state-of-the-art competing methods, whose performance yield is at 80% or less using the same datasets. PMID:22007208

  13. 48 CFR 3432.771 - Provision for incremental funding.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    48 CFR (Federal Acquisition Regulations System), Vol. 7 (2010-10-01): Section 3432.771, Provision for incremental funding. Department of Education Acquisition Regulation, General Contracting Requirements, Contract Financing, Contract Funding.

  14. Potential stocks and increments of woody biomass in the European Union under different management and climate scenarios.

    PubMed

    Kindermann, Georg E; Schörghuber, Stefan; Linkosalo, Tapio; Sanchez, Anabel; Rammer, Werner; Seidl, Rupert; Lexer, Manfred J

    2013-02-01

    Forests play an important role in the global carbon flow. They can store carbon and can also provide wood, which can substitute for other materials. In EU27 the standing biomass is steadily increasing. Increments and harvests seem to have reached a plateau between 2005 and 2010. One reason for reaching this plateau is that the forests are getting older. High stand ages have the advantage of typically high carbon stocks and the disadvantage of decreasing increment rates. This study investigates how biomass stocks, harvests, and increments will develop under different climate scenarios and two management scenarios, one forcing forests to store large amounts of biomass and the other aiming at high increment rates and large wood harvests. A management which is maximising standing biomass will raise the stem wood carbon stocks from 30 tC/ha to 50 tC/ha until 2100. A management which is maximising increments will lower the stock to 20 tC/ha until 2100. The estimates for the climate scenarios A1b, B1 and E1 differ, but the management target has a much larger effect than the climate scenario. By maximising increments, the harvests are 0.4 tC/ha/year higher than under the management which maximises standing biomass. The increments until 2040 are close together, but around 2100 the increments when maximising standing biomass are approximately 50% lower than those when maximising increments. Cold regions will benefit from the changes in the climate scenarios by showing higher increments. The results of this study suggest that forest management should maximise increments, not stocks, to be more efficient in the sense of climate change mitigation. This is true especially for regions which already have high carbon stocks in forests, as is the case in many regions of Europe. During the time span 2010-2100 the forests of EU27 will absorb an additional 1750 million tC if they are managed to maximise increments compared

  15. Incremental Scheduling Engines: Cost Savings through Automation

    NASA Technical Reports Server (NTRS)

    Jaap, John; Phillips, Shaun

    2005-01-01

    As humankind embarks on longer space missions farther from home, the requirements and environments for scheduling the activities performed on these missions are changing. As we begin to prepare for these missions, it is appropriate to evaluate the merits and applicability of the different types of scheduling engines. Scheduling engines temporally arrange tasks onto a timeline so that all constraints and objectives are met and resources are not over-booked. Scheduling engines used to schedule space missions fall into three general categories: batch, mixed-initiative, and incremental. This paper presents an assessment of the engine types, a discussion of the impact of human exploration of the moon and Mars on planning and scheduling, and the applicability of the different types of scheduling engines. This paper pursues the hypothesis that incremental scheduling engines may have a place in the new environment; they have the potential to reduce cost, to improve the satisfaction of those who execute or benefit from a particular timeline (the customers), and to allow astronauts to plan their own tasks and those of their companion robots.

  16. Echocardiography and risk prediction in advanced heart failure: incremental value over clinical markers.

    PubMed

    Agha, Syed A; Kalogeropoulos, Andreas P; Shih, Jeffrey; Georgiopoulou, Vasiliki V; Giamouzis, Grigorios; Anarado, Perry; Mangalat, Deepa; Hussain, Imad; Book, Wendy; Laskar, Sonjoy; Smith, Andrew L; Martin, Randolph; Butler, Javed

    2009-09-01

    Incremental value of echocardiography over clinical parameters for outcome prediction in advanced heart failure (HF) is not well established. We evaluated 223 patients with advanced HF receiving optimal therapy (91.9% angiotensin-converting enzyme inhibitor/angiotensin receptor blocker, 92.8% beta-blockers, 71.8% biventricular pacemaker, and/or defibrillator use). The Seattle Heart Failure Model (SHFM) was used as the reference clinical risk prediction scheme. The incremental value of echocardiographic parameters for event prediction (death or urgent heart transplantation) was measured by the improvement in fit and discrimination achieved by addition of standard echocardiographic parameters to the SHFM. After a median follow-up of 2.4 years, there were 38 (17.0%) events (35 deaths; 3 urgent transplants). The SHFM had likelihood ratio (LR) chi(2) 32.0 and C statistic 0.756 for event prediction. Left ventricular end-systolic volume, stroke volume, and severe tricuspid regurgitation were independent echocardiographic predictors of events. The addition of these parameters to SHFM improved LR chi(2) to 72.0 and C statistic to 0.866 (P < .001 and P=.019, respectively). Reclassifying the SHFM-predicted risk with use of the echocardiography-added model resulted in improved prognostic separation. Addition of standard echocardiographic variables to the SHFM results in significant improvement in risk prediction for patients with advanced HF.

  17. A Computational Approach to Increase Time Scales in Brownian Dynamics–Based Reaction-Diffusion Modeling

    PubMed Central

    Frazier, Zachary

    2012-01-01

    Particle-based Brownian dynamics simulations offer the opportunity to not only simulate diffusion of particles but also the reactions between them. They therefore provide an opportunity to integrate varied biological data into spatially explicit models of biological processes, such as signal transduction or mitosis. However, particle-based reaction-diffusion methods often are hampered by the relatively small time step needed for accurate description of the reaction-diffusion framework. Such small time steps often prevent simulation times that are relevant for biological processes. It is therefore of great importance to develop reaction-diffusion methods that tolerate larger time steps while maintaining relatively high accuracy. Here, we provide an algorithm that detects potential particle collisions prior to a BD-based particle displacement and at the same time rigorously obeys the detailed balance rule of equilibrium reactions. We can show that for reaction-diffusion processes of particles mimicking proteins, the method can increase the typical BD time step by an order of magnitude while maintaining similar accuracy in the reaction diffusion modelling. PMID:22697237
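    The loop the paper accelerates interleaves Gaussian displacements with collision checks. A toy 1D version of those two ingredients (not the authors' algorithm, which additionally enforces detailed balance; names and parameters are ours):

    ```python
    import math
    import random

    def bd_step(positions, D, dt, rng):
        """One Brownian-dynamics step in 1D: each particle takes an
        independent Gaussian displacement with variance 2*D*dt."""
        sigma = math.sqrt(2.0 * D * dt)
        return [x + rng.gauss(0.0, sigma) for x in positions]

    def detect_collisions(positions, radius):
        """Flag particle pairs closer than the reaction radius; in a
        scheme like the paper's, such pairs would be handed to the
        reaction move before the displacement is accepted."""
        pairs = []
        for i in range(len(positions)):
            for j in range(i + 1, len(positions)):
                if abs(positions[i] - positions[j]) < radius:
                    pairs.append((i, j))
        return pairs
    ```

    After one step the mean-squared displacement of a large ensemble is close to 2*D*dt, which is the standard check that a BD integrator is scaled correctly.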

  18. Model creation of moving redox reaction boundary in agarose gel electrophoresis by traditional potassium permanganate method.

    PubMed

    Xie, Hai-Yang; Liu, Qian; Li, Jia-Hao; Fan, Liu-Yin; Cao, Cheng-Xi

    2013-02-21

    A novel moving redox reaction boundary (MRRB) model was developed for studying electrophoretic behaviors of analytes involving redox reaction on the principle of moving reaction boundary (MRB). The traditional potassium permanganate method was used to create the boundary model in agarose gel electrophoresis because of the rapid reaction rate associated with MnO(4)(-) ions and Fe(2+) ions. An MRB velocity equation was proposed to describe the general functional relationship between the velocity of the moving redox reaction boundary (V(MRRB)) and the concentration of reactant, and can be extrapolated to similar MRB techniques. Parameters affecting the redox reaction boundary were investigated in detail. Under the selected conditions, a good linear relationship between boundary movement distance and time was obtained. The potential application of MRRB in electromigration redox reaction titration was demonstrated at two different concentration levels. The precision of the V(MRRB) was studied and the relative standard deviations were below 8.1%, illustrating the good repeatability achieved in this experiment. The proposed MRRB model enriches the MRB theory and also provides a feasible realization of manual control of the redox reaction process in electrophoretic analysis.
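    Since the boundary displacement is linear in time, V(MRRB) is in effect the slope of a distance-time series. A least-squares sketch of that estimate (data values are hypothetical):

    ```python
    def fit_slope(times, distances):
        """Ordinary least-squares slope through (t, d) pairs; applied
        to a boundary-position series this yields the velocity
        estimate (here, V(MRRB))."""
        n = len(times)
        mt = sum(times) / n
        md = sum(distances) / n
        num = sum((t - mt) * (d - md) for t, d in zip(times, distances))
        den = sum((t - mt) ** 2 for t in times)
        return num / den
    ```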

  19. Diabatic models with transferrable parameters for generalized chemical reactions

    NASA Astrophysics Data System (ADS)

    Reimers, Jeffrey R.; McKemmish, Laura K.; McKenzie, Ross H.; Hush, Noel S.

    2017-05-01

    Diabatic models applied to adiabatic electron-transfer theory yield many equations involving just a few parameters that connect ground-state geometries and vibration frequencies to excited-state transition energies and vibration frequencies to the rate constants for electron-transfer reactions, utilizing properties of the conical-intersection seam linking the ground and excited states through the Pseudo Jahn-Teller effect. We review how such simplicity in basic understanding can also be obtained for general chemical reactions. The key feature that must be recognized is that electron-transfer (or hole transfer) processes typically involve one electron (hole) moving between two orbitals, whereas general reactions typically involve two electrons or even four electrons for processes in aromatic molecules. Each additional moving electron leads to new high-energy but interrelated conical-intersection seams that distort the shape of the critical lowest-energy seam. Recognizing this feature shows how conical-intersection descriptors can be transferred between systems, and how general chemical reactions can be compared using the same set of simple parameters. Mathematical relationships are presented depicting how different conical-intersection seams relate to each other, showing that complex problems can be reduced into an effective interaction between the ground-state and a critical excited state to provide the first semi-quantitative implementation of Shaik’s “twin state” concept. Applications are made (i) demonstrating why the chemistry of the first-row elements is qualitatively so different to that of the second and later rows, (ii) deducing the bond-length alternation in hypothetical cyclohexatriene from the observed UV spectroscopy of benzene, (iii) demonstrating that commonly used procedures for modelling surface hopping based on inclusion of only the first-derivative correction to the Born-Oppenheimer approximation are valid in no region of the chemical

  20. Factors for radical creativity, incremental creativity, and routine, noncreative performance.

    PubMed

    Madjar, Nora; Greenberg, Ellen; Chen, Zheng

    2011-07-01

    This study extends theory and research by differentiating between routine, noncreative performance and 2 distinct types of creativity: radical and incremental. We also use a sensemaking perspective to examine the interplay of social and personal factors that may influence a person's engagement in a certain level of creative action versus routine, noncreative work. Results demonstrate that willingness to take risks, resources for creativity, and career commitment are associated primarily with radical creativity; that the presence of creative coworkers and organizational identification are associated with incremental creativity; and that conformity and organizational identification are linked with routine performance. Theoretical and managerial implications are discussed.

  1. When Natural Disaster Follows Economic Downturn: The Incremental Impact of Multiple Stressor Events on Trajectories of Depression and Posttraumatic Stress Disorder.

    PubMed

    Mandavia, Amar D; Bonanno, George A

    2018-04-29

    To determine whether there were incremental mental health impacts, specifically on depression trajectories, as a result of the 2008 economic crisis (the Great Recession) and subsequent Hurricane Sandy. Using latent growth mixture modeling and the ORANJ BOWL dataset, we examined prospective trajectories of depression among older adults (mean age, 60.67; SD, 6.86) who were exposed to the 2 events. We also collected community economic and criminal justice data to examine their impact upon depression trajectories. Participants (N=1172) were assessed at 3 times for affect, successful aging, and symptoms of depression. We additionally assessed posttraumatic stress disorder (PTSD) symptomology after Hurricane Sandy. We identified 3 prospective trajectories of depression. The majority (83.6%) had no significant change in depression from before to after these events (resilience), while 7.2% of the sample increased in depression incrementally after each event (incremental depression). A third group (9.2%) went from high to low depression symptomology following the 2 events (depressive-improving). Only those in the incremental depression group had significant PTSD symptoms following Hurricane Sandy. We identified a small group of individuals for whom the experience of multiple stressful events had an incremental negative effect on mental health outcomes. These results highlight the importance of understanding the perseveration of depression symptomology from one event to another. (Disaster Med Public Health Preparedness. 2018;page 1 of 10).

  2. Modeling thermal spike driven reactions at low temperature and application to zirconium carbide radiation damage

    NASA Astrophysics Data System (ADS)

    Ulmer, Christopher J.; Motta, Arthur T.

    2017-11-01

    The development of TEM-visible damage in materials under irradiation at cryogenic temperatures cannot be explained using classical rate theory modeling with thermally activated reactions since at low temperatures thermal reaction rates are too low. Although point defect mobility approaches zero at low temperature, the thermal spikes induced by displacement cascades enable some atom mobility as they cool. In this work a model is developed to calculate "athermal" reaction rates from the atomic mobility within the irradiation-induced thermal spikes, including both displacement cascades and electronic stopping. The athermal reaction rates are added to a simple rate theory cluster dynamics model to allow for the simulation of microstructure evolution during irradiation at cryogenic temperatures. The rate theory model is applied to in-situ irradiation of ZrC and compares well with experiment at cryogenic temperatures. The results show that the addition of the thermal spike model makes it possible to rationalize microstructure evolution in the low temperature regime.

  3. Using Incremental Rehearsal to Increase Fluency of Single-Digit Multiplication Facts with Children Identified as Learning Disabled in Mathematics Computation

    ERIC Educational Resources Information Center

    Burns, Matthew K.

    2005-01-01

    Previous research suggested that Incremental Rehearsal (IR; Tucker, 1989) led to better retention than other drill practices models. However, little research exists in the literature regarding drill models for mathematics and no studies were found that used IR to practice multiplication facts. Therefore, the current study used IR as an…

  4. Optimization of Maillard Reaction in Model System of Glucosamine and Cysteine Using Response Surface Methodology

    PubMed Central

    Arachchi, Shanika Jeewantha Thewarapperuma; Kim, Ye-Joo; Kim, Dae-Wook; Oh, Sang-Chul; Lee, Yang-Bong

    2017-01-01

    Sulfur-containing amino acids play important roles in good flavor generation in Maillard reaction of non-enzymatic browning, so aqueous model systems of glucosamine and cysteine were studied to investigate the effects of reaction temperature, initial pH, reaction time, and concentration ratio of glucosamine and cysteine. Response surface methodology was applied to optimize the independent reaction parameters of cysteine and glucosamine in Maillard reaction. Box-Behnken factorial design was used with 30 runs of 16 factorial levels, 8 axial levels and 6 central levels. The degree of Maillard reaction was determined by reading absorption at 425 nm in a spectrophotometer and Hunter’s L, a, and b values. ΔE was consequently set as the fifth response factor. In the statistical analyses, determination coefficients (R2) for their absorbance, Hunter’s L, a, b values, and ΔE were 0.94, 0.79, 0.73, 0.96, and 0.79, respectively, showing that the absorbance and Hunter’s b value were good dependent variables for this model system. The optimum processing parameters were determined to yield glucosamine-cysteine Maillard reaction product with higher absorbance and higher colour change. The optimum estimated absorbance was achieved at the condition of initial pH 8.0, 111°C reaction temperature, 2.47 h reaction time, and 1.30 concentration ratio. The optimum condition for colour change measured by Hunter’s b value was 2.41 h reaction time, 114°C reaction temperature, initial pH 8.3, and 1.26 concentration ratio. These results can provide the basic information for Maillard reaction of aqueous model system between glucosamine and cysteine. PMID:28401086

  5. Optimization of Maillard Reaction in Model System of Glucosamine and Cysteine Using Response Surface Methodology.

    PubMed

    Arachchi, Shanika Jeewantha Thewarapperuma; Kim, Ye-Joo; Kim, Dae-Wook; Oh, Sang-Chul; Lee, Yang-Bong

    2017-03-01

    Sulfur-containing amino acids play important roles in good flavor generation in Maillard reaction of non-enzymatic browning, so aqueous model systems of glucosamine and cysteine were studied to investigate the effects of reaction temperature, initial pH, reaction time, and concentration ratio of glucosamine and cysteine. Response surface methodology was applied to optimize the independent reaction parameters of cysteine and glucosamine in Maillard reaction. Box-Behnken factorial design was used with 30 runs of 16 factorial levels, 8 axial levels and 6 central levels. The degree of Maillard reaction was determined by reading absorption at 425 nm in a spectrophotometer and Hunter's L, a, and b values. ΔE was consequently set as the fifth response factor. In the statistical analyses, determination coefficients (R2) for their absorbance, Hunter's L, a, b values, and ΔE were 0.94, 0.79, 0.73, 0.96, and 0.79, respectively, showing that the absorbance and Hunter's b value were good dependent variables for this model system. The optimum processing parameters were determined to yield glucosamine-cysteine Maillard reaction product with higher absorbance and higher colour change. The optimum estimated absorbance was achieved at the condition of initial pH 8.0, 111°C reaction temperature, 2.47 h reaction time, and 1.30 concentration ratio. The optimum condition for colour change measured by Hunter's b value was 2.41 h reaction time, 114°C reaction temperature, initial pH 8.3, and 1.26 concentration ratio. These results can provide the basic information for Maillard reaction of aqueous model system between glucosamine and cysteine.
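    The run breakdown quoted in both records (16 factorial, 8 axial, and 6 centre points, 30 runs over four factors) corresponds to the layout of a central composite design. A sketch of how such a design matrix is enumerated in coded units (the function name is ours):

    ```python
    from itertools import product

    def central_composite(k, alpha=2.0, n_center=6):
        """Central composite design in coded units: 2^k factorial
        corners at +/-1, 2k axial points at +/-alpha on each axis,
        plus replicated centre points."""
        corners = [list(p) for p in product([-1.0, 1.0], repeat=k)]
        axial = []
        for i in range(k):
            for s in (-alpha, alpha):
                pt = [0.0] * k
                pt[i] = s
                axial.append(pt)
        centers = [[0.0] * k for _ in range(n_center)]
        return corners + axial + centers
    ```

    For k = 4 this yields exactly the 16 + 8 + 6 = 30 runs reported; each coded point is then mapped to physical levels of temperature, pH, time, and concentration ratio.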

  6. Gaussian graphical modeling reconstructs pathway reactions from high-throughput metabolomics data

    PubMed Central

    2011-01-01

    Background With the advent of high-throughput targeted metabolic profiling techniques, the question of how to interpret and analyze the resulting vast amount of data becomes more and more important. In this work we address the reconstruction of metabolic reactions from cross-sectional metabolomics data, that is without the requirement for time-resolved measurements or specific system perturbations. Previous studies in this area mainly focused on Pearson correlation coefficients, which however are generally incapable of distinguishing between direct and indirect metabolic interactions. Results In our new approach we propose the application of a Gaussian graphical model (GGM), an undirected probabilistic graphical model estimating the conditional dependence between variables. GGMs are based on partial correlation coefficients, that is pairwise Pearson correlation coefficients conditioned against the correlation with all other metabolites. We first demonstrate the general validity of the method and its advantages over regular correlation networks with computer-simulated reaction systems. Then we estimate a GGM on data from a large human population cohort, covering 1020 fasting blood serum samples with 151 quantified metabolites. The GGM is much sparser than the correlation network, shows a modular structure with respect to metabolite classes, and is stable to the choice of samples in the data set. On the example of human fatty acid metabolism, we demonstrate for the first time that high partial correlation coefficients generally correspond to known metabolic reactions. This feature is evaluated both manually by investigating specific pairs of high-scoring metabolites, and then systematically on a literature-curated model of fatty acid synthesis and degradation. Our method detects many known reactions along with possibly novel pathway interactions, representing candidates for further experimental examination. 
Conclusions In summary, we demonstrate strong signatures of
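    The partial correlations that define GGM edges can be read off the inverse covariance (precision) matrix. A sketch (assuming numpy is available; in a chain X → Y → Z, the X-Z partial correlation conditioned on Y vanishes even though the marginal X-Z correlation is high, which is exactly the direct-versus-indirect distinction the paper exploits):

    ```python
    import numpy as np

    def partial_correlations(data):
        """Pairwise partial correlations, each conditioned on all
        remaining variables, obtained by scaling the precision
        matrix (inverse sample covariance)."""
        prec = np.linalg.inv(np.cov(data, rowvar=False))
        d = np.sqrt(np.diag(prec))
        pcor = -prec / np.outer(d, d)
        np.fill_diagonal(pcor, 1.0)
        return pcor
    ```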

  7. Reaction norm model with unknown environmental covariate to analyze heterosis by environment interaction.

    PubMed

    Su, G; Madsen, P; Lund, M S

    2009-05-01

    Crossbreeding is currently increasing in dairy cattle production. Several studies have shown an environment-dependent heterosis [i.e., an interaction between heterosis and environment (H x E)]. An H x E interaction is usually estimated from a few discrete environment levels. The present study proposes a reaction norm model to describe H x E interaction, which can deal with a large number of environment levels using few parameters. In the proposed model, total heterosis consists of an environment-independent part, which is described as a function of heterozygosity, and an environment-dependent part, which is described as a function of heterozygosity and environmental value (e.g., herd-year effect). A Bayesian approach is developed to estimate the environmental covariates, the regression coefficients of the reaction norm, and other parameters of the model simultaneously in both linear and nonlinear reaction norms. In the nonlinear reaction norm model, the H x E is approximated using linear splines. The approach was tested using simulated data, which were generated using an animal model with a reaction norm for heterosis. The simulation study includes 4 scenarios (the combinations of moderate vs. low heritability and moderate vs. low herd-year variation) of H x E interaction in a nonlinear form. In all scenarios, the proposed model predicted total heterosis very well. The correlation between true heterosis and predicted heterosis was 0.98 in the scenarios with low herd-year variation and 0.99 in the scenarios with moderate herd-year variation. This suggests that the proposed model and method could be a good approach to analyze H x E interactions and predict breeding values in situations in which heterosis changes gradually and continuously over an environmental gradient. 
On the other hand, it was found that a model ignoring H x E interaction did not significantly harm the prediction of breeding value under the simulated scenarios in which the variance for environment
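    In the nonlinear reaction norm above, heterosis is approximated with linear splines, i.e. interpolated between knot values along the environmental gradient. A minimal evaluation of such a piecewise-linear reaction norm (knot positions and values here are illustrative, not estimated coefficients):

    ```python
    def linear_spline(x, knots, values):
        """Evaluate a piecewise-linear reaction norm: values[i] is the
        response at knots[i]; between adjacent knots the response is
        linearly interpolated, and it is held flat outside the range."""
        if x <= knots[0]:
            return values[0]
        if x >= knots[-1]:
            return values[-1]
        for i in range(len(knots) - 1):
            if knots[i] <= x <= knots[i + 1]:
                w = (x - knots[i]) / (knots[i + 1] - knots[i])
                return (1.0 - w) * values[i] + w * values[i + 1]
    ```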

  8. Hybrid approaches for multiple-species stochastic reaction-diffusion models

    NASA Astrophysics Data System (ADS)

    Spill, Fabian; Guerrero, Pilar; Alarcon, Tomas; Maini, Philip K.; Byrne, Helen

    2015-10-01

    Reaction-diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction-diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model.

  9. Hybrid approaches for multiple-species stochastic reaction-diffusion models.

    PubMed

    Spill, Fabian; Guerrero, Pilar; Alarcon, Tomas; Maini, Philip K; Byrne, Helen

    2015-10-15

    Reaction-diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction-diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model.
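    The conservation property at the heart of such a coupling can be illustrated with a toy 1D chain: integer particle counts on the stochastic side, real-valued cell masses on the mean-field side, one interface between them. Every unit removed from one side is added to the other, so total mass is conserved exactly, as the paper requires (a sketch under our own simplifications; all names and parameters are ours, and the fractional interface flux is buffered until a whole particle can be emitted):

    ```python
    import random

    def hybrid_step(counts, density, buffer, p, rng):
        """One update of a toy hybrid chain. counts: integer particle
        numbers (stochastic cells); density: real-valued masses
        (mean-field cells); the regions meet at counts[-1] | density[0]."""
        # Stochastic region: each particle hops with prob p, direction +/-1.
        new_counts = counts[:]
        for i, n in enumerate(counts):
            for _ in range(n):
                if rng.random() < p:
                    j = i + (-1 if rng.random() < 0.5 else 1)
                    new_counts[i] -= 1
                    if 0 <= j < len(counts):
                        new_counts[j] += 1
                    elif j == len(counts):      # crossed into mean-field region
                        density[0] += 1.0
                    else:                        # reflecting left wall
                        new_counts[i] += 1
        # Mean-field region: explicit diffusion, coefficient p/2 matches the walk.
        flux = [0.0] * (len(density) + 1)
        for k in range(1, len(density)):
            flux[k] = 0.5 * p * (density[k - 1] - density[k])
        new_density = [density[k] + flux[k] - flux[k + 1]
                       for k in range(len(density))]
        # Interface flux back to the stochastic side, emitted as whole particles.
        out = 0.5 * p * new_density[0]
        new_density[0] -= out
        buffer += out
        emit = int(buffer)
        buffer -= emit
        new_counts[-1] += emit
        return new_counts, new_density, buffer
    ```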

  10. Global Combat Support System Army Increment 1 (GCSS-A Inc 1)

    DTIC Science & Technology

    2016-03-01

    Acquisition Executive DoD - Department of Defense DoDAF - DoD Architecture Framework FD - Full Deployment FDD - Full Deployment Decision FY - Fiscal Year...another economic analysis was completed on November 14, 2012, in advance of a successful FDD . The program is now in the O&S Phase. GCSS-A Inc 1 2016...Increment I Feb 2011 Aug 2011 Full Deployment Decision ( FDD )1 Feb 2012 Dec 2012 Full Deployment (FD)2 Sep 2017 Mar 2018 Memo 1/ GCSS-A Increment 1

  11. Particle-scale CO2 adsorption kinetics modeling considering three reaction mechanisms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suh, Dong-Myung; Sun, Xin

    2013-09-01

    In the presence of water (H2O), dry and wet adsorptions of carbon dioxide (CO2) and physical adsorption of H2O happen concurrently in a sorbent particle. The three reactions depend on each other and have a complicated, but important, effect on CO2 capturing via a solid sorbent. In this study, transport phenomena in the sorbent were modeled, including the three reactions, and a numerical solving procedure for the model also was explained. The reaction variable distributions in the sorbent and their average values were calculated, and simulation results were compared with experimental data to validate the proposed model. Some differences, caused by thermodynamic parameters, were observed between them. However, the developed model reasonably simulated the adsorption behaviors of a sorbent. The weight gained by each adsorbed species, CO2 and H2O, is difficult to determine experimentally. It is known that more CO2 can be captured in the presence of water. Still, it is not yet known quantitatively how much more CO2 the sorbent can capture, nor is it known how much dry and wet adsorptions separately account for CO2 capture. This study addresses those questions by modeling CO2 adsorption in a particle and simulating the adsorption process using the model. As the adsorption temperature was varied over several values, the adsorbed amount of each species was calculated. The captured CO2 in the sorbent particle was compared quantitatively between dry and wet conditions. As the adsorption temperature decreased, wet adsorption increased. However, dry adsorption was reduced.

  12. No Fear of Commitment: Children's Incremental Interpretation in English and Japanese Wh-Questions

    ERIC Educational Resources Information Center

    Omaki, Akira; Davidson White, Imogen; Goro, Takuya; Lidz, Jeffrey; Phillips, Colin

    2014-01-01

    Much work on child sentence processing has demonstrated that children are able to use various linguistic cues to incrementally resolve temporary syntactic ambiguities, but they fail to use syntactic or interpretability cues that arrive later in the sentence. The present study explores whether children incrementally resolve filler-gap dependencies,…

  13. Multi-scale modeling of diffusion-controlled reactions in polymers: renormalisation of reactivity parameters.

    PubMed

    Everaers, Ralf; Rosa, Angelo

    2012-01-07

    The quantitative description of polymeric systems requires hierarchical modeling schemes, which bridge the gap between the atomic scale, relevant to chemical or biomolecular reactions, and the macromolecular scale, where the longest relaxation modes occur. Here, we use the formalism for diffusion-controlled reactions in polymers developed by Wilemski, Fixman, and Doi to discuss the renormalisation of the reactivity parameters in polymer models with varying spatial resolution. In particular, we show that the adjustments are independent of chain length. As a consequence, it is possible to match reactions times between descriptions with different resolution for relatively short reference chains and to use the coarse-grained model to make quantitative predictions for longer chains. We illustrate our results by a detailed discussion of the classical problem of chain cyclization in the Rouse model, which offers the simplest example of a multi-scale descriptions, if we consider differently discretized Rouse models for the same physical system. Moreover, we are able to explore different combinations of compact and non-compact diffusion in the local and large-scale dynamics by varying the embedding dimension.

  14. Deuterium cluster model for low energy nuclear reactions (LENR)

    NASA Astrophysics Data System (ADS)

    Miley, George; Hora, Heinrich

    2007-11-01

    For studying the possible reactions of high density deuterons on the background of a degenerate electron gas, a summary of experimental observations resulted in the possibility of reactions at pm distances and over durations of more than a ksec, similar to K-shell electron capture [1]. The essential reason was the screening of the deuterons by a factor of 14 based on the observations. Using the bosonic properties for a cluster formation of the deuterons and a model of compound nuclear reactions [2], the measured distribution of the resulting nuclei may be explained as known from the Maruhn-Greiner theory for fission. The local maximum of the distribution at the main minimum indicates the excited states of the compound nuclei during their intermediary state. This measured local maximum may be an independent proof for the deuteron clusters at LENR. [1] H. Hora, G.H. Miley et al. Physics Letters A175, 138 (1993) [2] H. Hora and G.H. Miley, APS March Meeting 2007, Program p. 116

  15. A multi-step reaction model for ignition of fully-dense Al-CuO nanocomposite powders

    NASA Astrophysics Data System (ADS)

    Stamatis, D.; Ermoline, A.; Dreizin, E. L.

    2012-12-01

    A multi-step reaction model is developed to describe heterogeneous processes occurring upon heating of an Al-CuO nanocomposite material prepared by arrested reactive milling. The reaction model couples a previously derived Cabrera-Mott oxidation mechanism describing initial, low temperature processes and an aluminium oxidation model including formation of different alumina polymorphs at increased film thicknesses and higher temperatures. The reaction model is tuned using traces measured by differential scanning calorimetry. Ignition is studied for thin powder layers and individual particles using respectively the heated filament (heating rates of 103-104 K s-1) and laser ignition (heating rate ∼106 K s-1) experiments. The developed heterogeneous reaction model predicts a sharp temperature increase, which can be associated with ignition when the laser power approaches the experimental ignition threshold. In experiments, particles ignited by the laser beam are observed to explode, indicating a substantial gas release accompanying ignition. For the heated filament experiments, the model predicts exothermic reactions at the temperatures, at which ignition is observed experimentally; however, strong thermal contact between the metal filament and powder prevents the model from predicting the thermal runaway. It is suggested that oxygen gas release from decomposing CuO, as observed from particles exploding upon ignition in the laser beam, disrupts the thermal contact of the powder and filament; this phenomenon must be included in the filament ignition model to enable prediction of the temperature runaway.
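    The qualitative behaviour the model predicts, thermal runaway near the ignition threshold and its suppression by strong thermal contact with the filament, can be reproduced with a lumped Arrhenius energy balance (a generic sketch, not the authors' multi-step Cabrera-Mott model; all parameter values are illustrative):

    ```python
    import math

    def ignition_trace(q, e_over_r, h, t0, dt, steps):
        """Euler integration of dT/dt = q*exp(-E/(R*T)) - h*(T - T0):
        Arrhenius self-heating against linear heat loss. Large h
        (strong thermal contact) pins T near T0; small h lets the
        exponential term win and the temperature runs away."""
        t = t0
        history = [t]
        for _ in range(steps):
            t += dt * (q * math.exp(-e_over_r / t) - h * (t - t0))
            history.append(t)
            if t > 2000.0:  # treat as ignition and stop integrating
                break
        return history
    ```

    With weak heat loss the trace crosses the 2000 K ignition marker; multiplying the loss coefficient by fifty holds the same particle near a mild steady state, mirroring the filament-contact argument in the abstract.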

  16. Predicting success of methotrexate treatment by pretreatment HCG level and 24-hour HCG increment.

    PubMed

    Levin, Gabriel; Saleh, Narjes A; Haj-Yahya, Rani; Matan, Liat S; Avi, Benshushan

    2018-04-01

    To evaluate β-human chorionic gonadotropin (β-HCG) level and its 24-hour increment as predictors of successful methotrexate treatment for ectopic pregnancy. Data were retrospectively reviewed from women with ectopic pregnancy who were treated by single-dose methotrexate (50 mg/m(2)) at a university hospital in Jerusalem, Israel, between January 1, 2000, and June 30, 2015. Serum β-HCG before treatment and its percentage increment in the 24 hours before treatment were compared between treatment success and failure groups. Sixty-nine women were included in the study. Single-dose methotrexate treatment was successful for 44 (63.8%) women. Both mean β-HCG level and its 24-hour increment were lower for women with successful treatment than for those with failed treatment (respectively, 1224 IU/L vs 2362 IU/L, P=0.018; and 13.5% vs 29.6%, P=0.009). Receiver operator characteristic curve analysis yielded cutoff values of 1600 IU/L and 14% increment with a positive predictive value of 75% and 82%, respectively, for treatment success. β-HCG level and its 24-hour increment were independent predictors of treatment outcome by logistic regression (both P<0.01). A β-HCG increment of less than 14% in the 24 hours before single-dose methotrexate and serum β-HCG of less than 1600 IU/L were found to be good predictors of treatment success. © 2017 International Federation of Gynecology and Obstetrics.
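    The reported cutoffs translate directly into positive predictive values. A sketch of the PPV computation behind such a cutoff, where a value below the threshold predicts success (the data here are hypothetical, not the study's):

    ```python
    def ppv_at_cutoff(values, successes, cutoff):
        """PPV of the rule 'value below cutoff predicts success':
        among cases with value < cutoff, the fraction that actually
        succeeded. Returns None if no case falls below the cutoff."""
        below = [s for v, s in zip(values, successes) if v < cutoff]
        if not below:
            return None
        return sum(below) / len(below)
    ```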

  17. A reaction-diffusion model of cytosolic hydrogen peroxide.

    PubMed

    Lim, Joseph B; Langford, Troy F; Huang, Beijing K; Deen, William M; Sikes, Hadley D

    2016-01-01

    As a signaling molecule in mammalian cells, hydrogen peroxide (H2O2) determines the thiol/disulfide oxidation state of several key proteins in the cytosol. Localization is a key concept in redox signaling; the concentrations of signaling molecules within the cell are expected to vary in time and in space in a manner that is essential for function. However, as a simplification, all theoretical studies of intracellular hydrogen peroxide and many experimental studies to date have treated the cytosol as a well-mixed compartment. In this work, we incorporate our previously reported reduced kinetic model of the network of reactions that metabolize hydrogen peroxide in the cytosol into a model that explicitly treats diffusion along with reaction. We modeled a bolus addition experiment, solved the model analytically, and used the resulting equations to quantify the spatiotemporal variations in intracellular H2O2 that result from this kind of perturbation to the extracellular H2O2 concentration. We predict that micromolar bolus additions of H2O2 to suspensions of HeLa cells (0.8 × 10(9) cells/l) result in increases in the intracellular concentration that are localized near the membrane. These findings challenge the assumption that intracellular concentrations of H2O2 are increased uniformly throughout the cell during bolus addition experiments and provide a theoretical basis for differing phenotypic responses of cells to intracellular versus extracellular perturbations to H2O2 levels. Copyright © 2015 Elsevier Inc. All rights reserved.
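    The membrane-localized gradient predicted here can be illustrated with an explicit finite-difference solve of dC/dt = D d2C/dx2 - kC on a 1D slab, with the bolus side held at a fixed concentration (a generic reaction-diffusion sketch, not the authors' full kinetic network; parameters are illustrative and chosen to keep the explicit scheme stable, i.e. D*dt/dx^2 <= 1/2):

    ```python
    def h2o2_profile(n, d, k, dx, dt, steps, boundary):
        """Explicit finite-difference solve of dC/dt = D*C'' - k*C on
        n cells: fixed concentration at x = 0 (membrane exposed to
        the bolus), no-flux wall at the far side."""
        c = [0.0] * n
        for _ in range(steps):
            c[0] = boundary
            new = c[:]
            for i in range(1, n - 1):
                new[i] = c[i] + dt * (d * (c[i-1] - 2*c[i] + c[i+1]) / dx**2
                                      - k * c[i])
            new[n-1] = c[n-1] + dt * (d * (c[n-2] - c[n-1]) / dx**2
                                      - k * c[n-1])
            c = new
        c[0] = boundary
        return c
    ```

    At steady state the profile decays roughly as exp(-x/sqrt(D/k)), so when consumption is fast relative to diffusion the concentration rise stays confined to a thin layer near the membrane, which is the paper's central point.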

  18. Extending XNAT Platform with an Incremental Semantic Framework

    PubMed Central

    Timón, Santiago; Rincón, Mariano; Martínez-Tomás, Rafael

    2017-01-01

    Informatics increases the yield of neuroscience research through improved data. Data sharing and accessibility enable joint efforts between different research groups, as well as replication studies, which are pivotal for progress in the field. Research data archiving solutions are evolving rapidly to address these necessities; however, distributed data integration is still difficult because of the need for explicit agreements between disparate data models. To address these problems, ontologies are widely used in biomedical research to obtain common vocabularies and logical descriptions, but their application may suffer from scalability issues, domain bias, and loss of low-level data access. With the aim of improving the application of semantic models in biobanking systems, an incremental semantic framework that takes advantage of the latest advances in biomedical ontologies and the XNAT platform is designed and implemented. We follow a layered architecture that allows the alignment of multi-domain biomedical ontologies to manage data at different levels of abstraction. To illustrate this approach, the development is integrated into the JPND (EU Joint Program for Neurodegenerative Disease) APGeM project, focused on finding early biomarkers for Alzheimer's and other dementia-related diseases. PMID:28912709

  19. Extending XNAT Platform with an Incremental Semantic Framework.

    PubMed

    Timón, Santiago; Rincón, Mariano; Martínez-Tomás, Rafael

    2017-01-01

    Informatics increases the yield of neuroscience research through improved data. Data sharing and accessibility enable joint efforts between different research groups, as well as replication studies, which are pivotal for progress in the field. Research data archiving solutions are evolving rapidly to address these necessities; however, distributed data integration is still difficult because of the need for explicit agreements between disparate data models. To address these problems, ontologies are widely used in biomedical research to obtain common vocabularies and logical descriptions, but their application may suffer from scalability issues, domain bias, and loss of low-level data access. With the aim of improving the application of semantic models in biobanking systems, an incremental semantic framework that takes advantage of the latest advances in biomedical ontologies and the XNAT platform is designed and implemented. We follow a layered architecture that allows the alignment of multi-domain biomedical ontologies to manage data at different levels of abstraction. To illustrate this approach, the development is integrated into the JPND (EU Joint Program for Neurodegenerative Disease) APGeM project, focused on finding early biomarkers for Alzheimer's and other dementia-related diseases.

  20. The National Institute of Education and Incremental Budgeting.

    ERIC Educational Resources Information Center

    Hastings, Anne H.

    1979-01-01

    The National Institute of Education's (NIE) history demonstrates that the relevant criteria for characterizing budgeting as incremental are not the predictability and stability of appropriations but the conditions of complexity, limited information, multiple factors, and imperfect agreement on ends; NIE's appropriations were dominated by political…

  1. Retroactive Operations: On "Increments" in Mandarin Chinese Conversations

    ERIC Educational Resources Information Center

    Lim, Ni Eng

    2014-01-01

    Conversation Analysis (CA) has established repair (Schegloff, Jefferson & Sacks 1977; Schegloff 1979; Kitzinger 2013) as a conversational mechanism for managing contingencies of talk-in-interaction. In this dissertation, I look at a particular sort of "repair" termed TCU-continuations (otherwise known as increments in other…

  2. Dynamic Model of Basic Oxygen Steelmaking Process Based on Multizone Reaction Kinetics: Modeling of Decarburization

    NASA Astrophysics Data System (ADS)

    Rout, Bapin Kumar; Brooks, Geoffrey; Akbar Rhamdhani, M.; Li, Zushu; Schrama, Frank N. H.; Overbosch, Aart

    2018-06-01

    In a previous study by the authors (Rout et al. in Metall Mater Trans B 49:537-557, 2018), a dynamic model for the BOF employing the concept of multizone kinetics was developed. In the current study, the kinetics of the decarburization reaction are investigated. The jet impact and slag-metal emulsion zones were identified as the primary zones for carbon oxidation. The dynamic parameters in the rate equation of decarburization, such as the residence time of metal drops in the emulsion, interfacial area evolution, initial size, and the effects of surface-active oxides, have been included in the kinetic rate equation of the metal droplet. A modified mass-transfer coefficient based on the ideal Langmuir adsorption equilibrium has been proposed to take into account the surface blockage effects of SiO2 and P2O5 in slag on the decarburization kinetics of a metal droplet in the emulsion. Further, a size distribution function has been included in the rate equation to evaluate the effect of droplet size on reaction kinetics. The mathematical simulation indicates that decarburization of a droplet in the emulsion is a strong function of its initial size and residence time. A modified droplet generation rate proposed previously by the authors has been used to estimate the total decarburization rate by the slag-metal emulsion. The model predicts that about 76 pct of total carbon is removed by reactions in the emulsion, with the remainder removed by reactions at the jet impact zone. The bath carbon predicted by the model has been found to be in good agreement with industrially measured data.

  3. Dynamic Model of Basic Oxygen Steelmaking Process Based on Multizone Reaction Kinetics: Modeling of Decarburization

    NASA Astrophysics Data System (ADS)

    Rout, Bapin Kumar; Brooks, Geoffrey; Akbar Rhamdhani, M.; Li, Zushu; Schrama, Frank N. H.; Overbosch, Aart

    2018-03-01

    In a previous study by the authors (Rout et al. in Metall Mater Trans B 49:537-557, 2018), a dynamic model for the BOF employing the concept of multizone kinetics was developed. In the current study, the kinetics of the decarburization reaction are investigated. The jet impact and slag-metal emulsion zones were identified as the primary zones for carbon oxidation. The dynamic parameters in the rate equation of decarburization, such as the residence time of metal drops in the emulsion, interfacial area evolution, initial size, and the effects of surface-active oxides, have been included in the kinetic rate equation of the metal droplet. A modified mass-transfer coefficient based on the ideal Langmuir adsorption equilibrium has been proposed to take into account the surface blockage effects of SiO2 and P2O5 in slag on the decarburization kinetics of a metal droplet in the emulsion. Further, a size distribution function has been included in the rate equation to evaluate the effect of droplet size on reaction kinetics. The mathematical simulation indicates that decarburization of a droplet in the emulsion is a strong function of its initial size and residence time. A modified droplet generation rate proposed previously by the authors has been used to estimate the total decarburization rate by the slag-metal emulsion. The model predicts that about 76 pct of total carbon is removed by reactions in the emulsion, with the remainder removed by reactions at the jet impact zone. The bath carbon predicted by the model has been found to be in good agreement with industrially measured data.
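
    The first-order mass-transfer picture behind such rate equations can be reduced to a toy calculation: carbon in a single droplet relaxes exponentially toward its equilibrium value over the droplet's residence time in the emulsion. The numbers below are illustrative placeholders, not the paper's fitted parameters, and the sketch omits the interfacial-area evolution, surface-blockage, and size-distribution terms the model actually includes.

```python
import math

# Toy first-order sketch of droplet decarburization in the emulsion:
# dC/dt = -k (C - C_eq), integrated over the residence time t_res.
def droplet_carbon(c0, c_eq, k, t_res):
    """Carbon (wt pct) after residence time t_res under first-order
    mass transfer with rate constant k (1/s). All values illustrative."""
    return c_eq + (c0 - c_eq) * math.exp(-k * t_res)

# A hypothetical droplet: 4.5 wt pct C decaying toward 0.05 wt pct
c_end = droplet_carbon(c0=4.5, c_eq=0.05, k=1.2, t_res=2.0)
print(round(c_end, 3))
```

    The exponential form makes the abstract's qualitative finding plausible: final carbon depends strongly on residence time (through the exponent) and on initial size (through k, which scales with the surface-to-volume ratio).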

  4. Sustained mahogany (Swietenia macrophylla) plantation heartwood increment.

    Treesearch

    Frank H. Wadsworth; Edgardo. Gonzalez

    2008-01-01

    In a search for an increment-based rotation for plantation mahogany (Swietenia macrophylla King), heartwood volume per tree was regressed on DBH (trunk diameter outside bark at 1.4 m above the ground) and merchantable height measurements. We updated a previous study [Wadsworth, F.H., González González, E., Figuera Colón, J.C., Lugo P...

  5. International Space Station Increment-2 Microgravity Environment Summary Report

    NASA Technical Reports Server (NTRS)

    Jules, Kenol; Hrovat, Kenneth; Kelly, Eric; McPherson, Kevin; Reckart, Timothy

    2002-01-01

    This summary report presents the results of some of the processed acceleration data collected aboard the International Space Station during the period of May to August 2001, the Increment-2 phase of the station. Two accelerometer systems were used to measure the acceleration levels during activities that took place during the Increment-2 segment. However, not all of the activities were analyzed for this report, due to time constraints and a lack of precise information regarding some payload operations and other station activities. The National Aeronautics and Space Administration sponsors the Microgravity Acceleration Measurement System and the Space Acceleration Measurement System to support microgravity science experiments, which require microgravity acceleration measurements. On April 19, 2001, both the Microgravity Acceleration Measurement System and the Space Acceleration Measurement System units were launched on STS-100 from the Kennedy Space Center for installation on the International Space Station. The Microgravity Acceleration Measurement System unit was flown to the station in support of science experiments requiring quasi-steady acceleration measurements, while the Space Acceleration Measurement System unit was flown to support experiments requiring vibratory acceleration measurement. Both acceleration systems are also used in support of vehicle microgravity requirements verification. The International Space Station Increment-2 reduced gravity environment analysis presented in this report uses acceleration data collected by both sets of accelerometer systems: 1) The Microgravity Acceleration Measurement System, which consists of two sensors: the Orbital Acceleration Research Experiment Sensor Subsystem, a low frequency range sensor (up to 1 Hz), is used to characterize the quasi-steady environment for payloads and the vehicle, and the High Resolution Accelerometer Package, which is used to characterize the vibratory environment up to 100 Hz. 2) The Space

  6. Stochastic modeling and simulation of reaction-diffusion system with Hill function dynamics.

    PubMed

    Chen, Minghan; Li, Fei; Wang, Shuo; Cao, Young

    2017-03-14

    Stochastic simulation of reaction-diffusion systems presents great challenges for spatiotemporal biological modeling and simulation. One widely used framework for stochastic simulation of reaction-diffusion systems is the reaction diffusion master equation (RDME). Previous studies have discovered that for the RDME, as the discretization size approaches zero, the reaction time for bimolecular reactions in high-dimensional domains tends to infinity. In this paper, we demonstrate that in a 1D domain, highly nonlinear reaction dynamics given by a Hill function may also change dramatically when the discretization size is smaller than a critical value. Moreover, we discuss methods to avoid this problem: smoothing over space, fixed-length smoothing over space, and a hybrid method. Our analysis reveals that the switch-like Hill dynamics reduce to a linear function of the discretization size when the discretization size is small enough. The three proposed methods can correctly (to a given precision) simulate Hill function dynamics in the microscopic RDME system.
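
    The discretization artifact can be seen in miniature: apply a Hill function to the per-compartment copy number rather than the total. As compartments shrink, the count per compartment drops below the Hill threshold and the switch-like response degenerates into a power-law tail. The parameter values below are illustrative, not from the paper.

```python
# Sketch of the RDME discretization artifact described above: with N
# molecules spread over m compartments, the per-compartment count n = N/m
# falls below the threshold K as m grows, and hill(n) ~ (n/K)^h, i.e.
# the switch is lost. K, h, and N are illustrative values.
def hill(x, K=10.0, h=4):
    return x**h / (K**h + x**h)

N = 100.0  # total copy number, well above the threshold K
for m in (1, 2, 10, 100):
    n = N / m
    print(m, hill(n))  # response collapses as compartments shrink
```

    With one compartment the response is essentially saturated; with 100 compartments each holds a single molecule and the Hill term is nearly zero, which is the kind of regime where the paper's smoothing-over-space corrections are needed.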

  7. Limitations of the Weissler reaction as a model reaction for measuring the efficiency of hydrodynamic cavitation.

    PubMed

    Morison, K R; Hutchinson, C A

    2009-01-01

    The Weissler reaction, in which iodide is oxidised to a tri-iodide complex (I3^-), has been widely used for measurement of the intensity of ultrasonic and hydrodynamic cavitation. It was used in this work to compare ultrasonic cavitation at 24 kHz with hydrodynamic cavitation using two different devices, one a venturi and the other a sudden expansion, operated up to 8.7 bar. Hydrodynamic cavitation had a maximum efficiency of about 5 × 10^-11 moles of I3^- per joule of energy, compared with a maximum of almost 8 × 10^-11 mol J^-1 for ultrasonic cavitation. Hydrodynamic cavitation was found to be most effective at 10 degrees C compared with 20 degrees C and 30 degrees C, and at higher upstream pressures. However, it was found that under hydrodynamic conditions, even without cavitation, I3^- was consumed at a rapid rate, leading to an equilibrium concentration. It was concluded that the Weissler reaction was not a good model reaction for the assessment of the effectiveness of hydrodynamic cavitation.

  8. Relating annual increments of the endangered Blanding's turtle plastron growth to climate

    PubMed Central

    Richard, Monik G; Laroque, Colin P; Herman, Thomas B

    2014-01-01

    This research is the first published study to report a relationship between climate variables and plastron growth increments of turtles, in this case the endangered Nova Scotia Blanding's turtle (Emydoidea blandingii). We used techniques and software common to the discipline of dendrochronology to successfully cross-date our growth increment data series, to detrend and average our series of 80 immature Blanding's turtles into one common chronology, and to seek correlations between the chronology and environmental temperature and precipitation variables. Our cross-dated chronology had a series intercorrelation of 0.441 (above 99% confidence interval), an average mean sensitivity of 0.293, and an average unfiltered autocorrelation of 0.377. Our master chronology represented increments from 1975 to 2007 (33 years), with index values ranging from a low of 0.688 in 2006 to a high of 1.303 in 1977. Univariate climate response function analysis on mean monthly air temperature and precipitation values revealed a positive correlation with the previous year's May temperature and current year's August temperature; a negative correlation with the previous year's October temperature; and no significant correlation with precipitation. These techniques for determining growth increment response to environmental variables should be applicable to other turtle species and merit further exploration. PMID:24963390

  9. Relating annual increments of the endangered Blanding's turtle plastron growth to climate.

    PubMed

    Richard, Monik G; Laroque, Colin P; Herman, Thomas B

    2014-05-01

    This research is the first published study to report a relationship between climate variables and plastron growth increments of turtles, in this case the endangered Nova Scotia Blanding's turtle (Emydoidea blandingii). We used techniques and software common to the discipline of dendrochronology to successfully cross-date our growth increment data series, to detrend and average our series of 80 immature Blanding's turtles into one common chronology, and to seek correlations between the chronology and environmental temperature and precipitation variables. Our cross-dated chronology had a series intercorrelation of 0.441 (above 99% confidence interval), an average mean sensitivity of 0.293, and an average unfiltered autocorrelation of 0.377. Our master chronology represented increments from 1975 to 2007 (33 years), with index values ranging from a low of 0.688 in 2006 to a high of 1.303 in 1977. Univariate climate response function analysis on mean monthly air temperature and precipitation values revealed a positive correlation with the previous year's May temperature and current year's August temperature; a negative correlation with the previous year's October temperature; and no significant correlation with precipitation. These techniques for determining growth increment response to environmental variables should be applicable to other turtle species and merit further exploration.
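
    The chronology-building workflow described above (detrend each individual's increment series, then average the dimensionless indices into a master chronology) can be sketched as follows. This is a generic dendrochronology-style recipe applied to simulated series (an exponential growth decline times noise), not the authors' software or data.

```python
import numpy as np

# Generic detrending sketch (assumed workflow, not the authors' exact
# software): divide each increment series by a fitted trend to obtain
# dimensionless indices, then average indices into a master chronology.
rng = np.random.default_rng(0)
years = np.arange(1975, 2008)        # the 33-year span of the study
series = []
for _ in range(5):                   # 5 toy individuals (the study used 80)
    trend = 2.0 * np.exp(-0.03 * (years - years[0]))  # growth decline
    noise = rng.normal(1.0, 0.1, size=years.size)
    series.append(trend * noise)

indices = []
for s in series:
    coef = np.polyfit(years, np.log(s), 1)   # fit an exponential trend
    fitted = np.exp(np.polyval(coef, years))
    indices.append(s / fitted)               # detrended index, mean ~ 1

master = np.mean(indices, axis=0)            # master chronology
print(round(float(master.mean()), 2))
```

    Detrending removes the age-related growth decline so that the averaged index isolates the shared (e.g., climatic) signal, which is what the authors then correlated with monthly temperature and precipitation.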

  10. Addressing System Reconfiguration and Incremental Integration within IMA Systems

    NASA Astrophysics Data System (ADS)

    Ferrero, F.; Rodríques, A. I.

    2009-05-01

    The space industry has recently been paying special attention to Integrated Modular Avionics (IMA) systems because of the benefits that modular concepts could bring to the development of space applications, especially in terms of interoperability, flexibility, and software reuse. Two important IMA goals to be highlighted are system reconfiguration and the incremental integration of new functionalities into a pre-existing system. The purpose of this paper is to show how system reconfiguration is conducted based on Allied Standard Avionics Architecture Council (ASAAC) concepts for IMA systems. In addition, it provides a proposal for addressing the incremental integration concept, supported by the experience we gained during the European Technology Acquisition Program (ETAP) TDP1.7 programme. All these topics are discussed taking into account safety issues and showing the blueprint as an appropriate technique to support these concepts.

  11. Development of an Uncertainty Quantification Predictive Chemical Reaction Model for Syngas Combustion

    DOE PAGES

    Slavinskaya, N. A.; Abbasi, M.; Starcke, J. H.; ...

    2017-01-24

    An automated data-centric infrastructure, Process Informatics Model (PrIMe), was applied to validation and optimization of a syngas combustion model. The Bound-to-Bound Data Collaboration (B2BDC) module of PrIMe was employed to discover the limits of parameter modifications based on uncertainty quantification (UQ) and consistency analysis of the model–data system and experimental data, including shock-tube ignition delay times and laminar flame speeds. Existing syngas reaction models are reviewed, and the selected kinetic data are described in detail. Empirical rules were developed and applied to evaluate the uncertainty bounds of the literature experimental data. Here, the initial H2/CO reaction model, assembled from 73 reactions and 17 species, was subjected to a B2BDC analysis. For this purpose, a dataset was constructed that included a total of 167 experimental targets and 55 active model parameters. Consistency analysis of the composed dataset revealed disagreement between models and data. Further analysis suggested that removing 45 experimental targets, 8 of which were self-inconsistent, would lead to a consistent dataset. This dataset was subjected to a correlation analysis, which highlights possible directions for parameter modification and model improvement. Additionally, several methods of parameter optimization were applied, some of them unique to the B2BDC framework. The optimized models demonstrated improved agreement with experiments compared to the initially assembled model, and their predictions for experiments not included in the initial dataset (i.e., a blind prediction) were investigated. The results demonstrate benefits of applying the B2BDC methodology for developing predictive kinetic models.

  12. Development of an Uncertainty Quantification Predictive Chemical Reaction Model for Syngas Combustion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slavinskaya, N. A.; Abbasi, M.; Starcke, J. H.

    An automated data-centric infrastructure, Process Informatics Model (PrIMe), was applied to validation and optimization of a syngas combustion model. The Bound-to-Bound Data Collaboration (B2BDC) module of PrIMe was employed to discover the limits of parameter modifications based on uncertainty quantification (UQ) and consistency analysis of the model–data system and experimental data, including shock-tube ignition delay times and laminar flame speeds. Existing syngas reaction models are reviewed, and the selected kinetic data are described in detail. Empirical rules were developed and applied to evaluate the uncertainty bounds of the literature experimental data. Here, the initial H2/CO reaction model, assembled from 73 reactions and 17 species, was subjected to a B2BDC analysis. For this purpose, a dataset was constructed that included a total of 167 experimental targets and 55 active model parameters. Consistency analysis of the composed dataset revealed disagreement between models and data. Further analysis suggested that removing 45 experimental targets, 8 of which were self-inconsistent, would lead to a consistent dataset. This dataset was subjected to a correlation analysis, which highlights possible directions for parameter modification and model improvement. Additionally, several methods of parameter optimization were applied, some of them unique to the B2BDC framework. The optimized models demonstrated improved agreement with experiments compared to the initially assembled model, and their predictions for experiments not included in the initial dataset (i.e., a blind prediction) were investigated. The results demonstrate benefits of applying the B2BDC methodology for developing predictive kinetic models.

  13. Neutron-induced reactions on AlF3 studied using the optical model

    NASA Astrophysics Data System (ADS)

    Ma, Chun-Wang; Lv, Cui-Juan; Zhang, Guo-Qiang; Wang, Hong-Wei; Zuo, Jia-Xu

    2015-08-01

    Neutron-induced reactions on 27Al and 19F nuclei are investigated using the optical model implemented in the TALYS 1.4 toolkit. Incident neutron energies over a wide range, from 0.1 keV to 30 MeV, are calculated. The cross sections for the main channels (n, np), (n, p), (n, α), (n, 2n), and (n, γ) and the total reaction cross section (n, tot) are obtained. When the default parameters in TALYS 1.4 are adopted, the calculated results agree with the measured results. Based on the calculated results for the n + 27Al and n + 19F reactions, the results for neutron-induced reactions on AlF3 are predicted. These results are useful both for the design of thorium-based molten salt reactors and for neutron activation analysis techniques.

  14. Towards Reliable and Energy-Efficient Incremental Cooperative Communication for Wireless Body Area Networks.

    PubMed

    Yousaf, Sidrah; Javaid, Nadeem; Qasim, Umar; Alrajeh, Nabil; Khan, Zahoor Ali; Ahmed, Mansoor

    2016-02-24

    In this study, we analyse incremental cooperative communication for wireless body area networks (WBANs) with different numbers of relays. Energy efficiency (EE) and the packet error rate (PER) are investigated for different schemes. We propose a new cooperative communication scheme with three-stage relaying and compare it to existing schemes. Our proposed scheme provides reliable communication with a lower PER at the cost of surplus energy consumption. Analytical expressions for the EE of the proposed three-stage cooperative communication scheme are also derived, taking into account the effect of the PER. The proposed three-stage incremental cooperation is then implemented in a network-layer protocol: enhanced incremental cooperative critical data transmission in emergencies for static WBANs (EInCo-CEStat). Extensive simulations are conducted to validate the proposed scheme. Results of incremental relay-based cooperative communication protocols are compared to two existing cooperative routing protocols: cooperative critical data transmission in emergencies for static WBANs (Co-CEStat) and InCo-CEStat. The simulation results show that incremental relay-based cooperation is more energy efficient than the existing conventional cooperation protocol, Co-CEStat. The results also reveal that EInCo-CEStat proves to be more reliable, with a lower PER and higher throughput than both of the counterpart protocols. However, InCo-CEStat has less throughput with a greater stability period and network lifetime. Due to the availability of more redundant links, EInCo-CEStat achieves a reduced packet drop rate at the cost of increased energy consumption.
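
    The reliability gain from incremental relaying has a simple closed form under the usual idealization of independent link failures: a relay transmits only when all earlier stages failed, so a packet is lost only if every stage fails and the end-to-end PER is the product of the per-stage PERs. The link error rates below are illustrative, not values from the paper's channel model.

```python
# Sketch of why adding incremental relay stages cuts PER: with independent
# links, a packet is lost only if the direct link AND every relay stage
# fail. Per-link PERs below are illustrative.
def incremental_per(p_direct, p_relays):
    """End-to-end PER when each relay fires only after all prior failures."""
    per = p_direct
    for p in p_relays:
        per *= p          # every stage must fail for an end-to-end loss
    return per

print(incremental_per(0.1, [0.1]))        # two-stage:   0.1 * 0.1 = 0.01
print(incremental_per(0.1, [0.1, 0.1]))   # three-stage: 0.001
```

    Each added stage multiplies the loss probability by another per-link PER, which mirrors the abstract's trade-off: the three-stage scheme is more reliable, at the cost of the extra energy spent on relay transmissions.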

  15. Estimate of within population incremental selection through branch imbalance in lineage trees

    PubMed Central

    Liberman, Gilad; Benichou, Jennifer I.C.; Maman, Yaakov; Glanville, Jacob; Alter, Idan; Louzoun, Yoram

    2016-01-01

    Incremental selection within a population, defined as limited fitness changes following mutation, is an important aspect of many evolutionary processes. Strongly advantageous or deleterious mutations are detected using the synonymous to non-synonymous mutation ratio. However, there are currently no precise methods to estimate incremental selection. We here provide such a detailed method for the first time and show its precision in multiple cases of micro-evolution. The proposed method is a novel mixed lineage-tree/sequence-based method to detect within-population selection as defined by the effect of mutations on the average number of offspring. Specifically, we propose to measure the log of the ratio between the number of leaves in lineage tree branches following synonymous and non-synonymous mutations. The method requires a sufficiently large number of sequences and a sufficiently large number of independent mutations. It assumes that all mutations are independent events. It does not require a baseline model and is practically unaffected by sampling biases. We show the method's wide applicability by testing it on multiple cases of micro-evolution. We show that it can detect genes and inter-genic regions using the selection rate and detect selection pressures in viral proteins and in the immune response to pathogens. PMID:26586802
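
    The statistic described above reduces to a one-line computation once the leaf counts are in hand: compare the number of leaves downstream of the two mutation classes. The counts below are invented purely to illustrate the sign convention.

```python
import math

# Sketch of the proposed statistic as described in the abstract: the log
# ratio of leaf counts in lineage-tree branches following non-synonymous
# vs synonymous mutations. Positive values suggest non-synonymous
# mutations increased the average number of offspring. Counts are
# illustrative only.
def incremental_selection_score(leaves_after_nonsyn, leaves_after_syn):
    return math.log(sum(leaves_after_nonsyn) / sum(leaves_after_syn))

score = incremental_selection_score([8, 12, 10], [5, 6, 4])
print(score > 0)  # True here: non-synonymous branches left more leaves
```

    Because the synonymous branches act as an internal control, no external baseline model of neutral evolution is needed, which is the point the abstract emphasizes.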

  16. New Approach for Nuclear Reaction Model in the Combination of Intra-nuclear Cascade and DWBA

    NASA Astrophysics Data System (ADS)

    Hashimoto, S.; Iwamoto, O.; Iwamoto, Y.; Sato, T.; Niita, K.

    2014-04-01

    We applied a new nuclear reaction model, a combination of the intra-nuclear cascade model and the distorted wave Born approximation (DWBA), to estimate neutron spectra in reactions induced by protons incident on 7Li and 9Be targets at incident energies below 50 MeV, using the particle and heavy ion transport code system (PHITS). The results obtained by PHITS with the new model reproduce the sharp peaks observed in the experimental double-differential cross sections, as a result of taking into account transitions between discrete nuclear states in the DWBA. Excellent agreement was observed between the calculated results obtained using the combination model and experimental data on neutron yields from thick targets in the inclusive (p, xn) reaction.

  17. Scaffolding Students' Online Critiquing of Expert- and Peer-generated Molecular Models of Chemical Reactions

    NASA Astrophysics Data System (ADS)

    Chang, Hsin-Yi; Chang, Hsiang-Chi

    2013-08-01

    In this study, we developed online critiquing activities using an open-source computer learning environment. We investigated how well the activities scaffolded students to critique molecular models of chemical reactions made by scientists, peers, and a fictitious peer, and whether the activities enhanced the students' understanding of science models and chemical reactions. The activities were implemented in an eighth-grade class with 28 students in a public junior high school in southern Taiwan. The study employed mixed research methods. Data collected included pre- and post-instructional assessments, post-instructional interviews, and students' electronic written responses and oral discussions during the critiquing activities. The results indicated that these activities guided the students to produce overall quality critiques. Also, the students developed a more sophisticated understanding of chemical reactions and scientific models as a result of the intervention. Design considerations for effective model critiquing activities are discussed based on observational results, including the use of peer-generated artefacts for critiquing to promote motivation and collaboration, coupled with critiques of scientific models to enhance students' epistemological understanding of model purpose and communication.

  18. Modeling of hydrogen evolution reaction on the surface of GaInP2

    NASA Astrophysics Data System (ADS)

    Choi, Woon Ih; Wood, Brandon; Schwegler, Eric; Ogitsu, Tadashi

    2012-02-01

    GaInP2 is a promising candidate material for hydrogen production using sunlight: in a photoelectrochemical cell, it reduces solvated protons to molecular hydrogen using light-induced excited electrons. However, it is challenging to model the hydrogen evolution reaction (HER) using first-principles molecular dynamics. Instead, we use the Anderson-Newns model and a generalized solvent coordinate in Marcus-Hush theory to describe the adiabatic free-energy surface of the HER. Model parameters are fitted from DFT calculations. We model the Volmer-Heyrovsky reaction path on surfaces of the CuPt phase of GaInP2. We also discuss the effects of surface oxides and catalyst atoms that are present on top of otherwise bare surfaces under experimental conditions.
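
    The Marcus-Hush construction invoked above can be sketched in a few lines: two diabatic free-energy parabolas along a generalized solvent coordinate q cross at the activation barrier, which has a closed form in the reorganization energy and driving force. The values of lam and dG below are illustrative numbers, not parameters fitted to GaInP2.

```python
# Sketch of the Marcus-Hush picture: diabatic free-energy parabolas along
# a generalized solvent coordinate q. lam (reorganization energy) and dG
# (reaction free energy) are illustrative values, not fitted to GaInP2.
lam = 1.0    # reorganization energy, eV (assumed)
dG = -0.3    # reaction free energy, eV (assumed)

def G_reactant(q):
    return lam * q ** 2

def G_product(q):
    return lam * (q - 1.0) ** 2 + dG

# The parabolas cross at q* = (1 + dG/lam)/2, giving the Marcus
# activation energy in closed form:
q_star = 0.5 * (1.0 + dG / lam)
dG_act = (lam + dG) ** 2 / (4.0 * lam)
print(q_star, dG_act)
```

    In the paper's approach the analogous parameters come from DFT rather than being chosen by hand; the sketch only shows how a barrier emerges from the two-parabola construction.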

  19. An incremental community detection method for social tagging systems using locality-sensitive hashing.

    PubMed

    Wu, Zhenyu; Zou, Ming

    2014-10-01

    An increasing number of users interact, collaborate, and share information through social networks. Unprecedented growth in social networks is generating a significant amount of unstructured social data. From such data, distilling communities where users have common interests and tracking variations of users' interests over time are important research tracks in fields such as opinion mining, trend prediction, and personalized services. However, these tasks are extremely difficult considering the highly dynamic characteristics of the data. Existing community detection methods are time consuming, making it difficult to process data in real time. In this paper, dynamic unstructured data is modeled as a stream. Tag assignments stream clustering (TASC), an incremental scalable community detection method, is proposed based on locality-sensitive hashing. Both tags and latent interactions among users are incorporated in the method. In our experiments, the social dynamic behaviors of users are first analyzed. The proposed TASC method is then compared with state-of-the-art clustering methods such as StreamKmeans and incremental k-clique; results indicate that TASC can detect communities more efficiently and effectively. Copyright © 2014 Elsevier Ltd. All rights reserved.
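
    The indexing primitive behind such stream-clustering schemes can be sketched with generic random-hyperplane LSH for cosine similarity: each bit of a signature records which side of a random hyperplane a tag vector lies on, so similar vectors collide in most bits and candidate cluster assignments can be found without all-pairs comparison. This is textbook LSH, not the paper's exact TASC algorithm.

```python
import numpy as np

# Minimal locality-sensitive hashing sketch (random hyperplanes for
# cosine similarity). Generic LSH, not the paper's exact method.
rng = np.random.default_rng(42)
dim, n_bits = 50, 16
planes = rng.normal(size=(n_bits, dim))   # one random hyperplane per bit

def lsh_signature(vec):
    # Each bit records which side of a hyperplane the vector lies on;
    # vectors with high cosine similarity agree in most bits.
    return tuple(int(b) for b in (planes @ vec > 0))

u = rng.normal(size=dim)   # a toy tag-assignment vector
w = rng.normal(size=dim)   # an unrelated vector

# Cosine LSH is scale-invariant: a rescaled copy hashes identically.
print(lsh_signature(u) == lsh_signature(3.0 * u))
agreement = sum(a == b for a, b in zip(lsh_signature(u), lsh_signature(w)))
print(agreement, "of", n_bits, "bits agree for unrelated vectors")
```

    For incremental clustering, a new item's signature is computed once and compared only against items sharing (most of) its hash bucket, which is what makes near-real-time processing of a tag stream feasible.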

  20. Turing instability in reaction-diffusion models on complex networks

    NASA Astrophysics Data System (ADS)

    Ide, Yusuke; Izuhara, Hirofumi; Machida, Takuya

    2016-09-01

    In this paper, the Turing instability in reaction-diffusion models defined on complex networks is studied. Here, we focus on three types of models which generate complex networks, i.e. the Erdős-Rényi, the Watts-Strogatz, and the threshold network models. From analysis of the Laplacian matrices of graphs generated by these models, we numerically reveal that stable and unstable regions of a homogeneous steady state on the parameter space of two diffusion coefficients completely differ, depending on the network architecture. In addition, we theoretically discuss the stable and unstable regions in the cases of regular enhanced ring lattices which include regular circles, and networks generated by the threshold network model when the number of vertices is large enough.
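
    The stability analysis described above can be sketched numerically: build the Laplacian of a random graph, then evaluate the linearized growth rate of each network mode at the corresponding Laplacian eigenvalue. The Jacobian and diffusion coefficients below are a generic activator-inhibitor example chosen so a Turing band exists; they are not the paper's specific model.

```python
import numpy as np

# Network Turing-instability check: a homogeneous steady state that is
# stable without diffusion (tr J < 0, det J > 0) destabilizes for network
# modes whose Laplacian eigenvalue falls in the Turing band when the
# inhibitor diffuses much faster than the activator.
rng = np.random.default_rng(1)
n, p = 60, 0.1
A = (rng.random((n, n)) < p).astype(float)
A = np.triu(A, 1)
A = A + A.T                                   # Erdos-Renyi adjacency
Lap = np.diag(A.sum(axis=1)) - A              # graph Laplacian
eigvals = np.linalg.eigvalsh(Lap)             # nonnegative eigenvalues

J = np.array([[1.0, -2.0],                    # generic activator-inhibitor
              [3.0, -4.0]])                   # Jacobian (assumed, tr<0, det>0)
Du, Dv = 0.1, 10.0                            # inhibitor diffuses faster

def growth_rate(lam):
    # Largest real part of the eigenvalues of J - lam * diag(Du, Dv)
    M = J - lam * np.diag([Du, Dv])
    return np.linalg.eigvals(M).real.max()

rates = [growth_rate(lam) for lam in eigvals]
print(max(rates) > 0)  # some network mode is Turing-unstable
```

    Because the unstable band is a property of the Laplacian spectrum, swapping in a Watts-Strogatz or threshold-model graph changes which modes (if any) fall inside it, which is the network-architecture dependence the abstract reports.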

  1. Development of interactive graphic user interfaces for modeling reaction-based biogeochemical processes in batch systems with BIOGEOCHEM

    NASA Astrophysics Data System (ADS)

    Chang, C.; Li, M.; Yeh, G.

    2010-12-01

    The BIOGEOCHEM numerical model (Yeh and Fang, 2002; Fang et al., 2003) was developed in FORTRAN for simulating reaction-based geochemical and biochemical processes with mixed equilibrium and kinetic reactions in batch systems. A complete suite of reactions, including aqueous complexation, adsorption/desorption, ion exchange, redox, precipitation/dissolution, acid-base reactions, and microbially mediated reactions, is embodied in this unique modeling tool. Any reaction can be treated as fast/equilibrium or slow/kinetic. An equilibrium reaction is modeled with an implicit finite rate governed by a mass action equilibrium equation or by a user-specified algebraic equation. A kinetic reaction is modeled with an explicit finite rate with an elementary rate law, microbially mediated enzymatic kinetics, or a user-specified rate equation. No existing model has encompassed this wide array of scopes. To ease the input/output learning curve of using the unique features of BIOGEOCHEM, an interactive graphical user interface was developed with the Microsoft Visual Studio and .NET tools. Several robust, user-friendly features, such as pop-up help windows, typo warning messages, and on-screen input hints, were implemented. All input data can be viewed in real time and are automatically formatted to conform to the input file format of BIOGEOCHEM. A post-processor for graphic visualization of simulated results was also embedded for immediate demonstration. By following the data input windows step by step, error-free BIOGEOCHEM input files can be created even by users with little prior experience in FORTRAN. With this user-friendly interface, the time and effort needed to conduct simulations with BIOGEOCHEM can be greatly reduced.

  2. Structural and incremental validity of the Wechsler Adult Intelligence Scale-Fourth Edition with a clinical sample.

    PubMed

    Nelson, Jason M; Canivez, Gary L; Watkins, Marley W

    2013-06-01

    Structural and incremental validity of the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV; Wechsler, 2008a) was examined with a sample of 300 individuals referred for evaluation at a university-based clinic. Confirmatory factor analysis indicated that the WAIS-IV structure was best represented by 4 first-order factors as well as a general intelligence factor in a direct hierarchical model. The general intelligence factor accounted for the most common and total variance among the subtests. Incremental validity analyses indicated that the Full Scale IQ (FSIQ) generally accounted for medium to large portions of academic achievement variance. For all measures of academic achievement, the first-order factors combined accounted for significant achievement variance beyond that accounted for by the FSIQ, but individual factor index scores contributed trivial amounts of achievement variance. Implications for interpreting WAIS-IV results are discussed. (PsycINFO Database Record (c) 2013 APA, all rights reserved).

  3. Effect of multiple forming tools on geometrical and mechanical properties in incremental sheet forming

    NASA Astrophysics Data System (ADS)

    Wernicke, S.; Dang, T.; Gies, S.; Tekkaya, A. E.

    2018-05-01

    The trend toward a greater variety of products requires economical manufacturing processes suitable for the production of prototypes and small batches. In the case of complex hollow-shaped parts, single point incremental forming (SPIF) is a highly flexible process, but this flexibility comes at the cost of a very long process time. To decrease the process time, a new incremental forming approach with multiple forming tools is investigated. The influence of two incremental forming tools on the resulting mechanical and geometrical component properties, compared to SPIF, is presented. Sheets made of EN AW-1050A were formed into frustums of a pyramid using different tool-path strategies, and several variations of the tool-path strategy are analyzed. A time saving between 40% and 60% was observed, depending on the tool-path and the radii of the forming tools, while the mechanical properties remained unchanged. This knowledge can increase the cost efficiency of incremental forming processes.

  4. Incremental cost effectiveness evaluation in clinical research.

    PubMed

    Krummenauer, Frank; Landwehr, I

    2005-01-28

    The health economic evaluation of therapeutic and diagnostic strategies is of increasing importance in clinical research, so clinical trialists must address health economic aspects more frequently. However, whereas trialists are quite familiar with classical effect measures in clinical trials, the corresponding parameters in the health economic evaluation of therapeutic and diagnostic procedures are still less familiar. The concepts of incremental cost effectiveness ratios (ICERs) and incremental net health benefit (INHB) are illustrated and contrasted using the cost effectiveness evaluation of cataract surgery with monofocal and multifocal intraocular lenses. ICERs relate the costs of a treatment to its clinical benefit as a ratio (indexed as Euro per clinical benefit unit). ICERs can therefore be compared directly to a pre-specified willingness to pay (WTP) benchmark, which represents the maximum cost health insurers would invest to achieve one clinical benefit unit. INHBs estimate a treatment's net clinical benefit after accounting for its cost increase versus an established therapeutic standard. Both the ICER and the INHB approach enable the definition of directional resource allocation rules. The allocation decisions arising from these rules are identical as long as the willingness to pay benchmark is fixed in advance. Both strategies therefore crucially call for a priori determination of the underlying clinical benefit endpoint (such as gain in vision lines after cataract surgery or gain in quality-adjusted life years) and the corresponding willingness to pay benchmark. The use of incremental cost effectiveness and net health benefit estimates provides a rationale for health economic allocation discussions and funding decisions. It places the same requirements on trial protocols as are already established for clinical trials, namely the a priori specification of the benefit endpoint and benchmark.
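
The two effect measures and their decision equivalence can be sketched in a few lines. The cost and benefit figures below are made-up illustrations, not data from the cataract study:

```python
def icer(delta_cost, delta_effect):
    """Incremental cost effectiveness ratio: extra cost per clinical benefit unit."""
    return delta_cost / delta_effect

def inhb(delta_cost, delta_effect, wtp):
    """Incremental net health benefit at willingness-to-pay benchmark wtp."""
    return delta_effect - delta_cost / wtp

# Hypothetical comparison of a new treatment vs. an established standard
delta_cost = 900.0    # extra cost in Euro
delta_effect = 0.6    # gain in clinical benefit units (e.g. vision lines)
wtp = 2000.0          # Euro a payer would invest per benefit unit

print(icer(delta_cost, delta_effect))       # 1500.0 Euro per unit
print(inhb(delta_cost, delta_effect, wtp))  # ≈ 0.15 units

# The two allocation rules agree once wtp is fixed a priori:
assert (icer(delta_cost, delta_effect) < wtp) == (inhb(delta_cost, delta_effect, wtp) > 0)
```

The final assertion is the point of the abstract: ICER below the WTP benchmark and positive INHB are the same decision, provided the benchmark is specified in advance.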

  5. Systematic development of reduced reaction mechanisms for dynamic modeling

    NASA Technical Reports Server (NTRS)

    Frenklach, M.; Kailasanath, K.; Oran, E. S.

    1986-01-01

    A method for systematically developing a reduced chemical reaction mechanism for dynamic modeling of chemically reactive flows is presented. The method is based on the postulate that if a reduced reaction mechanism faithfully describes the time evolution of both thermal and chain reaction processes characteristic of a more complete mechanism, then the reduced mechanism will describe the chemical processes in a chemically reacting flow with approximately the same degree of accuracy. Here this postulate is tested by producing a series of mechanisms of reduced accuracy, which are derived from a full detailed mechanism for methane-oxygen combustion. These mechanisms were then tested in a series of reactive flow calculations in which a large-amplitude sinusoidal perturbation is applied to a system that is initially quiescent and whose temperature is high enough to start ignition processes. Comparison of the results for systems with and without convective flow show that this approach produces reduced mechanisms that are useful for calculations of explosions and detonations. Extensions and applicability to flames are discussed.

  6. Ontology aided modeling of organic reaction mechanisms with flexible and fragment based XML markup procedures.

    PubMed

    Sankar, Punnaivanam; Aghila, Gnanasekaran

    2007-01-01

    Mechanism models for primary organic reactions are developed, encoding the structural fragments undergoing substitution, addition, elimination, and rearrangement. In the proposed models, every structural component of a mechanistic pathway is represented with a flexible, fragment-based markup technique in XML syntax. A significant feature of the system is the encoding of electron movements along with the other components needed for a complete representation of a reaction mechanism: charges, partial charges, half-bonded species, lone pair electrons, free radicals, reaction arrows, etc. The rendering of reaction schemes described with the proposed methodology is achieved with a concise XML extension language interoperating with the structure markup. A reaction scheme is visualized as 2D graphics in a browser by converting it into an SVG document, enabling the layouts conventionally used by chemists. An automatic representation of complex reaction mechanism patterns is achieved by reusing knowledge in chemical ontologies and developing artificial intelligence components in terms of axioms.

  7. Online Bimanual Manipulation Using Surface Electromyography and Incremental Learning.

    PubMed

    Strazzulla, Ilaria; Nowak, Markus; Controzzi, Marco; Cipriani, Christian; Castellini, Claudio

    2017-03-01

    The paradigm of simultaneous and proportional myocontrol of hand prostheses is gaining momentum in the rehabilitation robotics community. As opposed to the traditional surface electromyography classification schema, in simultaneous and proportional control the desired force/torque at each degree of freedom of the hand/wrist is predicted in real time, giving the individual a more natural experience, reducing the cognitive effort, and improving dexterity in daily-life activities. In this study we apply such an approach in a realistic manipulation scenario, using 10 non-linear incremental regression machines to predict the desired torques for each motor of two robotic hands. The prediction is driven by two sets of surface electromyography electrodes and an incremental, non-linear machine learning technique called Incremental Ridge Regression with Random Fourier Features. Nine able-bodied subjects were engaged in a functional test to evaluate the performance of the system. The robotic hands were mounted on two hand/wrist orthopedic splints worn by healthy subjects and controlled online. An average completion rate of more than 95% was achieved in single-handed tasks and 84% in bimanual tasks. On average, 5 min of retraining were necessary in a total session duration of about 1 h and 40 min. This work opens the study of bimanual manipulation with prostheses and will be continued with experiments in unilateral and bilateral upper-limb amputees, increasing its scientific value.
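
The learning machinery named in the abstract, incremental ridge regression on random Fourier features, can be sketched minimally as follows. The feature count, kernel width, regularization, and toy target are illustrative guesses, not the study's settings:

```python
import numpy as np

class IncrementalRFFRidge:
    """Ridge regression on random Fourier features (which approximate an RBF
    kernel), updated incrementally from mini-batches of data."""

    def __init__(self, n_inputs, n_features=300, gamma=1.0, lam=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        # Rahimi-Recht features for the kernel exp(-gamma * ||x - x'||^2)
        self.W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(n_inputs, n_features))
        self.b = rng.uniform(0.0, 2.0 * np.pi, n_features)
        self.A = lam * np.eye(n_features)   # running Z'Z + lam*I
        self.c = np.zeros(n_features)       # running Z'y

    def _features(self, X):
        return np.sqrt(2.0 / self.W.shape[1]) * np.cos(np.atleast_2d(X) @ self.W + self.b)

    def partial_fit(self, X, y):
        Z = self._features(X)
        self.A += Z.T @ Z                   # cheap incremental sufficient-statistics update
        self.c += Z.T @ np.asarray(y, dtype=float)
        return self

    def predict(self, X):
        w = np.linalg.solve(self.A, self.c)
        return self._features(X) @ w

# Toy 1-D target learned from mini-batches, as data would arrive online
rng = np.random.default_rng(1)
X = rng.uniform(-3.0, 3.0, size=(200, 1))
y = np.sin(X[:, 0])
model = IncrementalRFFRidge(n_inputs=1)
for i in range(0, len(X), 20):
    model.partial_fit(X[i:i + 20], y[i:i + 20])
print(np.abs(model.predict(X) - y).mean())  # small training error
```

Because only the accumulated matrices `A` and `c` are stored, retraining on new data is a constant-cost update, which is what makes the scheme attractive for online myocontrol.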

  8. Learned Helplessness: A Model to Understand and Overcome a Child's Extreme Reaction to Failure.

    ERIC Educational Resources Information Center

    Balk, David

    1983-01-01

    The author reviews literature on children's reactions to perceived failure and offers "learned helplessness" as a model to explain why a child who makes a mistake gives up. Suggestions for preventing these reactions are given. (Author/JMK)

  9. Incremental validity of mindfulness skills in relation to emotional dysregulation among a young adult community sample.

    PubMed

    Vujanovic, Anka A; Bonn-Miller, Marcel O; Bernstein, Amit; McKee, Laura G; Zvolensky, Michael J

    2010-01-01

    The present investigation examined the incremental predictive validity of mindfulness skills, as measured by the Kentucky Inventory of Mindfulness Skills (KIMS), in relation to multiple facets of emotional dysregulation, as indexed by the Difficulties in Emotion Regulation Scale (DERS), above and beyond variance explained by negative affectivity, anxiety sensitivity, and distress tolerance. Participants were a nonclinical community sample of 193 young adults (106 women, 87 men; M(age) = 23.91 years). The KIMS Accepting without Judgment subscale was incrementally negatively predictive of all facets of emotional dysregulation, as measured by the DERS. Furthermore, KIMS Acting with Awareness was incrementally negatively related to difficulties engaging in goal-directed behavior. Additionally, both observing and describing mindfulness skills were incrementally negatively related to lack of emotional awareness, and describing skills also were incrementally negatively related to lack of emotional clarity. Findings are discussed in relation to advancing scientific understanding of emotional dysregulation from a mindfulness skills-based framework.

  10. Increment and mortality in a virgin Douglas-fir forest.

    Treesearch

    Robert W. Steele; Norman P. Worthington

    1955-01-01

    Is there any basis to the forester's rule of thumb that virgin forests eventually reach an equilibrium where increment and mortality approximately balance? Are we wasting potential timber volume by failing to salvage mortality in old-growth stands?

  11. Indistinguishability and identifiability of kinetic models for the MurC reaction in peptidoglycan biosynthesis.

    PubMed

    Hattersley, J G; Pérez-Velázquez, J; Chappell, M J; Bearup, D; Roper, D; Dowson, C; Bugg, T; Evans, N D

    2011-11-01

    An important question in Systems Biology is the design of experiments that enable discrimination between two (or more) competing chemical pathway models or biological mechanisms. In this paper analysis is performed between two different models describing the kinetic mechanism of a three-substrate three-product reaction, namely the MurC reaction in the cytoplasmic phase of peptidoglycan biosynthesis. One model involves ordered substrate binding and ordered release of the three products; the competing model also assumes ordered substrate binding, but with fast release of the three products. The two versions are shown to be distinguishable; however, if standard quasi-steady-state assumptions are made distinguishability cannot be determined. Once model structure uniqueness is ensured the experimenter must determine if it is possible to successfully recover rate constant values given the experiment observations, a process known as structural identifiability. Structural identifiability analysis is carried out for both models to determine which of the unknown reaction parameters can be determined uniquely, or otherwise, from the ideal system outputs. This structural analysis forms an integrated step towards the modelling of the full pathway of the cytoplasmic phase of peptidoglycan biosynthesis. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  12. An incremental strategy for calculating consistent discrete CFD sensitivity derivatives

    NASA Technical Reports Server (NTRS)

    Korivi, Vamshi Mohan; Taylor, Arthur C., III; Newman, Perry A.; Hou, Gene W.; Jones, Henry E.

    1992-01-01

    In this preliminary study involving advanced computational fluid dynamics (CFD) codes, an incremental formulation, also known as the 'delta' or 'correction' form, is presented for solving the very large sparse systems of linear equations associated with aerodynamic sensitivity analysis. For typical problems in 2D, a direct solution method can be applied to these linear equations in either the standard or the incremental form, in which case the two are equivalent. Iterative methods appear to be needed for future 3D applications, however, because direct solver methods require much more computer memory than is currently available. Iterative methods for solving these equations in the standard form encounter certain difficulties, such as ill-conditioning of the coefficient matrix, which can be overcome when the equations are cast in the incremental form; these and other benefits are discussed. The methodology is successfully implemented and tested in 2D using an upwind, cell-centered, finite-volume formulation applied to the thin-layer Navier-Stokes equations. Results are presented for two laminar sample problems: (1) transonic flow through a double-throat nozzle and (2) flow over an isolated airfoil.
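
The contrast between the standard and incremental ('delta') forms can be sketched on a toy linear system: instead of iterating on the solution directly, each step solves a cheap approximate operator for a correction driven by the residual of the exact equations. The diagonal preconditioner and 1-D Laplacian matrix below are illustrative stand-ins for the CFD operators:

```python
import numpy as np

def solve_incremental(A, b, M, tol=1e-12, max_iter=2000):
    """Defect-correction iteration in incremental form: solve M*dx = r for a
    correction dx, where r = b - A@x is the residual of the exact equations."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        r = b - A @ x                       # residual of the full system
        if np.linalg.norm(r) < tol:
            break
        x = x + np.linalg.solve(M, r)       # incremental update of the solution
    return x

# 1-D Laplacian test system; M = diag(A) makes this a Jacobi-type iteration
n = 10
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = solve_incremental(A, b, np.diag(np.diag(A)))
print(np.linalg.norm(A @ x - b))  # driven toward zero
```

The attraction of the delta form is visible here: the approximate operator `M` only affects the convergence rate, while the converged answer satisfies the exact equations because the update is driven by the true residual.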

  13. Population-reaction model and microbial experimental ecosystems for understanding hierarchical dynamics of ecosystems.

    PubMed

    Hosoda, Kazufumi; Tsuda, Soichiro; Kadowaki, Kohmei; Nakamura, Yutaka; Nakano, Tadashi; Ishii, Kojiro

    2016-02-01

    Understanding ecosystem dynamics is crucial as contemporary human societies face ecosystem degradation. One of the challenges that needs to be recognized is the complex hierarchical dynamics. Conventional dynamic models in ecology often represent only the population level and have yet to include the dynamics of the sub-organism level, which makes an ecosystem a complex adaptive system that shows characteristic behaviors such as resilience and regime shifts. The neglect of the sub-organism level in the conventional dynamic models would be because integrating multiple hierarchical levels makes the models unnecessarily complex unless supporting experimental data are present. Now that large amounts of molecular and ecological data are increasingly accessible in microbial experimental ecosystems, it is worthwhile to tackle the questions of their complex hierarchical dynamics. Here, we propose an approach that combines microbial experimental ecosystems and a hierarchical dynamic model named population-reaction model. We present a simple microbial experimental ecosystem as an example and show how the system can be analyzed by a population-reaction model. We also show that population-reaction models can be applied to various ecological concepts, such as predator-prey interactions, climate change, evolution, and stability of diversity. Our approach will reveal a path to the general understanding of various ecosystems and organisms. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd.. All rights reserved.

  14. Electrical-assisted double side incremental forming and processes thereof

    DOEpatents

    Roth, John; Cao, Jian

    2014-06-03

    A process for forming a sheet metal component using an electric current passing through the component is provided. The process can include providing a double side incremental forming machine, the machine operable to perform a plurality of double side incremental deformations on the sheet metal component and also apply an electric direct current to the sheet metal component during at least part of the forming. The direct current can be applied before or after the forming has started and/or be terminated before or after the forming has stopped. The direct current can be applied to any portion of the sheet metal. The electrical assistance can reduce the magnitude of force required to produce a given amount of deformation, increase the amount of deformation exhibited before failure and/or reduce any springback typically exhibited by the sheet metal component.

  15. Statistical Properties of Line Centroid Velocity Increments in the rho Ophiuchi Cloud

    NASA Technical Reports Server (NTRS)

    Lis, D. C.; Keene, Jocelyn; Li, Y.; Phillips, T. G.; Pety, J.

    1998-01-01

    We present a comparison of histograms of CO (2-1) line centroid velocity increments in the rho Ophiuchi molecular cloud with those computed for spectra synthesized from a three-dimensional, compressible, but non-starforming and non-gravitating hydrodynamic simulation. Histograms of centroid velocity increments in the rho Ophiuchi cloud show clearly non-Gaussian wings, similar to those found in histograms of velocity increments and derivatives in experimental studies of laboratory and atmospheric flows, as well as numerical simulations of turbulence. The magnitude of these wings increases monotonically with decreasing separation, down to the angular resolution of the data. This behavior is consistent with that found in the phase of the simulation which has most of the properties of incompressible turbulence. The time evolution of the magnitude of the non-Gaussian wings in the histograms of centroid velocity increments in the simulation is consistent with the evolution of the vorticity in the flow. However, we cannot exclude the possibility that the wings are associated with the shock interaction regions. Moreover, in an active starforming region like the rho Ophiuchi cloud, the effects of shocks may be more important than in the simulation. However, being able to identify shock interaction regions in the interstellar medium is also important, since numerical simulations show that vorticity is generated in shock interactions.
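
The statistic used above, the centroid velocity increment at a given separation and the non-Gaussianity of its distribution, can be sketched on a synthetic 1-D field. The intermittent-jump toy signal is purely illustrative, not the CO (2-1) data:

```python
import numpy as np

def increments(v, lag):
    """Velocity increments dv(x) = v(x + lag) - v(x) along one axis."""
    return v[lag:] - v[:-lag]

def excess_kurtosis(v, lag):
    """Non-Gaussian wings in the increment histogram show up as positive
    excess kurtosis (0 for a Gaussian)."""
    dv = increments(v, lag)
    dv = dv - dv.mean()
    return (dv**4).mean() / (dv**2).mean() ** 2 - 3.0

# Toy field: random walk with rare strong jumps mimicking intermittency
rng = np.random.default_rng(2)
steps = rng.normal(size=4096)
steps[rng.random(4096) < 0.01] *= 10.0   # ~1% of steps are "shocks"
v = np.cumsum(steps)

# Wings grow as the separation decreases
print(excess_kurtosis(v, 1), excess_kurtosis(v, 256))
```

Small-lag increments retain the heavy-tailed individual steps, while large-lag increments average many steps and look nearly Gaussian, mirroring the monotonic growth of the non-Gaussian wings at small separations reported above.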

  16. Incremental and comparative health care expenditures for head and neck cancer in the United States.

    PubMed

    Dwojak, Sunshine M; Bhattacharyya, Neil

    2014-10-01

    Determine the incremental costs associated with head and neck cancer (HNCa) and compare the costs with other common cancers. Cross-sectional analysis of a healthcare expenditure database. The Medical Expenditure Panel Survey is a national survey of US households. All cases of HNCa were extracted for 2006, 2008, and 2010. The incremental expenditures associated with HNCa were determined by comparing the healthcare expenditures of individuals with HNCa to the population without cancer, controlling for age, sex, education, insurance status, marital status, geographic region, and comorbidities. Healthcare expenditures for HNCa were then compared to individuals with lung cancer and colon cancer to determine relative healthcare expenditures. An estimated 264,713 patients (annualized) with HNCa were identified. The mean annual healthcare expenditures per individual for HNCa were $23,408 ± $3,397 versus $3,860 ± $52 for those without cancer. The mean adjusted incremental cost associated with HNCa was $15,852 ± $3,297 per individual (P < .001). Within this incremental cost, there was an increased incremental outpatient services cost of $3,495 ± $1,044 (P = .001) and an increased incremental hospital inpatient cost of $6,783 ± $2,894 (P = .020) associated with HNCa. The annual healthcare expenditures per individual fell in between those for lung cancer ($25,267 ± $2,375, P = .607) and colon cancer ($16,975 ± $1,291, P = .055). Despite its lower relative incidence, HNCa is associated with a significant incremental increase in annual healthcare expenditures per individual, which is comparable to or higher than other common cancers. In aggregate, the estimated annual costs associated with HNCa are $4.20 billion. © 2014 The American Laryngological, Rhinological and Otological Society, Inc.

  17. Towards Reliable and Energy-Efficient Incremental Cooperative Communication for Wireless Body Area Networks

    PubMed Central

    Yousaf, Sidrah; Javaid, Nadeem; Qasim, Umar; Alrajeh, Nabil; Khan, Zahoor Ali; Ahmed, Mansoor

    2016-01-01

    In this study, we analyse incremental cooperative communication for wireless body area networks (WBANs) with different numbers of relays. Energy efficiency (EE) and the packet error rate (PER) are investigated for different schemes. We propose a new cooperative communication scheme with three-stage relaying and compare it to existing schemes. Our proposed scheme provides reliable communication with less PER at the cost of surplus energy consumption. Analytical expressions for the EE of the proposed three-stage cooperative communication scheme are also derived, taking into account the effect of PER. Later on, the proposed three-stage incremental cooperation is implemented in a network layer protocol; enhanced incremental cooperative critical data transmission in emergencies for static WBANs (EInCo-CEStat). Extensive simulations are conducted to validate the proposed scheme. Results of incremental relay-based cooperative communication protocols are compared to two existing cooperative routing protocols: cooperative critical data transmission in emergencies for static WBANs (Co-CEStat) and InCo-CEStat. It is observed from the simulation results that incremental relay-based cooperation is more energy efficient than the existing conventional cooperation protocol, Co-CEStat. The results also reveal that EInCo-CEStat proves to be more reliable with less PER and higher throughput than both of the counterpart protocols. However, InCo-CEStat has less throughput with a greater stability period and network lifetime. Due to the availability of more redundant links, EInCo-CEStat achieves a reduced packet drop rate at the cost of increased energy consumption. PMID:26927104

  18. Incremental Housing Development; An Approach In Meeting the Needs Of Low Cost Housing In Indonesia

    NASA Astrophysics Data System (ADS)

    Wibowo, A. H.; Larasati, D.

    2018-05-01

    As a country with rapid population growth, Indonesia faces a severe housing shortage and needs a quick way to build houses for the community. The emerging solution is mass housing built with an industrialized system. Over time, this mass housing solution has raised a new problem: mass housing users are not satisfied with the existing homes. An incremental development approach is considered one solution to this problem. Incremental development is a constructive approach that allows parts of the dwelling to be built, altered, and dismantled without disturbing the others. With this approach, the dwelling is not seen as a finished product but as a process in which residents can participate in designing their own house according to their needs and economic capabilities. Furthermore, housing provision is built to minimal needs and designed as a 'permanent longlife' and adaptable base. This paper discusses the criteria for incremental houses for low-income communities provided by the government. Literature studies and case studies are used to identify the criteria for an incremental house. Some criteria can serve as a reference for incremental house construction as a housing solution in Indonesia.

  19. Modeling of Water-Breathing Propulsion Systems Utilizing the Aluminum-Seawater Reaction and Solid-Oxide Fuel Cells

    DTIC Science & Technology

    2011-01-01

    Title of Document: Modeling of Water-Breathing Propulsion Systems Utilizing the Aluminum-Seawater Reaction and Solid-Oxide Fuel Cells. ... Hybrid Aluminum Combustor (HAC): a novel underwater power system based on the exothermic reaction of aluminum with seawater. The system is modeled using a NASA-developed framework called Numerical Propulsion System Simulation (NPSS) by assembling thermodynamic models developed for each component.

  20. Nonlinear electromechanical modelling and dynamical behavior analysis of a satellite reaction wheel

    NASA Astrophysics Data System (ADS)

    Aghalari, Alireza; Shahravi, Morteza

    2017-12-01

    The present research addresses the satellite reaction wheel (RW) nonlinear electromechanical coupling dynamics including dynamic eccentricity of brushless dc (BLDC) motor and gyroscopic effects, as well as dry friction of shaft-bearing joints (relative small slip) and bearing friction. In contrast to other studies, the rotational velocity of the flywheel is considered to be controllable, so it is possible to study the reaction wheel dynamical behavior in acceleration stages. The RW is modeled as a three-phases BLDC motor as well as flywheel with unbalances on a rigid shaft and flexible bearings. Improved Lagrangian dynamics for electromechanical systems is used to obtain the mathematical model of the system. The developed model can properly describe electromechanical nonlinear coupled dynamical behavior of the satellite RW. Numerical simulations show the effectiveness of the presented approach.

  1. 29 CFR 825.205 - Increments of FMLA leave for intermittent or reduced schedule leave.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... DIVISION, DEPARTMENT OF LABOR OTHER LAWS THE FAMILY AND MEDICAL LEAVE ACT OF 1993 Employee Leave Entitlements Under the Family and Medical Leave Act § 825.205 Increments of FMLA leave for intermittent or... 29 Labor 3 2014-07-01 2014-07-01 false Increments of FMLA leave for intermittent or reduced...

  2. 29 CFR 825.205 - Increments of FMLA leave for intermittent or reduced schedule leave.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... DIVISION, DEPARTMENT OF LABOR OTHER LAWS THE FAMILY AND MEDICAL LEAVE ACT OF 1993 Employee Leave Entitlements Under the Family and Medical Leave Act § 825.205 Increments of FMLA leave for intermittent or... 29 Labor 3 2012-07-01 2012-07-01 false Increments of FMLA leave for intermittent or reduced...

  3. 29 CFR 825.205 - Increments of FMLA leave for intermittent or reduced schedule leave.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... DIVISION, DEPARTMENT OF LABOR OTHER LAWS THE FAMILY AND MEDICAL LEAVE ACT OF 1993 Employee Leave Entitlements Under the Family and Medical Leave Act § 825.205 Increments of FMLA leave for intermittent or... 29 Labor 3 2013-07-01 2013-07-01 false Increments of FMLA leave for intermittent or reduced...

  4. Triple-α reaction rate constrained by stellar evolution models

    NASA Astrophysics Data System (ADS)

    Suda, Takuma; Hirschi, Raphael; Fujimoto, Masayuki Y.

    2012-11-01

    We investigate the quantitative constraint on the triple-α reaction rate based on stellar evolution theory, motivated by the recent significant revision of the rate proposed by nuclear physics calculations. Targeted stellar models were computed to investigate the impact of that rate over the mass range 0.8 <= M/Msun <= 25 and the metallicity range between Z = 0 and Z = 0.02. The revised rate has a significant impact on the evolution of low- and intermediate-mass stars, while its influence on the evolution of massive stars (M > 10 Msun) is minimal. We find that employing the revised rate suppresses helium shell flashes on the AGB phase for stars in the initial mass range 0.8 <= M/Msun <= 6, which contradicts observations. The absence of helium shell flashes is due to the weak temperature dependence of the revised triple-α reaction cross section at the temperatures involved. Our models suggest that the temperature dependence of the cross section should have at least ν > 10 at T = 1-1.2×10^8 K, where the cross section is proportional to T^ν. We also derive the helium ignition curve to estimate the maximum cross section consistent with retaining the low-mass first red giants. The semi-analytically derived ignition curves suggest that the reaction rate should be less than ~10^-29 cm^6 s^-1 mole^-2 at ~10^7.8 K, which corresponds to about three orders of magnitude larger than that of the NACRE compilation.
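
The exponent ν in "cross section ∝ T^ν" is the local logarithmic slope of the rate. A small sketch of extracting it from a tabulated rate follows, using a synthetic power law rather than a real triple-α rate table:

```python
import numpy as np

def local_exponent(T, rate):
    """nu(T) = d ln(rate) / d ln(T), the local power-law index in rate ∝ T**nu."""
    return np.gradient(np.log(rate), np.log(T))

# Synthetic table: an exact T**12 power law over T ≈ 0.8-1.6 × 10^8 K
T = np.logspace(7.9, 8.2, 50)
nu = local_exponent(T, T**12.0)
print(nu[25])  # recovers nu = 12
```

Applied to a tabulated rate, the same slope diagnostic tells you whether the temperature sensitivity clears the ν > 10 threshold the stellar models demand near T = 10^8 K.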

  5. Incremental Predictive Value of Serum AST-to-ALT Ratio for Incident Metabolic Syndrome: The ARIRANG Study

    PubMed Central

    Ahn, Song Vogue; Baik, Soon Koo; Cho, Youn zoo; Koh, Sang Baek; Huh, Ji Hye; Chang, Yoosoo; Sung, Ki-Chul; Kim, Jang Young

    2016-01-01

    Aims The ratio of aspartate aminotransferase (AST) to alanine aminotransferase (ALT) is of great interest as a possible novel marker of metabolic syndrome. However, longitudinal studies emphasizing the incremental predictive value of the AST-to-ALT ratio in diagnosing individuals at higher risk of developing metabolic syndrome are very scarce. Therefore, our study aimed to evaluate the AST-to-ALT ratio as an incremental predictor of new onset metabolic syndrome in a population-based cohort study. Material and Methods The population-based cohort study included 2276 adults (903 men and 1373 women) aged 40–70 years, who participated from 2005–2008 (baseline) without metabolic syndrome and were followed up from 2008–2011. Metabolic syndrome was defined according to the harmonized definition of metabolic syndrome. Serum concentrations of AST and ALT were determined by enzymatic methods. Results During an average follow-up period of 2.6-years, 395 individuals (17.4%) developed metabolic syndrome. In a multivariable adjusted model, the odds ratio (95% confidence interval) for new onset of metabolic syndrome, comparing the fourth quartile to the first quartile of the AST-to-ALT ratio, was 0.598 (0.422–0.853). The AST-to-ALT ratio also improved the area under the receiver operating characteristic curve (AUC) for predicting new cases of metabolic syndrome (0.715 vs. 0.732, P = 0.004). The net reclassification improvement of prediction models including the AST-to-ALT ratio was 0.23 (95% CI: 0.124–0.337, P<0.001), and the integrated discrimination improvement was 0.0094 (95% CI: 0.0046–0.0143, P<0.001). Conclusions The AST-to-ALT ratio independently predicted the future development of metabolic syndrome and had incremental predictive value for incident metabolic syndrome. PMID:27560931

  6. Application of incremental unknowns to the Burgers equation

    NASA Technical Reports Server (NTRS)

    Choi, Haecheon; Temam, Roger

    1993-01-01

    In this article, we make a few remarks on the role that attractors and inertial manifolds play in fluid mechanics problems. We then describe the role of incremental unknowns for approximating attractors and inertial manifolds when finite difference multigrid discretizations are used. The relation with direct numerical simulation and large eddy simulation is also mentioned.

  7. Generation of Referring Expressions: Assessing the Incremental Algorithm

    ERIC Educational Resources Information Center

    van Deemter, Kees; Gatt, Albert; van der Sluis, Ielka; Power, Richard

    2012-01-01

    A substantial amount of recent work in natural language generation has focused on the generation of "one-shot" referring expressions whose only aim is to identify a target referent. Dale and Reiter's Incremental Algorithm (IA) is often thought to be the best algorithm for maximizing the similarity to referring expressions produced by people. We…
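
    Dale and Reiter's Incremental Algorithm is compact enough to sketch. In this toy version (attribute names and the object domain are invented for illustration), attributes are tried in a fixed preference order and kept only when they rule out at least one remaining distractor:

```python
def incremental_algorithm(target, distractors, preference_order):
    """Return a distinguishing description of target, or None if impossible."""
    description = {}
    remaining = list(distractors)
    for attr in preference_order:
        value = target.get(attr)
        if value is None:
            continue
        ruled_out = [d for d in remaining if d.get(attr) != value]
        if ruled_out:                      # keep attr only if it helps
            description[attr] = value
            remaining = [d for d in remaining if d.get(attr) == value]
        if not remaining:                  # all distractors excluded
            break
    return description if not remaining else None

target = {"type": "dog", "size": "small", "colour": "black"}
distractors = [
    {"type": "cat", "size": "small", "colour": "black"},
    {"type": "dog", "size": "large", "colour": "black"},
]
print(incremental_algorithm(target, distractors, ["type", "colour", "size"]))
```

    Note the characteristic IA behavior: once an attribute is included it is never retracted, which is why the algorithm can produce mildly overspecified descriptions, a property often compared against human-produced referring expressions.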

  8. Exclusive data-based modeling of neutron-nuclear reactions below 20 MeV

    NASA Astrophysics Data System (ADS)

    Savin, Dmitry; Kosov, Mikhail

    2017-09-01

    We are developing the CHIPS-TPT physics library for exclusive simulation of neutron-nuclear reactions below 20 MeV. Exclusive modeling reproduces each separate scattering and thus requires conservation of energy, momentum and quantum numbers in each reaction; inclusive modeling reproduces only selected values while averaging over the others, and imposes no such constraints. Exclusive modeling therefore makes it possible to simulate additional quantities, such as secondary-particle correlations and gamma-line broadening, and to avoid artificial fluctuations. CHIPS-TPT is based on the CHIPS library formerly included in Geant4, which follows the exclusive approach, and extends it to incident neutrons with energies below 20 MeV. The NeutronHP model for neutrons below 20 MeV included in Geant4 follows the inclusive approach, like the well-known MCNP code. Unfortunately, the available data in this energy region are mostly semi-inclusive and presented in ENDF-6 format. Imposing additional constraints on secondary particles complicates modeling, but it also makes it possible to detect inconsistencies in the input data and to avoid errors that may remain unnoticed in inclusive modeling.
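
    The conservation constraints that exclusive modeling imposes on every scattering can be made concrete with a toy kinematics check (1D elastic scattering; the masses and velocities are illustrative and not taken from the library):

```python
# 1D elastic scattering: closed-form post-collision velocities, then an
# explicit check of the conservation laws exclusive modeling enforces.
def elastic_1d(m1, v1, m2, v2):
    v1p = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2p = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1p, v2p

m_n, m_A = 1.0, 12.0                       # neutron on a carbon-like nucleus
v1p, v2p = elastic_1d(m_n, 1.0, m_A, 0.0)
p_in, p_out = m_n * 1.0, m_n * v1p + m_A * v2p
e_in = 0.5 * m_n * 1.0 ** 2
e_out = 0.5 * m_n * v1p ** 2 + 0.5 * m_A * v2p ** 2
print(p_out - p_in, e_out - e_in)          # both differences are ~0
```

    An inclusive model samples each secondary from averaged distributions, so the analogous per-event check would generally fail; enforcing it event by event is what exposes inconsistencies in semi-inclusive input data.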

  9. Linear-scaling generation of potential energy surfaces using a double incremental expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    König, Carolin, E-mail: carolink@kth.se; Christiansen, Ove, E-mail: ove@chem.au.dk

    We present a combination of the incremental expansion of potential energy surfaces (PESs), known as n-mode expansion, with the incremental evaluation of the electronic energy in a many-body approach. The application of semi-local coordinates in this context allows the generation of PESs in a very cost-efficient way. For this, we employ the recently introduced flexible adaptation of local coordinates of nuclei (FALCON) coordinates. By introducing an additional transformation step, concerning only a fraction of the vibrational degrees of freedom, we can achieve linear scaling of the accumulated cost of the single-point calculations required in the PES generation. Numerical examples of these double incremental approaches for oligo-phenyl systems show fast convergence with respect to the maximum number of simultaneously treated fragments and only a modest error introduced by the additional transformation step. The approach presented here represents a major step towards the applicability of vibrational wave function methods to sizable, covalently bound systems.
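
    The n-mode expansion underlying this work can be illustrated on a toy potential. In the sketch below (the potential and displacements are invented), the expansion truncated at two-mode couplings reproduces a potential that itself contains only pairwise couplings exactly:

```python
import itertools

def f(x):
    """Toy potential with at most pairwise mode coupling."""
    return sum(xi ** 2 for xi in x) + 0.1 * (x[0] * x[1] + x[1] * x[2])

def nmode2(f, x):
    """2-mode expansion: reference + one-mode cuts + two-mode corrections."""
    n = len(x)
    zero = [0.0] * n
    V0 = f(zero)
    def cut(idx):                       # evaluate f with only modes idx displaced
        y = list(zero)
        for i in idx:
            y[i] = x[i]
        return f(y)
    V1 = {i: cut((i,)) - V0 for i in range(n)}
    V2 = {ij: cut(ij) - V1[ij[0]] - V1[ij[1]] - V0
          for ij in itertools.combinations(range(n), 2)}
    return V0 + sum(V1.values()) + sum(V2.values())

x = [0.3, -0.2, 0.5]
print(nmode2(f, x), f(x))   # equal here: 2-mode expansion is exact for f
```

    The cost saving comes from the cuts: each single-point calculation involves only a few displaced modes, and the paper's second incremental layer applies the same idea to the electronic energy of each cut.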

  10. Diurnal and seasonal changes in stem increment and water use by yellow poplar trees in response to environmental stress.

    PubMed

    McLaughlin, Samuel B; Wullschleger, Stan D; Nosal, Miloslav

    2003-11-01

    To evaluate indicators of whole-tree physiological responses to climate stress, we determined seasonal, daily and diurnal patterns of growth and water use in 10 yellow poplar (Liriodendron tulipifera L.) trees in a stand recently released from competition. Precise measurements of stem increment and sap flow made with automated electronic dendrometers and thermal dissipation probes, respectively, indicated close temporal linkages between water use and patterns of stem shrinkage and swelling during daily cycles of water depletion and recharge of extensible outer-stem tissues. These cycles also determined net daily basal area increment. Multivariate regression models based on a 123-day data series showed that daily diameter increments were related negatively to vapor pressure deficit (VPD), but positively to precipitation and temperature. The same model form with slight changes in coefficients yielded coefficients of determination of about 0.62 (0.57-0.66) across data subsets that included widely variable growth rates and VPDs. Model R2 was improved to 0.75 by using 3-day running mean daily growth data. Rapid recovery of stem diameter growth following short-term, diurnal reductions in VPD indicated that water stored in extensible stem tissues was part of a fast recharge system that limited hydration changes in the cambial zone during periods of water stress. There were substantial differences in the seasonal dynamics of growth among individual trees, and analyses indicated that faster-growing trees were more positively affected by precipitation, solar irradiance and temperature and more negatively affected by high VPD than slower-growing trees. There were no negative effects of ozone on daily growth rates in a year of low ozone concentrations.
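
    The multivariate growth model described here can be sketched with synthetic data. All coefficients and units below are invented; only the signs mirror the reported relationships (negative for VPD, positive for precipitation and temperature):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 123                                  # one value per day, as in the record
vpd = rng.uniform(0.5, 2.5, n)           # vapor pressure deficit (kPa)
precip = rng.exponential(2.0, n)         # daily precipitation (mm)
temp = rng.uniform(15, 30, n)            # mean temperature (deg C)
# hypothetical daily basal-area increment with noise
growth = 0.2 - 0.05 * vpd + 0.01 * precip + 0.004 * temp + rng.normal(0, 0.01, n)

# ordinary least squares recovers the coefficient signs
X = np.column_stack([np.ones(n), vpd, precip, temp])
beta, *_ = np.linalg.lstsq(X, growth, rcond=None)
print(beta)    # intercept, VPD, precipitation, temperature coefficients
```

    Smoothing the response with a 3-day running mean, as the authors did, reduces day-to-day noise and is one simple way to raise the model R2 without changing the regressors.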

  11. A comparison of total reaction cross section models used in particle and heavy ion transport codes

    NASA Astrophysics Data System (ADS)

    Sihver, Lembit; Lantz, M.; Takechi, M.; Kohama, A.; Ferrari, A.; Cerutti, F.; Sato, T.

    Being able to calculate nucleon-nucleus and nucleus-nucleus total reaction cross sections with precision is very important for studies of basic nuclear properties, e.g. nuclear structure. It is also important for particle and heavy ion transport calculations because, in all particle and heavy ion transport codes, the probability that a projectile particle will collide within a certain distance x in the matter depends on the total reaction cross section. Furthermore, the total reaction cross sections also scale the calculated partial fragmentation cross sections. It is therefore crucial that accurate total reaction cross section models be used in transport calculations. In this paper, different models for calculating nucleon-nucleus and nucleus-nucleus total reaction cross sections are compared and discussed.
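
    The collision-probability function referred to here follows directly from the mean free path. A minimal sketch (the number density and cross section are illustrative values, not data from the paper):

```python
import math

def collision_probability(x_cm, n_per_cm3, sigma_cm2):
    """P(collision within x) = 1 - exp(-x/lambda), with lambda = 1/(n*sigma)."""
    mean_free_path = 1.0 / (n_per_cm3 * sigma_cm2)
    return 1.0 - math.exp(-x_cm / mean_free_path)

# e.g. sigma = 1 barn = 1e-24 cm^2 at n = 1e23 cm^-3 gives lambda = 10 cm,
# so the collision probability over 10 cm is 1 - 1/e, about 0.632
print(collision_probability(10.0, 1e23, 1e-24))
```

    Since sigma enters the exponent, a modest error in the total reaction cross section model shifts every simulated interaction depth, which is why transport codes are sensitive to it.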

  12. Reaction pathways for the deoxygenation of vegetable oils and related model compounds.

    PubMed

    Gosselink, Robert W; Hollak, Stefan A W; Chang, Shu-Wei; van Haveren, Jacco; de Jong, Krijn P; Bitter, Johannes H; van Es, Daan S

    2013-09-01

    Vegetable oil-based feeds are regarded as an alternative source for the production of fuels and chemicals. Paraffins and olefins can be produced from these feeds through catalytic deoxygenation. The fundamentals of this process are mostly studied by using model compounds such as fatty acids, fatty acid esters, and specific triglycerides because of their structural similarity to vegetable oils. In this Review we discuss the impact of feedstock, reaction conditions, and nature of the catalyst on the reaction pathways of the deoxygenation of vegetable oils and its derivatives. As such, we conclude on the suitability of model compounds for this reaction. It is shown that the type of catalyst has a significant effect on the deoxygenation pathway, that is, group 10 metal catalysts are active in decarbonylation/decarboxylation whereas metal sulfide catalysts are more selective to hydrodeoxygenation. Deoxygenation studies performed under H2 showed similar pathways for fatty acids, fatty acid esters, triglycerides, and vegetable oils, as mostly deoxygenation occurs indirectly via the formation of fatty acids. Deoxygenation in the absence of H2 results in significant differences in reaction pathways and selectivities depending on the feedstock. Additionally, using unsaturated feedstocks under inert gas results in a high selectivity to undesired reactions such as cracking and the formation of heavies. Therefore, addition of H2 is proposed to be essential for the catalytic deoxygenation of vegetable oil feeds. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Approach for Estimating Exposures and Incremental Health ...

    EPA Pesticide Factsheets

    “Approach for Estimating Exposures and Incremental Health Effects from Lead During Renovation, Repair, and Painting Activities in Public and Commercial Buildings” (Technical Approach Document). Also available for public review and comment are two supplementary documents: the detailed appendices for the Technical Approach Document and a supplementary report entitled “Developing a Concentration-Response Function for Pb Exposure and Cardiovascular Disease-Related Mortality.” Together, these documents describe an analysis for estimating exposures and incremental health effects created by renovations of public and commercial buildings (P&CBs). This analysis could be used to identify and evaluate hazards from renovation, repair, and painting activities in P&CBs. A general overview of how this analysis can be used to inform EPA’s hazard finding is described in the Framework document that was previously made available for public comment (79 FR 31072; FRL9910-44). The analysis can be used in any proposed rulemaking to estimate the reduction in deleterious health effects that would result from any proposed regulatory requirements to mitigate exposure from P&CB renovation activities. The Technical Approach Document describes in detail how the analyses under this approach have been performed and presents the results – expected changes in blood lead levels and health effects due to lead exposure from renovation activities.

  14. Optimization of the p-xylene oxidation process by a multi-objective differential evolution algorithm with adaptive parameters co-derived with the population-based incremental learning algorithm

    NASA Astrophysics Data System (ADS)

    Guo, Zhan; Yan, Xuefeng

    2018-04-01

    Different operating conditions of p-xylene oxidation have different influences on the product, purified terephthalic acid. It is necessary to obtain the optimal combination of reaction conditions to ensure the quality of the products, cut down on consumption and increase revenues. A multi-objective differential evolution (MODE) algorithm co-evolved with the population-based incremental learning (PBIL) algorithm, called PBMODE, is proposed. The PBMODE algorithm was designed as a co-evolutionary system. Each individual has its own parameter individual, which is co-evolved by PBIL. PBIL uses statistical analysis to build a model based on the corresponding symbiotic individuals of the superior original individuals during the main evolutionary process. The results of simulations and statistical analysis indicate that the overall performance of the PBMODE algorithm is better than that of the compared algorithms and it can be used to optimize the operating conditions of the p-xylene oxidation process effectively and efficiently.
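
    The PBIL component can be sketched independently of the chemistry. In this toy version (OneMax fitness; the population size and learning rate are invented), a probability vector is sampled each generation and nudged toward the best sampled individual, which is the statistical-model update that PBMODE uses to co-evolve the DE parameters:

```python
import random

def pbil_step(prob, sample_size, learning_rate, fitness):
    """Sample a generation from prob, then nudge prob toward the best sample."""
    samples = [[1 if random.random() < p else 0 for p in prob]
               for _ in range(sample_size)]
    best = max(samples, key=fitness)
    return [p + learning_rate * (b - p) for p, b in zip(prob, best)]

random.seed(1)
prob = [0.5] * 8                      # one probability per bit position
onemax = lambda bits: sum(bits)       # toy fitness: maximize the number of ones
for _ in range(60):
    prob = pbil_step(prob, 20, 0.1, onemax)
print([round(p, 2) for p in prob])    # probabilities drift toward 1.0
```

    In PBMODE the "bits" are replaced by the DE control parameters of the superior individuals, so the learned distribution adapts the parameters during the main evolutionary run rather than fixing them in advance.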

  15. Probing Complex Free-Radical Reaction Pathways of Fuel Model Compounds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buchanan III, A C; Kidder, Michelle; Beste, Ariana

    2012-01-01

    Fossil (e.g. coal) and renewable (e.g. woody biomass) organic energy resources have received considerable attention as possible sources of liquid transportation fuels and commodity chemicals. Knowledge of the reactivity of these complex materials has been advanced through fundamental studies of organic compounds that model constituent substructures. In particular, an improved understanding of thermochemical reaction pathways involving free-radical intermediates has arisen from detailed experimental kinetic studies and, more recently, advanced computational investigations. In this presentation, we will discuss our recent investigations of the fundamental pyrolysis pathways of model compounds that represent key substructures in the lignin component of woody biomass, with a focus on molecules representative of the dominant beta-O-4 aryl ether linkages. Additional mechanistic insights gleaned from DFT calculations on the kinetics of key elementary reaction steps will also be presented, as well as a few thoughts on the significant contributions of Jim Franz to this area of free radical chemistry.

  16. Single-pass incremental force updates for adaptively restrained molecular dynamics.

    PubMed

    Singh, Krishna Kant; Redon, Stephane

    2018-03-30

    Adaptively restrained molecular dynamics (ARMD) allows users to perform more integration steps in wall-clock time by switching positional degrees of freedom on and off. This article presents new single-pass incremental force-update algorithms to efficiently simulate a system using ARMD. We assessed different algorithms through speedup measurements and implemented them in the LAMMPS MD package. We validated the single-pass incremental force-update algorithm on four different benchmarks using diverse pair potentials. The proposed algorithm allows us to simulate a system faster than traditional MD in both NVE and NVT ensembles. Moreover, ARMD using the new single-pass algorithm speeds up the convergence of observables in wall-clock time. © 2017 Wiley Periodicals, Inc.
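
    The idea of an incremental force update can be sketched in a few lines: pair contributions are recomputed only when at least one member of the pair is active, while pairs of restrained (frozen) particles reuse a cached value. This toy 1D system is only a sketch of the bookkeeping; the actual LAMMPS implementation is far more involved:

```python
import itertools

def incremental_forces(pos, active, pair_force, cache):
    """Single pass over pairs; recompute only pairs touching an active particle."""
    n = len(pos)
    for i, j in itertools.combinations(range(n), 2):
        if active[i] or active[j]:
            cache[(i, j)] = pair_force(pos[i], pos[j])
    total = [0.0] * n
    for (i, j), f in cache.items():
        total[i] += f          # force on i from j
        total[j] -= f          # Newton's third law
    return total

spring = lambda xi, xj: -(xi - xj)        # toy harmonic pair force
pos = [0.0, 1.0, 2.5, 4.0]
cache = {}
f_full = incremental_forces(pos, [True] * 4, spring, cache)   # initial full build
pos[1] = 1.2                              # only particle 1 moved this step
f_incr = incremental_forces(pos, [False, True, False, False], spring, cache)
print(f_incr)
```

    Because restrained particles do not move, their cached pair contributions stay valid, so the incremental pass reproduces the full recomputation while touching only a fraction of the pairs.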

  17. Kinetic modeling and fitting software for interconnected reaction schemes: VisKin.

    PubMed

    Zhang, Xuan; Andrews, Jared N; Pedersen, Steen E

    2007-02-15

    Reaction kinetics for complex, highly interconnected kinetic schemes are modeled using analytical solutions to a system of ordinary differential equations. The algorithm employs standard linear algebra methods that are implemented using MatLab functions in a Visual Basic interface. A graphical user interface for simple entry of reaction schemes facilitates comparison of a variety of reaction schemes. To ensure microscopic balance, graph theory algorithms are used to determine violations of thermodynamic cycle constraints. Analytical solutions based on linear differential equations result in fast comparisons of first order kinetic rates and amplitudes as a function of changing ligand concentrations. For analysis of higher order kinetics, we also implemented a solution using numerical integration. To determine rate constants from experimental data, fitting algorithms that adjust rate constants to fit the model to imported data were implemented using the Levenberg-Marquardt algorithm or using Broyden-Fletcher-Goldfarb-Shanno methods. We have included the ability to carry out global fitting of data sets obtained at varying ligand concentrations. These tools are combined in a single package, which we have dubbed VisKin, to guide and analyze kinetic experiments. The software is available online for use on PCs.
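
    The analytical solution this kind of tool relies on can be sketched for a minimal A ⇌ B scheme (rate constants invented): the linear ODE system dc/dt = Kc is solved in closed form by eigendecomposition of the rate matrix.

```python
import numpy as np

# First-order scheme A <-> B with forward rate k_f and reverse rate k_r.
k_f, k_r = 2.0, 1.0
K = np.array([[-k_f,  k_r],
              [ k_f, -k_r]])

def concentrations(t, c0):
    """Analytical solution c(t) = V exp(D t) V^{-1} c(0)."""
    evals, V = np.linalg.eig(K)
    return V @ np.diag(np.exp(evals * t)) @ np.linalg.inv(V) @ c0

c0 = np.array([1.0, 0.0])          # start with pure A
c_inf = concentrations(50.0, c0)
print(c_inf)   # equilibrium: [k_r, k_f]/(k_f + k_r) = [1/3, 2/3]
```

    The eigenvalues are the observable relaxation rates (here 0 and -(k_f + k_r)), which is why a linear-algebra solution makes comparing first-order rates and amplitudes across reaction schemes fast; higher-order kinetics require the numerical integration path the software also provides.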

  18. Incremental Structured Dictionary Learning for Video Sensor-Based Object Tracking

    PubMed Central

    Xue, Ming; Yang, Hua; Zheng, Shibao; Zhou, Yi; Yu, Zhenghua

    2014-01-01

    To tackle robust object tracking for video sensor-based applications, an online discriminative algorithm based on incremental discriminative structured dictionary learning (IDSDL-VT) is presented. In our framework, a discriminative dictionary combining positive, negative and trivial patches is designed to sparsely represent the overlapped target patches. A local update (LU) strategy is then proposed for sparse coefficient learning. To formulate the training and classification process, a multiple linear classifier group based on a K-combined voting (KCV) function is proposed. As the dictionary evolves, the models are retrained to adapt to target appearance variation in a timely manner. Qualitative and quantitative evaluations on challenging image sequences, compared with state-of-the-art algorithms, demonstrate that the proposed tracking algorithm achieves more favorable performance. We also illustrate its relay application in visual sensor networks. PMID:24549252
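
    Sparse representation of a target patch over a dictionary, the core operation in trackers of this kind, can be sketched with plain iterative soft thresholding (ISTA). The dictionary here is random and the sparse signal synthetic; the paper's structured, discriminatively trained dictionary and its local update strategy are not reproduced:

```python
import numpy as np

def ista(D, y, lam=0.01, steps=500):
    """Iterative soft thresholding for min 0.5*||Da - y||^2 + lam*||a||_1."""
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(steps):
        g = a - D.T @ (D @ a - y) / L    # gradient step
        a = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return a

rng = np.random.default_rng(0)
D = rng.normal(size=(32, 16))
D /= np.linalg.norm(D, axis=0)           # unit-norm dictionary atoms
truth = np.zeros(16)
truth[2], truth[7] = 1.0, -0.5           # the patch uses two atoms
y = D @ truth
a = ista(D, y)
print(np.nonzero(np.abs(a) > 0.1)[0])    # recovers (mostly) atoms 2 and 7
```

    The sparse coefficient vector is what the classifier group then votes on; incremental dictionary learning amounts to updating columns of D as new target appearances arrive.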

  19. Electrifying model catalysts for understanding electrocatalytic reactions in liquid electrolytes.

    PubMed

    Faisal, Firas; Stumm, Corinna; Bertram, Manon; Waidhas, Fabian; Lykhach, Yaroslava; Cherevko, Serhiy; Xiang, Feifei; Ammon, Maximilian; Vorokhta, Mykhailo; Šmíd, Břetislav; Skála, Tomáš; Tsud, Nataliya; Neitzel, Armin; Beranová, Klára; Prince, Kevin C; Geiger, Simon; Kasian, Olga; Wähler, Tobias; Schuster, Ralf; Schneider, M Alexander; Matolín, Vladimír; Mayrhofer, Karl J J; Brummel, Olaf; Libuda, Jörg

    2018-07-01

    Electrocatalysis is at the heart of our future transition to a renewable energy system. Most energy storage and conversion technologies for renewables rely on electrocatalytic processes and, with increasing availability of cheap electrical energy from renewables, chemical production will witness electrification in the near future [1-3]. However, our fundamental understanding of electrocatalysis lags behind the field of classical heterogeneous catalysis that has been the dominating chemical technology for a long time. Here, we describe a new strategy to advance fundamental studies on electrocatalytic materials. We propose to 'electrify' complex oxide-based model catalysts made by surface science methods to explore electrocatalytic reactions in liquid electrolytes. We demonstrate the feasibility of this concept by transferring an atomically defined platinum/cobalt oxide model catalyst into the electrochemical environment while preserving its atomic surface structure. Using this approach, we explore particle size effects and identify hitherto unknown metal-support interactions that stabilize oxidized platinum at the nanoparticle interface. The metal-support interactions open a new synergistic reaction pathway that involves both metallic and oxidized platinum. Our results illustrate the potential of the concept, which makes available a systematic approach to build atomically defined model electrodes for fundamental electrocatalytic studies.

  20. Chemical reaction fouling model for single-phase heat transfer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Panchal, C.B.; Watkinson, A.P.

    1993-08-01

    A fouling model was developed on the premise that the chemical reaction for generation of precursor can take place in the bulk fluid, in the thermal boundary layer, or at the fluid/wall interface, depending upon the interactive effects of fluid dynamics, heat and mass transfer, and the controlling chemical reaction. The analysis was used to examine the experimental data for fouling deposition of polyperoxides produced by autoxidation of indene in kerosene. The effects of fluid and wall temperatures for two flow geometries were analyzed. The results showed that the relative effects of physical parameters on the fouling rate would differ for the three fouling mechanisms; therefore, it is important to identify the controlling mechanism in applying the closed-flow-loop data to industrial conditions.
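
    The temperature sensitivity that decides whether the wall, the thermal boundary layer or the bulk controls precursor generation can be illustrated with a simple Arrhenius sketch (the pre-exponential factor, activation energy, and temperatures below are invented, not fitted values from the study):

```python
import math

def arrhenius(T, A=1e9, Ea=100e3, R=8.314):
    """Arrhenius-type precursor generation rate at absolute temperature T."""
    return A * math.exp(-Ea / (R * T))

T_bulk, T_wall = 350.0, 420.0           # K; wall hotter than the bulk fluid
r_bulk, r_wall = arrhenius(T_bulk), arrhenius(T_wall)
print(r_wall / r_bulk)                  # the wall-temperature rate dominates
```

    With a sizable activation energy, the rate at the hot wall exceeds the bulk rate by orders of magnitude, which is why identifying the controlling location matters before extrapolating closed-flow-loop data to industrial wall temperatures.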