Sample records for step function model

  1. Functional Fault Modeling of a Cryogenic System for Real-Time Fault Detection and Isolation

    NASA Technical Reports Server (NTRS)

    Ferrell, Bob; Lewis, Mark; Perotti, Jose; Oostdyk, Rebecca; Brown, Barbara

    2010-01-01

    The purpose of this paper is to present the model development process used to create a Functional Fault Model (FFM) of a liquid hydrogen (LH2) system that will be used for real-time fault isolation in a Fault Detection, Isolation and Recovery (FDIR) system. The paper explains the steps in the model development process and the data products required at each step, including examples of how the steps were performed for the LH2 system. It also shows the relationship between the FDIR requirements and steps in the model development process. The paper concludes with a description of a demonstration of the LH2 model developed using the process and future steps for integrating the model in a live operational environment.

  2. Derivation of linearized transfer functions for switching-mode regulators. Phase A: Current step-up and voltage step-up converters

    NASA Technical Reports Server (NTRS)

    Wong, R. C.; Owen, H. A., Jr.; Wilson, T. G.

    1981-01-01

    Small-signal models are derived for the power stage of the voltage step-up (boost) and the current step-up (buck) converters. The modeling covers operation in both the continuous-mmf mode and the discontinuous-mmf mode. The power stage in the regulated current step-up converter on board the Dynamics Explorer Satellite is used as an example to illustrate the procedures in obtaining the small-signal functions characterizing a regulated converter.

  3. Functional-to-form mapping for assembly design automation

    NASA Astrophysics Data System (ADS)

    Xu, Z. G.; Liu, W. M.; Shen, W. D.; Yang, D. Y.; Liu, T. T.

    2017-11-01

    Assembly-level function-to-form mapping is the most effective procedure towards design automation. The research work mainly includes the assembly-level function definitions, the product network model and the two-step mapping mechanism. The function-to-form mapping is divided into two steps: the first-step mapping, from function to behavior, and the second-step mapping, from behavior to structure. After the first-step mapping, the three-dimensional transmission chain (or 3D sketch) is studied, and feasible design computing tools are developed. The mapping procedure is relatively easy to implement interactively but quite difficult to complete automatically, so manual, semi-automatic, automatic and interactive modification of the mapping model are studied. A mechanical-hand function-to-form mapping process is illustrated to verify the design methodology.

  4. Stochastic derivative-free optimization using a trust region framework

    DOE PAGES

    Larson, Jeffrey; Billups, Stephen C.

    2016-02-17

    This study presents a trust region algorithm to minimize a function f when one has access only to noise-corrupted function values f¯. The model-based algorithm dynamically adjusts its step length, taking larger steps when the model and function agree and smaller steps when the model is less accurate. The method does not require the user to specify a fixed pattern of points used to build local models and does not repeatedly sample points. If f is sufficiently smooth and the noise is independent and identically distributed with mean zero and finite variance, we prove that our algorithm produces iterates such that the corresponding function gradients converge in probability to zero. Finally, we present a prototype of our algorithm that, while simplistic in its management of previously evaluated points, solves benchmark problems in fewer function evaluations than do existing stochastic approximation methods.
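
The step-length adaptation the abstract describes can be sketched in a few lines of Python. This is a simplified one-dimensional illustration under assumed noise level, local model form, and acceptance thresholds, not the authors' algorithm:

```python
import numpy as np

def noisy_trust_region(f, x0, sigma=0.01, delta=1.0, iters=60, seed=0):
    """Minimal 1-D sketch of a trust-region step on noise-corrupted values.

    A local quadratic model is fit to sampled points; the step length
    (trust-region radius) grows when the model and function agree and
    shrinks when they disagree. Illustrative only.
    """
    rng = np.random.default_rng(seed)
    fbar = lambda x: f(x) + sigma * rng.standard_normal()   # noisy oracle
    x = x0
    for _ in range(iters):
        # Fit a quadratic model through three noisy samples in the region.
        xs = np.array([x - delta, x, x + delta])
        ys = np.array([fbar(xi) for xi in xs])
        a, b, c = np.polyfit(xs, ys, 2)
        # Model minimiser, clipped to the trust region.
        cand = -b / (2 * a) if a > 1e-12 else x - delta * np.sign(b)
        cand = float(np.clip(cand, x - delta, x + delta))
        pred = np.polyval([a, b, c], x) - np.polyval([a, b, c], cand)
        actual = fbar(x) - fbar(cand)
        if pred > 0 and actual > 0.1 * pred:   # model and function agree
            x, delta = cand, min(delta * 1.5, 2.0)
        else:                                   # poor agreement: shrink step
            delta = max(delta * 0.5, 1e-3)
    return x

x_min = noisy_trust_region(lambda x: (x - 2.0) ** 2, x0=-3.0)
```

Here the radius grows after successful steps and shrinks when the observed noisy decrease fails to match the model's predicted decrease.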

  5. Effects of step length and step frequency on lower-limb muscle function in human gait.

    PubMed

    Lim, Yoong Ping; Lin, Yi-Chung; Pandy, Marcus G

    2017-05-24

    The aim of this study was to quantify the effects of step length and step frequency on lower-limb muscle function in walking. Three-dimensional gait data were used in conjunction with musculoskeletal modeling techniques to evaluate muscle function over a range of walking speeds using prescribed combinations of step length and step frequency. The body was modeled as a 10-segment, 21-degree-of-freedom skeleton actuated by 54 muscle-tendon units. Lower-limb muscle forces were calculated using inverse dynamics and static optimization. We found that five muscles - GMAX, GMED, VAS, GAS, and SOL - dominated vertical support and forward progression independent of changes made to either step length or step frequency, and that, overall, changes in step length had a greater influence on lower-limb joint motion, net joint moments and muscle function than step frequency. Peak forces developed by the uniarticular hip and knee extensors, as well as the normalized fiber lengths at which these muscles developed their peak forces, correlated more closely with changes in step length than step frequency. Increasing step length resulted in larger contributions from the hip and knee extensors and smaller contributions from gravitational forces (limb posture) to vertical support. These results provide insight into why older people with weak hip and knee extensors walk more slowly by reducing step length rather than step frequency and also help to identify the key muscle groups that ought to be targeted in exercise programs designed to improve gait biomechanics in older adults. Copyright © 2017 Elsevier Ltd. All rights reserved.
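
The inverse-dynamics-plus-static-optimization pipeline can be illustrated as a small constrained optimization: distribute a known net joint moment among redundant muscles by minimizing the sum of squared activations. The moment arms, maximum isometric forces, and target moment below are made-up numbers, not values from the study's 54-muscle model:

```python
import numpy as np
from scipy.optimize import minimize

# Static-optimisation sketch: the net joint moment (from inverse dynamics)
# must be reproduced by muscle forces Fmax*a with activations a in [0, 1],
# choosing the activation set that minimises sum(a^2). Illustrative values.
r = np.array([0.05, 0.04, 0.03])           # moment arms (m), assumed
Fmax = np.array([3000.0, 2000.0, 1500.0])  # max isometric forces (N), assumed
M_net = 120.0                               # required net moment (N*m)

res = minimize(lambda a: np.sum(a ** 2), x0=np.full(3, 0.5),
               bounds=[(0.0, 1.0)] * 3,
               constraints={"type": "eq",
                            "fun": lambda a: r @ (Fmax * a) - M_net},
               method="SLSQP")
activations = res.x
muscle_forces = Fmax * activations          # forces satisfying the moment
```

The equality constraint enforces moment equilibrium while the objective distributes load across the redundant muscles, the standard static-optimization trade-off.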

  6. [Application of ordinary Kriging method in entomologic ecology].

    PubMed

    Zhang, Runjie; Zhou, Qiang; Chen, Cuixian; Wang, Shousong

    2003-01-01

    Geostatistics is a statistical method based on regionalized variables that uses the variogram as a tool to analyze the spatial structure and patterns of organisms. When simulating the variogram over a large range, an optimal fit cannot be obtained automatically, but an interactive human-computer procedure can be used to optimize the parameters of the spherical models. In this paper, this method and weighted polynomial regression were used to fit a one-step spherical model, a two-step spherical model and a linear function model, and the available nearby samples were used in the ordinary kriging procedure, which provides the best linear unbiased estimate under the unbiasedness constraint. The sums of squared deviations between the estimated and measured values for the various theoretical models were calculated, and the corresponding graphs are shown. The results showed that the fit based on the two-step spherical model was the best, and the one-step spherical model was better than the linear function model.
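
The spherical variogram and the ordinary kriging system mentioned above can be sketched numerically. Parameter naming (nugget, sill, range a) follows common geostatistics convention, and the sample coordinates are illustrative, not the paper's data:

```python
import numpy as np

def spherical_variogram(h, nugget, sill, a):
    """Spherical model: gamma(h) = nugget + (sill - nugget) *
    (1.5*h/a - 0.5*(h/a)**3) for h <= a, and gamma(h) = sill for h > a,
    with gamma(0) = 0 by definition."""
    h = np.asarray(h, dtype=float)
    s = np.where(h <= a,
                 nugget + (sill - nugget) * (1.5 * h / a - 0.5 * (h / a) ** 3),
                 sill)
    return np.where(h == 0, 0.0, s)

def ok_weights(coords, target, nugget, sill, a):
    """Ordinary kriging weights: solve the kriging system with a Lagrange
    multiplier row enforcing the unbiasedness constraint sum(w) = 1."""
    n = len(coords)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = np.empty((n + 1, n + 1))
    A[:n, :n] = spherical_variogram(d, nugget, sill, a)
    A[:n, n] = A[n, :n] = 1.0
    A[n, n] = 0.0
    b = np.append(spherical_variogram(
        np.linalg.norm(coords - target, axis=1), nugget, sill, a), 1.0)
    return np.linalg.solve(A, b)[:n]

coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
w = ok_weights(coords, np.array([0.2, 0.2]), nugget=0.0, sill=1.0, a=2.0)
```

The weights sum to one, and the sample nearest the target receives the largest weight.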

  7. Evolution of optimal Hill coefficients in nonlinear public goods games.

    PubMed

    Archetti, Marco; Scheuring, István

    2016-10-07

    In evolutionary game theory, the effect of public goods like diffusible molecules has been modelled using linear, concave, sigmoid and step functions. The observation that biological systems often have sigmoid input-output functions, as described by the Hill equation, suggests that a sigmoid function is more realistic. The Michaelis-Menten model of enzyme kinetics, however, predicts a concave function, and while mechanistic explanations of sigmoid kinetics exist, we lack an adaptive explanation: what is the evolutionary advantage of a sigmoid benefit function? We analyse public goods games in which the shape of the benefit function can evolve, in order to determine the optimal and evolutionarily stable Hill coefficients. We find that, while the dynamics depend on whether output is controlled at the level of the individual or the population, intermediate or high Hill coefficients often evolve, leading to sigmoid input-output functions that for some parameters are so steep as to resemble a step function (an on-off switch). Our results suggest that, even when the shape of the benefit function is unknown, biological public goods should be modelled using a sigmoid or step function rather than a linear or concave function. Copyright © 2016 Elsevier Ltd. All rights reserved.
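
The spectrum of benefit shapes discussed above (concave, sigmoid, near-step) can all be generated from the Hill equation by varying its coefficient n; a quick numerical illustration with arbitrary parameter values:

```python
import numpy as np

def hill(x, n, k=1.0):
    """Hill input-output function: x^n / (k^n + x^n).
    n = 1 gives the concave Michaelis-Menten curve; large n approaches
    a step function (an on-off switch) centred at x = k."""
    x = np.asarray(x, dtype=float)
    return x ** n / (k ** n + x ** n)

x = np.linspace(0.0, 2.0, 201)
concave = hill(x, 1)        # Michaelis-Menten-like
sigmoid = hill(x, 4)        # intermediate Hill coefficient
switch  = hill(x, 50)       # nearly a step at x = k = 1
```

At n = 50 the curve is already effectively 0 below the threshold and 1 above it, which is the "so steep as to resemble a step function" regime the abstract describes.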

  8. The treatment of climate science in Integrated Assessment Modelling: integration of climate step function response in an energy system integrated assessment model.

    NASA Astrophysics Data System (ADS)

    Dessens, Olivier

    2016-04-01

    Integrated Assessment Models (IAMs) are used as crucial inputs to policy-making on climate change. These models simulate aspects of the economy and climate system to deliver future projections and to explore the impact of mitigation and adaptation policies. The IAMs' climate representation is extremely important as it can have great influence on future political action. The step-function response is a simple climate model recently developed by the UK Met Office and is an alternative method of estimating the climate response to an emission trajectory directly from global climate model step simulations. Good et al. (2013) have formulated a method of reconstructing general circulation models' (GCMs') climate response to emission trajectories through an idealized experiment. This method, called the "step-response approach", is based on the results of an idealized abrupt CO2 step experiment. TIAM-UCL is a technology-rich model that belongs to the family of partial-equilibrium, bottom-up models, developed at University College London to represent a wide spectrum of energy systems in 16 regions of the globe (Anandarajah et al. 2011). The model uses optimisation functions to obtain cost-efficient solutions in meeting an exogenously defined set of energy-service demands, given certain technological and environmental constraints. Furthermore, it employs linear programming techniques, making the step-function representation of the climate change response well suited to the model's mathematical formulation. For the first time, we have introduced the "step-response approach" developed at the UK Met Office into an IAM, the TIAM-UCL energy system model, and we investigate the main consequences of this modification on the results of the model in terms of climate and energy system responses.
    The main advantage of this approach (apart from the low computational cost it entails) is that its results are directly traceable to the GCM involved and closely connected to well-known methods of analysing GCMs with step experiments.
    Acknowledgments: This work is supported by the FP7 HELIX project (www.helixclimate.eu).
    References: Anandarajah, G., Pye, S., Usher, W., Kesicki, F., & McGlade, C. (2011). TIAM-UCL global model documentation. https://www.ucl.ac.uk/energy-models/models/tiam-ucl/tiam-ucl-manual. Good, P., Gregory, J. M., Lowe, J. A., & Andrews, T. (2013). Abrupt CO2 experiments as tools for predicting and understanding CMIP5 representative concentration pathway projections. Climate Dynamics, 40(3-4), 1041-1053.
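
The step-response reconstruction can be sketched as a superposition of scaled step responses, one per yearly forcing increment. The two-exponential response and the linear forcing ramp below are illustrative stand-ins for GCM step-experiment output, not HELIX or TIAM-UCL data:

```python
import numpy as np

def step_response(t, amps=(0.6, 0.4), taus=(4.0, 200.0)):
    """Assumed temperature response to an abrupt unit forcing step,
    fitted here as two exponentials (fast and slow components)."""
    t = np.asarray(t, dtype=float)
    return sum(a * (1.0 - np.exp(-t / tau)) for a, tau in zip(amps, taus))

def reconstruct(forcing, f_step=1.0):
    """Temperature anomaly from yearly forcing increments dF, each
    contributing (dF/f_step) * R(t - s) for its step year s."""
    forcing = np.asarray(forcing, dtype=float)
    dF = np.diff(forcing, prepend=0.0)
    n = len(forcing)
    T = np.zeros(n)
    for s in range(n):
        if dF[s] != 0.0:
            T[s:] += (dF[s] / f_step) * step_response(np.arange(n - s))
    return T

years = 100
forcing = np.linspace(0.0, 3.0, years)   # illustrative ramp, W/m^2
T = reconstruct(forcing)
```

Because the reconstruction is a linear superposition of one calibrated step experiment, its cost is negligible compared with running the GCM itself, which is the advantage the abstract highlights.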

  9. Hydrothermal atomic force microscopy observations of barite step growth rates as a function of the aqueous barium-to-sulfate ratio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bracco, Jacquelyn N.; Gooijer, Yiscka; Higgins, Steven R.

    The rate of growth of ionic minerals from solutions with varying aqueous cation:anion ratios may result in significant errors in mineralization rates predicted by commonly-used affinity-based rate equations. To assess the potential influence of solute stoichiometry on barite growth, step velocities on the barite (001) surface have been measured at 108 °C using hydrothermal atomic force microscopy (HAFM) at moderate supersaturation and as a function of the aqueous barium:sulfate ratio (r). Barite growth hillocks at r ~ 1 were bounded by ⟨120⟩ steps; however, at r < 1, kink site densities increased, steps followed a direction vicinal to ⟨120⟩, and the [010] steps developed. At r > 1, steps roughened and rounded as the kink site density increased. Step velocities peaked at r = 1 and decreased roughly symmetrically as a function of r, indicating the attachment rates of barium and sulfate ions are similar under these conditions. We hypothesize that the differences in our observations at high and low r arise from differences in the attachment rate constants for the obtuse and acute ⟨120⟩ steps. Based on results at low r, the data suggest the attachment rate constant for barium ions is similar for obtuse and acute steps. Based on results at high r, the data suggest the attachment rate constant for sulfate is greater for obtuse steps than acute steps. In conclusion, utilizing a step growth model developed by Stack and Grantham (2010), the experimental step velocities as a function of r were readily fit, while attempts to fit the data using a model developed by Zhang and Nancollas (1998) were less successful.

  10. Hydrothermal atomic force microscopy observations of barite step growth rates as a function of the aqueous barium-to-sulfate ratio

    DOE PAGES

    Bracco, Jacquelyn N.; Gooijer, Yiscka; Higgins, Steven R.

    2016-03-19

    The rate of growth of ionic minerals from solutions with varying aqueous cation:anion ratios may result in significant errors in mineralization rates predicted by commonly-used affinity-based rate equations. To assess the potential influence of solute stoichiometry on barite growth, step velocities on the barite (001) surface have been measured at 108 °C using hydrothermal atomic force microscopy (HAFM) at moderate supersaturation and as a function of the aqueous barium:sulfate ratio (r). Barite growth hillocks at r ~ 1 were bounded by ⟨120⟩ steps; however, at r < 1, kink site densities increased, steps followed a direction vicinal to ⟨120⟩, and the [010] steps developed. At r > 1, steps roughened and rounded as the kink site density increased. Step velocities peaked at r = 1 and decreased roughly symmetrically as a function of r, indicating the attachment rates of barium and sulfate ions are similar under these conditions. We hypothesize that the differences in our observations at high and low r arise from differences in the attachment rate constants for the obtuse and acute ⟨120⟩ steps. Based on results at low r, the data suggest the attachment rate constant for barium ions is similar for obtuse and acute steps. Based on results at high r, the data suggest the attachment rate constant for sulfate is greater for obtuse steps than acute steps. In conclusion, utilizing a step growth model developed by Stack and Grantham (2010), the experimental step velocities as a function of r were readily fit, while attempts to fit the data using a model developed by Zhang and Nancollas (1998) were less successful.

  11. Percutaneous Transcatheter One-Step Mechanical Aortic Disc Valve Prosthesis Implantation: A Preliminary Feasibility Study in Swine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sochman, Jan; Peregrin, Jan H.; Rocek, Miloslav

    Purpose. To evaluate the feasibility of one-step implantation of a new type of stent-based mechanical aortic disc valve prosthesis (MADVP) above and across the native aortic valve and its short-term function in swine with both functional and dysfunctional native valves. Methods. The MADVP consisted of a folding disc valve made of silicone elastomer attached to either a nitinol Z-stent (Z model) or a nitinol cross-braided stent (SX model). Implantation of 10 MADVPs (6 Z and 4 SX models) was attempted in 10 swine: 4 (2 Z and 2 SX models) with a functional native valve and 6 (4 Z and 2 SX models) with aortic regurgitation induced either by intentional valve injury or by MADVP placement across the native valve. MADVP function was observed for up to 3 hr after implantation. Results. MADVP implantation was successful in 9 swine. One animal died of induced massive regurgitation prior to implantation. Four MADVPs implanted above functioning native valves exhibited good function. In 5 swine with regurgitation, MADVP implantation corrected the induced native valve dysfunction and the device's continuous good function was observed in 4 animals. One MADVP (SX model) placed across the native valve gradually migrated into the left ventricle. Conclusion. The tested MADVP can be implanted above and across the native valve in a one-step procedure and can replace the function of the regurgitating native valve. Further technical development and testing are warranted, preferably with a manufactured MADVP.

  12. Using step and path selection functions for estimating resistance to movement: Pumas as a case study

    Treesearch

    Katherine A. Zeller; Kevin McGarigal; Samuel A. Cushman; Paul Beier; T. Winston Vickers; Walter M. Boyce

    2015-01-01

    GPS telemetry collars and their ability to acquire accurate and consistently frequent locations have increased the use of step selection functions (SSFs) and path selection functions (PathSFs) for studying animal movement and estimating resistance. However, previously published SSFs and PathSFs often do not accommodate multiple scales or multiscale modeling....

  13. Generating linear regression model to predict motor functions by use of laser range finder during TUG.

    PubMed

    Adachi, Daiki; Nishiguchi, Shu; Fukutani, Naoto; Hotta, Takayuki; Tashiro, Yuto; Morino, Saori; Shirooka, Hidehiko; Nozaki, Yuma; Hirata, Hinako; Yamaguchi, Moe; Yorozu, Ayanori; Takahashi, Masaki; Aoyama, Tomoki

    2017-05-01

    The purpose of this study was to investigate which spatial and temporal parameters of the Timed Up and Go (TUG) test are associated with motor function in elderly individuals. This study included 99 community-dwelling women aged 72.9 ± 6.3 years. Step length, step width, single support time, variability of the aforementioned parameters, gait velocity, cadence, reaction time from starting signal to first step, and minimum distance between the foot and a marker placed to 3 in front of the chair were measured using our analysis system. The 10-m walk test, five times sit-to-stand (FTSTS) test, and one-leg standing (OLS) test were used to assess motor function. Stepwise multivariate linear regression analysis was used to determine which TUG test parameters were associated with each motor function test. Finally, we calculated a predictive model for each motor function test using each regression coefficient. In stepwise linear regression analysis, step length and cadence were significantly associated with the 10-m walk test, FTSTS test and OLS test. Reaction time was associated with the FTSTS test, and step width was associated with the OLS test. Each predictive model showed a strong correlation with the 10-m walk test and OLS test (P < 0.01), though not a significantly higher correlation than that of the TUG test time. We showed which TUG test parameters were associated with each motor function test. Moreover, the TUG test time, regarded as a measure of lower-extremity function and mobility, has strong predictive ability for each motor function test. Copyright © 2017 The Japanese Orthopaedic Association. Published by Elsevier B.V. All rights reserved.
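
The stepwise selection procedure can be sketched as greedy forward selection on synthetic gait data. The predictor names and the generating model below are illustrative, not the study's measurements:

```python
import numpy as np

def rss(Xs, y):
    """Residual sum of squares of an ordinary least-squares fit."""
    A = np.column_stack([np.ones(len(y)), Xs])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = y - A @ coef
    return float(r @ r)

def forward_stepwise(X, y, names, k=2):
    """Greedy forward selection: at each step add the predictor that
    most reduces the residual sum of squares (a minimal stand-in for
    the stepwise regression used in the study)."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        best = min(remaining, key=lambda j: rss(X[:, selected + [j]], y))
        selected.append(best)
        remaining.remove(best)
    return [names[j] for j in selected]

# Synthetic data: walk time truly depends on step length and cadence only.
rng = np.random.default_rng(1)
n = 99
step_length = rng.normal(0.6, 0.1, n)
cadence = rng.normal(110, 10, n)
step_width = rng.normal(0.1, 0.02, n)       # irrelevant predictor
walk_time = 8.0 - 5.0 * step_length - 0.02 * cadence + rng.normal(0, 0.1, n)

X = np.column_stack([step_length, cadence, step_width])
chosen = forward_stepwise(X, walk_time,
                          ["step_length", "cadence", "step_width"], k=2)
```

On this synthetic data the procedure recovers the two informative predictors, mirroring the abstract's finding that step length and cadence drove the 10-m walk-test model.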

  14. A novel and simple test of gait adaptability predicts gold standard measures of functional mobility in stroke survivors.

    PubMed

    Hollands, K L; Pelton, T A; van der Veen, S; Alharbi, S; Hollands, M A

    2016-01-01

    Although there is evidence that stroke survivors have reduced gait adaptability, the underlying mechanisms and the relationship to functional recovery are largely unknown. We explored the relationships between walking adaptability and clinical measures of balance, motor recovery and functional ability in stroke survivors. Stroke survivors (n=42) stepped to targets, on a 6m walkway, placed to elicit step lengthening, shortening and narrowing on paretic and non-paretic sides. The number of targets missed during six walks and target stepping speed was recorded. Fugl-Meyer (FM), Berg Balance Scale (BBS), self-selected walking speed (SSWS) and single support (SS) and step length (SL) symmetry (using GaitRite when not walking to targets) were also assessed. Stepwise multiple-linear regression was used to model the relationships between: total targets missed, number missed with paretic and non-paretic legs, target stepping speed, and each clinical measure. Regression revealed a significant model for each outcome variable that included only one independent variable. Targets missed by the paretic limb was a significant predictor of FM (F(1,40)=6.54, p=0.014). Speed of target stepping was a significant predictor of both BBS (F(1,40)=26.36, p<0.0001) and SSWS (F(1,40)=37.00, p<0.0001). No variables were significant predictors of SL or SS asymmetry. Speed of target stepping was significantly predictive of BBS and SSWS, and paretic targets missed predicted FM, suggesting that fast target stepping requires good balance and that accurate stepping demands good paretic leg function. The relationships between these parameters indicate gait adaptability is a clinically meaningful target for measurement and treatment of functionally adaptive walking ability in stroke survivors. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Magnesite Step Growth Rates as a Function of the Aqueous Magnesium:Carbonate Ratio

    DOE PAGES

    Bracco, Jacquelyn N.; Stack, Andrew G.; Higgins, Steven R.

    2014-10-01

    Step velocities of monolayer-height steps on the (101̄4) magnesite surface have been measured as functions of the aqueous magnesium-to-carbonate ratio and saturation index (SI) using a hydrothermal atomic force microscope (HAFM). At SI ≤ 1.9 and 80-90 °C, step velocities were found to be invariant with changes in the magnesium-to-carbonate ratio, an observation in contrast with standard models for growth and dissolution of ionically-bonded, multi-component crystals. However, at high saturation indices (SI = 2.15), step velocities displayed a ratio dependence, maximized at magnesium-to-carbonate ratios slightly greater than 1:1. Traditional affinity-based models were unable to describe growth rates at the higher saturation index. Step velocities also could not be modeled solely through nucleation of kink sites, in contrast to other minerals whose bonding between constituent ions is also dominantly ionic in nature, such as calcite and barite. Instead, they could be described only by a model that incorporates both kink nucleation and propagation. Based on observed step morphological changes at these higher saturation indices, the step velocity maximum at SI = 2.15 is likely due to the rate of attachment to propagating kink sites overcoming the rate of detachment from kink sites as the latter becomes less significant under far-from-equilibrium conditions.

  16. Bread dough rheology: Computing with a damage function model

    NASA Astrophysics Data System (ADS)

    Tanner, Roger I.; Qi, Fuzhong; Dai, Shaocong

    2015-01-01

    We describe an improved damage function model for bread dough rheology. The model has relatively few parameters, all of which can easily be found from simple experiments. Small deformations in the linear region are described by a gel-like power-law memory function. A set of large non-reversing deformations (stress relaxation after a step of shear, steady shearing and elongation beginning from rest, and biaxial stretching) is used to test the model. With the introduction of a revised strain measure which includes a Mooney-Rivlin term, all of these motions can be well described by the damage function described in previous papers. For reversing step strains, larger-amplitude oscillatory shearing, and recoil, reasonable predictions have been found. The numerical methods used are discussed and we give some examples.
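
A sketch of the ingredients named above: a power-law memory function G(t) = S·t^(−n) scaled by a damage factor for stress relaxation after a step strain. Both the parameter values and the exponential form of the damage function are assumptions for illustration, not the fitted dough model:

```python
import numpy as np

# Gel-like power-law relaxation modulus G(t) = S * t**(-n), reduced by a
# strain-dependent damage factor f(gamma) <= 1. Values are illustrative.
S, n_exp = 1.0e3, 0.27                         # assumed gel parameters
damage = lambda gamma: np.exp(-0.5 * gamma)    # assumed damage function

def relaxation_stress(t, gamma0):
    """Shear stress after a step strain gamma0: f(gamma0)*gamma0*G(t)."""
    return damage(gamma0) * gamma0 * S * np.asarray(t, float) ** (-n_exp)

t = np.logspace(-1, 2, 50)                     # time after the step (s)
sigma_small = relaxation_stress(t, 0.1)        # near-linear regime
sigma_large = relaxation_stress(t, 5.0)        # heavily damaged
```

The per-unit-strain stress is lower at the large strain amplitude, which is the softening role the damage function plays in the model.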

  17. Estimation and model selection of semiparametric multivariate survival functions under general censorship.

    PubMed

    Chen, Xiaohong; Fan, Yanqin; Pouzo, Demian; Ying, Zhiliang

    2010-07-01

    We study estimation and model selection of semiparametric models of multivariate survival functions for censored data, which are characterized by possibly misspecified parametric copulas and nonparametric marginal survivals. We obtain the consistency and root-n asymptotic normality of a two-step copula estimator to the pseudo-true copula parameter value according to KLIC, and provide a simple consistent estimator of its asymptotic variance, allowing for a first-step nonparametric estimation of the marginal survivals. We establish the asymptotic distribution of the penalized pseudo-likelihood ratio statistic for comparing multiple semiparametric multivariate survival functions subject to copula misspecification and general censorship. An empirical application is provided.

  18. Estimation and model selection of semiparametric multivariate survival functions under general censorship

    PubMed Central

    Chen, Xiaohong; Fan, Yanqin; Pouzo, Demian; Ying, Zhiliang

    2013-01-01

    We study estimation and model selection of semiparametric models of multivariate survival functions for censored data, which are characterized by possibly misspecified parametric copulas and nonparametric marginal survivals. We obtain the consistency and root-n asymptotic normality of a two-step copula estimator to the pseudo-true copula parameter value according to KLIC, and provide a simple consistent estimator of its asymptotic variance, allowing for a first-step nonparametric estimation of the marginal survivals. We establish the asymptotic distribution of the penalized pseudo-likelihood ratio statistic for comparing multiple semiparametric multivariate survival functions subject to copula misspecification and general censorship. An empirical application is provided. PMID:24790286
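
The two-step idea (nonparametric marginals first, then a possibly misspecified parametric copula fit by pseudo-likelihood) can be sketched without censoring. The Clayton family and the simulated Gaussian-dependent data are illustrative choices, not the papers' application:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def pseudo_obs(x):
    """Step 1: nonparametric marginal estimate via rescaled ranks."""
    return (np.argsort(np.argsort(x)) + 1) / (len(x) + 1)

def clayton_logpdf(u, v, theta):
    """Log-density of the Clayton copula, theta > 0."""
    return (np.log1p(theta) - (1 + theta) * (np.log(u) + np.log(v))
            - (2 + 1 / theta) * np.log(u ** -theta + v ** -theta - 1))

# Simulated dependent data (Gaussian copula, rho = 0.7), so the fitted
# Clayton copula is deliberately misspecified, as the paper allows.
rng = np.random.default_rng(0)
z = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], size=500)
u, v = pseudo_obs(z[:, 0]), pseudo_obs(z[:, 1])

# Step 2: pseudo-maximum-likelihood fit of the copula parameter.
res = minimize_scalar(lambda th: -clayton_logpdf(u, v, th).sum(),
                      bounds=(0.01, 20.0), method="bounded")
theta_hat = res.x
```

Under misspecification the estimate converges not to a "true" Clayton parameter but to the KLIC-minimizing pseudo-true value, which is exactly the target the papers analyze.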

  19. Estimating VO2max Using a Personalized Step Test

    ERIC Educational Resources Information Center

    Webb, Carrie; Vehrs, Pat R.; George, James D.; Hager, Ronald

    2014-01-01

    The purpose of this study was to develop a step test with a personalized step rate and step height to predict cardiorespiratory fitness in 80 college-aged males and females using the self-reported perceived functional ability scale and data collected during the step test. Multiple linear regression analysis yielded a model (R = 0.90, SEE = 3.43…

  20. A lattice Boltzmann model with an amending function for simulating nonlinear partial differential equations

    NASA Astrophysics Data System (ADS)

    Chen, Lin-Jie; Ma, Chang-Feng

    2010-01-01

    This paper proposes a lattice Boltzmann model with an amending function for one-dimensional nonlinear partial differential equations (NPDEs) of the form u_t + αuu_x + βu^n u_x + γu_xx + δu_xxx + ζu_xxxx = 0. This model differs from existing models because it lets the time step be proportional to the square of the space step, and thereby attains higher accuracy for the nonlinear terms in the NPDEs. With the Chapman-Enskog expansion, the governing evolution equation is recovered correctly from the continuous Boltzmann equation. The numerical results agree well with the analytical solutions.
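
A minimal lattice Boltzmann sketch for the pure-diffusion limit of the equation class above (only the γu_xx term retained), illustrating the diffusive scaling in which the time step is tied to the square of the space step. The D1Q3 weights and relaxation time are standard textbook choices, not the paper's amending-function model:

```python
import numpy as np

# D1Q3 lattice Boltzmann for u_t = D*u_xx in lattice units (dx = dt = 1).
nx, tau, steps = 200, 1.0, 400
w = np.array([4.0 / 6.0, 1.0 / 6.0, 1.0 / 6.0])   # weights: rest, +1, -1
D = (tau - 0.5) / 3.0                              # diffusivity, c_s^2 = 1/3

u0 = np.zeros(nx)
u0[nx // 2] = 1.0                                  # initial point pulse
f = w[:, None] * u0[None, :]                       # equilibrium initialisation

for _ in range(steps):
    u = f.sum(axis=0)                              # macroscopic field
    feq = w[:, None] * u[None, :]                  # zero-velocity equilibrium
    f += (feq - f) / tau                           # BGK collision
    f[1] = np.roll(f[1], 1)                        # stream velocity +1
    f[2] = np.roll(f[2], -1)                       # stream velocity -1

u = f.sum(axis=0)
```

Mass is conserved exactly, and the pulse variance grows as 2·D·t, the signature of diffusive (dt ∝ dx²) scaling.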

  1. Piezoelectric Actuator Modeling Using MSC/NASTRAN and MATLAB

    NASA Technical Reports Server (NTRS)

    Reaves, Mercedes C.; Horta, Lucas G.

    2003-01-01

    This paper presents a procedure for modeling structures containing piezoelectric actuators using MSC/NASTRAN and MATLAB. The paper describes the utility and functionality of one set of validated modeling tools. The tools described herein use MSC/NASTRAN to model the structure with piezoelectric actuators and a thermally induced strain to model straining of the actuators due to an applied voltage field. MATLAB scripts are used to assemble the dynamic equations and to generate frequency response functions. The application of these tools is discussed using a cantilever aluminum beam with a surface-mounted piezoelectric actuator as a sample problem. Software in the form of MSC/NASTRAN DMAP input commands, MATLAB scripts, and a step-by-step procedure to solve the example problem are provided. Analysis results are generated in terms of frequency response functions from deflection and strain data as a function of input voltage to the actuator.
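
The final post-processing step, computing frequency response functions from the structural model, can be sketched with a two-mode modal summation. The natural frequencies, damping, and mode-shape values below are invented for illustration; in the paper they would come from the MSC/NASTRAN model:

```python
import numpy as np

# Modal FRF sketch for a cantilever beam: receptance
# H(w) = sum_r phi_out[r]*phi_in[r] / (w_r^2 - w^2 + 2j*zeta*w_r*w).
freqs_hz = np.array([12.0, 75.0])      # assumed natural frequencies (Hz)
zeta = 0.02                             # assumed modal damping ratio
phi_in = np.array([0.8, -0.5])          # mode shape at actuator (assumed)
phi_out = np.array([1.0, 0.9])          # mode shape at response point

def frf(omega):
    wr = 2 * np.pi * freqs_hz
    omega = np.atleast_1d(np.asarray(omega, dtype=float))
    return sum(po * pi / (w ** 2 - omega ** 2 + 2j * zeta * w * omega)
               for po, pi, w in zip(phi_out, phi_in, wr))

f = np.linspace(1.0, 100.0, 2000)       # frequency grid (Hz)
H = frf(2 * np.pi * f)                  # complex frequency response
```

Peaks of |H| fall at the damped natural frequencies, which is how such FRF plots are read when validating the actuator model.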

  2. A structural equation model relating impaired sensorimotor function, fear of falling and gait patterns in older people.

    PubMed

    Menz, Hylton B; Lord, Stephen R; Fitzpatrick, Richard C

    2007-02-01

    Many falls in older people occur while walking; however, the mechanisms responsible for gait instability are poorly understood. Therefore, the aim of this study was to develop a plausible model describing the relationships between impaired sensorimotor function, fear of falling and gait patterns in older people. Temporo-spatial gait parameters and acceleration patterns of the head and pelvis were obtained from 100 community-dwelling older people aged between 75 and 93 years while walking on an irregular walkway. A theoretical model was developed to explain the relationships between these variables, assuming that head stability is a primary output of the postural control system when walking. This model was then tested using structural equation modeling, a statistical technique which enables the testing of a set of regression equations simultaneously. The structural equation model indicated that: (i) reduced step length has a significant direct and indirect association with reduced head stability; (ii) impaired sensorimotor function is significantly associated with reduced head stability, but this effect is largely indirect, mediated by reduced step length, and; (iii) fear of falling is significantly associated with reduced step length, but has little direct influence on head stability. These findings provide useful insights into the possible mechanisms underlying gait characteristics and risk of falling in older people. Particularly important is the indication that fear-related step length shortening may be maladaptive.

  3. Tree-Based Global Model Tests for Polytomous Rasch Models

    ERIC Educational Resources Information Center

    Komboz, Basil; Strobl, Carolin; Zeileis, Achim

    2018-01-01

    Psychometric measurement models are only valid if measurement invariance holds between test takers of different groups. Global model tests, such as the well-established likelihood ratio (LR) test, are sensitive to violations of measurement invariance, such as differential item functioning and differential step functioning. However, these…

  4. Consistency functional map propagation for repetitive patterns

    NASA Astrophysics Data System (ADS)

    Wang, Hao

    2017-09-01

    Repetitive patterns appear frequently in both man-made and natural environments. Automatically and robustly detecting such patterns from an image is a challenging problem. We study repetitive pattern alignment by embedding a segmentation cue within a functional map model. However, this model cannot handle repetitive patterns directly due to large photometric and geometric variations. Thus, a consistency functional map propagation (CFMP) algorithm that extends the functional map with dynamic propagation is proposed to address this issue. The propagation model operates in two steps. The first step aligns the patterns from a local region, transferring segmentation functions among patterns; it can be cast as an L norm optimization problem. The second step updates the template segmentation for the next round of pattern discovery by merging the transferred segmentation functions. Extensive experiments and comparative analyses have demonstrated an encouraging performance of the proposed algorithm in detection and segmentation of repetitive patterns.

  5. Ancient numerical daemons of conceptual hydrological modeling: 2. Impact of time stepping schemes on model analysis and prediction

    NASA Astrophysics Data System (ADS)

    Kavetski, Dmitri; Clark, Martyn P.

    2010-10-01

    Despite the widespread use of conceptual hydrological models in environmental research and operations, they remain frequently implemented using numerically unreliable methods. This paper considers the impact of the time stepping scheme on model analysis (sensitivity analysis, parameter optimization, and Markov chain Monte Carlo-based uncertainty estimation) and prediction. It builds on the companion paper (Clark and Kavetski, 2010), which focused on numerical accuracy, fidelity, and computational efficiency. Empirical and theoretical analysis of eight distinct time stepping schemes for six different hydrological models in 13 diverse basins demonstrates several critical conclusions. (1) Unreliable time stepping schemes, in particular, fixed-step explicit methods, suffer from troublesome numerical artifacts that severely deform the objective function of the model. These deformations are not rare isolated instances but can arise in any model structure, in any catchment, and under common hydroclimatic conditions. (2) Sensitivity analysis can be severely contaminated by numerical errors, often to the extent that it becomes dominated by the sensitivity of truncation errors rather than the model equations. (3) Robust time stepping schemes generally produce "better behaved" objective functions, free of spurious local optima, and with sufficient numerical continuity to permit parameter optimization using efficient quasi-Newton methods. When implemented within a multistart framework, modern Newton-type optimizers are robust even when started far from the optima and provide valuable diagnostic insights not directly available from evolutionary global optimizers. (4) Unreliable time stepping schemes lead to inconsistent and biased inferences of the model parameters and internal states. 
(5) Even when interactions between hydrological parameters and numerical errors provide "the right result for the wrong reason" and the calibrated model performance appears adequate, unreliable time stepping schemes make the model unnecessarily fragile in predictive mode, undermining validation assessments and operational use. Erroneous or misleading conclusions of model analysis and prediction arising from numerical artifacts in hydrological models are intolerable, especially given that robust numerics are accepted as mainstream in other areas of science and engineering. We hope that the vivid empirical findings will encourage the conceptual hydrological community to close its Pandora's box of numerical problems, paving the way for more meaningful model application and interpretation.
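
    The core numerical point can be illustrated with a deliberately minimal sketch, not one of the paper's six hydrological models: a single linear reservoir dS/dt = P - k*S integrated with a fixed-step explicit Euler scheme. The scheme is unstable for k*dt > 2, and even when stable its truncation error biases any objective function computed from the simulated states.

```python
# Minimal sketch (a linear reservoir, not one of the paper's models):
# fixed-step explicit Euler for dS/dt = P - k*S.

def simulate(k, dt, n_steps, precip=1.0, s0=0.0):
    """Fixed-step explicit Euler integration of dS/dt = P - k*S."""
    s, out = s0, []
    for _ in range(n_steps):
        s = s + dt * (precip - k * s)
        out.append(s)
    return out

def sse(k, dt, obs):
    """Sum-of-squared-errors objective against observations at unit spacing."""
    sim = simulate(k, dt, len(obs))
    return sum((a - b) ** 2 for a, b in zip(sim, obs))

# "Observations" from a near-exact fine-step run, sampled every 1.0 time units
fine = simulate(0.5, 0.01, 1000)
obs = fine[99::100]

error_at_truth = sse(0.5, 1.0, obs)        # nonzero purely from truncation error
unstable_tail = simulate(3.0, 1.0, 50)[-1] # k*dt = 3 > 2: the scheme blows up
```

    Because `error_at_truth` is nonzero at the true parameter value, an optimizer run against this coarse-step objective is pulled away from k = 0.5, which is the kind of parameter bias the paper documents for unreliable schemes.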

  6. STEP and STEPSPL: Computer programs for aerodynamic model structure determination and parameter estimation

    NASA Technical Reports Server (NTRS)

    Batterson, J. G.

    1986-01-01

    The successful parametric modeling of the aerodynamics for an airplane operating at high angles of attack or sideslip is performed in two phases. First the aerodynamic model structure must be determined and second the associated aerodynamic parameters (stability and control derivatives) must be estimated for that model. The purpose of this paper is to document two versions of a stepwise regression computer program which were developed for the determination of airplane aerodynamic model structure and to provide two examples of their use on computer generated data. References are provided for the application of the programs to real flight data. The two computer programs that are the subject of this report, STEP and STEPSPL, are written in FORTRAN IV (ANSI 1966) compatible with a CDC FTN4 compiler. Both programs are adaptations of a standard forward stepwise regression algorithm. The purpose of the adaptation is to facilitate the selection of an adequate mathematical model of the aerodynamic force and moment coefficients of an airplane from flight test data. The major difference between STEP and STEPSPL is in the basis for the model. The basis for the model in STEP is the standard polynomial Taylor's series expansion of the aerodynamic function about some steady-state trim condition. Program STEPSPL utilizes a set of spline basis functions.
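
    The forward stepwise idea at the heart of such programs can be sketched in a few lines of pure Python (least squares via normal equations; the polynomial candidate pool and data below are invented for illustration, not taken from STEP or STEPSPL). At each step the candidate regressor giving the largest drop in the error sum of squares is added, until the improvement is negligible.

```python
# Forward stepwise regression sketch with an invented candidate pool.

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_sse(cols, y):
    """Least-squares fit of y on the given columns; returns the SSE."""
    m = len(cols)
    A = [[sum(u * v for u, v in zip(cols[i], cols[j])) for j in range(m)]
         for i in range(m)]
    b = [sum(u * v for u, v in zip(cols[i], y)) for i in range(m)]
    beta = solve(A, b)
    return sum((yi - sum(beta[j] * cols[j][k] for j in range(m))) ** 2
               for k, yi in enumerate(y))

xs = [i / 10 for i in range(-20, 21)]
y = [2 * x + 3 * x * x for x in xs]               # "true" model: x and x^2 terms
pool = {"1": [1.0] * len(xs), "x": xs,
        "x^2": [x * x for x in xs], "x^3": [x ** 3 for x in xs]}

selected, chosen_cols = [], []
sse = sum(v * v for v in y)
while pool:
    name = min(pool, key=lambda nm: fit_sse(chosen_cols + [pool[nm]], y))
    new_sse = fit_sse(chosen_cols + [pool[name]], y)
    if sse - new_sse < 1e-9:                      # negligible improvement: stop
        break
    selected.append(name)
    chosen_cols.append(pool.pop(name))
    sse = new_sse
```

    On this noise-free data the procedure picks the x^2 and x terms and then stops, leaving the spurious constant and cubic candidates out of the model.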

  7. Monte Carlo modeling of single-molecule cytoplasmic dynein.

    PubMed

    Singh, Manoranjan P; Mallik, Roop; Gross, Steven P; Yu, Clare C

    2005-08-23

    Molecular motors are responsible for active transport and organization in the cell, underlying an enormous number of crucial biological processes. Dynein is more complicated in its structure and function than other motors. Recent experiments have found that, unlike other motors, dynein can take different size steps along microtubules depending on load and ATP concentration. We use Monte Carlo simulations to model the molecular motor function of cytoplasmic dynein at the single-molecule level. The theory relates dynein's enzymatic properties to its mechanical force production. Our simulations reproduce the main features of recent single-molecule experiments that found a discrete distribution of dynein step sizes, depending on load and ATP concentration. The model reproduces the large steps found experimentally under high ATP and no load by assuming that the ATP binding affinities at the secondary sites decrease as the number of ATP bound to these sites increases. Additionally, to capture the essential features of the step-size distribution at very low ATP concentration and no load, the ATP hydrolysis of the primary site must be dramatically reduced when none of the secondary sites have ATP bound to them. We make testable predictions that should guide future experiments related to dynein function.
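
    The flavor of a load/ATP-dependent step-size distribution can be conveyed by a toy caricature, not the paper's kinetic scheme: the step size is drawn from a discrete 8/16/24/32 nm distribution, and the weight of the larger steps is simply assumed to grow with ATP occupancy of the secondary sites, which saturates with concentration (the hypothetical K_M of 50 uM and all weights are invented).

```python
import random

# Toy sketch only: discrete step-size distribution whose large-step weights
# rise with an assumed saturating occupancy of the secondary ATP sites.

def step_weights(atp_uM):
    occ = atp_uM / (atp_uM + 50.0)            # assumed site occupancy in [0, 1)
    return {8: 1.0, 16: 1.0, 24: 0.2 + 2.0 * occ, 32: 0.1 + 3.0 * occ}

def mean_step(atp_uM, n=20000, seed=7):
    """Monte Carlo estimate of the mean step size at a given [ATP]."""
    rng = random.Random(seed)
    w = step_weights(atp_uM)
    draws = rng.choices(list(w), weights=list(w.values()), k=n)
    return sum(draws) / n

low_atp, high_atp = mean_step(1.0), mean_step(1000.0)
```

    With these assumed weights the sampled mean step size grows with ATP concentration, qualitatively matching the large steps reported at high ATP and no load.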

  8. Strategies for developing competency models.

    PubMed

    Marrelli, Anne F; Tondora, Janis; Hoge, Michael A

    2005-01-01

    There is an emerging trend within healthcare to introduce competency-based approaches in the training, assessment, and development of the workforce. The trend is evident in various disciplines and specialty areas within the field of behavioral health. This article is designed to inform those efforts by presenting a step-by-step process for developing a competency model. An introductory overview of competencies, competency models, and the legal implications of competency development is followed by a description of the seven steps involved in creating a competency model for a specific function, role, or position. This modeling process is drawn from advanced work on competencies in business and industry.

  9. User's Manual and Final Report for Hot-SMAC GUI Development

    NASA Technical Reports Server (NTRS)

    Yarrington, Phil

    2001-01-01

    A new software package called Higher Order Theory-Structural/Micro Analysis Code (HOT-SMAC) has been developed as an effective alternative to the finite element approach for Functionally Graded Material (FGM) modeling. HOT-SMAC is a self-contained package including pre- and post-processing through an intuitive graphical user interface, along with the well-established Higher Order Theory for Functionally Graded Materials (HOTFGM) thermomechanical analysis engine. This document represents a Getting Started/User's Manual for HOT-SMAC and a final report for its development. First, the features of the software are presented in a simple step-by-step example where a HOT-SMAC model representing a functionally graded material is created, mechanical and thermal boundary conditions are applied, the model is analyzed and results are reviewed. In a second step-by-step example, a HOT-SMAC model of an actively cooled metallic channel with ceramic thermal barrier coating is built and analyzed. HOT-SMAC results from this model are compared to recently published results (NASA/TM-2001-210702) for two grid densities. Finally, a prototype integration of HOT-SMAC with the commercially available HyperSizer® structural analysis and sizing software is presented. In this integration, local strain results from HyperSizer's structural analysis are fed to a detailed HOT-SMAC model of the flange-to-facesheet bond region of a stiffened panel. HOT-SMAC is then used to determine the peak shear and peel (normal) stresses between the facesheet and bonded flange of the panel and determine the "free edge" effects.

  10. Calibration of a texture-based model of a ground-water flow system, western San Joaquin Valley, California

    USGS Publications Warehouse

    Phillips, Steven P.; Belitz, Kenneth

    1991-01-01

    The occurrence of selenium in agricultural drain water from the western San Joaquin Valley, California, has focused concern on the semiconfined ground-water flow system, which is underlain by the Corcoran Clay Member of the Tulare Formation. A two-step procedure is used to calibrate a preliminary model of the system for the purpose of determining the steady-state hydraulic properties. Horizontal and vertical hydraulic conductivities are modeled as functions of the percentage of coarse sediment, hydraulic conductivities of coarse-textured (Kcoarse) and fine-textured (Kfine) end members, and averaging methods used to calculate equivalent hydraulic conductivities. The vertical conductivity of the Corcoran (Kcorc) is an additional parameter to be evaluated. In the first step of the calibration procedure, the model is run by systematically varying the following variables: (1) Kcoarse/Kfine, (2) Kcoarse/Kcorc, and (3) choice of averaging methods in the horizontal and vertical directions. Root mean square error and bias values calculated from the model results are functions of these variables. These measures of error provide a means for evaluating model sensitivity and for selecting values of Kcoarse, Kfine, and Kcorc for use in the second step of the calibration procedure. In the second step, recharge rates are evaluated as functions of Kcoarse, Kcorc, and a combination of averaging methods. The associated Kfine values are selected so that the root mean square error is minimized on the basis of the results from the first step. The results of the two-step procedure indicate that the spatial distribution of hydraulic conductivity that best produces the measured hydraulic head distribution is created through the use of arithmetic averaging in the horizontal direction and either geometric or harmonic averaging in the vertical direction. 
The equivalent hydraulic conductivities resulting from either combination of averaging methods compare favorably to field- and laboratory-based values.
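
    The three averaging rules compared in the calibration can be written compactly for a two-component coarse/fine mixture; the forms below are the standard textbook expressions, and the conductivity values are illustrative only, not the calibrated San Joaquin values.

```python
# Equivalent-conductivity averages for a two-component coarse/fine mixture.

def k_equivalent(frac_coarse, k_coarse, k_fine, method):
    """Equivalent hydraulic conductivity under three averaging rules."""
    f = frac_coarse
    if method == "arithmetic":      # flow parallel to layering (horizontal)
        return f * k_coarse + (1.0 - f) * k_fine
    if method == "geometric":
        return k_coarse ** f * k_fine ** (1.0 - f)
    if method == "harmonic":        # flow across layering (vertical)
        return 1.0 / (f / k_coarse + (1.0 - f) / k_fine)
    raise ValueError(method)

ks = {m: k_equivalent(0.5, 10.0, 0.01, m)
      for m in ("arithmetic", "geometric", "harmonic")}
```

    For any mixture the three averages are ordered harmonic ≤ geometric ≤ arithmetic, which is why the choice of horizontal versus vertical averaging method has such a strong effect on the simulated head distribution.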

  11. Accessing FMS Functionality: The Impact of Design on Learning

    NASA Technical Reports Server (NTRS)

    Fennell, Karl; Sherry, Lance; Roberts, Ralph, Jr.

    2004-01-01

    In modern commercial and military aircraft, the Flight Management System (FMS) lies at the heart of the functionality of the airplane. The nature of the FMS has also caused great difficulties learning and accessing this functionality. This study examines actual Air Force pilots who were qualified on the newly introduced advanced FMS and shows that the design of the system itself is a primary source of difficulty learning the system. Twenty representative tasks were selected which the pilots could be expected to accomplish on an ' actual flight. These tasks were analyzed using the RAFIV stage model (Sherry, Polson, et al. 2002). This analysis demonstrates that a great burden is placed on remembering complex reformulation of the task to function mapping. 65% of the tasks required retaining one access steps in memory to accomplish the task, 20% required two memorized access steps, and 15% required zero memorized access steps. The probability that a participant would make an access error on the tasks was: two memorized access steps - 74%, one memorized access step - 13%, and zero memorized access steps - 6%. Other factors were analyzed as well, including experience with the system and frequency of use. This completed the picture of a system with many memorized steps causing difficulty with the new system, especially when trying to fine where to access the correct function.

  12. A comparison of simple global kinetic models for coal devolatilization with the CPD model

    DOE PAGES

    Richards, Andrew P.; Fletcher, Thomas H.

    2016-08-01

    Simulations of coal combustors and gasifiers generally cannot incorporate the complexities of advanced pyrolysis models, and hence there is interest in evaluating simpler models over ranges of temperature and heating rate that are applicable to the furnace of interest. In this paper, six different simple model forms are compared to predictions made by the Chemical Percolation Devolatilization (CPD) model. The model forms included three modified one-step models, a simple two-step model, and two new modified two-step models. These simple model forms were compared over a wide range of heating rates (5 × 10³ to 10⁶ K/s) at final temperatures up to 1600 K. Comparisons were made of total volatiles yield as a function of temperature, as well as the ultimate volatiles yield. Advantages and disadvantages for each simple model form are discussed. In conclusion, a modified two-step model with distributed activation energies seems to give the best agreement with CPD model predictions (with the fewest tunable parameters).
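
    As a point of reference, the simplest of these forms, a one-step first-order model, can be integrated over a constant-heating-rate history in a few lines. The rate constants and ultimate yield below are placeholders, not fitted coal parameters.

```python
import math

# One-step first-order devolatilization dV/dT = (A/q) * exp(-E/RT) * (V* - V),
# integrated by small explicit temperature steps at heating rate q (K/s).
# A, E/R, and V* are illustrative placeholders.

def volatiles(q, t_final, v_star=0.5, a=1.0e13, e_over_r=2.0e4,
              t0=300.0, dT=0.5):
    v, T = 0.0, t0
    while T < t_final:
        k = a * math.exp(-e_over_r / T)
        v = min(v_star, v + (k / q) * (v_star - v) * dT)   # clamp at V*
        T += dT
    return v

low_rate = volatiles(5.0e3, 900.0)    # 5 x 10^3 K/s, yield at 900 K
high_rate = volatiles(1.0e6, 900.0)   # 10^6 K/s, yield at 900 K
```

    At a given temperature the faster heating rate has released less volatile matter, i.e. the yield curve shifts to higher temperatures with heating rate, while by 1600 K both histories reach the ultimate yield V*.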

  13. The Soil Model Development and Intercomparison Panel (SoilMIP) of the International Soil Modeling Consortium (ISMC)

    NASA Astrophysics Data System (ADS)

    Vanderborght, Jan; Priesack, Eckart

    2017-04-01

    The Soil Model Development and Intercomparison Panel (SoilMIP) is an initiative of the International Soil Modeling Consortium. Its mission is to foster the further development of soil models that can predict soil functions and their changes (i) due to soil use and land management and (ii) due to external impacts of climate change and pollution. Since soil functions and soil threats are diverse but linked with each other, the overall aim is to develop holistic models that represent the key functions of the soil system and the links between them. These models should be scaled up and integrated in terrestrial system models that describe the feedbacks between processes in the soil and the other terrestrial compartments. We propose and illustrate a few steps that could be taken to achieve these goals. A first step is the development of scenarios that compare simulations by models that predict the same or different soil services. Scenarios can be considered at three different levels of comparisons: scenarios that compare the numerics (accuracy but also speed) of models, scenarios that compare the effect of differences in process descriptions, and scenarios that compare simulations with experimental data. A second step involves the derivation of metrics or summary statistics that effectively compare model simulations and disentangle parameterization from model concept differences. These metrics can be used to evaluate how more complex model simulations can be represented by simpler models using an appropriate parameterization. A third step relates to the parameterization of models. Application of simulation models implies that appropriate model parameters have to be defined for a range of environmental conditions and locations. Spatial modelling approaches are used to derive parameter distributions. 
Considering that soils and their properties emerge from the interaction between physical, chemical and biological processes, the combination of spatial models with process models would lead to consistent parameter distributions and correlations and could potentially represent self-organizing processes in soils and landscapes.

  14. Analysis of 3D poroelastodynamics using BEM based on modified time-step scheme

    NASA Astrophysics Data System (ADS)

    Igumnov, L. A.; Petrov, A. N.; Vorobtsov, I. V.

    2017-10-01

    The development of 3D boundary-element modeling of dynamic partially saturated poroelastic media using a stepping scheme is presented in this paper. The Boundary Element Method (BEM) in the Laplace domain and a time-stepping scheme for numerical inversion of the Laplace transform are used to solve the boundary value problem. A modified stepping scheme with a varied integration step is applied to the calculation of quadrature coefficients, exploiting the symmetry of the integrand function and integral formulas for strongly oscillating functions. The problem of a force acting on the end of a poroelastic prismatic console was solved using the developed method. A comparison of the results obtained by the traditional stepping scheme with the solutions obtained by this modified scheme shows that computational efficiency is improved by using the combined formulas.

  15. Interacting steps with finite-range interactions: Analytical approximation and numerical results

    NASA Astrophysics Data System (ADS)

    Jaramillo, Diego Felipe; Téllez, Gabriel; González, Diego Luis; Einstein, T. L.

    2013-05-01

    We calculate an analytical expression for the terrace-width distribution P(s) for an interacting step system with nearest- and next-nearest-neighbor interactions. Our model is derived by mapping the step system onto a statistically equivalent one-dimensional system of classical particles. The validity of the model is tested with several numerical simulations and experimental results. We explore the effect of the range of interactions q on the functional form of the terrace-width distribution and pair correlation functions. For physically plausible interactions, we find modest changes when next-nearest neighbor interactions are included and generally negligible changes when more distant interactions are allowed. We discuss methods for extracting from simulated experimental data the characteristic scale-setting terms in assumed potential forms.

  16. Inverting pump-probe spectroscopy for state tomography of excitonic systems.

    PubMed

    Hoyer, Stephan; Whaley, K Birgitta

    2013-04-28

    We propose a two-step protocol for inverting ultrafast spectroscopy experiments on a molecular aggregate to extract the time-evolution of the excited state density matrix. The first step is a deconvolution of the experimental signal to determine a pump-dependent response function. The second step inverts this response function to obtain the quantum state of the system, given a model for how the system evolves following the probe interaction. We demonstrate this inversion analytically and numerically for a dimer model system, and evaluate the feasibility of scaling it to larger molecular aggregates such as photosynthetic protein-pigment complexes. Our scheme provides a direct alternative to the approach of determining all Hamiltonian parameters and then simulating excited state dynamics.

  17. Therapeutic Implications for Striatal-Enriched Protein Tyrosine Phosphatase (STEP) in Neuropsychiatric Disorders

    PubMed Central

    Goebel-Goody, Susan M.; Baum, Matthew; Paspalas, Constantinos D.; Fernandez, Stephanie M.; Carty, Niki C.; Kurup, Pradeep

    2012-01-01

    Striatal-enriched protein tyrosine phosphatase (STEP) is a brain-specific phosphatase that modulates key signaling molecules involved in synaptic plasticity and neuronal function. Targets include extracellular-regulated kinase 1 and 2 (ERK1/2), stress-activated protein kinase p38 (p38), the Src family tyrosine kinase Fyn, N-methyl-d-aspartate receptors (NMDARs), and α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid receptors (AMPARs). STEP-mediated dephosphorylation of ERK1/2, p38, and Fyn leads to inactivation of these enzymes, whereas STEP-mediated dephosphorylation of surface NMDARs and AMPARs promotes their endocytosis. Accordingly, the current model of STEP function posits that it opposes long-term potentiation and promotes long-term depression. Phosphorylation, cleavage, dimerization, ubiquitination, and local translation all converge to maintain an appropriate balance of STEP in the central nervous system. Accumulating evidence over the past decade indicates that STEP dysregulation contributes to the pathophysiology of several neuropsychiatric disorders, including Alzheimer's disease, schizophrenia, fragile X syndrome, epileptogenesis, alcohol-induced memory loss, Huntington's disease, drug abuse, stroke/ischemia, and inflammatory pain. This comprehensive review discusses STEP expression and regulation and highlights how disrupted STEP function contributes to the pathophysiology of diverse neuropsychiatric disorders. PMID:22090472

  18. LENMODEL: A forward model for calculating length distributions and fission-track ages in apatite

    NASA Astrophysics Data System (ADS)

    Crowley, Kevin D.

    1993-05-01

    The program LENMODEL is a forward model for annealing of fission tracks in apatite. It provides estimates of the track-length distribution, fission-track age, and areal track density for any user-supplied thermal history. The program approximates the thermal history, in which temperature is represented as a continuous function of time, by a series of isothermal steps of various durations. Equations describing the production of tracks as a function of time and annealing of tracks as a function of time and temperature are solved for each step. The step calculations are summed to obtain estimates for the entire thermal history. Computational efficiency is maximized by performing the step calculations backwards in model time. The program incorporates an intuitive and easy-to-use graphical interface. Thermal history is input to the program using a mouse. Model options are specified by selecting context-sensitive commands from a bar menu. The program allows for considerable selection of equations and parameters used in the calculations. The program was written for PC-compatible computers running DOS™ 3.0 and above (and Windows™ 3.0 or above) with VGA or SVGA graphics and a Microsoft™-compatible mouse. Single copies of a runtime version of the program are available from the author by written request as explained in the last section of this paper.
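
    The isothermal-step bookkeeping, including the backward-in-time trick that turns the summation into a single pass, can be sketched with a made-up Arrhenius annealing law; the constants and the linear cooling history below are illustrative, not apatite kinetics.

```python
import math

# Toy version of the isothermal-step approximation. Tracks born in step i
# anneal through all later (cooler) steps; accumulating the annealing product
# backward in model time makes the whole calculation a single pass.

def temperature(t_myr):
    return 400.0 - 3.0 * t_myr            # assumed linear cooling, Kelvin

def track_lengths(n_steps=100, t_total=100.0, a=5.0e4, e_over_r=5.0e3):
    dt = t_total / n_steps
    lengths = [0.0] * n_steps
    ann = 1.0
    for i in reversed(range(n_steps)):    # backward in model time
        T = temperature((i + 0.5) * dt)
        ann *= math.exp(-a * math.exp(-e_over_r / T) * dt)
        lengths[i] = ann                  # relative length of tracks born in step i
    return lengths

lengths = track_lengths()
```

    Tracks produced early, while the sample is hot, accumulate the most annealing and come out shortest; the youngest tracks are essentially full length, which is the qualitative shape of a length distribution from a cooling history.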

  19. Comparison of 1-step and 2-step methods of fitting microbiological models.

    PubMed

    Jewell, Keith

    2012-11-15

    Previous conclusions that a 1-step fitting method gives more precise coefficients than the traditional 2-step method are confirmed by application to three different data sets. It is also shown that, in comparison to 2-step fits, the 1-step method gives better fits to the data (often substantially) with directly interpretable regression diagnostics and standard errors. The improvement is greatest at extremes of environmental conditions and it is shown that 1-step fits can indicate inappropriate functional forms when 2-step fits do not. 1-step fits are better at estimating primary parameters (e.g. lag, growth rate) as well as concentrations, and are much more data efficient, allowing the construction of more robust models on smaller data sets. The 1-step method can be straightforwardly applied to any data set for which the 2-step method can be used and additionally to some data sets where the 2-step method fails. A 2-step approach is appropriate for visual assessment in the early stages of model development, and may be a convenient way to generate starting values for a 1-step fit, but the 1-step approach should be used for any quantitative assessment. Copyright © 2012 Elsevier B.V. All rights reserved.
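
    The mechanics of the two routes can be sketched on synthetic, noise-free growth data. The linear primary model y = mu(T)·t, the square-root-type secondary model sqrt(mu) = sqrt(b)·(T − Tmin), and all parameter values are illustrative assumptions, not from the paper.

```python
# 2-step vs 1-step fitting on synthetic growth data (illustrative models).

temps = [10.0, 15.0, 20.0, 25.0]
times = [1.0, 2.0, 3.0, 4.0, 5.0]

def mu_true(T, b=0.01, t_min=5.0):
    return b * (T - t_min) ** 2

data = {T: [(t, mu_true(T) * t) for t in times] for T in temps}

# --- 2-step: fit a growth rate per temperature, then fit the secondary model
slopes = {T: sum(t * y for t, y in pts) / sum(t * t for t, _ in pts)
          for T, pts in data.items()}
ys = [slopes[T] ** 0.5 for T in temps]
m = len(temps)
sxx = sum(T * T for T in temps) - sum(temps) ** 2 / m
sxy = sum(T * y for T, y in zip(temps, ys)) - sum(temps) * sum(ys) / m
slope = sxy / sxx
intercept = (sum(ys) - slope * sum(temps)) / m
b_2step, tmin_2step = slope ** 2, -intercept / slope

# --- 1-step: fit (b, Tmin) directly to every raw point (coarse grid search
# here, in place of a proper nonlinear least-squares routine)
def pooled_sse(b, t_min):
    return sum((y - b * (T - t_min) ** 2 * t) ** 2
               for T, pts in data.items() for t, y in pts)

grid = [(bi / 1000.0, tm / 2.0) for bi in range(5, 21) for tm in range(6, 15)]
b_1step, tmin_1step = min(grid, key=lambda p: pooled_sse(*p))
```

    On clean data both routes recover (b, Tmin); the paper's point is that with noisy, sparse data the 1-step route, which pools all raw residuals into one objective, remains well behaved where the per-condition fits of the 2-step route degrade.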

  20. Individual tree-diameter growth model for the Northeastern United States

    Treesearch

    Richard M. Teck; Donald E. Hilt

    1991-01-01

    Describes a distance-independent individual-tree diameter growth model for the Northeastern United States. Diameter growth is predicted in two steps using a two parameter, sigmoidal growth function modified by a one parameter exponential decay function with species-specific coefficients. Coefficients are presented for 28 species groups. The model accounts for...
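
    A generic version of that two-part form, a saturating potential-growth curve multiplied by a competition decay, looks like the sketch below; the function shape (saturating rather than strictly sigmoidal, for brevity) and all coefficients are invented, not the published species-group coefficients.

```python
import math

def diameter_growth(d, comp, a=2.0, b=0.08, c=1.5):
    """Potential growth (two-parameter saturating curve in diameter d)
    reduced by a one-parameter exponential decay in competition index comp.
    All coefficients are illustrative placeholders."""
    return a * (1.0 - math.exp(-b * d)) * math.exp(-c * comp)

open_grown = diameter_growth(30.0, 0.2)   # large tree, light competition
suppressed = diameter_growth(30.0, 1.0)   # same tree, heavy competition
```

    Species-specific behavior enters through the coefficients: the two potential-growth parameters set the asymptote and how quickly it is approached, while the single decay parameter controls how sharply competition suppresses growth.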

  1. Ultramap: the all in One Photogrammetric Solution

    NASA Astrophysics Data System (ADS)

    Wiechert, A.; Gruber, M.; Karner, K.

    2012-07-01

    This paper describes in detail the dense matcher developed over several years by Vexcel Imaging in Graz for Microsoft's Bing Maps project. This dense matcher was exclusively developed for and used by Microsoft for the production of the 3D city models of Virtual Earth. It will now be made available to the public with the UltraMap software release in mid-2012, which represents a revolutionary step in digital photogrammetry. The dense matcher generates digital surface models (DSM) and digital terrain models (DTM) automatically out of a set of overlapping UltraCam images. The models have an outstanding point density of several hundred points per square meter with sub-pixel accuracy and are generated automatically. The dense matcher consists of two steps. The first step rectifies overlapping image areas to speed up the dense image matching process; this rectification ensures very efficient processing and detects occluded areas by applying a back-matching step. In the dense image matching process a cost function consisting of a matching score as well as a smoothness term is minimized. In the second step the resulting range image patches are fused into a DSM by optimizing a global cost function. The whole process is optimized for multi-core CPUs and optionally uses GPUs if available. UltraMap 3.0 also features an additional step which is presented in this paper: a completely automated true-ortho and ortho workflow. For this, the UltraCam images are combined with the DSM or DTM in an automated rectification step, yielding high-quality true-ortho or ortho images from a highly automated workflow. The paper presents the new workflow and first results.

  2. Modeling multivariate time series on manifolds with skew radial basis functions.

    PubMed

    Jamshidi, Arta A; Kirby, Michael J

    2011-01-01

    We present an approach for constructing nonlinear empirical mappings from high-dimensional domains to multivariate ranges. We employ radial basis functions and skew radial basis functions for constructing a model using data that are potentially scattered or sparse. The algorithm progresses iteratively, adding a new function at each step to refine the model. The placement of the functions is driven by a statistical hypothesis test that accounts for correlation in the multivariate range variables. The test is applied on training and validation data and reveals nonstatistical or geometric structure when it fails. At each step, the added function is fit to data contained in a spatiotemporally defined local region to determine the parameters--in particular, the scale of the local model. The scale of the function is determined by the zero crossings of the autocorrelation function of the residuals. The model parameters and the number of basis functions are determined automatically from the given data, and there is no need to initialize any ad hoc parameters save for the selection of the skew radial basis functions. Compactly supported skew radial basis functions are employed to improve model accuracy, order, and convergence properties. The extension of the algorithm to higher-dimensional ranges produces reduced-order models by exploiting the existence of correlation in the range variable data. Structure is tested not just in a single time series but between all pairs of time series. We illustrate the new methodologies using several illustrative problems, including modeling data on manifolds and the prediction of chaotic time series.
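
    The flavor of the iterative refinement loop can be conveyed by a stripped-down variant: plain Gaussian RBFs instead of skew RBFs, a fixed width in place of the autocorrelation-derived scale, and the next center placed at the worst residual with a least-squares amplitude (a matching-pursuit step rather than the paper's statistical hypothesis test).

```python
import math

# Greedy RBF refinement sketch: one new Gaussian per iteration, centered at
# the worst residual, with its amplitude chosen by 1D least squares.

def rbf(x, c, w):
    return math.exp(-((x - c) / w) ** 2)

def fit_greedy(xs, ys, n_funcs=20, width=0.3):
    centers, amps = [], []
    def model(x):
        return sum(a * rbf(x, c, width) for a, c in zip(amps, centers))
    for _ in range(n_funcs):
        resid = [y - model(x) for x, y in zip(xs, ys)]
        k = max(range(len(xs)), key=lambda i: abs(resid[i]))   # worst point
        phi = [rbf(x, xs[k], width) for x in xs]
        # least-squares amplitude of the new function against the residual
        a = sum(r * p for r, p in zip(resid, phi)) / sum(p * p for p in phi)
        centers.append(xs[k])
        amps.append(a)
    return model

xs = [i / 20 for i in range(41)]            # grid on [0, 2]
ys = [math.sin(3 * x) for x in xs]
model = fit_greedy(xs, ys)
rms_before = (sum(y * y for y in ys) / len(ys)) ** 0.5
rms_after = (sum((y - model(x)) ** 2 for x, y in zip(xs, ys)) / len(ys)) ** 0.5
```

    Because each amplitude is the optimal projection of the residual onto the new function, the residual norm is non-increasing at every step; the paper's contribution is choosing when to stop and what scale to use statistically rather than by a fixed budget and width as here.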

  3. Effect of nucleation on instability of step meandering during step-flow growth on vicinal 3C-SiC (0001) surfaces

    NASA Astrophysics Data System (ADS)

    Li, Yuan; Chen, Xuejiang; Su, Juan

    2017-06-01

    A three-dimensional kinetic Monte Carlo (KMC) model has been developed to study the step instability caused by nucleation during the step-flow growth of 3C-SiC. In the model, a lattice mesh was established to fix the position of atoms and bond partners based on the crystal lattice of 3C-SiC. The events considered in the model were adsorption and diffusion of adatoms on the terraces; attachment, detachment and interlayer transport of adatoms at the step edges; and nucleation of adatoms. The effects of nucleation on the instability of step meandering and on the coalescence of both islands and steps were then simulated. The results showed that the instability of step meandering caused by nucleation is affected by the growth temperature, and the mechanism of this effect was analyzed. Moreover, the surface roughness as a function of time for different temperatures was discussed. Finally, a phase diagram was presented to predict the conditions under which the effects of nucleation on step meandering become significant, and the three different regimes, step-flow (SF), 2D nucleation (2DN), and 3D layer-by-layer (3DLBL) growth, were determined.
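
    The skeleton of such a lattice KMC loop, reduced to a 1D toy with only adsorption and hopping (no SiC lattice, bonds, step edges, or nucleation, and with arbitrary rates), is:

```python
import math
import random

# Bare-bones rejection-style KMC: draw an event with probability proportional
# to its rate, advance time by an exponential waiting time, apply the event.

def kmc(n_sites=50, n_events=500, f_ads=1.0, nu_hop=10.0, seed=0):
    rng = random.Random(seed)
    occ = [False] * n_sites
    t = 0.0
    for _ in range(n_events):
        events = [("ads", i, f_ads) for i in range(n_sites) if not occ[i]]
        events += [("hop", i, nu_hop) for i in range(n_sites) if occ[i]]
        total = sum(r for _, _, r in events)
        t += -math.log(1.0 - rng.random()) / total   # exponential waiting time
        pick, acc = rng.random() * total, 0.0
        for kind, i, r in events:
            acc += r
            if pick <= acc:
                break
        if kind == "ads":
            occ[i] = True
        else:                                        # hop to a random neighbor,
            j = (i + rng.choice([-1, 1])) % n_sites  # rejected if occupied
            if not occ[j]:
                occ[i], occ[j] = False, True
    return t, sum(occ)

elapsed, coverage = kmc()
```

    A real growth model replaces the flat hop rate with bond-counting energetics on the 3C-SiC lattice and adds the edge-attachment, detachment, and nucleation events listed in the abstract, but the select-event/advance-clock structure is the same.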

  4. The Role of Striatal-Enriched Protein Tyrosine Phosphatase (STEP) in Cognition

    PubMed Central

    Fitzpatrick, Christopher James; Lombroso, Paul J.

    2011-01-01

    Striatal-enriched protein tyrosine phosphatase (STEP) has recently been implicated in several neuropsychiatric disorders with significant cognitive impairments, including Alzheimer’s disease, schizophrenia, and fragile X syndrome. A model has emerged by which STEP normally opposes the development of synaptic strengthening and that disruption in STEP activity leads to aberrant synaptic function. We review the mechanisms by which STEP contributes to the etiology of these and other neuropsychiatric disorders. These findings suggest that disruptions in STEP activity may be a common mechanism for cognitive impairments in diverse illnesses. PMID:21863137

  5. A hybrid Pade-Galerkin technique for differential equations

    NASA Technical Reports Server (NTRS)

    Geer, James F.; Andersen, Carl M.

    1993-01-01

    A three-step hybrid analysis technique, which successively uses the regular perturbation expansion method, the Pade expansion method, and then a Galerkin approximation, is presented and applied to some model boundary value problems. In the first step of the method, the regular perturbation method is used to construct an approximation to the solution in the form of a finite power series in a small parameter epsilon associated with the problem. In the second step of the method, the series approximation obtained in step one is used to construct a Pade approximation in the form of a rational function in the parameter epsilon. In the third step, the various powers of epsilon which appear in the Pade approximation are replaced by new (unknown) parameters (delta(sub j)). These new parameters are determined by requiring that the residual formed by substituting the new approximation into the governing differential equation is orthogonal to each of the perturbation coordinate functions used in step one. The technique is applied to model problems involving ordinary or partial differential equations. In general, the technique appears to provide good approximations to the solution even when the perturbation and Pade approximations fail to do so. The method is discussed and topics for future investigations are indicated.
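
    Step two of the technique (truncated series to rational function) is the easiest to show in isolation. Below is a [1/1] Pade approximant built from the first three series coefficients, applied to the geometric series as a test case where the approximant recovers the underlying pole exactly while the truncated series fails near it.

```python
def pade_1_1(c0, c1, c2):
    """[1/1] Pade approximant (a0 + a1*eps)/(1 + b1*eps) matching
    c0 + c1*eps + c2*eps**2 through second order."""
    b1 = -c2 / c1
    a0, a1 = c0, c1 + b1 * c0
    return lambda eps: (a0 + a1 * eps) / (1.0 + b1 * eps)

# truncated series of f(eps) = 1/(1 - eps): coefficients 1, 1, 1
approx = pade_1_1(1.0, 1.0, 1.0)
series = lambda eps: 1.0 + eps + eps ** 2

err_pade = abs(approx(0.9) - 10.0)     # f(0.9) = 10
err_series = abs(series(0.9) - 10.0)
```

    Here the Pade step alone reproduces f exactly because f is itself rational; in the hybrid technique the third, Galerkin step then retunes the powers of epsilon when the Pade form is only approximate.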

  6. Conformational Sampling in Template-Free Protein Loop Structure Modeling: An Overview

    PubMed Central

    Li, Yaohang

    2013-01-01

    Accurately modeling protein loops is an important step to predict three-dimensional structures as well as to understand functions of many proteins. Because of their high flexibility, modeling the three-dimensional structures of loops is difficult and is usually treated as a “mini protein folding problem” under geometric constraints. In the past decade, there has been remarkable progress in template-free loop structure modeling due to advances of computational methods as well as stably increasing number of known structures available in PDB. This mini review provides an overview on the recent computational approaches for loop structure modeling. In particular, we focus on the approaches of sampling loop conformation space, which is a critical step to obtain high resolution models in template-free methods. We review the potential energy functions for loop modeling, loop buildup mechanisms to satisfy geometric constraints, and loop conformation sampling algorithms. The recent loop modeling results are also summarized. PMID:24688696

  7. Conformational sampling in template-free protein loop structure modeling: an overview.

    PubMed

    Li, Yaohang

    2013-01-01

    Accurately modeling protein loops is an important step toward predicting the three-dimensional structures, as well as understanding the functions, of many proteins. Because of their high flexibility, modeling the three-dimensional structures of loops is difficult and is usually treated as a "mini protein folding problem" under geometric constraints. In the past decade, there has been remarkable progress in template-free loop structure modeling owing to advances in computational methods as well as the steadily increasing number of known structures available in the PDB. This mini review provides an overview of recent computational approaches to loop structure modeling. In particular, we focus on approaches to sampling the loop conformation space, a critical step in obtaining high-resolution models with template-free methods. We review the potential energy functions used for loop modeling, loop buildup mechanisms that satisfy geometric constraints, and loop conformation sampling algorithms. Recent loop modeling results are also summarized.

  8. Overcoming the detection bandwidth limit in precision spectroscopy: The analytical apparatus function for a stepped frequency scan

    NASA Astrophysics Data System (ADS)

    Rohart, François

    2017-01-01

    In a previous paper [Rohart et al., Phys Rev A 2014;90(042506)], the influence of detection-bandwidth properties on observed line shapes in precision spectroscopy was theoretically modeled for the first time, using the basic model of a continuous sweep of the laser frequency. Dedicated experiments confirmed the general theoretical trends but also revealed several shortcomings of the model for stepped frequency scans. Consequently, inasmuch as up-to-date experiments use step-by-step frequency-swept lasers, a new model of the influence of the detection bandwidth is developed, including a realistic timing of signal sampling and frequency changes. Using Fourier transform techniques, the resulting time-domain apparatus function takes a simple analytical form that can easily be implemented in line-shape fitting codes without any significant increase in computation time. This new model is then considered in detail for detection systems characterized by 1st- and 2nd-order bandwidths, underlining the importance of the ratio of the detection time constant to the frequency step duration, notably for the measurement of line frequencies. It also allows a straightforward analysis of the corresponding systematic deviations in retrieved line frequencies and broadenings. Finally, special attention is paid to the consequences of a finite detection bandwidth in Doppler Broadening Thermometry, namely to the experimental adjustments required for a spectroscopic determination of the Boltzmann constant at the 1-ppm level of accuracy. In this respect, the interest of implementing a Butterworth 2nd-order filter is emphasized.

  9. A computational kinetic model of diffusion for molecular systems.

    PubMed

    Teo, Ivan; Schulten, Klaus

    2013-09-28

    Regulation of biomolecular transport in cells involves intra-protein steps like gating and passage through channels, but these steps are preceded by extra-protein steps, namely, diffusive approach and admittance of solutes. The extra-protein steps develop over a 10-100 nm length scale, typically in a highly particular environment characterized by the protein's geometry, the surrounding electrostatic field, and location. To account for the energetics and mobility of solutes in this environment at a relevant resolution, we propose a particle-based kinetic model of diffusion based on a Markov State Model framework. Prerequisite input data consist of diffusion coefficient and potential of mean force maps generated from extensive molecular dynamics simulations of proteins and their environment that sample multi-nanosecond durations. The suggested diffusion model can describe transport processes beyond microsecond duration, relevant for biological function and beyond the realm of molecular dynamics simulation. For this purpose the systems are represented by a discrete set of states specified by the positions, volumes, and surface elements of Voronoi grid cells distributed according to a density function resolving the often intricate relevant diffusion space. Validation tests carried out for generic diffusion spaces show that the model and the associated Brownian motion algorithm are viable over a large range of parameter values such as time step, diffusion coefficient, and grid density. A concrete application of the method is demonstrated for ion diffusion around and through the Escherichia coli mechanosensitive channel of small conductance, ecMscS.
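    As a toy illustration of the particle-based idea (not the authors' code, and with a regular 1D grid standing in for the Voronoi discretization), a discrete-state walk that hops between neighboring cells with probability p = D·Δt/Δx² per step reproduces the Brownian mean-square displacement 2Dt:

```python
import numpy as np

rng = np.random.default_rng(3)

def kinetic_walk(n_particles, n_steps, D, dt, dx):
    """Discrete-state Brownian walk on a regular 1D grid: each step a
    particle hops left or right with probability p = D*dt/dx**2
    (requires 2*p <= 1); a minimal stand-in for hops between grid cells."""
    p = D * dt / dx**2
    assert 2 * p <= 1.0, "time step too large for this grid spacing"
    x = np.zeros(n_particles)
    for _ in range(n_steps):
        u = rng.random(n_particles)
        x += dx * ((u < p).astype(float) - (u > 1.0 - p))
    return x

x = kinetic_walk(50_000, 200, D=1.0, dt=0.05, dx=0.5)
msd = np.mean(x**2)   # should approach 2*D*t = 2 * 1.0 * 10 = 20
```

    The stability condition on p mirrors the paper's observation that the algorithm is viable only over a certain range of time step, diffusion coefficient, and grid density.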

  10. Non-Gaussian Analysis of Turbulent Boundary Layer Fluctuating Pressure on Aircraft Skin Panels

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.; Steinwolf, Alexander

    2005-01-01

    The purpose of the study is to investigate the probability density function (PDF) of turbulent boundary layer fluctuating pressures measured on the outer sidewall of a supersonic transport aircraft and to approximate these PDFs by analytical models. Experimental flight results show that the fluctuating pressure PDFs differ from the Gaussian distribution even for standard smooth surface conditions. The PDF tails are wider and longer than those of the Gaussian model. For pressure fluctuations in front of forward-facing step discontinuities, deviations from the Gaussian model are more significant and the PDFs become asymmetrical. There is a distinct spatial pattern in the skewness and kurtosis depending on the distance upstream from the step. All characteristics related to non-Gaussian behavior are highly dependent on the distance from the step and the step height, less dependent on aircraft speed, and not dependent on the fuselage location. A Hermite polynomial transform model and a piecewise-Gaussian model fit the flight data well for both the smooth and stepped conditions. The piecewise-Gaussian approximation is additionally convenient to use once the model has been constructed.

  11. Applications of step-selection functions in ecology and conservation.

    PubMed

    Thurfjell, Henrik; Ciuti, Simone; Boyce, Mark S

    2014-01-01

    Recent progress in positioning technology facilitates the collection of massive amounts of sequential spatial data on animals. This has led to new opportunities and challenges when investigating animal movement behaviour and habitat selection. Tools like Step Selection Functions (SSFs) are relatively new powerful models for studying resource selection by animals moving through the landscape. SSFs compare environmental attributes of observed steps (the linear segment between two consecutive observations of position) with alternative random steps taken from the same starting point. SSFs have been used to study habitat selection, human-wildlife interactions, movement corridors, and dispersal behaviours in animals. SSFs also have the potential to depict resource selection at multiple spatial and temporal scales. There are several aspects of SSFs where consensus has not yet been reached such as how to analyse the data, when to consider habitat covariates along linear paths between observations rather than at their endpoints, how many random steps should be considered to measure availability, and how to account for individual variation. In this review we aim to address all these issues, as well as to highlight weak features of this modelling approach that should be developed by further research. Finally, we suggest that SSFs could be integrated with state-space models to classify behavioural states when estimating SSFs.
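    The used/available design behind SSFs is easy to sketch. Below is a hedged numpy illustration (the function names and synthetic track are ours): observed steps are summarized by their lengths and turning angles, and each observed step is paired with random alternative steps resampled from those empirical distributions. In a real analysis, the resulting matched strata would then be passed to a conditional logistic regression on habitat covariates.

```python
import numpy as np

rng = np.random.default_rng(0)

def steps_from_track(xy):
    """Step lengths and turning angles from a sequence of positions."""
    d = np.diff(xy, axis=0)
    lengths = np.hypot(d[:, 0], d[:, 1])
    headings = np.arctan2(d[:, 1], d[:, 0])
    turns = np.diff(headings)
    return lengths, turns

def random_steps(xy, n_random=10):
    """For each observed step (after the first), draw alternative endpoints
    by resampling empirical step lengths and turning angles: the
    'available' steps an SSF compares against the observed one."""
    lengths, turns = steps_from_track(xy)
    headings = np.arctan2(np.diff(xy[:, 1]), np.diff(xy[:, 0]))
    out = []
    for i in range(1, len(lengths)):           # a previous heading is needed
        L = rng.choice(lengths, n_random)      # resampled step lengths
        T = rng.choice(turns, n_random)        # resampled turning angles
        h = headings[i - 1] + T
        out.append(xy[i] + np.column_stack((L * np.cos(h), L * np.sin(h))))
    return np.array(out)                       # (n_steps - 1, n_random, 2)

track = rng.normal(size=(50, 2)).cumsum(axis=0)   # toy GPS track
avail = random_steps(track, n_random=10)
```

    How many random steps to draw per observed step is one of the open choices the review discusses; 10 is an arbitrary value here.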

  12. Determination of the nuclear level densities and radiative strength function for 43 nuclei in the mass interval 28≤A≤200

    NASA Astrophysics Data System (ADS)

    Knezevic, David; Jovancevic, Nikola; Sukhovoj, Anatoly M.; Mitsyna, Ludmila V.; Krmar, Miodrag; Cong, Vu D.; Hambsch, Franz-Josef; Oberstedt, Stephan; Revay, Zsolt; Stieghorst, Christian; Dragic, Aleksandar

    2018-03-01

    The determination of nuclear level densities and radiative strength functions is one of the most important tasks in low-energy nuclear physics. Accurate experimental values of these parameters are critical for the study of the fundamental properties of nuclear structure. A step-like structure in the dependence of the level densities ρ on the excitation energy Eex of nuclei is observed in two-step gamma cascade measurements for nuclei in the 28 ≤ A ≤ 200 mass region. This characteristic structure can be explained only if the co-existence of quasi-particles and phonons, as well as their interaction in a nucleus, is taken into account in the process of gamma-decay. Here we present a new improvement to the Dubna practical model for the determination of nuclear level densities and radiative strength functions. The new practical model guarantees a good description of the available intensities of the two-step gamma cascades, comparable to the accuracy of the experimental data.

  13. Oxygen reduction on a Pt(111) catalyst in HT-PEM fuel cells by density functional theory

    NASA Astrophysics Data System (ADS)

    Sun, Hong; Li, Jie; Almheiri, Saif; Xiao, Jianyu

    2017-08-01

    The oxygen reduction reaction plays an important role in the performance of high-temperature proton exchange membrane (HT-PEM) fuel cells. In this study, a molecular dynamics model, which is based on the density functional theory and couples the system's energy, the exchange-correlation energy functional, the charge density distribution function, and the simplified Kohn-Sham equation, was developed to simulate the oxygen reduction reaction on a Pt(111) surface. Additionally, an electrochemical reaction system on the basis of a four-electron reaction mechanism was also developed for this simulation. The reaction path of the oxygen reduction reaction, the product structure of each reaction step and the system's energy were simulated. It is found that the first step reaction of the first hydrogen ion with the oxygen molecule is the controlling step of the overall reaction. Increasing the operating temperature speeds up the first step reaction rate and slightly decreases its reaction energy barrier. Our results provide insight into the working principles of HT-PEM fuel cells.

  14. Variable selection with stepwise and best subset approaches

    PubMed Central

    2016-01-01

    While purposeful selection is performed partly by software and partly by hand, the stepwise and best subset approaches are automatically performed by software. Two R functions, stepAIC() and bestglm(), are well designed for stepwise and best subset regression, respectively. The stepAIC() function begins with a full or null model, and the method of stepwise regression can be specified in the direction argument with character values “forward”, “backward” and “both”. The bestglm() function begins with a data frame containing explanatory variables and the response variable; the response variable should be in the last column. A variety of goodness-of-fit criteria can be specified in the IC argument. The Bayesian information criterion (BIC) usually results in a more parsimonious model than the Akaike information criterion. PMID:27162786
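    The forward-selection-by-AIC idea that stepAIC() automates can be illustrated in a self-contained numpy sketch. This is a simplified stand-in, not the R implementation: Gaussian OLS models, AIC computed up to an additive constant, and greedy forward steps only.

```python
import numpy as np

def ols_aic(X, y):
    """AIC (up to an additive constant) for Gaussian OLS with design X."""
    n = len(y)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    k = X.shape[1] + 1                 # coefficients + error variance
    return n * np.log(rss / n) + 2 * k

def forward_aic(X, y):
    """Greedy forward selection: repeatedly add the column that lowers
    AIC the most; stop when no addition improves AIC."""
    n, p = X.shape
    chosen, best = [], ols_aic(np.ones((n, 1)), y)
    while True:
        scores = {j: ols_aic(np.column_stack(
                      [np.ones(n)] + [X[:, k] for k in chosen + [j]]), y)
                  for j in range(p) if j not in chosen}
        if not scores:
            return sorted(chosen)
        j, aic = min(scores.items(), key=lambda kv: kv[1])
        if aic >= best:
            return sorted(chosen)
        chosen.append(j)
        best = aic

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.5, size=200)
selected = forward_aic(X, y)           # should include columns 0 and 3
```

    Swapping the penalty 2*k for log(n)*k turns this into BIC-based selection, which, as noted above, tends to pick a more parsimonious model.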

  15. The SMM Model as a Boundary Value Problem Using the Discrete Diffusion Equation

    NASA Technical Reports Server (NTRS)

    Campbell, Joel

    2007-01-01

    A generalized single-step stepwise mutation model (SMM) is developed that takes into account an arbitrary initial state to a certain partial difference equation. This is solved in both the approximate continuum limit and the more exact discrete form. A time evolution model is developed for Y DNA or mtDNA that takes into account the reflective boundary modeling minimum microsatellite length and the original difference equation. A comparison is made between the more widely known continuum Gaussian model and a discrete model, which is based on modified Bessel functions of the first kind. A correction is made to the SMM model for the probability that two individuals are related that takes into account a reflecting boundary modeling minimum microsatellite length. This method is generalized to take into account the general n-step model and exact solutions are found. A new model is proposed for the step distribution.

  16. The SMM model as a boundary value problem using the discrete diffusion equation.

    PubMed

    Campbell, Joel

    2007-12-01

    A generalized single-step stepwise mutation model (SMM) is developed that takes into account an arbitrary initial state to a certain partial difference equation. This is solved in both the approximate continuum limit and the more exact discrete form. A time evolution model is developed for Y DNA or mtDNA that takes into account the reflective boundary modeling minimum microsatellite length and the original difference equation. A comparison is made between the more widely known continuum Gaussian model and a discrete model, which is based on modified Bessel functions of the first kind. A correction is made to the SMM model for the probability that two individuals are related that takes into account a reflecting boundary modeling minimum microsatellite length. This method is generalized to take into account the general n-step model and exact solutions are found. A new model is proposed for the step distribution.
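    A minimal simulation of the single-step SMM with a reflecting lower boundary (our illustrative parameters, not the paper's) recovers the continuum result that, far from the boundary, the repeat-count variance grows like μt:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_smm(n_lineages, n_gen, mu, start, min_len):
    """Single-step SMM: each generation a lineage mutates with probability
    mu, moving +/- 1 repeat unit; excursions below the minimum
    microsatellite length are reflected back above it."""
    x = np.full(n_lineages, start)
    for _ in range(n_gen):
        mutate = rng.random(n_lineages) < mu
        step = rng.choice([-1, 1], size=n_lineages)
        x = np.where(mutate, x + step, x)
        x = np.where(x < min_len, 2 * min_len - x, x)  # reflecting boundary
    return x

x = simulate_smm(20_000, 1_000, mu=0.01, start=30, min_len=5)
var = x.var()   # far from the boundary, Var ~ mu * t = 0.01 * 1000 = 10
```

    Starting the walk near the boundary instead makes the Gaussian continuum approximation fail, which is where the discrete Bessel-function solution discussed above becomes relevant.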

  17. Understanding Southern Ocean SST Trends in Historical Simulations and Observations

    NASA Astrophysics Data System (ADS)

    Kostov, Yavor; Ferreira, David; Marshall, John; Armour, Kyle

    2017-04-01

    Historical simulations with CMIP5 global climate models do not reproduce the observed 1979-2014 Southern Ocean (SO) cooling, and most ensemble members predict gradual warming around Antarctica. In order to understand this discrepancy and the mechanisms behind the SO cooling, we analyze output from 19 CMIP5 models. For each ensemble member we estimate the characteristic responses of SO SST to step changes in greenhouse gas (GHG) forcing and in the seasonal indices of the Southern Annular Mode (SAM). Using these step-response functions and linear convolution theory, we reconstruct the original CMIP5 simulations of 1979-2014 SO SST trends. We recover the CMIP5 ensemble mean trend, capture the intermodel spread, and reproduce very well the behavior of individual models. We thus suggest that GHG forcing and the SAM are major drivers of the simulated 1979-2014 SO SST trends. Consistent with the seasonal signature of the Antarctic ozone hole, our results imply that the summer (DJF) and fall (MAM) SAM exert a particularly important effect on the SO SST. In some CMIP5 models the SO SST response to SAM partially counteracts the warming due to GHG forcing, while in other ensemble members the SAM-induced SO SST trends complement the warming effect of GHG forcing. The compensation between GHG- and SAM-induced SO SST anomalies is model-dependent and is determined by multiple factors. Firstly, CMIP5 models have different characteristic SST step-response functions to SAM. Kostov et al. (2016) relate these differences to biases in the models' climatological SO temperature gradients. Secondly, many CMIP5 historical simulations underestimate the observed positive trends in the DJF and MAM seasonal SAM indices. We show that this affects the models' ability to reproduce the observed SO cooling. Last but not least, CMIP5 models differ in their SST step-response functions to GHG forcing. Understanding the diverse behavior of CMIP5 models helps shed light on the physical processes that drive SST trends in the real SO.
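    The reconstruction step is ordinary linear-response convolution. In this small numpy sketch (toy step response and forcing, not CMIP5 output), the response to an arbitrary forcing history is the step-response function convolved with the forcing increments, so a unit-step forcing must return the step response itself:

```python
import numpy as np

def convolve_step_response(step_resp, forcing):
    """Linear-response reconstruction: convolve the step-response function
    with year-to-year forcing increments (forcing assumed zero before the
    record starts)."""
    dF = np.diff(forcing, prepend=0.0)
    n = len(forcing)
    return np.array([sum(step_resp[t - s] * dF[s] for s in range(t + 1))
                     for t in range(n)])

# Sanity check: a unit-step forcing reproduces the step response itself.
G = 1.0 - np.exp(-np.arange(30) / 5.0)   # toy SST step response, 5-yr e-folding
F = np.ones(30)                          # unit step in SAM/GHG forcing
R = convolve_step_response(G, F)
```

    In the study, one such reconstruction per forcing agent (GHG and each seasonal SAM index) is summed to recover a model's simulated SST trend.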

  18. Enriching step-based product information models to support product life-cycle activities

    NASA Astrophysics Data System (ADS)

    Sarigecili, Mehmet Ilteris

    The representation and management of product information across its life-cycle requires standardized data exchange protocols. The Standard for the Exchange of Product Model Data (STEP) is such a standard and has been widely used in industry. Even though STEP-based product models are well defined and syntactically correct, populating product data according to these models is not easy because the models are large and loosely organized. Data exchange specifications (DEXs) and templates provide reorganized information models required for data exchange in specific activities of various businesses. DEXs show that it is possible to organize STEP-based product models to support different engineering activities at various stages of the product life-cycle. In this study, STEP-based models are enriched and organized to support two engineering activities: materials information declaration and tolerance analysis. Due to new environmental regulations, the substance and materials information in products has to be screened closely by manufacturing industries. This requires fast, unambiguous and complete product information exchange between the members of a supply chain. The tolerance analysis activity, on the other hand, is used to verify the functional requirements of an assembly considering worst-case (i.e., maximum and minimum) conditions for the part/assembly dimensions. Another issue with STEP-based product models is that the semantics of product data are represented implicitly. Hence, it is difficult to interpret the semantics of data for different product life-cycle phases in various application domains. OntoSTEP, developed at NIST, provides semantically enriched product models in OWL. In this thesis, we present how to interpret the GD&T specifications in STEP for tolerance analysis by utilizing OntoSTEP.

  19. Reverse engineering of aircraft wing data using a partial differential equation surface model

    NASA Astrophysics Data System (ADS)

    Huband, Jacalyn Mann

    Reverse engineering is a multi-step process used in industry to determine a production representation of an existing physical object. This representation is in the form of mathematical equations that are compatible with computer-aided design and computer-aided manufacturing (CAD/CAM) equipment. The four basic steps of the reverse engineering process are data acquisition, data separation, surface or curve fitting, and CAD/CAM production. The surface fitting step determines the design representation of the object, and thus is critical to the success or failure of the reverse engineering process. Although surface fitting methods described in the literature are used to model a variety of surfaces, they are not suitable for reversing aircraft wings. In this dissertation, we develop and demonstrate a new strategy for reversing a mathematical representation of an aircraft wing. The basis of our strategy is to take an aircraft design model and determine whether an inverse model can be derived. A candidate design model for this research is the partial differential equation (PDE) surface model, proposed by Bloor and Wilson and used in the Rapid Airplane Parameter Input Design (RAPID) tool at the NASA-LaRC Geolab. There are several basic mathematical problems involved in reversing the PDE surface model: (i) deriving a computational approximation of the surface function; (ii) determining a radial parameterization of the wing; (iii) choosing mathematical models or classes of functions for representation of the boundary functions; (iv) fitting the boundary data points by the chosen boundary functions; and (v) simultaneously solving for the axial parameterization and the derivative boundary functions. The study of the techniques to solve the above mathematical problems has culminated in a reverse PDE surface model and two reverse PDE surface algorithms. One reverse PDE surface algorithm recovers engineering design parameters for the RAPID tool from aircraft wing data, and the other generates a PDE surface model with spline boundary functions from an arbitrary set of grid points. Our numerical tests show that the reverse PDE surface model and the reverse PDE surface algorithms can be used for the reverse engineering of aircraft wing data.

  20. Branching Patterns and Stepped Leaders in an Electric-Circuit Model for Creeping Discharge

    NASA Astrophysics Data System (ADS)

    Sakaguchi, Hidetsugu; Kourkouss, Sahim M.

    2010-06-01

    We construct a two-dimensional electric-circuit model for creeping discharge. Two types of discharge, surface corona and surface leader, are modeled by a two-step function of conductance. Branched patterns of surface leaders surrounded by surface corona appear in numerical simulations. The fractal dimension of the branched discharge patterns is calculated as the voltage and capacitance are varied. We find that surface leaders often grow stepwise in time, as is observed in the stepped leaders of lightning.

  1. A Scalable Heuristic for Viral Marketing Under the Tipping Model

    DTIC Science & Technology

    2013-09-01

    removal of high-degree nodes. The rest of the paper is organized as follows. In Section 2, we provide formal definitions of the tipping model. This is...that must be activated for it to become active as well. Definition 1 (Threshold...returns a set of active nodes after one time step. Definition 2 (Activation Function) Given a threshold function, θ, an activation function Aθ maps

  2. Family Ranching and Farming: A Consensus Management Model to Improve Family Functioning and Decrease Work Stress.

    ERIC Educational Resources Information Center

    Zimmerman, Toni Schindler; Fetsch, Robert J.

    1994-01-01

    Notes that internal and external threats could squeeze ranch and farm families out of business. Offers six-step Consensus Management Model that combines strategic planning with psychoeducation/family therapy. Describes pilot test with intergenerational ranch family that indicated improvements in family functioning, including reduced stress and…

  3. Weighted Least Squares Fitting Using Ordinary Least Squares Algorithms.

    ERIC Educational Resources Information Center

    Kiers, Henk A. L.

    1997-01-01

    A general approach for fitting a model to a data matrix by weighted least squares (WLS) is studied. The approach consists of iteratively performing steps of existing algorithms for ordinary least squares fitting of the same model and is based on minimizing a function that majorizes the WLS loss function. (Author/SLD)

  4. A Bayesian Beta-Mixture Model for Nonparametric IRT (BBM-IRT)

    ERIC Educational Resources Information Center

    Arenson, Ethan A.; Karabatsos, George

    2017-01-01

    Item response models typically assume that the item characteristic (step) curves follow a logistic or normal cumulative distribution function, which are strictly monotone functions of person test ability. Such assumptions can be overly-restrictive for real item response data. We propose a simple and more flexible Bayesian nonparametric IRT model…

  5. Fast and slow responses of Southern Ocean sea surface temperature to SAM in coupled climate models

    NASA Astrophysics Data System (ADS)

    Kostov, Yavor; Marshall, John; Hausmann, Ute; Armour, Kyle C.; Ferreira, David; Holland, Marika M.

    2017-03-01

    We investigate how sea surface temperatures (SSTs) around Antarctica respond to the Southern Annular Mode (SAM) on multiple timescales. To that end we examine the relationship between SAM and SST within unperturbed preindustrial control simulations of coupled general circulation models (GCMs) included in the Climate Modeling Intercomparison Project phase 5 (CMIP5). We develop a technique to extract the response of the Southern Ocean SST (55°S-70°S) to a hypothetical step increase in the SAM index. We demonstrate that in many GCMs, the expected SST step response function is nonmonotonic in time. Following a shift to a positive SAM anomaly, an initial cooling regime can transition into surface warming around Antarctica. However, there are large differences across the CMIP5 ensemble. In some models the step response function never changes sign and cooling persists, while in other GCMs the SST anomaly crosses over from negative to positive values only 3 years after a step increase in the SAM. This intermodel diversity can be related to differences in the models' climatological thermal ocean stratification in the region of seasonal sea ice around Antarctica. Exploiting this relationship, we use observational data for the time-mean meridional and vertical temperature gradients to constrain the real Southern Ocean response to SAM on fast and slow timescales.

  6. Integrating depth functions and hyper-scale terrain analysis for 3D soil organic carbon modeling in agricultural fields at regional scale

    NASA Astrophysics Data System (ADS)

    Ramirez-Lopez, L.; van Wesemael, B.; Stevens, A.; Doetterl, S.; Van Oost, K.; Behrens, T.; Schmidt, K.

    2012-04-01

    Soil Organic Carbon (SOC) represents a key component of the global C cycle and has an important influence on the global CO2 fluxes between the terrestrial biosphere and the atmosphere. In the context of agricultural landscapes, SOC inventories are important since soil management practices have a strong influence on CO2 fluxes and SOC stocks. However, there is a lack of accurate and cost-effective methods for producing high-spatial-resolution SOC information. In this respect, our work is focused on the development of a three-dimensional modeling approach for SOC monitoring in agricultural fields. The study area comprises ~420 km2 and includes 4 of the 5 agro-geological regions of the Grand Duchy of Luxembourg. The soil dataset consists of 172 profiles (1,033 samples) which were not sampled specifically for this study; it is a combination of profile samples collected in previous soil surveys and soil profiles sampled for other research purposes. The proposed strategy comprises two main steps. In the first step, the SOC distribution within each profile (vertical distribution) is modeled. Depth functions are fitted in order to summarize the information content of the profile; using these functions, the SOC can be interpolated at any depth within a profile. The second step involves the use of contextual terrain (ConMap) features (Behrens et al., 2010). These features are based on the differences in elevation between a given point location in the landscape and its circular neighbourhoods at a given set of radii. One of the main advantages of this approach is that it allows the integration of several spatial scales (e.g. local and regional) in soil spatial analysis. In this work the ConMap features are derived from a digital elevation model of the area and are used as predictors for spatial modeling of the parameters of the depth functions fitted in the previous step. In this poster we present some preliminary results in which we analyze: i. the use of different depth functions; ii. the use of different machine learning approaches for modeling the parameters of the fitted depth functions using the ConMap features; and iii. the influence of different spatial scales on the SOC profile distribution variability. Keywords: 3D modeling, Digital soil mapping, Depth functions, Terrain analysis. Reference: Behrens, T., Schmidt, K., Zhu, A.X., Scholten, T. 2010. The ConMap approach for terrain-based digital soil mapping. European Journal of Soil Science, v. 61, p. 133-143.
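    The first (vertical) modeling step can be illustrated with the simplest common choice of depth function, an exponential decline fitted by log-linear least squares. The exponential form and the toy profile below are our assumptions for illustration; the poster itself compares several depth functions.

```python
import numpy as np

def fit_exp_depth(depths, soc):
    """Fit the depth function SOC(z) = C0 * exp(-k*z) by linear
    regression on log(SOC)."""
    A = np.column_stack((np.ones_like(depths), -depths))
    coef, *_ = np.linalg.lstsq(A, np.log(soc), rcond=None)
    return np.exp(coef[0]), coef[1]    # C0, k

def soc_at(z, C0, k):
    """Interpolate SOC at any depth within the profile."""
    return C0 * np.exp(-k * z)

z = np.array([5.0, 15.0, 30.0, 50.0, 80.0])   # sampling depths, cm
soc = 25.0 * np.exp(-0.03 * z)                 # noise-free toy profile
C0, k = fit_exp_depth(z, soc)
```

    In the full approach, the fitted parameters (here C0 and k) of every profile become the targets that the ConMap terrain features are used to predict across the landscape.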

  7. Radiometric Block Adjustment and Digital Radiometric Model Generation

    NASA Astrophysics Data System (ADS)

    Pros, A.; Colomina, I.; Navarro, J. A.; Antequera, R.; Andrinal, P.

    2013-05-01

    In this paper we present a radiometric block adjustment method that is related to geometric block adjustment and to the concept of a terrain Digital Radiometric Model (DRM) as a complement to the terrain digital elevation and surface models. A DRM, in our concept, is a function that for each ground point returns a reflectance value and a Bidirectional Reflectance Distribution Function (BRDF). In a similar way to the terrain geometric reconstruction procedure, given an image block of some terrain area, we split the DRM generation into two phases: radiometric block adjustment and DRM generation. In this paper we concentrate on the radiometric block adjustment step, but we also describe a preliminary DRM generator. In the block adjustment step, after a radiometric pre-calibration step, local atmospheric radiative-transfer parameters, and ground reflectances and BRDFs at the radiometric tie points, are estimated. This radiometric block adjustment is based on atmospheric radiative transfer (ART) models, pre-selected BRDF models and radiometric ground control points. The proposed concept is implemented and applied in an experimental campaign, and the obtained results are presented. The DRM and orthophoto mosaics are generated, showing no radiometric differences at the seam lines.

  8. [The healthy life-style as one of components of human safety].

    PubMed

    Vasendin, V N; Tchebotarkova, S A; Kobalyeva, D A

    2012-01-01

    A single-step anonymous questionnaire was administered to a sample of students at a technical university to study the prevalence of health risk factors. A very high prevalence of behavioral life-style risk factors among the students was noted. A model of the healthy life-style is considered, with emphasis on the internal and external aspects of its functioning. It is established that the particular steps taken in implementing this model are ultimately individual.

  9. Development of a targeted transgenesis strategy in highly differentiated cells: a powerful tool for functional genomic analysis.

    PubMed

    Puttini, Stefania; Ouvrard-Pascaud, Antoine; Palais, Gael; Beggah, Ahmed T; Gascard, Philippe; Cohen-Tannoudji, Michel; Babinet, Charles; Blot-Chabaud, Marcel; Jaisser, Frederic

    2005-03-16

    Functional genomic analysis is a challenging step in the so-called post-genomic field. Identification of potential targets using large-scale gene expression analysis requires functional validation to identify those that are physiologically relevant. Genetically modified cell models are often used for this purpose, allowing up- or down-regulated expression of selected targets in a well-defined and, if possible, highly differentiated cell type. However, the generation of such models remains time-consuming and expensive. To simplify this step, we developed a strategy aimed at the rapid and efficient generation of genetically modified cell lines with conditional, inducible expression of various target genes. Efficient knock-in of various constructs, called targeted transgenesis, in a locus selected for its permissiveness to the tet-inducible system, was obtained through the stimulation of site-specific homologous recombination by the meganuclease I-SceI. Our results demonstrate that targeted transgenesis in a reference inducible locus greatly facilitates the functional analysis of the selected recombinant cells. The efficient screening strategy we have designed makes it possible to automate the transfection and selection steps. Furthermore, this strategy could be applied to a variety of highly differentiated cells.

  10. A Fuzzy Goal Programming for a Multi-Depot Distribution Problem

    NASA Astrophysics Data System (ADS)

    Nunkaew, Wuttinan; Phruksaphanrat, Busaba

    2010-10-01

    A fuzzy goal programming model for solving the Multi-Depot Distribution Problem (MDDP) is proposed in this research. The proposed model is applied in the first step of the Assignment First-Routing Second (AFRS) approach. In practice, a basic transportation model is usually chosen for the assignment step, after which the Vehicle Routing Problem (VRP) model is used to compute the delivery cost in the routing step. However, the basic transportation model considers only the depot-to-customer relationship; the customer-to-customer relationship should also be considered, since it comes into play in the routing step. Both relationships are handled here using Preemptive Fuzzy Goal Programming (P-FGP). The first fuzzy goal is set on the total transportation cost and the second on a satisfactory level of the overall independence value. A case study is used to demonstrate the effectiveness of the proposed model. Results from the proposed model are compared with the basic transportation model previously used by the company. The proposed model reduces the actual delivery cost in the routing step owing to the better result in the assignment step. Defining fuzzy goals by membership functions is more realistic than using crisp values. Furthermore, the flexibility to adjust goals and an acceptable satisfactory level for the decision maker is increased, and an optimal solution can still be obtained.

  11. Population viability and connectivity of the Louisiana black bear (Ursus americanus luteolus)

    USGS Publications Warehouse

    Laufenberg, Jared S.; Clark, Joseph D.

    2014-01-01

    From April 2010 to April 2012, global positioning system (GPS) radio collars were placed on 8 female and 23 male bears ranging from 1 to 11 years of age to develop a step-selection function model to predict routes and rates of interchange. For both males and females, the probability of a step being selected increased as the distance to natural land cover and agriculture at the end of the step decreased and as distance from roads at the end of a step increased. Of 4,000 correlated random walks, the least potential interchange was between TRB and TRC and between UARB and LARB, but the relative potential for natural interchange between UARB and TRC was high. The step-selection model predicted that dispersals between the LARB and UARB populations were infrequent but possible for males and nearly nonexistent for females. No evidence of natural female dispersal between subpopulations has been documented thus far, which is also consistent with model predictions.

  12. TRUST84. Sat-Unsat Flow in Deformable Media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Narasimhan, T.N.

    1984-11-01

    TRUST84 solves for transient and steady-state flow in variably saturated deformable media in one, two, or three dimensions. It can handle porous media, fractured media, or fractured-porous media. Boundary conditions may be an arbitrary function of time. Sources or sinks may be a function of time or of potential. The theoretical model considers a general three-dimensional field of flow in conjunction with a one-dimensional vertical deformation field. The governing equation expresses the conservation of fluid mass in an elemental volume that has a constant volume of solids. Deformation of the porous medium may be nonelastic. Permeability and the compressibility coefficients may be nonlinearly related to effective stress. Relationships between permeability and saturation with pore water pressure in the unsaturated zone may be characterized by hysteresis. The relation between pore pressure change and effective stress change may be a function of saturation. The basic calculational model of the conductive heat transfer code TRUMP is applied in TRUST84 to the flow of fluids in porous media. The model combines an integrated finite difference algorithm for numerically solving the governing equation with a mixed explicit-implicit iterative scheme in which the explicit changes in potential are first computed for all elements in the system, after which implicit corrections are made only for those elements for which the stable time-step is less than the time-step being used. Time-step sizes are automatically controlled to optimize the number of iterations, to control the maximum change in potential during a time-step, and to obtain desired output information. Time derivatives, estimated on the basis of system behavior during the two previous time-steps, are used to start the iteration process and to evaluate nonlinear coefficients. Both heterogeneity and anisotropy can be handled.
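    TRUST84's mixed explicit-implicit scheme is built on explicit updates that are stable only up to a grid-dependent time-step limit. The fragment below sketches only that simplest ingredient: an explicit 1-D diffusion update whose time step is chosen automatically from the stability limit. Grid size, diffusivity, and the safety factor are illustrative; the actual code adds implicit corrections where the stable step is too restrictive.

    ```python
    import numpy as np

    # Explicit 1-D diffusion with an automatically chosen stable time step.
    # (One ingredient of codes like TRUST84, which additionally apply implicit
    # corrections for elements whose stable step is smaller than the step used.)
    D = 1.0e-2          # diffusivity (illustrative)
    L = 1.0             # domain length
    n = 51              # grid points
    dx = L / (n - 1)
    safety = 0.4
    dt = safety * dx * dx / (2.0 * D)   # explicit stability limit is dx^2 / (2 D)

    u = np.zeros(n)
    u[n // 2] = 1.0     # initial spike of "mass" mid-domain
    mass0 = u.sum()

    for _ in range(100):
        lap = np.zeros(n)
        lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]   # discrete Laplacian
        u = u + D * dt / (dx * dx) * lap             # explicit (FTCS) update

    # Away from the (fixed) boundaries, the update conserves total mass.
    ```
    
    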

  13. Models for microtubule cargo transport coupling the Langevin equation to stochastic stepping motor dynamics: Caring about fluctuations.

    PubMed

    Bouzat, Sebastián

    2016-01-01

    One-dimensional models coupling a Langevin equation for the cargo position to stochastic stepping dynamics for the motors constitute a relevant framework for analyzing multiple-motor microtubule transport. In this work we explore the consistency of these models, focusing on the effects of thermal noise. We study how to define consistent stepping and detachment rates for the motors as functions of the local forces acting on them, in such a way that the cargo velocity and run time match previously specified functions of the external load, which are set on the basis of experimental results. We show that, due to the influence of thermal fluctuations, this is not a trivial problem, even for the single-motor case. As a solution, we propose a motor stepping dynamics that incorporates memory of the motor force. This model leads to better results for single-motor transport than the approaches previously considered in the literature. Moreover, it gives a much better prediction for the stall force in the two-motor case, highly compatible with the experimental findings. We also analyze the fast fluctuations of the cargo position and the influence of viscosity, comparing the proposed model to the standard one, and we show how the differences in the single-motor dynamics propagate to multiple-motor situations. Finally, we find that the one-dimensional character of the models impedes an appropriate description of the fast fluctuations of the cargo position at small loads. We show how this problem can be solved by considering two-dimensional models.
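    A minimal member of this model class can be sketched in a few lines: the cargo obeys an overdamped Langevin equation integrated by Euler-Maruyama, and it is pulled through a linear spring by one motor that makes discrete 8-nm steps at a load-dependent rate. All parameter values are illustrative (loosely kinesin-like), not the fitted values of the paper, and the memory-based stepping rule proposed there is not implemented here.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # One cargo (overdamped Langevin) pulled by one motor with discrete steps.
    kBT   = 4.1        # pN nm, thermal energy
    gamma = 6.0e-5     # pN s/nm, cargo drag coefficient
    k     = 0.3        # pN/nm, motor-cargo linker stiffness
    step  = 8.0        # nm, motor step size
    r0    = 100.0      # 1/s, unloaded stepping rate
    F_s   = 6.0        # pN, stall force (rate falls linearly to zero at F_s)
    dt    = 1.0e-5     # s, integration time step
    T     = 2.0        # s, simulated time

    x_cargo, x_motor = 0.0, 0.0
    for _ in range(int(T / dt)):
        F = k * (x_motor - x_cargo)            # spring force (positive = forward)
        # Euler-Maruyama update of the cargo's Langevin equation
        noise = np.sqrt(2.0 * kBT * dt / gamma) * rng.standard_normal()
        x_cargo += (F / gamma) * dt + noise
        # Motor steps forward with a rate that decreases under load
        rate = r0 * max(0.0, 1.0 - max(F, 0.0) / F_s)
        if rng.random() < rate * dt:
            x_motor += step

    velocity = x_motor / T   # nm/s; cargo and motor move together on average
    ```

    The thermal-noise term already couples into the stepping rate through the fluctuating spring force, which is the source of the consistency problem the paper analyzes.
    
    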

  14. Inhibitor of the Tyrosine Phosphatase STEP Reverses Cognitive Deficits in a Mouse Model of Alzheimer's Disease

    PubMed Central

    Xu, Jian; Chatterjee, Manavi; Baguley, Tyler D.; Brouillette, Jonathan; Kurup, Pradeep; Ghosh, Debolina; Kanyo, Jean; Zhang, Yang; Seyb, Kathleen; Ononenyi, Chimezie; Foscue, Ethan; Anderson, George M.; Gresack, Jodi; Cuny, Gregory D.; Glicksman, Marcie A.; Greengard, Paul; Lam, TuKiet T.; Tautz, Lutz; Nairn, Angus C.; Ellman, Jonathan A.; Lombroso, Paul J.

    2014-01-01

    STEP (STriatal-Enriched protein tyrosine Phosphatase) is a neuron-specific phosphatase that regulates N-methyl-D-aspartate receptor (NMDAR) and α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid receptor (AMPAR) trafficking, as well as ERK1/2, p38, Fyn, and Pyk2 activity. STEP is overactive in several neuropsychiatric and neurodegenerative disorders, including Alzheimer's disease (AD). The increase in STEP activity likely disrupts synaptic function and contributes to the cognitive deficits in AD. AD mice lacking STEP have restored levels of glutamate receptors on synaptosomal membranes and improved cognitive function, results that suggest STEP as a novel therapeutic target for AD. Here we describe the first large-scale effort to identify and characterize small-molecule STEP inhibitors. We identified the benzopentathiepin 8-(trifluoromethyl)-1,2,3,4,5-benzopentathiepin-6-amine hydrochloride (known as TC-2153) as an inhibitor of STEP with an IC50 of 24.6 nM. TC-2153 represents a novel class of PTP inhibitors based upon a cyclic polysulfide pharmacophore that forms a reversible covalent bond with the catalytic cysteine in STEP. In cell-based secondary assays, TC-2153 increased tyrosine phosphorylation of STEP substrates ERK1/2, Pyk2, and GluN2B, and exhibited no toxicity in cortical cultures. Validation and specificity experiments performed in wild-type (WT) and STEP knockout (KO) cortical cells and in vivo in WT and STEP KO mice suggest specificity of inhibitors towards STEP compared to highly homologous tyrosine phosphatases. Furthermore, TC-2153 improved cognitive function in several cognitive tasks in 6- and 12-mo-old triple transgenic AD (3xTg-AD) mice, with no change in beta amyloid and phospho-tau levels. PMID:25093460

  15. Permeability and kinetic coefficients for mesoscale BCF surface step dynamics: Discrete two-dimensional deposition-diffusion equation analysis

    DOE PAGES

    Zhao, Renjie; Evans, James W.; Oliveira, Tiago J.

    2016-04-08

    Here, a discrete version of deposition-diffusion equations appropriate for description of step flow on a vicinal surface is analyzed for a two-dimensional grid of adsorption sites representing the stepped surface and explicitly incorporating kinks along the step edges. Model energetics and kinetics appropriately account for binding of adatoms at steps and kinks, distinct terrace and edge diffusion rates, and possible additional barriers for attachment to steps. Analysis of adatom attachment fluxes as well as limiting values of adatom densities at step edges for nonuniform deposition scenarios allows determination of both permeability and kinetic coefficients. Behavior of these quantities is assessed as a function of key system parameters including kink density, step attachment barriers, and the step edge diffusion rate.

  17. Modification of the nuclear landscape in the inverse problem framework using the generalized Bethe-Weizsäcker mass formula

    NASA Astrophysics Data System (ADS)

    Mavrodiev, S. Cht.; Deliyergiyev, M. A.

    We formalized the nuclear mass problem in the inverse problem framework. This approach allows us to infer the underlying model parameters from experimental observations, rather than to predict the observations from the model parameters. The inverse problem was formulated for the numerically generalized semi-empirical mass formula of Bethe and von Weizsäcker and solved step by step on the basis of the AME2012 nuclear database. The established parametrization describes the measured nuclear masses of 2564 isotopes with a maximum deviation of less than 2.6 MeV, starting from proton and neutron numbers equal to 1. The explicit form of the unknown functions in the generalized mass formula was discovered step by step using a modified least-χ2 procedure, implemented in algorithms developed by Lubomir Aleksandrov for solving nonlinear systems of equations via the Gauss-Newton method, which lets us choose the better of two functions with the same χ2. In the resulting generalized model, the corrections to the binding energy depend on nine proton (2, 8, 14, 20, 28, 50, 82, 108, 124) and ten neutron (2, 8, 14, 20, 28, 50, 82, 124, 152, 202) magic numbers, as well as on the asymptotic boundaries of their influence. The obtained results were compared with the predictions of other models.
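    The starting point the paper generalizes is the classic five-term Bethe-Weizsäcker formula. The sketch below evaluates it with one common textbook coefficient set (values in MeV), which is not the parametrization fitted in the paper; it only shows the baseline the generalized model improves upon.

    ```python
    import math

    # Classic five-term semi-empirical (Bethe-Weizsacker) mass formula.
    # Coefficients below are one standard textbook set, NOT the paper's fit.
    A_V, A_S, A_C, A_A, A_P = 15.75, 17.8, 0.711, 23.7, 11.18  # MeV

    def binding_energy(Z, N):
        """Approximate binding energy (MeV) for Z protons and N neutrons."""
        A = Z + N
        if Z % 2 == 0 and N % 2 == 0:
            delta = +A_P / math.sqrt(A)      # even-even: extra binding
        elif Z % 2 == 1 and N % 2 == 1:
            delta = -A_P / math.sqrt(A)      # odd-odd: less binding
        else:
            delta = 0.0
        return (A_V * A                       # volume term
                - A_S * A ** (2 / 3)          # surface term
                - A_C * Z * (Z - 1) / A ** (1 / 3)   # Coulomb term
                - A_A * (A - 2 * Z) ** 2 / A  # asymmetry term
                + delta)                      # pairing term

    b_fe56 = binding_energy(26, 30)
    print(round(b_fe56, 1))  # within a few MeV of the measured ~492.3 MeV for Fe-56
    ```

    Deviations of this baseline formula from measured masses, especially near the magic numbers listed above, are exactly what the generalized correction functions are fitted to remove.
    
    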

  18. Data-Based Predictive Control with Multirate Prediction Step

    NASA Technical Reports Server (NTRS)

    Barlow, Jonathan S.

    2010-01-01

    Data-based predictive control is an emerging control method that stems from Model Predictive Control (MPC). MPC computes current control action based on a prediction of the system output a number of time steps into the future and is generally derived from a known model of the system. Data-based predictive control has the advantage of deriving predictive models and controller gains from input-output data. Thus, a controller can be designed from the outputs of complex simulation code or a physical system where no explicit model exists. If the output data happens to be corrupted by periodic disturbances, the designed controller will also have the built-in ability to reject these disturbances without the need to know them. When data-based predictive control is implemented online, it becomes a version of adaptive control. One challenge of MPC is computational requirements increasing with prediction horizon length. This paper develops a closed-loop dynamic output feedback controller that minimizes a multi-step-ahead receding-horizon cost function with multirate prediction step. One result is a reduced influence of prediction horizon and the number of system outputs on the computational requirements of the controller. Another result is an emphasis on portions of the prediction window that are sampled more frequently. A third result is the ability to include more outputs in the feedback path than in the cost function.
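    The two ideas in the abstract — deriving a predictive model from input-output data alone, then minimizing a multi-step receding-horizon cost — can be sketched for a scalar linear plant. Everything here (the plant, horizon, weights) is invented for illustration; this shows the generic data-based MPC idea, not the paper's multirate formulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # --- "Unknown" plant, used only to generate input-output data -----------
    def plant(y, u):
        return 0.9 * y + 0.5 * u

    # --- Step 1: identify y[k+1] ~ a*y[k] + b*u[k] from data only -----------
    y_data, u_data = [0.0], rng.uniform(-1, 1, 200)
    for u in u_data:
        y_data.append(plant(y_data[-1], u))
    Phi = np.column_stack([y_data[:-1], u_data])
    a, b = np.linalg.lstsq(Phi, np.array(y_data[1:]), rcond=None)[0]

    # --- Step 2: receding-horizon control toward a setpoint -----------------
    N, rho, r = 10, 0.01, 1.0          # horizon, input weight, reference
    def mpc_input(y0):
        # Predictions: y_{k+j+1} = a^(j+1) y0 + sum_i a^(j-i) b u_i.
        # Solve the unconstrained least-squares problem for the input sequence.
        G = np.zeros((N, N))
        for j in range(N):
            for i in range(j + 1):
                G[j, i] = a ** (j - i) * b
        f = np.array([a ** (j + 1) * y0 for j in range(N)])
        A_ls = np.vstack([G, np.sqrt(rho) * np.eye(N)])
        b_ls = np.concatenate([r - f, np.zeros(N)])
        u_seq = np.linalg.lstsq(A_ls, b_ls, rcond=None)[0]
        return u_seq[0]                # receding horizon: apply only the first input

    y = 0.0
    for _ in range(30):
        y = plant(y, mpc_input(y))
    print(round(y, 3))
    ```

    Note how the horizon length N directly sets the size of the least-squares problem solved at every step, which is the computational-cost issue the multirate prediction step is designed to mitigate.
    
    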

  19. Arsenic (+3 Oxidation State) Methyltransferase and the Methylation of Arsenicals

    PubMed Central

    Thomas, David J.; Li, Jiaxin; Waters, Stephen B.; Xing, Weibing; Adair, Blakely M.; Drobna, Zuzana; Devesa, Vicenta; Styblo, Miroslav

    2008-01-01

    Metabolic conversion of inorganic arsenic into methylated products is a multistep process that yields mono-, di-, and trimethylated arsenicals. In recent years, it has become apparent that formation of methylated metabolites of inorganic arsenic is not necessarily a detoxification process. Intermediates and products formed in this pathway may be more reactive and toxic than inorganic arsenic. Like all metabolic pathways, understanding the pathway for arsenic methylation involves identification of each individual step in the process and the characterization of the molecules which participate in each step. Among several arsenic methyltransferases that have been identified, arsenic (+3 oxidation state) methyltransferase is the one best characterized at the genetic and functional levels. This review focuses on phylogenetic relationships in the deuterostomal lineage for this enzyme and on the relation between genotype for arsenic (+3 oxidation state) methyltransferase and phenotype for conversion of inorganic arsenic to methylated metabolites. Two conceptual models for function of arsenic (+3 oxidation state) methyltransferase which posit different roles for cellular reductants in the conversion of inorganic arsenic to methylated metabolites are compared. Although each model accurately represents some aspects of enzyme’s role in the pathway for arsenic methylation, neither model is a fully satisfactory representation of all the steps in this metabolic pathway. Additional information on the structure and function of the enzyme will be needed to develop a more comprehensive model for this pathway. PMID:17202581

  20. Stochastic approaches for time series forecasting of boron: a case study of Western Turkey.

    PubMed

    Durdu, Omer Faruk

    2010-10-01

    In the present study, seasonal and non-seasonal predictions of boron concentration time series data for the period 1996-2004 from the Büyük Menderes River in western Turkey are addressed by means of linear stochastic models. The methodology presented here is to develop adequate linear stochastic models, known as autoregressive integrated moving average (ARIMA) and multiplicative seasonal autoregressive integrated moving average (SARIMA) models, to predict boron content in the Büyük Menderes catchment. Initially, Box-Whisker plots and Kendall's tau test are used to identify trends during the study period. The measurement locations do not show a significant overall trend in boron concentrations, though marginal increasing and decreasing trends are observed for certain periods at some locations. The ARIMA modeling approach involves three steps: model identification, parameter estimation, and diagnostic checking. In the model identification step, considering the autocorrelation function (ACF) and partial autocorrelation function (PACF) of the boron data series, different ARIMA models are identified. The model giving the minimum Akaike information criterion (AIC) is selected as the best fit. The parameter estimation step indicates that the estimated model parameters are significantly different from zero. The diagnostic check step is applied to the residuals of the selected ARIMA models, and the results indicate that the residuals are independent, normally distributed, and homoscedastic. For model validation, the predictions of the best ARIMA models are compared to the observed data and show reasonably good agreement.
The comparison of the mean and variance of the 3-year (2002-2004) observed data with the predictions of the selected best models shows that the ARIMA boron models can be used safely, since their predictions preserve the basic statistics of the observed data. The ARIMA modeling approach is recommended for predicting boron concentration series of a river.
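    The identification step above can be sketched with plain numpy by fitting AR(p) candidates via ordinary least squares and picking the order with minimum AIC. The synthetic series is a known AR(2) process; a real boron series would also need the differencing and moving-average terms of a full ARIMA/SARIMA model, as in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Synthetic AR(2) series standing in for the boron data.
    n = 500
    y = np.zeros(n)
    for t in range(2, n):
        y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.standard_normal()

    def aic_ar(series, p):
        """AIC of an AR(p) model fitted by ordinary least squares."""
        m = len(series)
        Y = series[p:]
        X = np.column_stack([series[p - k: m - k] for k in range(1, p + 1)])
        coeffs, *_ = np.linalg.lstsq(X, Y, rcond=None)
        sigma2 = np.mean((Y - X @ coeffs) ** 2)
        return len(Y) * np.log(sigma2) + 2 * (p + 1)   # fit + complexity penalty

    best_p = min(range(1, 6), key=lambda p: aic_ar(y, p))
    print(best_p)   # typically 2 for this series (AIC can slightly overfit)
    ```

    The parameter-estimation and diagnostic-checking steps would then examine the fitted coefficients and the residuals of the selected order, as described in the abstract.
    
    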

  1. A step function model to evaluate the real monetary value of man-sievert with real GDP.

    PubMed

    Na, Seong H; Kim, Sun G

    2009-01-01

    For use in a cost-benefit analysis to establish optimum levels of radiation protection in Korea under the ALARA principle, we introduce a discrete step function model to evaluate the monetary value of a man-sievert in real economic terms. The model formula, which is unique and country-specific, is composed of real GDP, the nominal risk coefficient for cancer and hereditary effects, the aversion factor against radiation exposure, and average life expectancy. Unlike previous research on alpha-value assessment, we show different alpha values in real terms, differentiated with respect to the range of individual doses, which is more realistic and informative for application to radiation protection practice. GDP deflators reflect the economic situation of the society. Finally, we suggest that the Korean model can be generalized to other countries without normalizing any country-specific factors.
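    The step-function structure can be illustrated as follows: alpha rises with the individual dose band through a dose-dependent aversion factor. Every number below is hypothetical (the risk coefficient is merely an ICRP-style figure), and the way the factors are combined is an illustrative assumption, not the Korean parametrization of the paper.

    ```python
    # Sketch of a discrete step function for the monetary value of a man-sievert.
    # All values and the combination formula are hypothetical.
    GDP_PER_CAPITA = 30_000.0      # reference currency units / person / year
    RISK_COEFF = 5.7e-2            # nominal risk per sievert (ICRP-style figure)
    LIFE_EXPECTANCY = 80.0         # years

    # (lower bound of annual individual dose band in mSv, aversion factor)
    AVERSION_STEPS = [(0.0, 1.0), (1.0, 2.0), (10.0, 5.0), (50.0, 10.0)]

    def alpha(dose_msv):
        """Monetary value per man-sievert for a given individual dose band."""
        factor = 1.0
        for lower, aversion in AVERSION_STEPS:
            if dose_msv >= lower:
                factor = aversion      # keep the factor of the highest band reached
        return GDP_PER_CAPITA * RISK_COEFF * LIFE_EXPECTANCY * factor

    for d in (0.5, 5.0, 20.0):
        print(d, alpha(d))
    ```

    The discrete steps make the protection incentive jump at the band boundaries, which is the behavior the abstract contrasts with a single flat alpha value.
    
    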

  2. Stochastic analysis of particle movement over a dune bed

    USGS Publications Warehouse

    Lee, Baum K.; Jobson, Harvey E.

    1977-01-01

    Stochastic models are available that can be used to predict the transport and dispersion of bed-material sediment particles in an alluvial channel. These models are based on the proposition that the movement of a single bed-material sediment particle consists of a series of steps of random length separated by rest periods of random duration; application of the models therefore requires knowledge of the probability distributions of the step lengths, the rest periods, the elevation of particle deposition, and the elevation of particle erosion. The procedure was tested by determining distributions from bed profiles formed in a large laboratory flume with a coarse sand as the bed material. The elevation of particle deposition and the elevation of particle erosion can be considered identically distributed, and their distribution can be described by either a 'truncated Gaussian' or a 'triangular' density function. The conditional probability distribution of the rest period, given the elevation of particle deposition, closely followed the two-parameter gamma distribution. The conditional probability distribution of the step length, given the elevation of particle erosion and the elevation of particle deposition, also closely followed the two-parameter gamma density function. For a given flow, the scale and shape parameters describing the gamma probability distributions can be expressed as functions of bed elevation. (Woodard-USGS)
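    The step-and-rest model above translates directly into a simulation: each particle alternates gamma-distributed step lengths and gamma-distributed rest periods. The shape and scale parameters below are illustrative, not the flume-fitted values, and the conditioning on bed elevation is omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Particles alternate random steps and rests, both gamma-distributed.
    n_particles, n_moves = 2000, 50
    step_shape, step_scale = 2.0, 0.5    # step length ~ Gamma(2, 0.5)   (m)
    rest_shape, rest_scale = 1.5, 40.0   # rest period ~ Gamma(1.5, 40)  (s)

    steps = rng.gamma(step_shape, step_scale, (n_particles, n_moves))
    rests = rng.gamma(rest_shape, rest_scale, (n_particles, n_moves))

    distance = steps.sum(axis=1)         # total distance after n_moves steps
    time = rests.sum(axis=1)             # total elapsed time (travel time is
                                         # negligible compared with rest time)
    virtual_velocity = distance / time   # per-particle transport velocity

    print(round(distance.mean(), 2))     # ~ n_moves * shape * scale = 50.0 m
    ```

    The spread of `virtual_velocity` across particles is what produces the longitudinal dispersion of the bed-material load.
    
    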

  3. A hybrid-perturbation-Galerkin technique which combines multiple expansions

    NASA Technical Reports Server (NTRS)

    Geer, James F.; Andersen, Carl M.

    1989-01-01

    A two-step hybrid perturbation-Galerkin method for the solution of a variety of differential-equation problems is found to give better results when multiple perturbation expansions are employed. The method assumes that there is a parameter in the problem formulation and that a perturbation method can be used to construct one or more expansions in this parameter. In step one, regular and/or singular perturbation methods are used to determine the perturbation coefficient functions. The results of step one are in the form of one or more expansions, each expressed as a sum of perturbation coefficient functions multiplied by a priori known gauge functions. In step two, the classical Bubnov-Galerkin method uses the perturbation coefficient functions computed in step one to determine a set of amplitudes which replace and improve upon the gauge functions. The hybrid method has the potential of overcoming some of the drawbacks of the perturbation and Galerkin methods as applied separately, while combining some of their better features. The proposed method is applied, with two perturbation expansions in each case, to a variety of model ordinary differential equation problems, including a family of linear two-point boundary-value problems, a nonlinear two-point boundary-value problem, a quantum mechanical eigenvalue problem, and a nonlinear free oscillation problem. The results obtained from the hybrid method are compared with approximate solutions obtained by other methods, and the applicability of the hybrid method to broader problem areas is discussed.

  4. Nonperturbative evaluation for anomalous dimension in 2-dimensional O (3 ) sigma model

    NASA Astrophysics Data System (ADS)

    Calle Jimenez, Sergio; Oka, Makoto; Sasaki, Kiyoshi

    2018-06-01

    We nonperturbatively calculate the wave-function renormalization in the two-dimensional O (3 ) sigma model. It is evaluated in a box with a finite spatial extent. We determine the anomalous dimension in the finite-volume scheme through an analysis of the step-scaling function. Results are compared with a perturbative evaluation, and reasonable behavior is observed.

  5. Care satisfaction, hope, and life functioning among adults with bipolar disorder: data from the first 1000 participants in the Systematic Treatment Enhancement Program.

    PubMed

    Morris, Chad D; Miklowitz, David J; Wisniewski, Stephen R; Giese, Alexis A; Thomas, Marshall R; Allen, Michael H

    2005-01-01

    The Systematic Treatment Enhancement Program for Bipolar Disorder (STEP-BD) is designed to evaluate the longitudinal outcome of patients with bipolar disorder. The STEP-BD disease-management model is built on evidence-based practices and a collaborative care approach designed to maximize specific and nonspecific treatment mechanisms. This prospective study examined the longitudinal relationships between patients' satisfaction with care, levels of hope, and life functioning in the first 1000 patients to enter STEP-BD. The study used scores from the Care Satisfaction Questionnaire, Beck Hopelessness Scale, Range of Impaired Functioning Tool, Young Mania Rating Scale, and Montgomery-Asberg Depression Rating Scale at 5 time points during a 1-year interval. Analyses tested mediational pathways between care satisfaction, hope, and life functioning, depression, and mania using mixed-effects (random and fixed) regression models. Increases in care satisfaction were associated with decreased hopelessness (P < .01) but not related to symptoms of depression or mania. Similarly, decreased hopelessness was associated with better life functioning (P < .01) but not related to symptoms of depression or mania. Depression was independently associated with poorer life functioning (P < .0001). This study provided support for the hypothesized mediational pathway between care satisfaction, hopelessness, and life functioning. Findings suggest that providing care that maximizes patient hope may be important. By so doing, patients might overcome the learned helplessness/hopelessness that often accompanies a cyclical illness and build a realistic illness-management strategy.

  6. Interstellar Neutral Helium in the Heliosphere from IBEX Observations. V. Observations in IBEX-Lo ESA Steps 1, 2, and 3

    NASA Astrophysics Data System (ADS)

    Swaczyna, Paweł; Bzowski, Maciej; Kubiak, Marzena A.; Sokół, Justyna M.; Fuselier, Stephen A.; Galli, André; Heirtzler, David; Kucharek, Harald; McComas, David J.; Möbius, Eberhard; Schwadron, Nathan A.; Wurz, P.

    2018-02-01

    Direct-sampling observations of interstellar neutral (ISN) He by the Interstellar Boundary Explorer (IBEX) provide valuable insight into the physical state of and processes operating in the interstellar medium ahead of the heliosphere. The ISN He atom signals are observed at the four lowest ESA steps of the IBEX-Lo sensor. The observed signal is a mixture of the primary and secondary components of ISN He and H. Previously, only data from one of the ESA steps have been used. Here, we extend the analysis to data collected in the three lowest ESA steps with the strongest ISN He signal, for the observation seasons 2009–2015. The instrument sensitivity is modeled as a linear function of the atom impact speed onto the sensor’s conversion surface separately for each ESA step of the instrument. We find that the sensitivity increases from lower to higher ESA steps, but within each of the ESA steps it is a decreasing function of the atom impact speed. This result may be influenced by the hydrogen contribution, which was not included in the adopted model, but seems to exist in the signal. We conclude that the currently accepted temperature of ISN He and velocity of the Sun through the interstellar medium do not need a revision, and we sketch a plan of further data analysis aiming at investigating ISN H and a better understanding of the population of ISN He originating in the outer heliosheath.

  7. Generation of transgenic mouse model using PTTG as an oncogene.

    PubMed

    Kakar, Sham S; Kakar, Cohin

    2015-01-01

    The close physiological similarity between the mouse and the human has provided tools for understanding the biological function of particular genes in vivo by introduction or deletion of a gene of interest. Using the mouse as a model has provided a wealth of resources, knowledge, and technology, helping scientists to understand the biological functions, translocation, trafficking, and interaction of a candidate gene with other intracellular molecules, its transcriptional regulation and posttranslational modification, and the discovery of novel signaling pathways for a particular gene. Most importantly, the generation of mouse models of specific human diseases has provided a powerful tool for understanding the etiology of a disease and discovering novel therapeutics. This chapter describes in detail the step-by-step generation of a transgenic mouse model, which can help guide new investigators in developing successful models. For practical purposes, we describe the generation of a mouse model using pituitary tumor transforming gene (PTTG) as the candidate gene of interest.

  8. Research for diagnosing electronic control fault of astronomical telescope's armature winding by step signal

    NASA Astrophysics Data System (ADS)

    Zhang, Yulong; Yang, Shihai; Gu, Bozhong

    2016-10-01

    This paper puts forward an electronic fault diagnosis method for the armature winding of a large-diameter astronomical telescope, determining whether the resistance or the inductance is at fault. When an electronic fault occurs in the armature winding, a step signal is applied to the angular position input, and the outputs of five models (normal, larger resistance, smaller resistance, larger inductance, and smaller inductance) are compared to localize the fault. First, we derive the transfer function from the angular position to the armature voltage in order to analyze the armature voltage output when the angular position input is a step signal. Second, we determine the characteristics of the armature currents obtained when the armature voltage is passed through the different armature models. Finally, based on these characteristics, we design two separate diagnosis strategies for resistance and inductance. The authors use MATLAB/Simulink to model and simulate the system with the hardware parameters of the 2.5 m telescope that China and France developed cooperatively for Russia. Adding a white-noise disturbance to the armature voltage shows that the method remains feasible under a disturbance of a certain size.
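    The core idea — that resistance and inductance faults leave distinct signatures in a step response — can be sketched with a first-order R-L armature model, for which the current after a voltage step V is i(t) = (V/R)(1 - exp(-Rt/L)): the steady-state level isolates a resistance fault, the early rise an inductance fault. Component values are illustrative, not those of the 2.5 m telescope drive, and only the "larger R"/"larger L" cases of the paper's five models are shown.

    ```python
    import numpy as np

    V = 10.0                                   # V, step amplitude (illustrative)
    t = np.linspace(0.0, 0.1, 1000)            # s

    def armature_current(R, L):
        """First-order R-L step response: i(t) = (V/R) * (1 - exp(-R*t/L))."""
        return (V / R) * (1.0 - np.exp(-R * t / L))

    nominal = armature_current(R=2.0, L=0.02)

    def classify(measured):
        """Localize the fault by comparing a measured curve with the nominal one."""
        if measured[-1] < 0.8 * nominal[-1]:   # plateau V/R dropped: R increased
            return "resistance fault (higher R)"
        if measured[200] < 0.9 * nominal[200]: # same plateau, slower rise: L increased
            return "inductance fault (higher L)"
        return "nominal"

    high_R = armature_current(R=4.0, L=0.02)
    high_L = armature_current(R=2.0, L=0.04)
    print(classify(high_R), "|", classify(high_L))
    # → resistance fault (higher R) | inductance fault (higher L)
    ```

    The two thresholds play the role of the paper's separate resistance and inductance strategies; under added measurement noise they would be set with some margin.
    
    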

  9. Benchmarking Model Variants in Development of a Hardware-in-the-Loop Simulation System

    NASA Technical Reports Server (NTRS)

    Aretskin-Hariton, Eliot D.; Zinnecker, Alicia M.; Kratz, Jonathan L.; Culley, Dennis E.; Thomas, George L.

    2016-01-01

    Distributed engine control architecture presents a significant increase in complexity over traditional implementations when viewed from the perspective of system simulation and hardware design and test. Even if the overall function of the control scheme remains the same, the hardware implementation can have a significant effect on the overall system performance due to differences in the creation and flow of data between control elements. A Hardware-in-the-Loop (HIL) simulation system is under development at NASA Glenn Research Center that enables the exploration of these hardware dependent issues. The system is based on, but not limited to, the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k). This paper describes the step-by-step conversion from the self-contained baseline model to the hardware in the loop model, and the validation of each step. As the control model hardware fidelity was improved during HIL system development, benchmarking simulations were performed to verify that engine system performance characteristics remained the same. The results demonstrate the goal of the effort; the new HIL configurations have similar functionality and performance compared to the baseline C-MAPSS40k system.

  10. Connectivity among subpopulations of Louisiana black bears as estimated by a step selection function

    USGS Publications Warehouse

    Clark, Joseph D.; Jared S. Laufenberg,; Maria Davidson,; Jennifer L. Murrow,

    2015-01-01

    Habitat fragmentation is a fundamental cause of population decline and increased risk of extinction for many wildlife species; animals with large home ranges and small population sizes are particularly sensitive. The Louisiana black bear (Ursus americanus luteolus) exists only in small, isolated subpopulations as a result of land clearing for agriculture, but the relative potential for inter-subpopulation movement by Louisiana black bears has not been quantified, nor have characteristics of effective travel routes between habitat fragments been identified. We placed and monitored global positioning system (GPS) radio collars on 8 female and 23 male bears located in 4 subpopulations in Louisiana, which included a reintroduced subpopulation located between 2 of the remnant subpopulations. We compared characteristics of sequential radiolocations of bears (i.e., steps) with steps that were possible but not chosen by the bears to develop step selection function models based on conditional logistic regression. The probability of a step being selected by a bear increased as the distance to natural land cover and agriculture at the end of the step decreased and as distance from roads at the end of a step increased. To characterize connectivity among subpopulations, we used the step selection models to create 4,000 hypothetical correlated random walks for each subpopulation representing potential dispersal events to estimate the proportion that intersected adjacent subpopulations (hereafter referred to as successful dispersals). Based on the models, movement paths for males intersected all adjacent subpopulations but paths for females intersected only the most proximate subpopulations. Cross-validation and genetic and independent observation data supported our findings. Our models also revealed that successful dispersals were facilitated by a reintroduced population located between 2 distant subpopulations. 
Successful dispersals for males were dependent on natural land cover in private ownership. The addition of hypothetical 1,000-m- or 3,000-m-wide corridors between the 4 study areas had minimal effects on connectivity among subpopulations. For females, our model suggested that habitat between subpopulations would probably have to be permanently occupied for demographic rescue to occur. Thus, the establishment of stepping-stone populations, such as the reintroduced population that we studied, may be a more effective conservation measure than long corridors without a population presence in between. 
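
    The step selection machinery above lends itself to a compact sketch. The conditional-logit computation below scores one observed step against its matched available steps using the covariates named in the abstract (distance to natural land cover, agriculture, and roads at the step endpoint); the coefficients and distances are invented for illustration, not this study's fitted estimates.

```python
import math

# Hypothetical SSF coefficients (illustrative only, not this study's
# estimates). Negative values mean selection increases as the distance
# decreases; the positive road coefficient means steps ending farther
# from roads are favoured, as reported above.
BETA = {"d_natural": -0.8, "d_agriculture": -0.3, "d_road": 0.5}

def ssf_weight(step):
    """Exponential selection weight w(x) = exp(beta . x) for one step."""
    return math.exp(sum(BETA[k] * v for k, v in step.items()))

def step_probabilities(observed, available):
    """Conditional-logit probabilities within one stratum: the observed
    step plus its matched available (possible but unused) steps."""
    steps = [observed] + available
    weights = [ssf_weight(s) for s in steps]
    total = sum(weights)
    return [w / total for w in weights]

# One stratum: the used step ends nearer natural cover and agriculture
# and farther from a road than two available alternatives (distances
# in km, made up for the example).
used = {"d_natural": 0.1, "d_agriculture": 0.2, "d_road": 1.0}
avail = [{"d_natural": 1.0, "d_agriculture": 0.5, "d_road": 0.2},
         {"d_natural": 0.8, "d_agriculture": 1.0, "d_road": 0.4}]
probs = step_probabilities(used, avail)
```

    Fitting the coefficients from telemetry data is a conditional logistic regression (one likelihood term per stratum); hypothetical correlated random walks are then generated by repeatedly sampling candidate steps in proportion to these weights.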

  11. Decomposition of timed automata for solving scheduling problems

    NASA Astrophysics Data System (ADS)

    Nishi, Tatsushi; Wakatake, Masato

    2014-03-01

    A decomposition algorithm for scheduling problems based on a timed automata (TA) model is proposed. The problem is represented as an optimal state transition problem for TA. The model comprises the parallel composition of submodels such as jobs and resources. The procedure of the proposed methodology can be divided into two steps. The first step is to decompose the TA model into several submodels using a decomposability condition. The second step is to combine the individual solutions of the subproblems for the decomposed submodels by the penalty function method. A feasible solution for the entire model is derived by iteratively solving the subproblem for each submodel. The proposed methodology is applied to solve flowshop and jobshop scheduling problems. Computational experiments demonstrate the effectiveness of the proposed algorithm compared with a conventional TA scheduling algorithm without decomposition.
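
    The two-step scheme (decompose, then reconcile subproblem solutions through a penalty function) can be illustrated with a toy numeric analogue. Here two 1-D quadratic subproblems stand in for the decomposed submodels, and a quadratic penalty on their disagreement is tightened until the subsolutions agree; all numbers are invented for illustration and do not come from the paper.

```python
# Two decomposed 1-D quadratic subproblems (stand-ins for the job and
# resource submodels) are solved independently; a quadratic penalty
# rho * (x - y)^2 on their disagreement is tightened until the
# subsolutions agree on a common value.

def solve_sub(a, rho, other):
    """Closed-form argmin_x of (x - a)^2 + rho * (x - other)^2."""
    return (a + rho * other) / (1.0 + rho)

def coordinate(a1=1.0, a2=3.0, iters=50):
    x, y, rho = a1, a2, 1.0
    for _ in range(iters):
        x = solve_sub(a1, rho, y)   # subproblem 1 with y held fixed
        y = solve_sub(a2, rho, x)   # subproblem 2 with x held fixed
        rho *= 1.5                  # tighten the consistency penalty
    return x, y

x_final, y_final = coordinate()
# The two subsolutions converge to a common value between the two
# unconstrained subproblem optima.
```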

  12. Derivation of a variational principle for plane strain elastic-plastic silk biopolymers

    NASA Astrophysics Data System (ADS)

    He, J. H.; Liu, F. J.; Cao, J. H.; Zhang, L.

    2014-01-01

    Silk biopolymers, such as spider silk and Bombyx mori silk, always behave elastic-plastically. An elastic-plastic model is adopted, and a variational principle for the small-strain, rate plasticity problem is established by the semi-inverse method. A trial Lagrangian containing an unknown function is constructed; the unknown function can then be identified step by step.

  13. Number development and developmental dyscalculia.

    PubMed

    von Aster, Michael G; Shalev, Ruth S

    2007-11-01

    There is a growing consensus that the neuropsychological underpinnings of developmental dyscalculia (DD) are a genetically determined disorder of 'number sense', a term denoting the ability to represent and manipulate numerical magnitude nonverbally on an internal number line. However, this spatially-oriented number line develops during elementary school and requires additional cognitive components including working memory and number symbolization (language). Thus, there may be children with familial-genetic DD with deficits limited to number sense and others with DD and comorbidities such as language delay, dyslexia, or attention-deficit-hyperactivity disorder. This duality is supported by epidemiological data indicating that two-thirds of children with DD have comorbid conditions while one-third have pure DD. Clinically, they differ according to their profile of arithmetic difficulties. fMRI studies indicate that parietal areas (important for number functions), and frontal regions (dominant for executive working memory and attention functions), are under-activated in children with DD. A four-step developmental model that allows prediction of different pathways for DD is presented. The core-system representation of numerical magnitude (cardinality; step 1) provides the meaning of 'number', a precondition to acquiring linguistic (step 2), and Arabic (step 3) number symbols, while a growing working memory enables neuroplastic development of an expanding mental number line during school years (step 4). Therapeutic and educational interventions can be drawn from this model.

  14. Living environment and mobility of older adults.

    PubMed

    Cress, M Elaine; Orini, Stefania; Kinsler, Laura

    2011-01-01

    Older adults often elect to move into smaller living environments. Smaller living space and the addition of services provided by a retirement community (RC) may make living easier for the individual, but it may also reduce the amount of daily physical activity and ultimately reduce functional ability. With home size as an independent variable, the primary purpose of this study was to evaluate daily physical activity and physical function of community dwellers (CD; n = 31) as compared to residents of an RC (n = 30). In this cross-sectional study design, assessments included: the Continuous Scale Physical Functional Performance - 10 test, with a possible range of 0-100, higher scores reflecting better function; the Step Activity Monitor (StepWatch 3.1); a physical activity questionnaire; and the area of the home (in square meters). Groups were compared by one-way ANOVA. A general linear regression model was used to predict the number of steps per day at home. The level of significance was p < 0.05. Of the 61 volunteers (mean age: 79 ± 6.3 years; range: 65-94 years), the RC living space (68 ± 37.7 m²) was 62% smaller than the CD living space (182.8 ± 77.9 m²; p = 0.001). After correcting for age, the RC residents took fewer total steps per day excluding exercise (p = 0.03) and had lower function (p = 0.005) than the CD. On average, RC residents took approximately 3,000 fewer steps per day and had approximately 60% less living space than CD residents. Home size and physical function were primary predictors of the number of steps taken at home, as found using a general linear regression analysis. Copyright © 2010 S. Karger AG, Basel.

  15. Modelling to very high strains

    NASA Astrophysics Data System (ADS)

    Bons, P. D.; Jessell, M. W.; Griera, A.; Evans, L. A.; Wilson, C. J. L.

    2009-04-01

    Ductile strains in shear zones often reach extreme values, resulting in typical structures, such as winged porphyroclasts and several types of shear bands. The numerical simulation of the development of such structures has so far been inhibited by the low maximum strains that numerical models can normally achieve. Typical numerical models collapse at shear strains on the order of one to three. We have implemented a number of new functionalities in the numerical platform "Elle" (Jessell et al. 2001), which significantly increase the amount of strain that can be achieved and simultaneously reduce boundary effects that become increasingly disturbing at higher strain. Constant remeshing, while maintaining the polygonal phase regions, is the first step to avoid collapse of the finite-element grid required by finite-element solvers, such as Basil (Houseman et al. 2008). The second step is to apply a grain-growth routine to the boundaries of polygons that represent phase regions. This way, the development of sharp angles is avoided. A second advantage is that phase regions may merge or become separated (boudinage). Such topological changes are normally not possible in finite-element deformation codes. The third step is the use of wrapping vertical model boundaries, with which optimal and unchanging model boundaries are maintained for the application of stress or velocity boundary conditions. The fourth step is to shift the model by a random amount in the vertical direction every time step. This way, the fixed horizontal boundary conditions are applied to different material points within the model every time step. Disturbing boundary effects are thus averaged out over the whole model and not localised to, e.g., the top and bottom of the model. Reduction of boundary effects has the additional advantage that the model can be smaller and, therefore, numerically more efficient.
Owing to the combination of these existing and new functionalities it is now possible to simulate the development of very high-strain structures. Jessell, M.W., Bons, P.D., Evans, L., Barr, T., Stüwe, K. 2001. Elle: a micro-process approach to the simulation of microstructures. Computers & Geosciences 27, 17-30. Houseman, G., Barr, T., Evans, L. 2008. Basil: stress and deformation in a viscous material. In: P.D. Bons, D. Koehn & M.W.Jessell (Eds.) Microdynamics Simulation. Lecture Notes in Earth Sciences 106, Springer, Berlin, 405p.

  16. Nonadiabatic Dynamics in Single-Electron Tunneling Devices with Time-Dependent Density-Functional Theory

    NASA Astrophysics Data System (ADS)

    Dittmann, Niklas; Splettstoesser, Janine; Helbig, Nicole

    2018-04-01

    We simulate the dynamics of a single-electron source, modeled as a quantum dot with on-site Coulomb interaction and tunnel coupling to an adjacent lead in time-dependent density-functional theory. Based on this system, we develop a time-nonlocal exchange-correlation potential by exploiting analogies with quantum-transport theory. The time nonlocality manifests itself in a dynamical potential step. We explicitly link the time evolution of the dynamical step to physical relaxation timescales of the electron dynamics. Finally, we discuss prospects for simulations of larger mesoscopic systems.

  17. Nonadiabatic Dynamics in Single-Electron Tunneling Devices with Time-Dependent Density-Functional Theory.

    PubMed

    Dittmann, Niklas; Splettstoesser, Janine; Helbig, Nicole

    2018-04-13

    We simulate the dynamics of a single-electron source, modeled as a quantum dot with on-site Coulomb interaction and tunnel coupling to an adjacent lead in time-dependent density-functional theory. Based on this system, we develop a time-nonlocal exchange-correlation potential by exploiting analogies with quantum-transport theory. The time nonlocality manifests itself in a dynamical potential step. We explicitly link the time evolution of the dynamical step to physical relaxation timescales of the electron dynamics. Finally, we discuss prospects for simulations of larger mesoscopic systems.

  18. Stimulated Brillouin scattering continuous wave phase conjugation in step-index fiber optics.

    PubMed

    Massey, Steven M; Spring, Justin B; Russell, Timothy H

    2008-07-21

    Continuous wave (CW) stimulated Brillouin scattering (SBS) phase conjugation in step-index optical fibers was studied experimentally and modeled as a function of fiber length. A phase conjugate fidelity over 80% was measured from SBS in a 40 m fiber using a pinhole technique. Fidelity decreases with fiber length, and a fiber with a numerical aperture (NA) of 0.06 was found to generate good phase conjugation fidelity over longer lengths than a fiber with 0.13 NA. Modeling and experiment support previous work showing that the maximum interaction length that yields a high-fidelity phase conjugate beam is inversely proportional to the fiber NA², but find that fidelity remains high over much longer fiber lengths than previous models calculated. Conditions for SBS beam cleanup in step-index fibers are discussed.

  19. A critical comparison of several low Reynolds number k-epsilon turbulence models for flow over a backward facing step

    NASA Technical Reports Server (NTRS)

    Steffen, C. J., Jr.

    1993-01-01

    Turbulent backward-facing step flow was examined using four low turbulent Reynolds number k-epsilon models and one standard high Reynolds number technique. A tunnel configuration of 1:9 (step height: exit tunnel height) was used. The models tested include: the original Jones and Launder; Chien; Launder and Sharma; and the recent Shih and Lumley formulation. The experimental reference of Driver and Seegmiller was used to make detailed comparisons between reattachment length, velocity, pressure, turbulent kinetic energy, Reynolds shear stress, and skin friction predictions. The results indicated that the use of a wall function for the standard k-epsilon technique did not reduce the calculation accuracy for this separated flow when compared to the low turbulent Reynolds number techniques.

  20. Non-Gaussian PDF Modeling of Turbulent Boundary Layer Fluctuating Pressure Excitation

    NASA Technical Reports Server (NTRS)

    Steinwolf, Alexander; Rizzi, Stephen A.

    2003-01-01

    The purpose of the study is to investigate properties of the probability density function (PDF) of turbulent boundary layer fluctuating pressures measured on the exterior of a supersonic transport aircraft. It is shown that fluctuating pressure PDFs differ from the Gaussian distribution even for surface conditions having no significant discontinuities. The PDF tails are wider and longer than those of the Gaussian model. For pressure fluctuations upstream of forward-facing step discontinuities and downstream of aft-facing step discontinuities, deviations from the Gaussian model are more significant and the PDFs become asymmetrical. Various analytical PDF distributions are used and further developed to model this behavior.
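
    As a numerical illustration of the deviations described above, the sketch below estimates skewness and excess kurtosis, the two moments that distinguish a Gaussian pressure record from the wider-tailed, asymmetric records seen near step discontinuities. The "measured" records are synthetic (Gaussian, and Gaussian plus an exponential tail), purely for demonstration; they are not flight data.

```python
import random

random.seed(3)

def skew_kurt(xs):
    """Sample skewness and excess kurtosis (population-moment form):
    both are zero for an exactly Gaussian record."""
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n
    s3 = sum((x - m) ** 3 for x in xs) / n
    s4 = sum((x - m) ** 4 for x in xs) / n
    return s3 / s2 ** 1.5, s4 / s2 ** 2 - 3.0

# Synthetic surface-pressure records, 20,000 samples each.
gaussian_record = [random.gauss(0.0, 1.0) for _ in range(20000)]
skewed_record = [random.gauss(0.0, 1.0) + random.expovariate(1.0)
                 for _ in range(20000)]

g_skew, g_kurt = skew_kurt(gaussian_record)   # both near zero
s_skew, s_kurt = skew_kurt(skewed_record)     # positive skew and kurtosis
```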

  1. Associations between cognitive and gait performance during single- and dual-task walking in people with Parkinson disease.

    PubMed

    Stegemöller, Elizabeth L; Wilson, Jonathan P; Hazamy, Audrey; Shelley, Mack C; Okun, Michael S; Altmann, Lori J P; Hass, Chris J

    2014-06-01

    Cognitive impairments in Parkinson disease (PD) manifest as deficits in speed of processing, working memory, and executive function and attention abilities. The gait impairment in PD is well documented to include reduced speed, shortened step lengths, and increased step-to-step variability. However, there is a paucity of research examining the relationship between overground walking and cognitive performance in people with PD. This study sought to examine the relationship between both the mean and variability of gait spatiotemporal parameters and cognitive performance across a broad range of cognitive domains. A cross-sectional design was used. Thirty-five participants with no dementia and diagnosed with idiopathic PD completed a battery of 12 cognitive tests that yielded 3 orthogonal factors: processing speed, working memory, and executive function and attention. Participants completed 10 trials of overground walking (single-task walking) and 5 trials of overground walking while counting backward by 3's (dual-task walking). All gait measures were impaired by the dual task. Cognitive processing speed correlated with stride length and walking speed. Executive function correlated with step width variability. There were no significant associations with working memory. Regression models relating speed of processing to gait spatiotemporal variables revealed that including dual-task costs in the model significantly improved the fit of the model. Participants with PD were tested only in the on-medication state. Different characteristics of gait are related to distinct types of cognitive processing, which may be differentially affected by dual-task walking due to the pathology of PD. © 2014 American Physical Therapy Association.

  2. Proposed hardware architectures of particle filter for object tracking

    NASA Astrophysics Data System (ADS)

    Abd El-Halym, Howida A.; Mahmoud, Imbaby Ismail; Habib, SED

    2012-12-01

    In this article, efficient hardware architectures for particle filter (PF) are presented. We propose three different architectures for Sequential Importance Resampling Filter (SIRF) implementation. The first architecture is a two-step sequential PF machine, where particle sampling, weight, and output calculations are carried out in parallel during the first step, followed by sequential resampling in the second step. For the weight computation step, a piecewise linear function is used instead of the classical exponential function. This decreases the complexity of the architecture without degrading the results. The second architecture speeds up the resampling step via a parallel, rather than a serial, architecture. This second architecture targets a balance between hardware resources and the speed of operation. The third architecture implements the SIRF as a distributed PF composed of several processing elements and a central unit. All the proposed architectures are captured in VHDL, synthesized using the Xilinx environment, and verified using the ModelSim simulator. Synthesis results confirmed the resource reduction and speed-up advantages of our architectures.
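
    A software reference model makes the hardware steps concrete. The minimal sketch below runs the same sample-weight-output-resample loop for a 1-D random-walk state observed in Gaussian noise; the state model and noise levels are illustrative assumptions, and the exponential weight is kept (the article's hardware replaces it with a piecewise-linear approximation).

```python
import math
import random

random.seed(1)

def sir_step(particles, z, q=0.5, r=0.5):
    """One SIRF iteration: sample (propagate), weight, output, resample."""
    # 1. Sampling: propagate each particle through the motion model.
    particles = [x + random.gauss(0.0, q) for x in particles]
    # 2. Weighting: Gaussian likelihood of measurement z. (The hardware
    #    design approximates this exp() with a piecewise-linear function.)
    w = [math.exp(-(z - x) ** 2 / (2.0 * r * r)) for x in particles]
    total = sum(w) or 1.0
    w = [wi / total for wi in w]
    # 3. Output: weighted-mean state estimate.
    estimate = sum(wi * xi for wi, xi in zip(w, particles))
    # 4. Resampling (stratified): duplicate high-weight particles.
    n = len(particles)
    positions = [(i + random.random()) / n for i in range(n)]
    cumw, j, resampled = w[0], 0, []
    for p in positions:
        while p > cumw and j < n - 1:
            j += 1
            cumw += w[j]
        resampled.append(particles[j])
    return resampled, estimate

# Track a constant true state of 2.0 from noisy measurements.
particles = [random.uniform(-5.0, 5.0) for _ in range(500)]
est = None
for _ in range(30):
    z = 2.0 + random.gauss(0.0, 0.5)
    particles, est = sir_step(particles, z)
# est converges toward the true state 2.0
```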

  3. Kinematic Structural Modelling in Bayesian Networks

    NASA Astrophysics Data System (ADS)

    Schaaf, Alexander; de la Varga, Miguel; Florian Wellmann, J.

    2017-04-01

    We commonly capture our knowledge about the spatial distribution of distinct geological lithologies in the form of 3-D geological models. Several methods exist to create these models, each with its own strengths and limitations. We present here an approach to combine the functionalities of two modeling approaches - implicit interpolation and kinematic modelling methods - into one framework, while explicitly considering parameter uncertainties and thus model uncertainty. In recent work, we proposed an approach to implement implicit modelling algorithms into Bayesian networks. This was done to address the issues of input data uncertainty and integration of geological information from varying sources in the form of geological likelihood functions. However, one general shortcoming of implicit methods is that they usually do not take any physical constraints into consideration, which can result in unrealistic model outcomes and artifacts. On the other hand, kinematic structural modelling intends to reconstruct the history of a geological system based on physically driven kinematic events. This type of modelling incorporates simplified physical laws into the model, at the cost of a substantial increase in the number of uncertain parameters. In the work presented here, we show an integration of these two different modelling methodologies, taking advantage of the strengths of both of them. First, we treat the two types of models separately, capturing the information contained in the kinematic models and their specific parameters in the form of likelihood functions, in order to use them in the implicit modelling scheme. We then go further and combine the two modelling approaches into one single Bayesian network. This enables the direct flow of information between the parameters of the kinematic modelling step and the implicit modelling step and links the exclusive input data and likelihoods of the two different modelling algorithms into one probabilistic inference framework. 
In addition, we use the capabilities of Noddy to analyze the topology of structural models to demonstrate how topological information, such as the connectivity of two layers across an unconformity, can be used as a likelihood function. In an application to a synthetic case study, we show that our approach leads to a successful combination of the two different modelling concepts. Specifically, we show that we derive ensemble realizations of implicit models that now incorporate the knowledge of the kinematic aspects, representing an important step forward in the integration of knowledge and a corresponding estimation of uncertainties in structural geological models.

  4. Auxotonic to isometric contraction transitioning in a beating heart causes myosin step-size to down shift

    PubMed Central

    Sun, Xiaojing; Wang, Yihua; Ajtai, Katalin

    2017-01-01

    Myosin motors in cardiac ventriculum convert ATP free energy to the work of moving blood volume under pressure. The actin bound motor cyclically rotates its lever-arm/light-chain complex linking motor generated torque to the myosin filament backbone and translating actin against resisting force. Previous research showed that the unloaded in vitro motor is described with high precision by single molecule mechanical characteristics including unitary step-sizes of approximately 3, 5, and 8 nm and their relative step-frequencies of approximately 13, 50, and 37%. The 3 and 8 nm unitary step-sizes are dependent on myosin essential light chain (ELC) N-terminus actin binding. Step-size and step-frequency quantitation specifies in vitro motor function including duty-ratio, power, and strain sensitivity metrics. In vivo, motors integrated into the muscle sarcomere form the more complex and hierarchically functioning muscle machine. The goal of the research reported here is to measure single myosin step-size and step-frequency in vivo to assess how tissue integration impacts motor function. A photoactivatable GFP tags the ventriculum myosin lever-arm/light-chain complex in the beating heart of a live zebrafish embryo. Detected single GFP emission reports time-resolved myosin lever-arm orientation interpreted as step-size and step-frequency providing single myosin mechanical characteristics over the active cycle. Following step-frequency of cardiac ventriculum myosin transitioning from low to high force in relaxed to auxotonic to isometric contraction phases indicates that the imposition of resisting force during contraction causes the motor to down-shift to the 3 nm step-size accounting for >80% of all the steps in the near-isometric phase. At peak force, the ATP initiated actomyosin dissociation is the predominant strain inhibited transition in the native myosin contraction cycle. 
The proposed model for motor down-shifting and strain sensing involves ELC N-terminus actin binding. Overall, the approach is a unique bottom-up single molecule mechanical characterization of a hierarchically functional native muscle myosin. PMID:28423017

  5. A class of all digital phase locked loops - Modeling and analysis

    NASA Technical Reports Server (NTRS)

    Reddy, C. P.; Gupta, S. C.

    1973-01-01

    An all-digital phase-locked loop that tracks the phase of the incoming signal once per carrier cycle is proposed. The different elements and their functions, and the phase-lock operation, are explained in detail. The general digital loop operation is governed by a nonlinear difference equation from which a suitable model is developed. The lock range for the general model is derived. The performance of the digital loop for phase-step and frequency-step inputs at different levels of quantization, without a loop filter, is studied. The analytical results are checked by simulating the actual system on a digital computer.
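
    A toy difference-equation model shows the once-per-cycle tracking idea. The loop below applies one quantized (bang-bang) phase correction per carrier cycle in response to a phase-step input; the correction step size is an illustrative choice, not the paper's model.

```python
# Phase error of a toy first-order all-digital PLL after a sudden input
# phase step. Each carrier cycle the loop applies one quantized
# correction of size DELTA; the residual error after lock is bounded
# by one quantization step.
DELTA = 0.05   # quantization step of the phase correction (rad)

def track_phase_step(phase_step=1.0, cycles=100):
    err = phase_step          # input phase jumps at cycle 0
    history = [err]
    for _ in range(cycles):
        if err > 0.0:         # once-per-cycle bang-bang correction
            err -= DELTA
        elif err < 0.0:
            err += DELTA
        history.append(err)
    return history

hist = track_phase_step()
# After about phase_step / DELTA cycles the loop locks and the error
# dithers within one quantization step of zero.
```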

  6. Modeling of protein binary complexes using structural mass spectrometry data

    PubMed Central

    Kamal, J.K. Amisha; Chance, Mark R.

    2008-01-01

    In this article, we describe a general approach to modeling the structure of binary protein complexes using structural mass spectrometry data combined with molecular docking. In the first step, hydroxyl radical mediated oxidative protein footprinting is used to identify residues that experience conformational reorganization due to binding or participate in the binding interface. In the second step, a three-dimensional atomic structure of the complex is derived by computational modeling. Homology modeling approaches are used to define the structures of the individual proteins if footprinting detects significant conformational reorganization as a function of complex formation. A three-dimensional model of the complex is constructed from these binary partners using the ClusPro program, which is composed of docking, energy filtering, and clustering steps. Footprinting data are used to incorporate constraints—positive and/or negative—in the docking step and are also used to decide the type of energy filter—electrostatics or desolvation—in the successive energy-filtering step. By using this approach, we examine the structure of a number of binary complexes of monomeric actin and compare the results to crystallographic data. Based on docking alone, a number of competing models with widely varying structures are observed, one of which is likely to agree with crystallographic data. When the docking steps are guided by footprinting data, accurate models emerge as top scoring. We demonstrate this method with the actin/gelsolin segment-1 complex. We also provide a structural model for the actin/cofilin complex using this approach which does not have a crystal or NMR structure. PMID:18042684

  7. Steps in the bacterial flagellar motor.

    PubMed

    Mora, Thierry; Yu, Howard; Sowa, Yoshiyuki; Wingreen, Ned S

    2009-10-01

    The bacterial flagellar motor is a highly efficient rotary machine used by many bacteria to propel themselves. It has recently been shown that at low speeds its rotation proceeds in steps. Here we propose a simple physical model, based on the storage of energy in protein springs, that accounts for this stepping behavior as a random walk in a tilted corrugated potential that combines torque and contact forces. We argue that the absolute angular position of the rotor is crucial for understanding step properties and show this hypothesis to be consistent with the available data, in particular the observation that backward steps are smaller on average than forward steps. We also predict a sublinear speed versus torque relationship for fixed load at low torque, and a peak in rotor diffusion as a function of torque. Our model provides a comprehensive framework for understanding and analyzing stepping behavior in the bacterial flagellar motor and proposes novel, testable predictions. More broadly, the storage of energy in protein springs by the flagellar motor may provide useful general insights into the design of highly efficient molecular machines.
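
    The proposed picture, a random walk in a tilted corrugated potential, is easy to prototype. The Metropolis sketch below uses V(θ) = -τθ + A·cos(Nθ); the 26-fold corrugation, torque, and noise scale are all illustrative assumptions, not fitted values from the paper.

```python
import math
import random

random.seed(0)

TORQUE, A, N_CORR = 3.0, 0.5, 26   # illustrative parameters, kT units

def V(theta):
    """Tilted corrugated potential: torque tilt plus periodic contacts."""
    return -TORQUE * theta + A * math.cos(N_CORR * theta)

def simulate(moves=20000, dtheta=0.02, kT=1.0):
    theta = 0.0
    for _ in range(moves):
        trial = theta + random.choice((-dtheta, dtheta))
        dE = V(trial) - V(theta)
        # Metropolis acceptance: downhill always, uphill with the
        # Boltzmann factor exp(-dE / kT).
        if dE <= 0.0 or random.random() < math.exp(-dE / kT):
            theta = trial
    return theta

final_angle = simulate()
# The tilt biases barrier crossings forward, giving net rotation with
# occasional backward excursions.
```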

  8. First Steps in Computational Systems Biology: A Practical Session in Metabolic Modeling and Simulation

    ERIC Educational Resources Information Center

    Reyes-Palomares, Armando; Sanchez-Jimenez, Francisca; Medina, Miguel Angel

    2009-01-01

    A comprehensive understanding of biological functions requires new systemic perspectives, such as those provided by systems biology. Systems biology approaches are hypothesis-driven and involve iterative rounds of model building, prediction, experimentation, model refinement, and development. Developments in computer science are allowing for ever…

  9. A model-free method for mass spectrometer response correction. [for oxygen consumption and cardiac output calculation

    NASA Technical Reports Server (NTRS)

    Shykoff, Barbara E.; Swanson, Harvey T.

    1987-01-01

    A new method for correction of mass spectrometer output signals is described. Response-time distortion is reduced independently of any model of mass spectrometer behavior. The delay of the system is found first from the cross-correlation function of a step change and its response. A two-sided time-domain digital correction filter (deconvolution filter) is generated next from the same step response data using a regression procedure. Other data are corrected using the filter and delay. The mean squared error between a step response and a step is reduced considerably more after the use of a deconvolution filter than after the application of a second-order model correction. O2 consumption and CO2 production values calculated from data corrupted by a simulated dynamic process return to near the uncorrupted values after correction. Although a clean step response or the ensemble average of several responses contaminated with noise is needed for the generation of the filter, random noise of magnitude not above 0.5 percent added to the response to be corrected does not impair the correction severely.
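
    The filter-generation step can be sketched numerically: simulate a first-order-lag "instrument" step response, then regress a short two-sided FIR filter so that filtering the response reproduces an ideal step. The first-order instrument model, filter length, and delay alignment are assumptions for illustration, not the paper's apparatus.

```python
import math

def lag_step_response(n, tau=5.0):
    """Simulated instrument output for a unit step applied at k = 0."""
    return [1.0 - math.exp(-k / tau) for k in range(n)]

def fit_deconv_filter(resp, target, half=1):
    """Least-squares two-sided FIR h minimizing sum_k (h*resp - target)^2,
    via the normal equations and Gaussian elimination."""
    n, taps = len(resp), 2 * half + 1
    def x(k, j):                        # advanced/lagged sample, edge-padded
        i = k - (j - half)
        return resp[min(max(i, 0), n - 1)]
    ata = [[sum(x(k, a) * x(k, b) for k in range(n)) for b in range(taps)]
           for a in range(taps)]
    atb = [sum(x(k, a) * target[k] for k in range(n)) for a in range(taps)]
    for c in range(taps):               # elimination with partial pivoting
        p = max(range(c, taps), key=lambda r: abs(ata[r][c]))
        ata[c], ata[p] = ata[p], ata[c]
        atb[c], atb[p] = atb[p], atb[c]
        for r in range(c + 1, taps):
            f = ata[r][c] / ata[c][c]
            for cc in range(c, taps):
                ata[r][cc] -= f * ata[c][cc]
            atb[r] -= f * atb[c]
    h = [0.0] * taps
    for c in reversed(range(taps)):     # back substitution
        h[c] = (atb[c] - sum(ata[c][cc] * h[cc]
                             for cc in range(c + 1, taps))) / ata[c][c]
    return h

def apply_filter(h, resp):
    half, n = len(h) // 2, len(resp)
    pad = lambda i: resp[min(max(i, 0), n - 1)]
    return [sum(h[j] * pad(k - (j - half)) for j in range(len(h)))
            for k in range(n)]

n = 80
resp = lag_step_response(n)
step = [1.0] * n                        # ideal (delay-aligned) step
h = fit_deconv_filter(resp, step)
corrected = apply_filter(h, resp)
mse_before = sum((r - s) ** 2 for r, s in zip(resp, step)) / n
mse_after = sum((c - s) ** 2 for c, s in zip(corrected, step)) / n
```

    As in the paper's procedure, the filter is fit once from a step response and then applied to other data; a cross-correlation delay estimate would precede this fit on real records.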

  10. Validation of a multi-criteria evaluation model for animal welfare.

    PubMed

    Martín, P; Czycholl, I; Buxadé, C; Krieter, J

    2017-04-01

    The aim of this paper was to validate an alternative multi-criteria evaluation system to assess animal welfare on farms based on the Welfare Quality® (WQ) project, using an example of welfare assessment of growing pigs. This alternative methodology aimed to be more transparent for stakeholders and more flexible than the methodology proposed by WQ. The WQ assessment protocol for growing pigs was implemented to collect data on different farms in Schleswig-Holstein, Germany. In total, 44 observations were carried out. The aggregation system proposed in the WQ protocol follows a three-step aggregation process. Measures are aggregated into criteria, criteria into principles and principles into an overall assessment. This study focussed on the first two steps of the aggregation. Multi-attribute utility theory (MAUT) was used to produce a value of welfare for each criterion and principle. The utility functions and the aggregation function were constructed in two separate steps. The MACBETH (Measuring Attractiveness by a Categorical-Based Evaluation Technique) method was used for utility function determination and the Choquet integral (CI) was used as an aggregation operator. The WQ decision-makers' preferences were fitted in order to construct the utility functions and to determine the CI parameters. The validation of the MAUT model was divided into two steps: first, the results of the model were compared with the results of the WQ project at criteria and principle level, and second, a sensitivity analysis of our model was carried out to demonstrate the relative importance of welfare measures in the different steps of the multi-criteria aggregation process. Using the MAUT, the results were similar to those obtained when applying the WQ protocol aggregation methods, both at criteria and principle level. Thus, this model could be implemented to produce an overall assessment of animal welfare in the context of the WQ protocol for growing pigs. 
Furthermore, this methodology could also be used as a framework in order to produce an overall assessment of welfare for other livestock species. Two main findings are obtained from the sensitivity analysis, first, a limited number of measures had a strong influence on improving or worsening the level of welfare at criteria level and second, the MAUT model was not very sensitive to an improvement in or a worsening of single welfare measures at principle level. The use of weighted sums and the conversion of disease measures into ordinal scores should be reconsidered.
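
    The CI aggregation step can be illustrated compactly. The sketch below computes a Choquet integral of three criterion scores with respect to a small capacity (fuzzy measure); the criterion names and capacity values are invented placeholders, not the WQ-fitted parameters.

```python
CRITERIA = ("absence_of_hunger", "comfort", "health")

# Capacity (fuzzy measure) mu on criterion coalitions: monotone, with
# mu(empty) = 0 and mu(all) = 1. Values are illustrative placeholders.
MU = {
    frozenset(): 0.0,
    frozenset({"absence_of_hunger"}): 0.3,
    frozenset({"comfort"}): 0.2,
    frozenset({"health"}): 0.4,
    frozenset({"absence_of_hunger", "comfort"}): 0.5,
    frozenset({"absence_of_hunger", "health"}): 0.8,
    frozenset({"comfort", "health"}): 0.6,
    frozenset(CRITERIA): 1.0,
}

def choquet(scores):
    """Choquet integral: visit scores in ascending order and weight each
    increment by the capacity of the coalition still at or above it."""
    order = sorted(CRITERIA, key=lambda c: scores[c])
    remaining = set(CRITERIA)
    total, prev = 0.0, 0.0
    for c in order:
        total += (scores[c] - prev) * MU[frozenset(remaining)]
        prev = scores[c]
        remaining.remove(c)
    return total

criterion_scores = {"absence_of_hunger": 60.0, "comfort": 80.0, "health": 40.0}
overall = choquet(criterion_scores)
# 40*1.0 + 20*0.5 + 20*0.2 = 54.0, always between min and max score
```

    Unlike a weighted sum, the capacity lets interactions between criteria (redundancy or synergy) shape the aggregate, which is why the CI is used in the model above.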

  11. Adaptation in Tunably Rugged Fitness Landscapes: The Rough Mount Fuji Model

    PubMed Central

    Neidhart, Johannes; Szendro, Ivan G.; Krug, Joachim

    2014-01-01

    Much of the current theory of adaptation is based on Gillespie’s mutational landscape model (MLM), which assumes that the fitness values of genotypes linked by single mutational steps are independent random variables. On the other hand, a growing body of empirical evidence shows that real fitness landscapes, while possessing a considerable amount of ruggedness, are smoother than predicted by the MLM. In the present article we propose and analyze a simple fitness landscape model with tunable ruggedness based on the rough Mount Fuji (RMF) model originally introduced by Aita et al. in the context of protein evolution. We provide a comprehensive collection of results pertaining to the topographical structure of RMF landscapes, including explicit formulas for the expected number of local fitness maxima, the location of the global peak, and the fitness correlation function. The statistics of single and multiple adaptive steps on the RMF landscape are explored mainly through simulations, and the results are compared to the known behavior in the MLM. Finally, we show that the RMF model can explain the large number of second-step mutations observed on a highly fit first-step background in a recent evolution experiment with a microvirid bacteriophage. PMID:25123507
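
    The RMF construction is simple to reproduce in code. The sketch below builds F(σ) = -c·d(σ, σ*) + η(σ) on length-8 binary sequences, with d the Hamming distance to a reference genotype and η i.i.d. Gaussian roughness, then enumerates the local fitness maxima; the slope c and noise scale are arbitrary illustrative choices.

```python
import random
from itertools import product

random.seed(2)

L, C_SLOPE, NOISE_SD = 8, 1.0, 0.5   # illustrative parameters
REF = (0,) * L                       # reference genotype sigma*
_eta = {}                            # cached i.i.d. roughness values

def fitness(sigma):
    """F(sigma) = -c * d(sigma, sigma*) + eta(sigma)."""
    if sigma not in _eta:
        _eta[sigma] = random.gauss(0.0, NOISE_SD)
    d = sum(a != b for a, b in zip(sigma, REF))
    return -C_SLOPE * d + _eta[sigma]

def neighbors(sigma):
    """All genotypes one mutational step away (single-bit flips)."""
    for i in range(L):
        yield sigma[:i] + (1 - sigma[i],) + sigma[i + 1:]

genotypes = list(product((0, 1), repeat=L))
maxima = [s for s in genotypes
          if all(fitness(s) > fitness(nb) for nb in neighbors(s))]
# c -> 0 recovers a maximally rugged (MLM-like) landscape; large c a
# smooth single-peaked one.
```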

  12. Multiphysics modelling of the separation of suspended particles via frequency ramping of ultrasonic standing waves.

    PubMed

    Trujillo, Francisco J; Eberhardt, Sebastian; Möller, Dirk; Dual, Jurg; Knoerzer, Kai

    2013-03-01

    A model was developed to determine the local changes of concentration of particles and the formation of bands induced by a standing acoustic wave field subjected to a sawtooth frequency ramping pattern. The mass transport equation was modified to incorporate the effect of acoustic forces on the concentration of particles. This was achieved by balancing the forces acting on the particles. The frequency ramping was implemented as a parametric sweep for the time-harmonic frequency response in time steps of 0.1 s. The physics phenomena of piezoelectricity, acoustic fields and diffusion of particles were coupled and solved in COMSOL Multiphysics™ (COMSOL AB, Stockholm, Sweden) following a three-step approach. The first step solves the governing partial differential equations describing the acoustic field by assuming that the pressure field achieves a pseudo steady state. In the second step, the acoustic radiation force is calculated from the pressure field. The final step calculates the locally changing concentration of particles as a function of time by solving the modified equation of particle transport. The diffusivity was calculated as a function of concentration following the Garg and Ruthven equation, which describes the steep increase of diffusivity when the concentration approaches saturation. However, it was found that this steep increase creates numerical instabilities at high voltages (in the piezoelectricity equations) and high initial particle concentrations. The model was simplified to a pseudo one-dimensional case due to computation power limitations. The predicted particle distribution calculated with the model is in good agreement with the experimental data as it accurately follows the movement of the bands in the centre of the chamber. Crown Copyright © 2012. Published by Elsevier B.V. All rights reserved.

  13. Individual Colorimetric Observer Model

    PubMed Central

    Asano, Yuta; Fairchild, Mark D.; Blondé, Laurent

    2016-01-01

    This study proposes a vision model for individual colorimetric observers. The proposed model can be beneficial in many color-critical applications such as color grading and soft proofing to assess ranges of color matches instead of a single average match. We extended the CIE 2006 physiological observer by adding eight additional physiological parameters to model individual color-normal observers. These eight parameters control lens pigment density, macular pigment density, optical densities of L-, M-, and S-cone photopigments, and λmax shifts of L-, M-, and S-cone photopigments. By identifying the variability of each physiological parameter, the model can simulate color matching functions among color-normal populations using Monte Carlo simulation. The variabilities of the eight parameters were identified through two steps. In the first step, extensive reviews of past studies were performed for each of the eight physiological parameters. In the second step, the obtained variabilities were scaled to fit a color matching dataset. The model was validated using three different datasets: traditional color matching, applied color matching, and Rayleigh matches. PMID:26862905
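The Monte Carlo machinery described above might be sketched as follows. The parameter standard deviations and the Gaussian stand-in for a cone fundamental are illustrative assumptions, not the study's fitted variabilities:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative standard deviations for the eight physiological parameters
# (assumed values; the study derives the real variabilities from data).
PARAM_SD = {
    "lens_density": 0.20, "macular_density": 0.30,
    "od_L": 0.10, "od_M": 0.10, "od_S": 0.10,        # optical densities
    "shift_L": 1.5, "shift_M": 1.0, "shift_S": 1.0,  # lambda-max shifts, nm
}

def sample_observer():
    # One Monte Carlo draw of the eight-parameter physiological vector
    return {name: rng.normal(0.0, sd) for name, sd in PARAM_SD.items()}

def toy_cone_fundamental(wl, peak_nm, shift_nm):
    # Gaussian stand-in for a real cone fundamental (illustration only)
    return np.exp(-0.5 * ((wl - (peak_nm + shift_nm)) / 40.0) ** 2)

wl = np.arange(390.0, 731.0)
observers = [sample_observer() for _ in range(200)]
l_cones = np.array([toy_cone_fundamental(wl, 570.0, o["shift_L"])
                    for o in observers])
max_spread = l_cones.std(axis=0).max()  # inter-observer variability
```

Sampling all eight parameters jointly yields a population of simulated color matching functions rather than a single average observer.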

  14. One-step fabrication of multifunctional micromotors

    NASA Astrophysics Data System (ADS)

    Gao, Wenlong; Liu, Mei; Liu, Limei; Zhang, Hui; Dong, Bin; Li, Christopher Y.

    2015-08-01

Although artificial micromotors have undergone tremendous progress in recent years, their fabrication normally requires complex steps or expensive equipment. In this paper, we report a facile one-step method based on an emulsion solvent evaporation process to fabricate multifunctional micromotors. By simultaneously incorporating various components into an oil-in-water droplet, upon emulsification and solidification, a sphere-shaped, asymmetric, and multifunctional micromotor is formed. Some of the attractive functions of this model micromotor include autonomous movement in high ionic strength solution, remote control, enzymatic disassembly and sustained release. This one-step, versatile fabrication method can be easily scaled up and therefore may have great potential in mass production of multifunctional micromotors for a wide range of practical applications. Electronic supplementary information (ESI) available: Videos S1-S4 and Fig. S1-S3. See DOI: 10.1039/c5nr03574k

  15. Modelling of Sub-daily Hydrological Processes Using Daily Time-Step Models: A Distribution Function Approach to Temporal Scaling

    NASA Astrophysics Data System (ADS)

    Kandel, D. D.; Western, A. W.; Grayson, R. B.

    2004-12-01

Mismatches in scale between the fundamental processes, the model and supporting data are a major limitation in hydrologic modelling. Surface runoff generation via infiltration excess and the process of soil erosion are fundamentally short time-scale phenomena and their average behaviour is mostly determined by the short time-scale peak intensities of rainfall. Ideally, these processes should be simulated using time-steps of the order of minutes to appropriately resolve the effect of rainfall intensity variations. However, sub-daily data support is often inadequate and the processes are usually simulated by calibrating daily (or even coarser) time-step models. Generally process descriptions are not modified but rather effective parameter values are used to account for the effect of temporal lumping, assuming that the effect of the scale mismatch can be counterbalanced by tuning the parameter values at the model time-step of interest. Often this results in parameter values that are difficult to interpret physically. A similar approach is often taken spatially. This is problematic as these processes generally operate or interact non-linearly. This indicates a need for better techniques to simulate sub-daily processes using daily time-step models while still using widely available daily information. A new method applicable to many rainfall-runoff-erosion models is presented. The method is based on temporal scaling using statistical distributions of rainfall intensity to represent sub-daily intensity variations in a daily time-step model. This allows the effect of short time-scale nonlinear processes to be captured while modelling at a daily time-step, which is often attractive due to the wide availability of daily forcing data. The approach relies on characterising the rainfall intensity variation within a day using a cumulative distribution function (cdf). This cdf is then modified by various linear and nonlinear processes typically represented in hydrological and erosion models. The statistical description of sub-daily variability is thus propagated through the model, allowing the effects of variability to be captured in the simulations. This results in cdfs of various fluxes, the integration of which over a day gives respective daily totals. Using 42-plot-years of surface runoff and soil erosion data from field studies in different environments from Australia and Nepal, simulation results from this cdf approach are compared with the sub-hourly (2-minute for Nepal and 6-minute for Australia) and daily models having similar process descriptions. Significant improvements in the simulation of surface runoff and erosion are achieved, compared with a daily model that uses average daily rainfall intensities. The cdf model compares well with a sub-hourly time-step model. This suggests that the approach captures the important effects of sub-daily variability while utilizing commonly available daily information. It is also found that the model parameters are more robustly defined using the cdf approach compared with the effective values obtained at the daily scale. This suggests that the cdf approach may offer improved model transferability spatially (to other areas) and temporally (to other periods).
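A minimal sketch of the cdf idea, assuming an exponential within-day intensity distribution and a fixed infiltration capacity f_c (both illustrative choices, not the paper's model):

```python
import numpy as np

def runoff_from_cdf(mean_intensity, f_c, n=10000):
    # Infiltration-excess runoff when within-day intensity follows an
    # assumed exponential distribution with the given daily mean:
    # integrate max(i - f_c, 0) over the intensity cdf.
    q = (np.arange(n) + 0.5) / n           # probability grid
    i = -mean_intensity * np.log1p(-q)     # exponential quantile function
    return np.mean(np.maximum(i - f_c, 0.0))

mean_i, f_c = 2.0, 3.0                 # mm/h; capacity exceeds the daily mean
daily_model = max(mean_i - f_c, 0.0)   # lumped daily model predicts no runoff
cdf_model = runoff_from_cdf(mean_i, f_c)  # cdf model captures intensity bursts
```

Because the capacity exceeds the mean rate, a lumped daily model predicts zero runoff, while integrating over the intensity distribution recovers the runoff generated by short high-intensity bursts (analytically, mean_i * exp(-f_c / mean_i) for this distribution).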

  16. Analysis and control of the METC fluid-bed gasifier. Quarterly report, October 1994--January 1995

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farell, A.E.; Reddy, S.

    1995-03-01

This document summarizes work performed for the period 10/1/94 to 2/1/95. The initial phase of the work focuses on developing a simple transfer function model of the Fluidized Bed Gasifier (FBG). This transfer function model will be developed based purely on the gasifier responses to step changes in gasifier inputs (including reactor air, convey air, cone nitrogen, FBG pressure, and coal feedrate). This transfer function model will represent a linear, dynamic model that is valid near the operating point at which the data were taken. In addition, a similar transfer function model will be developed using MGAS in order to assess MGAS for use as a model of the FBG for control systems analysis.
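Identifying a transfer function from step-response data can be sketched as follows. The first-order model structure, gain, and time constant are illustrative assumptions, not the FBG's actual dynamics:

```python
import numpy as np

def first_order_step(t, gain, tau):
    # Step response of the transfer function gain / (tau*s + 1)
    return gain * (1.0 - np.exp(-t / tau))

# Simulated "measured" gasifier output after a unit step in one input
t = np.linspace(0.0, 50.0, 201)
measured = first_order_step(t, gain=2.5, tau=8.0) + 0.01 * np.sin(0.7 * t)

# Grid-search tau; for each tau the best gain follows by linear least squares
best_sse, tau_hat, gain_hat = np.inf, None, None
for tau in np.arange(1.0, 20.0, 0.25):
    basis = first_order_step(t, 1.0, tau)
    gain = (basis @ measured) / (basis @ basis)
    sse = np.sum((measured - gain * basis) ** 2)
    if sse < best_sse:
        best_sse, tau_hat, gain_hat = sse, tau, gain
```

The same fit, repeated for each input-output pair of step tests, assembles the matrix of linearized transfer functions valid near the operating point.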

  17. Stepped Care Versus Direct Face-to-Face Cognitive Behavior Therapy for Social Anxiety Disorder and Panic Disorder: A Randomized Effectiveness Trial.

    PubMed

    Nordgreen, Tine; Haug, Thomas; Öst, Lars-Göran; Andersson, Gerhard; Carlbring, Per; Kvale, Gerd; Tangen, Tone; Heiervang, Einar; Havik, Odd E

    2016-03-01

    The aim of this study was to assess the effectiveness of a cognitive behavioral therapy (CBT) stepped care model (psychoeducation, guided Internet treatment, and face-to-face CBT) compared with direct face-to-face (FtF) CBT. Patients with panic disorder or social anxiety disorder were randomized to either stepped care (n=85) or direct FtF CBT (n=88). Recovery was defined as meeting two of the following three criteria: loss of diagnosis, below cut-off for self-reported symptoms, and functional improvement. No significant differences in intention-to-treat recovery rates were identified between stepped care (40.0%) and direct FtF CBT (43.2%). The majority of the patients who recovered in the stepped care did so at the less therapist-demanding steps (26/34, 76.5%). Moderate to large within-groups effect sizes were identified at posttreatment and 1-year follow-up. The attrition rates were high: 41.2% in the stepped care condition and 27.3% in the direct FtF CBT condition. These findings indicate that the outcome of a stepped care model for anxiety disorders is comparable to that of direct FtF CBT. The rates of improvement at the two less therapist-demanding steps indicate that stepped care models might be useful for increasing patients' access to evidence-based psychological treatments for anxiety disorders. However, attrition in the stepped care condition was high, and research regarding the factors that can improve adherence should be prioritized. Copyright © 2015. Published by Elsevier Ltd.

  18. Estimating heterotrophic respiration at large scales: Challenges, approaches, and next steps

    USGS Publications Warehouse

    Bond-Lamberty, Ben; Epron, Daniel; Harden, Jennifer W.; Harmon, Mark E.; Hoffman, Forrest; Kumar, Jitendra; McGuire, Anthony David; Vargas, Rodrigo

    2016-01-01

    Heterotrophic respiration (HR), the aerobic and anaerobic processes mineralizing organic matter, is a key carbon flux but one impossible to measure at scales significantly larger than small experimental plots. This impedes our ability to understand carbon and nutrient cycles, benchmark models, or reliably upscale point measurements. Given that a new generation of highly mechanistic, genomic-specific global models is not imminent, we suggest that a useful step to improve this situation would be the development of “Decomposition Functional Types” (DFTs). Analogous to plant functional types (PFTs), DFTs would abstract and capture important differences in HR metabolism and flux dynamics, allowing modelers and experimentalists to efficiently group and vary these characteristics across space and time. We argue that DFTs should be initially informed by top-down expert opinion, but ultimately developed using bottom-up, data-driven analyses, and provide specific examples of potential dependent and independent variables that could be used. We present an example clustering analysis to show how annual HR can be broken into distinct groups associated with global variability in biotic and abiotic factors, and demonstrate that these groups are distinct from (but complementary to) already-existing PFTs. A similar analysis incorporating observational data could form the basis for future DFTs. Finally, we suggest next steps and critical priorities: collection and synthesis of existing data; more in-depth analyses combining open data with rigorous testing of analytical results; using point measurements and realistic forcing variables to constrain process-based models; and planning by the global modeling community for decoupling decomposition from fixed site data. These are all critical steps to build a foundation for DFTs in global models, thus providing the ecological and climate change communities with robust, scalable estimates of HR.

  19. CO 2 induced phase transitions in diamine-appended metal–organic frameworks

    DOE PAGES

    Vlaisavljevich, Bess; Odoh, Samuel O.; Schnell, Sondre K.; ...

    2015-06-17

Using a combination of density functional theory and lattice models, we study the effect of CO 2 adsorption in an amine functionalized metal–organic framework. These materials exhibit a step in the adsorption isotherm indicative of a phase change. The pressure at which this step occurs is not only temperature dependent but is also metal center dependent. Likewise, the heats of adsorption vary depending on the metal center. Herein we demonstrate via quantum chemical calculations that the amines should not be considered firmly anchored to the framework and we explore the mechanism for CO 2 adsorption. An ammonium carbamate species is formed via the insertion of CO 2 into the M–N amine bonds. Furthermore, we translate the quantum chemical results into isotherms using a coarse grained Monte Carlo simulation technique and show that this adsorption mechanism can explain the characteristic step observed in the experimental isotherm while a previously proposed mechanism cannot. In addition, metal analogues have been explored and the CO 2 binding energies show a strong metal dependence corresponding to the M–N amine bond strength. We show that this difference can be exploited to tune the pressure at which the step in the isotherm occurs. Additionally, the mmen–Ni 2(dobpdc) framework shows Langmuir-like behavior, and our simulations show how this can be explained by competitive adsorption between the new model and a previously proposed model.

  20. Bayesian functional integral method for inferring continuous data from discrete measurements.

    PubMed

    Heuett, William J; Miller, Bernard V; Racette, Susan B; Holloszy, John O; Chow, Carson C; Periwal, Vipul

    2012-02-08

    Inference of the insulin secretion rate (ISR) from C-peptide measurements as a quantification of pancreatic β-cell function is clinically important in diseases related to reduced insulin sensitivity and insulin action. ISR derived from C-peptide concentration is an example of nonparametric Bayesian model selection where a proposed ISR time-course is considered to be a "model". An inferred value of inaccessible continuous variables from discrete observable data is often problematic in biology and medicine, because it is a priori unclear how robust the inference is to the deletion of data points, and a closely related question, how much smoothness or continuity the data actually support. Predictions weighted by the posterior distribution can be cast as functional integrals as used in statistical field theory. Functional integrals are generally difficult to evaluate, especially for nonanalytic constraints such as positivity of the estimated parameters. We propose a computationally tractable method that uses the exact solution of an associated likelihood function as a prior probability distribution for a Markov-chain Monte Carlo evaluation of the posterior for the full model. As a concrete application of our method, we calculate the ISR from actual clinical C-peptide measurements in human subjects with varying degrees of insulin sensitivity. Our method demonstrates the feasibility of functional integral Bayesian model selection as a practical method for such data-driven inference, allowing the data to determine the smoothing timescale and the width of the prior probability distribution on the space of models. In particular, our model comparison method determines the discrete time-step for interpolation of the unobservable continuous variable that is supported by the data. Attempts to go to finer discrete time-steps lead to less likely models. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.
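The Markov-chain Monte Carlo evaluation of such a posterior might be sketched with a toy smoothing problem. The Gaussian likelihood, the second-difference smoothness prior, and the proposal scale are illustrative assumptions, not the authors' C-peptide model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy discrete observations of an unobserved continuous curve
t_obs = np.linspace(0.0, 1.0, 20)
y_obs = np.sin(2 * np.pi * t_obs) + rng.normal(0.0, 0.1, t_obs.size)

def log_posterior(f, lam=50.0, sigma=0.1):
    # Gaussian likelihood plus a smoothness prior on second differences;
    # lam plays the role of the prior width on the space of models
    return (-0.5 * np.sum((f - y_obs) ** 2) / sigma ** 2
            - 0.5 * lam * np.sum(np.diff(f, 2) ** 2))

f = np.zeros_like(y_obs)          # the proposed "model" curve at grid points
lp = log_posterior(f)
accepted = 0
for _ in range(20000):
    proposal = f + rng.normal(0.0, 0.02, f.size)
    lp_prop = log_posterior(proposal)
    if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
        f, lp, accepted = proposal, lp_prop, accepted + 1
```

The chain drifts from the flat initial curve toward smooth curves consistent with the data; comparing posteriors across grid resolutions mirrors the paper's selection of the discrete time-step supported by the data.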

  1. Integration of biological networks and gene expression data using Cytoscape

    PubMed Central

    Cline, Melissa S; Smoot, Michael; Cerami, Ethan; Kuchinsky, Allan; Landys, Nerius; Workman, Chris; Christmas, Rowan; Avila-Campilo, Iliana; Creech, Michael; Gross, Benjamin; Hanspers, Kristina; Isserlin, Ruth; Kelley, Ryan; Killcoyne, Sarah; Lotia, Samad; Maere, Steven; Morris, John; Ono, Keiichiro; Pavlovic, Vuk; Pico, Alexander R; Vailaya, Aditya; Wang, Peng-Liang; Adler, Annette; Conklin, Bruce R; Hood, Leroy; Kuiper, Martin; Sander, Chris; Schmulevich, Ilya; Schwikowski, Benno; Warner, Guy J; Ideker, Trey; Bader, Gary D

    2013-01-01

    Cytoscape is a free software package for visualizing, modeling and analyzing molecular and genetic interaction networks. This protocol explains how to use Cytoscape to analyze the results of mRNA expression profiling, and other functional genomics and proteomics experiments, in the context of an interaction network obtained for genes of interest. Five major steps are described: (i) obtaining a gene or protein network, (ii) displaying the network using layout algorithms, (iii) integrating with gene expression and other functional attributes, (iv) identifying putative complexes and functional modules and (v) identifying enriched Gene Ontology annotations in the network. These steps provide a broad sample of the types of analyses performed by Cytoscape. PMID:17947979

  2. Probabilistic Plan Management

    DTIC Science & Technology

    2009-11-17

set of chains, the step adds scheduled methods that have an a priori likelihood of a failure outcome (Lines 3-5). It identifies the max eul value of the...activity meeting its objective, as well as its expected contribution to the schedule. By explicitly calculating these values, PADS is able to summarize the...variables. One of the main difficulties of this model is convolving the probability density functions and value functions while solving the model; this

  3. Building functional groups of marine benthic macroinvertebrates on the basis of general community assembly mechanisms

    NASA Astrophysics Data System (ADS)

    Alexandridis, Nikolaos; Bacher, Cédric; Desroy, Nicolas; Jean, Fred

    2017-03-01

    The accurate reproduction of the spatial and temporal dynamics of marine benthic biodiversity requires the development of mechanistic models, based on the processes that shape macroinvertebrate communities. The modelled entities should, accordingly, be able to adequately represent the many functional roles that are performed by benthic organisms. With this goal in mind, we applied the emergent group hypothesis (EGH), which assumes functional equivalence within and functional divergence between groups of species. The first step of the grouping involved the selection of 14 biological traits that describe the role of benthic macroinvertebrates in 7 important community assembly mechanisms. A matrix of trait values for the 240 species that occurred in the Rance estuary (Brittany, France) in 1995 formed the basis for a hierarchical classification that generated 20 functional groups, each with its own trait values. The functional groups were first evaluated based on their ability to represent observed patterns of biodiversity. The two main assumptions of the EGH were then tested, by assessing the preservation of niche attributes among the groups and the neutrality of functional differences within them. The generally positive results give us confidence in the ability of the grouping to recreate functional diversity in the Rance estuary. A first look at the emergent groups provides insights into the potential role of community assembly mechanisms in shaping biodiversity patterns. Our next steps include the derivation of general rules of interaction and their incorporation, along with the functional groups, into mechanistic models of benthic biodiversity.

  4. Models of subjective response to in-flight motion data

    NASA Technical Reports Server (NTRS)

    Rudrapatna, A. N.; Jacobson, I. D.

    1973-01-01

    Mathematical relationships between subjective comfort and environmental variables in an air transportation system are investigated. As a first step in model building, only the motion variables are incorporated and sensitivities are obtained using stepwise multiple regression analysis. The data for these models have been collected from commercial passenger flights. Two models are considered. In the first, subjective comfort is assumed to depend on rms values of the six-degrees-of-freedom accelerations. The second assumes a Rustenburg type human response function in obtaining frequency weighted rms accelerations, which are used in a linear model. The form of the human response function is examined and the results yield a human response weighting function for different degrees of freedom.

  5. Quadratic adaptive algorithm for solving cardiac action potential models.

    PubMed

    Chen, Min-Hung; Chen, Po-Yuan; Luo, Ching-Hsing

    2016-10-01

An adaptive integration method is proposed for computing cardiac action potential models accurately and efficiently. Time steps are adaptively chosen by solving a quadratic formula involving the first and second derivatives of the membrane action potential. To improve the numerical accuracy, we devise an extremum-locator (el) function to predict the local extremum when approaching the peak amplitude of the action potential. In addition, the time step restriction (tsr) technique is designed to limit the increase in time steps, and thus prevent the membrane potential from changing abruptly. The performance of the proposed method is tested using the Luo-Rudy phase 1 (LR1), dynamic (LR2), and human O'Hara-Rudy dynamic (ORd) ventricular action potential models, and the Courtemanche atrial model incorporating a Markov sodium channel model. Numerical experiments demonstrate that the action potential generated using the proposed method is more accurate than that using the traditional Hybrid method, especially near the peak region. The traditional Hybrid method may choose large time steps near the peak region, and sometimes causes the action potential to become distorted. In contrast, the proposed method chooses very fine time steps in the peak region but large time steps in the smooth region, and the profiles are smoother and closer to the reference solution. In the test on the stiff Markov ionic channel model, the Hybrid method blows up if the allowable time step is set to be greater than 0.1 ms. In contrast, our method adjusts the time step size automatically and is stable. Overall, the proposed method is more accurate than and as efficient as the traditional Hybrid method, especially for the human ORd model. The proposed method shows improvement for action potentials with a non-smooth morphology, and further investigation is needed to determine whether the method is helpful during propagation of the action potential. Copyright © 2016 Elsevier Ltd. All rights reserved.
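A quadratic step-size rule of the kind described above can be sketched as follows. The exact formula, tolerance, and mV/ms values are illustrative assumptions, not the authors' algorithm:

```python
import math

def adaptive_step(v1, v2, tol=0.1, dt_min=1e-4, dt_max=1.0):
    # Pick dt so the predicted change |v1*dt + 0.5*v2*dt**2| stays within tol,
    # using the positive root of 0.5*|v2|*dt**2 + |v1|*dt - tol = 0.
    a, b = 0.5 * abs(v2), abs(v1)
    if a < 1e-12:                        # nearly linear: fall back to |v1|*dt = tol
        dt = dt_max if b < 1e-12 else tol / b
    else:
        dt = (-b + math.sqrt(b * b + 4.0 * a * tol)) / (2.0 * a)
    return min(max(dt, dt_min), dt_max)

# Steep upstroke -> small steps; plateau -> large steps (mV/ms scales assumed)
dt_upstroke = adaptive_step(v1=200.0, v2=5000.0)
dt_plateau = adaptive_step(v1=0.5, v2=0.1)
```

During the rapid upstroke the large derivatives force a fine step, while on the plateau the rule relaxes toward the maximum step, which is the qualitative behavior the abstract describes.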

  6. Reconstructing biochemical pathways from time course data.

    PubMed

    Srividhya, Jeyaraman; Crampin, Edmund J; McSharry, Patrick E; Schnell, Santiago

    2007-03-01

    Time series data on biochemical reactions reveal transient behavior, away from chemical equilibrium, and contain information on the dynamic interactions among reacting components. However, this information can be difficult to extract using conventional analysis techniques. We present a new method to infer biochemical pathway mechanisms from time course data using a global nonlinear modeling technique to identify the elementary reaction steps which constitute the pathway. The method involves the generation of a complete dictionary of polynomial basis functions based on the law of mass action. Using these basis functions, there are two approaches to model construction, namely the general to specific and the specific to general approach. We demonstrate that our new methodology reconstructs the chemical reaction steps and connectivity of the glycolytic pathway of Lactococcus lactis from time course experimental data.
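Building a dictionary of mass-action basis functions and fitting it to time-course data might look like this sketch, using synthetic data for a single first-order reaction. The dictionary order and the plain least-squares fit are simplifying assumptions, not the paper's full method:

```python
import numpy as np
from itertools import combinations_with_replacement

def mass_action_dictionary(conc, max_order=2):
    # Every monomial of the species concentrations up to max_order,
    # i.e. the candidate mass-action rate terms
    n_obs, n_sp = conc.shape
    cols, names = [np.ones(n_obs)], ["1"]
    for order in range(1, max_order + 1):
        for idx in combinations_with_replacement(range(n_sp), order):
            cols.append(np.prod(conc[:, idx], axis=1))
            names.append("*".join(f"c{i}" for i in idx))
    return np.column_stack(cols), names

# Synthetic data for A -> B with rate constant k = 0.5 (dA/dt = -k*A)
t = np.linspace(0.0, 5.0, 200)
a = np.exp(-0.5 * t)
dadt = np.gradient(a, t)

theta, names = mass_action_dictionary(a[:, None])  # dictionary: 1, c0, c0*c0
coef, *_ = np.linalg.lstsq(theta, dadt, rcond=None)
k_hat = -coef[names.index("c0")]
```

The fitted coefficient on the linear term recovers the rate constant, while the constant and quadratic terms stay near zero; pruning such small terms is the essence of the general-to-specific approach.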

  7. Designing Illustrations for CBVE Technical Procedures.

    ERIC Educational Resources Information Center

    Laugen, Ronald C.

    A model was formulated for developing functional illustrations for text-based competency-based vocational education (CBVE) instructional materials. The proposed model contained four prescriptive steps that address the events of instruction to be provided or supported and the locations, content, and learning cues for each illustration. Usefulness…

  8. QM/QM approach to model energy disorder in amorphous organic semiconductors.

    PubMed

    Friederich, Pascal; Meded, Velimir; Symalla, Franz; Elstner, Marcus; Wenzel, Wolfgang

    2015-02-10

It is an outstanding challenge to model the electronic properties of organic amorphous materials utilized in organic electronics. Computation of the charge carrier mobility is a challenging problem as it requires integration of morphological and electronic degrees of freedom in a coherent methodology and depends strongly on the distribution of polaron energies in the system. Here we present a QM/QM model to compute the polaron energies, combining density functional methods for molecules in the vicinity of the polaron with computationally efficient density-functional-based tight binding methods for the rest of the environment. For seven widely used amorphous organic semiconductor materials, we show that the calculations are accelerated up to 1 order of magnitude without any loss in accuracy. Considering that the quantum chemical step is the efficiency bottleneck of a workflow to model the carrier mobility, these results are an important step toward accurate and efficient simulations of disordered organic semiconductors, a prerequisite for accelerated materials screening and consequent component optimization in the organic electronics industry.

  9. Solvable continuous-time random walk model of the motion of tracer particles through porous media.

    PubMed

    Fouxon, Itzhak; Holzner, Markus

    2016-08-01

    We consider the continuous-time random walk (CTRW) model of tracer motion in porous medium flows based on the experimentally determined distributions of pore velocity and pore size reported by Holzner et al. [M. Holzner et al., Phys. Rev. E 92, 013015 (2015)PLEEE81539-375510.1103/PhysRevE.92.013015]. The particle's passing through one channel is modeled as one step of the walk. The step (channel) length is random and the walker's velocity at consecutive steps of the walk is conserved with finite probability, mimicking that at the turning point there could be no abrupt change of velocity. We provide the Laplace transform of the characteristic function of the walker's position and reductions for different cases of independence of the CTRW's step duration τ, length l, and velocity v. We solve our model with independent l and v. The model incorporates different forms of the tail of the probability density of small velocities that vary with the model parameter α. Depending on that parameter, all types of anomalous diffusion can hold, from super- to subdiffusion. In a finite interval of α, ballistic behavior with logarithmic corrections holds, which was observed in a previously introduced CTRW model with independent l and τ. Universality of tracer diffusion in the porous medium is considered.
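The walk described above can be sketched directly. The exponential channel-length distribution and the power-law small-velocity tail are illustrative choices consistent with the description, not the fitted experimental distributions:

```python
import numpy as np

rng = np.random.default_rng(2)

def ctrw(n_walkers=2000, n_steps=400, persist=0.5, alpha=2.0):
    # Each step is one pore channel: random length, traversed at the current
    # velocity; at the turning point the velocity is kept with probability
    # `persist`, otherwise redrawn (alpha controls the small-velocity tail,
    # p(v) ~ v**(alpha - 1) near v = 0).
    x = np.zeros(n_walkers)          # downstream position
    t = np.zeros(n_walkers)          # elapsed time
    v = (1.0 - rng.random(n_walkers)) ** (1.0 / alpha)
    for _ in range(n_steps):
        length = rng.exponential(1.0, n_walkers)
        x += length
        t += length / v
        redraw = rng.random(n_walkers) > persist
        v[redraw] = (1.0 - rng.random(redraw.sum())) ** (1.0 / alpha)
    return x, t

x, t = ctrw()
mean_speed = (x / t).mean()
```

Varying alpha changes the weight of slow channels and hence the dispersion regime of the ensemble, which is the role the model parameter plays in the paper.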

  10. Modelling reveals kinetic advantages of co-transcriptional splicing.

    PubMed

    Aitken, Stuart; Alexander, Ross D; Beggs, Jean D

    2011-10-01

    Messenger RNA splicing is an essential and complex process for the removal of intron sequences. Whereas the composition of the splicing machinery is mostly known, the kinetics of splicing, the catalytic activity of splicing factors and the interdependency of transcription, splicing and mRNA 3' end formation are less well understood. We propose a stochastic model of splicing kinetics that explains data obtained from high-resolution kinetic analyses of transcription, splicing and 3' end formation during induction of an intron-containing reporter gene in budding yeast. Modelling reveals co-transcriptional splicing to be the most probable and most efficient splicing pathway for the reporter transcripts, due in part to a positive feedback mechanism for co-transcriptional second step splicing. Model comparison is used to assess the alternative representations of reactions. Modelling also indicates the functional coupling of transcription and splicing, because both the rate of initiation of transcription and the probability that step one of splicing occurs co-transcriptionally are reduced, when the second step of splicing is abolished in a mutant reporter.

  11. A State Event Detection Algorithm for Numerically Simulating Hybrid Systems with Model Singularities

    DTIC Science & Technology

    2007-01-01

the case of non-constant step sizes. Therefore the event dynamics after the predictor and corrector phases are, respectively, g^p_{k+1} = g(x_k + h_{k+1}{...} ...the Extrapolation Polynomial. Using a Taylor series expansion of the predicted event function, eq. (6), g^p_{k+1} = g_k + h_{k+1} (dg^p/dt)|_{(x,t)=(x_k,t_k)} + (h_{k+1}^2/2!) (d^2 g^p/dt^2)|_{(x,t)=(x_k,t_k)} + ..., (8) we can determine the value of g^p_{k+1} as a function of the, yet undetermined, step size h_{k+1}. Recalling
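Using such a truncated Taylor expansion, the predictor phase can solve a quadratic for the step size that lands on the event surface. The helper below is a hypothetical sketch under that assumption, not the report's algorithm:

```python
import math

def event_step(g, dg, d2g):
    # Smallest positive h solving g + dg*h + 0.5*d2g*h**2 = 0, i.e. where
    # the truncated Taylor expansion of the event function crosses zero;
    # returns None when no upcoming crossing is predicted.
    if abs(d2g) < 1e-14:                 # quadratic term vanishes: linear case
        if abs(dg) < 1e-14:
            return None
        h = -g / dg
        return h if h > 0.0 else None
    disc = dg * dg - 2.0 * d2g * g
    if disc < 0.0:
        return None
    roots = [(-dg + s * math.sqrt(disc)) / d2g for s in (1.0, -1.0)]
    positive = [h for h in roots if h > 0.0]
    return min(positive) if positive else None

# g(t) = t**2 - 1 at t = 0: g = -1, g' = 0, g'' = 2 -> crossing at h = 1
h_quad = event_step(-1.0, 0.0, 2.0)
h_lin = event_step(-2.0, 1.0, 0.0)       # linear case: crossing at h = 2
```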

  12. A class of all digital phase locked loops - Modelling and analysis.

    NASA Technical Reports Server (NTRS)

    Reddy, C. P.; Gupta, S. C.

    1972-01-01

An all-digital phase locked loop which tracks the phase of the incoming signal once per carrier cycle is proposed. The different elements and their functions, and the phase-lock operation, are explained in detail. The general digital loop operation is governed by a nonlinear difference equation from which a suitable model is developed. The lock range for the general model is derived. The performance of the digital loop for phase-step and frequency-step inputs, at different levels of quantization and without a loop filter, is studied. The analytical results are checked by simulating the actual system on a digital computer.
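A first-order loop of this kind, updated once per carrier cycle with a quantized phase detector, can be sketched as follows. The loop gain and quantizer resolution are illustrative assumptions:

```python
import numpy as np

def simulate_dpll(phase_in, gain=0.3, q_levels=16):
    # First-order digital loop, one update per carrier cycle; the phase
    # error is quantized to q_levels over (-pi, pi] before correction.
    lsb = 2.0 * np.pi / q_levels
    est, track = 0.0, []
    for ph in phase_in:
        err = (ph - est + np.pi) % (2.0 * np.pi) - np.pi  # wrap to (-pi, pi]
        err_q = lsb * np.round(err / lsb)                 # quantized detector
        est += gain * err_q
        track.append(est)
    return np.array(track)

# Phase step of 1 rad applied at cycle 20
phase_in = np.concatenate([np.zeros(20), np.full(180, 1.0)])
track = simulate_dpll(phase_in)
final_error = abs(1.0 - track[-1])  # residual limited by the quantizer LSB
```

The loop settles within a few cycles of the step, and the steady-state error is bounded by the quantizer dead zone, illustrating how the quantization level limits tracking accuracy.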

  13. Model of multistep electron transfer in a single-mode polar medium

    NASA Astrophysics Data System (ADS)

    Feskov, S. V.; Yudanov, V. V.

    2017-09-01

A mathematical model of multistep photoinduced electron transfer (PET) in a polar medium with a single relaxation time (Debye solvent) is developed. The model includes the polarization nonequilibrium formed in the vicinity of the donor-acceptor molecular system at the initial steps of the photoreaction and its influence on the subsequent steps of PET. It is established that the results of numerical simulation of transient luminescence spectra of photoexcited donor-acceptor complexes (DAC) conform to calculated data obtained on the basis of the familiar experimental technique used to measure the relaxation function of solvent polarization in the vicinity of DAC in the picosecond and subpicosecond ranges.

  14. The Relaxation of Vicinal (001) with ZigZag [110] Steps

    NASA Astrophysics Data System (ADS)

    Hawkins, Micah; Hamouda, Ajmi Bh; González-Cabrera, Diego Luis; Einstein, Theodore L.

    2012-02-01

    This talk presents a kinetic Monte Carlo study of the relaxation dynamics of [110] steps on a vicinal (001) simple cubic surface. This system is interesting because [110] steps have different elementary excitation energetics and favor step diffusion more than close-packed [100] steps. In this talk we show how this leads to relaxation dynamics showing greater fluctuations on a shorter time scale for [110] steps as well as 2-bond breaking processes being rate determining in contrast to 3-bond breaking processes for [100] steps. The existence of a steady state is shown via the convergence of terrace width distributions at times much longer than the relaxation time. In this time regime excellent fits to the modified generalized Wigner distribution (as well as to the Berry-Robnik model when steps can overlap) were obtained. Also, step-position correlation function data show diffusion-limited increase for small distances along the step as well as greater average step displacement for zigzag steps compared to straight steps for somewhat longer distances along the step. Work supported by NSF-MRSEC Grant DMR 05-20471 as well as a DOE-CMCSN Grant.

  15. [From stone-carved genes to Michelangelo: significance and different aspects of gene-environment interaction].

    PubMed

    Lazary, Judit

    2017-12-01

    Although genetic studies have advanced considerably in recent years, their significance is sometimes devalued for lack of clinical relevance. Reviewing the major milestones of psychogenomics, it can be seen that breakthrough success is just a question of time. Investigations of the direct effect of genetic variants on phenotypes have not yielded positive findings. However, an important step was taken by adapting the gene-environment interaction model, in which genetic vulnerability stepped into the place of "stone-carved" pathology. Further progress came when studies of environmental factors were combined with studies of gene function (epigenetics). This model made it possible to investigate therapeutic interventions as environmental factors, and it was proven that effective treatments exert a modifying effect on gene expression. Moreover, recent developments focus on therapeutic manipulation of gene function (e.g. chemogenetics). Instead of "stone-carved" genes, dynamically interacting gene function has become the basis of psychogenomics, in which correction of expression is a potential therapeutic tool. Keeping in mind these trends and developments, there is no doubt that genetics will be a fundamental part of daily clinical routine in the future.

  16. Optimized statistical parametric mapping procedure for NIRS data contaminated by motion artifacts: Neurometric analysis of body schema extension.

    PubMed

    Suzuki, Satoshi

    2017-09-01

    This study investigated the spatial distribution of brain activity on body schema (BS) modification induced by natural body motion using two versions of a hand-tracing task. In Task 1, participants traced Japanese Hiragana characters using the right forefinger, requiring no BS expansion. In Task 2, participants performed the tracing task with a long stick, requiring BS expansion. Spatial distribution was analyzed using general linear model (GLM)-based statistical parametric mapping of near-infrared spectroscopy data contaminated with motion artifacts caused by the hand-tracing task. Three methods were utilized in series to counter the artifacts, and optimal conditions and modifications were investigated: a model-free method (Step 1), a convolution matrix method (Step 2), and a boxcar-function-based Gaussian convolution method (Step 3). The results revealed four methodological findings: (1) Deoxyhemoglobin was suitable for the GLM because both Akaike information criterion and the variance against the averaged hemodynamic response function were smaller than for other signals, (2) a high-pass filter with a cutoff frequency of .014 Hz was effective, (3) the hemodynamic response function computed from a Gaussian kernel function and its first- and second-derivative terms should be included in the GLM model, and (4) correction of non-autocorrelation and use of effective degrees of freedom were critical. Investigating z-maps computed according to these guidelines revealed that contiguous areas of BA7-BA40-BA21 in the right hemisphere became significantly activated ([Formula: see text], [Formula: see text], and [Formula: see text], respectively) during BS modification while performing the hand-tracing task.
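The third step described above, a boxcar task regressor convolved with a Gaussian-kernel hemodynamic response function plus its first- and second-derivative terms and fit by ordinary least squares, can be sketched as follows. The HRF peak/width values, task timing, and simulated signal are all assumptions for illustration:

```python
import numpy as np

def gaussian_hrf(t, peak=6.0, width=2.0):
    """Assumed Gaussian-kernel hemodynamic response function (seconds)."""
    return np.exp(-0.5 * ((t - peak) / width) ** 2)

dt = 0.1
t = np.arange(0, 30, dt)
h = gaussian_hrf(t)
h1 = np.gradient(h, dt)      # first-derivative term
h2 = np.gradient(h1, dt)     # second-derivative term

# Boxcar task regressor: tracing blocks of 10 s within 40 s cycles (assumed)
time = np.arange(0, 120, dt)
boxcar = ((time % 40 > 5) & (time % 40 < 15)).astype(float)

# Design matrix: convolved HRF, its two derivatives, and an intercept
cols = [np.convolve(boxcar, k, mode="full")[: len(time)] for k in (h, h1, h2)]
X = np.column_stack(cols + [np.ones_like(time)])

# Simulated deoxyhemoglobin signal and OLS fit of the GLM
rng = np.random.default_rng(0)
y = 2.0 * X[:, 0] + rng.normal(0, 0.1, len(time))
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
```

In practice the fitted `beta` would then be tested per channel to build the z-maps; the correction for non-autocorrelation noted in point (4) is not shown here.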

  17. Development of a multi-criteria evaluation system to assess growing pig welfare.

    PubMed

    Martín, P; Traulsen, I; Buxadé, C; Krieter, J

    2017-03-01

    The aim of this paper was to present an alternative multi-criteria evaluation model to assess animal welfare on farms based on the Welfare Quality® (WQ) project, using an example of welfare assessment for growing pigs. The WQ assessment protocol follows a three-step aggregation process: measures are aggregated into criteria, criteria into principles and principles into an overall assessment. This study focussed on the first step of the aggregation. Multi-attribute utility theory (MAUT) was used to produce a welfare value for each criterion. The utility functions and the aggregation function were constructed in two separate steps. The Measuring Attractiveness by a Categorical Based Evaluation Technique (MACBETH) method was used for utility function determination, and the Choquet integral (CI) was used as an aggregation operator. The WQ decision-makers' preferences were fitted in order to construct the utility functions and to determine the CI parameters. The methods were tested with generated data sets for farms of growing pigs. Using MAUT, results similar to those obtained with the WQ protocol aggregation methods were achieved. It can be concluded that, owing to the use of an interactive approach such as MACBETH, this alternative methodology is more transparent and more flexible than the methodology proposed by WQ, allowing the model to be modified according to, for instance, new scientific knowledge.
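The discrete Choquet integral used as the aggregation operator sorts the criterion scores and weights their increments by a capacity (a set function). A minimal sketch; the measure names and capacity values are illustrative assumptions, not the fitted WQ parameters:

```python
def choquet_integral(scores, capacity):
    """Discrete Choquet integral of criterion scores with respect to a
    capacity: a set function with capacity[frozenset()] = 0 and
    capacity of the full set = 1."""
    items = sorted(scores, key=scores.get)   # ascending by score
    total, prev = 0.0, 0.0
    remaining = set(scores)
    for item in items:
        x = scores[item]
        total += (x - prev) * capacity[frozenset(remaining)]
        prev = x
        remaining.discard(item)
    return total

# Two welfare measures with an illustrative (assumed) capacity
scores = {"lesions": 0.5, "lameness": 0.8}
capacity = {
    frozenset(): 0.0,
    frozenset({"lesions"}): 0.3,
    frozenset({"lameness"}): 0.6,
    frozenset({"lesions", "lameness"}): 1.0,
}
print(choquet_integral(scores, capacity))  # ~0.68
```

Because the capacity of the pair (1.0) exceeds the sum of the singletons (0.9), this capacity models a synergy between the two measures, the kind of interaction a plain weighted sum cannot express.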

  18. Computational mate choice: theory and empirical evidence.

    PubMed

    Castellano, Sergio; Cadeddu, Giorgia; Cermelli, Paolo

    2012-06-01

    The present review is based on the thesis that mate choice results from information-processing mechanisms governed by computational rules and that, to understand how females choose their mates, we should identify which are the sources of information and how they are used to make decisions. We describe mate choice as a three-step computational process and for each step we present theories and review empirical evidence. The first step is a perceptual process. It describes the acquisition of evidence, that is, how females use multiple cues and signals to assign an attractiveness value to prospective mates (the preference function hypothesis). The second step is a decisional process. It describes the construction of the decision variable (DV), which integrates evidence (private information by direct assessment), priors (public information), and value (perceived utility) of prospective mates into a quantity that is used by a decision rule (DR) to produce a choice. We make the assumption that females are optimal Bayesian decision makers and we derive a formal model of DV that can explain the effects of preference functions, mate copying, social context, and females' state and condition on the patterns of mate choice. The third step of mating decision is a deliberative process that depends on the DRs. We identify two main categories of DRs (absolute and comparative rules), and review the normative models of mate sampling tactics associated to them. We highlight the limits of the normative approach and present a class of computational models (sequential-sampling models) that are based on the assumption that DVs accumulate noisy evidence over time until a decision threshold is reached. These models force us to rethink the dichotomy between comparative and absolute decision rules, between discrimination and recognition, and even between rational and irrational choice. 
Since they have a robust biological basis, we think they may represent a useful theoretical tool for behavioural ecologists interested in integrating proximate and ultimate causes of mate choice. Copyright © 2012 Elsevier B.V. All rights reserved.
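The sequential-sampling idea described above, a decision variable accumulating noisy evidence until a threshold is crossed, can be sketched as a simple drift-diffusion random walk; all parameter values and the accept/reject labels are illustrative assumptions:

```python
import random

def sequential_sampling(drift, noise, threshold, dt=0.01, rng=None,
                        max_steps=100000):
    """Accumulate noisy evidence until the decision variable (DV) crosses
    +threshold ("accept" the prospective mate) or -threshold ("reject")."""
    rng = rng or random.Random()
    dv, t = 0.0, 0.0
    for _ in range(max_steps):
        dv += drift * dt + rng.gauss(0.0, noise) * dt ** 0.5
        t += dt
        if dv >= threshold:
            return "accept", t
        if dv <= -threshold:
            return "reject", t
    return "undecided", t

rng = random.Random(1)
choice, latency = sequential_sampling(drift=0.5, noise=1.0, threshold=1.0,
                                      rng=rng)
```

With a positive drift (attractive mate) most runs end in "accept", but the noise term produces occasional rejections and variable decision latencies, which is precisely why these models blur the line between rational and irrational choice.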

  19. Effectively-truncated large-scale shell-model calculations and nuclei around 100Sn

    NASA Astrophysics Data System (ADS)

    Gargano, A.; Coraggio, L.; Itaco, N.

    2017-09-01

    This paper presents a short overview of a procedure we have recently introduced, dubbed the double-step truncation method, which aims to reduce the computational complexity of large-scale shell-model calculations. Within this procedure, one starts with a realistic shell-model Hamiltonian defined in a large model space, and then, by analyzing the effective single-particle energies of this Hamiltonian as a function of the number of valence protons and/or neutrons, reduced model spaces are identified containing only the single-particle orbitals relevant to the description of the spectroscopic properties of a certain class of nuclei. As a final step, new effective shell-model Hamiltonians defined within the reduced model spaces are derived by way of a unitary transformation of the original large-scale Hamiltonian. A detailed account of this transformation is given, and the merit of the double-step truncation method is illustrated by discussing a few selected results for 96Mo, described as four protons and four neutrons outside 88Sr. Some new preliminary results for light odd-tin isotopes from A = 101 to 107 are also reported.

  20. Step-by-Step Simulation of Radiation Chemistry Using Green Functions for Diffusion-Influenced Reactions

    NASA Technical Reports Server (NTRS)

    Plante, Ianik; Cucinotta, Francis A.

    2011-01-01

    Radiolytic species are formed approximately 1 ps after the passage of ionizing radiation through matter. After their formation, they diffuse and chemically react with other radiolytic species and neighboring biological molecules, leading to various kinds of oxidative damage. Therefore, the simulation of radiation chemistry is of considerable importance for understanding how radiolytic species damage biological molecules [1]. The step-by-step simulation of chemical reactions is difficult, because the radiolytic species are distributed non-homogeneously in the medium. Consequently, computational approaches based on Green functions for diffusion-influenced reactions should be used [2]. Recently, Green functions for more complex types of reactions have been published [3-4]. We have developed exact random variate generators for these Green functions [5], which will allow us to use them in radiation chemistry codes. Moreover, simulating chemistry using the Green functions is computationally very demanding, because the probabilities of reactions between each pair of particles must be evaluated at each timestep [2]. This kind of problem is well adapted to General Purpose Graphics Processing Units (GPGPU), which can handle a large number of similar calculations simultaneously. These new developments will allow us to include more complex reactions in chemistry codes and to improve the calculation time. This code should be of importance for linking radiation track structure simulations and DNA damage models.
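For the simplest case, an isolated pair undergoing an irreversible, diffusion-limited reaction at a fully absorbing encounter radius, the Green function of the diffusion equation yields a closed-form pairwise reaction probability (the classic Smoluchowski result). This is shown below as a simplified stand-in for the more complex reaction types discussed; the radius and diffusion coefficient are illustrative values:

```python
import math

def reaction_probability(r, R, D, t):
    """Probability that a pair starting at separation r >= R has reacted
    by time t, for a diffusion-limited irreversible reaction at encounter
    radius R with relative diffusion coefficient D (Smoluchowski):
    p(t) = (R/r) * erfc((r - R) / sqrt(4 D t))."""
    if r <= R:
        return 1.0
    return (R / r) * math.erfc((r - R) / (2.0 * math.sqrt(D * t)))

# Illustrative values in nm and nm^2/ps (assumed, not fitted constants)
R, D = 0.25, 5.0e-3
p = reaction_probability(r=0.5, R=R, D=D, t=1.0)
```

As t grows, p approaches the ultimate reaction probability R/r; a step-by-step code samples reaction or survival from such probabilities for every pair at every timestep, which is the pairwise workload the GPGPU implementation parallelizes.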

  1. Unified gas-kinetic scheme with multigrid convergence for rarefied flow study

    NASA Astrophysics Data System (ADS)

    Zhu, Yajun; Zhong, Chengwen; Xu, Kun

    2017-09-01

    The unified gas kinetic scheme (UGKS) is based on direct modeling of gas dynamics on the mesh size and time step scales. With the modeling of particle transport and collision in a time-dependent flux function in a finite volume framework, the UGKS can connect the flow physics smoothly from kinetic particle transport to hydrodynamic wave propagation. In comparison with the direct simulation Monte Carlo (DSMC) method, the current equation-based UGKS can implement implicit techniques in the updates of macroscopic conservative variables and microscopic distribution functions. The implicit UGKS significantly increases the convergence speed for steady flow computations, especially in the highly rarefied and near-continuum regimes. In order to further improve the computational efficiency, for the first time, a geometric multigrid technique is introduced into the implicit UGKS, where the prediction step for the equilibrium state and the evolution step for the distribution function are both treated with multigrid acceleration. More specifically, a full approximate nonlinear system is employed in the prediction step for fast evaluation of the equilibrium state, and a correction linear equation is solved in the evolution step for the update of the gas distribution function. As a result, the convergence speed has been greatly improved in all flow regimes, from rarefied to continuum. The multigrid implicit UGKS (MIUGKS) is used in the non-equilibrium flow study, which includes microflows, such as lid-driven cavity flow and flow passing through a finite-length flat plate, and high-speed flows, such as supersonic flow over a square cylinder. The MIUGKS shows a 5-9 times efficiency increase over the previous implicit scheme. For low-speed microflow, the efficiency of MIUGKS is several orders of magnitude higher than that of the DSMC. Even for hypersonic flow at Mach number 5 and Knudsen number 0.1, the MIUGKS is still more than 100 times faster than the DSMC method in obtaining a convergent steady-state solution.
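The geometric multigrid idea behind the acceleration, smoothing the error on a fine grid and correcting its smooth part on a coarser one, can be illustrated on a much simpler model problem. This is not the UGKS itself; it is a generic two-grid correction cycle for a 1D Poisson problem, with all discretization choices assumed:

```python
import numpy as np

def jacobi(u, f, h, sweeps, w=2.0 / 3.0):
    """Weighted Jacobi smoothing for -u'' = f with zero Dirichlet boundaries."""
    for _ in range(sweeps):
        u[1:-1] = (1.0 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def solve_poisson(f, h):
    """Direct solve of the discretized -u'' = f (used on the coarse grid)."""
    n = len(f) - 2
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / (h * h)
    u = np.zeros_like(f)
    u[1:-1] = np.linalg.solve(A, f[1:-1])
    return u

def two_grid(u, f, h, sweeps=3):
    """One two-grid cycle: pre-smooth, restrict the residual, solve the
    coarse problem, prolong the correction, post-smooth."""
    u = jacobi(u, f, h, sweeps)
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    ec = solve_poisson(r[::2].copy(), 2.0 * h)                  # injection
    u += np.interp(np.arange(len(u)), np.arange(0, len(u), 2), ec)
    return jacobi(u, f, h, sweeps)

# Model problem: -u'' = pi^2 sin(pi x) on [0, 1], u(0) = u(1) = 0
x = np.linspace(0.0, 1.0, 65)
h = 1.0 / 64.0
f = np.pi ** 2 * np.sin(np.pi * x)
u = np.zeros_like(x)
for _ in range(10):
    u = two_grid(u, f, h)
```

A few cycles drive the error down far faster than smoothing alone, because the coarse-grid correction removes exactly the smooth error components that relaxation damps slowly; the MIUGKS applies this principle to its prediction and evolution steps.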

  2. Estimating heterotrophic respiration at large scales: challenges, approaches, and next steps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bond-Lamberty, Benjamin; Epron, Daniel; Harden, Jennifer W.

    2016-06-27

    Heterotrophic respiration (HR), the aerobic and anaerobic processes mineralizing organic matter, is a key carbon flux but one impossible to measure at scales significantly larger than small experimental plots. This impedes our ability to understand carbon and nutrient cycles, benchmark models, or reliably upscale point measurements. Given that a new generation of highly mechanistic, genomic-specific global models is not imminent, we suggest that a useful step to improve this situation would be the development of "Decomposition Functional Types" (DFTs). Analogous to plant functional types (PFTs), DFTs would abstract and capture important differences in HR metabolism and flux dynamics, allowing models to efficiently group and vary these characteristics across space and time. We argue that DFTs should be initially informed by top-down expert opinion, but ultimately developed using bottom-up, data-driven analyses, and provide specific examples of potential dependent and independent variables that could be used. We present and discuss an example clustering analysis to show how model-produced annual HR can be broken into distinct groups associated with global variability in biotic and abiotic factors, and demonstrate that these groups are distinct from already-existing PFTs. A similar analysis, incorporating observational data, could form a basis for future DFTs. Finally, we suggest next steps and critical priorities: collection and synthesis of existing data; more in-depth analyses combining open data with high-performance computing; rigorous testing of analytical results; and planning by the global modeling community for decoupling decomposition from fixed site data. These are all critical steps to build a foundation for DFTs in global models, thus providing the ecological and climate change communities with robust, scalable estimates of HR at large scales.

  3. Estimating heterotrophic respiration at large scales: Challenges, approaches, and next steps

    DOE PAGES

    Bond-Lamberty, Ben; Epron, Daniel; Harden, Jennifer; ...

    2016-06-27

    Heterotrophic respiration (HR), the aerobic and anaerobic processes mineralizing organic matter, is a key carbon flux but one impossible to measure at scales significantly larger than small experimental plots. This impedes our ability to understand carbon and nutrient cycles, benchmark models, or reliably upscale point measurements. Given that a new generation of highly mechanistic, genomic-specific global models is not imminent, we suggest that a useful step to improve this situation would be the development of Decomposition Functional Types (DFTs). Analogous to plant functional types (PFTs), DFTs would abstract and capture important differences in HR metabolism and flux dynamics, allowing modelers and experimentalists to efficiently group and vary these characteristics across space and time. We argue that DFTs should be initially informed by top-down expert opinion, but ultimately developed using bottom-up, data-driven analyses, and provide specific examples of potential dependent and independent variables that could be used. We present an example clustering analysis to show how annual HR can be broken into distinct groups associated with global variability in biotic and abiotic factors, and demonstrate that these groups are distinct from (but complementary to) already-existing PFTs. A similar analysis incorporating observational data could form the basis for future DFTs. Finally, we suggest next steps and critical priorities: collection and synthesis of existing data; more in-depth analyses combining open data with rigorous testing of analytical results; using point measurements and realistic forcing variables to constrain process-based models; and planning by the global modeling community for decoupling decomposition from fixed site data. These are all critical steps to build a foundation for DFTs in global models, thus providing the ecological and climate change communities with robust, scalable estimates of HR.
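A clustering analysis of the kind described, grouping locations by annual HR together with environmental covariates, can be sketched with a minimal k-means on synthetic data. The variable choices, units, and values below are assumptions for illustration, not the paper's analysis:

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Minimal k-means: returns integer cluster labels for the rows of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Synthetic (annual HR, mean annual temperature) pairs for two notional DFTs
rng = np.random.default_rng(1)
cold = rng.normal([200.0, 2.0], [20.0, 1.0], size=(50, 2))   # gC/m2/yr, degC
warm = rng.normal([800.0, 25.0], [40.0, 1.0], size=(50, 2))
X = np.vstack([cold, warm])
labels = kmeans(X, k=2)
```

In a real analysis the features would be standardized and k chosen by a validity criterion; the point here is only the mechanics of grouping HR behavior into candidate functional types.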

  4. Descriptive vs. mechanistic network models in plant development in the post-genomic era.

    PubMed

    Davila-Velderrain, J; Martinez-Garcia, J C; Alvarez-Buylla, E R

    2015-01-01

    Network modeling is now a widespread practice in systems biology, as well as in integrative genomics, and it constitutes a rich and diverse scientific research field. A conceptually clear understanding of the reasoning behind the main existing modeling approaches, and their associated technical terminologies, is required to avoid confusion and to accelerate the undeniably necessary transition towards a more quantitative, multidisciplinary approach to biology. Herein, we focus on two main network-based modeling approaches that are commonly used depending on the information available and the intended goals: inference-based methods and system dynamics approaches. As far as data-based network inference methods are concerned, they enable the discovery of potential functional influences among molecular components. On the other hand, experimentally grounded network dynamical models have been shown to be perfectly suited for the mechanistic study of developmental processes. How do these two perspectives relate to each other? In this chapter, we describe and compare both approaches and then apply them to a given specific developmental module. Along with the step-by-step practical implementation of each approach, we also discuss their respective goals, utility, assumptions, and associated limitations. We use the gene regulatory network (GRN) involved in Arabidopsis thaliana root stem cell niche patterning as our illustrative example. We show that descriptive models based on functional genomics data can provide important background information consistent with experimentally supported functional relationships integrated in mechanistic GRN models. The rationale of analysis and modeling can be applied to any other well-characterized functional developmental module in multicellular organisms, like plants and animals.

  5. A new methodology based on sensitivity analysis to simplify the recalibration of functional-structural plant models in new conditions.

    PubMed

    Mathieu, Amélie; Vidal, Tiphaine; Jullien, Alexandra; Wu, QiongLi; Chambon, Camille; Bayol, Benoit; Cournède, Paul-Henry

    2018-06-19

    Functional-structural plant models (FSPMs) describe explicitly the interactions between plants and their environment at organ to plant scale. However, the high level of description of the structure or model mechanisms makes this type of model very complex and hard to calibrate. A two-step methodology to facilitate the calibration process is proposed here. First, a global sensitivity analysis method was applied to the calibration loss function. It provided first-order and total-order sensitivity indexes that allow parameters to be ranked by importance in order to select the most influential ones. Second, the Akaike information criterion (AIC) was used to quantify the model's quality of fit after calibration with different combinations of selected parameters. The model with the lowest AIC gives the best combination of parameters to select. This methodology was validated by calibrating the model on an independent data set (same cultivar, another year) with the parameters selected in the second step. All the parameters were set to their nominal value; only the most influential ones were re-estimated. Sensitivity analysis applied to the calibration loss function is a relevant method to underline the most significant parameters in the estimation process. For the studied winter oilseed rape model, 11 out of 26 estimated parameters were selected. Then, the model could be recalibrated for a different data set by re-estimating only three parameters selected with the model selection method. Fitting only a small number of parameters dramatically increases the efficiency of recalibration, increases the robustness of the model and helps identify the principal sources of variation in varying environmental conditions. This innovative method still needs to be more widely validated but already gives interesting avenues to improve the calibration of FSPMs.
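The second step of the methodology, comparing recalibrations with different parameter subsets by AIC, reduces to a simple formula under Gaussian errors. A sketch with hypothetical parameter names and residual sums of squares (all values assumed):

```python
import math

def aic(rss, n, k):
    """Akaike information criterion for a least-squares fit with n
    observations, k estimated parameters and residual sum of squares rss
    (Gaussian-error form, additive constants dropped)."""
    return n * math.log(rss / n) + 2 * k

# Hypothetical calibration results for three candidate parameter subsets
n = 120
candidates = {
    ("p1",): 48.0,              # rss after re-estimating only p1
    ("p1", "p7"): 30.5,
    ("p1", "p7", "p12"): 30.1,  # barely better fit, one extra parameter
}
scores = {params: aic(rss, n, len(params)) for params, rss in candidates.items()}
best = min(scores, key=scores.get)
```

Here the third parameter buys almost no fit improvement, so the 2k penalty makes the two-parameter model the winner, which is exactly how the criterion trades quality of fit against the number of re-estimated parameters.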

  6. A double-inverted pendulum model for studying the adaptability of postural control to frequency during human stepping in place.

    PubMed

    Breniere, Y; Ribreau, C

    1998-10-01

    In order to analyze the influence of gravity and body characteristics on the control of center of mass (CM) oscillations in stepping in place, equations of motion in oscillating systems were developed using a double-inverted pendulum model which accounts for both the head-arms-trunk (HAT) segment and the two-legged system. The principal goal of this work is to propose an equivalent model which makes use of the usual anthropometric data for the human body, in order to study the ability of postural control to adapt to the step frequency in this particular paradigm of human gait. This model allows the computation of CM-to-CP amplitude ratios, when the center of foot pressure (CP) oscillates, as a parametric function of the stepping in place frequency, whose parameters are gravity and major body characteristics. Motion analysis from a force plate was used to test the model by comparing experimental and simulated values of variations of the CM-to-CP amplitude ratio in the frontal plane versus the frequency. With data from the literature, the model is used to calculate the intersegmental torque which stabilizes the HAT when the Leg segment is subjected to a harmonic torque with an imposed frequency.
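The frequency dependence of the CM-to-CP amplitude ratio can be seen already in a linearized single inverted pendulum, a deliberate simplification of the paper's two-segment model (symbols and values below are assumptions). With CM dynamics x'' = w0^2 (x - x_CP), w0^2 = g/l, harmonic oscillation at angular frequency w gives A_CM / A_CP = w0^2 / (w0^2 + w^2):

```python
import math

def cm_cp_amplitude_ratio(freq_hz, pendulum_length=1.0, g=9.81):
    """Ratio of center-of-mass to center-of-pressure oscillation amplitude
    for a linearized single inverted pendulum driven at freq_hz.
    (Single-pendulum simplification, not the paper's two-segment model.)"""
    w0_sq = g / pendulum_length
    w = 2.0 * math.pi * freq_hz
    return w0_sq / (w0_sq + w ** 2)

# The CM follows the CP closely at slow stepping, barely at fast stepping
slow = cm_cp_amplitude_ratio(0.2)   # ~0.86
fast = cm_cp_amplitude_ratio(2.0)   # ~0.06
```

The monotonic fall of this ratio with frequency is the qualitative behavior the double-pendulum model refines by accounting for the HAT and two-legged segments separately.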

  7. Classification of longitudinal data through a semiparametric mixed-effects model based on lasso-type estimators.

    PubMed

    Arribas-Gil, Ana; De la Cruz, Rolando; Lebarbier, Emilie; Meza, Cristian

    2015-06-01

    We propose a classification method for longitudinal data. The Bayes classifier is classically used to determine a classification rule where the underlying density in each class needs to be well modeled and estimated. This work is motivated by a real dataset of hormone levels measured at the early stages of pregnancy that can be used to predict normal versus abnormal pregnancy outcomes. The proposed model, a semiparametric linear mixed-effects model (SLMM), is a particular case of the semiparametric nonlinear mixed-effects class of models (SNMM), in which finite-dimensional (fixed effects and variance components) and infinite-dimensional (an unknown function) parameters have to be estimated. In SNMMs, maximum likelihood estimation is performed iteratively, alternating parametric and nonparametric procedures. However, if one can assume that the random effects and the unknown function interact in a linear way, more efficient estimation methods can be used. Our contribution is a unified estimation procedure based on a penalized EM-type algorithm. The Expectation and Maximization steps are explicit. In the latter step, the unknown function is estimated in a nonparametric fashion using a lasso-type procedure. A simulation study and an application to real data are performed. © 2015, The International Biometric Society.
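The lasso-type update inside such a penalized estimation step reduces, coordinate by coordinate, to the soft-thresholding operator. A minimal sketch; the coordinate-descent wrapper and simulated data are illustrative, not the authors' algorithm:

```python
import numpy as np

def soft_threshold(z, lam):
    """Soft-thresholding operator: closed-form solution of the 1-D lasso
    problem min_b 0.5 * (b - z)**2 + lam * |b|."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def lasso_coordinate_descent(X, y, lam, iters=200):
    """Minimize (1/2n) * ||y - X b||^2 + lam * ||b||_1 by cyclic
    coordinate descent with soft-thresholding updates."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(iters):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]        # partial residual
            z = X[:, j] @ r / n
            b[j] = soft_threshold(z, lam) / col_sq[j]
    return b

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([2.0, 0.0, 0.0, -1.5, 0.0]) + rng.normal(0, 0.1, 100)
b = lasso_coordinate_descent(X, y, lam=0.1)
```

The penalty zeroes out the coefficients of the inactive predictors while shrinking the active ones slightly toward zero, the sparsity behavior that makes a lasso-type M-step attractive for estimating the unknown function.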

  8. Systems engineering principles for the design of biomedical signal processing systems.

    PubMed

    Faust, Oliver; Acharya U, Rajendra; Sputh, Bernhard H C; Min, Lim Choo

    2011-06-01

    Systems engineering aims to produce reliable systems which function according to specification. In this paper we follow a systems engineering approach to design a biomedical signal processing system. We discuss requirements capture, specification definition, implementation and testing of a classification system. These steps are executed as formally as possible. The requirements, which motivate the system design, are based on diabetes research. The main requirement for the classification system is to be a reliable component of a machine which controls diabetes. Reliability is very important, because uncontrolled diabetes may lead to hyperglycaemia (raised blood sugar) and over a period of time may cause serious damage to many of the body's systems, especially the nerves and blood vessels. In a second step, these requirements are refined into a formal CSP‖B model. The formal model expresses the system functionality in a clear and semantically strong way. Subsequently, the proven system model was translated into an implementation, which was tested with use cases and failure cases. Formal modeling and automated model checking gave us deep insight into the system functionality. This insight enabled us to create a reliable and trustworthy implementation, and with extensive tests we established trust in its reliability. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  9. Two steps forward, one step back? A commentary on the disease-specific core sets of the International Classification of Functioning, Disability and Health (ICF).

    PubMed

    McIntyre, Anne; Tempest, Stephanie

    2007-09-30

    The International Classification of Functioning, Disability and Health (ICF) has been received favourably by health care professionals, disability rights organizations and proponents of the social model of disability. The success of the ICF largely depends on its uptake in practice, yet it is considered unwieldy in its full format. To enhance the application of the ICF in practice, disease- and site-specific core sets have been developed. The objective of this paper is to stimulate thought and discussion about the place of the ICF core sets in rehabilitation practice. The authors' review of the literature uses the ICF core sets (especially for stroke) to debate whether the ICF is at risk of taking two steps forward, one step back in its holistic portrayal of health. ICF disease-specific core sets could be seen as taking two steps forward in enhancing the user-friendliness of the ICF and evidence-based practice in rehabilitation. However, there is a danger of taking one step back in reverting to a disease-specific classification. It is too early to judge the efficacy of the disease-specific core sets, but there is an opportunity to debate where the next steps may lead.

  10. Head Transplantation in Mouse Model.

    PubMed

    Ren, Xiao-Ping; Ye, Yi-Jie; Li, Peng-Wei; Shen, Zi-Long; Han, Ke-Cheng; Song, Yang

    2015-08-01

    The mouse model of allo-head and body reconstruction (AHBR) has recently been established to further the clinical development of this strategy for patients who are suffering from mortal bodily trauma or disease, yet whose mind remains healthy. Animal model studies are indispensable for developing such novel surgical practices. The goal of this work was to establish a head-transplant mouse model and then, as a next step, to use this feasible biological model to investigate immune rejection and brain function, thereby promoting the future translation of AHBR to the clinic. Our approach involves retaining adequate blood perfusion in the transplanted head throughout the surgical procedure by establishing donor-to-recipient cross-circulation, cannulating and anastomosing the carotid artery on one side of the body and the jugular vein on the other side. Neurological function was preserved by this strategy, as indicated by electroencephalogram and intact cranial nerve reflexes. The results of this study support the feasibility of this method for avoiding brain ischemia during transplantation, thereby allowing for the possibility of long-term studies of head transplantation. © 2015 John Wiley & Sons Ltd.

  11. Constructing biological pathway models with hybrid functional Petri nets.

    PubMed

    Doi, Atsushi; Fujita, Sachie; Matsuno, Hiroshi; Nagasaki, Masao; Miyano, Satoru

    2004-01-01

    In many research projects on modeling and analyzing biological pathways, the Petri net has been recognized as a promising method for representing biological pathways. Since the pioneering works by Reddy et al., 1993, and Hofestädt, 1994, which model metabolic pathways by traditional Petri nets, several enhanced Petri nets such as the colored Petri net, stochastic Petri net, and hybrid Petri net have been used for modeling biological phenomena. Recently, Matsuno et al., 2003b, introduced the hybrid functional Petri net (HFPN) in order to give a more intuitive and natural modeling method for biological pathways than these existing Petri nets. Although that paper demonstrates the effectiveness of the HFPN with two examples, a gene regulation mechanism for circadian rhythms and an apoptosis signaling pathway, there has been no detailed explanation of the method of HFPN construction for these examples. The purpose of this paper is to describe, step by step, a method for constructing biological pathway models with the HFPN. The method is demonstrated on the well-known glycolytic pathway controlled by the lac operon gene regulatory mechanism.
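The discrete core of any Petri net model, places holding tokens and transitions firing when their input places are sufficiently marked, can be sketched as follows. The two-reaction toy pathway is an illustration only, not the paper's glycolysis model, and a hybrid functional Petri net additionally layers continuous places and rate functions on top of this discrete mechanism:

```python
def enabled(marking, pre):
    """A transition is enabled when every input place holds enough tokens."""
    return all(marking[p] >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Fire a transition: consume input tokens, produce output tokens."""
    assert enabled(marking, pre)
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# Toy two-step pathway: glucose -> G6P -> F6P (place names illustrative)
marking = {"glucose": 2, "G6P": 0, "F6P": 0}
t1 = ({"glucose": 1}, {"G6P": 1})   # (pre, post) arcs of transition 1
t2 = ({"G6P": 1}, {"F6P": 1})       # (pre, post) arcs of transition 2

marking = fire(marking, *t1)
marking = fire(marking, *t2)
print(marking)  # {'glucose': 1, 'G6P': 0, 'F6P': 1}
```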

  12. Constructing biological pathway models with hybrid functional petri nets.

    PubMed

    Doi, Atsushi; Fujita, Sachie; Matsuno, Hiroshi; Nagasaki, Masao; Miyano, Satoru

    2011-01-01

    In many research projects on modeling and analyzing biological pathways, the Petri net has been recognized as a promising method for representing biological pathways. Since the pioneering works by Reddy et al., 1993, and Hofestädt, 1994, which model metabolic pathways by traditional Petri nets, several enhanced Petri nets such as the colored Petri net, stochastic Petri net, and hybrid Petri net have been used for modeling biological phenomena. Recently, Matsuno et al., 2003b, introduced the hybrid functional Petri net (HFPN) in order to give a more intuitive and natural modeling method for biological pathways than these existing Petri nets. Although that paper demonstrates the effectiveness of the HFPN with two examples, a gene regulation mechanism for circadian rhythms and an apoptosis signaling pathway, there has been no detailed explanation of the method of HFPN construction for these examples. The purpose of this paper is to describe, step by step, a method for constructing biological pathway models with the HFPN. The method is demonstrated on the well-known glycolytic pathway controlled by the lac operon gene regulatory mechanism.

  13. Identifying Model-Based Reconfiguration Goals through Functional Deficiencies

    NASA Technical Reports Server (NTRS)

    Benazera, Emmanuel; Trave-Massuyes, Louise

    2004-01-01

    Model-based diagnosis is now advanced to the point where autonomous systems can face some uncertain and faulty situations with success. The next step toward more autonomy is to have the system recover itself after faults occur, a process known as model-based reconfiguration. Given a prediction of the nominal behavior of the system and the result of the diagnosis operation after faults occur, this paper details how to automatically determine the functional deficiencies of the system. These deficiencies are characterized in the case of uncertain state estimates. A methodology is then presented to determine the reconfiguration goals based on the deficiencies. Finally, a recovery process interleaves planning and model predictive control to restore the functionalities in prioritized order.

  14. Pinyon, Version 0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ward, Logan; Hackenberg, Robert

    2017-02-13

    Pinyon is a tool that stores the steps involved in creating a model derived from a collection of data. The main function of Pinyon is to store descriptions of the calculations used to analyze or visualize the data in a database, and to allow users to view the results of these calculations via a web interface. Users may also use the web interface to adjust the calculations and rerun the entire collection of analysis steps automatically.

  15. Water oxidation chemistry of photosystem II.

    PubMed Central

    Vrettos, John S; Brudvig, Gary W

    2002-01-01

    The O2-evolving complex of photosystem II catalyses the light-driven four-electron oxidation of water to dioxygen in photosynthesis. In this article, the steps leading to photosynthetic O2 evolution are discussed. Emphasis is given to the proton-coupled electron-transfer steps involved in oxidation of the manganese cluster by oxidized tyrosine Z (YZ•), the function of Ca2+ and the mechanism by which water is activated for formation of an O-O bond. Based on a consideration of the biophysical studies of photosystem II and inorganic manganese model chemistry, a mechanism for photosynthetic O2 evolution is presented in which the O-O bond-forming step occurs via nucleophilic attack on an electron-deficient Mn(V)=O species by a calcium-bound water molecule. The proposed mechanism includes specific roles for the tetranuclear manganese cluster, calcium, chloride, YZ and His190 of the D1 polypeptide. Recent studies of the ion selectivity of the calcium site in the O2-evolving complex and of a functional inorganic manganese model system that test key aspects of this mechanism are also discussed. PMID:12437878

  16. A kinetic study of struvite precipitation recycling technology with NaOH/Mg(OH)2 addition.

    PubMed

    Yu, Rongtai; Ren, Hongqiang; Wang, Yanru; Ding, Lili; Geng, Jingji; Xu, Ke; Zhang, Yan

    2013-09-01

    Struvite precipitation recycling technology has received wide attention for removing ammonium and phosphate from wastewater. Past studies have focused on process efficiency rather than kinetics, yet a kinetic study is essential for the design and optimization of struvite precipitation recycling technology in practice. The kinetics of struvite with NaOH/Mg(OH)2 addition were studied by thermogravimetric analysis at three heating rates (5, 10, 20 °C/min), using the Friedman method and the Ozawa-Flynn-Wall method, respectively. The degradation of struvite with NaOH/Mg(OH)2 addition proceeded in three steps, with the stripping of ammonia from struvite occurring mainly in the first step. In the first step, the activation energy was about 70 kJ/mol, declining gradually as the reaction progressed. Model-fitting studies revealed the mechanism function for the struvite decomposition process with NaOH/Mg(OH)2 addition to be f(α) = α^a(1 − α)^n, a Prout-Tompkins nth-order (Bna) model. Copyright © 2013 Elsevier Ltd. All rights reserved.
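    The Ozawa-Flynn-Wall analysis used above extracts the activation energy from the slope of ln β versus 1/T at fixed conversion. A minimal sketch (with synthetic, self-consistent data rather than the paper's measurements, and an assumed intercept C) is:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def ofw_activation_energy(betas, temps):
    """Ozawa-Flynn-Wall: ln(beta) = C - 1.052*E/(R*T) at a fixed conversion,
    so E = -slope * R / 1.052 from a least-squares fit of ln(beta) vs 1/T."""
    xs = [1.0 / T for T in temps]
    ys = [math.log(b) for b in betas]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return -slope * R / 1.052

# Synthetic check: generate temperatures exactly on the OFW line for
# E = 70 kJ/mol (the first-step value reported above) and recover E.
E_true, C = 70e3, 25.0
betas = [5.0, 10.0, 20.0]                     # heating rates
temps = [1.052 * E_true / (R * (C - math.log(b))) for b in betas]
E_est = ofw_activation_energy(betas, temps)   # ~70 kJ/mol
```

    With real thermogravimetric data the fit is repeated at each conversion level α, giving the activation-energy trend reported in the abstract.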

  17. Robot-Applied Resistance Augments the Effects of Body Weight-Supported Treadmill Training on Stepping and Synaptic Plasticity in a Rodent Model of Spinal Cord Injury.

    PubMed

    Hinahon, Erika; Estrada, Christina; Tong, Lin; Won, Deborah S; de Leon, Ray D

    2017-08-01

    The application of resistive forces has been used during body weight-supported treadmill training (BWSTT) to improve walking function after spinal cord injury (SCI). Whether this form of training actually augments the effects of BWSTT is not yet known. The objective of this study was to determine whether robot-applied resistance augments the effects of BWSTT, using a controlled experimental design in a rodent model of SCI. Spinally contused rats were treadmill trained using robotic resistance against horizontal (n = 9) or vertical (n = 8) hind limb movements. Hind limb stepping was tested before and after 6 weeks of training. Two control groups, one receiving standard training (ie, without resistance; n = 9) and one untrained (n = 8), were also tested. At the terminal experiment, the spinal cords were prepared for immunohistochemical analysis of synaptophysin. Six weeks of training with horizontal resistance increased step length, whereas training with vertical resistance enhanced step height and movement velocity. None of these changes occurred in the group that received standard (ie, no resistance) training or in the untrained group. Only standard training increased the number of step cycles and shortened cycle period toward normal values. Synaptophysin expression in the ventral horn was highest in rats trained with horizontal resistance and in untrained rats and was positively correlated with step length. Adding robot-applied resistance to BWSTT produced gains in locomotor function over BWSTT alone. The impact of resistive forces on spinal connections may depend on the nature of the resistive forces and the synaptic milieu that is present after SCI.

  18. From proteomics to systems biology: MAPA, MASS WESTERN, PROMEX, and COVAIN as a user-oriented platform.

    PubMed

    Weckwerth, Wolfram; Wienkoop, Stefanie; Hoehenwarter, Wolfgang; Egelhofer, Volker; Sun, Xiaoliang

    2014-01-01

    Genome sequencing and systems biology are revolutionizing life sciences. Proteomics emerged as a fundamental technique of this novel research area as it is the basis for gene function analysis and modeling of dynamic protein networks. Here a complete proteomics platform suited for functional genomics and systems biology is presented. The strategy includes MAPA (mass accuracy precursor alignment; http://www.univie.ac.at/mosys/software.html ) as a rapid exploratory analysis step; MASS WESTERN for targeted proteomics; COVAIN ( http://www.univie.ac.at/mosys/software.html ) for multivariate statistical analysis, data integration, and data mining; and PROMEX ( http://www.univie.ac.at/mosys/databases.html ) as a database module for proteogenomics and proteotypic peptides for targeted analysis. Moreover, the presented platform can also be utilized to integrate metabolomics and transcriptomics data for the analysis of metabolite-protein-transcript correlations and time course analysis using COVAIN. Examples for the integration of MAPA and MASS WESTERN data, proteogenomic and metabolic modeling approaches for functional genomics, phosphoproteomics by integration of MOAC (metal-oxide affinity chromatography) with MAPA, and the integration of metabolomics, transcriptomics, proteomics, and physiological data using this platform are presented. All software and step-by-step tutorials for data processing and data mining can be downloaded from http://www.univie.ac.at/mosys/software.html.

  19. WebStart WEPS: Remote data access and model execution functionality added to WEPS

    USDA-ARS?s Scientific Manuscript database

    The Wind Erosion Prediction System (WEPS) is a daily time step, process based wind erosion model developed by the United States Department of Agriculture - Agricultural Research Service (USDA-ARS). WEPS simulates climate and management driven changes to the surface/vegetation/soil state on a daily b...

  20. On Fences, Forms and Mathematical Modeling

    ERIC Educational Resources Information Center

    Lege, Jerry

    2009-01-01

    The white picket fence is an integral component of the iconic American townscape. But, for mathematics students, it can be a mathematical challenge. Picket fences in a variety of styles serve as excellent sources to model constant, step, absolute value, and sinusoidal functions. "Principles and Standards for School Mathematics" (NCTM 2000)…
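    A picket fence height profile really is a textbook step function. A minimal sketch, with made-up picket and gap dimensions, is:

```python
def fence_height(x, picket_w=0.09, gap_w=0.05, h_picket=1.2, h_gap=0.0):
    """Height (m) of an idealized picket fence at horizontal position x (m),
    modeled as a periodic step function. All dimensions are illustrative."""
    period = picket_w + gap_w
    return h_picket if (x % period) < picket_w else h_gap

# on a picket, in a gap, and on the next picket
heights = [fence_height(0.0), fence_height(0.1), fence_height(0.15)]
```

    Varying the picket tops (flat, pointed, scalloped) is what brings in the absolute-value and sinusoidal variants the abstract mentions.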

  1. The effects of the one-step replica symmetry breaking on the Sherrington-Kirkpatrick spin glass model in the presence of random field with a joint Gaussian probability density function for the exchange interactions and random fields

    NASA Astrophysics Data System (ADS)

    Hadjiagapiou, Ioannis A.; Velonakis, Ioannis N.

    2018-07-01

    The Sherrington-Kirkpatrick Ising spin glass model, in the presence of a random magnetic field, is investigated within the framework of one-step replica symmetry breaking. The two random variables (exchange integral Jij and random magnetic field hi) are drawn from a joint Gaussian probability density function characterized by a correlation coefficient ρ, which can assume positive and negative values. The thermodynamic properties, the three different phase diagrams and the system's parameters are computed with respect to the natural parameters of the joint Gaussian probability density function at non-zero and zero temperatures. The low-temperature negative entropy controversy, a result of the replica symmetry approach, is partly remedied in the current study, leading to a less negative result. In addition, the present system possesses two successive spin glass phase transitions with characteristic temperatures.
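    Drawing the coupled pair (Jij, hi) from a joint bivariate Gaussian with correlation coefficient ρ can be sketched with the standard Cholesky construction; the means, standard deviations and ρ below are illustrative, not the paper's values:

```python
import math
import random

def correlated_gaussian_pair(mu_J, sd_J, mu_h, sd_h, rho, rng):
    """Draw (J, h) from a bivariate Gaussian with correlation rho by mixing
    two independent standard normals (Cholesky factor of the 2x2 covariance)."""
    z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
    J = mu_J + sd_J * z1
    h = mu_h + sd_h * (rho * z1 + math.sqrt(1.0 - rho ** 2) * z2)
    return J, h

rng = random.Random(42)
pairs = [correlated_gaussian_pair(0.0, 1.0, 0.0, 1.0, -0.6, rng)
         for _ in range(200000)]
# the sample correlation of the pairs is close to rho = -0.6
```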

  2. Simulation of gait and gait initiation associated with body oscillating behavior in the gravity environment on the moon, mars and Phobos.

    PubMed

    Brenière, Y

    2001-04-01

    A double-inverted pendulum model of body oscillations in the frontal plane during stepping [Brenière and Ribreau (1998) Biol Cybern 79: 337-345] proposed an equivalent model for studying the body oscillating behavior induced by step frequency in the form of: (1) a kinetic body parameter, the natural body frequency (NBF), which contains gravity and which is invariable for humans, (2) a parametric function of frequency, whose parameter is the NBF, which explicates the amplitude ratio of center of mass to center of foot pressure oscillation, and (3) a function of frequency which simulates the equivalent torque necessary for the control of the head-arms-trunk segment oscillations. Here, this equivalent model is used to simulate the duration of gait initiation, i.e., the duration necessary to initiate and execute the first step of gait in subgravity, as well as to calculate the step frequencies that would impose the same minimum and maximum amplitudes of the oscillating responses of the body center of mass, whatever the gravity value. In particular, this simulation is tested under the subgravity conditions of the Moon, Mars, and Phobos, where gravity is 1/6, 3/8, and 1/1600 times that on the Earth, respectively. More generally, the simulation allows us to establish and discuss the conditions for gait adaptability that result from the biomechanical constraints particular to each gravity system.
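    The gravity dependence at the core of the simulation can be illustrated with a generic pendulum frequency f = √(g/L)/(2π) standing in for the paper's natural body frequency (the equivalent length L is an assumed parameter, not a value from the model):

```python
import math

def natural_frequency(g, length):
    """Natural frequency (Hz) of an equivalent pendulum, f = sqrt(g/L)/(2*pi).
    A generic stand-in for the paper's NBF; `length` is an illustrative value."""
    return math.sqrt(g / length) / (2.0 * math.pi)

g_earth = 9.81
# frequency ratio on the Moon (g/6) relative to Earth is sqrt(1/6), about 0.41,
# i.e. the body's natural oscillations are much slower in subgravity
ratio_moon = natural_frequency(g_earth / 6.0, 1.0) / natural_frequency(g_earth, 1.0)
```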

  3. One-step model of photoemission from single-crystal surfaces

    DOE PAGES

    Karkare, Siddharth; Wan, Weishi; Feng, Jun; ...

    2017-02-28

    In our paper, we present a three-dimensional one-step photoemission model that can be used to calculate the quantum efficiency and momentum distributions of electrons photoemitted from ordered single-crystal surfaces close to the photoemission threshold. Using Ag(111) as an example, we show that the model not only calculates the quantum efficiency from the surface state accurately without any ad hoc parameters, but also provides a theoretical, quantitative explanation of the vectorial photoelectric effect. This model, in conjunction with other band structure and wave function calculation techniques, can be effectively used to screen single-crystal photoemitters for use as electron sources for particle accelerator and ultrafast electron diffraction applications.

  4. Modeling study on the cleavage step of the self-splicing reaction in group I introns

    NASA Technical Reports Server (NTRS)

    Setlik, R. F.; Garduno-Juarez, R.; Manchester, J. I.; Shibata, M.; Ornstein, R. L.; Rein, R.

    1993-01-01

    A three-dimensional model of the Tetrahymena thermophila group I intron is used to further explore the catalytic mechanism of the transphosphorylation reaction of the cleavage step. Based on the coordinates of the catalytic core model proposed by Michel and Westhof (Michel, F., Westhof, E. J. Mol. Biol. 216, 585-610 (1990)), we first converted their ligation step model into a model of the cleavage step by the substitution of several bases and the removal of helix P9. Next, an attempt to place a trigonal bipyramidal transition state model in the active site revealed that this modified model for the cleavage step could not accommodate the transition state due to insufficient space. A lowering of P1 helix relative to surrounding helices provided the additional space required. Simultaneously, it provided a better starting geometry to model the molecular contacts proposed by Pyle et al. (Pyle, A. M., Murphy, F. L., Cech, T. R. Nature 358, 123-128. (1992)), based on mutational studies involving the J8/7 segment. Two hydrated Mg2+ complexes were placed in the active site of the ribozyme model, using the crystal structure of the functionally similar Klenow fragment (Beese, L.S., Steitz, T.A. EMBO J. 10, 25-33 (1991)) as a guide. The presence of two metal ions in the active site of the intron differs from previous models, which incorporate one metal ion in the catalytic site to fulfill the postulated roles of Mg2+ in catalysis. The reaction profile is simulated based on a trigonal bipyramidal transition state, and the role of the hydrated Mg2+ complexes in catalysis is further explored using molecular orbital calculations.

  5. The electrical resistivity of rough thin films: A model based on electron reflection at discrete step edges

    NASA Astrophysics Data System (ADS)

    Zhou, Tianji; Zheng, Pengyuan; Pandey, Sumeet C.; Sundararaman, Ravishankar; Gall, Daniel

    2018-04-01

    The effect of the surface roughness on the electrical resistivity of metallic thin films is described by electron reflection at discrete step edges. A Landauer formalism for incoherent scattering leads to a parameter-free expression for the resistivity contribution from surface mound-valley undulations that is additive to the resistivity associated with bulk and surface scattering. In the classical limit where the electron reflection probability matches the ratio of the step height h to the film thickness d, the additional resistivity Δρ = √(3/2)/(g0·d) × ω/ξ, where g0 is the specific ballistic conductance and ω/ξ is the ratio of the root-mean-square surface roughness ω to the lateral correlation length ξ of the surface morphology. First-principles non-equilibrium Green's function density functional theory transport simulations on 1-nm-thick Cu(001) layers validate the model, confirming that the electron reflection probability is equal to h/d and that the incoherent formalism matches the coherent scattering simulations for surface step separations ≥2 nm. Experimental confirmation is done using 4.5-52 nm thick epitaxial W(001) layers, where ω = 0.25-1.07 nm and ξ = 10.5-21.9 nm are varied by in situ annealing. Electron transport measurements at 77 and 295 K indicate a linear relationship between Δρ and ω/(ξd), confirming the model predictions. The model suggests a stronger resistivity size effect than predictions of existing models by Fuchs [Math. Proc. Cambridge Philos. Soc. 34, 100 (1938)], Sondheimer [Adv. Phys. 1, 1 (1952)], Rossnagel and Kuan [J. Vac. Sci. Technol., B 22, 240 (2004)], or Namba [Jpn. J. Appl. Phys., Part 1 9, 1326 (1970)]. It provides a quantitative explanation for the empirical parameters in these models and may explain the recently reported deviations of experimental resistivity values from these models.
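    The classical-limit expression above is straightforward to evaluate. A sketch with illustrative numbers (the g0 magnitude, thickness and roughness values below are assumptions, not the paper's fitted data):

```python
import math

def step_edge_resistivity(g0, d, omega, xi):
    """Resistivity contribution from surface steps in the classical limit:
    delta_rho = sqrt(3/2) / (g0 * d) * (omega / xi)."""
    return math.sqrt(1.5) / (g0 * d) * (omega / xi)

# a 10 nm film with 0.5 nm rms roughness and 15 nm lateral correlation length
g0 = 1.0e15                      # specific ballistic conductance (assumed scale)
drho = step_edge_resistivity(g0, d=10e-9, omega=0.5e-9, xi=15e-9)
# doubling the roughness-to-correlation-length ratio doubles the contribution
```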

  6. Interactive Design Strategy for a Multi-Functional PAMAM Dendrimer-Based Nano-Therapeutic Using Computational Models and Experimental Analysis

    PubMed Central

    Lee, Inhan; Williams, Christopher R.; Athey, Brian D.; Baker, James R.

    2010-01-01

    Molecular dynamics simulations of nano-therapeutics as a final product and of all intermediates in the process of generating a multi-functional nano-therapeutic based on a poly(amidoamine) (PAMAM) dendrimer were performed along with chemical analyses of each of them. The actual structures of the dendrimers were predicted, based on potentiometric titration, gel permeation chromatography, and NMR. The chemical analyses determined the numbers of functional molecules, based on the actual structure of the dendrimer. Molecular dynamics simulations calculated the configurations of the intermediates and the radial distributions of functional molecules, based on their numbers. This interactive process between the simulation results and the chemical analyses provided a further strategy to design the next reaction steps and to gain insight into the products at each chemical reaction step. PMID:20700476

  7. One-step fabrication of multifunctional micromotors.

    PubMed

    Gao, Wenlong; Liu, Mei; Liu, Limei; Zhang, Hui; Dong, Bin; Li, Christopher Y

    2015-09-07

    Although artificial micromotors have undergone tremendous progress in recent years, their fabrication normally requires complex steps or expensive equipment. In this paper, we report a facile one-step method based on an emulsion solvent evaporation process to fabricate multifunctional micromotors. By simultaneously incorporating various components into an oil-in-water droplet, upon emulsification and solidification, a sphere-shaped, asymmetric, and multifunctional micromotor is formed. Some of the attractive functions of this model micromotor include autonomous movement in high ionic strength solution, remote control, enzymatic disassembly and sustained release. This one-step, versatile fabrication method can be easily scaled up and therefore may have great potential in mass production of multifunctional micromotors for a wide range of practical applications.

  8. Identifying pleiotropic genes in genome-wide association studies from related subjects using the linear mixed model and Fisher combination function.

    PubMed

    Yang, James J; Williams, L Keoki; Buu, Anne

    2017-08-24

    A multivariate genome-wide association test is proposed for analyzing data on multivariate quantitative phenotypes collected from related subjects. The proposed method is a two-step approach. The first step models the association between the genotype and marginal phenotype using a linear mixed model. The second step uses the correlation between residuals of the linear mixed model to estimate the null distribution of the Fisher combination test statistic. The simulation results show that the proposed method controls the type I error rate and is more powerful than the marginal tests across different population structures (admixed or non-admixed) and relatedness (related or independent). The statistical analysis on the database of the Study of Addiction: Genetics and Environment (SAGE) demonstrates that applying the multivariate association test may facilitate identification of the pleiotropic genes contributing to the risk for alcohol dependence commonly expressed by four correlated phenotypes. This study proposes a multivariate method for identifying pleiotropic genes while adjusting for cryptic relatedness and population structure between subjects. The two-step approach is not only powerful but also computationally efficient even when the number of subjects and the number of phenotypes are both very large.
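    The second-step test statistic is Fisher's combination function, X = -2 Σ ln p_i. The sketch below shows only the plain statistic (under independence X is chi-square with 2k degrees of freedom); the paper's correlation-adjusted null distribution is not reproduced here:

```python
import math

def fisher_combination(pvalues):
    """Fisher's combination statistic X = -2 * sum(ln p_i) over k marginal
    p-values. Under independence X ~ chi-square with 2k degrees of freedom."""
    return -2.0 * sum(math.log(p) for p in pvalues)

X = fisher_combination([0.01, 0.04, 0.20])   # combine three marginal tests
```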

  9. A heuristic neural network initialization scheme for modeling nonlinear functions in engineering mechanics: continuous development

    NASA Astrophysics Data System (ADS)

    Pei, Jin-Song; Mai, Eric C.

    2007-04-01

    This paper introduces a continuing effort toward the development of a heuristic initialization methodology for constructing multilayer feedforward neural networks to model nonlinear functions. In this and the previous studies that this work builds upon, including the one presented at SPIE 2006, the authors do not presume to provide a universal method to approximate arbitrary functions; rather, the focus is on the development of a rational and unambiguous initialization procedure that applies to the approximation of nonlinear functions in the specific domain of engineering mechanics. The applications of this exploratory work can be numerous, including those associated with potential correlation and interpretation of the inner workings of neural networks, such as damage detection. The goal of this study is fulfilled by utilizing the governing physics and mathematics of nonlinear functions and the strength of the sigmoidal basis function. A step-by-step graphical procedure utilizing a few neural network prototypes as "templates" to approximate commonly seen memoryless nonlinear functions of one or two variables is further developed in this study. Decomposition of complex nonlinear functions into a summation of simpler nonlinear functions is utilized to exploit this prototype-based initialization methodology. Training examples are presented to demonstrate the rationality and efficiency of the proposed methodology when compared with the popular Nguyen-Widrow initialization algorithm. Future work is also identified.
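    The sigmoidal basis at the heart of the scheme can be illustrated by a single neuron initialized to mimic a unit step, a toy prototype in the spirit of, but not identical to, the paper's templates:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def step_approx(x, center=0.0, sharpness=50.0):
    """One sigmoid neuron initialized to approximate a unit step at `center`;
    the input weight (`sharpness`) controls how crisp the transition is."""
    return sigmoid(sharpness * (x - center))

# far from the transition the neuron output is essentially 0 or 1
lo, hi = step_approx(-0.5), step_approx(0.5)
```

    Summing several such deliberately placed neurons is the decomposition idea: each prototype handles one feature (a step, a ramp, a kink) of the target function.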

  10. Improving plant functional groups for dynamic models of biodiversity: at the crossroads between functional and community ecology

    PubMed Central

    Isabelle, Boulangeat; Pauline, Philippe; Sylvain, Abdulhak; Roland, Douzet; Luc, Garraud; Sébastien, Lavergne; Sandra, Lavorel; Jérémie, Van Es; Pascal, Vittoz; Wilfried, Thuiller

    2013-01-01

    The pace of on-going climate change calls for reliable plant biodiversity scenarios. Traditional dynamic vegetation models use plant functional types that are summarized to such an extent that they become meaningless for biodiversity scenarios. Hybrid dynamic vegetation models of intermediate complexity (hybrid-DVMs) have recently been developed to address this issue. These models, at the crossroads between phenomenological and process-based models, are able to involve an intermediate number of well-chosen plant functional groups (PFGs). The challenge is to build meaningful PFGs that are representative of plant biodiversity, and consistent with the parameters and processes of hybrid-DVMs. Here, we propose and test a framework based on few selected traits to define a limited number of PFGs, which are both representative of the diversity (functional and taxonomic) of the flora in the Ecrins National Park, and adapted to hybrid-DVMs. This new classification scheme, together with recent advances in vegetation modeling, constitutes a step forward for mechanistic biodiversity modeling. PMID:24403847

  11. PV_LIB Toolbox

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2012-09-11

    While an organized source of reference information on PV performance modeling is certainly valuable, there is nothing to match the availability of actual examples of modeling algorithms being used in practice. To meet this need, Sandia has developed a PV performance modeling toolbox (PV_LIB) for Matlab. It contains a set of well-documented, open source functions and example scripts showing the functions being used in practical examples. This toolbox is meant to help make the multi-step process of modeling a PV system more transparent and to provide the means for model users to validate and understand the models they use and/or develop. It is fully integrated into Matlab's help and documentation utilities. The PV_LIB Toolbox provides more than 30 functions sorted into four categories.

  12. Optical pattern recognition algorithms on neural-logic equivalent models and demonstration of their prospects and possible implementations

    NASA Astrophysics Data System (ADS)

    Krasilenko, Vladimir G.; Nikolsky, Alexander I.; Zaitsev, Alexandr V.; Voloshin, Victor M.

    2001-03-01

    Historical information on the origin and foundations of an algebra-logical apparatus, `equivalental algebra', for describing neural-network paradigms and algorithms is considered; it unifies the theory of neural networks (NN), linear algebra, and generalized neurobiology, extended to the matrix case. A survey of `equivalental models' of neural networks and associative memory is given, and new, modified matrix-tensor neuro-logical equivalental models (MTNLEMs) with double adaptive-equivalental weighing (DAEW) are proposed for spatially non-invariant recognition (SNIR) and space-invariant recognition (SIR) of 2D images (patterns). It is shown that MTNLEMs with DAEW are the most general: they can describe processes in NNs both within the frames of known paradigms and within a new `equivalental' paradigm of the non-interaction type, and computation in NNs under the proposed MTNLEMs reduces to two-step and multi-step algorithms, step-by-step matrix-tensor procedures (for SNIR), and procedures for defining space-dependent equivalental functions from two images (for SIR).

  13. Accelerated molecular dynamics and protein conformational change: a theoretical and practical guide using a membrane embedded model neurotransmitter transporter.

    PubMed

    Gedeon, Patrick C; Thomas, James R; Madura, Jeffry D

    2015-01-01

    Molecular dynamics simulation provides a powerful and accurate method to model protein conformational change, yet timescale limitations often prevent direct assessment of the kinetic properties of interest. A large number of molecular dynamics steps are necessary for rare events to occur, which allow a system to overcome energy barriers and conformationally transition from one potential energy minimum to another. For many proteins, the energy landscape is further complicated by a multitude of potential energy wells, each separated by high free-energy barriers and each potentially representative of a functionally important protein conformation. To overcome these obstacles, accelerated molecular dynamics utilizes a robust bias potential function to simulate the transition between different potential energy minima. This straightforward approach more efficiently samples conformational space in comparison to classical molecular dynamics simulation, does not require advanced knowledge of the potential energy landscape and converges to the proper canonical distribution. Here, we review the theory behind accelerated molecular dynamics and discuss the approach in the context of modeling protein conformational change. As a practical example, we provide a detailed, step-by-step explanation of how to perform an accelerated molecular dynamics simulation using a model neurotransmitter transporter embedded in a lipid cell membrane. Changes in protein conformation of relevance to the substrate transport cycle are then examined using principal component analysis.
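    A widely used accelerated-MD bias is the boost potential of Hamelberg et al., which raises basins below a threshold energy E while leaving the potential above E untouched; a minimal sketch (the review's specific parameterization is not restated here, and E and alpha below are illustrative):

```python
def amd_boost(V, E, alpha):
    """Accelerated-MD modified potential: for V < E the potential is raised by
    dV = (E - V)**2 / (alpha + E - V), flattening energy wells so transitions
    between minima occur in fewer steps; for V >= E it is unchanged."""
    if V >= E:
        return V
    return V + (E - V) ** 2 / (alpha + E - V)

v_mod = amd_boost(V=-10.0, E=0.0, alpha=5.0)   # basin raised, still below E
```

    Smaller alpha flattens the landscape more aggressively; observables are later reweighted by exp(dV/kT) to recover the canonical distribution.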

  14. Effects of rewiring strategies on information spreading in complex dynamic networks

    NASA Astrophysics Data System (ADS)

    Ally, Abdulla F.; Zhang, Ning

    2018-04-01

    Recent advances in networks and communication services have attracted much interest in understanding information spreading in social networks. Consequently, numerous studies have been devoted to providing effective and accurate models for mimicking information spreading. However, knowledge of how to spread information faster and more widely remains a contentious issue, and most existing works are based on static networks, which neglect the dynamism of the entities that participate in information spreading. Using the SIR epidemic model, this study explores and compares the effects of two rewiring models (Fermi-Dirac and linear functions) on information spreading in scale-free and small-world networks. Our results show that for all the rewiring strategies, the spreading influence grows with time but settles into a steady state at later time steps. This means that information spreading takes off during the initial spreading steps, after which the spreading prevalence settles toward its equilibrium, with the majority of the population having recovered and thus no longer affecting the spreading. Meanwhile, the rewiring strategy based on the Fermi-Dirac distribution function tends to impede the spreading process, although the structure of the networks still supports spreading even at a low spreading rate; the worst case occurs when the spreading rate is extremely small. The results emphasize that despite the large role of such networks in shaping the spreading, the role of the parameters cannot simply be ignored. Apparently, the probability that large-degree neighbors become informed grows much faster under the linear rewiring strategy than under the Fermi-Dirac one. Clearly, the rewiring model based on the linear function generates the fastest spreading across the networks. Therefore, if we are interested in speeding up the spreading process in stochastic modeling, the linear function may play a pivotal role.
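    One synchronous SIR update on an arbitrary network can be sketched as below (beta and gamma are per-step probabilities; the Fermi-Dirac and linear rewiring strategies themselves are not reproduced, and the toy network is an assumption):

```python
import random

def sir_step(adj, state, beta, gamma, rng):
    """One synchronous SIR update on an adjacency dict: a susceptible node is
    infected with probability 1 - (1 - beta)**k given k infected neighbors,
    and an infected node recovers with probability gamma."""
    new = dict(state)
    for node, nbrs in adj.items():
        if state[node] == "S":
            k = sum(1 for n in nbrs if state[n] == "I")
            if k and rng.random() < 1.0 - (1.0 - beta) ** k:
                new[node] = "I"
        elif state[node] == "I" and rng.random() < gamma:
            new[node] = "R"
    return new

# toy line network a-b-c with b initially infected
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
state = {"a": "S", "b": "I", "c": "S"}
nxt = sir_step(adj, state, beta=1.0, gamma=1.0, rng=random.Random(0))
# with beta = gamma = 1 the update is deterministic: a and c infected, b recovered
```

    A rewiring strategy would modify `adj` between such steps according to its probability function; that layer is omitted here.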

  15. Spatiotemporal groundwater level modeling using hybrid artificial intelligence-meshless method

    NASA Astrophysics Data System (ADS)

    Nourani, Vahid; Mousavi, Shahram

    2016-05-01

    Uncertainties of the field parameters, noise of the observed data and unknown boundary conditions are the main factors involved in the groundwater level (GL) time series which limit the modeling and simulation of GL. This paper presents a hybrid artificial intelligence-meshless model for spatiotemporal GL modeling. In this way firstly time series of GL observed in different piezometers were de-noised using threshold-based wavelet method and the impact of de-noised and noisy data was compared in temporal GL modeling by artificial neural network (ANN) and adaptive neuro-fuzzy inference system (ANFIS). In the second step, both ANN and ANFIS models were calibrated and verified using GL data of each piezometer, rainfall and runoff considering various input scenarios to predict the GL at one month ahead. In the final step, the simulated GLs in the second step of modeling were considered as interior conditions for the multiquadric radial basis function (RBF) based solve of governing partial differential equation of groundwater flow to estimate GL at any desired point within the plain where there is not any observation. In order to evaluate and compare the GL pattern at different time scales, the cross-wavelet coherence was also applied to GL time series of piezometers. The results showed that the threshold-based wavelet de-noising approach can enhance the performance of the modeling up to 13.4%. Also it was found that the accuracy of ANFIS-RBF model is more reliable than ANN-RBF model in both calibration and validation steps.
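    The multiquadric RBF step can be sketched in one dimension with a pure-Python interpolant (the node data and shape parameter below are illustrative, not GL observations):

```python
import math

def multiquadric(r, c=1.0):
    """Multiquadric radial basis function phi(r) = sqrt(r^2 + c^2)."""
    return math.sqrt(r * r + c * c)

def rbf_fit(xs, ys, c=1.0):
    """Solve the interpolation system A w = y, A_ij = phi(|x_i - x_j|),
    by Gaussian elimination with partial pivoting."""
    n = len(xs)
    A = [[multiquadric(abs(xs[i] - xs[j]), c) for j in range(n)] + [ys[i]]
         for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda row: abs(A[row][col]))
        A[col], A[piv] = A[piv], A[col]
        for row in range(col + 1, n):
            f = A[row][col] / A[col][col]
            for k in range(col, n + 1):
                A[row][k] -= f * A[col][k]
    w = [0.0] * n
    for i in range(n - 1, -1, -1):
        w[i] = (A[i][n] - sum(A[i][k] * w[k] for k in range(i + 1, n))) / A[i][i]
    return w

def rbf_eval(x, xs, w, c=1.0):
    """Evaluate the fitted interpolant at any point x."""
    return sum(wi * multiquadric(abs(x - xi), c) for wi, xi in zip(w, xs))

xs, ys = [0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 4.0, 9.0]
w = rbf_fit(xs, ys)
# the interpolant reproduces the data exactly at the nodes
```

    In the paper's setting the nodes are piezometer locations and the fitted surface estimates GL anywhere in the plain; the meshless solve of the groundwater flow equation adds further constraints not shown here.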

  16. A two-step super-Gaussian independent component analysis approach for fMRI data.

    PubMed

    Ge, Ruiyang; Yao, Li; Zhang, Hang; Long, Zhiying

    2015-09-01

    Independent component analysis (ICA) has been widely applied to functional magnetic resonance imaging (fMRI) data analysis. Although ICA assumes that the sources underlying the data are statistically independent, it usually ignores sources' additional properties, such as sparsity. In this study, we propose a two-step super-Gaussian ICA (2SGICA) method that incorporates the sparse prior of the sources into the ICA model. 2SGICA uses the super-Gaussian ICA (SGICA) algorithm, based on a simplified Lewicki-Sejnowski model, to obtain the initial source estimate in the first step. Using a kernel estimator technique, the source density is acquired and fitted to the Laplacian function based on the initial source estimates. The fitted Laplacian prior is used for each source in the second SGICA step. Moreover, the automatic target generation process for initial value generation is used in 2SGICA to guarantee the stability of the algorithm. An adaptive step size selection criterion is also implemented in the proposed algorithm. We performed experimental tests on both simulated data and real fMRI data to investigate the feasibility and robustness of 2SGICA and made a performance comparison among InfomaxICA, FastICA, mean field ICA (MFICA) with a Laplacian prior, sparse online dictionary learning (ODL), SGICA and 2SGICA. Both the simulated and real fMRI experiments showed that 2SGICA was the most robust to noise and had the best spatial detection power and time course estimation among the six methods. Copyright © 2015. Published by Elsevier Inc.
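    Fitting a Laplacian prior to initial source estimates can be sketched with the closed-form maximum-likelihood fit (the paper fits the kernel-estimated density instead; this is a simplification): the location is the sample median and the scale is the mean absolute deviation from it:

```python
def laplace_mle(samples):
    """Maximum-likelihood Laplacian fit p(x) ~ exp(-|x - mu| / b) / (2b):
    mu is the sample median and b the mean absolute deviation from it."""
    s = sorted(samples)
    n = len(s)
    mu = s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])
    b = sum(abs(x - mu) for x in samples) / n
    return mu, b

mu, b = laplace_mle([1.0, 2.0, 3.0, 4.0, 100.0])  # heavy-tailed toy sample
```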

  17. The role of shock induced trailing-edge separation in limit cycle oscillations

    NASA Technical Reports Server (NTRS)

    Cunningham, Atlee M., Jr.

    1989-01-01

    The potential role of shock-induced trailing-edge separation (SITES) in limit cycle oscillations (LCO) was established. It was shown that the flip-flop character of the transition to and from SITES, as well as its hysteresis, could couple with wing modes having torsional motion and low damping. This connection led to the formulation of a very simple nonlinear math model using the linear equations of motion with a nonlinear step forcing function with hysteresis. A finite-difference solution in time was developed, and calculations made for the F-111 TACT were used to determine the step forcing function due to the SITES transition. Since no data were available for the hysteresis, a parameter study was conducted in which the hysteresis effect was allowed to vary. Very small hysteresis effects, within expected bounds, were required to obtain reasonable response levels that essentially agreed with flight test results. Also in agreement with wind tunnel tests, LCO calculations for the 1/6-scale F-111 model showed that the model should not have experienced LCO.
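    A step forcing function with hysteresis of the kind described behaves like a relay: the force switches on and off at different values of the motion variable, so the state depends on history. A minimal sketch with hypothetical switching thresholds (the abstract gives no explicit formula):

```python
def sites_step_force(alpha, prev_on, on_at, off_at, magnitude):
    """Relay (Schmitt-trigger) model of a step force with hysteresis.

    The force switches on when the motion variable `alpha` exceeds `on_at`
    and only switches off again once it falls below `off_at` (< on_at).
    Threshold names and values are illustrative, not from the paper.
    """
    if prev_on:
        on = alpha > off_at   # stays on until alpha drops below off_at
    else:
        on = alpha > on_at    # turns on only above on_at
    return (magnitude if on else 0.0), on
```

Sweeping `alpha` up and down traces the hysteresis loop: the on and off transitions occur at different values.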

  18. Studies on thermokinetic of Chlorella pyrenoidosa devolatilization via different models.

    PubMed

    Chen, Zhihua; Lei, Jianshen; Li, Yunbei; Su, Xianfa; Hu, Zhiquan; Guo, Dabin

    2017-11-01

    The thermokinetics of Chlorella pyrenoidosa (CP) devolatilization were investigated based on an iso-conversional model and different distributed activation energy models (DAEM). The iso-conversional analysis showed that CP devolatilization roughly follows a single step with mechanism function f(α) = (1-α)^3 and kinetic parameters E0 = 180.5 kJ/mol and A0 = 1.5 × 10^13 s^-1. The Logistic distribution was the most suitable activation-energy distribution function for CP devolatilization. Although the reaction order n = 3.3 was in accordance with the iso-conversional analysis, the Logistic DAEM could not capture the weight-loss features in detail, since it represents the process as a single-step reaction. In contrast, the non-uniform activation-energy distribution in the Miura-Maki DAEM and the non-uniform weight-fraction distribution in the discrete DAEM allowed both models to describe the weight-loss features. Copyright © 2017 Elsevier Ltd. All rights reserved.
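    The reported single-step kinetics can be integrated directly: for a constant heating rate β, dα/dT = (A/β)·exp(-E/RT)·(1-α)^3, with E0 and A0 from the abstract. The heating rate, temperature range and Euler step below are assumed values for illustration only.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def devolatilization_curve(E=180.5e3, A=1.5e13, beta=10.0 / 60.0,
                           T0=400.0, T1=1100.0, dT=0.1):
    """Euler integration of dα/dT = (A/β)·exp(-E/RT)·(1-α)^3.

    E (J/mol) and A (1/s) come from the abstract; the heating rate
    beta (K/s, here 10 K/min) and temperature grid are assumptions.
    Returns a list of (T, α) points.
    """
    alpha, T = 0.0, T0
    curve = []
    while T < T1:
        dadT = (A / beta) * math.exp(-E / (R * T)) * (1.0 - alpha) ** 3
        alpha = min(alpha + dadT * dT, 1.0)  # conversion cannot exceed 1
        T += dT
        curve.append((T, alpha))
    return curve
```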

  19. Quantification of soil water retention parameters using multi-section TDR-waveform analysis

    NASA Astrophysics Data System (ADS)

    Baviskar, S. M.; Heimovaara, T. J.

    2017-06-01

    Soil water retention parameters are important for describing flow in variably saturated soils. TDR is one of the standard methods used for determining water content in soil samples. In this study, we present an approach to estimate the water retention parameters of a sample which is initially saturated and then subjected to incremental decreases in boundary head, causing it to drain in a multi-step fashion. TDR waveforms are measured along the height of the sample, at assumed hydrostatic conditions, at daily intervals. The cumulative discharge outflow drained from the sample is also recorded. The saturated water content is obtained by volumetric analysis after the final step of the multi-step drainage. The equation obtained by coupling the unsaturated parametric function and the apparent dielectric permittivity is fitted with a TDR wave-propagation forward model, and the unsaturated parametric function is used to spatially interpolate the water contents along the TDR probe. The cumulative discharge outflow data are fitted with the cumulative discharge estimated using the unsaturated parametric function, and the weight of water inside the sample at the first and final boundary heads of the multi-step drainage is fitted with the corresponding weights calculated using the unsaturated parametric function. A Bayesian optimization scheme is used to obtain optimized water retention parameters for these different objective functions. This approach can be used for samples with long heights and is especially suitable for characterizing sands with a uniform particle size distribution at low capillary heads.
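    The abstract does not name the unsaturated parametric function; the van Genuchten retention function is the common choice and illustrates the role such a function plays in interpolating water content along the probe (parameter values below are illustrative):

```python
def van_genuchten(h, theta_r, theta_s, alpha, n):
    """van Genuchten water retention: θ(h) for capillary pressure head h.

    h        : positive capillary head (e.g. cm); h <= 0 means saturation
    theta_r  : residual water content
    theta_s  : saturated water content
    alpha, n : shape parameters (n > 1), with m = 1 - 1/n
    """
    if h <= 0.0:
        return theta_s
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m
```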

  20. Case-Deletion Diagnostics for Nonlinear Structural Equation Models

    ERIC Educational Resources Information Center

    Lee, Sik-Yum; Lu, Bin

    2003-01-01

    In this article, a case-deletion procedure is proposed to detect influential observations in a nonlinear structural equation model. The key idea is to develop the diagnostic measures based on the conditional expectation of the complete-data log-likelihood function in the EM algorithm. A one-step pseudo approximation is proposed to reduce the…

  1. Classes in the Balance: Latent Class Analysis and the Balance Scale Task

    ERIC Educational Resources Information Center

    Boom, Jan; ter Laak, Jan

    2007-01-01

    Latent class analysis (LCA) has been successfully applied to tasks measuring higher cognitive functioning, suggesting the existence of distinct strategies used in such tasks. With LCA it became possible to classify post hoc. This important step forward in modeling and analyzing cognitive strategies is relevant to the overlapping waves model for…

  2. Pollution potential leaching index as a tool to assess water leaching risk of arsenic in excavated urban soils.

    PubMed

    Li, Jining; Kosugi, Tomoya; Riya, Shohei; Hashimoto, Yohey; Hou, Hong; Terada, Akihiko; Hosomi, Masaaki

    2018-01-01

    Leaching of hazardous trace elements from excavated urban soils during construction of cities has received considerable attention in recent years in Japan. A new concept, the pollution potential leaching index (PPLI), was applied to assess the risk of arsenic (As) leaching from excavated soils. Sequential leaching tests (SLT) with two liquid-to-solid (L/S) ratios (10 and 20 L kg⁻¹) were conducted to determine the PPLI values, which represent the critical cumulative L/S ratios at which the average As concentrations in the cumulative leachates are reduced to critical values (10 or 5 µg L⁻¹). Two models (a logarithmic function model and an empirical two-site first-order leaching model) were compared to estimate the PPLI values. The fractionations of As before and after SLT were extracted according to a five-step sequential extraction procedure. Ten alkaline excavated soils were obtained from different construction projects in Japan. Although their total As contents were low (from 6.75 to 79.4 mg kg⁻¹), the As leaching was not negligible. Different L/S ratios at each step of the SLT had little influence on the cumulative As release or PPLI values. Experimentally determined PPLI values were in agreement with those from model estimations. A five-step SLT with an L/S of 10 L kg⁻¹ at each step, combined with logarithmic function fitting, was suggested for easy estimation of PPLI. Results of the sequential extraction procedure showed that large portions of the more labile As fractions (non-specifically and specifically sorbed fractions) were removed during long-term leaching, as were small, but non-negligible, portions of strongly bound As fractions. Copyright © 2017 Elsevier Inc. All rights reserved.
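    The logarithmic function model and the PPLI definition can be sketched as follows: fit cumulative release M = a + b·ln(L/S) by least squares, then find the cumulative L/S ratio at which the average leachate concentration M/(L/S) drops to the critical value. Function names, the bisection bounds and the synthetic parameters are illustrative, not the paper's.

```python
import math

def fit_log_model(ls_ratios, cum_release):
    """Least-squares fit of cumulative release M = a + b * ln(L/S)."""
    xs = [math.log(r) for r in ls_ratios]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(cum_release) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, cum_release))
         / sum((x - xbar) ** 2 for x in xs))
    return ybar - b * xbar, b  # (a, b)

def ppli(a, b, c_crit, ls_max=1000.0):
    """Cumulative L/S at which M(L/S)/(L/S) falls to c_crit (bisection).

    Assumes the average concentration is above c_crit at L/S = 1 and
    below it at ls_max, which holds for the logarithmic model.
    """
    f = lambda r: (a + b * math.log(r)) / r - c_crit
    lo, hi = 1.0, ls_max
    if f(lo) <= 0.0:
        return lo
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return hi
```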

  3. Model-based Utility Functions

    NASA Astrophysics Data System (ADS)

    Hibbard, Bill

    2012-05-01

    Orseau and Ring, as well as Dewey, have recently described problems, including self-delusion, with the behavior of agents using various definitions of utility functions. An agent's utility function is defined in terms of the agent's history of interactions with its environment. This paper argues, via two examples, that these behavior problems can be avoided by formulating the utility function in two steps: 1) inferring a model of the environment from interactions, and 2) computing utility as a function of the environment model. Basing a utility function on a model that the agent must learn implies that the utility function must initially be expressed in terms of specifications to be matched to structures in the learned model. These specifications constitute prior assumptions about the environment, so this approach will not work with arbitrary environments, but it should work for agents designed by humans to act in the physical world. The paper also addresses the issue of self-modifying agents and shows that, under some common assumptions, agents provided with the ability to modify their utility functions will choose not to do so.

  4. Combined TGA-MS kinetic analysis of multistep processes. Thermal decomposition and ceramification of polysilazane and polysiloxane preceramic polymers.

    PubMed

    García-Garrido, C; Sánchez-Jiménez, P E; Pérez-Maqueda, L A; Perejón, A; Criado, José M

    2016-10-26

    The polymer-to-ceramic transformation kinetics of two widely employed ceramic precursors, 1,3,5,7-tetramethyl-1,3,5,7-tetravinylcyclotetrasiloxane (TTCS) and polyureamethylvinylsilazane (CERASET), have been investigated using coupled thermogravimetry and mass spectrometry (TG-MS), Raman, XRD and FTIR. The thermally induced decomposition of the pre-ceramic polymer is the critical step in the synthesis of polymer-derived ceramics (PDCs), and accurate kinetic modeling is key to attaining a complete understanding of the underlying process and to attempting any behavior predictions. However, obtaining a precise kinetic description of processes of such complexity, consisting of several largely overlapping physico-chemical processes comprising the cleavage of the starting polymeric network and the release of organic moieties, is extremely difficult. Here, by using the evolved gases detected by MS as a guide, it has been possible to determine the number of steps that compose the overall process, which was subsequently resolved using a semiempirical deconvolution method based on the Fraser-Suzuki function. Such a function is more appropriate than the more usual Gaussian or Lorentzian functions since it takes into account the intrinsic asymmetry of kinetic curves. The kinetic parameters of each constituent step were then independently determined using both model-free and model-fitting procedures, and it was found that the processes mostly obey diffusion models, which can be attributed to the diffusion of the released gases through the solid matrix. The validity of the obtained kinetic parameters was tested not only by the successful reconstruction of the original experimental curves, but also by predicting the kinetic curves of the overall process under different thermal schedules and for a mixed TTCS-CERASET precursor.
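    The Fraser-Suzuki function is an asymmetric peak that reduces to a Gaussian as the asymmetry parameter goes to zero; a sketch of the common four-parameter form (parameterization conventions vary between papers):

```python
import math

def fraser_suzuki(x, h, pos, width, asym):
    """Fraser-Suzuki asymmetric peak.

    h     : peak height, attained at x = pos
    width : approximate full width at half maximum (exact as asym -> 0)
    asym  : asymmetry parameter; the peak is zero outside its support,
            i.e. where 1 + 2*asym*(x - pos)/width <= 0.
    """
    arg = 1.0 + 2.0 * asym * (x - pos) / width
    if arg <= 0.0:
        return 0.0
    return h * math.exp(-math.log(2.0) * (math.log(arg) / asym) ** 2)
```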

  5. Bifurcation analysis of a discrete-time ratio-dependent predator-prey model with Allee Effect

    NASA Astrophysics Data System (ADS)

    Cheng, Lifang; Cao, Hongjun

    2016-09-01

    A discrete-time predator-prey model with Allee effect is investigated in this paper. We consider both the strong and the weak Allee effect (the population growth rate is negative or positive, respectively, at low population density). From the stability analysis and the bifurcation diagrams, we find that the model with an Allee effect (strong or weak) growth function and the model with a logistic growth function have somewhat similar bifurcation structures. If the predator growth rate is smaller than its death rate, the two species cannot coexist, as there are no interior fixed points. When the predator growth rate is greater than its death rate and the other parameters are fixed, the model can have two interior fixed points: one is always unstable, and the stability of the other is determined by the integral step size, which to some extent decides whether the species coexist. If the integral step size is increased, period-doubled orbits or invariant circles may arise, so the numbers of prey and predator deviate from one stable state and then circulate along periodic or quasi-periodic orbits. When the integral step size is increased to a critical value, chaotic orbits may appear with many interspersed period windows, meaning that the numbers of prey and predator become chaotic. From the bifurcation diagrams and phase portraits, we see that the degree of complexity of the model with a strong Allee effect decreases, which is related to the fact that the persistence of the species can be determined by the initial species densities.
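    The role of the integral step size can be sketched with an Euler-discretized ratio-dependent predator-prey map carrying a strong Allee term in the prey growth. The equations and parameter values below are illustrative; the paper's exact model may differ.

```python
def step(x, y, delta, r=2.0, m=0.1, a=1.0, e=0.8, d=0.5):
    """One iteration of an Euler-discretized ratio-dependent
    predator-prey map with a strong Allee effect in the prey.

    x, y  : prey and predator densities
    delta : integral step size (the bifurcation parameter)
    r, m  : prey growth rate and Allee threshold (growth < 0 for x < m)
    a, e, d : attack rate, conversion efficiency, predator death rate
    All parameter values are illustrative, not taken from the paper.
    """
    ratio = x * y / (x + y)                    # ratio-dependent response
    fx = r * x * (1.0 - x) * (x - m) - a * ratio
    fy = e * a * ratio - d * y
    return x + delta * fx, y + delta * fy
```

Iterating with a small `delta` from a suitable initial state approaches the stable interior fixed point; increasing `delta` is what triggers the period-doubling and quasi-periodic behavior described above.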

  6. Beyond signal functions in global obstetric care: Using a clinical cascade to measure emergency obstetric readiness

    PubMed Central

    Dettinger, Julia; Calkins, Kimberly; Kibore, Minnie; Gachuno, Onesmus; Walker, Dilys

    2018-01-01

    Background Globally, the rate of reduction in delivery-associated maternal and perinatal mortality has been slow compared to improvements in post-delivery mortality in children under five. Improving clinical readiness for basic obstetric emergencies is crucial for reducing facility-based maternal deaths. Emergency readiness is commonly assessed using tracers derived from the maternal signal functions model. Objective-method We compare emergency readiness using the signal functions model and a novel clinical cascade. The cascades model readiness as the proportion of facilities with the resources to identify the emergency (stage 1), treat it (stage 2) and monitor-modify therapy (stage 3). Data were collected from 44 Kenyan clinics as part of an implementation trial. Findings Although most facilities (77.0%) stock maternal signal function tracer drugs, far fewer have the resources to practically identify and treat emergencies. In hypertensive emergencies, for example, 38.6% of facilities have the resources to identify the emergency (stage 1 readiness: sphygmomanometer, stethoscope, urine collection device, protein test), and only 6.8% have the resources to treat it (stage 2: consumables (IV kit, fluids), durable goods (IV pole) and drugs (magnesium sulfate and hydralazine)). No facilities could monitor or modify therapy (stage 3). Across five maternal emergencies, the signal functions overestimate readiness by 54.5%. A consistent, step-wise pattern of readiness loss across signal functions and care stages emerged and was remarkably consistent at 33.0%. Significance Comparing estimates from the maternal signal functions and cascades illustrates four themes. First, signal functions overestimate practical readiness by 55%. Second, the cascade's intuitive indicators can support cross-sector health system or program planners to more precisely measure and improve emergency care.
Third, adding a few variables to existing readiness inventories permits step-wise modeling of readiness loss and can inform more precise interventions. Fourth, the novel aggregate readiness-loss indicator provides an innovative and intuitive approach for modeling health system emergency readiness. Additional testing in diverse contexts is warranted. PMID:29474397
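    The cascade logic, in which a facility counts as ready at a stage only if it also passes every earlier stage, can be sketched directly (the facility records and field names below are hypothetical):

```python
def cascade_readiness(facilities):
    """Proportion of facilities ready at each cascade stage.

    facilities: list of dicts with booleans 'identify', 'treat', 'monitor'.
    A facility counts at stage k only if it passes all earlier stages,
    so readiness can only fall (or stay flat) down the cascade.
    """
    n = len(facilities)
    s1 = sum(f['identify'] for f in facilities)
    s2 = sum(f['identify'] and f['treat'] for f in facilities)
    s3 = sum(f['identify'] and f['treat'] and f['monitor'] for f in facilities)
    return s1 / n, s2 / n, s3 / n
```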

  7. Leaching of biocides from building facades: Upscaling of a local two-region leaching model to the city scale

    NASA Astrophysics Data System (ADS)

    Coutu, S.; Rota, C.; Rossi, L.; Barry, D. A.

    2011-12-01

    Facades are protected by paints that contain biocides as protection against degradation. These biocides are leached by rainfall (albeit at low concentrations). At the city scale, however, the surface area of building facades is significant, and leached biocides are a potential environmental risk to receiving waters. A city-scale biocide-leaching model was developed based on two main steps. In the first step, laboratory experiments on a single facade were used to calibrate and validate a 1D, two-region phenomenological model of biocide leaching. The same data set was analyzed independently by another research group who found empirically that biocide leachate breakthrough curves were well represented by a sum of two exponentials. Interestingly, the two-region model was found analytically to reproduce this functional form as a special case. The second step in the method is site-specific, and involves upscaling the validated single facade model to a particular city. In this step, (i) GIS-based estimates of facade heights and areas are deduced using the city's cadastral data, (ii) facade flow is estimated using local meteorological data (rainfall, wind direction) and (iii) paint application rates are modeled as a stochastic process based on manufacturers' recommendations. The methodology was applied to Lausanne, Switzerland, a city of about 200,000 inhabitants. Approximately 30% of the annually applied mass of biocides was estimated to be released to the environment.
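    The empirical finding that leachate breakthrough curves are well represented by a sum of two exponentials, which the two-region model reproduces as a special case, can be written directly (parameter values are illustrative):

```python
import math

def leachate_concentration(t, c1, k1, c2, k2):
    """Two-region breakthrough curve: c(t) = c1*exp(-k1*t) + c2*exp(-k2*t).

    One term represents a fast-release region, the other a slow-release
    region; c1, c2 are initial contributions and k1, k2 decay rates.
    """
    return c1 * math.exp(-k1 * t) + c2 * math.exp(-k2 * t)
```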

  8. Pointwise influence matrices for functional-response regression.

    PubMed

    Reiss, Philip T; Huang, Lei; Wu, Pei-Shien; Chen, Huaihou; Colcombe, Stan

    2017-12-01

    We extend the notion of an influence or hat matrix to regression with functional responses and scalar predictors. For responses depending linearly on a set of predictors, our definition is shown to reduce to the conventional influence matrix for linear models. The pointwise degrees of freedom, the trace of the pointwise influence matrix, are shown to have an adaptivity property that motivates a two-step bivariate smoother for modeling nonlinear dependence on a single predictor. This procedure adapts to varying complexity of the nonlinear model at different locations along the function, and thereby achieves better performance than competing tensor product smoothers in an analysis of the development of white matter microstructure in the brain. © 2017, The International Biometric Society.
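    For an ordinary linear model the influence (hat) matrix has a closed form and its trace equals the number of fitted coefficients, which is the degrees-of-freedom property the pointwise extension builds on. A sketch for simple linear regression, where the leverages h_ii are the diagonal of H:

```python
def hat_diagonal(xs):
    """Leverages h_ii for simple linear regression y = b0 + b1*x.

    h_ii = 1/n + (x_i - xbar)^2 / sum_j (x_j - xbar)^2, and
    trace(H) = sum_i h_ii = 2, the degrees of freedom of the smoother.
    """
    n = len(xs)
    xbar = sum(xs) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    return [1.0 / n + (x - xbar) ** 2 / sxx for x in xs]
```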

  9. The motivation for drug abuse treatment: testing cognitive and 12-step theories.

    PubMed

    Bell, D C; Montoya, I D; Richard, A J; Dayton, C A

    1998-11-01

    The purpose of this paper is to evaluate two models of behavior change: cognitive theory and 12-step theory. Research subjects were drawn from three separate, but parallel, samples of adults. The first sample consisted of out-of-treatment chronic drug users, the second of drug users who had applied for treatment at a publicly funded multiple-provider drug treatment facility, and the third of drug users who had applied for treatment at an intensive outpatient program for crack cocaine users. Cognitive theory was supported: study participants applying for drug abuse treatment reported a higher level of perceived problem severity and a higher level of cognitive functioning than out-of-treatment drug users. Two hypotheses drawn from 12-step theory were not supported: treatment applicants had more positive emotional functioning than out-of-treatment drug users, and one treatment-seeking sample had higher self-esteem.

  10. Biomass-to-electricity: analysis and optimization of the complete pathway steam explosion--enzymatic hydrolysis--anaerobic digestion with ICE vs SOFC as biogas users.

    PubMed

    Santarelli, M; Barra, S; Sagnelli, F; Zitella, P

    2012-11-01

    The paper deals with the energy analysis and optimization of a complete biomass-to-electricity pathway, from raw biomass to the production of renewable electricity. The first step (biomass-to-biogas) is based on a real pilot plant located in Environment Park S.p.A. (Torino, Italy) with three main stages ((1) impregnation; (2) steam explosion; (3) enzymatic hydrolysis), completed by a two-step anaerobic fermentation. For the second step (biogas-to-electricity), the paper considers two technologies: internal combustion engines (ICE) and a stack of solid oxide fuel cells (SOFC). First, the complete pathway was modeled and validated against experimental data. The model was then used for an analysis and optimization of the complete thermo-chemical and biological process, with the objective of maximizing the energy balance at minimum consumption. The comparison between ICE and SOFC shows the better performance of the integrated plants based on SOFC. Copyright © 2012 Elsevier Ltd. All rights reserved.

  11. HIA, the next step: Defining models and roles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Putters, Kim

    If HIA is to be an effective instrument for optimising health interests in the policy making process, it has to recognise the different contexts in which policy is made and the relevance of both technical rationality and political rationality. Policy making may adopt a rational perspective, in which there is a systematic and orderly progression from problem formulation to solution, or a network perspective, in which there are multiple interdependencies, extensive negotiation and compromise, and the steps from problem formulation to solution are not followed sequentially or in any particular order. Policy problems may be simple, with clear causal pathways and responsibilities, or complex, with unclear causal pathways and disputed responsibilities. Network analysis is required to show which stakeholders are involved, their support for health issues and the degree of consensus. From this analysis three models of HIA emerge. The first is the phases model, which is fitted to simple problems and a rational perspective of policymaking; it involves following structured steps. The second is the rounds (Echternach) model, fitted to complex problems and a network perspective of policymaking; it is dynamic and concentrates on network solutions, taking the steps in no particular order. The final model is the 'garbage can' model, fitted to contexts which combine simple and complex problems; here HIA functions as a problem solver and signpost, keeping all possible solutions and stakeholders in play and allowing solutions to emerge over time. HIA models should be the beginning rather than the conclusion of discussion between the worlds of HIA and policymaking.

  12. Comparative Analysis of Models of the Earth's Gravity: 3. Accuracy of Predicting EAS Motion

    NASA Astrophysics Data System (ADS)

    Kuznetsov, E. D.; Berland, V. E.; Wiebe, Yu. S.; Glamazda, D. V.; Kajzer, G. T.; Kolesnikov, V. I.; Khremli, G. P.

    2002-05-01

    This paper continues a comparative analysis of modern satellite models of the Earth's gravity which we started in [6, 7]. In the cited works, the uniform norms of spherical functions were compared with their gradients for individual harmonics of the geopotential expansion [6], and the potential differences were compared with the gravitational accelerations obtained in various models of the Earth's gravity [7]. In practice, it is important to know how consistently the EAS motion is represented by various geopotential models. Unless otherwise stated, a model version was used in which the equations of motion are written using the classical Encke scheme and integrated together with the variational equations by Everhart's implicit one-step algorithm [1]. When calculating coordinates and velocities within the integration step (at given instants of time), the approximate Everhart formula was employed.

  13. Developing the snow component of a distributed hydrological model: a step-wise approach based on multi-objective analysis

    NASA Astrophysics Data System (ADS)

    Dunn, S. M.; Colohan, R. J. E.

    1999-09-01

    A snow component has been developed for the distributed hydrological model, DIY, using an approach that sequentially evaluates the behaviour of different functions as they are implemented in the model. The evaluation is performed using multi-objective functions to ensure that the internal structure of the model is correct. The development of the model, using a sub-catchment in the Cairngorm Mountains in Scotland, demonstrated that the degree-day model can be enhanced for hydroclimatic conditions typical of those found in Scotland, without increasing meteorological data requirements. An important element of the snow model is a function to account for wind re-distribution. This causes large accumulations of snow in small pockets, which are shown to be important in sustaining baseflows in the rivers during the late spring and early summer, long after the snowpack has melted from the bulk of the catchment. The importance of the wind function would not have been identified using a single objective function of total streamflow to evaluate the model behaviour.
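    The degree-day model at the core of such snow components computes daily melt as a linear function of air temperature above a base value, removed from the snow-water equivalent of the pack. A minimal sketch without the wind re-distribution function discussed above (the DDF value is illustrative):

```python
def degree_day_melt(temps, ddf=4.0, t_base=0.0, swe0=100.0):
    """Daily degree-day snowmelt: M = DDF * max(T - T_base, 0).

    temps  : daily mean air temperatures (deg C)
    ddf    : degree-day factor, mm per deg C per day (illustrative value)
    t_base : threshold temperature for melt onset
    swe0   : initial snow-water equivalent (mm)
    Returns the remaining SWE and the daily melt series; melt is capped
    by the snow actually available.
    """
    swe, melt_series = swe0, []
    for t in temps:
        melt = min(ddf * max(t - t_base, 0.0), swe)
        swe -= melt
        melt_series.append(melt)
    return swe, melt_series
```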

  14. CO2 induced phase transitions in diamine-appended metal–organic frameworks

    PubMed Central

    Vlaisavljevich, Bess; Odoh, Samuel O.; Schnell, Sondre K.; Dzubak, Allison L.; Lee, Kyuho; Planas, Nora; Neaton, Jeffrey B.

    2015-01-01

    Using a combination of density functional theory and lattice models, we study the effect of CO2 adsorption in an amine-functionalized metal–organic framework. These materials exhibit a step in the adsorption isotherm indicative of a phase change. The pressure at which this step occurs is not only temperature dependent but also metal-center dependent; likewise, the heats of adsorption vary depending on the metal center. Herein we demonstrate via quantum chemical calculations that the amines should not be considered firmly anchored to the framework, and we explore the mechanism for CO2 adsorption: an ammonium carbamate species is formed via the insertion of CO2 into the M–Namine bonds. Furthermore, we translate the quantum chemical results into isotherms using a coarse-grained Monte Carlo simulation technique and show that this adsorption mechanism can explain the characteristic step observed in the experimental isotherm, while a previously proposed mechanism cannot. Metal analogues have also been explored, and the CO2 binding energies show a strong metal dependence corresponding to the M–Namine bond strength; we show that this difference can be exploited to tune the pressure at which the step in the isotherm occurs. Additionally, the mmen–Ni2(dobpdc) framework shows Langmuir-like behavior, and our simulations show how this can be explained by competitive adsorption between the new model and a previously proposed model. PMID:28717499
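    The contrast between a stepped and a Langmuir-like isotherm can be illustrated with simple phenomenological closed forms; these are illustrative sketches, not the paper's lattice-model isotherms.

```python
def step_isotherm(p, q_sat, p_step, sharpness):
    """Phenomenological step isotherm (Hill-type form).

    Loading rises sharply around p_step; larger `sharpness` gives a
    steeper step. q(p_step) = q_sat / 2 by construction.
    """
    return q_sat / (1.0 + (p_step / p) ** sharpness)

def langmuir(p, q_sat, k):
    """Langmuir isotherm for comparison: q = q_sat * k*p / (1 + k*p)."""
    return q_sat * k * p / (1.0 + k * p)
```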

  15. Characterization of Mo/Si multilayer growth on stepped topographies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boogaard, A. J. R. van den; Louis, E.; Zoethout, E.

    2011-08-31

    Mo/Si multilayer mirrors with nanoscale bilayer thicknesses have been deposited on stepped substrate topographies, using various deposition angles. The multilayer morphology at the step-edge region was studied by cross-section transmission electron microscopy. A transition from a continuous to a columnar layer morphology is observed near the step-edge, as a function of the local angle of incidence of the deposition flux. Taking into account the corresponding kinetics and anisotropy in layer growth, a continuum model has been developed to give a detailed description of the height profiles of the individual continuous layers. Complementary optical characterization of the multilayer system, using a microscope operating in the extreme ultraviolet wavelength range, revealed that the influence of the step-edge on the planar multilayer structure is restricted to a region within 300 nm of the step-edge.

  16. Single-step electrochemical functionalization of double-walled carbon nanotube (DWCNT) membranes and the demonstration of ionic rectification

    PubMed Central

    2013-01-01

    Carbon nanotube (CNT) membranes allow the mimicking of natural ion channels for applications in drug delivery and chemical separation. Double-walled carbon nanotube membranes were functionalized with dye in a single step instead of the previous two-step functionalization. Non-faradaic electrochemical impedance spectra indicated that the gatekeeper attached by single-step modification can be actuated under bias to mimic a protein channel. This functional chemistry was demonstrated by highly efficient ion rectification, with the highest experimental rectification factor for ferricyanide reaching 14.4. One-step functionalization by electrooxidation of an amine provides a simple and promising functionalization chemistry for the application of CNT membranes. PMID:23758999

  17. Multiple sclerosis severity and concern about falling: Physical, cognitive and psychological mediating factors.

    PubMed

    van Vliet, Rob; Hoang, Phu; Lord, Stephen; Gandevia, Simon; Delbaere, Kim

    2015-01-01

    Concern about falling can have devastating physical and psychological consequences in people with multiple sclerosis (MS). However, little is known about the physical and cognitive determinants of increased concern about falling in this group. To investigate direct and indirect relationships between MS severity and concern about falling using structural equation modelling (SEM). Two hundred and ten community-dwelling people (21-73 years) with MS Disease Steps 0-5 completed several physical, cognitive and psychological assessments. Concern about falling was assessed using the Falls Efficacy Scale-International. Concern about falling was significantly associated with MS Disease Step as well as balance, muscle strength, disability, previous falls, and executive functioning. SEM revealed a strong direct path between MS Disease Step and concern about falling (r = 0.31, p < 0.01), as well as indirect paths explained by impaired physical ability (r = 0.25, p < 0.01) and reduced cognitive function (r = 0.13, p < 0.01). The final model explained 51% of the variance of concern about falling in people with MS and had an excellent goodness-of-fit. The relationship between MS severity and increased concern about falling was primarily mediated by reduced physical ability (especially where this resulted in disability and falls) and less so by executive functioning. This suggests people with MS have a realistic appraisal of their concern about falling.

  18. Design, development, and application of LANDIS-II, a spatial landscape simulation model with flexible temporal and spatial resolution

    Treesearch

    Robert M. Scheller; James B. Domingo; Brian R. Sturtevant; Jeremy S. Williams; Arnold Rudy; Eric J. Gustafson; David J. Mladenoff

    2007-01-01

    We introduce LANDIS-II, a landscape model designed to simulate forest succession and disturbances. LANDIS-II builds upon and preserves the functionality of previous LANDIS forest landscape simulation models. LANDIS-II is distinguished by the inclusion of variable time steps for different ecological processes; our use of a rigorous development and testing process used...

  19. Molecular underpinnings of neurodegenerative disorders: striatal-enriched protein tyrosine phosphatase signaling and synaptic plasticity

    PubMed Central

    Lombroso, Paul J.; Ogren, Marilee; Kurup, Pradeep; Nairn, Angus C.

    2016-01-01

    This commentary focuses on potential molecular mechanisms related to the dysfunctional synaptic plasticity that is associated with neurodegenerative disorders such as Alzheimer’s disease and Parkinson’s disease. Specifically, we focus on the role of striatal-enriched protein tyrosine phosphatase (STEP) in modulating synaptic function in these illnesses. STEP affects neuronal communication by opposing synaptic strengthening and does so by dephosphorylating several key substrates known to control synaptic signaling and plasticity. STEP levels are elevated in brains from patients with Alzheimer’s and Parkinson’s disease. Studies in model systems have found that high levels of STEP result in internalization of glutamate receptors as well as inactivation of ERK1/2, Fyn, Pyk2, and other STEP substrates necessary for the development of synaptic strengthening. We discuss the search for inhibitors of STEP activity that may offer potential treatments for neurocognitive disorders that are characterized by increased STEP activity. Future studies are needed to examine the mechanisms of differential and region-specific changes in STEP expression pattern, as such knowledge could lead to targeted therapies for disorders involving disrupted STEP activity. PMID:29098072

  20. The use of copulas to practical estimation of multivariate stochastic differential equation mixed effects models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rupšys, P.

A system of stochastic differential equations (SDEs) with mixed-effects parameters and a multivariate normal copula density function was used to develop a tree height model for Scots pine trees in Lithuania. A two-step maximum likelihood parameter estimation method is used and computational guidelines are given. After fitting the conditional probability density functions to outside-bark diameter at breast height and total tree height, a bivariate normal copula distribution model was constructed. Predictions from the mixed-effects parameters SDE tree height model calculated during this research were compared to regression tree height equations. The results are implemented in the symbolic computational language MAPLE.
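The bivariate normal (Gaussian) copula density used to couple the diameter and height marginals has a closed form that can be evaluated directly. A minimal stdlib sketch, assuming a known correlation parameter rho (the function name and the example values are illustrative, not from the study):

```python
import math
from statistics import NormalDist

def gaussian_copula_density(u, v, rho):
    """Density of the bivariate Gaussian copula with correlation rho,
    evaluated at (u, v) in (0, 1)^2."""
    nd = NormalDist()
    x, y = nd.inv_cdf(u), nd.inv_cdf(v)  # map uniforms to normal scores
    r2 = rho * rho
    return (1.0 / math.sqrt(1.0 - r2)) * math.exp(
        -(r2 * (x * x + y * y) - 2.0 * rho * x * y) / (2.0 * (1.0 - r2))
    )

# Independence (rho = 0) gives density 1 everywhere on the unit square.
print(gaussian_copula_density(0.3, 0.7, 0.0))  # → 1.0
```

In the two-step approach described above, the marginals (here, the diameter and height distributions) are fitted first, and only then is rho estimated on the probability-transformed data.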

  1. A unifying framework for quantifying the nature of animal interactions.

    PubMed

    Potts, Jonathan R; Mokross, Karl; Lewis, Mark A

    2014-07-06

    Collective phenomena, whereby agent-agent interactions determine spatial patterns, are ubiquitous in the animal kingdom. On the other hand, movement and space use are also greatly influenced by the interactions between animals and their environment. Despite both types of interaction fundamentally influencing animal behaviour, there has hitherto been no unifying framework for the models proposed in both areas. Here, we construct a general method for inferring population-level spatial patterns from underlying individual movement and interaction processes, a key ingredient in building a statistical mechanics for ecological systems. We show that resource selection functions, as well as several examples of collective motion models, arise as special cases of our framework, thus bringing together resource selection analysis and collective animal behaviour into a single theory. In particular, we focus on combining the various mechanistic models of territorial interactions in the literature with step selection functions, by incorporating interactions into the step selection framework and demonstrating how to derive territorial patterns from the resulting models. We demonstrate the efficacy of our model by application to a population of insectivore birds in the Amazon rainforest. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  2. Ultrasonic inspection of rocket fuel model using laminated transducer and multi-channel step pulser

    NASA Astrophysics Data System (ADS)

    Mihara, T.; Hamajima, T.; Tashiro, H.; Sato, A.

    2013-01-01

For ultrasonic inspection of the solid-fuel packing in a rocket booster, industrial inspection is difficult because the signal-to-noise ratio is degraded by large attenuation, even when lower-frequency ultrasound is used. To address this problem, we applied two techniques: a step-function pulser system with super-wideband frequency properties, and a laminated-element transducer. By combining these two techniques, we developed a new ultrasonic measurement system and demonstrated its advantages in the ultrasonic inspection of a rocket fuel model specimen.

  3. Matrix models and stochastic growth in Donaldson-Thomas theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Szabo, Richard J.; Tierz, Miguel; Departamento de Analisis Matematico, Facultad de Ciencias Matematicas, Universidad Complutense de Madrid, Plaza de Ciencias 3, 28040 Madrid

We show that the partition functions which enumerate Donaldson-Thomas invariants of local toric Calabi-Yau threefolds without compact divisors can be expressed in terms of specializations of the Schur measure. We also discuss the relevance of the Hall-Littlewood and Jack measures in the context of BPS state counting and study the partition functions at arbitrary points of the Kaehler moduli space. This rewriting in terms of symmetric functions leads to a unitary one-matrix model representation for Donaldson-Thomas theory. We describe explicitly how this result is related to the unitary matrix model description of Chern-Simons gauge theory. This representation is used to show that the generating functions for Donaldson-Thomas invariants are related to tau-functions of the integrable Toda and Toeplitz lattice hierarchies. The matrix model also leads to an interpretation of Donaldson-Thomas theory in terms of non-intersecting paths in the lock-step model of vicious walkers. We further show that these generating functions can be interpreted as normalization constants of a corner growth/last-passage stochastic model.

  4. Validation of the Activities of Community Transportation model for individuals with cognitive impairments.

    PubMed

    Sohlberg, McKay Moore; Fickas, Stephen; Lemoncello, Rik; Hung, Pei-Fang

    2009-01-01

    To develop a theoretical, functional model of community navigation for individuals with cognitive impairments: the Activities of Community Transportation (ACTs). Iterative design using qualitative methods (i.e. document review, focus groups and observations). Four agencies providing travel training to adults with cognitive impairments in the USA participated in the validation study. A thorough document review and series of focus groups led to the development of a comprehensive model (ACTs Wheels) delineating the requisite steps and skills for community navigation. The model was validated and updated based on observations of 395 actual trips by travellers with navigational challenges from the four participating agencies. Results revealed that the 'ACTs Wheel' models were complete and comprehensive. The 'ACTs Wheels' represent a comprehensive model of the steps needed to navigate to destinations using paratransit and fixed-route public transportation systems for travellers with cognitive impairments. Suggestions are made for future investigations of community transportation for this population.

  5. Modeling the pressure inactivation of Escherichia coli and Salmonella typhimurium in sapote mamey ( Pouteria sapota (Jacq.) H.E. Moore & Stearn) pulp.

    PubMed

    Saucedo-Reyes, Daniela; Carrillo-Salazar, José A; Román-Padilla, Lizbeth; Saucedo-Veloz, Crescenciano; Reyes-Santamaría, María I; Ramírez-Gilly, Mariana; Tecante, Alberto

    2018-03-01

High hydrostatic pressure inactivation kinetics of Escherichia coli ATCC 25922 and Salmonella enterica subsp. enterica serovar Typhimurium ATCC 14028 (S. typhimurium) in a low-acid mamey pulp were obtained at four pressure levels (300, 350, 400, and 450 MPa), different exposure times (0-8 min), and a temperature of 25 ± 2℃. Survival curves showed deviations from linearity in the form of a tail (upward concavity). The primary models tested were the Weibull model, the modified Gompertz equation, and the biphasic model. The Weibull model gave the best goodness of fit (adjusted R² > 0.956, root mean square error < 0.290) and the lowest Akaike information criterion value. Exponential-logistic and exponential decay models, and a Bigelow-type and an empirical model for the b′(P) and n(P) parameters, respectively, were tested as alternative secondary models. The process validation considered two- and one-step nonlinear regressions for predicting the survival fraction; both regression types provided an adequate goodness of fit, and the one-step nonlinear regression clearly reduced fitting errors. The best candidate model according to Akaike information theory, with better accuracy and more reliable predictions, was the Weibull model integrated with the exponential-logistic and exponential decay secondary models as functions of time and pressure (two-step procedure) or incorporated as one equation (one-step procedure). Both mathematical expressions were used to determine the t_d parameter; taking d = 5 (t_5) as the criterion of a 5-log10 reduction (5D), the desired reductions in both microorganisms are attainable at 400 MPa in 5.487 ± 0.488 or 5.950 ± 0.329 min for the one- and two-step nonlinear procedures, respectively.
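The Weibull primary model with tailing and the time to a d-log reduction can be sketched in a few lines. The parameter values b and n below are illustrative placeholders, not the study's fitted values:

```python
def weibull_log10_survival(t, b, n):
    """Weibull primary model: log10(N/N0) = -b * t**n (tailing when n < 1)."""
    return -b * t ** n

def time_to_dlog(b, n, d=5.0):
    """Exposure time giving a d-log10 reduction: solve b * t**n = d for t."""
    return (d / b) ** (1.0 / n)

# Hypothetical parameters for illustration only:
b, n = 1.8, 0.6
t5 = time_to_dlog(b, n)          # time to a 5-log10 (5D) reduction
print(round(t5, 3))
```

Inverting the primary model this way is the one-equation route to t_5; the study's two-step procedure additionally makes b and n functions of pressure through the secondary models.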

  6. The Effects of Training and Performance Feedback during Behavioral Consultation on General Education Middle School Teachers' Integrity to Functional Analysis Procedures

    ERIC Educational Resources Information Center

    McKenney, Elizabeth L. W.; Waldron, Nancy; Conroy, Maureen

    2013-01-01

    This study describes the integrity with which 3 general education middle school teachers implemented functional analyses (FA) of appropriate behavior for students who typically engaged in disruption. A 4-step model consistent with behavioral consultation was used to support the assessment process. All analyses were conducted during ongoing…

  7. The trust-region self-consistent field method in Kohn-Sham density-functional theory.

    PubMed

    Thøgersen, Lea; Olsen, Jeppe; Köhn, Andreas; Jørgensen, Poul; Sałek, Paweł; Helgaker, Trygve

    2005-08-15

    The trust-region self-consistent field (TRSCF) method is extended to the optimization of the Kohn-Sham energy. In the TRSCF method, both the Roothaan-Hall step and the density-subspace minimization step are replaced by trust-region optimizations of local approximations to the Kohn-Sham energy, leading to a controlled, monotonic convergence towards the optimized energy. Previously the TRSCF method has been developed for optimization of the Hartree-Fock energy, which is a simple quadratic function in the density matrix. However, since the Kohn-Sham energy is a nonquadratic function of the density matrix, the local energy functions must be generalized for use with the Kohn-Sham model. Such a generalization, which contains the Hartree-Fock model as a special case, is presented here. For comparison, a rederivation of the popular direct inversion in the iterative subspace (DIIS) algorithm is performed, demonstrating that the DIIS method may be viewed as a quasi-Newton method, explaining its fast local convergence. In the global region the convergence behavior of DIIS is less predictable. The related energy DIIS technique is also discussed and shown to be inappropriate for the optimization of the Kohn-Sham energy.
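The DIIS extrapolation rederived above reduces to a standard constrained least-squares problem: choose coefficients summing to one that minimize the norm of the combined error vectors, solved via a Lagrange-multiplier linear system. A minimal numpy sketch (the function name is illustrative; this is the generic DIIS coefficient solve, not the paper's TRSCF implementation):

```python
import numpy as np

def diis_coefficients(errors):
    """Solve min ||sum_i c_i e_i||^2 subject to sum_i c_i = 1
    via the bordered B-matrix system B_ij = <e_i, e_j>."""
    m = len(errors)
    B = np.empty((m + 1, m + 1))
    for i, ei in enumerate(errors):
        for j, ej in enumerate(errors):
            B[i, j] = ei @ ej
    B[m, :m] = B[:m, m] = -1.0   # constraint rows (Lagrange multiplier)
    B[m, m] = 0.0
    rhs = np.zeros(m + 1)
    rhs[m] = -1.0
    return np.linalg.solve(B, rhs)[:m]

# Two opposite error vectors mix 50/50 so the combined error cancels.
c = diis_coefficients([np.array([1.0, 0.0]), np.array([-1.0, 0.0])])
print(c)
```

The quasi-Newton character noted in the abstract comes from applying these coefficients to the stored Fock/Kohn-Sham matrices, not just the errors.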

  8. Combining Approach in Stages with Least Squares for fits of data in hyperelasticity

    NASA Astrophysics Data System (ADS)

    Beda, Tibi

    2006-10-01

The present work concerns a method of piecewise continuous approximation of a continuous function, combining the Approach in Stages with least squares over finite sub-domains. An identification procedure operates by sub-domains: the basic generating functions are determined step by step, so that the weighting effect of each can be assessed. This procedure gives control over the signs, and to some extent the optimal values, of the estimated parameters, and consequently provides a unique set of solutions that should represent the real physical parameters. Illustrations and comparisons are developed for rubber hyperelastic modeling. To cite this article: T. Beda, C. R. Mecanique 334 (2006).

  9. Cosimulation of embedded system using RTOS software simulator

    NASA Astrophysics Data System (ADS)

    Wang, Shihao; Duan, Zhigang; Liu, Mingye

    2003-09-01

Embedded system design often employs co-simulation to verify a system's function; one efficient software verification tool is the Instruction Set Simulator (ISS). As a full functional model of the target CPU, an ISS interprets embedded software instruction by instruction, which is usually time-consuming because it simulates at a low level. Hence the ISS often becomes the bottleneck of co-simulation in a complicated system. In this paper, a new software verification tool, the RTOS software simulator (RSS), is presented, and the mechanism of its operation is described in full detail. In the RSS method, the RTOS API is extended and a hardware simulator driver is adopted to handle data exchange and synchronization between the two simulators.

  10. A stochastical event-based continuous time step rainfall generator based on Poisson rectangular pulse and microcanonical random cascade models

    NASA Astrophysics Data System (ADS)

    Pohle, Ina; Niebisch, Michael; Zha, Tingting; Schümberg, Sabine; Müller, Hannes; Maurer, Thomas; Hinz, Christoph

    2017-04-01

    Rainfall variability within a storm is of major importance for fast hydrological processes, e.g. surface runoff, erosion and solute dissipation from surface soils. To investigate and simulate the impacts of within-storm variabilities on these processes, long time series of rainfall with high resolution are required. Yet, observed precipitation records of hourly or higher resolution are in most cases available only for a small number of stations and only for a few years. To obtain long time series of alternating rainfall events and interstorm periods while conserving the statistics of observed rainfall events, the Poisson model can be used. Multiplicative microcanonical random cascades have been widely applied to disaggregate rainfall time series from coarse to fine temporal resolution. We present a new coupling approach of the Poisson rectangular pulse model and the multiplicative microcanonical random cascade model that preserves the characteristics of rainfall events as well as inter-storm periods. In the first step, a Poisson rectangular pulse model is applied to generate discrete rainfall events (duration and mean intensity) and inter-storm periods (duration). The rainfall events are subsequently disaggregated to high-resolution time series (user-specified, e.g. 10 min resolution) by a multiplicative microcanonical random cascade model. One of the challenges of coupling these models is to parameterize the cascade model for the event durations generated by the Poisson model. In fact, the cascade model is best suited to downscale rainfall data with constant time step such as daily precipitation data. Without starting from a fixed time step duration (e.g. 
daily), the disaggregation of events requires some modifications of the multiplicative microcanonical random cascade model proposed by Olsson (1998): Firstly, the parameterization of the cascade model for events of different durations requires continuous functions for the probabilities of the multiplicative weights, which we implemented through sigmoid functions. Secondly, the branching of the first and last box is constrained to preserve the rainfall event durations generated by the Poisson rectangular pulse model. The event-based continuous time step rainfall generator has been developed and tested using 10 min and hourly rainfall data of four stations in North-Eastern Germany. The model performs well in comparison to observed rainfall in terms of event durations and mean event intensities as well as wet spell and dry spell durations. It is currently being tested using data from other stations across Germany and in different climate zones. Furthermore, the rainfall event generator is being applied in modelling approaches aimed at understanding the impact of rainfall variability on hydrological processes. Reference: Olsson, J.: Evaluation of a scaling cascade model for temporal rainfall disaggregation, Hydrology and Earth System Sciences, 2, 19–30, 1998.
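The second stage of the coupling, microcanonical disaggregation, conserves the event depth exactly at every branching. A simplified sketch of one cascade pass; the uniform weight distribution and the fixed dry-spawning probability p01 are illustrative stand-ins for the sigmoid-parameterized weight probabilities described above:

```python
import random

def microcanonical_cascade(total, levels, p01=0.2, rng=random):
    """Disaggregate an event rainfall depth into 2**levels equal sub-steps.
    At each branching the mass splits as (w, 1 - w); with probability p01
    all mass goes to one side (creating a dry sub-interval), otherwise
    w ~ U(0, 1). The split is microcanonical: mass is conserved exactly."""
    series = [total]
    for _ in range(levels):
        nxt = []
        for box in series:
            u = rng.random()
            if u < p01 / 2:
                w = 0.0           # all mass to the right box
            elif u < p01:
                w = 1.0           # all mass to the left box
            else:
                w = rng.random()  # intermediate split
            nxt.extend([box * w, box * (1.0 - w)])
        series = nxt
    return series

random.seed(1)
s = microcanonical_cascade(12.0, levels=3)
print(len(s), round(sum(s), 6))  # 8 boxes; total depth preserved
```

In the coupled generator, the number of levels would be chosen from the Poisson-model event duration and the target resolution (e.g. 10 min), which is exactly where the continuous parameterization becomes necessary.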

  11. Annular convective-radiative fins with a step change in thickness, and temperature-dependent thermal conductivity and heat transfer coefficient

    NASA Astrophysics Data System (ADS)

    Barforoush, M. S. M.; Saedodin, S.

    2018-01-01

This article investigates the thermal performance of convective-radiative annular fins with a step reduction in local cross section (SRC). The thermal conductivity of the fin's material is assumed to be a linear function of temperature, and the heat transfer coefficient is assumed to be a power-law function of surface temperature. Moreover, nonzero convection and radiation sink temperatures are included in the mathematical model of the energy equation. The well-known differential transformation method (DTM) is used to derive the analytical solution. An exact analytical solution for a special case is derived to prove the validity of the results obtained from the DTM. The model provided here is a more realistic representation of SRC annular fins in actual engineering practice. The effects of many parameters on the temperature distribution of both the thin and thick sections of the fin are investigated, including the conduction-convection and conduction-radiation parameters, the sink temperature, and step-fin parameters such as the thickness parameter and the dimensionless position of the junction. It is believed that the obtained results will facilitate the design and performance evaluation of SRC annular fins.
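The DTM maps a differential equation into a recurrence on the Taylor coefficients Y(k) = y^(k)(0)/k!. The fin equation above is nonlinear, but the mechanics can be illustrated on the simplest transform, that of y' = y, whose recurrence (k+1)Y(k+1) = Y(k) is standard DTM textbook material (the function names are illustrative):

```python
import math

def dtm_yprime_equals_y(y0, n_terms):
    """Differential transform of y' = y: (k+1) Y(k+1) = Y(k), Y(0) = y(0).
    Returns coefficients Y(k) of the series y(x) = sum_k Y(k) x**k."""
    Y = [y0]
    for k in range(n_terms - 1):
        Y.append(Y[k] / (k + 1))
    return Y

def evaluate(Y, x):
    """Sum the truncated DTM series at x."""
    return sum(c * x ** k for k, c in enumerate(Y))

# With y(0) = 1 the DTM series reproduces exp(x).
Y = dtm_yprime_equals_y(1.0, 20)
print(round(evaluate(Y, 1.0), 6))  # ≈ e ≈ 2.718282
```

For the fin problem, products like k(T)·dT/dx are handled by convolution of the transformed coefficients, which is what makes DTM attractive for nonlinear two-section fins.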

  12. Controlling the Local Electronic Properties of Si(553)-Au through Hydrogen Doping

    NASA Astrophysics Data System (ADS)

    Hogan, C.; Speiser, E.; Chandola, S.; Suchkova, S.; Aulbach, J.; Schäfer, J.; Meyer, S.; Claessen, R.; Esser, N.

    2018-04-01

    We propose a quantitative and reversible method for tuning the charge localization of Au-stabilized stepped Si surfaces by site-specific hydrogenation. This is demonstrated for Si(553)-Au as a model system by combining density functional theory simulations and reflectance anisotropy spectroscopy experiments. We find that controlled H passivation is a two-step process: step-edge adsorption drives excess charge into the conducting metal chain "reservoir" and renders it insulating, while surplus H recovers metallic behavior. Our approach illustrates a route towards microscopic manipulation of the local surface charge distribution and establishes a reversible switch of site-specific chemical reactivity and magnetic properties on vicinal surfaces.

  13. Intravital multiphoton imaging of mouse tibialis anterior muscle

    PubMed Central

    Lau, Jasmine; Goh, Chi Ching; Devi, Sapna; Keeble, Jo; See, Peter; Ginhoux, Florent; Ng, Lai Guan

    2016-01-01

Intravital imaging by multiphoton microscopy is a powerful tool to gain invaluable insight into tissue biology and function. Here, we provide a step-by-step tissue preparation protocol for imaging the mouse tibialis anterior skeletal muscle. Additionally, we include steps for jugular vein catheterization that allow for well-controlled intravenous reagent delivery. Preparation of the tibialis anterior muscle is minimally invasive, reducing the chances of inducing damage and inflammation prior to imaging. The tibialis anterior muscle is useful for imaging leukocyte interaction with vascular endothelium, and to understand muscle contraction biology. Importantly, this model can be easily adapted to study neuromuscular diseases and myopathies. PMID:28243520

  14. Escalator design features evaluation

    NASA Technical Reports Server (NTRS)

    Zimmerman, W. F.; Deshpande, G. K.

    1982-01-01

    Escalators are available with design features such as dual speed (90 and 120 fpm), mat operation and flat steps. These design features were evaluated based on the impact of each on capital and operating costs, traffic flow, and safety. A human factors engineering model was developed to analyze the need for flat steps at various speeds. Mat operation of escalators was found to be cost effective in terms of energy savings. Dual speed operation of escalators with the higher speed used during peak hours allows for efficient operation. A minimum number of flat steps required as a function of escalator speed was developed to ensure safety for the elderly.

  15. Improving particle filters in rainfall-runoff models: application of the resample-move step and development of the ensemble Gaussian particle filter

    NASA Astrophysics Data System (ADS)

    Plaza Guingla, D. A.; Pauwels, V. R.; De Lannoy, G. J.; Matgen, P.; Giustarini, L.; De Keyser, R.

    2012-12-01

    The objective of this work is to analyze the improvement in the performance of the particle filter by including a resample-move step or by using a modified Gaussian particle filter. Specifically, the standard particle filter structure is altered by the inclusion of the Markov chain Monte Carlo move step. The second choice adopted in this study uses the moments of an ensemble Kalman filter analysis to define the importance density function within the Gaussian particle filter structure. Both variants of the standard particle filter are used in the assimilation of densely sampled discharge records into a conceptual rainfall-runoff model. In order to quantify the obtained improvement, discharge root mean square errors are compared for different particle filters, as well as for the ensemble Kalman filter. First, a synthetic experiment is carried out. The results indicate that the performance of the standard particle filter can be improved by the inclusion of the resample-move step, but its effectiveness is limited to situations with limited particle impoverishment. The results also show that the modified Gaussian particle filter outperforms the rest of the filters. Second, a real experiment is carried out in order to validate the findings from the synthetic experiment. The addition of the resample-move step does not show a considerable improvement due to performance limitations in the standard particle filter with real data. On the other hand, when an optimal importance density function is used in the Gaussian particle filter, the results show a considerably improved performance of the particle filter.
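The baseline the study improves on, the standard (bootstrap) particle filter with resampling, can be sketched on a toy scalar random-walk model rather than the rainfall-runoff model of the study; the resample-move variant would follow each resampling with an MCMC move to counter particle impoverishment. All model choices below are illustrative:

```python
import math
import random

def bootstrap_filter(observations, n_particles=500,
                     proc_std=1.0, obs_std=1.0, rng=random):
    """Minimal bootstrap particle filter for the toy model
    x_t = x_{t-1} + process noise,  y_t = x_t + observation noise.
    Returns the filtered posterior mean at each time step."""
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    means = []
    for y in observations:
        # propagate through the process model
        particles = [x + rng.gauss(0.0, proc_std) for x in particles]
        # weight by the Gaussian observation likelihood
        weights = [math.exp(-0.5 * ((y - x) / obs_std) ** 2) for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        means.append(sum(w * x for w, x in zip(weights, particles)))
        # multinomial resampling every step
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return means

random.seed(0)
obs = [0.5, 1.0, 1.5, 2.0]
m = bootstrap_filter(obs)
print([round(v, 2) for v in m])
```

The Gaussian particle filter variant discussed above replaces the resampling step by refitting a Gaussian to the weighted particles, with the ensemble Kalman filter analysis supplying the importance density.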

  16. Whole limb kinematics are preferentially conserved over individual joint kinematics after peripheral nerve injury

    PubMed Central

    Chang, Young-Hui; Auyang, Arick G.; Scholz, John P.; Nichols, T. Richard

    2009-01-01

Biomechanics and neurophysiology studies suggest whole limb function to be an important locomotor control parameter. Inverted pendulum and mass-spring models greatly reduce the complexity of the legs and predict the dynamics of locomotion, but do not address how numerous limb elements are coordinated to achieve such simple behavior. As a first step, we hypothesized whole limb kinematics were of primary importance and would be preferentially conserved over individual joint kinematics after neuromuscular injury. We used a well-established peripheral nerve injury model of cat ankle extensor muscles to generate two experimental injury groups with a predictable time course of temporary paralysis followed by complete muscle self-reinnervation. Mean trajectories of individual joint kinematics were altered as a result of deficits after injury. By contrast, mean trajectories of limb orientation and limb length remained largely invariant across all animals, even with paralyzed ankle extensor muscles, suggesting changes in mean joint angles were coordinated as part of a long-term compensation strategy to minimize change in whole limb kinematics. Furthermore, at each measurement stage (pre-injury, paralytic and self-reinnervated) step-by-step variance of individual joint kinematics was always significantly greater than that of limb orientation. Our results suggest joint angle combinations are coordinated and selected to stabilize whole limb kinematics against short-term natural step-by-step deviations as well as long-term, pathological deviations created by injury. This may represent a fundamental compensation principle allowing animals to adapt to changing conditions with minimal effect on overall locomotor function. PMID:19837893

  17. Fields of Tension in a Boundary-Crossing World: Towards a Democratic Organization of the Self.

    PubMed

    Hermans, Hubert J M; Konopka, Agnieszka; Oosterwegel, Annerieke; Zomer, Peter

    2017-12-01

    In their study of the relationship between self and society, scientists have proposed taking society as a metaphor for understanding the dynamics of the self, such as the analogy between the self and the functioning of a totalitarian state or the analogy between the self and the functioning of a bureaucratic organization. In addition to these models, the present article proposes a democratic society as a metaphor for understanding the workings of a dialogical self in a globalizing, boundary-crossing world. The article follows four steps. In the first step the self is depicted as extended to the social and societal environment and made up of fields of tension in which a multiplicity of self-positions are involved in processes of positioning and counter-positioning and in relationships of social power. In the second step, the fertility of the democratic metaphor is demonstrated by referring to theory and research from three identity perspectives: multicultural, multiracial, and transgender. In the fields of tension emerging between the multiplicity of self-positions, new, hybrid, and mixed identities have a chance to emerge as adaptive responses to the limitations of existing societal structures. In the third step, we place the democratic self in a broader societal context by linking three levels of inclusiveness, proposed by Self-Categorization Theory (personal, social, and human) to recent conceptions of a cosmopolitan democracy. In the fourth and final step, a model is presented which allows the formulation of a series of specific research questions for future studies of a democratically organized self.

  18. Does increasing steps per day predict improvement in physical function and pain interference in adults with fibromyalgia?

    PubMed

    Kaleth, Anthony S; Slaven, James E; Ang, Dennis C

    2014-12-01

To examine the concurrent and predictive associations between the number of steps taken per day and clinical outcomes in patients with fibromyalgia (FM). A total of 199 adults with FM (mean age 46.1 years, 95% women) who were enrolled in a randomized clinical trial wore a hip-mounted accelerometer for 1 week and completed self-report measures of physical function (Fibromyalgia Impact Questionnaire-Physical Impairment [FIQ-PI] and Short Form 36 [SF-36] health survey physical component score [PCS]), pain intensity and interference (Brief Pain Inventory [BPI]), and depressive symptoms (Patient Health Questionnaire-8 [PHQ-8]) as part of their baseline and followup assessments. Associations of steps per day with self-report clinical measures were evaluated from baseline to week 12 using multivariate regression models adjusted for demographic and baseline covariates. Study participants were primarily sedentary, averaging 4,019 ± 1,530 steps per day. Our findings demonstrate a linear relationship between the change in steps per day and improvement in health outcomes for FM. Incremental increases on the order of 1,000 steps per day were significantly associated with (and predictive of) improvements in FIQ-PI, SF-36 PCS, BPI pain interference, and PHQ-8 (all P < 0.05). Although higher step counts were associated with lower FIQ and BPI pain intensity scores, these were not statistically significant. Step count is an easily obtained and understood objective measure of daily physical activity. An exercise prescription that includes recommendations to gradually accumulate at least 5,000 additional steps per day may result in clinically significant improvements in outcomes relevant to patients with FM. Future studies are needed to elucidate the dose-response relationship between steps per day and patient outcomes in FM. Copyright © 2014 by the American College of Rheumatology.

  19. Evaluation of Several Two-Step Scoring Functions Based on Linear Interaction Energy, Effective Ligand Size, and Empirical Pair Potentials for Prediction of Protein-Ligand Binding Geometry and Free Energy

    PubMed Central

    Rahaman, Obaidur; Estrada, Trilce P.; Doren, Douglas J.; Taufer, Michela; Brooks, Charles L.; Armen, Roger S.

    2011-01-01

    The performance of several two-step scoring approaches for molecular docking were assessed for their ability to predict binding geometries and free energies. Two new scoring functions designed for “step 2 discrimination” were proposed and compared to our CHARMM implementation of the linear interaction energy (LIE) approach using the Generalized-Born with Molecular Volume (GBMV) implicit solvation model. A scoring function S1 was proposed by considering only “interacting” ligand atoms as the “effective size” of the ligand, and extended to an empirical regression-based pair potential S2. The S1 and S2 scoring schemes were trained and five-fold cross validated on a diverse set of 259 protein-ligand complexes from the Ligand Protein Database (LPDB). The regression-based parameters for S1 and S2 also demonstrated reasonable transferability in the CSARdock 2010 benchmark using a new dataset (NRC HiQ) of diverse protein-ligand complexes. The ability of the scoring functions to accurately predict ligand geometry was evaluated by calculating the discriminative power (DP) of the scoring functions to identify native poses. The parameters for the LIE scoring function with the optimal discriminative power (DP) for geometry (step 1 discrimination) were found to be very similar to the best-fit parameters for binding free energy over a large number of protein-ligand complexes (step 2 discrimination). Reasonable performance of the scoring functions in enrichment of active compounds in four different protein target classes established that the parameters for S1 and S2 provided reasonable accuracy and transferability. Additional analysis was performed to definitively separate scoring function performance from molecular weight effects. This analysis included the prediction of ligand binding efficiencies for a subset of the CSARdock NRC HiQ dataset where the number of ligand heavy atoms ranged from 17 to 35. 
This range of ligand heavy atoms is where improved accuracy of predicted ligand efficiencies is most relevant to real-world drug design efforts. PMID:21644546

  20. Evaluation of several two-step scoring functions based on linear interaction energy, effective ligand size, and empirical pair potentials for prediction of protein-ligand binding geometry and free energy.

    PubMed

    Rahaman, Obaidur; Estrada, Trilce P; Doren, Douglas J; Taufer, Michela; Brooks, Charles L; Armen, Roger S

    2011-09-26

    The performances of several two-step scoring approaches for molecular docking were assessed for their ability to predict binding geometries and free energies. Two new scoring functions designed for "step 2 discrimination" were proposed and compared to our CHARMM implementation of the linear interaction energy (LIE) approach using the Generalized-Born with Molecular Volume (GBMV) implicit solvation model. A scoring function S1 was proposed by considering only "interacting" ligand atoms as the "effective size" of the ligand and extended to an empirical regression-based pair potential S2. The S1 and S2 scoring schemes were trained and 5-fold cross-validated on a diverse set of 259 protein-ligand complexes from the Ligand Protein Database (LPDB). The regression-based parameters for S1 and S2 also demonstrated reasonable transferability in the CSARdock 2010 benchmark using a new data set (NRC HiQ) of diverse protein-ligand complexes. The ability of the scoring functions to accurately predict ligand geometry was evaluated by calculating the discriminative power (DP) of the scoring functions to identify native poses. The parameters for the LIE scoring function with the optimal discriminative power (DP) for geometry (step 1 discrimination) were found to be very similar to the best-fit parameters for binding free energy over a large number of protein-ligand complexes (step 2 discrimination). Reasonable performance of the scoring functions in enrichment of active compounds in four different protein target classes established that the parameters for S1 and S2 provided reasonable accuracy and transferability. Additional analysis was performed to definitively separate scoring function performance from molecular weight effects. This analysis included the prediction of ligand binding efficiencies for a subset of the CSARdock NRC HiQ data set where the number of ligand heavy atoms ranged from 17 to 35. 
This range of ligand heavy atoms is where improved accuracy of predicted ligand efficiencies is most relevant to real-world drug design efforts.

  1. Advanced methods for modeling water-levels and estimating drawdowns with SeriesSEE, an Excel add-in

    USGS Publications Warehouse

    Halford, Keith; Garcia, C. Amanda; Fenelon, Joe; Mirus, Benjamin B.

    2012-12-21

    Water-level modeling is used for multiple-well aquifer tests to reliably differentiate pumping responses from natural water-level changes in wells, or “environmental fluctuations.” Synthetic water levels are created during water-level modeling and represent the summation of multiple component fluctuations, including those caused by environmental forcing and pumping. Pumping signals are modeled by transforming step-wise pumping records into water-level changes by using superimposed Theis functions. Water-levels can be modeled robustly with this Theis-transform approach because environmental fluctuations and pumping signals are simulated simultaneously. Water-level modeling with Theis transforms has been implemented in the program SeriesSEE, which is a Microsoft® Excel add-in. Moving average, Theis, pneumatic-lag, and gamma functions transform time series of measured values into water-level model components in SeriesSEE. Earth tides and step transforms are additional computed water-level model components. Water-level models are calibrated by minimizing a sum-of-squares objective function where singular value decomposition and Tikhonov regularization stabilize results. Drawdown estimates from a water-level model are the summation of all Theis transforms minus residual differences between synthetic and measured water levels. The accuracy of drawdown estimates is limited primarily by noise in the data sets, not the Theis-transform approach. Drawdowns much smaller than environmental fluctuations have been detected across major fault structures, at distances of more than 1 mile from the pumping well, and with limited pre-pumping and recovery data at sites across the United States. In addition to water-level modeling, utilities exist in SeriesSEE for viewing, cleaning, manipulating, and analyzing time-series data.
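The Theis-transform idea above can be sketched in a few lines: a step-wise pumping record is converted to a drawdown signal by superimposing one Theis solution per rate change. The aquifer properties and pumping schedule below are hypothetical illustration values, and the well function is evaluated with its standard series expansion rather than SeriesSEE's own implementation.

```python
import math

def well_function(u, terms=50):
    """Theis well function W(u) = -gamma - ln(u) + sum_{n>=1} (-1)^(n+1) u^n / (n * n!)."""
    gamma = 0.5772156649015329  # Euler-Mascheroni constant
    s = -gamma - math.log(u)
    term = 1.0
    for n in range(1, terms + 1):
        term *= -u / n      # term is now (-1)^n u^n / n!
        s -= term / n       # accumulates (-1)^(n+1) u^n / (n * n!)
    return s

def drawdown(t, r, T, S, pumping_steps):
    """Superimposed Theis drawdowns for a step-wise pumping record.
    pumping_steps: list of (start_time, rate_change) pairs, where each entry
    is the *change* in pumping rate (m^3/d) beginning at start_time (d).
    r: distance to the pumping well (m); T: transmissivity (m^2/d);
    S: storativity (-). All values here are hypothetical."""
    s = 0.0
    for t0, dQ in pumping_steps:
        if t > t0:
            u = r * r * S / (4.0 * T * (t - t0))
            s += dQ / (4.0 * math.pi * T) * well_function(u)
    return s

# Pump on at 500 m^3/d at t=0, off at t=2 d (a negative rate change):
schedule = [(0.0, 500.0), (2.0, -500.0)]
```

Superposition handles recovery automatically: switching the pump off is simply a negative rate change added to the record.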

  2. Combined Economic and Hydrologic Modeling to Support Collaborative Decision Making Processes

    NASA Astrophysics Data System (ADS)

    Sheer, D. P.

    2008-12-01

For more than a decade, the core concept of the author's efforts in support of collaborative decision making has been a combination of hydrologic simulation and multi-objective optimization. The modeling has generally been used to support collaborative decision making processes. The OASIS model developed by HydroLogics Inc. solves a multi-objective optimization at each time step using a mixed integer linear program (MILP). The MILP can be configured to include any user-defined objective, including but not limited to economic objectives. For example, estimated marginal values of water for crops and M&I use were included in the objective function to drive trades in a model of the lower Rio Grande. The formulation of the MILP, constraints and objectives, in any time step is conditional: it changes based on the value of state variables and dynamic external forcing functions, such as rainfall, hydrology, market prices, arrival of migratory fish, water temperature, etc. It therefore acts as a dynamic short-term multi-objective economic optimization for each time step. MILP is capable of solving a general problem that includes a very realistic representation of the physical system characteristics in addition to the normal multi-objective optimization objectives and constraints included in economic models. In all of these models, the short-term objective function is a surrogate for achieving long-term multi-objective results. The long-term performance for any alternative (especially including operating strategies) is evaluated by simulation. An operating rule is the combination of conditions, parameters, constraints and objectives used to determine the formulation of the short-term optimization in each time step. 
Heuristic wrappers for the simulation program have been developed to improve the parameters of an operating rule, and research is underway on a wrapper that will employ a genetic algorithm to improve the form of the rule (conditions, constraints, and short-term objectives) as well. In these models, operating rules represent different models of human behavior, and the objective of the modeling is to find rules for human behavior that perform well in terms of long-term human objectives. The conceptual model used to represent human behavior incorporates economic multi-objective optimization for surrogate objectives, and rules that set those objectives based on current conditions and accounting for uncertainty, at least implicitly. The author asserts that real-world operating rules follow this form and have evolved because they have been perceived as successful in the past. Thus, the modeling efforts focus on human behavior in much the same way that economic models focus on human behavior. This paper illustrates the above concepts with real world examples.
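The role the marginal-value objective plays in each time step can be illustrated without a MILP solver. The sketch below is a greatly simplified stand-in: for a single period with constant marginal values and a single supply constraint, the LP optimum reduces to filling uses in descending value order. All use names and dollar figures are hypothetical, and the real OASIS formulation includes physical-system constraints this toy omits.

```python
def allocate(supply, demands):
    """Allocate a limited supply among uses in order of marginal value.
    demands: list of (name, max_demand, marginal_value) tuples.
    With constant marginal values and one supply constraint, a greedy
    fill in descending value order coincides with the LP optimum."""
    allocation = {}
    remaining = supply
    for name, cap, value in sorted(demands, key=lambda d: -d[2]):
        take = min(cap, remaining)
        allocation[name] = take
        remaining -= take
    return allocation

# Hypothetical uses: (name, demand in volume units, marginal value $/unit)
uses = [("municipal", 40.0, 900.0),
        ("crops", 80.0, 120.0),
        ("environment", 30.0, 300.0)]
print(allocate(100.0, uses))
```

In the conditional MILP described above, both the caps and the values would change from time step to time step as state variables and forcing functions change.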

  3. FOAM (Functional Ontology Assignments for Metagenomes): A Hidden Markov Model (HMM) database with environmental focus

    DOE PAGES

    Prestat, Emmanuel; David, Maude M.; Hultman, Jenni; ...

    2014-09-26

A new functional gene database, FOAM (Functional Ontology Assignments for Metagenomes), was developed to screen environmental metagenomic sequence datasets. FOAM provides a new functional ontology dedicated to classifying gene functions relevant to environmental microorganisms based on Hidden Markov Models (HMMs). Sets of aligned protein sequences (i.e. ‘profiles’) were tailored to a large group of target KEGG Orthologs (KOs) from which HMMs were trained. The alignments were checked and curated to make them specific to the targeted KO. Within this process, sequence profiles were enriched with the most abundant sequences available to maximize the yield of accurate classifier models. An associated functional ontology was built to describe the functional groups and hierarchy. FOAM allows the user to select the target search space before HMM-based comparison steps and to easily organize the results into different functional categories and subcategories. FOAM is publicly available at http://portal.nersc.gov/project/m1317/FOAM/.

  4. Nonparametric Bayesian models for a spatial covariance.

    PubMed

    Reich, Brian J; Fuentes, Montserrat

    2012-01-01

A crucial step in the analysis of spatial data is to estimate the spatial correlation function that determines the relationship between a spatial process at two locations. The standard approach to selecting the appropriate correlation function is to use prior knowledge or exploratory analysis, such as a variogram analysis, to select the correct parametric correlation function. Rather than selecting a particular parametric correlation function, we treat the covariance function as an unknown function to be estimated from the data. We propose a flexible prior for the correlation function to provide robustness to the choice of correlation function. We specify the prior for the correlation function using spectral methods and the Dirichlet process prior, which is a common prior for an unknown distribution function. Our model does not require Gaussian data or spatial locations on a regular grid. The approach is demonstrated using a simulation study as well as an analysis of California air pollution data.
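The spectral construction can be summarized compactly; this is a sketch based on Bochner's theorem, and the paper's exact prior specification may differ in detail:

```latex
% Bochner's theorem: a stationary correlation function is the Fourier
% transform of a spectral probability measure F on R^d:
C(h) = \int_{\mathbb{R}^d} \cos\!\left(\omega^\top h\right) F(d\omega)
% Nonparametric prior: place a Dirichlet process on the unknown
% spectral measure,
F \sim \mathrm{DP}(\alpha, F_0),
% so that C inherits a flexible prior while remaining a valid
% (positive-definite) correlation function by construction.
```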

  5. Calibration of two complex ecosystem models with different likelihood functions

    NASA Astrophysics Data System (ADS)

    Hidy, Dóra; Haszpra, László; Pintér, Krisztina; Nagy, Zoltán; Barcza, Zoltán

    2014-05-01

The biosphere is a sensitive carbon reservoir. Terrestrial ecosystems were approximately carbon neutral during the past centuries, but they became net carbon sinks due to climate change induced environmental change and the associated CO2 fertilization effect of the atmosphere. Model studies and measurements indicate that the biospheric carbon sink can saturate in the future due to ongoing climate change, which can act as a positive feedback. Robustness of carbon cycle models is a key issue when trying to choose the appropriate model for decision support. The input parameters of the process-based models are decisive regarding the model output. At the same time there are several input parameters for which accurate values are hard to obtain directly from experiments or no local measurements are available. Due to the uncertainty associated with the unknown model parameters, significant bias can be experienced if the model is used to simulate the carbon and nitrogen cycle components of different ecosystems. In order to improve model performance, the unknown model parameters have to be estimated. We developed a multi-objective, two-step calibration method based on a Bayesian approach in order to estimate the unknown parameters of the PaSim and Biome-BGC models. Biome-BGC and PaSim are widely used biogeochemical models that simulate the storage and flux of water, carbon, and nitrogen between the ecosystem and the atmosphere, and within the components of the terrestrial ecosystems (in this research the developed version of Biome-BGC is used, referred to as BBGC MuSo). Both models were calibrated regardless of the simulated processes and the type of model parameters. The calibration procedure is based on the comparison of measured data with simulated results via calculating a likelihood function (degree of goodness-of-fit between simulated and measured data). 
In our research, different likelihood function formulations were used in order to examine the effect of different model goodness metrics on calibration. The different likelihoods are different functions of RMSE (root mean squared error) weighted by measurement uncertainty: exponential / linear / quadratic / linear normalized by correlation. As a first calibration step, sensitivity analysis was performed in order to select the influential parameters which have a strong effect on the output data. In the second calibration step, only the sensitive parameters were calibrated (optimal values and confidence intervals were calculated). In the case of PaSim, more parameters were found responsible for 95% of the output data variance than in the case of BBGC MuSo. Analysis of the results of the optimized models revealed that the exponential likelihood estimation proved to be the most robust (best model simulation with optimized parameters, highest confidence interval increase). The cross-validation of the model simulations can help in constraining the highly uncertain greenhouse gas budget of grasslands.
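The four likelihood shapes can be illustrated in code. The abstract does not give the exact formulations, so the functional forms below (e.g. exp(-RMSE/σ)) are plausible assumptions for illustration only, not the formulas used in the study.

```python
import math

def rmse(sim, obs):
    """Root mean squared error between simulated and observed series."""
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs))

def likelihoods(sim, obs, sigma, r):
    """Illustrative goodness-of-fit metrics built from RMSE weighted by
    measurement uncertainty sigma; r is the simulated-observed correlation.
    The exact formulations used in the study are not reproduced here."""
    e = rmse(sim, obs) / sigma
    return {
        "exponential": math.exp(-e),
        "linear": max(0.0, 1.0 - e),
        "quadratic": max(0.0, 1.0 - e ** 2),
        "linear_corr": max(0.0, 1.0 - e) * abs(r),
    }
```

All four metrics equal 1 for a perfect fit and decrease as the uncertainty-weighted error grows, but they penalize moderate errors at very different rates, which is why the choice of metric affects the calibrated parameters.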

  6. Predicting the continuum between corridors and barriers to animal movements using Step Selection Functions and Randomized Shortest Paths.

    PubMed

    Panzacchi, Manuela; Van Moorter, Bram; Strand, Olav; Saerens, Marco; Kivimäki, Ilkka; St Clair, Colleen C; Herfindal, Ivar; Boitani, Luigi

    2016-01-01

The loss, fragmentation and degradation of habitat everywhere on Earth prompt increasing attention to identifying landscape features that support animal movement (corridors) or impede it (barriers). Most algorithms used to predict corridors assume that animals move through preferred habitat either optimally (e.g. least cost path) or as random walkers (e.g. current models), but neither extreme is realistic. We propose that corridors and barriers are two sides of the same coin and that animals experience landscapes as spatiotemporally dynamic corridor-barrier continua connecting (separating) functional areas where individuals fulfil specific ecological processes. Based on this conceptual framework, we propose a novel methodological approach that uses high-resolution individual-based movement data to predict corridor-barrier continua with increased realism. Our approach consists of two innovations. First, we use step selection functions (SSF) to predict friction maps quantifying corridor-barrier continua for tactical steps between consecutive locations. Secondly, we introduce to movement ecology the randomized shortest path algorithm (RSP) which operates on friction maps to predict the corridor-barrier continuum for strategic movements between functional areas. By modulating the parameter θ, which controls the trade-off between exploration and optimal exploitation of the environment, RSP bridges the gap between algorithms assuming optimal movements (when θ approaches infinity, RSP is equivalent to LCP) or random walk (when θ → 0, RSP → current models). Using this approach, we identify migration corridors for GPS-monitored wild reindeer (Rangifer t. tarandus) in Norway. We demonstrate that reindeer movement is best predicted by an intermediate value of θ, indicative of a movement trade-off between optimization and exploration. 
Model calibration allows identification of a corridor-barrier continuum that closely fits empirical data and demonstrates that RSP outperforms models that assume either optimality or random walk. The proposed approach models the multiscale cognitive maps by which animals likely navigate real landscapes and generalizes the most common algorithms for identifying corridors. Because suboptimal, but non-random, movement strategies are likely widespread, our approach has the potential to predict more realistic corridor-barrier continua for a wide range of species. © 2015 The Authors. Journal of Animal Ecology © 2015 British Ecological Society.
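The way θ interpolates between a random walk and a least-cost path can be sketched with a local Boltzmann reweighting of transition probabilities. This is a simplification: the full RSP framework reweights entire paths rather than single steps, but the limiting behavior in θ is the same. The graph and friction costs below are hypothetical.

```python
import math

def rsp_transitions(neighbors, theta):
    """RSP-style transition probabilities on a graph.
    neighbors: {node: [(next_node, cost), ...]} with costs drawn from a
    friction map (hypothetical values). theta -> 0 recovers an unbiased
    random walk; large theta concentrates probability on the cheapest move."""
    probs = {}
    for node, edges in neighbors.items():
        weights = [math.exp(-theta * c) for _, c in edges]
        z = sum(weights)
        probs[node] = {nxt: w / z for (nxt, _), w in zip(edges, weights)}
    return probs

# Toy landscape: from A, the step to B is cheap, the step to C is costly.
graph = {"A": [("B", 1.0), ("C", 5.0)],
         "B": [("A", 1.0)],
         "C": [("A", 5.0)]}
```

An intermediate θ, as the reindeer data favored, keeps most probability on cheap moves while still allowing exploratory steps through higher-friction cells.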

  7. Reduction of physical activity in daily life and its determinants in smokers without airflow obstruction.

    PubMed

    Furlanetto, Karina Couto; Mantoani, Leandro Cruz; Bisca, Gianna; Morita, Andrea Akemi; Zabatiero, Juliana; Proença, Mahara; Kovelis, Demétria; Pitta, Fabio

    2014-04-01

In smokers without airflow obstruction, detailed, objective and controlled quantification of the level of physical inactivity in daily life has never been performed. This study aimed to objectively assess the level of physical activity in daily life in adult smokers without airflow obstruction in comparison with matched non-smokers, and to investigate the determinants of daily physical activity in smokers. Sixty smokers (aged 50 (39-54) years) and 50 non-smokers (aged 48 (40-53) years) matched for gender, age, anthropometric characteristics, educational level, employment status and seasons of the year assessment period were cross-sectionally assessed regarding their daily physical activity with a step counter, in addition to assessment of lung function, functional exercise capacity, quality of life, anxiety, depression, self-reported comorbidities, carbon monoxide level, nicotine dependence and smoking habits. When compared with non-smokers, smokers walked less in daily life (7923 ± 3558 vs 9553 ± 3637 steps/day, respectively) and presented worse lung function, functional exercise capacity, quality of life, anxiety and depression. Multiple regression analyses identified functional exercise capacity, Borg fatigue, self-reported motivation/physical activity behaviour and cardiac disease as significant determinants of the number of steps/day in smokers (partial r² = 0.10, 0.12, 0.16 and 0.05; b = 15, -997, 1207 and -2330 steps/day, respectively; overall fit of the model R² = 0.38; P < 0.001). Adult smokers without airflow obstruction presented a reduced level of daily physical activity. Functional exercise capacity, extended fatigue sensation, aspects of motivation/physical activity behaviour and self-reported cardiac disease are significant determinants of physical activity in daily life in smokers. © 2014 The Authors. Respirology © 2014 Asian Pacific Society of Respirology.

  8. Smoothing-based compressed state Kalman filter for joint state-parameter estimation: Applications in reservoir characterization and CO2 storage monitoring

    NASA Astrophysics Data System (ADS)

    Li, Y. J.; Kokkinaki, Amalia; Darve, Eric F.; Kitanidis, Peter K.

    2017-08-01

    The operation of most engineered hydrogeological systems relies on simulating physical processes using numerical models with uncertain parameters and initial conditions. Predictions by such uncertain models can be greatly improved by Kalman-filter techniques that sequentially assimilate monitoring data. Each assimilation constitutes a nonlinear optimization, which is solved by linearizing an objective function about the model prediction and applying a linear correction to this prediction. However, if model parameters and initial conditions are uncertain, the optimization problem becomes strongly nonlinear and a linear correction may yield unphysical results. In this paper, we investigate the utility of one-step ahead smoothing, a variant of the traditional filtering process, to eliminate nonphysical results and reduce estimation artifacts caused by nonlinearities. We present the smoothing-based compressed state Kalman filter (sCSKF), an algorithm that combines one step ahead smoothing, in which current observations are used to correct the state and parameters one step back in time, with a nonensemble covariance compression scheme, that reduces the computational cost by efficiently exploring the high-dimensional state and parameter space. Numerical experiments show that when model parameters are uncertain and the states exhibit hyperbolic behavior with sharp fronts, as in CO2 storage applications, one-step ahead smoothing reduces overshooting errors and, by design, gives physically consistent state and parameter estimates. We compared sCSKF with commonly used data assimilation methods and showed that for the same computational cost, combining one step ahead smoothing and nonensemble compression is advantageous for real-time characterization and monitoring of large-scale hydrogeological systems with sharp moving fronts.
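The filter-then-smooth update at the heart of the one-step-ahead scheme can be sketched for a scalar linear-Gaussian model: each new observation first updates the current state, then revises the previous state through a smoothing gain. This toy ignores the covariance compression and joint parameter estimation that distinguish sCSKF, and all model and noise parameters are illustrative.

```python
def kf_one_step_smoother(y, a=1.0, q=0.01, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter (state x_k = a*x_{k-1} + w, observation
    y_k = x_k + v) that also revises the previous state with each new
    observation (one-step-ahead smoothing), in the spirit of a
    filter-then-smooth update. Returns filtered estimates and the
    one-step-back smoothed estimates."""
    x, p = x0, p0
    filtered, smoothed_prev = [], []
    for yk in y:
        xp, pp = a * x, a * a * p + q              # predict
        k = pp / (pp + r)                          # Kalman gain
        xn, pn = xp + k * (yk - xp), (1 - k) * pp  # filter update
        g = p * a / pp                             # smoothing gain for x_{k-1}
        smoothed_prev.append(x + g * (xn - xp))    # revise previous state
        x, p = xn, pn
        filtered.append(xn)
    return filtered, smoothed_prev
```

Correcting the previous state with the current observation is what lets the smoothing variant pull estimates back toward physically consistent values before errors propagate through a sharp front.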

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wemhoff, A P; Burnham, A K; Nichols III, A L

The reduction of the number of reactions in kinetic models for both the HMX beta-delta phase transition and thermal cookoff provides an attractive alternative to traditional multi-stage kinetic models due to reduced calibration effort requirements. In this study, we use the LLNL code ALE3D to provide calibrated kinetic parameters for a two-reaction bidirectional beta-delta HMX phase transition model based on Sandia Instrumented Thermal Ignition (SITI) and Scaled Thermal Explosion (STEX) temperature history curves, and a Prout-Tompkins cookoff model based on One-Dimensional Time to Explosion (ODTX) data. Results show that the two-reaction bidirectional beta-delta transition model presented here agrees as well with STEX and SITI temperature history curves as a reversible four-reaction Arrhenius model, yet requires an order of magnitude less computational effort. In addition, a single-reaction Prout-Tompkins model calibrated to ODTX data provides better agreement with ODTX data than a traditional multi-step Arrhenius model, and can require up to 90% fewer chemistry-limited time steps for low-temperature ODTX simulations. Manual calibration methods for the Prout-Tompkins kinetics provide much better agreement with ODTX experimental data than parameters derived from Differential Scanning Calorimetry (DSC) measurements at atmospheric pressure. The predicted surface temperature at explosion for STEX cookoff simulations is a weak function of the cookoff model used, and a reduction of up to 15% in chemistry-limited time steps can be achieved by neglecting the beta-delta transition for this type of simulation. Finally, the inclusion of the beta-delta transition model in the overall kinetics model can affect the predicted time to explosion by 1% for the traditional multi-step Arrhenius approach, and by up to 11% when using a Prout-Tompkins cookoff model.
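The Prout-Tompkins model referred to above is, in its classic form, the autocatalytic rate law dα/dt = k·α·(1−α). A minimal explicit-Euler integration (with an illustrative rate constant and seed, not calibrated ALE3D parameters) shows its characteristic sigmoidal ignition curve:

```python
def prout_tompkins(k, alpha0=1e-4, dt=1e-3, t_end=20.0):
    """Integrate the classic Prout-Tompkins autocatalytic rate law
    d(alpha)/dt = k * alpha * (1 - alpha) with a small seed alpha0.
    In a cookoff code, k would be temperature-dependent (Arrhenius,
    k = A * exp(-E/(R*T))); here k is a fixed illustrative constant."""
    alpha, t = alpha0, 0.0
    history = [(t, alpha)]
    while t < t_end:
        alpha += dt * k * alpha * (1.0 - alpha)
        t += dt
        history.append((t, alpha))
    return history
```

The long induction period followed by rapid acceleration is why a single Prout-Tompkins reaction can reproduce low-temperature time-to-explosion behavior that otherwise requires several sequential Arrhenius steps.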

  10. Valid approximation of spatially distributed grain size distributions - A priori information encoded to a feedforward network

    NASA Astrophysics Data System (ADS)

    Berthold, T.; Milbradt, P.; Berkhahn, V.

    2018-04-01

This paper presents a model for the approximation of multiple, spatially distributed grain size distributions based on a feedforward neural network. Since a classical feedforward network does not guarantee to produce valid cumulative distribution functions, a priori information is incorporated into the model by applying weight and architecture constraints. The model is derived in two steps. First, a model is presented that is able to produce a valid distribution function for a single sediment sample. Although initially developed for sediment samples, the model is not limited in its application; it can also be used to approximate any other multimodal continuous distribution function. In the second part, the network is extended in order to capture the spatial variation of the sediment samples that have been obtained from 48 locations in the investigation area. Results show that the model provides an adequate approximation of grain size distributions, satisfying the requirements of a cumulative distribution function.
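One way to see how weight constraints can make a network output a valid CDF: if the hidden-unit slopes are positive and the output weights are non-negative and sum to one, a single-hidden-layer sigmoid network is exactly a mixture of logistic CDFs, hence monotone with limits 0 and 1. This is a generic construction consistent with the constraints described above; the parameter values below are illustrative, not the paper's fitted values.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def cdf_network(x, params):
    """Constrained one-hidden-layer network:
    F(x) = sum_i w_i * sigmoid(a_i * x + b_i), with a_i > 0, w_i >= 0,
    and sum(w_i) = 1 -- i.e. a mixture of logistic CDFs, so F is
    nondecreasing and tends to 0 and 1 at the tails by construction."""
    return sum(w * sigmoid(a * x + b) for w, a, b in params)

# Illustrative (weight, slope, bias) triples; a bimodal mixture:
params = [(0.6, 2.0, -1.0), (0.4, 1.0, 2.0)]
```

Multiple hidden units give the multimodality needed for mixed sediments, while the constraints keep every output a usable grain size distribution.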

  11. A hierarchical model for probabilistic independent component analysis of multi-subject fMRI studies

    PubMed Central

    Tang, Li

    2014-01-01

An important goal in fMRI studies is to decompose the observed series of brain images to identify and characterize underlying brain functional networks. Independent component analysis (ICA) has been shown to be a powerful computational tool for this purpose. Classic ICA has been successfully applied to single-subject fMRI data. The extension of ICA to group inferences in neuroimaging studies, however, is challenging due to the unavailability of a pre-specified group design matrix. Existing group ICA methods generally concatenate observed fMRI data across subjects on the temporal domain and then decompose multi-subject data in a similar manner to single-subject ICA. The major limitation of existing methods is that they ignore between-subject variability in spatial distributions of brain functional networks in group ICA. In this paper, we propose a new hierarchical probabilistic group ICA method to formally model subject-specific effects in both temporal and spatial domains when decomposing multi-subject fMRI data. The proposed method provides model-based estimation of brain functional networks at both the population and subject level. An important advantage of the hierarchical model is that it provides a formal statistical framework to investigate similarities and differences in brain functional networks across subjects, e.g., subjects with mental disorders or neurodegenerative diseases such as Parkinson’s as compared to normal subjects. We develop an EM algorithm for model estimation where both the E-step and M-step have explicit forms. We compare the performance of the proposed hierarchical model with that of two popular group ICA methods via simulation studies. We illustrate our method with application to an fMRI study of Zen meditation. PMID:24033125

  12. Use of multivariate linear regression and support vector regression to predict functional outcome after surgery for cervical spondylotic myelopathy.

    PubMed

    Hoffman, Haydn; Lee, Sunghoon I; Garst, Jordan H; Lu, Derek S; Li, Charles H; Nagasawa, Daniel T; Ghalehsari, Nima; Jahanforouz, Nima; Razaghy, Mehrdad; Espinal, Marie; Ghavamrezaii, Amir; Paak, Brian H; Wu, Irene; Sarrafzadeh, Majid; Lu, Daniel C

    2015-09-01

This study introduces the use of multivariate linear regression (MLR) and support vector regression (SVR) models to predict postoperative outcomes in a cohort of patients who underwent surgery for cervical spondylotic myelopathy (CSM). Currently, predicting outcomes after surgery for CSM remains a challenge. We recruited patients who had a diagnosis of CSM and required decompressive surgery with or without fusion. Fine motor function was tested preoperatively and postoperatively with a handgrip-based tracking device that has been previously validated, yielding mean absolute accuracy (MAA) results for two tracking tasks (sinusoidal and step). All patients completed Oswestry disability index (ODI) and modified Japanese Orthopaedic Association questionnaires preoperatively and postoperatively. Preoperative data was utilized in MLR and SVR models to predict postoperative ODI. Predictions were compared to the actual ODI scores with the coefficient of determination (R²) and mean absolute difference (MAD). In total, 20 patients met the inclusion criteria and completed follow-up at least 3 months after surgery. With the MLR model, a combination of the preoperative ODI score, preoperative MAA (step function), and symptom duration yielded the best prediction of postoperative ODI (R² = 0.452; MAD = 0.0887; p = 1.17 × 10⁻³). With the SVR model, a combination of preoperative ODI score, preoperative MAA (sinusoidal function), and symptom duration yielded the best prediction of postoperative ODI (R² = 0.932; MAD = 0.0283; p = 5.73 × 10⁻¹²). The SVR model was more accurate than the MLR model. The SVR can be used preoperatively in risk/benefit analysis and the decision to operate. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Foot and Ankle Kinematics During Descent From Varying Step Heights.

    PubMed

    Gerstle, Emily E; O'Connor, Kristian; Keenan, Kevin G; Cobb, Stephen C

    2017-12-01

In the general population, one-third of incidents during step negotiation occur during the transition to level walking. Furthermore, falls during curb negotiation are a common cause of injury in older adults. Distal foot kinematics may be an important factor in determining injury risk associated with transition step negotiation. The purpose of this study was to identify foot and ankle kinematics of uninjured individuals during descent from varying step heights. A 7-segment foot model was used to quantify kinematics as participants walked on a level walkway, stepped down a single step (heights: 5 cm, 15 cm, 25 cm), and continued walking. As step height increased, landing strategy transitioned from the rearfoot to the forefoot, and the rearfoot, lateral and medial midfoot, and medial forefoot became more plantar flexed. During weight acceptance, sagittal plane range of motion of the rearfoot, lateral midfoot, and medial and lateral forefoot increased as step height increased. The changes in landing strategy and distal foot function suggest a less stable ankle position at initial contact and increased demand on the distal foot at initial contact and through the weight acceptance phase of transition step negotiation as step height increases.

  14. Quantum chemical modeling of enzymatic reactions: the case of 4-oxalocrotonate tautomerase.

    PubMed

    Sevastik, Robin; Himo, Fahmi

    2007-12-01

    The reaction mechanism of 4-oxalocrotonate tautomerase (4-OT) is studied using the density functional theory method B3LYP. This enzyme catalyzes the isomerisation of unconjugated alpha-keto acids to their conjugated isomers. Two different quantum chemical models of the active site are devised and the potential energy curves for the reaction are computed. The calculations support the proposed reaction mechanism in which Pro-1 acts as a base to shuttle a proton from the C3 to the C5 position of the substrate. The first step (proton transfer from C3 to proline) is shown to be the rate-limiting step. The energy of the charge-separated intermediate (protonated proline-deprotonated substrate) is calculated to be quite low, in accordance with measured pKa values. The results of the two models are used to evaluate the methodology employed in modeling enzyme active sites using quantum chemical cluster models.

  15. Modelling of current loads on aquaculture net cages

    NASA Astrophysics Data System (ADS)

    Kristiansen, Trygve; Faltinsen, Odd M.

    2012-10-01

    In this paper we propose and discuss a screen type of force model for the viscous hydrodynamic load on nets. The screen model assumes that the net is divided into a number of flat net panels, or screens. It may thus be applied to any kind of net geometry. In this paper we focus on circular net cages for fish farms. The net structure itself is modelled by an existing truss model. The net shape is solved for in a time-stepping procedure that involves solving a linear system of equations for the unknown tensions at each time step. We present comparisons to experiments with circular net cages in steady current, and discuss the sensitivity of the numerical results to a set of chosen parameters. Satisfactory agreement between experimental and numerical prediction of drag and lift as function of the solidity ratio of the net and the current velocity is documented.

  16. Determination of the mass transfer limiting step of dye adsorption onto commercial adsorbent by using mathematical models.

    PubMed

    Marin, Pricila; Borba, Carlos Eduardo; Módenes, Aparecido Nivaldo; Espinoza-Quiñones, Fernando R; de Oliveira, Silvia Priscila Dias; Kroumov, Alexander Dimitrov

    2014-01-01

    Reactive blue 5G dye removal in a fixed-bed column packed with Dowex Optipore SD-2 adsorbent was modelled. Three mathematical models were tested in order to determine the limiting step of the mass transfer of the dye adsorption process onto the adsorbent. The mass transfer resistance was considered to be a criterion for the determination of the difference between models. The models contained information about the external, internal, or surface adsorption limiting step. In the model development procedure, two hypotheses were applied to describe the internal mass transfer resistance. First, the mass transfer coefficient constant was considered. Second, the mass transfer coefficient was considered as a function of the dye concentration in the adsorbent. The experimental breakthrough curves were obtained for different particle diameters of the adsorbent, flow rates, and feed dye concentrations in order to evaluate the predictive power of the models. The values of the mass transfer parameters of the mathematical models were estimated by using the downhill simplex optimization method. The results showed that the model that considered internal resistance with a variable mass transfer coefficient was more flexible than the other ones and this model described the dynamics of the adsorption process of the dye in the fixed-bed column better. Hence, this model can be used for optimization and column design purposes for the investigated systems and similar ones.

  17. Dynamic Analysis of Large In-Space Deployable Membrane Antennas

    NASA Technical Reports Server (NTRS)

    Fang, Houfei; Yang, Bingen; Ding, Hongli; Hah, John; Quijano, Ubaldo; Huang, John

    2006-01-01

    This paper presents a vibration analysis of an eight-meter diameter membrane reflectarray antenna, which is composed of a thin membrane and a deployable frame. This analysis process has two main steps. In the first step, a two-variable-parameter (2-VP) membrane model is developed to determine the in-plane stress distribution of the membrane due to pre-tensioning, which eventually yields the differential stiffness of the membrane. In the second step, the obtained differential stiffness is incorporated in a dynamic equation governing the transverse vibration of the membrane-frame assembly. This dynamic equation is then solved by a semi-analytical method, called the Distributed Transfer Function Method (DTFM), which produces the natural frequencies and mode shapes of the antenna. The combination of the 2-VP model and the DTFM provides an accurate prediction of the in-plane stress distribution and modes of vibration for the antenna.

  18. Distributed optimisation problem with communication delay and external disturbance

    NASA Astrophysics Data System (ADS)

    Tran, Ngoc-Tu; Xiao, Jiang-Wen; Wang, Yan-Wu; Yang, Wu

    2017-12-01

    This paper investigates the distributed optimisation problem for multi-agent systems (MASs) in the simultaneous presence of external disturbance and communication delay. To solve this problem, a two-step design scheme is introduced. In the first step, based on the internal model principle, an internal model term is constructed to compensate for the disturbance asymptotically. In the second step, a distributed optimisation algorithm is designed for the MASs subject to both disturbance and communication delay. In the proposed algorithm, each agent interacts with its neighbours through the connected topology, and delay occurs during the information exchange. By utilising a Lyapunov-Krasovskii functional, delay-dependent conditions are derived for both slowly and fast time-varying delays to ensure the convergence of the algorithm to the optimal solution of the optimisation problem. Several numerical simulation examples are provided to illustrate the effectiveness of the theoretical results.
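    As a minimal illustration of the consensus-plus-gradient idea underlying such algorithms (omitting the paper's communication delay, disturbance, and internal-model compensation), the sketch below runs a distributed gradient method on a four-agent ring with quadratic local costs. The topology, mixing weights, and diminishing step size are all assumptions for the example.

```python
# Four agents on a ring, each with local cost f_i(x) = (x - a_i)^2.
# The global problem min sum_i f_i(x) has the minimiser mean(a) = 4.0.
a = [1.0, 3.0, 5.0, 7.0]
n = len(a)
x = [0.0] * n            # each agent's local estimate

def grad(i, xi):
    """Gradient of the i-th local cost."""
    return 2.0 * (xi - a[i])

for k in range(1, 5001):
    alpha = 1.0 / k      # diminishing step size
    x_new = []
    for i in range(n):
        left, right = x[(i - 1) % n], x[(i + 1) % n]
        # doubly stochastic consensus weights on the ring
        avg = 0.5 * x[i] + 0.25 * left + 0.25 * right
        x_new.append(avg - alpha * grad(i, x[i]))
    x = x_new
# all agents approach the global minimiser 4.0
```

    Adding a bounded delay to the neighbour states is what makes the Lyapunov-Krasovskii analysis in the paper necessary; this undelayed sketch only shows the nominal convergence mechanism.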

  19. System Engineering Infrastructure Evolution Galileo IOV and the Steps Beyond

    NASA Astrophysics Data System (ADS)

    Eickhoff, J.; Herpel, H.-J.; Steinle, T.; Birn, R.; Steiner, W.-D.; Eisenmann, H.; Ludwig, T.

    2009-05-01

    The trend towards increasingly constrained budgets in satellite engineering requires continuous optimization of S/C system engineering processes and infrastructure. In recent years, Astrium has built up a system simulation infrastructure - the "Model-based Development & Verification Environment" (MDVE) - which is meanwhile well known all over Europe and established as Astrium's standard approach for ESA and DLR projects, and now even for the EU/ESA Galileo IOV project. The key feature of the MDVE / FVE approach is to provide an entire S/C simulation (with full-featured OBC simulation) already in early phases, so that OBSW code tests can start on a simulated S/C, and to later add hardware in the loop step by step up to an entire "Engineering Functional Model (EFM)" or "FlatSat". The subsequent enhancements of this simulator infrastructure w.r.t. spacecraft design data handling are reported in the following sections.

  20. Making a Computer Model of the Most Complex System Ever Built - Continuum

    Science.gov Websites

    Eastern Interconnection, all as a function of time. All told, that's about 1,000 gigabytes of data. [...] As the modeling software steps forward in time, those decisions affect how the grid operates. [...] Modeling the Eastern Interconnection at five-minute intervals for one year would have required more than 400 days of computing time.

  1. Regularized lattice Boltzmann model for immiscible two-phase flows with power-law rheology

    NASA Astrophysics Data System (ADS)

    Ba, Yan; Wang, Ningning; Liu, Haihu; Li, Qiang; He, Guoqiang

    2018-03-01

    In this work, a regularized lattice Boltzmann color-gradient model is developed for the simulation of immiscible two-phase flows with power-law rheology. This model is as simple as the Bhatnagar-Gross-Krook (BGK) color-gradient model except that an additional regularization step is introduced prior to the collision step. In the regularization step, the pseudo-inverse method is adopted as an alternative solution for the nonequilibrium part of the total distribution function, and it can be easily extended to other discrete velocity models whether or not a forcing term is considered. The obtained expressions for the nonequilibrium part are merely related to macroscopic variables and velocity gradients that can be evaluated locally. Several numerical examples, including the single-phase and two-phase layered power-law fluid flows between two parallel plates, and the droplet deformation and breakup in a simple shear flow, are conducted to test the capability and accuracy of the proposed color-gradient model. Results show that the present model is more stable and accurate than the BGK color-gradient model for power-law fluids with a wide range of power-law indices. Compared to its multiple-relaxation-time counterpart, the present model can increase the computing efficiency by around 15%, while keeping the same accuracy and stability. Also, the present model is found to be capable of reasonably predicting the critical capillary number of droplet breakup.
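    The power-law ingredient can be sketched independently of the full color-gradient scheme: the apparent viscosity follows nu = K * |shear rate|^(n-1), which sets a local BGK relaxation time in lattice units. The constants below and the small-shear-rate cutoff are illustrative assumptions, not values from the paper.

```python
def power_law_tau(shear_rate, K=0.01, n=0.5, dt=1.0, cs2=1.0 / 3.0):
    """Local BGK relaxation time for a power-law fluid in lattice units.
    Apparent viscosity: nu = K * |shear_rate|**(n - 1); tau = nu/cs2 + dt/2.
    K: consistency index; n: power-law index (n < 1 is shear-thinning)."""
    gamma = max(abs(shear_rate), 1e-12)   # avoid divergence as gamma -> 0
    nu = K * gamma ** (n - 1.0)
    return nu / cs2 + 0.5 * dt
```

    For a shear-thinning index n < 1 the relaxation time drops as the local shear rate grows, which is exactly the locality that lets the regularized collision use velocity gradients evaluated on the node itself.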

  2. Direct model-based predictive control scheme without cost function for voltage source inverters with reduced common-mode voltage

    NASA Astrophysics Data System (ADS)

    Kim, Jae-Chang; Moon, Sung-Ki; Kwak, Sangshin

    2018-04-01

    This paper presents a direct model-based predictive control scheme for voltage source inverters (VSIs) with reduced common-mode voltages (CMVs). The developed method directly finds optimal vectors without the repetitive calculation of a cost function. To adjust output currents while keeping the CMVs in the range of -Vdc/6 to +Vdc/6, the developed method uses, as finite control resources, only the non-zero voltage vectors, excluding the zero voltage vectors that produce CMVs of ±Vdc/2 in the VSI. In a model-based predictive control (MPC), not using zero voltage vectors increases the output current ripples and the current errors. To alleviate these problems, the developed method uses two non-zero voltage vectors in one sampling step. In addition, the voltage vectors to be used are directly selected at every sampling step once the developed method calculates the future reference voltage vector, saving the effort of repeatedly evaluating a cost function. The two non-zero voltage vectors are also optimally allocated so that the output current approaches the reference current as closely as possible. Thus, low CMV, rapid current-following capability and satisfactory output current ripple performance are attained by the developed method. The results of a simulation and an experiment verify the effectiveness of the developed method.

  3. Evaluation and inversion of a net ecosystem carbon exchange model for grasslands and croplands

    NASA Astrophysics Data System (ADS)

    Herbst, M.; Klosterhalfen, A.; Weihermueller, L.; Graf, A.; Schmidt, M.; Huisman, J. A.; Vereecken, H.

    2017-12-01

    A one-dimensional soil water, heat, and CO2 flux model (SOILCO2), a pool concept of soil carbon turnover (RothC), and a crop growth module (SUCROS) were coupled to predict the net ecosystem exchange (NEE) of carbon. This model, further referred to as AgroC, was extended with routines for managed grassland as well as for root exudation and root decay. In a first step, the coupled model was applied to two winter wheat sites and one upland grassland site in Germany. The model was calibrated based on soil water content, soil temperature, biometric, and soil respiration measurements for each site, and validated in terms of hourly NEE measured with the eddy covariance technique. The overall model performance of AgroC was acceptable, with a model efficiency >0.78 for NEE. In a second step, AgroC was optimized against the eddy covariance NEE measurements to examine the effect of various objective functions, constraints, and data transformations on estimated NEE, which showed a distinct sensitivity to the choice of objective function and the inclusion of soil respiration data in the optimization process. Both daytime and nighttime fluxes were found to be sensitive to the selected optimization strategy. Additional consideration of soil respiration measurements markedly improved the simulation of small positive fluxes. Even though the model performance of the selected optimization strategies did not diverge substantially, the resulting annual NEE differed considerably. We conclude that data transformations, the definition of objective functions, and data sources have to be considered carefully when using a terrestrial ecosystem model to determine carbon balances by means of eddy covariance measurements.
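    The sensitivity to the objective function can be illustrated with a toy example: the same residuals scored with a plain RMSE and with an RMSE on signed-square-root-transformed fluxes, a transform that down-weights large peak fluxes relative to the small positive respiration fluxes. The transform choice and the flux values are hypothetical, not those used for AgroC.

```python
import math

def rmse(obs, sim):
    """Root-mean-square error between observed and simulated series."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def rmse_transformed(obs, sim):
    """RMSE after a signed square-root transform, which reduces the
    dominance of large-magnitude fluxes in the objective function."""
    f = lambda v: math.copysign(math.sqrt(abs(v)), v)
    return rmse([f(o) for o in obs], [f(s) for s in sim])

obs = [-12.0, -8.0, -1.0, 0.5, 2.0]   # hypothetical hourly NEE values
sim = [-10.0, -9.0, -0.5, 0.8, 1.5]
```

    An optimizer minimising the transformed criterion will favour parameter sets that fit the small positive (respiration-dominated) fluxes better, which mirrors the abstract's finding that transformation choice shifts the annual NEE.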

  4. Influence of ultrasound speckle tracking strategies for motion and strain estimation.

    PubMed

    Curiale, Ariel H; Vegas-Sánchez-Ferrero, Gonzalo; Aja-Fernández, Santiago

    2016-08-01

    Speckle Tracking is one of the most prominent techniques used to estimate the regional movement of the heart based on ultrasound acquisitions. Many different approaches have been proposed, proving their suitability to obtain quantitative and qualitative information regarding myocardial deformation, motion and function assessment. New proposals to improve the basic algorithm usually focus on one of three steps: (1) the similarity measure between images and the speckle model; (2) the transformation model, i.e. the type of motion considered between images; (3) the optimization strategy, such as the use of different optimization techniques in the transformation step or the inclusion of structural information. While many contributions have shown good performance independently, it is not always clear how they perform when integrated into a whole pipeline. Every step has a degree of influence over the following ones and hence over the final result. Thus, a Speckle Tracking pipeline must be analyzed as a whole when developing novel methods, since improvements in a particular step might be undermined by the choices taken in later steps. This work presents two main contributions: (1) we provide a complete analysis of the influence of the different steps in a Speckle Tracking pipeline on motion and strain estimation accuracy; (2) we propose a methodology for the analysis of Speckle Tracking systems specifically designed to provide an easy and systematic way to include other strategies. We close the analysis with conclusions and recommendations that can be used as a guide to the degree of influence of the speckle models, transformation models, interpolation schemes and optimization strategies on the estimation of motion features. They can further be used to evaluate and design new strategies within a Speckle Tracking system. Copyright © 2016 Elsevier B.V. All rights reserved.
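    The three-step decomposition can be made concrete with a deliberately minimal 1-D tracking sketch: SSD as the similarity measure, a translation-only transformation model, and exhaustive search as the optimization strategy. This is an assumption-laden toy, not the paper's pipeline, but it shows how the three choices plug together.

```python
def ssd(a, b):
    """Step 1 - similarity measure: sum of squared differences."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def block_match(frame0, frame1, block=3, search=4):
    """Steps 2 and 3 - translation-only transformation model, optimised by
    exhaustive search of integer shifts, one shift per block of frame0."""
    shifts = []
    for start in range(0, len(frame0) - block + 1, block):
        ref = frame0[start:start + block]
        best = min(
            (d for d in range(-search, search + 1)
             if 0 <= start + d and start + d + block <= len(frame1)),
            key=lambda d: ssd(ref, frame1[start + d:start + d + block]),
        )
        shifts.append(best)
    return shifts

# synthetic frame pair: the speckle pattern moves 2 samples to the right
frame0 = [0, 0, 5, 9, 5, 0, 0, 0, 0]
frame1 = [0, 0, 0, 0, 5, 9, 5, 0, 0]
```

    Swapping any one component (e.g. normalized cross-correlation for SSD, or an affine transform for pure translation) changes the estimates of the others, which is precisely the pipeline-interaction effect the study quantifies.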

  5. Quantitative analysis of the thermal requirements for stepwise physical dormancy-break in seeds of the winter annual Geranium carolinianum (Geraniaceae)

    PubMed Central

    Gama-Arachchige, N. S.; Baskin, J. M.; Geneve, R. L.; Baskin, C. C.

    2013-01-01

    Background and Aims Physical dormancy (PY)-break in some annual plant species is a two-step process controlled by two different temperature and/or moisture regimes. The thermal time model has been used to quantify PY-break in several species of Fabaceae, but not to describe stepwise PY-break. The primary aims of this study were to quantify the thermal requirement for sensitivity induction by developing a thermal time model and to propose a mechanism for stepwise PY-breaking in the winter annual Geranium carolinianum. Methods Seeds of G. carolinianum were stored under dry conditions at different constant and alternating temperatures to induce sensitivity (step I). Sensitivity induction was analysed based on the thermal time approach using the Gompertz function. The effect of temperature on step II was studied by incubating sensitive seeds at low temperatures. Scanning electron microscopy, penetrometer techniques, and different humidity levels and temperatures were used to explain the mechanism of stepwise PY-break. Key Results The base temperature (Tb) for sensitivity induction was 17·2 °C and constant for all seed fractions of the population. Thermal time for sensitivity induction during step I in the PY-breaking process agreed with the three-parameter Gompertz model. Step II (PY-break) did not agree with the thermal time concept. Q10 values for the rate of sensitivity induction and PY-break were between 2·0 and 3·5 and between 0·02 and 0·1, respectively. The force required to separate the water gap palisade layer from the sub-palisade layer was significantly reduced after sensitivity induction. Conclusions Step I and step II in PY-breaking of G. carolinianum are controlled by chemical and physical processes, respectively. This study indicates the feasibility of applying the developed thermal time model to predict or manipulate sensitivity induction in seeds with two-step PY-breaking processes. 
The model is the first and most detailed one yet developed for sensitivity induction in PY-break. PMID:23456728
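    The thermal time idea for step I can be sketched numerically: degree-days accumulate above the base temperature (Tb = 17.2 degC, from the study), and the sensitive fraction of the seed population follows a three-parameter Gompertz curve of that accumulated thermal time. The Gompertz parameters below are illustrative placeholders, not the fitted values.

```python
import math

def thermal_time(temps, t_base=17.2):
    """Accumulated thermal time (degree-days) above the base temperature."""
    return sum(max(0.0, t - t_base) for t in temps)

def gompertz(theta, a=100.0, b=5.0, c=0.05):
    """Three-parameter Gompertz curve: percentage of seeds made sensitive
    after accumulating thermal time theta (a, b, c are illustrative)."""
    return a * math.exp(-b * math.exp(-c * theta))

days_at_25C = [25.0] * 30          # 30 d of dry storage at a constant 25 degC
theta = thermal_time(days_at_25C)  # (25 - 17.2) * 30 = 234 degree-days
```

    Temperatures below Tb contribute nothing, so sensitivity induction stalls in cool storage; step II (the actual PY-break) did not follow this thermal time behaviour, consistent with its proposed physical rather than chemical mechanism.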

  6. Relationship of functional fitness with daily steps in community-dwelling older adults.

    PubMed

    de Melo, Lucelia Luna; Menec, Verena H; Ready, A Elizabeth

    2014-01-01

    Walking is the main type of physical activity among community-dwelling older adults, and it is associated with various health benefits. However, there is limited evidence about the relationship between functional fitness and walking performed under independent living conditions among older adults. This study examined the relationship between functional fitness and steps walked per day among older adults, both assessed objectively with performance-based measures, accounting for the effects of age, gender, and chronic conditions. In this cross-sectional study, 60 participants aged 65 years or older (mean = 76.9 ± 7.3 years, range 65-92 years) wore pedometers for 3 consecutive days. Functional fitness was measured using the Functional Fitness Test (lower and upper body strength, endurance, lower and upper body flexibility, agility/balance). The outcome measure was the mean number of steps walked over the 3 days, with participants classified into tertiles: low walkers (<3000 steps), medium walkers (≥3000 and <6500 steps), and high walkers (≥6500 steps). After controlling for age, gender, and the number of chronic conditions, none of the functional fitness parameters was significantly associated with steps taken per day when comparing medium walkers with low walkers. In contrast, all functional fitness parameters, except upper body flexibility, were significantly associated with steps taken per day when comparing high walkers with low walkers. In this sample of older adults, greater functional fitness was associated only with relatively high levels of walking involving 6500 steps per day or more. It was not related to medium walking levels. The findings point to the importance of interventions to maintain or enhance functional fitness among older adults.

  7. Film thickness measurement for spiral groove and Rayleigh step lift pad self-acting face seals

    NASA Technical Reports Server (NTRS)

    Dirusso, E.

    1982-01-01

    One Rayleigh step lift pad and three spiral groove self-acting face seal configurations were tested to measure film thickness and frictional torque as a function of shaft speed. The seals were tested at a constant face load of 73 N (16.4 lb) with ambient air at room temperature and atmospheric pressure as the fluid medium. The test speed range was from 7000 to 17,000 rpm. The measured film thickness was compared with theoretical data from mathematical models. The mathematical models overpredicted the measured film thickness at the lower speeds of the test speed range and underpredicted the measured film thickness at the higher speeds of the test speed range.

  8. Parametric correlation functions to model the structure of permanent environmental (co)variances in milk yield random regression models.

    PubMed

    Bignardi, A B; El Faro, L; Cardoso, V L; Machado, P F; Albuquerque, L G

    2009-09-01

    The objective of the present study was to estimate milk yield genetic parameters applying random regression models and parametric correlation functions combined with a variance function to model animal permanent environmental effects. A total of 152,145 test-day milk yields from 7,317 first lactations of Holstein cows belonging to herds located in the southeastern region of Brazil were analyzed. Test-day milk yields were divided into 44 weekly classes of days in milk. Contemporary groups were defined by herd-test-day comprising a total of 2,539 classes. The model included direct additive genetic, permanent environmental, and residual random effects. The following fixed effects were considered: contemporary group, age of cow at calving (linear and quadratic regressions), and the population average lactation curve modeled by fourth-order orthogonal Legendre polynomial. Additive genetic effects were modeled by random regression on orthogonal Legendre polynomials of days in milk, whereas permanent environmental effects were estimated using a stationary or nonstationary parametric correlation function combined with a variance function of different orders. The structure of residual variances was modeled using a step function containing 6 variance classes. The genetic parameter estimates obtained with the model using a stationary correlation function associated with a variance function to model permanent environmental effects were similar to those obtained with models employing orthogonal Legendre polynomials for the same effect. A model using a sixth-order polynomial for additive effects and a stationary parametric correlation function associated with a seventh-order variance function to model permanent environmental effects would be sufficient for data fitting.
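    The residual structure described above is a step function in the literal sense: each block of days in milk is assigned one variance. The sketch below shows that mapping directly; the class boundaries and variance values are made up for illustration, since the abstract does not report the six fitted classes.

```python
def residual_variance(week,
                      bounds=(5, 10, 20, 30, 40),
                      variances=(3.2, 2.1, 1.6, 1.4, 1.7, 2.5)):
    """Step function mapping week of lactation (1-44) to one of six
    residual variance classes. Boundaries/values are purely illustrative:
    five upper bounds define the first five classes, the tail is class six."""
    for i, b in enumerate(bounds):
        if week <= b:
            return variances[i]
    return variances[-1]
```

    A step function like this keeps the residual model parsimonious (6 parameters for 44 weekly classes) while still letting early- and late-lactation records carry larger residual variances than mid-lactation ones.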

  9. A Data-Driven, Integrated Flare Model Based on Self-Organized Criticality

    NASA Astrophysics Data System (ADS)

    Dimitropoulou, M.; Isliker, H.; Vlahos, L.; Georgoulis, M.

    2013-09-01

    We interpret solar flares as events originating in solar active regions having reached the self-organized critical state, by alternatively using two versions of an "integrated flare model" - one static and one dynamic. In both versions the initial conditions are derived from observations aiming to investigate whether well-known scaling laws observed in the distribution functions of characteristic flare parameters are reproduced after the self-organized critical state has been reached. In the static model, we first apply a nonlinear force-free extrapolation that reconstructs the three-dimensional magnetic fields from two-dimensional vector magnetograms. We then locate magnetic discontinuities exceeding a threshold in the Laplacian of the magnetic field. These discontinuities are relaxed in local diffusion events, implemented in the form of cellular-automaton evolution rules. Subsequent loading and relaxation steps lead the system to self-organized criticality, after which the statistical properties of the simulated events are examined. In the dynamic version we deploy an enhanced driving mechanism, which utilizes the observed evolution of active regions, making use of sequential vector magnetograms. We first apply the static cellular automaton model to consecutive solar vector magnetograms until the self-organized critical state is reached. We then evolve the magnetic field in between these processed snapshots through spline interpolation, acting as a natural driver in the dynamic model. The identification of magnetically unstable sites as well as their relaxation follow the same rules as in the static model after each interpolation step. Subsequent interpolation/driving and relaxation steps cover all transitions until the end of the sequence. Physical requirements, such as the divergence-free condition for the magnetic field vector, are approximately satisfied in both versions of the model. 
We obtain robust power laws in the distribution functions of the modelled flaring events with scaling indices in good agreement with observations. We therefore conclude that well-known statistical properties of flares are reproduced after active regions reach self-organized criticality. The significant enhancement in both the static and the dynamic integrated flare models is that they initiate the simulation from observations, thus facilitating energy calculation in physical units. Especially in the dynamic version of the model, the driving of the system is based on observed, evolving vector magnetograms, allowing for the separation between MHD and kinetic timescales through the assignment of distinct MHD timestamps to each interpolation step.
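    The loading/relaxation cycle that drives a system to self-organized criticality can be sketched with the classic sandpile cellular automaton, a much simpler stand-in for the magnetogram-driven model above: load one unit at a random site, then relax every cell that reaches the critical threshold, counting topplings as the "event size". Grid size, threshold, and drive count are arbitrary choices for the demo.

```python
import random

def relax(grid, size, zc=4):
    """Relaxation step: cells at or above the critical height zc topple,
    sending one unit to each of four neighbours (edge grains are lost)."""
    topplings = 0
    unstable = [(i, j) for i in range(size) for j in range(size)
                if grid[i][j] >= zc]
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < zc:
            continue
        grid[i][j] -= 4
        topplings += 1
        if grid[i][j] >= zc:               # may still be unstable
            unstable.append((i, j))
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < size and 0 <= nj < size:
                grid[ni][nj] += 1
                if grid[ni][nj] >= zc:
                    unstable.append((ni, nj))
    return topplings

random.seed(1)
size = 10
grid = [[0] * size for _ in range(size)]
avalanche_sizes = []
for _ in range(3000):                      # loading step: one unit per drive
    i, j = random.randrange(size), random.randrange(size)
    grid[i][j] += 1
    avalanche_sizes.append(relax(grid, size))
```

    Once the pile reaches the critical state, the avalanche-size distribution develops the heavy, power-law-like tail that the integrated flare model exploits, with the observed magnetograms replacing the random drive.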

  10. surrosurv: An R package for the evaluation of failure time surrogate endpoints in individual patient data meta-analyses of randomized clinical trials.

    PubMed

    Rotolo, Federico; Paoletti, Xavier; Michiels, Stefan

    2018-03-01

    Surrogate endpoints are attractive for use in clinical trials instead of well-established endpoints because of practical convenience. To validate a surrogate endpoint, two important measures can be estimated in a meta-analytic context when individual patient data are available: the individual-level R² (R²_indiv) or Kendall's τ, and the trial-level R² (R²_trial). We aimed at providing an R implementation of classical and well-established as well as more recent statistical methods for surrogacy assessment with failure time endpoints. We also intended to incorporate utilities for model checking and visualization, together with the data-generating methods described in the literature to date. In the case of failure time endpoints, the classical approach is based on two steps. First, a Kendall's τ is estimated as a measure of individual-level surrogacy using a copula model. Then, the R²_trial is computed via a linear regression of the estimated treatment effects; at this second step, the estimation uncertainty can be accounted for via a measurement-error model or via weights. In addition to the classical approach, we recently developed an approach based on bivariate auxiliary Poisson models, with individual random effects to measure the Kendall's τ and treatment-by-trial interactions to measure the R²_trial. The most common data simulation models described in the literature are based on copula models, mixed proportional hazard models, and mixtures of half-normal and exponential random variables. The R package surrosurv implements the classical two-step method with Clayton, Plackett, and Hougaard copulas. It also allows the second-step linear regression to be optionally adjusted for measurement error. The mixed Poisson approach is implemented with different reduced models in addition to the full model. 
We present the package functions for estimating the surrogacy models, for checking their convergence, for performing leave-one-trial-out cross-validation, and for plotting the results. We illustrate their use in practice on individual patient data from a meta-analysis of 4069 patients with advanced gastric cancer from 20 trials of chemotherapy. The surrosurv package provides an R implementation of classical and recent statistical methods for surrogacy assessment of failure time endpoints. Flexible simulation functions are available to generate data according to the methods described in the literature. Copyright © 2017 Elsevier B.V. All rights reserved.
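    The two-step logic can be sketched outside R with heavily simplified stand-ins: a plain Kendall rank correlation for the individual-level step (instead of a copula model) and an unweighted regression R² of per-trial treatment effects for the trial-level step (instead of the measurement-error-adjusted regression). The per-trial log hazard ratios below are hypothetical.

```python
def kendall_tau(xs, ys):
    """Step 1 (simplified): plain Kendall rank correlation as the
    individual-level association, ignoring censoring and copulas."""
    n, conc, pairs = len(xs), 0, 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (xs[i] - xs[j]) * (ys[i] - ys[j])
            if s != 0:
                pairs += 1
                conc += 1 if s > 0 else -1
    return conc / pairs

def trial_level_r2(alphas, betas):
    """Step 2 (simplified): unweighted R^2 of the regression of the
    treatment effects on the true endpoint (betas) on those on the
    surrogate (alphas), with no measurement-error adjustment."""
    n = len(alphas)
    ma, mb = sum(alphas) / n, sum(betas) / n
    sab = sum((a - ma) * (b - mb) for a, b in zip(alphas, betas))
    saa = sum((a - ma) ** 2 for a in alphas)
    sbb = sum((b - mb) ** 2 for b in betas)
    return sab * sab / (saa * sbb)

# hypothetical per-trial log hazard ratios (surrogate, true endpoint)
alphas = [-0.50, -0.30, -0.10, 0.05, -0.40]
betas = [-0.45, -0.25, -0.12, 0.02, -0.35]
r2 = trial_level_r2(alphas, betas)
```

    In surrosurv itself, step 1 uses Clayton/Plackett/Hougaard copulas on the failure times and step 2 can weight or error-adjust the regression; this sketch only mirrors the structure of the two levels.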

  11. Prediction of the backflow and recovery regions in the backward facing step at various Reynolds numbers

    NASA Technical Reports Server (NTRS)

    Michelassi, V.; Durbin, P. A.; Mansour, N. N.

    1996-01-01

    A four-equation turbulence model is applied to the numerical simulation of flows with massive separation induced by a sudden expansion. The model constants are functions of the flow parameters, and two different formulations for these functions are tested. The results are compared with experimental data for a high Reynolds-number case and with experimental and DNS data for a low Reynolds-number case. The computations show that the recovery region downstream of the massive separation is properly modeled only in the high Re case. The problems in the low Re case stem from the gradient diffusion hypothesis, which underestimates the turbulent diffusion.

  12. Computing quantum hashing in the model of quantum branching programs

    NASA Astrophysics Data System (ADS)

    Ablayev, Farid; Ablayev, Marat; Vasiliev, Alexander

    2018-02-01

    We investigate the branching program complexity of quantum hashing. We consider a quantum hash function that maps elements of a finite field into quantum states. We require that this function is preimage-resistant and collision-resistant. We consider two complexity measures for Quantum Branching Programs (QBP): the number of qubits and the number of computational steps. We show that the quantum hash function can be computed efficiently. Moreover, we prove that such a QBP construction is optimal. That is, we prove lower bounds that match the complexity of the constructed quantum hash function.
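    The amplitude form of such a quantum hash can be sketched classically: an input x in Z_p is encoded by rotation angles 2*pi*b*x/p for a small set of keys b, and collision resistance corresponds to the overlap between the states of distinct inputs staying well below 1. The modulus and key set below are arbitrary small choices, not the construction from the paper.

```python
import math

def quantum_hash(x, p=31, keys=(1, 5, 7, 11)):
    """Amplitude sketch of a quantum hash of x in Z_p: one (cos, sin)
    amplitude pair per key b, with rotation angle 2*pi*b*x/p.
    The keys here are arbitrary illustrative choices."""
    norm = 1.0 / math.sqrt(len(keys))
    state = []
    for b in keys:
        angle = 2.0 * math.pi * b * x / p
        state.append((norm * math.cos(angle), norm * math.sin(angle)))
    return state

def overlap(s, t):
    """Absolute inner product of two hash states; collision resistance
    means this stays bounded away from 1 whenever the inputs differ."""
    return abs(sum(c1 * c2 + s1 * s2
                   for (c1, s1), (c2, s2) in zip(s, t)))
```

    Preimage resistance comes from the fact that measuring the few stored qubits reveals only limited information about x; the QBP results concern how few qubits and steps suffice to prepare such states.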

  13. Use of distributed water level and soil moisture data in the evaluation of the PUMMA periurban distributed hydrological model: application to the Mercier catchment, France

    NASA Astrophysics Data System (ADS)

    Braud, Isabelle; Fuamba, Musandji; Branger, Flora; Batchabani, Essoyéké; Sanzana, Pedro; Sarrazin, Benoit; Jankowfsky, Sonja

    2016-04-01

    Distributed hydrological models are used at best when their outputs are compared not only to the outlet discharge, but also to internal observed variables, so that they can be used as powerful hypothesis-testing tools. In this paper, the value of distributed networks of sensors for evaluating a distributed model and its underlying functioning hypotheses is explored. Two types of data are used: surface soil moisture and water level in streams. The model used in the study is the periurban model PUMMA (Peri-Urban Model for landscape Management, Jankowfsky et al., 2014), which is applied to the Mercier catchment (6.7 km2), a semi-rural catchment with 14% imperviousness located close to Lyon, France, where distributed water level (13 locations) and surface soil moisture (9 locations) data are available. Model parameters are specified using in situ information or the results of previous studies, without any calibration, and the model is run for four years from January 1st 2007 to December 31st 2010, with a variable time step for rainfall and an hourly time step for reference evapotranspiration. The model evaluation protocol was guided by the available data and how they can be interpreted in terms of hydrological processes and constraints for the model components and parameters. We followed a stepwise approach. The first step was a simple model water balance assessment, without comparison to observed data. It can be interpreted as a basic quality check for the model, ensuring that it conserves mass, distinguishes between dry and wet years, and reacts to rainfall events. The second step was an evaluation against observed discharge data at the outlet, using classical performance criteria. It gives a general picture of the model performance and allows comparing it with other studies found in the literature. In the next steps (steps 3 to 6), the focus was on more specific hydrological processes. 
    In step 3, distributed surface soil moisture data were used to assess the relevance of the simulated seasonal soil water storage dynamics. In step 4, we evaluated the base flow generation mechanisms in the model through comparison with continuous water level data transformed into stream intermittency statistics. In step 5, the water level data were used again, but at the event time scale, to evaluate the fast flow generation components through comparison of modelled and observed reaction and response times. Finally, in step 6, we studied the correlation between observed and simulated reaction and response times and various characteristics of the rainfall events (rain volume, intensity) and antecedent soil moisture, to see if the model was able to reproduce the observed features as described in Sarrazin (2012). The results show that the model is able to represent the soil water storage dynamics and stream intermittency satisfactorily. On the other hand, the model does not reproduce the response times or the difference in response between forested and agricultural areas. References: Jankowfsky et al., 2014. Assessing anthropogenic influence on the hydrology of small peri-urban catchments: Development of the object-oriented PUMMA model by integrating urban and rural hydrological models. J. Hydrol., 517, 1056-1071. Sarrazin, B., 2012. MNT et observations multi-locales du réseau hydrographique d'un petit bassin versant rural dans une perspective d'aide à la modélisation hydrologique. Ecole doctorale Terre, Univers, Environnement, Institut National Polytechnique de Grenoble, 269 pp (in French).
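    The "classical performance criteria" of step 2 typically include the Nash-Sutcliffe efficiency, which can be computed in a few lines. The discharge values below are hypothetical; only the formula is standard.

```python
def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency (NSE): 1 is a perfect fit, 0 means the
    model is no better than the mean of the observations, negative worse."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

obs = [0.2, 0.5, 1.8, 3.1, 1.2, 0.6]   # hypothetical discharges (m3/s)
sim = [0.3, 0.4, 1.5, 2.8, 1.4, 0.7]
```

    A single outlet NSE can hide compensating errors inside the catchment, which is exactly why the stepwise protocol above adds soil moisture and stream intermittency checks on top of it.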

  14. Finite element model updating using the shadow hybrid Monte Carlo technique

    NASA Astrophysics Data System (ADS)

    Boulkaibet, I.; Mthembu, L.; Marwala, T.; Friswell, M. I.; Adhikari, S.

    2015-02-01

    Recent research in the field of finite element model (FEM) updating advocates the adoption of Bayesian analysis techniques for dealing with the uncertainties associated with these models. However, Bayesian formulations require the evaluation of the posterior distribution function, which may not be available in analytical form, as is the case in FEM updating. In such cases, sampling methods can provide good approximations of the posterior distribution when implemented in the Bayesian context. Markov Chain Monte Carlo (MCMC) algorithms are the most popular sampling tools used to sample probability distributions. However, the efficiency of these algorithms is affected by the complexity of the systems (the size of the parameter space). Hybrid Monte Carlo (HMC) offers an important MCMC approach for dealing with higher-dimensional complex problems. HMC uses molecular dynamics (MD) steps as the global Monte Carlo (MC) moves to reach areas of high probability, where the gradient of the log-density of the posterior acts as a guide during the search process. However, the acceptance rate of HMC is sensitive to the system size as well as to the time step used to evaluate the MD trajectory. To overcome this limitation, we propose the use of the Shadow Hybrid Monte Carlo (SHMC) algorithm. SHMC is a modified version of HMC designed to improve sampling for large system sizes and time steps, which it achieves by sampling from a modified Hamiltonian function instead of the normal Hamiltonian function. In this paper, the efficiency and accuracy of the SHMC method are tested on the updating of two real structures, an unsymmetrical H-shaped beam structure and a GARTEUR SM-AG19 structure, and compared with the application of the HMC algorithm to the same structures.
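    The MD-step mechanism that HMC (and hence SHMC) relies on is the leapfrog integrator. The sketch below runs plain HMC on a 1-D standard normal target; the shadow-Hamiltonian modification of SHMC is not included, and the step size, trajectory length, and target are arbitrary demo choices.

```python
import math
import random

def leapfrog(q, p, grad_u, eps, steps):
    """Leapfrog integration of Hamiltonian dynamics (the MD trajectory)."""
    p = p - 0.5 * eps * grad_u(q)          # initial half step in momentum
    for _ in range(steps - 1):
        q = q + eps * p
        p = p - eps * grad_u(q)
    q = q + eps * p
    p = p - 0.5 * eps * grad_u(q)          # final half step in momentum
    return q, p

def hmc(u, grad_u, q0, iters=5000, eps=0.2, steps=10, seed=0):
    """Plain HMC for a 1-D target density proportional to exp(-u(q))."""
    rng = random.Random(seed)
    q, samples = q0, []
    for _ in range(iters):
        p = rng.gauss(0.0, 1.0)            # fresh momentum
        q_new, p_new = leapfrog(q, p, grad_u, eps, steps)
        h_old = u(q) + 0.5 * p * p
        h_new = u(q_new) + 0.5 * p_new * p_new
        # Metropolis accept/reject on the Hamiltonian error
        if rng.random() < math.exp(min(0.0, h_old - h_new)):
            q = q_new
        samples.append(q)
    return samples

# standard normal target: U(q) = q^2 / 2, grad U = q
samples = hmc(lambda q: 0.5 * q * q, lambda q: q, q0=3.0)
```

    The acceptance test exposes exactly the sensitivity the abstract mentions: larger eps or higher dimension inflates the Hamiltonian error h_new - h_old, collapsing the acceptance rate, which is what sampling a shadow Hamiltonian is designed to mitigate.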

  15. Protocols for Molecular Modeling with Rosetta3 and RosettaScripts

    PubMed Central

    2016-01-01

    Previously, we published an article providing an overview of the Rosetta suite of biomacromolecular modeling software and a series of step-by-step tutorials [Kaufmann, K. W., et al. (2010) Biochemistry 49, 2987–2998]. The overwhelmingly positive response to this publication motivates us to share here the next iteration of these tutorials, featuring de novo folding, comparative modeling, loop construction, protein docking, small molecule docking, and protein design. This updated and expanded set of tutorials is needed because, since 2010, Rosetta has been fully redesigned into the object-oriented protein modeling program Rosetta3. Notable improvements include a substantially improved energy function, an XML-like language termed “RosettaScripts” for flexibly specifying modeling tasks, new analysis tools, the addition of the TopologyBroker to control conformational sampling, and support for multiple templates in comparative modeling. Rosetta’s ability to model systems with symmetric proteins, membrane proteins, noncanonical amino acids, and RNA has also been greatly expanded and improved. PMID:27490953

  16. Developing the Polynomial Expressions for Fields in the ITER Tokamak

    NASA Astrophysics Data System (ADS)

    Sharma, Stephen

    2017-10-01

    The two most important problems to be solved in the development of working nuclear fusion power plants are: sustained partial ignition and turbulence. These two phenomena are the subject of research and investigation through the development of analytic functions and computational models. Ansatz development through Gaussian wave-function approximations, dielectric quark models, field solutions using new elliptic functions, and better descriptions of the polynomials of the superconducting current loops are the critical theoretical developments that need to be improved. Euler-Lagrange equations of motion in addition to geodesic formulations generate the particle model which should correspond to the Dirac dispersive scattering coefficient calculations and the fluid plasma model. Feynman-Hellman formalism and Heaviside step functional forms are introduced to the fusion equations to produce simple expressions for the kinetic energy and loop currents. Conclusively, a polynomial description of the current loops, the Biot-Savart field, and the Lagrangian must be uncovered before there can be an adequate computational and iterative model of the thermonuclear plasma.

  18. Health service costs and clinical gains of psychotherapy for personality disorders: a randomized controlled trial of day-hospital-based step-down treatment versus outpatient treatment at a specialist practice

    PubMed Central

    2013-01-01

    Background Day-hospital-based treatment programmes have been recommended for poorly functioning patients with personality disorders (PD). However, more research is needed to confirm the cost-effectiveness of such extensive programmes over other, presumably simpler, treatment formats. Methods This study compared health service costs and psychosocial functioning for PD patients randomly allocated to either a day-hospital-based treatment programme combining individual and group psychotherapy in a step-down format, or outpatient individual psychotherapy at a specialist practice. It included 107 PD patients, 46% of whom had borderline PD and 40% avoidant PD. Costs included the two treatment conditions and additional primary and secondary in- and outpatient services. Psychosocial functioning was assessed using measures of global (observer-rated GAF) and occupational (self-report) functioning. Repeated assessments over three years were analysed using mixed models. Results The costs of step-down treatment were higher than those of outpatient treatment, but were offset by considerably lower costs of other health services. However, costs and clinical gains depended on the type of PD. For borderline PD patients, cost-effectiveness did not differ by treatment condition: health service costs declined during the trial, and functioning improved to mild impairment levels (GAF > 60). For avoidant PD patients, considerable adjuvant health services supplemented the outpatient format, yet clinical improvements were superior to those in the step-down condition. Conclusion Our results indicate that decisions on treatment format should differentiate between PD types. For borderline PD patients, the costs and gains of the step-down and outpatient treatment conditions did not differ. For avoidant PD patients, the outpatient format was the better alternative, leaning, however, on costly additional health services in the early phase of treatment. Trial registration Clinical Trials NCT00378248 PMID:24268099

  19. Microplates with adaptive surfaces.

    PubMed

    Akbulut, Meshude; Lakshmi, Dhana; Whitcombe, Michael J; Piletska, Elena V; Chianella, Iva; Güven, Olgun; Piletsky, Sergey A

    2011-11-14

    Here we present a new and versatile method for modifying the well surfaces of polystyrene microtiter plates (microplates) with poly(N-phenylethylene diamine methacrylamide) (poly-NPEDMA). Chemical grafting of poly-NPEDMA to the microplate surface formed thin layers of a polyaniline derivative bearing pendant methacrylamide double bonds. These served as attachment points for photochemical grafting of polymers with different functionalities, for example acrylates and methacrylates. In a model experiment, we modified poly-NPEDMA-coated microplates with a small library of polymers containing different functional groups using a two-step approach. In the first step, double bonds were activated by UV irradiation in the presence of N,N-diethyldithiocarbamic acid benzyl ester (iniferter). This enabled grafting of the polymer library in the second step by UV irradiation of solutions of the corresponding monomers in the microplate wells. The uniformity of the coatings was confirmed spectrophotometrically, by microscopic imaging, and by contact angle (CA) measurements. The feasibility of the technology has been shown by generating a small library of polymers grafted to the microplate well surfaces and screening their affinity to small molecules such as atrazine, a trio of organic dyes, and a model protein, bovine serum albumin (BSA). The stability of the polymers, reproducibility of measurement, ease of preparation, and cost-effectiveness make this approach suitable for high-throughput screening applications in materials research.

  20. Ontology-based reusable clinical document template production system.

    PubMed

    Nam, Sejin; Lee, Sungin; Kim, James G Boram; Kim, Hong-Gee

    2012-01-01

    Clinical documents embody professional clinical knowledge. This paper shows an effective clinical document template (CDT) production system that uses a clinical description entity (CDE) model, a CDE ontology, and a knowledge management system called STEP that manages ontology-based clinical description entities. The ontology represents CDEs and their inter-relations, and the STEP system stores and manages CDE ontology-based information regarding CDTs. The system also provides Web Services interfaces for search and reasoning over clinical entities. The system was populated with entities and relations extracted from 35 CDTs that were used in admission, discharge, and progress reports, as well as those used in nursing and operation functions. A clinical document template editor is shown that uses STEP.

  1. Detection of blob objects in microscopic zebrafish images based on gradient vector diffusion.

    PubMed

    Li, Gang; Liu, Tianming; Nie, Jingxin; Guo, Lei; Malicki, Jarema; Mara, Andrew; Holley, Scott A; Xia, Weiming; Wong, Stephen T C

    2007-10-01

    The zebrafish has become an important vertebrate animal model for the study of developmental biology, functional genomics, and disease mechanisms, and is also being used for drug discovery. Computerized detection of blob objects has been one of the important tasks in quantitative phenotyping of zebrafish. We present a new automated method that is able to detect blob objects, such as nuclei or cells, in microscopic zebrafish images. The method is composed of three key steps. The first step is to produce a diffused gradient vector field using a physical elastic deformable model. In the second step, the flux image is computed on the diffused gradient vector field. The third step performs thresholding and non-maximum suppression based on the flux image. We report validation and experimental results for this method using zebrafish image datasets from three independent research labs. Both the sensitivity and specificity of the method are over 90%. The method is able to differentiate closely juxtaposed or connected blob objects with high sensitivity and specificity in different situations, and is characterized by good, consistent performance in blob object detection.
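    The first two steps can be sketched with a simplified stand-in: plain heat-equation diffusion replaces the elastic-deformable-model diffusion of the paper, and the flux image is taken as minus the divergence of the normalized gradient field, so blob centres appear as flux maxima. All parameter values are illustrative.

    ```python
    import numpy as np

    def laplacian(f):
        """5-point Laplacian with periodic edges (simple stand-in)."""
        return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f)

    def blob_flux_map(image, n_diffuse=50, mu=0.2, eps=1e-12):
        """Sketch of steps 1-2: diffuse the image gradient field, then compute
        the flux image as minus the divergence of the unit gradient field."""
        gy, gx = np.gradient(image.astype(float))
        for _ in range(n_diffuse):                 # step 1: gradient vector diffusion
            gx = gx + mu * laplacian(gx)
            gy = gy + mu * laplacian(gy)
        mag = np.hypot(gx, gy) + eps               # eps suppresses far-field noise
        nx, ny = gx / mag, gy / mag                # unit vector field
        # step 2: flux image; step 3 would threshold and non-maximum-suppress it
        return -(np.gradient(nx, axis=1) + np.gradient(ny, axis=0))
    ```

    For a bright blob the gradient points inward toward the centre, so the unit field converges there, its divergence is negative, and the flux peaks at the blob centre.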

  2. Individualizing drug dosage with longitudinal data.

    PubMed

    Zhu, Xiaolu; Qu, Annie

    2016-10-30

    We propose a two-step procedure to personalize drug dosage over time under the framework of a log-linear mixed-effect model. We model patients' heterogeneity using subject-specific random effects, which are treated as the realizations of an unspecified stochastic process. We extend the conditional quadratic inference function to estimate both fixed-effect coefficients and individual random effects on a longitudinal training data sample in the first step and propose an adaptive procedure to estimate new patients' random effects and provide dosage recommendations for new patients in the second step. An advantage of our approach is that we do not impose any distribution assumption on estimating random effects. Moreover, the new approach can accommodate more general time-varying covariates corresponding to random effects. We show in theory and numerical studies that the proposed method is more efficient compared with existing approaches, especially when covariates are time varying. In addition, a real data example of a clozapine study confirms that our two-step procedure leads to more accurate drug dosage recommendations. Copyright © 2016 John Wiley & Sons, Ltd.

  3. Implicit unified gas-kinetic scheme for steady state solutions in all flow regimes

    NASA Astrophysics Data System (ADS)

    Zhu, Yajun; Zhong, Chengwen; Xu, Kun

    2016-06-01

    This paper presents an implicit unified gas-kinetic scheme (UGKS) for non-equilibrium steady state flow computation. The UGKS is a direct modeling method for flow simulation in all regimes with the updates of both macroscopic flow variables and microscopic gas distribution function. By solving the macroscopic equations implicitly, a predicted equilibrium state can be obtained first through iterations. With the newly predicted equilibrium state, the evolution equation of the gas distribution function and the corresponding collision term can be discretized in a fully implicit way for fast convergence through iterations as well. The lower-upper symmetric Gauss-Seidel (LU-SGS) factorization method is implemented to solve both macroscopic and microscopic equations, which improves the efficiency of the scheme. Since the UGKS is a direct modeling method and its physical solution depends on the mesh resolution and the local time step, a physical time step needs to be fixed before using an implicit iterative technique with a pseudo-time marching step. Therefore, the physical time step in the current implicit scheme is determined in the same way as in the explicit UGKS for capturing the physical solution in all flow regimes, but the convergence to a steady state speeds up through the adoption of a numerical time step with large CFL number. Many numerical test cases in different flow regimes from low speed to hypersonic ones, such as the Couette flow, cavity flow, and the flow passing over a cylinder, are computed to validate the current implicit method. The overall efficiency of the implicit UGKS can be improved by one or two orders of magnitude in comparison with the explicit one.

  4. Diffusion on Cu surfaces

    NASA Technical Reports Server (NTRS)

    Karimi, Majid

    1993-01-01

    Understanding surface diffusion is essential to understanding surface phenomena such as crystal growth, thin film growth, corrosion, physisorption, and chemisorption. Because of its importance, various experimental and theoretical efforts have been directed at these phenomena. The Field Ion Microscope (FIM) has been the major experimental tool for studying surface diffusion and has been employed by various research groups to study the diffusion of adatoms. Because of limitations of the FIM, such studies are limited to a few metals: nickel, platinum, aluminum, iridium, tungsten, and rhodium. From the theoretical standpoint, various atomistic simulations have been performed to study surface diffusion. In most of these calculations the Embedded Atom Method (EAM) is utilized along with molecular statics (MS) simulation. The EAM is a semi-empirical approach for modeling interatomic interactions. The MS simulation is a technique for minimizing the total energy of a system of particles with respect to the positions of its particles. One objective of this work is to develop EAM functions for Cu and use them in conjunction with MS simulation to study diffusion of a Cu atom on perfect as well as stepped Cu(100) surfaces. This provides a test of the validity of the EAM functions on the Cu(100) surface and near stepped environments. In particular, we construct a terrace-ledge-kink (TLK) model and calculate the migration energies of an atom on a terrace, near a ledge site, near a kink site, and going over a descending step. We have also calculated formation energies of an atom on the bare surface, a vacancy in the surface, a stepped surface, and a stepped-kink surface. Our results are compared with the available experimental and theoretical results.

  5. Investigations for Thermal and Electrical Conductivity of ABS-Graphene Blended Prototypes

    PubMed Central

    Singh, Rupinder; Sandhu, Gurleen S.; Penna, Rosa; Farina, Ilenia

    2017-01-01

    Thermoplastic materials such as acrylonitrile-butadiene-styrene (ABS) and Nylon have wide application in three-dimensional printing of functional/non-functional prototypes. Usually these polymer-based prototypes lack thermal and electrical conductivity. Graphene (Gr) has attracted considerable interest in the recent past due to its inherent mechanical, thermal, and electrical properties. This paper presents a step-by-step procedure (as a case study) for development of an in-house ABS-Gr blended composite feedstock filament for fused deposition modelling (FDM) applications. The feedstock filament was prepared by two different methods (mechanical and chemical mixing). For mechanical mixing, a twin screw extrusion (TSE) process was used; for chemical mixing, the composite of Gr in an ABS matrix was prepared by chemical dissolution, followed by mechanical blending through TSE. Finally, the electrical and thermal conductivity of functional prototypes prepared from the composite feedstock filaments were optimized. PMID:28773244

  6. A three-step maximum a posteriori probability method for InSAR data inversion of coseismic rupture with application to the 14 April 2010 Mw 6.9 Yushu, China, earthquake

    NASA Astrophysics Data System (ADS)

    Sun, Jianbao; Shen, Zheng-Kang; Bürgmann, Roland; Wang, Min; Chen, Lichun; Xu, Xiwei

    2013-08-01

    We develop a three-step maximum a posteriori probability method for coseismic rupture inversion, which aims at maximizing the posterior probability density function (PDF) of elastic deformation solutions of earthquake rupture. The method originates from the fully Bayesian inversion and mixed linear-nonlinear Bayesian inversion methods and shares the same posterior PDF with them, while overcoming difficulties with convergence when large numbers of low-quality data are used and greatly improving the convergence rate using optimization procedures. A highly efficient global optimization algorithm, adaptive simulated annealing, is used to search for the maximum of the posterior PDF ("mode" in statistics) in the first step. The second-step inversion approaches the "true" solution further using the Monte Carlo inversion technique with positivity constraints, with all parameters obtained from the first step as the initial solution. Slip artifacts are then eliminated from the slip models in the third step using the same procedure as the second step, with fixed fault geometry parameters. We first design a fault model with 45° dip angle and oblique slip, and produce corresponding synthetic interferometric synthetic aperture radar (InSAR) data sets to validate the reliability and efficiency of the new method. We then apply the method to InSAR data inversion for the coseismic slip distribution of the 14 April 2010 Mw 6.9 Yushu, China earthquake. Our preferred slip model is composed of three segments, with most of the slip occurring within 15 km depth and a maximum slip of 1.38 m at the surface. The seismic moment released is estimated to be 2.32 × 10^19 N m, consistent with the seismic estimate of 2.50 × 10^19 N m.
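    The mode search of step 1 can be illustrated with a generic (non-adaptive) simulated annealing loop on a toy negative-log-posterior. The cooling schedule, Gaussian proposal scale, and bounds below are illustrative assumptions, not the paper's adaptive scheme.

    ```python
    import math
    import random

    def anneal_map(neg_log_post, x0, lo, hi, n_iter=6000, t0=1.0, seed=0):
        """Generic simulated-annealing search for the posterior mode."""
        rng = random.Random(seed)
        x = list(x0)
        e = neg_log_post(x)
        best_x, best_e = list(x), e
        for k in range(n_iter):
            t = t0 / (1.0 + 0.01 * k)            # simple cooling schedule
            scale = 0.05 * math.sqrt(t)          # proposals shrink as T drops
            cand = [min(max(xi + rng.gauss(0.0, scale * (h - l)), l), h)
                    for xi, l, h in zip(x, lo, hi)]
            e_cand = neg_log_post(cand)
            # accept downhill moves always, uphill moves with Boltzmann probability
            if e_cand < e or rng.random() < math.exp((e - e_cand) / t):
                x, e = cand, e_cand
                if e < best_e:
                    best_x, best_e = list(x), e
        return best_x, best_e
    ```

    In the paper's framework, the mode found this way seeds the Monte Carlo refinement of steps 2 and 3.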

  7. Implementation and Validation of the Viscoelastic Continuum Damage Theory for Asphalt Mixture and Pavement Analysis in Brazil

    NASA Astrophysics Data System (ADS)

    Nascimento, Luis Alberto Herrmann do

    This dissertation presents the implementation and validation of the viscoelastic continuum damage (VECD) model for asphalt mixture and pavement analysis in Brazil. It proposes a simulated damage-to-fatigue cracked area transfer function for the layered viscoelastic continuum damage (LVECD) program framework and defines the model framework's fatigue cracking prediction error for asphalt pavement reliability-based design solutions in Brazil. The research is divided into three main steps: (i) implementation of the simplified viscoelastic continuum damage (S-VECD) model in Brazil (Petrobras) for asphalt mixture characterization; (ii) validation of the LVECD model approach for pavement analysis based on field performance observations, and definition of a local simulated damage-to-cracked area transfer function for the Fundao Project's pavement test sections in Rio de Janeiro, RJ; and (iii) validation of the Fundao project local transfer function for use throughout Brazil for asphalt pavement fatigue cracking predictions, based on field performance observations of the National MEPDG Project's pavement test sections, thereby validating the proposed framework's prediction capability. For the first step, the S-VECD test protocol, which uses a controlled on-specimen strain mode of loading, was successfully implemented at Petrobras and used to characterize Brazilian asphalt mixtures composed of a wide range of asphalt binders. This research verified that the S-VECD model coupled with the GR failure criterion is accurate for fatigue life predictions of Brazilian asphalt mixtures, even when very different asphalt binders are used. The applicability of the load amplitude sweep (LAS) test for the fatigue characterization of the asphalt binders was also checked, and the effects of different asphalt binders on the fatigue damage properties of the asphalt mixtures were investigated. The LAS test results, modeled according to VECD theory, presented a strong correlation with the asphalt mixtures' fatigue performance. In the second step, the S-VECD test protocol was used to characterize the asphalt mixtures used in the 27 selected Fundao project test sections, which were subjected to real traffic loading. The asphalt mixture properties, pavement structure data, traffic loading, and climate were input into the LVECD program for pavement fatigue cracking performance simulations. The simulation results showed good agreement with the field-observed distresses. A damage shift approach, based on the initial simulated damage growth rate, was then introduced in order to obtain a unique relationship between the LVECD-simulated shifted damage and the pavement-observed fatigue cracked areas. This correlation was fitted to a power-form function and defined as the averaged reduced damage-to-cracked area transfer function. The last step consisted of using the averaged reduced damage-to-cracked area transfer function developed in the Fundao project to predict pavement fatigue cracking in 17 National MEPDG project test sections. The procedures for material characterization and pavement data gathering adopted in this step are similar to those used for the Fundao project simulations. This research verified that the transfer function defined for the Fundao project sections can be used for fatigue performance predictions of a wide range of pavements all over Brazil, as the predicted and observed cracked areas for the National MEPDG pavements showed good agreement, following the same trends found for the Fundao project pavement sites. Based on the prediction errors determined for all 44 pavement test sections (Fundao and National MEPDG), the proposed framework's prediction capability was determined so that reliability-based solutions can be applied for flexible pavement design.
It was concluded that the proposed LVECD program framework has very good fatigue cracking prediction capability.

  8. Planning energy-efficient bipedal locomotion on patterned terrain

    NASA Astrophysics Data System (ADS)

    Zamani, Ali; Bhounsule, Pranav A.; Taha, Ahmad

    2016-05-01

    Energy-efficient bipedal walking is essential in realizing practical bipedal systems. However, current energy-efficient bipedal robots (e.g., passive-dynamics-inspired robots) are limited to walking at a single speed and step length. The objective of this work is to address this gap by developing a method of synthesizing energy-efficient bipedal locomotion on patterned terrain consisting of stepping stones, using energy-efficient primitives. A model of the Cornell Ranger (a passive-dynamics-inspired robot) is utilized to illustrate our technique. First, an energy-optimal trajectory control problem for a single step is formulated and solved. The solution minimizes the Total Cost Of Transport (TCOT, defined as the energy used per unit weight per unit distance travelled) subject to various constraints such as actuator limits, foot scuffing, joint kinematic limits, and ground reaction forces. The outcome of the optimization scheme is a table of TCOT values as a function of step length and step velocity. Next, we parameterize the terrain to identify the locations of the stepping stones. Finally, the TCOT table is used in conjunction with the parameterized terrain to plan an energy-efficient stepping strategy.
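    The final planning step can be sketched as dynamic programming over the stone locations, with step energy read from a TCOT curve. The U-shaped TCOT function, the 0.9 m feasibility limit, and the 10 kg walker weight below are all hypothetical stand-ins for the paper's optimization-derived table.

    ```python
    import math

    def tcot(d):
        """Hypothetical U-shaped TCOT curve: cheapest near 0.5 m steps,
        kinematically infeasible beyond 0.9 m."""
        if d <= 0.0 or d > 0.9:
            return math.inf
        return 0.2 + 1.5 * (d - 0.5) ** 2

    def plan_steps(stones, tcot, weight=98.0):
        """Minimum-energy footstep plan over stepping stones by dynamic programming.
        Step energy = TCOT(step length) * weight * step length, with `stones` the
        stone positions in metres (sorted ascending) and `weight` in newtons."""
        n = len(stones)
        cost = [math.inf] * n
        prev = [-1] * n
        cost[0] = 0.0
        for j in range(1, n):
            for i in range(j):
                d = stones[j] - stones[i]
                c = cost[i] + tcot(d) * weight * d
                if c < cost[j]:
                    cost[j], prev[j] = c, i
        path, j = [], n - 1          # backtrack the footstep sequence
        while j != -1:
            path.append(j)
            j = prev[j]
        return path[::-1], cost[-1]
    ```

    Infeasible steps carry infinite TCOT, so the planner automatically skips or splits them while favouring step lengths near the TCOT minimum.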

  9. OTA-Grapes: A Mechanistic Model to Predict Ochratoxin A Risk in Grapes, a Step beyond the Systems Approach

    PubMed Central

    Battilani, Paola; Camardo Leggieri, Marco

    2015-01-01

    Ochratoxin A (OTA) is a fungal metabolite dangerous to human and animal health due to its nephrotoxic, immunotoxic, mutagenic, teratogenic, and carcinogenic effects; it is classified by the International Agency for Research on Cancer in group 2B, possible human carcinogen. This toxin has been reported as a wine contaminant since 1996. The aim of this study was to develop a conceptual model for the dynamic simulation of the A. carbonarius life cycle in grapes along the growing season, including OTA production in berries. Functions describing the role of weather parameters in each step of the infection cycle were developed and organized in a prototype model called OTA-grapes. In modelling the influence of temperature on OTA production, it emerged that fungal strains can be divided into two clusters, based on the dynamics of OTA production and on the optimal temperature. Therefore, two functions were developed, and based on statistical data analysis it was assumed that the two types of strains contribute equally to the population. Model validation was not possible because of the scarcity of OTA contamination data, but relevant differences in OTA-I, the output index of the model, were noticed between low- and high-risk areas. To our knowledge, this is the first attempt to assess/model A. carbonarius in order to predict the risk of OTA contamination in grapes. PMID:26258791

  10. The host dark matter haloes of [O II] emitters at 0.5 < z < 1.5

    NASA Astrophysics Data System (ADS)

    Gonzalez-Perez, V.; Comparat, J.; Norberg, P.; Baugh, C. M.; Contreras, S.; Lacey, C.; McCullagh, N.; Orsi, A.; Helly, J.; Humphries, J.

    2018-03-01

    Emission line galaxies (ELGs) are used in several ongoing and upcoming surveys (SDSS-IV/eBOSS, DESI) as tracers of the dark matter distribution. Using a new galaxy formation model, we explore the characteristics of [O II] emitters, which dominate optical ELG selections at z ≃ 1. Model [O II] emitters at 0.5 < z < 1.5 are selected to mimic the DEEP2, VVDS, eBOSS and DESI surveys. The luminosity functions of model [O II] emitters are in reasonable agreement with observations. The selected [O II] emitters are hosted by haloes with Mhalo ≥ 10^10.3 h^-1 M⊙, with ˜90 per cent of them being central star-forming galaxies. The predicted mean halo occupation distributions of [O II] emitters have a shape typical of that inferred for star-forming galaxies, with the contribution from central galaxies, ⟨N⟩_[O II],cen, being far from the canonical step function. The ⟨N⟩_[O II],cen can be described as the sum of an asymmetric Gaussian for discs and a step function for spheroids, which plateaus below unity. The model [O II] emitters have a clustering bias close to unity, which is below the expectations for eBOSS and DESI ELGs. At z ˜ 1, a comparison with an observed g-band-selected galaxy sample, which is expected to be dominated by [O II] emitters, indicates that our model produces too few [O II] emitters that are satellite galaxies. This suggests the need to revise our modelling of hot gas stripping in satellite galaxies.
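    The occupation shape described above can be written down schematically: a Gaussian with different widths on each side of its peak (disc centrals) plus a softened step (spheroid centrals), both plateauing below unity. The logistic form of the step and every parameter value below are illustrative, not the paper's fit.

    ```python
    import math

    def n_cen(log_m, f_disc=0.35, mu=11.0, sig_lo=0.3, sig_hi=0.9,
              f_sph=0.2, mu_step=11.6, w_step=0.25):
        """Illustrative mean central occupation <N>_cen as a function of
        log10(halo mass): asymmetric Gaussian + softened step, both < 1."""
        sig = sig_lo if log_m < mu else sig_hi                       # asymmetric widths
        gauss = f_disc * math.exp(-0.5 * ((log_m - mu) / sig) ** 2)  # disc centrals
        step = f_sph / (1.0 + math.exp(-(log_m - mu_step) / w_step)) # spheroid centrals
        return gauss + step
    ```

    Because f_disc + f_sph < 1, the occupation never reaches the unit plateau of the canonical central-galaxy step function, matching the behaviour reported for the model [O II] emitters.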

  11. PM Evaluation Guidelines.

    ERIC Educational Resources Information Center

    Bauch, Jerold P.

    This paper presents guidelines for the evaluation of candidate performance, the basic function of the evaluation component of the Georgia program model for the preparation of elementary school teachers. The three steps in the evaluation procedure are outlined: (1) proficiency module (PM) entry appraisal (pretest); (2) self evaluation and the…

  12. A Pilot Study of Gait Function in Farmworkers in Eastern North Carolina.

    PubMed

    Nguyen, Ha T; Kritchevsky, Stephen B; Foxworth, Judy L; Quandt, Sara A; Summers, Phillip; Walker, Francis O; Arcury, Thomas A

    2015-01-01

    Farmworkers endure many job-related hazards, including fall-related work injuries. Gait analysis may be useful in identifying potential fallers. The goal of this pilot study was to explore differences in gait between farmworkers and non-farmworkers. The sample included 16 farmworkers and 24 non-farmworkers. Gait variables were collected using the portable GAITRite system, a 16-foot computerized walkway. Generalized linear regression models were used to examine group differences. All models were adjusted for two established confounders, age and body mass index. There were no significant differences in stride length, step length, double support time, and base of support; but farmworkers had greater irregularity of stride length (P = .01) and step length (P = .08). Farmworkers performed significantly worse on gait velocity (P = .003) and cadence (P < .001) relative to non-farmworkers. We found differences in gait function between farmworkers and non-farmworkers. These findings suggest that measuring gait with a portable walkway system is feasible and informative in farmworkers and may possibly be of use in assessing fall risk.

  13. The analysis and interpretation of very-long-period seismic signals on volcanoes

    NASA Astrophysics Data System (ADS)

    Sindija, Dinko; Neuberg, Jurgen; Smith, Patrick

    2017-04-01

    The study of very-long-period (VLP) seismic signals became possible with the widespread use of broadband instruments. VLP seismic signals are caused by pressure transients in the volcanic edifice and have periods ranging from several seconds to several minutes. For the VLP events recorded in March 2012 and 2014 at Soufriere Hills Volcano, Montserrat, we model the ground displacement using several source-time functions: a step function based on the Richards growth equation, a Küpper wavelet, and a damped sine wave, to which an instrument response is then applied. In this way we obtain a synthetic velocity seismogram that is directly comparable to the data. After the full vector field of ground displacement is determined, we model the source mechanism to determine the relationship between the source mechanism and the observed VLP waveforms. The emphasis of the research is on how different VLP waveforms are related to the volcano environment and the instrumentation used, and on the processing steps needed in this low-frequency band to get the most out of broadband instruments.
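    Two of the source-time functions mentioned can be sketched as simple scalar functions of time: a smooth step from the Richards growth equation (with shape parameter nu; nu = 1 reduces to a logistic) and a damped sine wave. The Küpper wavelet is omitted, and all parameter values are illustrative.

    ```python
    import math

    def richards_step(t, amp=1.0, t0=0.0, k=1.0, nu=1.0):
        """Smooth step source-time function based on the Richards growth curve."""
        return amp / (1.0 + nu * math.exp(-k * (t - t0))) ** (1.0 / nu)

    def damped_sine(t, amp=1.0, f=0.05, tau=20.0, t0=0.0):
        """Damped sinusoid source-time function, zero before onset t0."""
        if t < t0:
            return 0.0
        dt = t - t0
        return amp * math.exp(-dt / tau) * math.sin(2.0 * math.pi * f * dt)
    ```

    Convolving such a displacement source with the instrument response and differentiating then yields the synthetic velocity seismogram compared against the broadband data.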

  14. Computer Modeling of the Earliest Cellular Structures and Functions

    NASA Technical Reports Server (NTRS)

    Pohorille, Andrew; Chipot, Christophe; Schweighofer, Karl

    2000-01-01

    In the absence of an extinct or extant record of protocells (the earliest ancestors of contemporary cells), the most direct way to test our understanding of the origin of cellular life is to construct laboratory models of protocells. Such efforts are currently underway in the NASA Astrobiology Program. They are accompanied by computational studies aimed at explaining the self-organization of simple molecules into ordered structures and at developing designs for molecules that perform protocellular functions. Many of these functions, such as import of nutrients, capture and storage of energy, and response to changes in the environment, are carried out by proteins bound to membranes. We will discuss a series of large-scale, molecular-level computer simulations which demonstrate (a) how small proteins (peptides) organize themselves into ordered structures at water-membrane interfaces and insert into membranes, (b) how these peptides aggregate to form membrane-spanning structures (e.g., channels), and (c) by what mechanisms such aggregates perform essential protocellular functions, such as the transport of protons across cell walls, a key step in cellular bioenergetics. The simulations were performed using the molecular dynamics method, in which Newton's equations of motion for each particle in the system are solved iteratively. The problems of interest required simulations on multi-nanosecond time scales, corresponding to 10^6-10^8 time steps.
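    The iterative solution of Newton's equations described above is typically carried out with a time-stepping scheme such as velocity Verlet. A minimal one-particle sketch follows; the harmonic force in the usage example is only a test case, not the biomolecular force fields used in the actual simulations.

    ```python
    def velocity_verlet(x, v, force, mass, dt, n_steps):
        """Velocity-Verlet integration of Newton's equation of motion for one
        degree of freedom; returns the position trajectory and final velocity."""
        a = force(x) / mass
        traj = [x]
        for _ in range(n_steps):
            x = x + v * dt + 0.5 * a * dt * dt   # position update
            a_new = force(x) / mass              # force at the new position
            v = v + 0.5 * (a + a_new) * dt       # velocity update (averaged kick)
            a = a_new
            traj.append(x)
        return traj, v
    ```

    With femtosecond time steps, covering the multi-nanosecond scales quoted above indeed requires on the order of 10^6-10^8 such iterations.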

  15. Covalent immobilization of molecularly imprinted polymer nanoparticles using an epoxy silane.

    PubMed

    Kamra, Tripta; Chaudhary, Shilpi; Xu, Changgang; Johansson, Niclas; Montelius, Lars; Schnadt, Joachim; Ye, Lei

    2015-05-01

    Molecularly imprinted polymers (MIPs) can be used as antibody mimics to develop robust chemical sensors. One challenging problem in using MIPs for sensor development is the lack of reliable conjugation chemistry for fixing MIPs on the transducer surface. In this work, we study the use of an epoxy silane to immobilize MIP nanoparticles on model transducer surfaces without impairing the function of the immobilized nanoparticles. The MIP nanoparticles, with a core-shell structure, have selective molecular binding sites in the core and multiple amino groups in the shell. The model transducer surface is functionalized with a self-assembled monolayer of epoxy silane, which reacts with the core-shell MIP particles to enable straightforward immobilization. The whole process is characterized by studying the treated surfaces after each preparation step using atomic force microscopy, scanning electron microscopy, fluorescence microscopy, contact angle measurements, and X-ray photoelectron spectroscopy. The microscopy results show that the MIP particles are immobilized uniformly on the surface, and the photoelectron spectroscopy results further confirm the action of each functionalization step. The molecular selectivity of the MIP-functionalized surface is verified by radioligand binding analysis. The particle immobilization approach described here has general applicability for constructing selective chemical sensors in different formats. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  16. Three-step approach for prediction of limit cycle pressure oscillations in combustion chambers of gas turbines

    NASA Astrophysics Data System (ADS)

    Iurashev, Dmytro; Campa, Giovanni; Anisimov, Vyacheslav V.; Cosatto, Ezio

    2017-11-01

    Currently, gas turbine manufacturers frequently face the problem of strong acoustic combustion driven oscillations inside combustion chambers. These combustion instabilities can cause extensive wear and sometimes even catastrophic damage to combustion hardware. Preventing combustion instabilities requires reliable and fast predictive tools. This work presents a three-step method to find stability margins within which gas turbines can be operated without going into self-excited pressure oscillations. As a first step, a set of unsteady Reynolds-averaged Navier-Stokes simulations with the Flame Speed Closure (FSC) model implemented in the OpenFOAM® environment are performed to obtain the flame describing function of the combustor set-up. The standard FSC model is extended in this work to take into account the combined effect of strain and heat losses on the flame. As a second step, a linear three-time-lag-distributed model for a perfectly premixed swirl-stabilized flame is extended to the nonlinear regime. The factors causing changes in the model parameters when applying high-amplitude velocity perturbations are analysed. As a third step, time-domain simulations employing a low-order network model implemented in Simulink® are performed. In this work, the proposed method is applied to a laboratory test rig. The proposed method permits not only the unsteady frequencies of acoustic oscillations to be computed, but the amplitudes of such oscillations as well. Knowing the amplitudes of unstable pressure oscillations, it is possible to determine how harmful these oscillations are to the combustor equipment. The proposed method has a low cost because it does not require any license for computational fluid dynamics software.
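    The flame describing function obtained in the first step is an amplitude-dependent gain, and a limit cycle settles at the amplitude where the loop gain falls back to unity. A minimal sketch of that balance, assuming a unit-slope saturation nonlinearity and a constant linear loop gain (both hypothetical stand-ins for the combustor models used in the paper):

```python
import math

def sat_describing_function(A, delta):
    """Describing function N(A) of a unit-slope saturation with limit delta."""
    if A <= delta:
        return 1.0
    s = delta / A
    return (2.0 / math.pi) * (math.asin(s) + s * math.sqrt(1.0 - s * s))

def limit_cycle_amplitude(gain, delta):
    """Bisect for the amplitude at which the loop gain gain*N(A) falls to 1."""
    lo, hi = delta, 1e6
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if gain * sat_describing_function(mid, delta) > 1.0:
            lo = mid   # oscillation still grows at this amplitude
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Assumed loop gain 2.0 and saturation limit 0.5 (illustrative values only)
A = limit_cycle_amplitude(gain=2.0, delta=0.5)
```

    This is why an amplitude-dependent flame model is needed: a purely linear model can predict whether oscillations grow, but not the amplitude at which they saturate.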

  17. Real-time control of hind limb functional electrical stimulation using feedback from dorsal root ganglia recordings

    NASA Astrophysics Data System (ADS)

    Bruns, Tim M.; Wagenaar, Joost B.; Bauman, Matthew J.; Gaunt, Robert A.; Weber, Douglas J.

    2013-04-01

    Objective. Functional electrical stimulation (FES) approaches often utilize an open-loop controller to drive state transitions. The addition of sensory feedback may allow for closed-loop control that can respond effectively to perturbations and muscle fatigue. Approach. We evaluated the use of natural sensory nerve signals obtained with penetrating microelectrode arrays in lumbar dorsal root ganglia (DRG) as real-time feedback for closed-loop control of FES-generated hind limb stepping in anesthetized cats. Main results. Leg position feedback was obtained in near real-time at 50 ms intervals by decoding the firing rates of more than 120 DRG neurons recorded simultaneously. Over 5 m of effective linear distance was traversed during closed-loop stepping trials in each of two cats. The controller compensated effectively for perturbations in the stepping path when DRG sensory feedback was provided. The presence of stimulation artifacts and the quality of DRG unit sorting did not significantly affect the accuracy of leg position feedback obtained from the linear decoding model as long as at least 20 DRG units were included in the model. Significance. This work demonstrates the feasibility and utility of closed-loop FES control based on natural neural sensors. Further work is needed to improve the controller and electrode technologies and to evaluate long-term viability.
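    The linear decoding model described above maps a vector of firing rates to leg position. A minimal sketch with synthetic data (the tuning model, noise level, and sample counts are assumptions; the study used recorded DRG firing rates):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the recordings: 120 units whose firing rates are
# noisy linear functions of a 2-D leg position, sampled every 50 ms
n_samples, n_units = 400, 120
position = rng.uniform(-1.0, 1.0, size=(n_samples, 2))
tuning = rng.normal(size=(2, n_units))
rates = position @ tuning + 0.1 * rng.normal(size=(n_samples, n_units))

# Linear decoding model: least-squares fit of position from firing rates,
# then prediction on held-out samples
train, test = slice(0, 300), slice(300, None)
W, *_ = np.linalg.lstsq(rates[train], position[train], rcond=None)
predicted = rates[test] @ W
rmse = float(np.sqrt(np.mean((predicted - position[test]) ** 2)))
```

    Averaging noise over many units is what makes the decode robust, consistent with the finding above that accuracy held up as long as at least 20 units were included.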

  18. Real-time control of hind limb functional electrical stimulation using feedback from dorsal root ganglia recordings

    PubMed Central

    Bruns, Tim M; Wagenaar, Joost B; Bauman, Matthew J; Gaunt, Robert A; Weber, Douglas J

    2013-01-01

    Objective Functional electrical stimulation (FES) approaches often utilize an open-loop controller to drive state transitions. The addition of sensory feedback may allow for closed-loop control that can respond effectively to perturbations and muscle fatigue. Approach We evaluated the use of natural sensory nerve signals obtained with penetrating microelectrode arrays in lumbar dorsal root ganglia (DRG) as real-time feedback for closed-loop control of FES-generated hind limb stepping in anesthetized cats. Main results Leg position feedback was obtained in near real-time at 50 ms intervals by decoding the firing rates of more than 120 DRG neurons recorded simultaneously. Over 5 m of effective linear distance was traversed during closed-loop stepping trials in each of two cats. The controller compensated effectively for perturbations in the stepping path when DRG sensory feedback was provided. The presence of stimulation artifacts and the quality of DRG unit sorting did not significantly affect the accuracy of leg position feedback obtained from the linear decoding model as long as at least 20 DRG units were included in the model. Significance This work demonstrates the feasibility and utility of closed-loop FES control based on natural neural sensors. Further work is needed to improve the controller and electrode technologies and to evaluate long-term viability. PMID:23503062

  19. Simplified model of mean double step (MDS) in human body movement

    NASA Astrophysics Data System (ADS)

    Dusza, Jacek J.; Wawrzyniak, Zbigniew M.; Mugarra González, C. Fernando

    In this paper we present a simplified and useful model of human body movement based on the full gait cycle description, called the Mean Double Step (MDS). It enables the parameterization and simplification of human movement. Furthermore, it allows a description of the gait cycle by providing standardized estimators that transform the gait cycle into a periodic movement process. Moreover, the method of simplifying and compressing the MDS model is demonstrated. The simplification is achieved by reducing the number of bars of the spectrum and/or the number of samples describing the MDS, which reduces both the computational burden and the data-storage requirements. Our MDS model, which is applicable to the gait cycle method for examining patients, is non-invasive and provides the additional advantage of featuring a functional characterization of the relative or absolute movement of any part of the body.
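    Reducing the number of bars of the spectrum amounts to truncating the Fourier series of the periodic MDS signal. A sketch with a synthetic periodic signal (the signal itself is an illustrative assumption, not gait data):

```python
import numpy as np

# Synthetic periodic "mean double step" signal built from three harmonics
n = 256
t = np.arange(n) / n
signal = (np.sin(2 * np.pi * t) + 0.5 * np.sin(2 * np.pi * 2 * t)
          + 0.2 * np.sin(2 * np.pi * 5 * t))

def compress(signal, n_bars):
    """Keep only the n_bars largest-magnitude bars of the spectrum."""
    spectrum = np.fft.rfft(signal)
    keep = np.argsort(np.abs(spectrum))[-n_bars:]
    reduced = np.zeros_like(spectrum)
    reduced[keep] = spectrum[keep]
    return np.fft.irfft(reduced, n=len(signal))

approx = compress(signal, n_bars=3)
err = float(np.max(np.abs(approx - signal)))   # exact here: only 3 bars are nonzero
```

    Real gait spectra have many small bars, so the same truncation trades a controlled reconstruction error against storage and computation.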

  20. A Multiobjective Interval Programming Model for Wind-Hydrothermal Power System Dispatching Using 2-Step Optimization Algorithm

    PubMed Central

    Jihong, Qu

    2014-01-01

    Wind-hydrothermal power system dispatching has received intensive attention in recent years because it can help develop various reasonable plans to schedule power generation efficiently. However, future data such as wind power output and power load cannot be predicted accurately, and the multiobjective scheduling model is complex and nonlinear; achieving an accurate solution to such a problem is therefore very difficult. This paper presents an interval programming model with a 2-step optimization algorithm to solve the multiobjective dispatching problem. Initially, we represented the future data as interval numbers and simplified the objective function to a linear programming problem in order to search for feasible, preliminary solutions to construct the Pareto set. Then the simulated annealing method was used to search for the optimal solution of the initial model. Thorough experimental results suggest that the proposed method performed reasonably well in terms of both operating efficiency and precision. PMID:24895663
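    The second optimization step above uses simulated annealing. A generic, minimal version of the algorithm on a simple convex test objective (the dispatching model itself is far more complex; the step size, cooling schedule, and objective here are illustrative assumptions):

```python
import math
import random

def simulated_annealing(objective, x0, step=0.5, t0=1.0, cooling=0.995,
                        n_iter=5000, seed=0):
    """Generic simulated annealing minimizer over one real variable."""
    rng = random.Random(seed)
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(n_iter):
        cand = x + rng.uniform(-step, step)
        fc = objective(cand)
        # Always accept downhill moves; accept uphill with Boltzmann probability
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

# Simple convex test objective with its minimum at x = 2
x_best, f_best = simulated_annealing(lambda x: (x - 2.0) ** 2, x0=-5.0)
```

    The occasional acceptance of uphill moves at high temperature is what lets the method escape local optima of a nonlinear objective before the cooling schedule freezes it into a solution.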

  1. A multiobjective interval programming model for wind-hydrothermal power system dispatching using 2-step optimization algorithm.

    PubMed

    Ren, Kun; Jihong, Qu

    2014-01-01

    Wind-hydrothermal power system dispatching has received intensive attention in recent years because it can help develop various reasonable plans to schedule power generation efficiently. However, future data such as wind power output and power load cannot be predicted accurately, and the multiobjective scheduling model is complex and nonlinear; achieving an accurate solution to such a problem is therefore very difficult. This paper presents an interval programming model with a 2-step optimization algorithm to solve the multiobjective dispatching problem. Initially, we represented the future data as interval numbers and simplified the objective function to a linear programming problem in order to search for feasible, preliminary solutions to construct the Pareto set. Then the simulated annealing method was used to search for the optimal solution of the initial model. Thorough experimental results suggest that the proposed method performed reasonably well in terms of both operating efficiency and precision.

  2. Utility of a novel error-stepping method to improve gradient-based parameter identification by increasing the smoothness of the local objective surface: a case-study of pulmonary mechanics.

    PubMed

    Docherty, Paul D; Schranz, Christoph; Chase, J Geoffrey; Chiew, Yeong Shiong; Möller, Knut

    2014-05-01

    Accurate model parameter identification relies on accurate forward model simulations to guide convergence. However, some forward simulation methodologies lack the precision required to properly define the local objective surface and can cause failed parameter identification. The role of objective surface smoothness in identification of a pulmonary mechanics model was assessed using forward simulation from a novel error-stepping method and a proprietary Runge-Kutta method. The objective surfaces were compared via the identified parameter discrepancy generated in a Monte Carlo simulation and the local smoothness of the objective surfaces they generate. The error-stepping method generated significantly smoother error surfaces in each of the cases tested (p<0.0001) and more accurate model parameter estimates than the Runge-Kutta method in three of the four cases tested (p<0.0001) despite a 75% reduction in computational cost. Of note, parameter discrepancy in most cases was limited to a particular oblique plane, indicating a non-intuitive multi-parameter trade-off was occurring. The error-stepping method consistently improved or equalled the outcomes of the Runge-Kutta time-integration method for forward simulations of the pulmonary mechanics model. This study indicates that accurate parameter identification relies on accurate definition of the local objective function, and that parameter trade-off can occur on oblique planes, resulting in prematurely halted parameter convergence. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
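    The role of forward simulation in tracing the local objective surface can be illustrated with a single-compartment pulmonary mechanics model, R·dV/dt = P(t) − E·V, integrated with classical RK4 and evaluated on a grid of candidate elastance values. The model form, parameter values, and constant driving pressure are illustrative assumptions, not the study's set-up:

```python
import numpy as np

def simulate(E, R=5.0, dt=0.01, n=300):
    """RK4 forward simulation of a single-compartment lung model
    R*dV/dt = P(t) - E*V with a constant driving pressure P."""
    P = 10.0
    dVdt = lambda V: (P - E * V) / R
    V, out = 0.0, []
    for _ in range(n):
        k1 = dVdt(V)
        k2 = dVdt(V + 0.5 * dt * k1)
        k3 = dVdt(V + 0.5 * dt * k2)
        k4 = dVdt(V + dt * k3)
        V += dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        out.append(V)
    return np.array(out)

# Local objective surface: RMS mismatch between candidate simulations
# and a synthetic "measurement" generated with E = 20
data = simulate(E=20.0)
E_grid = np.linspace(10.0, 30.0, 81)
rms = [float(np.sqrt(np.mean((simulate(E) - data) ** 2))) for E in E_grid]
E_hat = float(E_grid[int(np.argmin(rms))])
```

    Any integration noise in `simulate` shows up directly as roughness in `rms`, which is the mechanism by which imprecise forward simulation derails gradient-based identification.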

  3. Electro-thermal battery model identification for automotive applications

    NASA Astrophysics Data System (ADS)

    Hu, Y.; Yurkovich, S.; Guezennec, Y.; Yurkovich, B. J.

    This paper describes a model identification procedure for identifying an electro-thermal model of lithium ion batteries used in automotive applications. The dynamic model structure adopted is based on an equivalent circuit model whose parameters are scheduled on the state-of-charge, temperature, and current direction. Linear spline functions are used as the functional form for the parametric dependence. The model identified in this way is valid over a large range of temperatures and states-of-charge, so that the resulting model can be used for automotive applications such as on-board estimation of the state-of-charge and state-of-health. The model coefficients are identified using a multi-step, genetic-algorithm-based optimization procedure designed for large-scale optimization problems. The validity of the procedure is demonstrated experimentally for an A123 lithium ion iron-phosphate battery.
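    A minimal sketch of the kind of equivalent circuit model being identified: open-circuit voltage in series with an ohmic resistance and one RC polarization branch, discretized exactly over each time step. The parameter values and current profile are illustrative assumptions, not identified A123 cell values:

```python
import numpy as np

def terminal_voltage(current, dt=1.0, ocv=3.3, r0=0.01, r1=0.02, c1=2000.0):
    """Equivalent-circuit model V = OCV - I*R0 - V1, where V1 is the voltage
    across the R1 || C1 polarization branch (exact one-step discretization)."""
    a = np.exp(-dt / (r1 * c1))
    v1, out = 0.0, []
    for i in current:
        v1 = a * v1 + r1 * (1.0 - a) * i   # first-order RC branch update
        out.append(ocv - r0 * i - v1)
    return np.array(out)

# 10 A discharge pulse for 100 s, then 100 s of rest
current = np.concatenate([np.full(100, 10.0), np.zeros(100)])
v = terminal_voltage(current)
```

    In the identification procedure, parameters like `r0`, `r1` and `c1` become spline functions of state-of-charge, temperature, and current direction, and the optimizer tunes the spline coefficients so simulated voltage matches measured voltage.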

  4. A multilayer shallow water system for polydisperse sedimentation

    NASA Astrophysics Data System (ADS)

    Fernández-Nieto, E. D.; Koné, E. H.; Morales de Luna, T.; Bürger, R.

    2013-04-01

    This work considers the flow of a fluid containing one disperse substance consisting of small particles that belong to different species differing in size and density. The flow is modelled by combining a multilayer shallow water approach with a polydisperse sedimentation process. This technique allows one to keep information on the vertical distribution of the solid particles in the mixture, and thereby to model the segregation of the particle species from each other, and from the fluid, taking place in the vertical direction of the gravity body force only. This polydisperse sedimentation process is described by the well-known Masliyah-Lockett-Bassoon (MLB) velocity functions. The resulting multilayer sedimentation-flow model can be written as a hyperbolic system with nonconservative products. The definitions of the nonconservative products are related to the hydrostatic pressure and to the mass and momentum hydrodynamic transfer terms between the layers. For the numerical discretization a strategy of two steps is proposed, where the first one is also divided into two parts. In the first step, instead of approximating the complete model, we approximate a reduced model with a smaller number of unknowns. Then, taking advantage of the fact that the concentrations are passive scalars in the system, we approximate the concentrations of the different species by an upwind scheme related to the numerical flux of the total concentration. In the second step, the effect of the transference terms defined in terms of the MLB model is introduced. These transfer terms are approximated by using a numerical flux function used to discretize the 1D vertical polydisperse model, see Bürger et al. [ R. Bürger, A. García, K.H. Karlsen, J.D. Towers, A family of numerical schemes for kinematic flows with discontinuous flux, J. Eng. Math. 60 (2008) 387-425]. Finally, some numerical examples are presented. 
Numerical results suggest that the multilayer shallow water model could be adequate in situations where the settling takes place from a suspension that undergoes horizontal movement.
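    The upwind treatment of species concentrations as passive scalars can be illustrated in one dimension: a first-order upwind scheme for dc/dt + u·dc/dx = 0, which is conservative and stable for CFL ≤ 1. The grid, velocity, and initial pulse are illustrative assumptions, not the multilayer scheme itself:

```python
import numpy as np

def upwind_advect(c, u, dx, dt, n_steps):
    """First-order upwind scheme for dc/dt + u*dc/dx = 0 with u > 0 and
    periodic boundaries; conservative and stable for CFL = u*dt/dx <= 1."""
    cfl = u * dt / dx
    assert 0.0 < cfl <= 1.0
    for _ in range(n_steps):
        c = c - cfl * (c - np.roll(c, 1))   # backward difference for u > 0
    return c

n = 100
x = np.linspace(0.0, 1.0, n, endpoint=False)
c0 = np.exp(-200.0 * (x - 0.3) ** 2)        # initial concentration pulse
c = upwind_advect(c0, u=1.0, dx=1.0 / n, dt=0.005, n_steps=100)
```

    The pulse is transported without gaining or losing mass, at the cost of some numerical diffusion; higher-order flux functions such as those cited above reduce that diffusion near discontinuities.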

  5. Mitigating Handoff Call Dropping in Wireless Cellular Networks: A Call Admission Control Technique

    NASA Astrophysics Data System (ADS)

    Ekpenyong, Moses Effiong; Udoh, Victoria Idia; Bassey, Udoma James

    2016-06-01

    Handoff management has been an important but challenging issue in the field of wireless communication. It seeks to maintain seamless connectivity of mobile users changing their points of attachment from one base station to another. This paper derives a call admission control model and establishes an optimal step-size coefficient (k) that regulates the admission probability of handoff calls. An operational CDMA network carrier was investigated through the analysis of empirical data collected over a period of 1 month, to verify the performance of the network. Our findings revealed that approximately 23 % of calls in the existing system were lost, while 40 % of the calls (on the average) were successfully admitted. A simulation of the proposed model was then carried out under ideal network conditions to study the relationship between the various network parameters and validate our claim. Simulation results showed that increasing the step-size coefficient degrades the network performance. Even at the optimum step-size (k), the network could still be compromised in the presence of severe network crises, but our model was able to recover from these problems and still function normally.

  6. Modeling Growth of Nanostructures in Plasmas

    NASA Technical Reports Server (NTRS)

    Hwang, Helen H.; Bose, Deepak; Govindan, T. R.; Meyyappan, M.

    2004-01-01

    As semiconductor circuits shrink to critical dimensions (CDs) below 0.1 nm, it is becoming increasingly critical to replace and/or enhance existing technology with nanoscale structures, such as nanowires for interconnects. Nanowires grown in plasmas are strongly dependent on processing conditions, such as gas composition and substrate temperature. Growth occurs at specific sites, or step-edges, with the bulk growth rate of the nanowires determined from the equation of motion of the nucleating crystalline steps. Traditional front-tracking algorithms, such as string-based or level set methods, suffer either from numerical complications in higher spatial dimensions, or from difficulties in incorporating surface-intense physical and chemical phenomena. Phase field models have the robustness of the level set method, combined with the ability to implement surface-specific chemistry that is required to model crystal growth, although they do not necessarily directly solve for the advancing front location. We have adopted a phase field approach and will present results of the adatom density and step-growth location in time as a function of processing conditions, such as temperature and plasma gas composition.

  7. Modelling microbial metabolic rewiring during growth in a complex medium.

    PubMed

    Fondi, Marco; Bosi, Emanuele; Presta, Luana; Natoli, Diletta; Fani, Renato

    2016-11-24

    In their natural environment, bacteria face a wide range of environmental conditions that change over time and that impose continuous rearrangements at all the cellular levels (e.g. gene expression, metabolism). When facing a nutritionally rich environment, for example, microbes first use the preferred compound(s) and only later start metabolizing the other one(s). A systemic re-organization of the overall microbial metabolic network in response to a variation in the composition/concentration of the surrounding nutrients has been suggested, although the range and the entity of such modifications in organisms other than a few model microbes has been scarcely described up to now. We used multi-step constraint-based metabolic modelling to simulate the growth in a complex medium over several time steps of the Antarctic model organism Pseudoalteromonas haloplanktis TAC125. As each of these phases is characterized by a specific set of amino acids to be used as carbon and energy source our modelling framework describes the major consequences of nutrients switching at the system level. The model predicts that a deep metabolic reprogramming might be required to achieve optimal biomass production in different stages of growth (different medium composition), with at least half of the cellular metabolic network involved (more than 50% of the metabolic genes). Additionally, we show that our modelling framework is able to capture metabolic functional association and/or common regulatory features of the genes embedded in our reconstruction (e.g. the presence of common regulatory motifs). Finally, to explore the possibility of a sub-optimal biomass objective function (i.e. that cells use resources in alternative metabolic processes at the expense of optimal growth) we have implemented a MOMA-based approach (called nutritional-MOMA) and compared the outcomes with those obtained with Flux Balance Analysis (FBA). 
Growth simulations under this scenario revealed the deep impact of the choice among alternative objective functions on the resulting flux-distribution predictions. Here we provide a time-resolved, systems-level scheme of PhTAC125 metabolic re-wiring as a consequence of carbon source switching in a nutritionally complex medium. Our analyses suggest the presence of a potentially efficient metabolic reprogramming machinery to continuously and promptly adapt to this nutritionally changing environment, consistent with adaptation to fast growth in a probably inconstant and highly competitive environment. Also, we show (i) how functional partnership and co-regulation features can be predicted by integrating multi-step constraint-based metabolic modelling with fed-batch growth data and (ii) that performing simulations under a sub-optimal objective function may lead to different flux distributions with respect to canonical FBA.
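    Flux Balance Analysis, the baseline the study compares against, is a linear program: maximize a biomass objective subject to steady-state mass balance S·v = 0 and flux bounds. A toy three-reaction network (the network, bounds, and objective are illustrative assumptions, not the PhTAC125 reconstruction):

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix. Rows: metabolites A, B. Columns: reactions
# R1: -> A (uptake), R2: A -> B, R3: B -> biomass (all hypothetical).
S = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])
bounds = [(0, 10), (0, 100), (0, 100)]   # uptake capped at 10 flux units
c = [0.0, 0.0, -1.0]                     # maximize biomass flux v3

# FBA: maximize c'v subject to S v = 0 (steady state) and the flux bounds
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
fluxes = res.x
```

    Multi-step modelling re-solves such a program for each growth phase with updated uptake bounds, while the nutritional-MOMA variant instead minimizes the flux-distribution distance to a reference state rather than maximizing biomass.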

  8. 11-Step Total Synthesis of (−)-Maoecrystal V

    PubMed Central

    2016-01-01

    An expedient, practical, and enantioselective route to the highly congested ent-kaurane diterpene maoecrystal V is presented. This route, which has been several years in the making, is loosely modeled after a key pinacol shift in the proposed biosynthesis. Only 11 steps, many of which are strategic in that they build key skeletal bonds and incorporate critical functionalities, are required to access (−)-maoecrystal V. Several unique and unexpected maneuvers are featured in this potentially scalable pathway. Reevaluation of the biological activity calls into question the initial exuberance surrounding this natural product. PMID:27457680

  9. Cost-effectiveness Analysis in R Using a Multi-state Modeling Survival Analysis Framework: A Tutorial.

    PubMed

    Williams, Claire; Lewsey, James D; Briggs, Andrew H; Mackay, Daniel F

    2017-05-01

    This tutorial provides a step-by-step guide to performing cost-effectiveness analysis using a multi-state modeling approach. Alongside the tutorial, we provide easy-to-use functions in the statistics package R. We argue that this multi-state modeling approach using a package such as R has advantages over approaches where models are built in a spreadsheet package. In particular, using a syntax-based approach means there is a written record of what was done and the calculations are transparent. Reproducing the analysis is straightforward as the syntax just needs to be run again. The approach can be thought of as an alternative way to build a Markov decision-analytic model, which also has the option to use a state-arrival extended approach. In the state-arrival extended multi-state model, a covariate that represents patients' history is included, allowing the Markov property to be tested. We illustrate the building of multi-state survival models, making predictions from the models and assessing fits. We then proceed to perform a cost-effectiveness analysis, including deterministic and probabilistic sensitivity analyses. Finally, we show how to create 2 common methods of visualizing the results, namely cost-effectiveness planes and cost-effectiveness acceptability curves. The analysis is implemented entirely within R. It is based on adaptations to functions in the existing R package mstate to accommodate parametric multi-state modeling that facilitates extrapolation of survival curves.
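    The Markov decision-analytic model the tutorial positions itself against can be sketched directly. The tutorial itself works in R with the mstate package, so this cohort model with a hypothetical three-state structure and illustrative transition probabilities, costs, and utilities is only a conceptual stand-in:

```python
import numpy as np

# Illustrative three-state Markov cohort model: Well -> Ill -> Dead
P = np.array([[0.85, 0.10, 0.05],       # annual transition probabilities
              [0.00, 0.80, 0.20],
              [0.00, 0.00, 1.00]])
cost = np.array([100.0, 2000.0, 0.0])   # annual cost per state
qaly = np.array([0.95, 0.60, 0.0])      # annual utility per state

def cohort_trace(P, n_cycles):
    """Propagate the full cohort through n_cycles annual transitions."""
    trace = [np.array([1.0, 0.0, 0.0])]   # everyone starts in Well
    for _ in range(n_cycles):
        trace.append(trace[-1] @ P)
    return np.array(trace)

def discounted_total(trace, values, rate=0.035):
    """Sum per-cycle payoffs with an annual discount rate."""
    disc = (1.0 + rate) ** -np.arange(len(trace))
    return float(np.sum((trace @ values) * disc))

trace = cohort_trace(P, n_cycles=40)
total_cost = discounted_total(trace, cost)
total_qaly = discounted_total(trace, qaly)
```

    Repeating the calculation for a comparator strategy and dividing the cost difference by the QALY difference gives the incremental cost-effectiveness ratio; the multi-state survival approach replaces the fixed transition matrix with fitted, time-varying transition hazards.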

  10. Finite cohesion due to chain entanglement in polymer melts.

    PubMed

    Cheng, Shiwang; Lu, Yuyuan; Liu, Gengxin; Wang, Shi-Qing

    2016-04-14

    Three different types of experiments, quiescent stress relaxation, delayed rate-switching during stress relaxation, and elastic recovery after step strain, are carried out in this work to elucidate the existence of a finite cohesion barrier against free chain retraction in entangled polymers. Our experiments show that there is little hastened stress relaxation from step-wise shear up to γ = 0.7 and step-wise extension up to the stretching ratio λ = 1.5 at any time before or after the Rouse time. In contrast, a noticeable stress drop stemming from the built-in barrier-free chain retraction is predicted using the GLaMM model. In other words, the experiment reveals a threshold magnitude of step-wise deformation below which the stress relaxation follows identical dynamics whereas the GLaMM or Doi-Edwards model indicates a monotonic acceleration of the stress relaxation dynamics as a function of the magnitude of the step-wise deformation. Furthermore, a sudden application of startup extension during different stages of stress relaxation after a step-wise extension, i.e. the delayed rate-switching experiment, shows that the geometric condensation of entanglement strands in the cross-sectional area survives beyond the reptation time τd that is over 100 times the Rouse time τR. Our results point to the existence of a cohesion barrier that can prevent free chain retraction upon moderate deformation in well-entangled polymer melts.

  11. Migration mechanisms of a faceted grain boundary

    NASA Astrophysics Data System (ADS)

    Hadian, R.; Grabowski, B.; Finnis, M. W.; Neugebauer, J.

    2018-04-01

    We report molecular dynamics simulations and their analysis for a mixed tilt and twist grain boundary vicinal to the Σ7 symmetric tilt boundary of the type {1 2 3} in aluminum. When minimized in energy at 0 K, a grain boundary of this type exhibits nanofacets that contain kinks. We observe that at higher temperatures of migration simulations, given extended annealing times, it is energetically favorable for these nanofacets to coalesce into a large terrace-facet structure. Therefore, we initiate the simulations from such a structure and study as a function of applied driving force and temperature how the boundary migrates. We find the migration of a faceted boundary can be described in terms of the flow of steps. The migration is dominated at lower driving force by the collective motion of the steps incorporated in the facet, and at higher driving forces by the step detachment from the terrace-facet junction and propagation of steps across the terraces. The velocity of steps on terraces is faster than their velocity when incorporated in the facet, and very much faster than the velocity of the facet profile itself, which is almost stationary. A simple kinetic Monte Carlo model matches the broad kinematic features revealed by the molecular dynamics. Since the mechanisms seem likely to be very general on kinked grain-boundary planes, the step-flow description is a promising approach to more quantitative modeling of general grain boundaries.
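    The kinetic Monte Carlo model mentioned above can be sketched with a Gillespie-type scheme in which two event classes compete: slow moves of steps incorporated in the facet and fast moves of steps on terraces. The rates and step counts below are illustrative assumptions, not fitted values:

```python
import random

def kmc_step_flow(rate_facet, rate_terrace, n_facet, n_terrace,
                  n_events, seed=0):
    """Gillespie-type kinetic Monte Carlo: each event advances one step,
    chosen with probability proportional to its rate, and the clock moves
    forward by an exponentially distributed waiting time."""
    rng = random.Random(seed)
    total = rate_facet * n_facet + rate_terrace * n_terrace
    t, moves = 0.0, {"facet": 0, "terrace": 0}
    for _ in range(n_events):
        t += rng.expovariate(total)
        if rng.random() < rate_facet * n_facet / total:
            moves["facet"] += 1      # slow move of a step inside the facet
        else:
            moves["terrace"] += 1    # fast propagation of a step on a terrace
    return t, moves

t, moves = kmc_step_flow(rate_facet=0.1, rate_terrace=2.0,
                         n_facet=20, n_terrace=5, n_events=10_000)
```

    With a large rate separation, terrace events dominate the event count while the facet profile advances slowly, mirroring the nearly stationary facet observed in the molecular dynamics.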

  12. Intermediate surface structure between step bunching and step flow in SrRuO3 thin film growth

    NASA Astrophysics Data System (ADS)

    Bertino, Giulia; Gura, Anna; Dawber, Matthew

    We performed a systematic study of SrRuO3 thin films grown on TiO2 terminated SrTiO3 substrates using off-axis magnetron sputtering. We investigated the step bunching formation and the evolution of the SRO film morphology by varying the step size of the substrate, the growth temperature and the film thickness. The thin films were characterized using Atomic Force Microscopy and X-Ray Diffraction. We identified single and multiple step bunching and step flow growth regimes as a function of the growth parameters. Also, we clearly observe a stronger influence of the step size of the substrate on the evolution of the SRO film surface with respect to the other growth parameters. Remarkably, we observe the formation of a smooth, regular and uniform "fish skin" structure at the transition between one regime and another. We believe that the fish skin structure results from the merging of 2D flat islands predicted by previous models. The direct observation of this transition structure allows us to better understand how and when step bunching develops in the growth of SrRuO3 thin films.

  13. Construction of Optimally Reduced Empirical Model by Spatially Distributed Climate Data

    NASA Astrophysics Data System (ADS)

    Gavrilov, A.; Mukhin, D.; Loskutov, E.; Feigin, A.

    2016-12-01

    We present an approach to empirical reconstruction of the evolution operator in stochastic form by space-distributed time series. The main problem in empirical modeling consists in choosing appropriate phase variables which can efficiently reduce the dimension of the model at minimal loss of information about the system's dynamics, which leads to a more robust model and better quality of the reconstruction. For this purpose we incorporate in the model two key steps. The first step is standard preliminary reduction of the observed time series dimension by decomposition via a certain empirical basis (e.g. an empirical orthogonal function basis or its nonlinear or spatio-temporal generalizations). The second step is construction of an evolution operator by principal components (PCs) - the time series obtained by the decomposition. In this step we introduce a new way of reducing the dimension of the embedding in which the evolution operator is constructed. It is based on choosing proper combinations of delayed PCs to take into account the most significant spatio-temporal couplings. The evolution operator is sought as a nonlinear random mapping parameterized using artificial neural networks (ANN). A Bayesian approach is used to learn the model and to find optimal hyperparameters: the number of PCs, the dimension of the embedding, the degree of the nonlinearity of the ANN. The results of application of the method to climate data (sea surface temperature, sea level pressure) and their comparison with the same method based on a non-reduced embedding are presented. The study is supported by the Government of the Russian Federation (agreement #14.Z50.31.0033 with the Institute of Applied Physics of RAS).
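    The two reduction steps can be sketched with synthetic data: an EOF/PC decomposition via SVD, followed by a delayed-PC embedding on which the evolution operator would be trained. The field below is a random stand-in for the climate data, and the ANN-based operator itself is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic space-distributed field: 500 time samples on 50 grid points,
# driven by two latent signals plus weak noise (stand-in for climate data)
n_t, n_x = 500, 50
latent = np.stack([np.sin(0.10 * np.arange(n_t)),
                   np.cos(0.23 * np.arange(n_t))], axis=1)
field = latent @ rng.normal(size=(2, n_x)) + 0.05 * rng.normal(size=(n_t, n_x))

# Step 1: EOF decomposition (PCA via SVD) keeps a few principal components
anomaly = field - field.mean(axis=0)
U, s, Vt = np.linalg.svd(anomaly, full_matrices=False)
pcs = U[:, :2] * s[:2]                  # leading PC time series

# Step 2: delayed-PC embedding - stack current and lagged PCs to form the
# state vector on which a stochastic evolution operator would be trained
def delay_embed(pcs, lags=(0, 1, 2)):
    max_lag = max(lags)
    return np.hstack([pcs[max_lag - l : len(pcs) - l] for l in lags])

state = delay_embed(pcs)
```

    Selecting which delayed PCs enter `state` is the embedding-reduction choice the approach optimizes via the Bayesian hyperparameter search.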

  14. Finger muscle attachments for an OpenSim upper-extremity model.

    PubMed

    Lee, Jong Hwa; Asakawa, Deanna S; Dennerlein, Jack T; Jindrich, Devin L

    2015-01-01

    We determined muscle attachment points for the index, middle, ring and little fingers in an OpenSim upper-extremity model. Attachment points were selected to match both experimentally measured locations and mechanical function (moment arms). Although experimental measurements of finger muscle attachments have been made, models differ from specimens in many respects such as bone segment ratio, joint kinematics and coordinate system. Likewise, moment arms are not available for all intrinsic finger muscles. Therefore, it was necessary to scale and translate muscle attachments from one experimental or model environment to another while preserving mechanical function. We used a two-step process. First, we estimated muscle function by calculating moment arms for all intrinsic and extrinsic muscles using the partial velocity method. Second, optimization using Simulated Annealing and Hooke-Jeeves algorithms found muscle-tendon paths that minimized root mean square (RMS) differences between experimental and modeled moment arms. The partial velocity method resulted in variance accounted for (VAF) between measured and calculated moment arms of 75.5% on average (range from 48.5% to 99.5%) for intrinsic and extrinsic index finger muscles where measured data were available. RMS error between experimental and optimized values was within one standard deviation (S.D) of measured moment arm (mean RMS error = 1.5 mm < measured S.D = 2.5 mm). Validation of both steps of the technique allowed for estimation of muscle attachment points for muscles whose moment arms have not been measured. Differences between modeled and experimentally measured muscle attachments, averaged over all finger joints, were less than 4.9 mm (within 7.1% of the average length of the muscle-tendon paths). The resulting non-proprietary musculoskeletal model of the human fingers could be useful for many applications, including better understanding of complex multi-touch and gestural movements.
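    The moment arms used above to match mechanical function can be computed by the tendon-excursion relation, moment arm = dL/dθ, which is equivalent to the partial velocity method for a single joint coordinate. A sketch with a hypothetical straight-line muscle path spanning one hinge joint (the geometry values are assumptions):

```python
import math

def moment_arm(length_fn, theta, d=1e-5):
    """Tendon-excursion method: moment arm = dL/dtheta, computed with a
    central difference; equivalent to the partial velocity method for a
    single joint coordinate."""
    return (length_fn(theta + d) - length_fn(theta - d)) / (2.0 * d)

# Hypothetical path: origin 0.20 m and insertion 0.03 m from the joint
# centre, joined by a straight line across a hinge with included angle theta
a, b = 0.20, 0.03
length = lambda th: math.sqrt(a * a + b * b - 2.0 * a * b * math.cos(th))

theta = 1.0
ma = moment_arm(length, theta)
analytic = a * b * math.sin(theta) / length(theta)   # closed-form check
```

    In the second step of the procedure above, the optimizer moves attachment coordinates such as `a` and `b` so that moment arms computed this way match the experimental values in the RMS sense.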

  15. Finger Muscle Attachments for an OpenSim Upper-Extremity Model

    PubMed Central

    Lee, Jong Hwa; Asakawa, Deanna S.; Dennerlein, Jack T.; Jindrich, Devin L.

    2015-01-01

    We determined muscle attachment points for the index, middle, ring and little fingers in an OpenSim upper-extremity model. Attachment points were selected to match both experimentally measured locations and mechanical function (moment arms). Although experimental measurements of finger muscle attachments have been made, models differ from specimens in many respects such as bone segment ratio, joint kinematics and coordinate system. Likewise, moment arms are not available for all intrinsic finger muscles. Therefore, it was necessary to scale and translate muscle attachments from one experimental or model environment to another while preserving mechanical function. We used a two-step process. First, we estimated muscle function by calculating moment arms for all intrinsic and extrinsic muscles using the partial velocity method. Second, optimization using Simulated Annealing and Hooke-Jeeves algorithms found muscle-tendon paths that minimized root mean square (RMS) differences between experimental and modeled moment arms. The partial velocity method resulted in variance accounted for (VAF) between measured and calculated moment arms of 75.5% on average (range from 48.5% to 99.5%) for intrinsic and extrinsic index finger muscles where measured data were available. RMS error between experimental and optimized values was within one standard deviation (S.D) of measured moment arm (mean RMS error = 1.5 mm < measured S.D = 2.5 mm). Validation of both steps of the technique allowed for estimation of muscle attachment points for muscles whose moment arms have not been measured. Differences between modeled and experimentally measured muscle attachments, averaged over all finger joints, were less than 4.9 mm (within 7.1% of the average length of the muscle-tendon paths). The resulting non-proprietary musculoskeletal model of the human fingers could be useful for many applications, including better understanding of complex multi-touch and gestural movements. PMID:25853869

  16. VirtualLeaf: an open-source framework for cell-based modeling of plant tissue growth and development.

    PubMed

    Merks, Roeland M H; Guravage, Michael; Inzé, Dirk; Beemster, Gerrit T S

    2011-02-01

    Plant organs, including leaves and roots, develop by means of a multilevel cross talk between gene regulation, patterned cell division and cell expansion, and tissue mechanics. The multilevel regulatory mechanisms complicate classic molecular genetics or functional genomics approaches to biological development, because these methodologies implicitly assume a direct relation between genes and traits at the level of the whole plant or organ. Instead, understanding gene function requires insight into the roles of gene products in regulatory networks, the conditions of gene expression, etc. This interplay is impossible to understand intuitively. Mathematical and computer modeling allows researchers to design new hypotheses and produce experimentally testable insights. However, the required mathematics and programming experience makes modeling poorly accessible to experimental biologists. Problem-solving environments provide biologically intuitive in silico objects ("cells", "regulation networks") required for setting up a simulation and present those to the user in terms of familiar, biological terminology. Here, we introduce the cell-based computer modeling framework VirtualLeaf for plant tissue morphogenesis. The current version defines a set of biologically intuitive C++ objects, including cells, cell walls, and diffusing and reacting chemicals, that provide useful abstractions for building biological simulations of developmental processes. We present a step-by-step introduction to building models with VirtualLeaf, providing basic example models of leaf venation and meristem development. VirtualLeaf-based models provide a means for plant researchers to analyze the function of developmental genes in the context of the biophysics of growth and patterning. VirtualLeaf is an ongoing open-source software project (http://virtualleaf.googlecode.com) that runs on Windows, Mac, and Linux.

  17. Modeling on-column reduction of trisulfide bonds in monoclonal antibodies during protein A chromatography.

    PubMed

    Ghose, Sanchayita; Rajshekaran, Rupshika; Labanca, Marisa; Conley, Lynn

    2017-01-06

    Trisulfides can be a common post-translational modification in many recombinant monoclonal antibodies. These modifications are a source of product heterogeneity that adds to the complexity of product characterization; hence, they need to be reduced for consistent product quality. Trisulfide bonds can be converted to regular disulfide bonds by incorporating a novel cysteine wash step during Protein A affinity chromatography. An empirical model is developed for this on-column reduction reaction to compare the reaction rates as a function of typical operating parameters such as temperature, cysteine concentration, reaction time and starting level of trisulfides. The model presented here is anticipated to assist in the development of optimal wash conditions for the Protein A step to effectively reduce trisulfides to desired levels. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. A film-rupture model of hydrogen-induced, slow crack growth in alpha-beta titanium

    NASA Technical Reports Server (NTRS)

    Nelson, H. G.

    1975-01-01

    The terrace-like fracture morphology of gaseous-hydrogen-induced crack growth in acicular alpha-beta titanium alloys is discussed as a function of specimen configuration, magnitude of applied stress intensity, test temperature, and hydrogen pressure. Although the overall appearance of the terrace structure remains essentially unchanged, a distinguishable variation is found in the size of the individual terrace steps, and step size is found to be inversely dependent upon the rate of hydrogen-induced slow crack growth. Additionally, this inverse relationship is independent of all the variables investigated. These observations are discussed quantitatively in terms of the formation and growth of a thin hydride film along the alpha-beta boundaries, and a qualitative model for hydrogen-induced slow crack growth is presented, based on the film-rupture model of stress-corrosion cracking.

  19. Numerically accurate computational techniques for optimal estimator analyses of multi-parameter models

    NASA Astrophysics Data System (ADS)

    Berger, Lukas; Kleinheinz, Konstantin; Attili, Antonio; Bisetti, Fabrizio; Pitsch, Heinz; Mueller, Michael E.

    2018-05-01

    Modelling unclosed terms in partial differential equations typically involves two steps: First, a set of known quantities needs to be specified as input parameters for a model, and second, a specific functional form needs to be defined to model the unclosed terms by the input parameters. Both steps involve a certain modelling error, with the former known as the irreducible error and the latter referred to as the functional error. Typically, only the total modelling error, which is the sum of functional and irreducible error, is assessed, but the concept of the optimal estimator enables the separate analysis of the total and the irreducible errors, yielding a systematic modelling error decomposition. In this work, attention is paid to the techniques themselves required for the practical computation of irreducible errors. Typically, histograms are used for optimal estimator analyses, but this technique is found to add a non-negligible spurious contribution to the irreducible error if models with multiple input parameters are assessed. Thus, the error decomposition of an optimal estimator analysis becomes inaccurate, and misleading conclusions concerning modelling errors may be drawn. In this work, numerically accurate techniques for optimal estimator analyses are identified and a suitable evaluation of irreducible errors is presented. Four different computational techniques are considered: a histogram technique, artificial neural networks, multivariate adaptive regression splines, and an additive model based on a kernel method. For multiple input parameter models, only artificial neural networks and multivariate adaptive regression splines are found to yield satisfactorily accurate results. Beyond a certain number of input parameters, the assessment of models in an optimal estimator analysis even becomes practically infeasible if histograms are used. 
The optimal estimator analysis in this paper is applied to modelling the filtered soot intermittency in large eddy simulations using a dataset of a direct numerical simulation of a non-premixed sooting turbulent flame.
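
    The irreducible error of an optimal estimator is the variance of the target quantity around its conditional mean given the model inputs. A minimal one-parameter sketch with synthetic data is given below; for a single input parameter, simple binning (the histogram technique) works well, and the paper's point is precisely that this breaks down for multiple input parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
phi = rng.uniform(-1, 1, n)                          # model input parameter
q = np.sin(np.pi * phi) + 0.1 * rng.normal(size=n)   # quantity to be modelled

# Optimal estimator <q|phi>: conditional mean estimated by binning phi.
bins = np.linspace(-1, 1, 65)
idx = np.digitize(phi, bins) - 1
cond_mean = np.array([q[idx == i].mean() for i in range(len(bins) - 1)])

# Irreducible error: mean squared deviation of q from the conditional mean.
irreducible = np.mean((q - cond_mean[idx]) ** 2)
print(irreducible)   # close to the noise variance 0.1**2 = 0.01
```

    With several input parameters the bins become sparsely populated and the binned conditional mean acquires the spurious error contribution discussed above, which is why the paper turns to neural networks and regression splines.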

  20. Dynamical systems, attractors, and neural circuits.

    PubMed

    Miller, Paul

    2016-01-01

    Biology is the study of dynamical systems. Yet most of us working in biology have limited pedagogical training in the theory of dynamical systems, an unfortunate historical fact that can be remedied for future generations of life scientists. In my particular field of systems neuroscience, neural circuits are rife with nonlinearities at all levels of description, rendering simple methodologies and our own intuition unreliable. Therefore, our ideas are likely to be wrong unless informed by good models. These models should be based on the mathematical theories of dynamical systems since functioning neurons are dynamic: they change their membrane potential and firing rates with time. Thus, selecting the appropriate type of dynamical system upon which to base a model is an important first step in the modeling process. This step all too easily goes awry, in part because there are many frameworks to choose from, in part because the sparsely sampled data can be consistent with a variety of dynamical processes, and in part because each modeler has a preferred modeling approach that is difficult to move away from. This brief review summarizes some of the main dynamical paradigms that can arise in neural circuits, with comments on what they can achieve computationally and what signatures might reveal their presence within empirical data. I provide examples of different dynamical systems using simple circuits of two or three cells, emphasizing that any one connectivity pattern is compatible with multiple, diverse functions.
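
    As a toy instance of the kind of two-cell circuit mentioned above, a mutually inhibitory pair of rate units forms a bistable (winner-take-all) attractor system: which unit "wins" depends on the initial condition. The model and all parameter values below are illustrative assumptions, not taken from the review.

```python
import numpy as np

def f(x):
    """Steep sigmoidal firing-rate nonlinearity."""
    return 1.0 / (1.0 + np.exp(-(x - 0.5) / 0.05))

def simulate(r0, w=-2.0, I=1.0, dt=0.001, T=5.0, tau=0.02):
    """Two mutually inhibiting rate units: tau * dr/dt = -r + f(w*r_other + I),
    integrated with forward Euler."""
    r = np.array(r0, dtype=float)
    for _ in range(int(T / dt)):
        drive = np.array([w * r[1] + I, w * r[0] + I])
        r += dt / tau * (-r + f(drive))
    return r

# Two initial conditions settle into opposite attractor states.
print(simulate([0.9, 0.1]))   # unit 0 wins (rates near [1, 0])
print(simulate([0.1, 0.9]))   # unit 1 wins (rates near [0, 1])
```

    The same connectivity, with different weights or inputs, can instead produce a single stable state or oscillations, illustrating the review's point that one connectivity pattern is compatible with multiple functions.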

  1. Step-by-step integration for fractional operators

    NASA Astrophysics Data System (ADS)

    Colinas-Armijo, Natalia; Di Paola, Mario

    2018-06-01

    In this paper, an approach based on the definition of the Riemann-Liouville fractional operators is proposed in order to provide a discretisation technique that serves as an alternative to the Grünwald-Letnikov operators. The proposed Riemann-Liouville discretisation consists of performing step-by-step integration based upon the discretisation of the function f(t). It is shown that, as f(t) is discretised as a stepwise or piecewise function, the Riemann-Liouville fractional integral and derivative are governed by operators very similar to the Grünwald-Letnikov operators. In order to show the accuracy and capabilities of the proposed Riemann-Liouville discretisation technique and of the Grünwald-Letnikov discrete operators, both techniques have been applied to unit step functions, exponential functions and sample functions of white noise.
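
    The Grünwald-Letnikov discrete operator referred to above follows directly from its binomial-weight definition, D^α f(t_k) ≈ h^(−α) Σ_j w_j f(t_{k−j}) with w_0 = 1 and w_j = w_{j−1}(1 − (α+1)/j). The sketch below is an independent illustration, not the paper's code; it applies the operator to f(t) = t, whose half-derivative has the closed form 2·sqrt(t/π).

```python
import numpy as np

def gl_fractional_derivative(f, alpha, h):
    """Grünwald-Letnikov fractional derivative of samples f on a uniform
    grid of spacing h, using the recursive binomial weights."""
    n = len(f)
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    out = np.empty(n)
    for k in range(n):
        # w[:k+1] pairs with f reversed: w_j multiplies f(t_k - j*h).
        out[k] = np.dot(w[: k + 1], f[k::-1]) / h ** alpha
    return out

h = 1e-3
t = np.arange(1, 2001) * h            # grid from h to 2.0 (f(0) = 0 omitted)
num = gl_fractional_derivative(t, 0.5, h)
exact = 2.0 * np.sqrt(t / np.pi)      # exact half-derivative of f(t) = t
print(np.max(np.abs(num - exact)))    # small first-order discretisation error
```

    The scheme is first-order accurate in h, which is consistent with the accuracy comparisons the paper performs against the proposed Riemann-Liouville discretisation.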

  2. Phenotyping male infertility in the mouse: how to get the most out of a 'non-performer'.

    PubMed

    Borg, Claire L; Wolski, Katja M; Gibbs, Gerard M; O'Bryan, Moira K

    2010-01-01

    Functional male gametes are produced through complex processes that take place within the testis, epididymis and female reproductive tract. A breakdown at any of these phases can result in male infertility. The production of mutant mouse models often yields an unexpected male infertility phenotype. It is with this in mind that the current review has been written. The review aims to act as a guide to the 'non-reproductive biologist' to facilitate a systematic analysis of sterile or subfertile mice and to assist in extracting the maximum amount of information from each model. This is a review of the original literature on defects in the processes that take a mouse spermatogonial stem cell through to a fully functional spermatozoon, which result in male infertility. Based on literature searches and personal experience, we have outlined a step-by-step strategy for the analysis of an infertile male mouse line. A wide range of methods can be used to define the phenotype of an infertile male mouse. These methods range from histological methods such as electron microscopy and immunohistochemistry, to hormone analyses and methods to assess sperm maturation status and functional competence. With the increased rate of genetically modified mouse production, the generation of mouse models with unexpected male infertility is increasing. This manuscript will help to ensure that the maximum amount of information is obtained from each mouse model and, by extension, will facilitate the knowledge of both normal fertility processes and the causes of human infertility.

  3. Predict the fatigue life of crack based on extended finite element method and SVR

    NASA Astrophysics Data System (ADS)

    Song, Weizhen; Jiang, Zhansi; Jiang, Hui

    2018-05-01

    The extended finite element method (XFEM) and support vector regression (SVR) are used to predict the fatigue life of a plate crack. First, the XFEM is employed to calculate the stress intensity factors (SIFs) for given crack sizes. A prediction model is then built from the functional relationship of the SIFs with the fatigue life or crack length. Finally, the prediction model is used to predict the SIFs at different crack sizes or numbers of cycles. Because the accuracy of the forward Euler method is ensured only by a small step size, a new prediction method is presented to resolve this issue. Numerical examples demonstrate that the proposed method allows a larger step size while retaining high accuracy.
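
    The SVR step can be sketched as follows. Everything here is illustrative: the training SIFs are generated from the textbook relation K = σ·sqrt(π·a) rather than from an XFEM solve, and the hyperparameters are arbitrary placeholders.

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical training data: SIFs at a few crack lengths a (mm), standing in
# for values an XFEM analysis would provide.
a_train = np.array([2.0, 4.0, 6.0, 8.0, 10.0]).reshape(-1, 1)
K_train = 50.0 * np.sqrt(np.pi * a_train.ravel() * 1e-3)   # K = sigma*sqrt(pi*a)

# Fit an RBF-kernel support vector regression of SIF vs. crack length.
model = SVR(kernel="rbf", C=100.0, epsilon=0.01).fit(a_train, K_train)

# Predict the SIF at an intermediate crack length not in the training set.
pred = model.predict([[5.0]])[0]
print(pred)
```

    Once such a surrogate is trained, SIFs at arbitrary crack sizes can be queried cheaply inside a fatigue-life integration, which is what permits the larger step size the paper reports.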

  4. Investigation of the capillary flow through open surface microfluidic structures

    NASA Astrophysics Data System (ADS)

    Taher, Ahmed; Jones, Benjamin; Fiorini, Paolo; Lagae, Liesbet

    2017-02-01

    The passive nature of capillary microfluidics for pumping and actuation of fluids is attractive for many applications including point of care medical diagnostics. For such applications, there is often the need to spot dried chemical reagents in the bottom of microfluidic channels after device fabrication; it is often more practical to have open surface devices (i.e., without a cover or lid). However, the dynamics of capillary driven flow in open surface devices have not been well studied for many geometries of interest. In this paper, we investigate capillary flow in an open surface microchannel with a backward facing step. An analytical model is developed to calculate the capillary pressure as the liquid-vapor interface traverses a backward facing step in an open microchannel. The developed model is validated against results from Surface Evolver liquid-vapor surface simulations and ANSYS Fluent two-phase flow simulations using the volume of fluid approach. Three different aspect ratios (inlet channel height by channel width) were studied. The analytical model shows good agreement with the simulation results from both modeling methods for all geometries. The analytical model is used to derive an expression for the critical aspect ratio (the minimum channel aspect ratio for flow to proceed across the backward facing step) as a function of contact angle.

  5. Calculational Schemes in GUTs

    NASA Astrophysics Data System (ADS)

    Kounnas, Costas

    The following sections are included: * Introduction * Mass Spectrum in a Spontaneously Broken-Theory SU(5) - Minimal Model * Renormalization and Renormalization Group Equation (R.G.E.) * Step Approximation and Decoupling Theorem * Notion of the Effective Coupling Constant * First Estimation of MX, α(MX) and sin2θ(MW) * Renormalization Properties and Photon-Z Mixing * β-Function Definitions * Threshold Functions and Decoupling Theorem * MX-Determination * Proton Lifetime * sin2θ(μ)-Determination * Quark-Lepton Mass Relations (mb/mτ) * Overview of the Conventional GUTs - Hierarchy Problem * Stability of Hierarchy - Supersymmetric GUTS * Cosmologically Acceptable SUSY GUT Models * Radiative Breaking of SU(2) × U(1) — MW/MX Hierarchy Generation * No Scale Supergravity Models^{56,57} Dynamical Determination of M_{B}-M_{F} * Conclusion * References

  6. Stochastic modeling of turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Fox, R. O.; Hill, J. C.; Gao, F.; Moser, R. D.; Rogers, M. M.

    1992-01-01

    Direct numerical simulations of a single-step irreversible chemical reaction with non-premixed reactants in forced isotropic turbulence at R(sub lambda) = 63, Da = 4.0, and Sc = 0.7 were made using 128 Fourier modes to obtain joint probability density functions (pdfs) and other statistical information to parameterize and test a Fokker-Planck turbulent mixing model. Preliminary results indicate that the modeled gradient stretching term for an inert scalar is independent of the initial conditions of the scalar field. The conditional pdf of scalar gradient magnitudes is found to be a function of the scalar until the reaction is largely completed. Alignment of concentration gradients with local strain rate and other features of the flow were also investigated.

  7. A Modeling and Data Analysis of Laser Beam Propagation in the Maritime Domain

    DTIC Science & Technology

    2015-05-18

    ...approach to computing pdfs is the Kernel Density Method (Reference [9] has an introduction to the method), which we will apply to compute the pdf of our... The project has two parts to it: 1) we present a computational analysis of different probability density function approximation techniques; and 2) we introduce preliminary steps towards developing a...
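
    A kernel density estimate of a pdf takes only a few lines. The sketch below uses `scipy.stats.gaussian_kde` on synthetic normal samples standing in for measured data; the sample and grid choices are assumptions for the example.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
samples = rng.normal(loc=0.0, scale=1.0, size=5000)   # stand-in for measurements

kde = gaussian_kde(samples)        # Gaussian kernels, bandwidth by Scott's rule
x = np.linspace(-4, 4, 201)
pdf = kde(x)                       # estimated density on the grid

print(pdf[100])   # density near x = 0, close to 1/sqrt(2*pi) ~ 0.399
```

    The bandwidth choice controls the bias-variance trade-off of the estimate, which is the kind of comparison a computational analysis of pdf approximation techniques would quantify.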

  8. Optimally frugal foraging

    NASA Astrophysics Data System (ADS)

    Bénichou, O.; Bhat, U.; Krapivsky, P. L.; Redner, S.

    2018-02-01

    We introduce the frugal foraging model in which a forager performs a discrete-time random walk on a lattice in which each site initially contains S food units. The forager metabolizes one unit of food at each step and starves to death when it last ate S steps in the past. Whenever the forager eats, it consumes all food at its current site and this site remains empty forever (no food replenishment). The crucial property of the forager is that it is frugal and eats only when encountering food within at most k steps of starvation. We compute the average lifetime analytically as a function of the frugality threshold and show that there exists an optimal strategy, namely, an optimal frugality threshold k* that maximizes the forager lifetime.
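
    The model lends itself to direct simulation. The sketch below is one plausible reading of the rules on a 1D lattice (hunger resets on eating; eating is allowed only within k steps of starvation); details such as whether the starting site holds food are assumptions made for the example.

```python
import numpy as np

def forager_lifetime(S=4, k=2, rng=None):
    """Simulate one frugal forager on a 1D lattice. The forager dies after S
    steps without eating, and eats (emptying the site forever) only when it
    is within k steps of starvation."""
    if rng is None:
        rng = np.random.default_rng()
    eaten = set()            # sites already emptied (no replenishment)
    x, hunger, t = 0, 0, 0   # position, steps since last meal, lifetime
    while hunger < S:
        x += rng.choice((-1, 1))
        t += 1
        hunger += 1
        if x not in eaten and hunger >= S - k:   # frugal eating rule
            eaten.add(x)
            hunger = 0
    return t

rng = np.random.default_rng(0)
lifetimes = [forager_lifetime(S=4, k=2, rng=rng) for _ in range(2000)]
print(np.mean(lifetimes))   # mean lifetime for this frugality threshold
```

    Sweeping k and comparing mean lifetimes is the numerical counterpart of the paper's analytical result that an optimal frugality threshold k* exists.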

  9. A new approach for the one-step synthesis of bioactive PS vs. PMMA silica hybrid microspheres as potential drug delivery systems.

    PubMed

    Angelopoulou, A; Efthimiadou, E K; Boukos, N; Kordas, G

    2014-05-01

    In this work, hybrid microspheres were prepared in a two-step process combining emulsifier-free emulsion polymerization and the sol-gel coating method. In the first step, polystyrene (PS) and poly(methyl methacrylate) (PMMA) microspheres were prepared as sacrificial templates, and in the second step a silanol shell was fabricated. Functionalization of the hybrid microsphere surface with silane analogs (APTES, TEOS) resulted in enhanced effects. Hollow microspheres were obtained either in an additional template-dissolution step and/or during the coating process. The microspheres' surface interactions and size distribution were optimized by treatment in simulated body fluids, which enabled in vitro prediction of bioactivity. The bioassay test indicated that the induced hydroxyapatite resembled naturally occurring bone apatite in structure. The drug doxorubicin (DOX) was used as a model entity for the evaluation of drug loading and release. The drug release study was performed at two different pH conditions: acidic (pH=4.5), close to the cancer cell environment, and slightly basic (pH=7.4), resembling the orthopedic environment. The results of the present study indicate promising hybrid microspheres for potential application as drug delivery vehicles, with dual orthopedic functionalities in bone defects, bone inflammation, bone cancer and bone repair. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Heterologous pathway assembly reveals molecular steps of fungal terreic acid biosynthesis.

    PubMed

    Kong, Chuixing; Huang, Hezhou; Xue, Ying; Liu, Yiqi; Peng, Qiangqiang; Liu, Qi; Xu, Qin; Zhu, Qiaoyun; Yin, Ying; Zhou, Xiangshan; Zhang, Yuanxing; Cai, Menghao

    2018-02-01

    Terreic acid is a potential anticancer drug as it inhibits Bruton's tyrosine kinase; however, its biosynthetic molecular steps remain unclear. In this work, the individual reactions of terreic acid biosynthesis were determined by stepwise pathway assembly in a heterologous host, Pichia pastoris, on the basis of previous knockout studies in a native host, Aspergillus terreus. Polyketide synthase AtX was found to catalyze the formation of partially reduced polyketide 6-methylsalicylic acid, followed by 3-methylcatechol synthesis by salicylate 1-monooxygenase AtA-mediated decarboxylative hydroxylation of 6-methylsalicylic acid. Our results show that cytochrome P450 monooxygenase AtE hydroxylates 3-methylcatechol, thus producing the next product, 3-methyl-1,2,4-benzenetriol. A smaller putative cytochrome P450 monooxygenase, AtG, assists with this step. Then, AtD causes epoxidation and hydroxyl oxidation of 3-methyl-1,2,4-benzenetriol and produces a compound terremutin, via which the previously unknown function of AtD was identified as cyclooxygenation. The final step involves an oxidation reaction of a hydroxyl group by a glucose-methanol-choline oxidoreductase, AtC, which leads to the final product: terreic acid. Functions of AtD and AtG were determined for the first time. All the genes were reanalyzed and all intermediates and final products were isolated and identified. Our model fully defines the molecular steps and corrects previous results from the literature.

  11. The Role of Emotions in Employee Creativity.

    ERIC Educational Resources Information Center

    Higgins, Lexis F.; And Others

    1992-01-01

    This paper examines research on influences of emotions on creativity, describes how feelings impact an individual's ability and willingness to function creatively, and discusses the implications for management of creativity in the employment setting. A four-step model of the creative process is discussed, and two sources (proximal and distal) of…

  12. Identifying mechanical property parameters of planetary soil using in-situ data obtained from exploration rovers

    NASA Astrophysics Data System (ADS)

    Ding, Liang; Gao, Haibo; Liu, Zhen; Deng, Zongquan; Liu, Guangjun

    2015-12-01

    Identifying the mechanical property parameters of planetary soil based on terramechanics models using in-situ data obtained from autonomous planetary exploration rovers is both an important scientific goal and essential for control strategy optimization and high-fidelity simulations of rovers. However, identifying all the terrain parameters is a challenging task because of the nonlinear and coupling nature of the involved functions. Three parameter identification methods are presented in this paper to serve different purposes based on an improved terramechanics model that takes into account the effects of slip, wheel lugs, etc. Parameter sensitivity and coupling of the equations are analyzed, and the parameters are grouped according to their sensitivity to the normal force, resistance moment and drawbar pull. An iterative identification method using the original integral model is developed first. In order to realize real-time identification, the model is then simplified by linearizing the normal and shearing stresses to derive decoupled closed-form analytical equations. Each equation contains one or two groups of soil parameters, making step-by-step identification of all the unknowns feasible. Experiments were performed using six different types of single wheels as well as a four-wheeled rover moving on planetary soil simulant. All the unknown model parameters were identified using the measured data and compared with the values obtained by conventional experiments. It is verified that the proposed iterative identification method provides improved accuracy, making it suitable for scientific studies of soil properties, whereas the step-by-step identification methods based on simplified models require less calculation time, making them more suitable for real-time applications. The models have less than a 10% margin of error compared with the measured results when predicting the interaction forces and moments using the corresponding identified parameters.
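
    Terramechanics parameter identification of this kind can be illustrated with the classical Janosi-Hanamoto shear-stress relation τ(j) = τ_max·(1 − e^(−j/K)), where j is shear displacement and K the shear deformation modulus. The data and parameter values below are synthetic, and this single-equation fit is a stand-in for the paper's full wheel-terrain model.

```python
import numpy as np
from scipy.optimize import curve_fit

def shear_stress(j, tau_max, K):
    """Janosi-Hanamoto shear model: tau = tau_max * (1 - exp(-j / K))."""
    return tau_max * (1.0 - np.exp(-j / K))

# Synthetic "measured" shear stresses with noise (true tau_max=12, K=0.01).
j = np.linspace(0.0, 0.05, 30)               # shear displacement (m)
rng = np.random.default_rng(5)
tau_meas = shear_stress(j, 12.0, 0.01) + 0.1 * rng.normal(size=j.size)

# Nonlinear least-squares identification of the two soil parameters.
popt, _ = curve_fit(shear_stress, j, tau_meas, p0=(10.0, 0.02))
print(popt)   # recovers approximately (12.0, 0.01)
```

    The paper's step-by-step methods do essentially this, but with closed-form linearized equations per parameter group so that the identification can run in real time on the rover.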

  13. Applying the multivariate time-rescaling theorem to neural population models

    PubMed Central

    Gerhard, Felipe; Haslinger, Robert; Pipa, Gordon

    2011-01-01

    Statistical models of neural activity are integral to modern neuroscience. Recently, interest has grown in modeling the spiking activity of populations of simultaneously recorded neurons to study the effects of correlations and functional connectivity on neural information processing. However, any statistical model must be validated by an appropriate goodness-of-fit test. Kolmogorov-Smirnov tests based upon the time-rescaling theorem have proven to be useful for evaluating point-process-based statistical models of single-neuron spike trains. Here we discuss the extension of the time-rescaling theorem to the multivariate (neural population) case. We show that even in the presence of strong correlations between spike trains, models which neglect couplings between neurons can be erroneously passed by the univariate time-rescaling test. We present the multivariate version of the time-rescaling theorem, and provide a practical step-by-step procedure for applying it towards testing the sufficiency of neural population models. Using several simple analytically tractable models and also more complex simulated and real data sets, we demonstrate that important features of the population activity can only be detected using the multivariate extension of the test. PMID:21395436
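
    The classical univariate procedure that the paper extends can be sketched as follows: simulate an inhomogeneous Poisson spike train with a known conditional intensity, rescale interspike intervals by the integrated intensity, and KS-test the rescaled intervals. The intensity function and all numbers below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(2)
T, lam_max = 2000.0, 2.0

def lam(t):
    """Known conditional intensity lambda(t) = 1 + cos(t) (nonnegative)."""
    return 1.0 + np.cos(t)

# Simulate spikes from the inhomogeneous Poisson process by thinning.
cand = np.cumsum(rng.exponential(1.0 / lam_max, size=int(3 * lam_max * T)))
cand = cand[cand < T]
spikes = cand[rng.uniform(size=cand.size) < lam(cand) / lam_max]

# Time-rescaling: integrated intensity between spikes should be Exp(1).
Lambda = spikes + np.sin(spikes)      # closed-form integral of lam
taus = np.diff(Lambda)
z = 1.0 - np.exp(-taus)               # Exp(1) -> Uniform(0,1) transform
stat, p = kstest(z, "uniform")
print(stat, p)                        # small KS statistic: the model fits
```

    The paper's point is that running this test separately on each neuron of a population can pass a model that ignores inter-neuron couplings, which motivates the multivariate version.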

  14. Efficient hyperspectral image segmentation using geometric active contour formulation

    NASA Astrophysics Data System (ADS)

    Albalooshi, Fatema A.; Sidike, Paheding; Asari, Vijayan K.

    2014-10-01

    In this paper, we present a new formulation of geometric active contours that embeds local hyperspectral image information for accurate object region and boundary extraction. We exploit a self-organizing map (SOM), an unsupervised neural network, to train our model. The segmentation process is achieved by the construction of a level set cost functional in which the dynamic variable is the best matching unit (BMU) coming from the SOM. In addition, we use Gaussian filtering to penalize the deviation of the level set functional from a signed distance function, which helps to eliminate the computationally expensive re-initialization step. By using the collective computational ability and energy convergence capability of the active contour model (ACM) energy functional, our method optimizes the geometric ACM energy functional with lower computational time and a smoother level set function. The proposed algorithm starts with feature extraction from raw hyperspectral images. In this step, the principal component analysis (PCA) transformation is employed, which helps in reducing dimensionality and selecting the best sets of significant spectral bands. The modified geometric level-set-functional-based ACM is then applied to the optimal number of spectral bands determined by the PCA. By introducing local significant spectral band information, our proposed method is able to keep the level set functional close to a signed distance function, and therefore largely removes the need for the expensive re-initialization procedure. To verify the effectiveness of the proposed technique, we use real-life hyperspectral images and test our algorithm on varying textural regions. This framework can be easily adapted to different applications for object segmentation in aerial hyperspectral imagery.

  15. Linear model for fast background subtraction in oligonucleotide microarrays.

    PubMed

    Kroll, K Myriam; Barkema, Gerard T; Carlon, Enrico

    2009-11-16

    One important preprocessing step in the analysis of microarray data is background subtraction. In high-density oligonucleotide arrays this is recognized as a crucial step for the global performance of the data analysis from raw intensities to expression values. We propose here an algorithm for background estimation based on a model in which the cost function is quadratic in a set of fitting parameters such that minimization can be performed through linear algebra. The model incorporates two effects: 1) Correlated intensities between neighboring features in the chip and 2) sequence-dependent affinities for non-specific hybridization fitted by an extended nearest-neighbor model. The algorithm has been tested on 360 GeneChips from publicly available data of recent expression experiments. The algorithm is fast and accurate. Strong correlations between the fitted values for different experiments as well as between the free-energy parameters and their counterparts in aqueous solution indicate that the model captures a significant part of the underlying physical chemistry.
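
    The core computational idea, a cost function quadratic in the fitting parameters so that minimization reduces to a single linear solve, can be sketched with synthetic data. The design matrix below is a toy stand-in for the paper's neighbor-correlation and sequence-affinity terms.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy chip: the background of each feature is a linear function of four
# neighboring feature intensities plus an offset, observed with noise.
n = 1000
neighbors = rng.normal(size=(n, 4))        # neighboring feature intensities
true_w = np.array([0.4, 0.3, 0.2, 0.1])
background = neighbors @ true_w + 2.0      # linear model with offset
observed = background + rng.normal(scale=0.05, size=n)

# Quadratic cost in the parameters -> one linear least-squares solve.
X = np.hstack([neighbors, np.ones((n, 1))])
params, *_ = np.linalg.lstsq(X, observed, rcond=None)
print(params)   # recovers approximately [0.4, 0.3, 0.2, 0.1, 2.0]
```

    The fitted linear model then supplies a background estimate to subtract from each feature, which is what makes the algorithm fast enough to run across hundreds of GeneChips.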

  16. Vocal Development as a Guide to Modeling the Evolution of Language.

    PubMed

    Oller, D Kimbrough; Griebel, Ulrike; Warlaumont, Anne S

    2016-04-01

    Modeling of evolution and development of language has principally utilized mature units of spoken language, phonemes and words, as both targets and inputs. This approach cannot address the earliest phases of development because young infants are unable to produce such language features. We argue that units of early vocal development (protophones and their primitive illocutionary/perlocutionary forces) should be targeted in evolutionary modeling because they suggest likely units of hominin vocalization/communication shortly after the split from the chimpanzee/bonobo lineage, and because early development of spontaneous vocal capability is a logically necessary step toward vocal language, a root capability without which other crucial steps toward vocal language capability are impossible. Modeling of language evolution/development must account for dynamic change in early communicative units of form/function across time. We argue for interactive contributions of sender/infants and receiver/caregivers in a feedback loop involving both development and evolution and propose to begin computational modeling at the hominin break from the primate communicative background. Copyright © 2016 Cognitive Science Society, Inc.

  17. Penalized Nonlinear Least Squares Estimation of Time-Varying Parameters in Ordinary Differential Equations

    PubMed Central

    Cao, Jiguo; Huang, Jianhua Z.; Wu, Hulin

    2012-01-01

    Ordinary differential equations (ODEs) are widely used in biomedical research and other scientific areas to model complex dynamic systems. It is an important statistical problem to estimate parameters in ODEs from noisy observations. In this article we propose a method for estimating the time-varying coefficients in an ODE. Our method is a variation of the nonlinear least squares where penalized splines are used to model the functional parameters and the ODE solutions are also approximated using splines. We resort to the implicit function theorem to deal with the nonlinear least squares objective function that is only defined implicitly. The proposed penalized nonlinear least squares method is applied to estimate an HIV dynamic model from a real dataset. Monte Carlo simulations show that the new method can provide much more accurate estimates of functional parameters than the existing two-step local polynomial method, which relies on estimation of the derivatives of the state function. Supplemental materials for the article are available online. PMID:23155351

  18. Principal Dynamic Mode Analysis of the Hodgkin–Huxley Equations

    PubMed Central

    Eikenberry, Steffen E.; Marmarelis, Vasilis Z.

    2015-01-01

    We develop an autoregressive model framework based on the concept of Principal Dynamic Modes (PDMs) for the process of action potential (AP) generation in the excitable neuronal membrane described by the Hodgkin–Huxley (H–H) equations. The model's exogenous input is injected current, and whenever the membrane potential output exceeds a specified threshold, it is fed back as a second input. The PDMs are estimated from the previously developed Nonlinear Autoregressive Volterra (NARV) model, and represent an efficient functional basis for Volterra kernel expansion. The PDM-based model admits a modular representation, consisting of the forward and feedback PDM bases as linear filterbanks for the exogenous and autoregressive inputs, respectively, whose outputs are then fed to a static nonlinearity composed of polynomials operating on the PDM outputs and cross-terms of pair-products of PDM outputs. A two-step procedure for model reduction is performed: first, influential subsets of the forward and feedback PDM bases are identified and selected as the reduced PDM bases. Second, the terms of the static nonlinearity are pruned. The first step reduces model complexity from a total of 65 coefficients to 27, while the second further reduces the model coefficients to only eight. It is demonstrated that the performance cost of model reduction in terms of out-of-sample prediction accuracy is minimal. Unlike the full model, the eight coefficient pruned model can be easily visualized to reveal the essential system components, and thus the data-derived PDM model can yield insight into the underlying system structure and function. PMID:25630480

  19. Empirical Analysis of the Photoelectrochemical Impedance Response of Hematite Photoanodes for Water Photo-oxidation.

    PubMed

    Klotz, Dino; Grave, Daniel A; Dotan, Hen; Rothschild, Avner

    2018-03-15

    Photoelectrochemical impedance spectroscopy (PEIS) is a useful tool for the characterization of photoelectrodes for solar water splitting. However, the analysis of PEIS spectra often involves a priori assumptions that might bias the results. This work puts forward an empirical method that analyzes the distribution of relaxation times (DRT), obtained directly from the measured PEIS spectra of a model hematite photoanode. By following how the DRT evolves as a function of control parameters such as the applied potential and composition of the electrolyte solution, we obtain unbiased insights into the underlying mechanisms that shape the photocurrent. In a subsequent step, we fit the data to a process-oriented equivalent circuit model (ECM) whose makeup is derived from the DRT analysis in the first step. This yields consistent quantitative trends of the dominant polarization processes observed. Our observations reveal a common step for the photo-oxidation reactions of water and H2O2 in alkaline solution.
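
    The DRT step can be illustrated on synthetic data. In this sketch the single-RC impedance, the relaxation-time grid, and the regularization strength are hypothetical choices, and a plain ridge solve stands in for the more careful DRT algorithms used in practice.

```python
import numpy as np

# Synthetic impedance: series resistance plus one RC relaxation (tau = 1 ms).
f = np.logspace(0, 5, 60)                     # frequency grid (Hz)
w = 2.0 * np.pi * f
R_inf, R_ct, tau_true = 0.5, 1.0, 1e-3
Z = R_inf + R_ct / (1.0 + 1j * w * tau_true)

# DRT model: Z - R_inf ~ A @ g with A_kl = 1 / (1 + j w_k tau_l);
# solve for the distribution g by Tikhonov (ridge) regularized least squares.
taus = np.logspace(-6, 0, 61)                 # candidate relaxation times (s)
A = 1.0 / (1.0 + 1j * np.outer(w, taus))
A_ri = np.vstack([A.real, A.imag])            # stack real and imaginary parts
b = np.concatenate([(Z - R_inf).real, (Z - R_inf).imag])
lam = 1e-3                                    # regularization strength
g = np.linalg.solve(A_ri.T @ A_ri + lam * np.eye(taus.size), A_ri.T @ b)
tau_peak = taus[np.argmax(g)]                 # dominant relaxation time
```

    Tracking how the peaks of g(tau) move as a control parameter varies is the essence of the DRT analysis described in the abstract.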

  20. Numerical Analysis of Modeling Based on Improved Elman Neural Network

    PubMed Central

    Jie, Shao

    2014-01-01

    A model based on the improved Elman neural network (IENN) is proposed to analyze nonlinear circuits with the memory effect. The hidden layer neurons are activated by a group of Chebyshev orthogonal basis functions instead of sigmoid functions in this model. The error curves of the sum of squared error (SSE) varying with the number of hidden neurons and the iteration step are studied to determine the number of hidden layer neurons. Simulation results of the half-bridge class-D power amplifier (CDPA) with two-tone and broadband signals as input have shown that the proposed behavioral model can reconstruct the system of CDPAs accurately and depict the memory effect of CDPAs well. Compared with the Volterra-Laguerre (VL) model, the Chebyshev neural network (CNN) model, and the basic Elman neural network (BENN) model, the proposed model has better performance. PMID:25054172
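
    The core ingredient, hidden units activated by Chebyshev orthogonal basis functions with a linear read-out, can be sketched with NumPy's Chebyshev utilities; with a linear output layer, training the output weights reduces to a least-squares fit. The target function and degree below are arbitrary stand-ins.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Hidden "neurons" activated by Chebyshev polynomials T_0..T_9 rather than
# sigmoids; the linear output layer is fitted by least squares.
x = np.linspace(-1.0, 1.0, 200)
y = np.sin(3.0 * x) * np.exp(x)              # stand-in nonlinear target
coef = C.chebfit(x, y, deg=9)                # output-layer weights
y_hat = C.chebval(x, coef)                   # network response
```

    The orthogonality of the basis keeps the fit well conditioned, which is one motivation for replacing sigmoid activations in this setting.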

  1. Sensitivity analysis, calibration, and testing of a distributed hydrological model using error‐based weighting and one objective function

    USGS Publications Warehouse

    Foglia, L.; Hill, Mary C.; Mehl, Steffen W.; Burlando, P.

    2009-01-01

    We evaluate the utility of three interrelated means of using data to calibrate the fully distributed rainfall‐runoff model TOPKAPI as applied to the Maggia Valley drainage area in Switzerland. The use of error‐based weighting of observation and prior information data, local sensitivity analysis, and single‐objective function nonlinear regression provides quantitative evaluation of the sensitivity of the 35 model parameters to the data, identification of the data types most important to the calibration, and identification of correlations among parameters that contribute to nonuniqueness. Sensitivity analysis required only 71 model runs, and regression required about 50 model runs. The approach presented appears to be ideal for evaluation of models with long run times or as a preliminary step to more computationally demanding methods. The statistics used include composite scaled sensitivities, parameter correlation coefficients, leverage, Cook's D, and DFBETAS. Tests suggest that the predictive ability of the calibrated model is typical of hydrologic models.
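
    Composite scaled sensitivities can be computed in a few lines. The toy exponential model, parameter values, and weights below are illustrative stand-ins, not the TOPKAPI application.

```python
import numpy as np

# Toy model y(t) = p0 * exp(-p1 * t) with observation weights w_i = 1/sigma^2.
t = np.linspace(0.0, 5.0, 20)
p = np.array([2.0, 0.7])
w = np.full(t.size, 1.0 / 0.1**2)

def model(p):
    return p[0] * np.exp(-p[1] * t)

# Jacobian by central differences; scaled sensitivity ss_ij = (dy_i/dp_j) * p_j
# * sqrt(w_i); composite scaled sensitivity css_j is the RMS of column j.
J = np.empty((t.size, p.size))
for j in range(p.size):
    dp = np.zeros_like(p)
    dp[j] = 1e-6 * abs(p[j])
    J[:, j] = (model(p + dp) - model(p - dp)) / (2.0 * dp[j])
ss = J * p[None, :] * np.sqrt(w)[:, None]
css = np.sqrt((ss**2).mean(axis=0))          # one value per parameter
```

    Parameters with small composite scaled sensitivity are poorly informed by the data, which is how the statistic identifies calibration-relevant data types.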

  2. The Throw-and-Catch Model of Human Gait: Evidence from Coupling of Pre-Step Postural Activity and Step Location

    PubMed Central

    Bancroft, Matthew J.; Day, Brian L.

    2016-01-01

    Postural activity normally precedes the lift of a foot from the ground when taking a step, but its function is unclear. The throw-and-catch hypothesis of human gait proposes that the pre-step activity is organized to generate momentum for the body to fall ballistically along a specific trajectory during the step. The trajectory is appropriate for the stepping foot to land at its intended location while at the same time being optimally placed to catch the body and regain balance. The hypothesis therefore predicts a strong coupling between the pre-step activity and step location. Here we examine this coupling when stepping to visually-presented targets at different locations. Ten healthy, young subjects were instructed to step as accurately as possible onto targets placed in five locations that required either different step directions or different step lengths. In 75% of trials, the target location remained constant throughout the step. In the remaining 25% of trials, the intended step location was changed by making the target jump to a new location 96 ms ± 43 ms after initiation of the pre-step activity, long before foot lift. As predicted by the throw-and-catch hypothesis, when the target location remained constant, the pre-step activity led to body momentum at foot lift that was coupled to the intended step location. When the target location jumped, the pre-step activity was adjusted (median latency 223 ms) and prolonged (on average by 69 ms), which altered the body’s momentum at foot lift according to where the target had moved. We conclude that whenever possible the coupling between the pre-step activity and the step location is maintained. This provides further support for the throw-and-catch hypothesis of human gait. PMID:28066208

  4. CXTFIT/Excel A modular adaptable code for parameter estimation, sensitivity analysis and uncertainty analysis for laboratory or field tracer experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Guoping; Mayes, Melanie; Parker, Jack C

    2010-01-01

    We implemented the widely used CXTFIT code in Excel to provide flexibility and added sensitivity and uncertainty analysis functions to improve transport parameter estimation and to facilitate model discrimination for multi-tracer experiments on structured soils. Analytical solutions for one-dimensional equilibrium and nonequilibrium convection dispersion equations were coded as VBA functions so that they could be used as ordinary math functions in Excel for forward predictions. Macros with user-friendly interfaces were developed for optimization, sensitivity analysis, uncertainty analysis, error propagation, response surface calculation, and Monte Carlo analysis. As a result, any parameter with transformations (e.g., dimensionless, log-transformed, species-dependent reactions, etc.) could be estimated with uncertainty and sensitivity quantification for multiple tracer data at multiple locations and times. Prior information and observation errors could be incorporated into the weighted nonlinear least squares method with a penalty function. Users are able to change selected parameter values and view the results via embedded graphics, resulting in a flexible tool applicable to modeling transport processes and to teaching students about parameter estimation. The code was verified by comparing with a number of benchmarks from CXTFIT 2.0. It was applied to improve parameter estimation for four typical tracer experiment data sets in the literature using multi-model evaluation and comparison. Additional examples were included to illustrate the flexibility and advantages of CXTFIT/Excel. The VBA macros were designed for general purposes and could be used for any parameter estimation/model calibration when the forward solution is implemented in Excel. A step-by-step tutorial, example Excel files and the code are provided as supplemental material.
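
    Weighted nonlinear least squares with a prior-information penalty can be sketched as follows. The breakthrough-curve expression is the leading term of the classical equilibrium convection-dispersion solution, and all parameter values, weights, and priors are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.special import erfc

# Synthetic breakthrough data at depth x; true v = 0.5 cm/h, D = 0.8 cm^2/h.
x = 10.0                                       # observation depth (cm)
t = np.linspace(1.0, 40.0, 30)                 # observation times (h)

def bt_curve(v, D):                            # leading term of the CDE solution
    return 0.5 * erfc((x - v * t) / (2.0 * np.sqrt(D * t)))

rng = np.random.default_rng(3)
sigma = 0.02                                   # observation error
obs = bt_curve(0.5, 0.8) + sigma * rng.standard_normal(t.size)

prior = np.array([0.6, 1.0])                   # prior estimates of v, D
prior_w = np.array([1.0 / 0.2, 1.0 / 0.5])     # 1 / prior standard deviations

def residuals(p):
    data_res = (bt_curve(*p) - obs) / sigma    # weighted data residuals
    penalty = (p - prior) * prior_w            # prior-information penalty
    return np.concatenate([data_res, penalty])

fit = least_squares(residuals, prior, bounds=([0.01, 0.01], [5.0, 10.0]))
v_hat, D_hat = fit.x
```

    Stacking the penalty terms under the weighted residuals is exactly how prior information enters the objective function described in the abstract.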

  5. Direct 3D cell-printing of human skin with functional transwell system.

    PubMed

    Kim, Byoung Soo; Lee, Jung-Seob; Gao, Ge; Cho, Dong-Woo

    2017-06-06

    Three-dimensional (3D) cell-printing has been emerging as a promising technology with which to build up human skin models by enabling rapid and versatile design. Despite the technological advances, challenges remain in the development of fully functional models that recapitulate the complexities of the native tissue. Moreover, although several approaches have been explored for the development of biomimetic human skin models, the limitations of present skin models based on multistep fabrication methods using polydimethylsiloxane chips and commercial transwell inserts could be tackled by leveraging 3D cell-printing technology. In this paper, we present a new 3D cell-printing strategy for engineering a 3D human skin model with a functional transwell system in a single-step process. A hybrid 3D cell-printing system was developed, allowing for the use of extrusion and inkjet modules at the same time. We began by revealing the significance of each module in engineering human skin models: by using the extrusion-dispensing module, we engineered a collagen-based construct with a polycaprolactone (PCL) mesh that prevented the contraction of collagen during tissue maturation; the inkjet-based dispensing module was used to uniformly distribute keratinocytes. Taking these features together, we engineered a human skin model with a functional transwell system; the transwell system and fibroblast-populated dermis were consecutively fabricated by using the extrusion modules, and keratinocytes were then uniformly distributed onto the engineered dermis by the inkjet module. Our transwell system is a supportive 3D construct composed of PCL, enabling the maturation of a skin model without the aid of commercial transwell inserts. This skin model revealed favorable biological characteristics, including a stabilized fibroblast-stretched dermis and stratified epidermis layers after 14 days. A 50-fold reduction in cost was also achieved, and 10 times less medium was used than in a conventional culture. Collectively, because this single-step process opens up possibilities for versatile designs, we envision that our cell-printing strategy could provide an attractive platform for engineering various human skin models.

  6. Gait efficiency on an uneven surface is associated with falls and injury in older subjects with a spectrum of lower limb neuromuscular function: a prospective study

    PubMed Central

    Zurales, Katie; DeMott, Trina K.; Kim, Hogene; Allet, Lara; Ashton-Miller, James A.; Richardson, James K.

    2015-01-01

    Objective To determine which gait measures on smooth and uneven surfaces predict falls and fall-related injuries in older subjects with diabetic peripheral neuropathy (DPN). Design Twenty-seven subjects (12 women) with a spectrum of peripheral nerve function ranging from normal to moderately severe DPN walked on smooth and uneven surfaces, with gait parameters determined by optoelectronic kinematic techniques. Falls and injuries were then determined prospectively over the following year. Results Seventeen subjects (62.9%) fell and 12 (44.4%) sustained a fall-related injury. As compared to non-fallers, the subject group reporting any fall, as well as the subject group reporting fall-related injury, demonstrated decreased speed, greater step width (SW), shorter step length (SL) and greater step-width-to-step-length ratio (SW:SL) on both surfaces. Uneven surface SW:SL was the strongest predictor of falls (pseudo-R2 = 0.65; p = .012) and remained so with inclusion of other relevant variables into the model. Post-hoc analysis comparing injured with non-injured fallers showed no difference in any gait parameter. Conclusion SW:SL on an uneven surface is the strongest predictor of falls and injuries in older subjects with a spectrum of peripheral neurologic function. Given the relationship between SW:SL and efficiency, older neuropathic patients at increased fall risk appear to sacrifice efficiency for stability on uneven surfaces. PMID:26053187

  7. Continuous versus discontinuous albedo representations in a simple diffusive climate model

    NASA Astrophysics Data System (ADS)

    Simmons, P. A.; Griffel, D. H.

    1988-07-01

    A one-dimensional annually and zonally averaged energy-balance model, with diffusive meridional heat transport and including ice-albedo feedback, is considered. This type of model is found to be very sensitive to the form of albedo used. The solutions for a discontinuous step-function albedo are compared to those for a more realistic smoothly varying albedo. The smooth albedo gives a closer fit to present conditions, but the discontinuous form gives a better representation of climates in earlier epochs.
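
    The sensitivity to the albedo form is easy to reproduce in a zero-dimensional caricature of such a model. The parameter values and the tanh ramp below are illustrative assumptions, not the paper's one-dimensional diffusive formulation.

```python
import numpy as np
from scipy.optimize import brentq

# Zero-dimensional energy balance: Q * (1 - albedo(T)) = A + B * T.
Q, A, B, Tc = 342.0, 202.0, 1.9, -10.0        # W/m^2, W/m^2, W/m^2/K, deg C

def albedo_step(T):                           # discontinuous ice/no-ice switch
    return 0.62 if T < Tc else 0.30

def albedo_smooth(T, width=5.0):              # tanh ramp across the ice edge
    return 0.46 - 0.16 * np.tanh((T - Tc) / width)

def imbalance(T, albedo):                     # net flux; roots are equilibria
    return Q * (1.0 - albedo(T)) - (A + B * T)

# The smooth form yields warm and cold equilibria found by root bracketing;
# with the step form the net flux jumps discontinuously across Tc.
T_warm = brentq(lambda T: imbalance(T, albedo_smooth), 0.0, 50.0)
T_cold = brentq(lambda T: imbalance(T, albedo_smooth), -60.0, -20.0)
jump = imbalance(Tc + 1e-9, albedo_step) - imbalance(Tc - 1e-9, albedo_step)
```

    The discontinuous flux jump is what makes step-albedo solutions behave qualitatively differently from the smooth case near the ice line.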

  8. Calibration process of highly parameterized semi-distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Vidmar, Andrej; Brilly, Mitja

    2017-04-01

    Hydrological phenomena take place in the hydrological system, which is governed by nature, and are essentially stochastic. These phenomena are unique, non-recurring, and changeable across space and time. Since any river basin with its own natural characteristics, and any hydrological event therein, is unique, calibration is a complex process that has not been researched enough. Calibration is a procedure for determining the parameters of a model that are not known well enough. Input and output variables and the mathematical model expressions are known, while only some parameters are unknown; these are determined by calibrating the model. The software used for hydrological modelling nowadays is equipped with sophisticated calibration algorithms that give the modeller little opportunity to manage the process, and the results are often not the best. We therefore developed a procedure for an expert-driven calibration process. We use the HBV-light-CLI hydrological model, which has a command-line interface, and couple it with PEST, a parameter estimation tool that is widely used in groundwater modelling and can also be used for surface waters. A calibration process managed directly by the expert affects the outcome of the inversion procedure in proportion to the expert's knowledge, and achieves better results than if the procedure had been left to the selected optimization algorithm. The first step is to properly define the spatial characteristics and structural design of the semi-distributed model, including all morphological and hydrological phenomena such as karstic, alluvial and forest areas; this step requires the modeller's geological, meteorological, hydraulic and hydrological knowledge. The second step is to set initial parameter values to their preferred values based on expert knowledge; in this step we also define all parameter and observation groups. Peak data are essential in the calibration process if we are mainly interested in flood events, and each sub-catchment in the model has its own observation group. The third step is to set appropriate bounds on the parameters within their range of realistic values. The fourth step is to use singular value decomposition (SVD), which ensures that PEST maintains numerical stability regardless of how ill-posed the inverse problem is. The fifth step is to run PWTADJ1, which creates a new PEST control file in which weights are adjusted so that the contribution made to the total objective function by each observation group is the same; this prevents the information content of any group from being invisible to the inversion process. The sixth step is to add Tikhonov regularization to the PEST control file by running the ADDREG1 utility (Doherty, 2013); in adding regularization, ADDREG1 automatically provides a prior-information equation for each parameter in which the preferred value of that parameter is equated to its initial value. The last step is to run PEST. We run BeoPEST, a parallel version of PEST that can be run on multiple computers simultaneously over TCP communications, which speeds up the calibration process. The case study, with results of the calibration and validation of the model, will be presented.

  9. Accurate upwind methods for the Euler equations

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1993-01-01

    A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope steepening technique, which has no effect at smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one intermediate state model is employed. A modification for this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely, Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical at smooth regions, and yield high resolution at discontinuities.
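
    The median function makes limiter code compact, since minmod(a, b) = median(0, a, b). The sketch below applies this identity in a standard MC-type slope limiter; it illustrates the flavor of the monotonicity constraint rather than reproducing the paper's exact reconstruction.

```python
import numpy as np

def median3(a, b, c):                          # elementwise median of three
    return np.maximum(np.minimum(a, b), np.minimum(np.maximum(a, b), c))

def minmod(a, b):                              # minmod via the median identity
    return median3(np.zeros_like(a), a, b)

def mc_slopes(u):
    # Limited slopes for piecewise-linear reconstruction on cell averages u:
    # the centered difference, limited by twice the one-sided differences,
    # and zeroed at extrema so no new extrema are created.
    dm = u[1:-1] - u[:-2]
    dp = u[2:] - u[1:-1]
    dc = 0.5 * (dm + dp)
    return minmod(dc, minmod(2.0 * dm, 2.0 * dp))

u = np.array([0.0, 0.0, 0.3, 1.0, 1.0])        # smeared step profile
s = mc_slopes(u)                               # zero slope next to the plateaus
```

    On this profile the interior slopes come out as [0, 0.5, 0]: the reconstruction stays within the data range, which is the monotonicity-preservation property described above.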

  10. The joy of interactive modeling

    NASA Astrophysics Data System (ADS)

    Donchyts, Gennadii; Baart, Fedor; van Dam, Arthur; Jagers, Bert

    2013-04-01

    The conventional way of working with hydrodynamical models usually consists of the following steps: 1) define a schematization (e.g., in a graphical user interface, or by editing input files); 2) run the model from start to end; 3) visualize the results; 4) repeat any of the previous steps. This cycle commonly takes up from hours to several days. What if we could make this happen instantly? As most of the research done using numerical models is in fact qualitative and exploratory (Oreskes et al., 1994), why not use these models as such? How can we adapt models so that we can edit model input, run the model and visualize results at the same time? More and more, interactive models become available as online apps, mainly for demonstration and educational purposes. These models often simplify the physics behind flows and run on simplified model geometries, particularly when compared with state-of-the-art scientific simulation packages. Here we show how the aforementioned conventional standalone models ("static, run once") can be transformed into interactive models. The basic concepts behind turning existing (conventional) model engines into interactive engines are the following. The engine does not run the model from start to end, but is always available in memory, and can be fed new boundary conditions, or state changes, at any time. The model can be run continuously, per step, or up to a specified time. The Hollywood principle dictates how the model engine is instructed from 'outside', instead of the model engine taking all necessary actions on its own initiative. The underlying techniques that facilitate these concepts are introspection of the computation engine, which exposes its state variables, and control functions, e.g. for time stepping, via a standardized interface, such as BMI (Peckham et al., 2012). In this work we have used a shallow water flow model engine D-Flow Flexible Mesh.
The model was converted from executable to a library, and coupled to the graphical modelling environment Delta Shell. Both the engine and the environment are open source tools under active development at Deltares. The combination provides direct interactive control over the time loop and model state, and offers live 3D visualization of the running model using VTK library.
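
    The control concepts above (stepping, introspection, state injection) can be sketched with a toy model exposing a BMI-flavored interface. The method names follow the BMI convention loosely (real BMI signatures differ), and the linear-reservoir model is a stand-in.

```python
class ToyModelBMI:
    """Toy linear reservoir exposing a BMI-flavored control interface."""

    def initialize(self, storage=1.0, k=0.1, dt=1.0):
        self.t, self.dt, self.k = 0.0, dt, k
        self.storage = storage                 # exposed model state

    def update(self):                          # advance a single time step
        self.storage -= self.k * self.storage * self.dt
        self.t += self.dt

    def update_until(self, t_end):             # run up to a specified time
        while self.t < t_end:
            self.update()

    def get_value(self, name):                 # introspection of state
        return getattr(self, name)

    def set_value(self, name, value):          # inject state changes mid-run
        setattr(self, name, value)

m = ToyModelBMI()
m.initialize()
m.update_until(5.0)                            # run continuously ...
m.set_value("storage", 2.0)                    # ... edit state interactively ...
m.update()                                     # ... and keep stepping
```

    Because the engine never "finishes", a GUI can interleave such calls with live visualization, which is the interactivity the abstract describes.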

  11. A Data Stream Model For Runoff Simulation In A Changing Environment

    NASA Astrophysics Data System (ADS)

    Yang, Q.; Shao, J.; Zhang, H.; Wang, G.

    2017-12-01

    Runoff simulation is of great significance for water engineering design, water disaster control, and water resources planning and management in a catchment or region. A large number of methods, including concept-based process-driven models and statistic-based data-driven models, have been proposed and widely used worldwide during past decades. Most existing models assume that the relationship between runoff and its impacting factors is stationary. However, in a changing environment (e.g., climate change, human disturbance), their relationship usually evolves over time. In this study, we propose a data stream model for runoff simulation in a changing environment. Specifically, the proposed model works in three steps: learning a rule set, expansion of a rule, and simulation. The first step is to initialize a rule set. When a new observation arrives, the model checks which rule covers it and then uses that rule for simulation. Meanwhile, the Page-Hinckley (PH) change detection test is used to monitor the online simulation error of each rule. If a change is detected, the corresponding rule is removed from the rule set. In the second step, each rule that covers more than a given number of instances is expanded. In the third step, a simulation model for each leaf node is learnt with a perceptron without an activation function, and is updated as new observations arrive. Taking the Fuxi River catchment as a case study, we applied the model to simulate the monthly runoff in the catchment. Results show that an abrupt change is detected in 1997 by the Page-Hinckley change detection test, which is consistent with the historic record of flooding. In addition, the model achieves good simulation results with an RMSE of 13.326, and outperforms many established methods. The findings demonstrate that the proposed data stream model provides a promising way to simulate runoff in a changing environment.
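
    The Page-Hinckley test itself is short. The sketch below flags an abrupt jump in a simulated error stream; delta and lam are illustrative tuning values.

```python
# Page-Hinckley test: accumulate deviations of each error from its running
# mean; alarm when the statistic rises more than lam above its running minimum.
def page_hinkley(errors, delta=0.05, lam=5.0):
    mean, cum, cum_min = 0.0, 0.0, 0.0
    for i, e in enumerate(errors, 1):
        mean += (e - mean) / i                # running mean of the errors
        cum += e - mean - delta               # delta tolerates small drift
        cum_min = min(cum_min, cum)
        if cum - cum_min > lam:
            return i                          # index at which change is flagged
    return None                               # no change detected

errors = [0.1] * 50 + [1.5] * 50              # abrupt jump at index 50
alarm = page_hinkley(errors)
```

    In the rule-set scheme above, such an alarm on a rule's error stream would trigger removal of that rule.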

  12. Analytical results for the statistical distribution related to a memoryless deterministic walk: dimensionality effect and mean-field models.

    PubMed

    Terçariol, César Augusto Sangaletti; Martinez, Alexandre Souto

    2005-08-01

    Consider a medium characterized by N points whose coordinates are randomly generated by a uniform distribution along the edges of a unitary d-dimensional hypercube. A walker leaves from each point of this disordered medium and moves according to the deterministic rule to go to the nearest point which has not been visited in the preceding mu steps (deterministic tourist walk). Each trajectory generated by this dynamics has an initial nonperiodic part of t steps (transient) and a final periodic part of p steps (attractor). The neighborhood rank probabilities are parametrized by the normalized incomplete beta function I_d = I_{1/4}[1/2, (d+1)/2]. The joint distribution S_N^(mu,d)(t,p) is relevant, and the marginal distributions previously studied are particular cases. We show that, for the memoryless deterministic tourist walk in Euclidean space, this distribution is S_infinity^(1,d)(t,p) = [Gamma(1 + I_d^(-1)) (t + I_d^(-1)) / Gamma(t + p + I_d^(-1))] delta_{p,2}, where t = 0, 1, 2, ..., infinity, Gamma(z) is the gamma function and delta_{i,j} is the Kronecker delta. The mean-field models are the random link models, which correspond to d -> infinity, and the random map model which, even for mu = 0, presents a nontrivial cycle distribution [S_N^(0,rm)(p) proportional to p^(-1)]: S_N^(0,rm)(t,p) = Gamma(N) / {Gamma[N + 1 - (t+p)] N^(t+p)}. The fundamental quantities are the number of explored points n_e = t + p and I_d. Although the obtained distributions are simple, they do not follow straightforwardly and they have been validated by numerical experiments.

  13. 326 Lung Age/Chronological Age Index as Indicator of Clinical Improvement or Severity in Asthma Patients

    PubMed Central

    Castrejon-Vázquez, Isabel; Vargas, Maria Eugenia; Sabido, Raúl Cicero; Tapía, Jorge Galicia

    2012-01-01

    Background Spirometry is a very useful clinical test to evaluate pulmonary function in asthma. However, pulmonary function can be affected by sex, time of clinical evolution, lung age (LA) and chronological age (CA). The aim of this study was to evaluate LA/CA as an index of clinical improvement or severity in asthma patients. Methods The tenets of the Declaration of Helsinki were followed, and all patients gave their informed consent to participate in this study. Asthma severity was evaluated according to the GINA classification. Spirometry was performed at the beginning of this study, at 46 days, 96 days, 192 days and after 8 months. Statistical analysis was performed using the t test, 2-way ANOVA, correlation and multiple regression models, as well as ROC curves; P < 0.05 was considered significant. Results 70 asthma patients were included (22 male and 48 female); mean CA was 35 years, mean LA was 48 years (LA/CA index = 1.4), and mean time of clinical evolution was 13 years. An LA/CA index around 1 (range 0.5 to 0.9) was observed in asymptomatic patients. LA/CA indices over 1 were related to airway inflammation, and an LA/CA index of more than 2 correlated with GINA step 3. Interestingly, when we analyzed CA and LA, in the female group we observed more than 10 years of difference between CA and LA (GINA Steps 2 and 3), while in the male group this was observed across GINA Steps 1, 2 and 3. An LA/CA index ≤ 1 was considered normal. Conclusions The LA/CA index is a good indicator of clinical improvement or severity in asthma patients, with excellent correlation between pulmonary function and age.

  14. Selecting predictors for discriminant analysis of species performance: an example from an amphibious softwater plant.

    PubMed

    Vanderhaeghe, F; Smolders, A J P; Roelofs, J G M; Hoffmann, M

    2012-03-01

    Selecting an appropriate variable subset in linear multivariate methods is an important methodological issue for ecologists. Interest often exists in obtaining general predictive capacity or in finding causal inferences from predictor variables. Because of a lack of solid knowledge on a studied phenomenon, scientists explore predictor variables in order to find the most meaningful (i.e. discriminating) ones. As an example, we modelled the response of the amphibious softwater plant Eleocharis multicaulis using canonical discriminant function analysis. We asked how variables can be selected through comparison of several methods: univariate Pearson chi-square screening, principal components analysis (PCA) and step-wise analysis, as well as combinations of some methods. We expected PCA to perform best. The selected methods were evaluated through fit and stability of the resulting discriminant functions and through correlations between these functions and the predictor variables. The chi-square subset, at P < 0.05, followed by a step-wise sub-selection, gave the best results. In contrast to expectations, PCA performed poorly, as did step-wise analysis. The different chi-square subset methods all yielded ecologically meaningful variables, while probable noise variables were also selected by PCA and step-wise analysis. We advise against the simple use of PCA or step-wise discriminant analysis to obtain an ecologically meaningful variable subset; the former because it does not take into account the response variable, the latter because noise variables are likely to be selected. We suggest that univariate screening techniques are a worthwhile alternative for variable selection in ecology. © 2011 German Botanical Society and The Royal Botanical Society of the Netherlands.
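
    Univariate chi-square screening is straightforward to sketch. The synthetic binary predictors and the P < 0.05 threshold below are illustrative; a discriminant analysis would then be fitted on the selected subset.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Screen binary predictors against a binary response with chi-square tests.
rng = np.random.default_rng(2)
n = 300
y = rng.integers(0, 2, n)                          # species present / absent
informative = (y ^ (rng.random(n) < 0.2)).astype(int)  # agrees with y ~80% of the time
noise = rng.integers(0, 2, n)                      # unrelated predictor
X = np.column_stack([informative, noise])

selected = []
for j in range(X.shape[1]):
    table = np.histogram2d(X[:, j], y, bins=2)[0]  # 2x2 contingency table
    p_value = chi2_contingency(table)[1]
    if p_value < 0.05:
        selected.append(j)                         # keep discriminating variables
```

    Because each test involves the response variable, this pre-selection avoids the pitfall noted for PCA, which ignores the response entirely.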

  15. Improved Ionospheric Electrodynamic Models and Application to Calculating Joule Heating Rates

    NASA Technical Reports Server (NTRS)

    Weimer, D. R.

    2004-01-01

    Improved techniques have been developed for empirical modeling of the high-latitude electric potentials and magnetic field aligned currents (FAC) as a function of the solar wind parameters. The FAC model is constructed using scalar magnetic Euler potentials, and functions as a twin to the electric potential model. The improved models have more accurate field values as well as more accurate boundary locations. Non-linear saturation effects in the solar wind-magnetosphere coupling are also better reproduced. The models are constructed using a hybrid technique, which has spherical harmonic functions only within a small area at the pole. At lower latitudes the potentials are constructed from multiple Fourier series functions of longitude, at discrete latitudinal steps. It is shown that the two models can be used together in order to calculate the total Poynting flux and Joule heating in the ionosphere. An additional model of the ionospheric conductivity is not required in order to obtain the ionospheric currents and Joule heating, as the conductivity variations as a function of the solar inclination are implicitly contained within the FAC model's data. The models' outputs are shown for various input conditions, as well as compared with satellite measurements. The calculations of the total Joule heating are compared with results obtained by the inversion of ground-based magnetometer measurements. Like their predecessors, these empirical models should continue to be useful research and forecast tools.

  16. SAR-based change detection using hypothesis testing and Markov random field modelling

    NASA Astrophysics Data System (ADS)

    Cao, W.; Martinis, S.

    2015-04-01

    The objective of this study is to automatically detect changed areas caused by natural disasters from bi-temporal co-registered and calibrated TerraSAR-X data. The technique in this paper consists of two steps: Firstly, an automatic coarse detection step is applied based on a statistical hypothesis test for initializing the classification. The original analytical formula as proposed in the constant false alarm rate (CFAR) edge detector is reviewed and rewritten in a compact form of the incomplete beta function, which is a built-in routine in commercial scientific software such as MATLAB and IDL. Secondly, a post-classification step is introduced to optimize the noisy classification result in the previous step. Generally, an optimization problem can be formulated as a Markov random field (MRF) on which the quality of a classification is measured by an energy function. The optimal classification based on the MRF is related to the lowest energy value. Previous studies provide methods for the optimization problem using MRFs, such as the iterated conditional modes (ICM) algorithm. Recently, a novel algorithm was presented based on graph-cut theory. This method transforms an MRF to an equivalent graph and solves the optimization problem by a max-flow/min-cut algorithm on the graph. In this study this graph-cut algorithm is applied iteratively to improve the coarse classification. At each iteration the parameters of the energy function for the current classification are set by the logarithmic probability density function (PDF). The relevant parameters are estimated by the method of logarithmic cumulants (MoLC). Experiments are performed using two flood events in Germany and Australia in 2011 and a forest fire on La Palma in 2009 using pre- and post-event TerraSAR-X data. The results show convincing coarse classifications and considerable improvement by the graph-cut post-classification step.
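    A hedged sketch of how such a ratio test reduces to the incomplete beta function: under the no-change hypothesis, the ratio of two independent L-look averaged intensities with equal mean follows an F(2L, 2L) distribution, whose CDF is exactly the regularized incomplete beta. The look number and the observed ratio below are illustrative, and the paper's precise CFAR statistic may differ in its constants.

```python
from scipy.special import betainc

L = 4          # number of looks (assumed)
r = 2.3        # observed post/pre intensity ratio at a pixel (illustrative)
d1 = d2 = 2 * L

def f_cdf_via_beta(x, d1, d2):
    """F-distribution CDF written in terms of the regularized incomplete beta."""
    return betainc(d1 / 2.0, d2 / 2.0, d1 * x / (d1 * x + d2))

# small p -> the ratio is implausibly large under "no change" -> flag as changed
p_change = 1.0 - f_cdf_via_beta(r, d1, d2)
```

Because `betainc` is available in scientific libraries, the test reduces to a single function evaluation per pixel, which is the practical point the abstract makes about MATLAB and IDL.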

  17. Quantum vacuum emission from a refractive-index front

    NASA Astrophysics Data System (ADS)

    Jacquet, Maxime; König, Friedrich

    2015-08-01

    A moving boundary separating two otherwise homogeneous regions of a dielectric is known to emit radiation from the quantum vacuum. An analytical framework based on the Hopfield model, describing a moving refractive-index step in 1+1 dimensions for realistic dispersive media, has been developed by S. Finazzi and I. Carusotto [Phys. Rev. A 87, 023803 (2013)], 10.1103/PhysRevA.87.023803. We expand the use of this model to calculate explicitly the spectra of all modes of positive and negative norm. Furthermore, for lower step heights we obtain a unique set of mode configurations encompassing black-hole and white-hole setups. This leads to a realistic emission spectrum featuring black-hole and white-hole emission for different frequencies. We also present spectra as measured in the laboratory frame that include all modes, in particular a dominant negative-norm mode, which is the partner mode in any Hawking-type emission. We find that the emission spectrum is highly structured into intervals of emission with black-hole, white-hole, and no horizons. Finally, we estimate the number of photons emitted as a function of the step height and find a power law with exponent 2.5 for low step heights.

  18. New numerical approach for the modelling of machining applied to aeronautical structural parts

    NASA Astrophysics Data System (ADS)

    Rambaud, Pierrick; Mocellin, Katia

    2018-05-01

    The manufacturing of aluminium alloy structural aerospace parts involves several steps: forming (rolling, forging, etc.), heat treatments and machining. Before machining, the manufacturing processes have embedded residual stresses into the workpiece. The final geometry is obtained during this last step, when up to 90% of the raw material volume is removed by machining. During this operation, the mechanical equilibrium of the part is in constant evolution due to the redistribution of the initial stresses. This redistribution is the main cause of workpiece deflections during machining and of distortions after unclamping. Both may lead to non-conformity of the part regarding the geometrical and dimensional specifications and therefore to rejection of the part or additional conforming steps. In order to improve the machining accuracy and the robustness of the process, the effect of the residual stresses has to be considered in the definition of the machining process plan and even in the geometrical definition of the part. In this paper, the authors present two new numerical approaches concerning the modelling of machining of aeronautical structural parts. The first deals with the use of an immersed volume framework to model the cutting step, improving the robustness and the quality of the resulting mesh compared to the previous version. The second concerns the mechanical modelling of the machining problem. The authors thus show that, in the framework of rolled aluminium parts, the use of a linear elasticity model is functional in the finite element formulation and promising regarding the reduction of computation times.

  19. Functional reasoning in diagnostic problem solving

    NASA Technical Reports Server (NTRS)

    Sticklen, Jon; Bond, W. E.; Stclair, D. C.

    1988-01-01

    This work is one facet of an integrated approach to diagnostic problem solving for aircraft and space systems currently under development. The authors are applying a method of modeling and reasoning about deep knowledge based on a functional viewpoint. The approach recognizes a level of device understanding which is intermediate between the compiled level of typical Expert Systems and a deep level at which large-scale device behavior is derived from known properties of device structure and component behavior. At this intermediate functional level, a device is modeled in three steps. First, a component decomposition of the device is defined. Second, the functionality of each device/subdevice is abstractly identified. Third, the state sequences which implement each function are specified. Given a functional representation and a set of initial conditions, the functional reasoner acts as a consequence finder. The output of the consequence finder can be utilized in diagnostic problem solving. The paper also discusses ways in which this functional approach may find application in the aerospace field.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Le, Hai P.; Cambier, Jean -Luc

    Here, we present a numerical model and a set of conservative algorithms for Non-Maxwellian plasma kinetics with inelastic collisions. These algorithms self-consistently solve for the time evolution of an isotropic electron energy distribution function interacting with an atomic state distribution function of an arbitrary number of levels through collisional excitation, deexcitation, as well as ionization and recombination. Electron-electron collisions, responsible for thermalization of the electron distribution, are also included in the model. The proposed algorithms guarantee mass/charge and energy conservation in a single step, and are applied to the case of non-uniform gridding of the energy axis in the phase space of the electron distribution function. Numerical test cases are shown to demonstrate the accuracy of the method and its conservation properties.
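    The single-step conservation idea can be sketched for one excitation channel: electrons that excite an atom across a threshold E_th are moved down in energy by exactly E_th in the same update, so particle number and total energy are conserved identically. The grid, populations, and transfer amount below are illustrative; on a general non-uniform grid the shifted electrons would additionally be split between adjacent bins, which this sketch avoids by construction.

```python
import numpy as np

E_th = 2.0                           # excitation threshold of the transition (eV)
e_grid = np.array([1.0, 3.0, 5.0])   # electron energy bins (eV); chosen so 5 - E_th lands on a bin
f_e = np.array([4.0, 3.0, 2.0])      # electron distribution: particles per bin
atoms = np.array([10.0, 0.0])        # atomic state populations: ground, excited

dN = 0.5                             # electrons in the 5 eV bin that excite an atom this step
f_e_new, atoms_new = f_e.copy(), atoms.copy()
f_e_new[2] -= dN                     # electrons leave the 5 eV bin ...
f_e_new[1] += dN                     # ... and reappear at 5 - E_th = 3 eV
atoms_new += dN * np.array([-1.0, 1.0])   # one atom excited per colliding electron

def total_energy(f_e, atoms):
    """Kinetic energy of the electrons plus internal energy stored in excited atoms."""
    return f_e @ e_grid + atoms[1] * E_th
```

The kinetic energy lost by the electrons (dN x E_th) reappears as internal atomic energy, so the balance closes exactly rather than only to discretization error.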

  1. A systems level predictive model for global gene regulation of methanogenesis in a hydrogenotrophic methanogen

    PubMed Central

    Yoon, Sung Ho; Turkarslan, Serdar; Reiss, David J.; Pan, Min; Burn, June A.; Costa, Kyle C.; Lie, Thomas J.; Slagel, Joseph; Moritz, Robert L.; Hackett, Murray; Leigh, John A.; Baliga, Nitin S.

    2013-01-01

    Methanogens catalyze the critical methane-producing step (called methanogenesis) in the anaerobic decomposition of organic matter. Here, we present the first predictive model of global gene regulation of methanogenesis in a hydrogenotrophic methanogen, Methanococcus maripaludis. We generated a comprehensive list of genes (protein-coding and noncoding) for M. maripaludis through integrated analysis of the transcriptome structure and a newly constructed Peptide Atlas. The environment and gene-regulatory influence network (EGRIN) model of the strain was constructed from a compendium of transcriptome data that was collected over 58 different steady-state and time-course experiments that were performed in chemostats or batch cultures under a spectrum of environmental perturbations that modulated methanogenesis. Analyses of the EGRIN model have revealed novel components of methanogenesis that included at least three additional protein-coding genes of previously unknown function as well as one noncoding RNA. We discovered that at least five regulatory mechanisms act in a combinatorial scheme to intercoordinate key steps of methanogenesis with different processes such as motility, ATP biosynthesis, and carbon assimilation. Through a combination of genetic and environmental perturbation experiments we have validated the EGRIN-predicted role of two novel transcription factors in the regulation of phosphate-dependent repression of formate dehydrogenase—a key enzyme in the methanogenesis pathway. The EGRIN model demonstrates regulatory affiliations within methanogenesis as well as between methanogenesis and other cellular functions. PMID:24089473

  2. Simplification and analysis of models of calcium dynamics based on IP3-sensitive calcium channel kinetics.

    PubMed

    Tang, Y; Stephenson, J L; Othmer, H G

    1996-01-01

    We study the models for calcium (Ca) dynamics developed in earlier studies, in each of which the key component is the kinetics of intracellular inositol-1,4,5-trisphosphate-sensitive Ca channels. After rapidly equilibrating steps are eliminated, the channel kinetics in these models are represented by a single differential equation that is linear in the state of the channel. In the reduced kinetic model, the graph of the steady-state fraction of conducting channels as a function of log10(Ca) is a bell-shaped curve. Dynamically, a step increase in inositol-1,4,5-trisphosphate induces an incremental increase in the fraction of conducting channels, whereas a step increase in Ca can either potentiate or inhibit channel activation, depending on the Ca level before and after the increase. The relationships among these models are discussed, and experimental tests to distinguish between them are given. Under certain conditions the models for intracellular calcium dynamics are reduced to the singular perturbed form ε dx/dτ = f(x, y, p), dy/dτ = g(x, y, p). Phase-plane analysis is applied to a generic form of these simplified models to show how different types of Ca response, such as excitability, oscillations, and a sustained elevation of Ca, can arise. The generic model can also be used to study frequency encoding of hormonal stimuli, to determine the conditions for stable traveling Ca waves, and to understand the effect of channel properties on the wave speed.
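    The singular perturbed form can be illustrated with a generic fast-slow excitable system. The f and g below are FitzHugh-Nagumo-type stand-ins, not the reviewed calcium-channel kinetics: a sub-threshold perturbation decays back to rest, while a supra-threshold one triggers a large excursion, which is the excitability behavior the abstract describes.

```python
import numpy as np

# Generic fast-slow system in the singular-perturbed form
#   eps * dx/dtau = f(x, y),   dy/dtau = g(x, y).
# Illustrative FitzHugh-Nagumo-type nullclines; rest state near (-1.20, -0.62).
eps = 0.05

def f(x, y):
    return x - x**3 / 3.0 - y        # fast variable (channel-like)

def g(x, y):
    return x + 0.7 - 0.8 * y         # slow recovery variable

def simulate(x0, y0=-0.624, dt=1e-3, t_end=30.0):
    """Forward-Euler trajectory of x from a perturbed initial state."""
    x, y = x0, y0
    xs = []
    for _ in range(int(t_end / dt)):
        x, y = x + dt * f(x, y) / eps, y + dt * g(x, y)
        xs.append(x)
    return np.array(xs)

sub = simulate(-1.0)    # small displacement from rest: decays back
supra = simulate(0.0)   # larger displacement: full excursion (a "Ca spike")
```

Moving the slow nullcline (the analogue of varying the parameter p) converts this excitable regime into oscillations or a sustained elevation, as in the phase-plane analysis above.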

  3. Study on Capturing Functional Requirements of the New Product Based on Evolution

    NASA Astrophysics Data System (ADS)

    Liu, Fang; Song, Liya; Bai, Zhonghang; Zhang, Peng

    In order to exist in an increasingly competitive global marketplace, it is important for corporations to forecast the evolutionary direction of new products rapidly and effectively. Most products in the world are developed based on the design of existing products. In the product design, capturing functional requirements is a key step. Function is continuously evolving, which is driven by the evolution of needs and technologies. So the functional requirements of new product can be forecasted based on the functions of existing product. Eight laws of function evolution are put forward in this paper. The process model of capturing the functional requirements of new product based on function evolution is proposed. An example illustrates the design process.

  4. Gummel Symmetry Test on charge based drain current expression using modified first-order hyperbolic velocity-field expression

    NASA Astrophysics Data System (ADS)

    Singh, Kirmender; Bhattacharyya, A. B.

    2017-03-01

    The Gummel Symmetry Test (GST) has been a benchmark industry standard for MOSFET models and is considered one of the important tests by the modeling community. The BSIM4 MOSFET model fails the GST because its drain current equation is not symmetrical: drain and source potentials are not referenced to bulk. The BSIM6 MOSFET model overcomes this limitation by taking all terminal biases with reference to bulk and using a proper velocity saturation (v-E) model. The drain current equation in BSIM6 is charge based and continuous in all regions of operation; it, however, adopts a complicated method to compute the source and drain charges. In this work we propose to use the conventional charge-based method formulated by Enz to obtain a simpler analytical drain current expression that passes the GST. For this purpose we adopt two steps: (i) in the first step we use a modified first-order hyperbolic v-E model with adjustable coefficients, which is integrable, simple and accurate; and (ii) in the second step we use a multiplying factor in the modified first-order hyperbolic v-E expression to obtain the correct monotonic asymptotic behavior around the origin of the lateral electric field. This factor is of empirical form and is a function of the drain voltage (vd) and source voltage (vs). After both of these steps we obtain a drain current expression whose accuracy is similar to that obtained from a second-order hyperbolic v-E model. If vd and vs in the modified first-order hyperbolic v-E expression are replaced by smoothing functions for the effective drain voltage (vdeff) and effective source voltage (vseff), the expression also handles the discontinuity between the linear and saturation regions of operation. The condition of symmetry is shown to be satisfied by the drain current and its higher-order derivatives, as both are odd functions whose even-order derivatives smoothly pass through the origin. In the strong-inversion region at the 22 nm technology node the GST is shown to pass up to the sixth-order derivative, and in weak inversion up to the fifth-order derivative. The drain current expression takes the major short-channel phenomena into consideration: vertical-field mobility reduction, velocity saturation and velocity overshoot.
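    What the GST checks can be sketched with a toy bulk-referenced current written as a difference of identical terminal charge functions; f below is made up for illustration, not a BSIM expression. Symmetry makes the current an odd function of the sweep voltage vx (vd = +vx/2, vs = -vx/2), so its even-order derivatives vanish at the origin.

```python
import numpy as np

# Toy bulk-referenced drain current: a difference of identical terminal
# charge functions. f() is hypothetical, not a BSIM equation.
def f(v):
    return v + 0.5 * v**2 + 0.1 * v**3

def i_d(vx):
    """Drain current for the GST sweep vd = +vx/2, vs = -vx/2."""
    return f(vx / 2.0) - f(-vx / 2.0)

# symmetry: i_d is an odd function of vx ...
odd_ok = np.isclose(i_d(-0.01), -i_d(0.01))
# ... so even-order derivatives vanish at the origin (central difference check)
h = 1e-3
second = (i_d(h) - 2.0 * i_d(0.0) + i_d(-h)) / h**2
```

A model that references vd and vs asymmetrically (as in the BSIM4 case described above) breaks this odd symmetry, and the even-order derivatives no longer pass smoothly through the origin.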

  5. Use of the challenge point framework to guide motor learning of stepping reactions for improved balance control in people with stroke: a case series.

    PubMed

    Pollock, Courtney L; Boyd, Lara A; Hunt, Michael A; Garland, S Jayne

    2014-04-01

    Stepping reactions are important for walking balance and community-level mobility. Stepping reactions of people with stroke are characterized by slow reaction times, poor coordination of motor responses, and low amplitude of movements, which may contribute to their decreased ability to recover their balance when challenged. An important aspect of rehabilitation of mobility after stroke is optimizing the motor learning associated with retraining effective stepping reactions. The Challenge Point Framework (CPF) is a model that can be used to promote motor learning through manipulation of conditions of practice to modify task difficulty, that is, the interaction of the skill of the learner and the difficulty of the task to be learned. This case series illustrates how the retraining of multidirectional stepping reactions may be informed by the CPF to improve balance function in people with stroke. Four people (53-68 years of age) with chronic stroke (>1 year) and mild to moderate motor recovery received 4 weeks of multidirectional stepping reaction retraining. Important tenets of motor learning were optimized for each person during retraining in accordance with the CPF. Participants demonstrated improved community-level walking balance, as determined with the Community Balance and Mobility Scale. These improvements were evident 1 year later. Aspects of balance-related self-efficacy and movement kinematics also showed improvements during the course of the intervention. The application of CPF motor learning principles in the retraining of stepping reactions to improve community-level walking balance in people with chronic stroke appears to be promising. The CPF provides a plausible theoretical framework for the progression of functional task training in neurorehabilitation.

  6. Universal analytical scattering form factor for shell-, core-shell, or homogeneous particles with continuously variable density profile shape.

    PubMed

    Foster, Tobias

    2011-09-01

    A novel analytical and continuous density distribution function with a widely variable shape is reported and used to derive an analytical scattering form factor that allows us to universally describe the scattering from particles with the radial density profile of homogeneous spheres, shells, or core-shell particles. Because the profile is composed of the sum of two Fermi-Dirac distribution functions, its shape can be altered continuously from step-like via Gaussian-like or parabolic to asymptotically hyperbolic by varying a single "shape parameter", d. Using this density profile, the scattering form factor can be calculated numerically. An analytical form factor can be derived using an approximate expression for the original Fermi-Dirac distribution function. This approximation is accurate for sufficiently small rescaled shape parameters, d/R (R being the particle radius), up to values of d/R ≈ 0.1, and thus captures step-like, Gaussian-like, and parabolic as well as asymptotically hyperbolic profile shapes. It is expected that this form factor is particularly useful in a model-dependent analysis of small-angle scattering data since the applied continuous and analytical function for the particle density profile can be compared directly with the density profile extracted from the data by model-free approaches like the generalized inverse Fourier transform method. © 2011 American Chemical Society
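    A shell profile built from two Fermi-Dirac terms can be sketched as below, with the shape parameter d steering the edges from step-like to smooth. The exact parameterization used in the paper is not reproduced here, so treat this as an illustrative variant.

```python
import numpy as np

def fermi_dirac(r, R, d):
    """Fermi-Dirac edge centered at radius R with softness d."""
    return 1.0 / (1.0 + np.exp((r - R) / d))

def shell_profile(r, R_in, R_out, d):
    """Shell density: rises near R_in, falls near R_out (illustrative variant)."""
    return fermi_dirac(r, R_out, d) - fermi_dirac(r, R_in, d)

r = np.linspace(0.0, 2.0, 2001)
sharp = shell_profile(r, 0.5, 1.0, 0.005)  # d/R small: nearly step-like edges
soft = shell_profile(r, 0.5, 1.0, 0.1)     # larger d: smooth, Gaussian-like edges
```

Letting R_in go to zero recovers a homogeneous sphere, and unequal softness at the two edges gives core-shell-like profiles, which is the continuity-of-shape point the abstract emphasizes.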

  7. Derivation of the expressions for γ50 and D50 for different individual TCP and NTCP models

    NASA Astrophysics Data System (ADS)

    Stavreva, N.; Stavrev, P.; Warkentin, B.; Fallone, B. G.

    2002-10-01

    This paper presents a complete set of formulae for the position (D50) and the normalized slope (γ50) of the dose-response relationship based on the most commonly used radiobiological models for tumours as well as for normal tissues. The functional subunit response models (critical element and critical volume) are used in the derivation of the formulae for the normal tissue. Binomial statistics are used to describe the tumour control probability, the functional subunit response as well as the normal tissue complication probability. The formulae are derived for the single hit and linear quadratic models of cell kill in terms of the number of fractions and dose per fraction. It is shown that the functional subunit models predict very steep, almost step-like, normal tissue individual dose-response relationships. Furthermore, the formulae for the normalized gradient depend on the cellular parameters α and β when written in terms of number of fractions, but not when written in terms of dose per fraction.
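    For the simplest Poisson single-hit model, TCP(D) = exp(-N0 exp(-aD)), the position and normalized slope defined above can be checked numerically against their closed forms. N0 and a below are illustrative values, not the paper's.

```python
import math

N0, a = 1.0e7, 0.3   # clonogen number and radiosensitivity (1/Gy), illustrative

def tcp(D):
    """Poisson single-hit tumour control probability."""
    return math.exp(-N0 * math.exp(-a * D))

# position D50: bisect TCP(D) = 0.5 (TCP is increasing in D)
lo, hi = 0.0, 200.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if tcp(mid) < 0.5 else (lo, mid)
D50 = 0.5 * (lo + hi)

# normalized slope gamma50 = D50 * dTCP/dD at D50, by central difference
h = 1e-4
gamma50 = D50 * (tcp(D50 + h) - tcp(D50 - h)) / (2.0 * h)

# closed forms for this model, for comparison
D50_exact = math.log(N0 / math.log(2.0)) / a
gamma50_exact = 0.5 * math.log(2.0) * math.log(N0 / math.log(2.0))
```

The same numeric route (root-find for the 50% position, differentiate for the slope) applies to the fractionated and functional-subunit response curves the paper treats, where closed forms are longer.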

  8. A first-principles examination of the asymmetric induction model in the binap/Rh(I)-catalysed 1,4-addition of phenylboronic acid to cyclic enones by density functional theory calculations.

    PubMed

    Qin, Hua-Li; Chen, Xiao-Qing; Huang, Yi-Zhen; Kantchev, Eric Assen B

    2014-09-26

    First-principles modelling of the diastereomeric transition states in the enantiodiscrimination stage of the catalytic cycle can reveal intimate details about the mechanism of enantioselection. This information can be invaluable for further improvement of the catalytic protocols by rational design. Herein, we present a density functional theory (IEFPCM/PBE0/DGDZVP level of theory) modelling of the carborhodation step for the asymmetric 1,4-arylation of cyclic α,β-unsaturated ketones mediated by a [(binap)Rh(I)] catalyst. The calculations completely support the older, qualitative, pictorial model predicting the sense of the asymmetric induction for both the chelating diphosphane (binap) and the more recent chiral diene (Phbod) ligands, while also permitting quantification of the enantiomeric excess (ee). The effect of dispersion interaction correction and basis sets has been also investigated. Dispersion-corrected functionals and solvation models significantly improve the predicted ee values. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. A return-to-sport algorithm for acute hamstring injuries.

    PubMed

    Mendiguchia, Jurdan; Brughelli, Matt

    2011-02-01

    Acute hamstring injuries are the most prevalent muscle injuries reported in sport. Despite a thorough and concentrated effort to prevent and rehabilitate hamstring injuries, injury occurrence and re-injury rates have not improved over the past 28 years. This failure is most likely due to the following: 1) an over-reliance on treating the symptoms of injury, such as subjective measures of "pain", with drugs and interventions; 2) the risk factors investigated for hamstring injuries have not been related to the actual movements that cause hamstring injuries, i.e. they are not functional; and 3) a multi-factorial approach to assessment and treatment has not been utilized. The purpose of this clinical commentary is to introduce a model for progression through a return-to-sport rehabilitation following an acute hamstring injury. This model is developed from objective and quantifiable tests (i.e. clinical and functional tests) that are structured into a step-by-step algorithm. In addition, each step in the algorithm includes a treatment protocol. These protocols are meant to help the athlete improve through each phase safely so that they can achieve the desired goals and progress through the algorithm and back to their chosen sport. We hope that this algorithm can serve as a foundation for future evidence-based research and aid in the development of new objective and quantifiable testing methods. Copyright © 2010 Elsevier Ltd. All rights reserved.

  10. 10 Steps to Building an Architecture for Space Surveillance Projects

    NASA Astrophysics Data System (ADS)

    Gyorko, E.; Barnhart, E.; Gans, H.

    Space surveillance is an increasingly complex task, requiring the coordination of a multitude of organizations and systems, while dealing with competing capabilities, proprietary processes, differing standards, and compliance issues. In order to fully understand space surveillance operations, analysts and engineers need to analyze and break down their operations and systems using what are essentially enterprise architecture processes and techniques. These techniques can be daunting to the first-time architect. This paper provides a summary of simplified steps to analyze a space surveillance system at the enterprise level in order to determine capabilities, services, and systems. These steps form the core of an initial Model-Based Architecting process. For new systems, a well-defined, or well-architected, space surveillance enterprise leads to an easier transition from model-based architecture to model-based design and provides a greater likelihood that requirements are fulfilled the first time. Both new and existing systems benefit from being easier to manage, and can be sustained more easily using portfolio management techniques based around capabilities documented in the model repository. The resulting enterprise model helps an architect avoid 1) costly, faulty portfolio decisions; 2) wasteful technology refresh efforts; 3) upgrade and transition nightmares; and 4) non-compliance with DoDAF directives. The Model-Based Architecting steps are based on a process that Harris Corporation has developed from practical experience architecting space surveillance systems and ground systems. Examples are drawn from current work on documenting space situational awareness enterprises. The process is centered on DoDAF 2 and its corresponding meta-model so that terminology is standardized and communicable across any disciplines that know DoDAF architecting, including acquisition, engineering and sustainment disciplines.
Each step provides a guideline for the type of data to collect, and also the appropriate views to generate. The steps include 1) determining the context of the enterprise, including active elements and high-level capabilities or goals; 2) determining the desired effects of the capabilities and mapping capabilities against the project plan; 3) determining operational performers and their inter-relationships; 4) building information and data dictionaries; 5) defining resources associated with capabilities; 6) determining the operational behavior necessary to achieve each capability; 7) analyzing existing or planned implementations to determine systems, services and software; 8) cross-referencing system behavior to operational behavior; 9) documenting system threads and functional implementations; and 10) creating any required textual documentation from the model.

  11. From neuro-functional to neuro-computational models. Comment on "The quartet theory of human emotions: An integrative and neurofunctional model" by S. Koelsch et al.

    NASA Astrophysics Data System (ADS)

    Briesemeister, Benny B.

    2015-06-01

    Historically, there has been a strong opposition between psychological theories of human emotion that suggest a limited number of distinct functional categories, such as anger, fear, happiness and so forth (e.g. [1]), and theories that suggest processing along affective dimensions, such as valence and arousal (e.g. [2]). Only few current models acknowledge that both of these perspectives seem to be legitimate [3], and at their core, even fewer models connect these insights with knowledge about neurophysiology [4]. In this regard, the Quartet Theory of Human Emotions (QTHE) [5] makes a very important and useful contribution to the field of emotion research - but in my opinion, there is still at least one more step to go.

  12. The morphing of geographical features by Fourier transformation.

    PubMed

    Li, Jingzhong; Liu, Pengcheng; Yu, Wenhao; Cheng, Xiaoqiang

    2018-01-01

    This paper presents a morphing model of vector geographical data based on Fourier transformation. This model involves three main steps. They are conversion from vector data to a Fourier series, generation of an intermediate function by combination of the two Fourier series concerning a large scale and a small scale, and reverse conversion from the combined function to vector data. By mirror processing, the model can also be used for morphing of linear features. Experimental results show that this method is sensitive to scale variations and it can be used for vector map features' continuous scale transformation. The efficiency of this model is linearly related to the point number of the shape boundary and the truncation order n of the Fourier expansion. The effect of morphing by Fourier transformation is plausible and the efficiency of the algorithm is acceptable.
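    The three steps can be sketched for closed boundaries sampled as complex points: FFT both boundaries, blend the two spectra, inverse-FFT back to coordinates. The shapes and the plain linear blend are illustrative choices; note that a linear blend of full spectra equals a pointwise blend of coordinates, so the method's real leverage comes from operating on (e.g. truncating) the coefficients, which this sketch leaves out.

```python
import numpy as np

def boundary(shape, n=256):
    """Closed boundary sampled as n complex points (illustrative shapes)."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    x, y = np.cos(t), np.sin(t)
    if shape == "circle":
        return x + 1j * y
    s = np.maximum(np.abs(x), np.abs(y))   # project the circle onto a square
    return (x + 1j * y) / s

def morph(a, b, alpha):
    """Step 1: FFT both boundaries; step 2: blend spectra; step 3: inverse FFT."""
    Fa, Fb = np.fft.fft(a), np.fft.fft(b)
    return np.fft.ifft((1.0 - alpha) * Fa + alpha * Fb)

circle, square = boundary("circle"), boundary("square")
halfway = morph(circle, square, 0.5)   # intermediate boundary between the two scales
```

Sweeping alpha from 0 to 1 gives the continuous scale transformation described in the abstract; alpha = 0 and alpha = 1 recover the two input boundaries exactly.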

  13. Study of the Bellman equation in a production model with unstable demand

    NASA Astrophysics Data System (ADS)

    Obrosova, N. K.; Shananin, A. A.

    2014-09-01

    A production model with allowance for a working capital deficit and a restricted maximum possible sales volume is proposed and analyzed. The study is motivated by the need to analyze the well-known problems of poorly competitive macroeconomic structures. The problem is originally formulated as an infinite-horizon optimal control problem; as a result, the model is formalized in the form of a Bellman equation. It is proved that the corresponding Bellman operator is a contraction and has a unique fixed point in the chosen class of functions. A closed-form solution of the Bellman equation is found using the method of steps. The influence of the credit interest rate on the assessment of the firm's market value is analyzed by applying the developed model.
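    The contraction argument can be illustrated on a generic discounted dynamic program (not the paper's production model): successive Bellman iterates approach the unique fixed point geometrically, with ratio at most the discount factor. All numbers below are invented for illustration.

```python
import numpy as np

beta = 0.9                                 # discount factor = contraction modulus
rewards = np.array([[1.0, 0.0],            # rewards[s, a]
                    [0.0, 2.0]])
P = np.array([[[0.8, 0.2], [0.1, 0.9]],    # P[s, a, s']: transition probabilities
              [[0.5, 0.5], [0.3, 0.7]]])

def bellman(v):
    """One application of the Bellman operator (maximize over actions)."""
    return np.max(rewards + beta * (P @ v), axis=1)

v = np.zeros(2)
gaps = []                                  # sup-norm distances between successive iterates
for _ in range(300):
    v_new = bellman(v)
    gaps.append(np.max(np.abs(v_new - v)))
    v = v_new
```

Because the operator is a beta-contraction in the sup norm, each gap is at most beta times the previous one; this is the same fixed-point argument the paper makes for its Bellman equation.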

  14. Sulfur Atoms Adsorbed on Cu(100) at Low Coverage: Characterization and Stability against Complexation

    DOE PAGES

    Walen, Holly; Liu, Da-Jiang; Oh, Junepyo; ...

    2017-08-22

    By using scanning tunneling microscopy, we characterize the size and bias-dependent shape of sulfur atoms on Cu(100) at low coverage (below 0.1 monolayers) and low temperature (quenched from 300 to 5 K). Sulfur atoms populate the Cu(100) terraces more heavily than steps at low coverage, but as coverage approaches 0.1 monolayers, close-packed step edges become fully populated, with sulfur atoms occupying sites on top of the step. Density functional theory (DFT) corroborates the preferential population of terraces at low coverage as well as the step adsorption site. In experiment, small regions with p(2 × 2)-like atomic arrangements emerge on the terraces as sulfur coverage approaches 0.1 monolayer. Using DFT, a lattice gas model has been developed, and Monte Carlo simulations based on this model have been compared with the observed terrace configurations. A model containing eight pairwise interaction energies, all repulsive, gives qualitative agreement. Experiment shows that atomic adsorbed sulfur is the only species on Cu(100) up to a coverage of 0.09 monolayers. There are no Cu–S complexes. Conversely, prior work has shown that a Cu2S3 complex forms on Cu(111) under comparable conditions. On the basis of DFT, this difference can be attributed mainly to stronger adsorption of sulfur on Cu(100) as compared with Cu(111).

  16. Load-dependent ADP binding to myosins V and VI: Implications for subunit coordination and function

    PubMed Central

    Oguchi, Yusuke; Mikhailenko, Sergey V.; Ohki, Takashi; Olivares, Adrian O.; De La Cruz, Enrique M.; Ishiwata, Shin'ichi

    2008-01-01

    Dimeric myosins V and VI travel long distances in opposite directions along actin filaments in cells, taking multiple steps in a “hand-over-hand” fashion. The catalytic cycles of both myosins are limited by ADP dissociation, which is considered a key step in the walking mechanism of these motors. Here, we demonstrate that external loads applied to individual actomyosin V or VI bonds asymmetrically affect ADP affinity, such that ADP binds more weakly under loads assisting motility. Model-based analysis reveals that forward and backward loads modulate the kinetics of ADP binding to both myosins, although the effect is less pronounced for myosin VI. ADP dissociation is modestly accelerated by forward loads and inhibited by backward loads. Loads applied in either direction slow ADP binding to myosin V but accelerate binding to myosin VI. We calculate that the intramolecular load generated during processive stepping is ≈2 pN for both myosin V and myosin VI. The distinct load dependence of ADP binding allows these motors to perform different cellular functions. PMID:18509050
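    The load asymmetry described above is commonly summarized by a Bell-type exponential rate law; the sketch below uses that standard form with illustrative parameter values, not the measured myosin numbers.

```python
import math

kBT = 4.11    # thermal energy at room temperature, pN*nm
k0 = 30.0     # unloaded ADP dissociation rate, 1/s (illustrative)
d = 1.0       # distance parameter, nm (illustrative)

def k_adp(F):
    """ADP release rate under load F in pN (F > 0 backward/resisting, F < 0 assisting)."""
    return k0 * math.exp(-F * d / kBT)

resisting = k_adp(+2.0)   # backward load inhibits ADP release
assisting = k_adp(-2.0)   # forward (assisting) load accelerates it
```

Plugging in the ≈2 pN intramolecular load from the abstract shows how the trailing and leading heads can see substantially different ADP release rates, the gating that coordinates hand-over-hand stepping.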

  17. An Open-Source Auto-Calibration Routine Supporting the Stormwater Management Model

    NASA Astrophysics Data System (ADS)

    Tiernan, E. D.; Hodges, B. R.

    2017-12-01

    The stormwater management model (SWMM) is a clustered model that relies on subcatchment-averaged parameter assignments to correctly capture catchment stormwater runoff behavior. Model calibration is considered a critical step for SWMM performance, an arduous task that most stormwater management designers undertake manually. This research presents an open-source, automated calibration routine that increases the efficiency and accuracy of the model calibration process. The routine makes use of a preliminary sensitivity analysis to reduce the dimensions of the parameter space, at which point a multi-objective genetic algorithm (a modified Non-dominated Sorting Genetic Algorithm II) determines the Pareto front for the objective functions within the parameter space. The solutions on this Pareto front represent the optimized parameter value sets for the catchment behavior that could not have been reasonably obtained through manual calibration.
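
    The core of any NSGA-II-style routine is the non-dominated comparison used to build the Pareto front. A minimal sketch of that comparison for minimization objectives (function names are illustrative, not from the cited routine):

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (minimization convention)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset (the first NSGA-II front)."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]
```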

  18. Computer-Aided Design of RNA Origami Structures.

    PubMed

    Sparvath, Steffen L; Geary, Cody W; Andersen, Ebbe S

    2017-01-01

    RNA nanostructures can be used as scaffolds to organize, combine, and control molecular functionalities, with great potential for applications in nanomedicine and synthetic biology. The single-stranded RNA origami method allows RNA nanostructures to be folded as they are transcribed by the RNA polymerase. RNA origami structures provide a stable framework that can be decorated with functional RNA elements such as riboswitches, ribozymes, interaction sites, and aptamers for binding small molecules or protein targets. The rich library of RNA structural and functional elements combined with the possibility to attach proteins through aptamer-based binding creates virtually limitless possibilities for constructing advanced RNA-based nanodevices. In this chapter we provide a detailed protocol for the single-stranded RNA origami design method using a simple 2-helix tall structure as an example. The first step involves 3D modeling of a double-crossover between two RNA double helices, followed by decoration with tertiary motifs. The second step deals with the construction of a 2D blueprint describing the secondary structure and sequence constraints that serves as the input for computer programs. In the third step, computer programs are used to design RNA sequences that are compatible with the structure, and the resulting outputs are evaluated and converted into DNA sequences to order.
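
    The final conversion of a designed RNA sequence into an orderable DNA sequence amounts to replacing uracil with thymine on the coding strand. A trivial sketch of that step (promoter handling and any vendor-specific formatting are deliberately omitted; the function name is hypothetical):

```python
def rna_to_dna_template(rna_seq):
    """Convert a designed RNA sequence to the DNA coding strand (U -> T).
    The transcription promoter that must precede the sequence is not added."""
    return rna_seq.upper().replace("U", "T")
```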

  19. Polychromatic sparse image reconstruction and mass attenuation spectrum estimation via B-spline basis function expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gu, Renliang, E-mail: Venliang@iastate.edu, E-mail: ald@iastate.edu; Dogandžić, Aleksandar, E-mail: Venliang@iastate.edu, E-mail: ald@iastate.edu

    2015-03-31

    We develop a sparse image reconstruction method for polychromatic computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. To obtain a parsimonious measurement model parameterization, we first rewrite the measurement equation using our mass-attenuation parameterization, which has the Laplace integral form. The unknown mass-attenuation spectrum is expanded into basis functions using a B-spline basis of order one. We develop a block coordinate-descent algorithm for constrained minimization of a penalized negative log-likelihood function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and sparsity of the density map image in the wavelet domain. This algorithm alternates between a Nesterov’s proximal-gradient step for estimating the density map image and an active-set step for estimating the incident spectrum parameters. Numerical simulations demonstrate the performance of the proposed scheme.
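
    With a nonnegativity constraint, the proximal operator reduces to projection onto the nonnegative orthant, so a single (non-accelerated) proximal-gradient step is just a gradient step followed by clipping at zero. A minimal sketch of that idea, not of the paper's full Nesterov-accelerated algorithm (names hypothetical):

```python
def prox_grad_step(x, grad, step):
    """One proximal-gradient step under a nonnegativity constraint:
    gradient descent followed by projection onto x >= 0."""
    return [max(0.0, xi - step * gi) for xi, gi in zip(x, grad)]
```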

  20. Assessing Forelimb Function after Unilateral Cervical SCI using Novel Tasks: Limb Step-alternation, Postural Instability and Pasta Handling

    PubMed Central

    Schallert, Timothy; Schmidt, Christine E.

    2013-01-01

    Cervical spinal cord injury (cSCI) can cause devastating neurological deficits, including impairment or loss of upper limb and hand function. A majority of the spinal cord injuries in humans occur at the cervical levels. Therefore, developing cervical injury models and developing relevant and sensitive behavioral tests is of great importance. Here we describe the use of a newly developed forelimb step-alternation test after cervical spinal cord injury in rats. In addition, we describe two behavioral tests that have not been used after spinal cord injury: a postural instability test (PIT), and a pasta-handling test. All three behavioral tests are highly sensitive to injury and are easy to use. Therefore, we feel that these behavioral tests can be instrumental in investigating therapeutic strategies after cSCI. PMID:24084700

  1. Assessing forelimb function after unilateral cervical SCI using novel tasks: limb step-alternation, postural instability and pasta handling.

    PubMed

    Khaing, Zin Z; Geissler, Sydney A; Schallert, Timothy; Schmidt, Christine E

    2013-09-16

    Cervical spinal cord injury (cSCI) can cause devastating neurological deficits, including impairment or loss of upper limb and hand function. A majority of the spinal cord injuries in humans occur at the cervical levels. Therefore, developing cervical injury models and developing relevant and sensitive behavioral tests is of great importance. Here we describe the use of a newly developed forelimb step-alternation test after cervical spinal cord injury in rats. In addition, we describe two behavioral tests that have not been used after spinal cord injury: a postural instability test (PIT), and a pasta-handling test. All three behavioral tests are highly sensitive to injury and are easy to use. Therefore, we feel that these behavioral tests can be instrumental in investigating therapeutic strategies after cSCI.

  2. Sensitivity analysis of navy aviation readiness based sparing model

    DTIC Science & Technology

    2017-09-01

    variability. (See Figure 4.) Figure 4 lays out the four steps of the methodology, starting in the upper left-hand...as a function of changes in key inputs. We develop NAVARM Experimental Designs (NED), a computational tool created by applying a state-of-the-art...experimental design to the NAVARM model. Statistical analysis of the resulting data identifies the most influential cost factors. Those are, in order of

  3. Predictions of the residue cross-sections for the elements Z = 113 and Z = 114

    NASA Astrophysics Data System (ADS)

    Bouriquet, B.; Abe, Y.; Kosenko, G.

    2004-10-01

    A good reproduction of experimental excitation functions is obtained for the 1n reactions producing the elements with Z = 108, 110, 111 and 112 by the combined usage of the two-step model for fusion and the statistical decay code KEWPIE. Furthermore, the model provides reliable predictions of the production of the elements with Z = 113 and Z = 114, which will be a useful guide for the planning of experiments.

  4. Inclusion Assistants in General Education Settings--A Model for In-Service Training

    ERIC Educational Resources Information Center

    Moshe, Anat

    2017-01-01

    The inclusion assistant (IA) is a fairly new position in the education system and is the outcome of current ideological and legislative steps to include students with special needs into the general educational system. The IA's function is to personally accompany students with severe disabilities--autism, developmental disabilities, physical…

  5. Strategy Execution in Cognitive Skill Learning: An Item-Level Test of Candidate Models

    ERIC Educational Resources Information Center

    Rickard, Timothy C.

    2004-01-01

    This article investigates the transition to memory-based performance that commonly occurs with practice on tasks that initially require use of a multistep algorithm. In an alphabet arithmetic task, item response times exhibited pronounced step-function decreases after moderate practice that were uniquely predicted by T. C. Rickard's (1997)…

  6. Assessing the Usefulness of the Decision Framework for Identifying and Selecting Knowledge Management Projects

    DTIC Science & Technology

    2005-03-01

    team-wide accountability and rewards Functional focus Group accountability and rewards Employee-owner interest conflicts Process focus Lack of...Collaborative and cross-functional work Incompatible IT Need to share Compartmentalization of functional groups Localized decision making Centralized...Steps are: • Step 1: Analyze Corporate Strategic Objectives Using SWOT (Strengths, Weaknesses, Opportunities, Threats) Methodology • Step 2

  7. Assessment of PDF Micromixing Models Using DNS Data for a Two-Step Reaction

    NASA Astrophysics Data System (ADS)

    Tsai, Kuochen; Chakrabarti, Mitali; Fox, Rodney O.; Hill, James C.

    1996-11-01

    Although the probability density function (PDF) method is known to treat the chemical reaction terms exactly, its application to turbulent reacting flows has been hampered by the difficulty of modeling the molecular mixing terms satisfactorily. In this study, two PDF molecular mixing models, the linear-mean-square-estimation (LMSE or IEM) model and the generalized interaction-by-exchange-with-the-mean (GIEM) model, are compared with the DNS data in decaying turbulence with a two-step parallel-consecutive reaction and two segregated initial conditions: ``slabs" and ``blobs". Since the molecular mixing model is expected to have a strong effect on the mean values of chemical species under such initial conditions, the model evaluation is intended to answer the following questions: (1) Can the PDF models predict the mean values of chemical species correctly with completely segregated initial conditions? (2) Is a single molecular mixing timescale sufficient for the PDF models to predict the mean values with different initial conditions? (3) Will the chemical reactions change the molecular mixing timescales of the reacting species enough to affect the accuracy of the model's prediction for the mean values of chemical species?
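
    The IEM/LMSE model relaxes each notional particle's scalar toward the ensemble mean at a rate set by the mixing timescale. A one-line explicit-Euler sketch of the standard form (the 1/(2τ) rate convention is one common choice; names are illustrative):

```python
def iem_step(phis, dt, tau):
    """One explicit-Euler IEM/LMSE update: d(phi)/dt = -(phi - <phi>)/(2*tau).
    The ensemble mean is conserved; the variance decays."""
    mean = sum(phis) / len(phis)
    return [p - dt / (2.0 * tau) * (p - mean) for p in phis]
```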

  8. A nursing-specific model of EPR documentation: organizational and professional requirements.

    PubMed

    von Krogh, Gunn; Nåden, Dagfinn

    2008-01-01

    To present the Norwegian documentation KPO model (quality assurance, problem solving, and caring). To present the requirements and multiple electronic patient record (EPR) functions the model is designed to address. The model's professional substance, a conceptual framework for nursing practice, is developed by examining, reorganizing, and completing existing frameworks. The model's methodology, an information management system, is developed using an expert group. Both model elements were clinically tested over a period of 1 year. The model is designed for nursing documentation in step with statutory, organizational, and professional requirements. Complete documentation is arranged for by incorporating the Nursing Minimum Data Set. A systematic and comprehensive documentation is arranged for by establishing categories as provided in the model's framework domains. Consistent documentation is arranged for by incorporating NANDA-I Nursing Diagnoses, Nursing Intervention Classification, and Nursing Outcome Classification. The model can be used as a tool in cooperation with vendors to ensure the interests of the nursing profession are met when developing EPR solutions in healthcare. The model can provide clinicians with a framework for documentation in step with legal and organizational requirements and at the same time retain the ability to record all aspects of clinical nursing.

  9. Density functional theory study on carbon dioxide absorption into aqueous solutions of 2-amino-2-methyl-1-propanol using a continuum solvation model.

    PubMed

    Yamada, Hidetaka; Matsuzaki, Yoichi; Higashii, Takayuki; Kazama, Shingo

    2011-04-14

    We used density functional theory (DFT) calculations with the latest continuum solvation model (SMD/IEF-PCM) to determine the mechanism of CO(2) absorption into aqueous solutions of 2-amino-2-methyl-1-propanol (AMP). Possible absorption process reactions were investigated by transition-state optimization and intrinsic reaction coordinate (IRC) calculations in the aqueous solution at the SMD/IEF-PCM/B3LYP/6-31G(d) and SMD/IEF-PCM/B3LYP/6-311++G(d,p) levels of theory to determine the absorption pathways. We show that the carbamate anion forms by a two-step reaction via a zwitterion intermediate, and this occurs faster than the formation of the bicarbonate anion. However, we also predict that the carbamate readily decomposes by a reverse reaction rather than by hydrolysis. As a result, the final product is dominated by the thermodynamically stable bicarbonate anion that forms from AMP, H(2)O, and CO(2) in a single-step termolecular reaction.

  10. Clipping in neurocontrol by adaptive dynamic programming.

    PubMed

    Fairbank, Michael; Prokhorov, Danil; Alonso, Eduardo

    2014-10-01

    In adaptive dynamic programming, neurocontrol, and reinforcement learning, the objective is for an agent to learn to choose actions so as to minimize a total cost function. In this paper, we show that when discretized time is used to model the motion of the agent, it can be very important to do clipping on the motion of the agent in the final time step of the trajectory. By clipping, we mean that the final time step of the trajectory is to be truncated such that the agent stops exactly at the first terminal state reached, and no distance further. We demonstrate that when clipping is omitted, learning performance can fail to reach the optimum, and when clipping is done properly, learning performance can improve significantly. The clipping problem we describe affects algorithms that use explicit derivatives of the model functions of the environment to calculate a learning gradient. These include backpropagation through time for control and methods based on dual heuristic programming. However, the clipping problem does not significantly affect methods based on heuristic dynamic programming, temporal-difference learning, or policy-gradient learning algorithms.
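
    In one dimension, the clipping idea reduces to truncating the final step at the first terminal state crossed. A minimal sketch of that truncation (a 1-D illustration with hypothetical names, not the paper's implementation):

```python
def clip_final_step(x, v, dt, x_goal):
    """Truncate the final time step so the trajectory stops exactly at the
    terminal state x_goal instead of overshooting it (1-D illustration).
    Returns the clipped position and the (possibly shortened) step duration."""
    x_next = x + v * dt
    crossed = v != 0.0 and (x_goal - x) * (x_goal - x_next) <= 0.0
    if crossed:
        dt_clipped = (x_goal - x) / v  # partial step that lands on the boundary
        return x_goal, dt_clipped
    return x_next, dt
```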

  11. Finger Vein Segmentation from Infrared Images Based on a Modified Separable Mumford-Shah Model and Local Entropy Thresholding

    PubMed Central

    Dermatas, Evangelos

    2015-01-01

    A novel method for finger vein pattern extraction from infrared images is presented. This method involves four steps: preprocessing, which performs local normalization of the image intensity; image enhancement; image segmentation; and finally postprocessing for image cleaning. In the image enhancement step, an image which is both smooth and similar to the original is sought. The enhanced image is obtained by minimizing the objective function of a modified separable Mumford-Shah model. Since this minimization procedure is computationally intensive for large images, a local application of the Mumford-Shah model in small window neighborhoods is proposed. The finger veins are located in concave nonsmooth regions, so, to distinguish them from the other tissue parts, all the differences between the smooth neighborhoods, obtained by the local application of the model, and the corresponding windows of the original image are added. After this step, the veins in the enhanced image are sufficiently emphasized, and an accurate segmentation can be obtained readily by a local entropy thresholding method. Finally, the resulting binary image may suffer from some misclassifications, so a postprocessing step is performed to extract a robust finger vein pattern. PMID:26120357

  12. Development of numerical model for predicting heat generation and temperatures in MSW landfills.

    PubMed

    Hanson, James L; Yeşiller, Nazli; Onnen, Michael T; Liu, Wei-Lien; Oettle, Nicolas K; Marinos, Janelle A

    2013-10-01

    A numerical modeling approach has been developed for predicting temperatures in municipal solid waste landfills. Model formulation and details of boundary conditions are described. Model performance was evaluated using field data from a landfill in Michigan, USA. The numerical approach was based on finite element analysis incorporating transient conductive heat transfer. Heat generation functions representing decomposition of wastes were empirically developed and incorporated into the formulation. Thermal properties of materials were determined using experimental testing, field observations, and data reported in literature. The boundary conditions consisted of seasonal temperature cycles at the ground surface and constant temperatures at the far-field boundary. Heat generation functions were developed sequentially using varying degrees of conceptual complexity in modeling. First, a step function was developed to represent initial (aerobic) and residual (anaerobic) conditions. Second, an exponential growth-decay function was established. Third, the function was scaled for temperature dependency. Finally, an energy-expended function was developed to simulate heat generation with waste age as a function of temperature. Results are presented and compared to field data for the temperature-dependent growth-decay functions. The formulations developed can be used for prediction of temperatures within various components of landfill systems (liner, waste mass, cover, and surrounding subgrade), determination of frost depths, and determination of heat gain due to decomposition of wastes. Copyright © 2013 Elsevier Ltd. All rights reserved.
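
    The first and simplest of the heat-generation functions described, a step between an initial aerobic rate and a residual anaerobic rate, can be written directly. All numerical values below are illustrative placeholders, not the paper's fitted parameters:

```python
def heat_generation_step(t_days, t_switch=60.0, q_aerobic=8.0, q_anaerobic=2.0):
    """Step-function heat generation rate (illustrative units of W/m^3):
    a higher initial aerobic value, then a lower residual anaerobic value."""
    return q_aerobic if t_days < t_switch else q_anaerobic
```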

  13. Thermodynamics, kinetics, and catalytic effect of dehydrogenation from MgH2 stepped surfaces and nanocluster: a DFT study

    NASA Astrophysics Data System (ADS)

    Reich, Jason; Wang, Linlin; Johnson, Duane

    2013-03-01

    We detail the results of a Density Functional Theory (DFT) based study of hydrogen desorption, including thermodynamics and kinetics with(out) catalytic dopants, on stepped (110) rutile and nanocluster MgH2. We investigate competing configurations (optimal surface and nanoparticle configurations) using simulated annealing with additional converged results at 0 K, necessary for finding the low-energy, doped MgH2 nanostructures. Thermodynamics of hydrogen desorption from unique dopant sites will be shown, as well as activation energies using the Nudged Elastic Band algorithm. To compare to experiment, both stepped structures and nanoclusters are required to understand and predict the effects of ball milling. We demonstrate how these model systems relate to the intermediary sized structures typically seen in ball milling experiments.

  14. Stability diagrams for the surface patterns of GaN(0001̄) as a function of Schwoebel barrier height

    NASA Astrophysics Data System (ADS)

    Krzyżewski, Filip; Załuska-Kotur, Magdalena A.

    2017-01-01

    The height and type (direct or inverse) of Schwoebel barriers determine the character of the surface instability. Different surface morphologies are presented. Step bunches, double steps, meanders, mounds and irregular patterns emerge at the surface as a result of step (Schwoebel) barriers at certain temperature or miscut values. The study was carried out with a two-component kinetic Monte Carlo (kMC) model of the GaN(0001̄) surface grown in nitrogen-rich conditions. Diffusion of gallium adatoms over the N-polar surface is slow and nitrogen adatoms are almost immobile. We show that in such conditions surfaces remain smooth when gallium adatoms diffuse in the presence of a low inverse Schwoebel barrier. This is illustrated by stability diagrams for the surface morphologies.

  15. A parallel algorithm for step- and chain-growth polymerization in molecular dynamics.

    PubMed

    de Buyl, Pierre; Nies, Erik

    2015-04-07

    Classical Molecular Dynamics (MD) simulations provide insight into the properties of many soft-matter systems. In some situations, it is interesting to model the creation of chemical bonds, a process that is not part of the MD framework. In this context, we propose a parallel algorithm for step- and chain-growth polymerization that is based on a generic reaction scheme, works at a given intrinsic rate and produces continuous trajectories. We present an implementation in the ESPResSo++ simulation software and compare it with the corresponding feature in LAMMPS. For chain growth, our results are compared to the existing simulation literature. For step growth, a rate equation is proposed for the evolution of the crosslinker population that compares well to the simulations for low crosslinker functionality or for short times.
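
    As a hedged illustration of a crosslinker-population rate equation (a generic first-order form, not necessarily the specific equation proposed in the paper): if unreacted crosslinkers are consumed at a constant intrinsic rate, the population decays exponentially.

```python
import math

def crosslinker_population(c0, rate, t):
    """First-order consumption of unreacted crosslinkers at intrinsic rate
    `rate`: c(t) = c0 * exp(-rate * t). A simplification that is most
    plausible at short times or low crosslinker functionality."""
    return c0 * math.exp(-rate * t)
```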

  16. A parallel algorithm for step- and chain-growth polymerization in molecular dynamics

    NASA Astrophysics Data System (ADS)

    de Buyl, Pierre; Nies, Erik

    2015-04-01

    Classical Molecular Dynamics (MD) simulations provide insight into the properties of many soft-matter systems. In some situations, it is interesting to model the creation of chemical bonds, a process that is not part of the MD framework. In this context, we propose a parallel algorithm for step- and chain-growth polymerization that is based on a generic reaction scheme, works at a given intrinsic rate and produces continuous trajectories. We present an implementation in the ESPResSo++ simulation software and compare it with the corresponding feature in LAMMPS. For chain growth, our results are compared to the existing simulation literature. For step growth, a rate equation is proposed for the evolution of the crosslinker population that compares well to the simulations for low crosslinker functionality or for short times.

  17. Automatic Rooftop Extraction in Stereo Imagery Using Distance and Building Shape Regularized Level Set Evolution

    NASA Astrophysics Data System (ADS)

    Tian, J.; Krauß, T.; d'Angelo, P.

    2017-05-01

    Automatic rooftop extraction is one of the most challenging problems in remote sensing image analysis. Classical 2D image processing techniques are expensive due to the high number of features required to locate buildings. This problem can be avoided when 3D information is available. In this paper, we show how to fuse the spectral and height information of stereo imagery to achieve efficient and robust rooftop extraction. In the first step, the digital terrain model (DTM), and in turn the normalized digital surface model (nDSM), is generated using a new step-edge approach. In the second step, the initial building locations and rooftop boundaries are derived by removing the low-level pixels and the high-level pixels with a higher probability of being trees and shadows. This boundary then serves as the initial level set function, which is further refined to fit the best possible boundaries through distance-regularized level-set curve evolution. During the fitting procedure, the edge-based active contour model is adopted and implemented using the edge indicators extracted from the panchromatic image. The performance of the proposed approach is tested using WorldView-2 satellite data captured over Munich.

  18. Comparison of structural, thermodynamic, kinetic and mass transport properties of Mg(2+) ion models commonly used in biomolecular simulations.

    PubMed

    Panteva, Maria T; Giambaşu, George M; York, Darrin M

    2015-05-15

    The prevalence of Mg(2+) ions in biology and their essential role in nucleic acid structure and function has motivated the development of various Mg(2+) ion models for use in molecular simulations. Currently, the most widely used models in biomolecular simulations represent a nonbonded metal ion as an ion-centered point charge surrounded by a nonelectrostatic pairwise potential that takes into account dispersion interactions and exchange effects that give rise to the ion's excluded volume. One strategy toward developing improved models for biomolecular simulations is to first identify a Mg(2+) model that is consistent with the simulation force fields that closely reproduces a range of properties in aqueous solution, and then, in a second step, balance the ion-water and ion-solute interactions by tuning parameters in a pairwise fashion where necessary. The present work addresses the first step in which we compare 17 different nonbonded single-site Mg(2+) ion models with respect to their ability to simultaneously reproduce structural, thermodynamic, kinetic and mass transport properties in aqueous solution. None of the models based on a 12-6 nonelectrostatic nonbonded potential was able to reproduce the experimental radial distribution function, solvation free energy, exchange barrier and diffusion constant. The models based on a 12-6-4 potential offered improvement, and one model in particular, in conjunction with the SPC/E water model, performed exceptionally well for all properties. The results reported here establish useful benchmark calculations for Mg(2+) ion models that provide insight into the origin of the behavior in aqueous solution, and may aid in the development of next-generation models that target specific binding sites in biomolecules. © 2015 Wiley Periodicals, Inc.
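
    The 12-6-4 form referenced above adds an r⁻⁴ charge-induced-dipole term to the usual 12-6 Lennard-Jones potential. Schematically, with A, B, and C4 as placeholder coefficients (not parameters from any of the 17 models compared):

```python
def u_12_6_4(r, a, b, c4):
    """Nonbonded ion potential with an added charge-induced-dipole term:
    U(r) = A/r^12 - B/r^6 - C4/r^4. Setting c4 = 0 recovers the 12-6 form."""
    return a / r**12 - b / r**6 - c4 / r**4
```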

  19. Contributions of Greenhouse Gas Forcing and the Southern Annular Mode to Historical Southern Ocean Surface Temperature Trends

    NASA Astrophysics Data System (ADS)

    Kostov, Yavor; Ferreira, David; Armour, Kyle C.; Marshall, John

    2018-01-01

    We examine the 1979-2014 Southern Ocean (SO) sea surface temperature (SST) trends simulated in an ensemble of coupled general circulation models and evaluate possible causes of the models' inability to reproduce the observed 1979-2014 SO cooling. For each model we estimate the response of SO SST to step changes in greenhouse gas (GHG) forcing and in the seasonal indices of the Southern Annular Mode (SAM). Using these step-response functions, we skillfully reconstruct the models' 1979-2014 SO SST trends. Consistent with the seasonal signature of the Antarctic ozone hole and the seasonality of SO stratification, the summer and fall SAM exert a large impact on the simulated SO SST trends. We further identify conditions that favor multidecadal SO cooling: (1) a weak SO warming response to GHG forcing, (2) a strong multidecadal SO cooling response to a positive SAM trend, and (3) a historical SAM trend as strong as in observations.
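
    Reconstructing a trend from step-response functions amounts to convolving the increments of the forcing time series with the step response (a Green's-function approach). A discrete sketch with hypothetical names, not the paper's code:

```python
def reconstruct_response(forcing, step_response):
    """Reconstruct a response time series by convolving forcing increments
    with a step-response function G: y[t] = sum_s dF[s] * G[t - s]."""
    n = len(forcing)
    increments = [forcing[0]] + [forcing[t] - forcing[t - 1] for t in range(1, n)]
    return [sum(increments[s] * step_response[t - s] for s in range(t + 1))
            for t in range(n)]
```

    For a single unit step in the forcing, the reconstruction simply reproduces the step-response function itself.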

  20. Phase-field crystal modeling of heteroepitaxy and exotic modes of crystal nucleation

    NASA Astrophysics Data System (ADS)

    Podmaniczky, Frigyes; Tóth, Gyula I.; Tegze, György; Pusztai, Tamás; Gránásy, László

    2017-01-01

    We review recent advances made in modeling heteroepitaxy, two-step nucleation, and nucleation at the growth front within the framework of a simple dynamical density functional theory, the Phase-Field Crystal (PFC) model. The crystalline substrate is represented by spatially confined periodic potentials. We investigate the misfit dependence of the critical thickness in the Stranski-Krastanov growth mode in isothermal studies. Apparently, the simulation results for stress release via the misfit dislocations fit better to the People-Bean model than to the one by Matthews and Blakeslee. Next, we investigate structural aspects of two-step crystal nucleation at high undercoolings, where an amorphous precursor forms in the first stage. Finally, we present results for the formation of new grains at the solid-liquid interface at high supersaturations/supercoolings, a phenomenon termed Growth Front Nucleation (GFN). Results obtained with diffusive dynamics (applicable to colloids) and with a hydrodynamic extension of the PFC theory (HPFC, developed for simple liquids) will be compared. The HPFC simulations indicate two possible mechanisms for GFN.

  1. Cyclic Plasticity Constitutive Model for Uniaxial Ratcheting Behavior of AZ31B Magnesium Alloy

    NASA Astrophysics Data System (ADS)

    Lin, Y. C.; Liu, Zheng-Hua; Chen, Xiao-Min; Long, Zhi-Li

    2015-05-01

    Investigating the ratcheting behavior of magnesium alloys is significant for reliable structural design. The uniaxial ratcheting behavior of AZ31B magnesium alloy is studied by asymmetric cyclic stress-controlled experiments at room temperature. A modified kinematic hardening model is established to describe the uniaxial ratcheting behavior of the studied alloy. In the modified model, the material parameter m_i is expressed as an exponential function of the maximum equivalent stress. The modified model can be used to predict the ratcheting strain evolution of the studied alloy under single-step and multi-step asymmetric stress-controlled cyclic loadings. Additionally, due to the significant effect of twinning on the plastic deformation of magnesium alloy, the relationship between the material parameter m_i and the linear density of twins is discussed. It is found that there is a linear relationship between the material parameter m_i and the linear density of twins induced by the cyclic loadings.

  2. Mathematical modeling of the whole expanded bed adsorption process to recover and purify chitosanases from the unclarified fermentation broth of Paenibacillus ehimensis.

    PubMed

    de Araújo Padilha, Carlos Eduardo; Fortunato Dantas, Paulo Victor; de Sousa, Francisco Canindé; de Santana Souza, Domingos Fabiano; de Oliveira, Jackson Araújo; de Macedo, Gorete Ribeiro; Dos Santos, Everaldo Silvino

    2016-12-15

    In this study, a general rate model was applied to the entire process of expanded bed adsorption chromatography (EBAC) for the chitosanases purification protocol from unclarified fermentation broth produced by Paenibacillus ehimensis using the anionic adsorbent Streamline® DEAE. For the experiments performed using the expanded bed, a homemade column (2.6 cm × 30.0 cm) was specially designed. The proposed model predicted the entire EBA process adequately, giving R² values higher than 0.85 and χ² as low as 0.351 for the elution step. Using the validated model, a 3³ factorial design was used to investigate other non-tested conditions as input. It was observed that the superficial velocity during the loading and washing steps, as well as the settled bed height, has a strong positive effect on the F objective function used to evaluate the production of the purified chitosanases. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Multidomain proteins under force

    NASA Astrophysics Data System (ADS)

    Valle-Orero, Jessica; Andrés Rivas-Pardo, Jaime; Popa, Ionel

    2017-04-01

    Advancements in single-molecule force spectroscopy techniques such as atomic force microscopy and magnetic tweezers allow investigation of how domain folding under force can play a physiological role. Combining these techniques with protein engineering and HaloTag covalent attachment, we investigate similarities and differences between four model proteins: I10 and I91—two immunoglobulin-like domains from the muscle protein titin, and two α + β fold proteins—ubiquitin and protein L. These proteins show a different mechanical response and have unique extensions under force. Remarkably, when normalized to their contour length, the size of the unfolding and refolding steps as a function of force reduces to a single master curve. This curve can be described using standard models of polymer elasticity, explaining the entropic nature of the measured steps. We further validate our measurements with a simple energy landscape model, which combines protein folding with polymer physics and accounts for the complex nature of tandem domains under force. This model can become a useful tool to help in deciphering the complexity of multidomain proteins operating under force.
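
    The "master curve" follows from standard polymer elasticity: at a given force, a worm-like chain extends to a force-dependent fraction of its contour length, so unfolding step sizes collapse once normalized by contour length. A sketch using the Marko-Siggia interpolation, inverted by bisection (the thermal energy and persistence length below are assumed inputs, not the paper's fitted values):

```python
KT = 4.11  # thermal energy k_B*T near room temperature, in pN*nm

def wlc_extension(force_pn, persistence_nm):
    """Fractional extension z = x/L of a worm-like chain at a given force,
    from the Marko-Siggia interpolation F = (kT/p)*(1/(4(1-z)^2) - 1/4 + z),
    solved for z by bisection."""
    def f_of_z(z):
        return KT / persistence_nm * (0.25 / (1.0 - z) ** 2 - 0.25 + z)
    lo, hi = 0.0, 1.0 - 1e-9
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f_of_z(mid) < force_pn:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    The observed step size at force F is then the released contour length times wlc_extension(F, p), which is what makes the normalized curves collapse.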

  4. Global Sensitivity Analysis as Good Modelling Practices tool for the identification of the most influential process parameters of the primary drying step during freeze-drying.

    PubMed

    Van Bockstal, Pieter-Jan; Mortier, Séverine Thérèse F C; Corver, Jos; Nopens, Ingmar; Gernaey, Krist V; De Beer, Thomas

    2018-02-01

    Pharmaceutical batch freeze-drying is commonly used to improve the stability of biological therapeutics. The primary drying step is regulated by the dynamic settings of the adaptable process variables, shelf temperature Ts and chamber pressure Pc. Mechanistic modelling of the primary drying step leads to the optimal dynamic combination of these adaptable process variables as a function of time. According to Good Modelling Practices, a Global Sensitivity Analysis (GSA) is essential for appropriate model building. In this study, both a regression-based and a variance-based GSA were conducted on a validated mechanistic primary drying model to estimate the impact of several model input parameters on two output variables, the product temperature at the sublimation front Ti and the sublimation rate ṁsub. Ts was identified as the most influential parameter on both Ti and ṁsub, followed by Pc and the dried product mass transfer resistance Rp for Ti and ṁsub, respectively. The GSA findings were experimentally validated for ṁsub via a Design of Experiments (DoE) approach. The results indicated that GSA is a very useful tool for evaluating the impact of different process variables on the model outcome, leading to essential process knowledge without the need for time-consuming experiments (e.g., DoE). Copyright © 2017 Elsevier B.V. All rights reserved.
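As a rough illustration of the regression-based GSA idea, assuming independent input sampling and a hypothetical linear surrogate for the sublimation rate (not the paper's mechanistic model), squared correlation coefficients approximate each factor's first-order contribution to output variance:

```python
import random

def pearson(xs, ys):
    n = len(xs)
    mx = sum(xs) / n; my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs); vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical surrogate: strongly driven by shelf temperature Ts,
# weakly by chamber pressure Pc and product resistance Rp.
def sublimation_rate(Ts, Pc, Rp):
    return 2.0 * Ts - 0.5 * Pc - 0.3 * Rp

random.seed(0)
Ts = [random.uniform(-40, 0) for _ in range(2000)]   # degC
Pc = [random.uniform(5, 20) for _ in range(2000)]    # Pa
Rp = [random.uniform(1, 5) for _ in range(2000)]     # arbitrary units
y  = [sublimation_rate(a, b, c) for a, b, c in zip(Ts, Pc, Rp)]

# With independent inputs and a near-linear model, the squared correlation
# approximates each factor's first-order variance contribution.
sens = {name: pearson(xs, y) ** 2 for name, xs in [("Ts", Ts), ("Pc", Pc), ("Rp", Rp)]}
```

Here Ts dominates because its coefficient and sampling range give it the largest variance contribution, mirroring the qualitative ranking reported in the abstract.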

  5. A frequentist approach to computer model calibration

    DOE PAGES

    Wong, Raymond K. W.; Storlie, Curtis Byron; Lee, Thomas C. M.

    2016-05-05

    The paper considers the computer model calibration problem and provides a general frequentist solution. Under the framework proposed, the data model is semiparametric with a non-parametric discrepancy function which accounts for any discrepancy between physical reality and the computer model. In an attempt to solve a fundamentally important (but often ignored) identifiability issue between the computer model parameters and the discrepancy function, the paper proposes a new and identifiable parameterization of the calibration problem. It also develops a two-step procedure for estimating all the relevant quantities under the new parameterization. This estimation procedure is shown to enjoy excellent rates of convergence and can be straightforwardly implemented with existing software. For uncertainty quantification, bootstrapping is adopted to construct confidence regions for the quantities of interest. Finally, the practical performance of the methodology is illustrated through simulation examples and an application to a computational fluid dynamics model.
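The two-step idea can be illustrated on a toy problem (all functions and parameter values here are hypothetical, not the paper's estimator): step 1 estimates the physical mean nonparametrically, and step 2 picks the calibration parameter whose computer-model output is L2-closest to that estimate.

```python
import math, random

def computer_model(x, theta):
    return theta * x          # hypothetical simulator

def smooth(xs, ys, bandwidth=0.2):
    # Step 1: nonparametric (kernel-smoothed) estimate of the physical mean
    def yhat(x0):
        w = [math.exp(-((x - x0) / bandwidth) ** 2) for x in xs]
        return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
    return yhat

random.seed(1)
xs = [i / 50 for i in range(51)]
# "physical" data: the model at theta = 1.5 plus a smooth discrepancy and noise
ys = [1.5 * x + 0.2 * math.sin(3 * x) + random.gauss(0, 0.05) for x in xs]

yhat = smooth(xs, ys)
yhat_vals = [yhat(x) for x in xs]

# Step 2: calibrate theta as the L2-closest computer model to the estimated mean
grid = [i / 100 for i in range(100, 201)]  # theta in [1.0, 2.0]
theta_hat = min(grid, key=lambda t: sum((yv - computer_model(x, t)) ** 2
                                        for x, yv in zip(xs, yhat_vals)))
```

Because the discrepancy is absorbed into the nonparametric estimate, theta_hat targets the L2-projection of reality onto the computer-model class, which is the identifiable quantity rather than the "true" generating theta.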

  6. A neuro-mechanical model of a single leg joint highlighting the basic physiological role of fast and slow muscle fibres of an insect muscle system.

    PubMed

    Toth, Tibor Istvan; Schmidt, Joachim; Büschges, Ansgar; Daun-Gruhn, Silvia

    2013-01-01

    In legged animals, the muscle system has a dual function: to produce the forces and torques necessary to move the limbs in a systematic way, and to maintain the body in a static position. These two functions are performed by the contribution of specialized motor units, i.e. motoneurons driving sets of specialized muscle fibres. With reference to their overall contraction and metabolic properties they are called fast and slow muscle fibres and can be found ubiquitously in skeletal muscles. Both fibre types are active during stepping, but only the slow ones maintain the posture of the body. From these findings, the general hypothesis of a functional segregation between the two fibre types and their neuronal control has arisen. Earlier muscle models did not fully take this aspect into account. They either focused on certain aspects of muscular function or were developed to describe specific behaviours only. By contrast, our neuro-mechanical model is more general, as it allows one to differentiate functionally between static and dynamic aspects of movement control. It does so by including both muscle fibre types and separate motoneuron drives. Our model helps to gain a deeper insight into how the nervous system might combine neuronal control of locomotion and posture. It predicts that (1) positioning the leg at a specific retraction angle in steady state is most likely due to the extent of recruitment of slow muscle fibres and not to the force developed in the individual fibres of the antagonistic muscles; (2) the fast muscle fibres of antagonistic muscles contract alternately during stepping, while co-contraction of the slow muscle fibres takes place during steady state; (3) there are several possible ways of transition between movement and steady state of the leg achieved by varying the time course of recruitment of the fibres in the participating muscles.

  7. Piezoelectroluminescent Optical Fiber Sensor for Diagnostics of the Stress State and Defectoscopy of Composites

    NASA Astrophysics Data System (ADS)

    Pan'kov, A. A.

    2017-05-01

    A mathematical model is developed for a piezoelectroluminescent optical fiber pressure sensor in which the mechanoluminescence effect results from the interaction of electroluminescent and piezoelectric coverings put on an optical fiber. Additional control electrodes expand the possibilities of analyzing the distribution of pressure along the fiber. The probability density function of the pressure distribution along the sensor is found from results of the measured intensity of light coming from the optical fiber. The problem is reduced to the solution of a Fredholm integral equation of the first kind with a difference kernel depending on the effective parameters of the sensor and the properties of the electroluminophor. An algorithm of step-by-step scanning of the nonuniform pressure along the sensor by using a running wave of control voltage is developed. On each step, the amplitude of the wave is increased by a small value, which leads to the appearance of additional luminescence sections of the electroluminophor and the corresponding "glow pulses" at the output of the optical fiber sensor. The sought-for nodal values of pressure and their locations are calculated from the form of the glow pulses, taking into account the amplitude of the wave at each scanning step. Results of numerical modeling of the process of locating pressure nonuniformities along the sensor by the running wave are presented for different scanning steps.
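A minimal sketch of the scanning idea, under the simplifying assumption that a luminophor segment glows once local pressure plus wave amplitude exceeds a fixed threshold (the actual sensor physics involves the Fredholm inversion described above; all numbers are invented):

```python
# Hypothetical thresholding picture of the scanning scheme: a segment of the
# electroluminophor starts to glow once (local pressure + wave amplitude)
# exceeds a fixed luminescence threshold.
THRESHOLD = 10.0
pressures = [3.2, 7.5, 1.1, 5.0]          # unknown nodal pressures (to recover)

def glowing(amplitude):
    return [p + amplitude >= THRESHOLD for p in pressures]

# Scan: raise the wave amplitude in small steps; record the first step at
# which each segment lights up, then invert the threshold condition.
step = 0.1
first_on = [None] * len(pressures)
a = 0.0
while a <= THRESHOLD and not all(v is not None for v in first_on):
    for i, on in enumerate(glowing(a)):
        if on and first_on[i] is None:
            first_on[i] = a
    a += step

recovered = [THRESHOLD - fa for fa in first_on]
```

Each new "glow pulse" reveals one nodal pressure, recovered to within the amplitude step size, which is why smaller scanning steps give finer pressure resolution.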

  8. Exploring predictive performance: A reanalysis of the geospace model transition challenge

    NASA Astrophysics Data System (ADS)

    Welling, D. T.; Anderson, B. J.; Crowley, G.; Pulkkinen, A. A.; Rastätter, L.

    2017-01-01

    The Pulkkinen et al. (2013) study evaluated the ability of five different geospace models to predict surface dB/dt as a function of upstream solar drivers. This was an important step in the assessment of research models for predicting and ultimately preventing the damaging effects of geomagnetically induced currents. Many questions remain concerning the capabilities of these models. This study presents a reanalysis of the Pulkkinen et al. (2013) results in an attempt to better understand the models' performance. The range of validity of the models is determined by examining the conditions corresponding to the empirical input data. It is found that the empirical conductance models on which global magnetohydrodynamic models rely are frequently used outside the limits of their input data. The prediction error for the models is sorted as a function of solar driving and geomagnetic activity. It is found that all models show a bias toward underprediction, especially during active times. These results have implications for future research aimed at improving operational forecast models.
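The underprediction bias reported above is typically quantified with contingency-table metrics for threshold crossings. A sketch with invented dB/dt values (the thresholds and metrics follow standard forecast-verification practice, not this paper's exact tables):

```python
# Hypothetical dB/dt threshold-crossing validation: compare predicted and
# observed exceedances of a fixed threshold and tabulate the outcomes.
def contingency(obs, pred, threshold):
    hits = misses = false_alarms = correct_negatives = 0
    for o, p in zip(obs, pred):
        if o >= threshold and p >= threshold: hits += 1
        elif o >= threshold and p < threshold: misses += 1
        elif o < threshold and p >= threshold: false_alarms += 1
        else: correct_negatives += 1
    return hits, misses, false_alarms, correct_negatives

def pod(h, m):                 # probability of detection
    return h / (h + m) if h + m else float("nan")

def frequency_bias(h, m, f):   # < 1 indicates a bias toward underprediction
    return (h + f) / (h + m) if h + m else float("nan")

obs  = [0.2, 1.5, 2.0, 0.1, 1.2, 0.3]   # illustrative dB/dt values (nT/s)
pred = [0.1, 0.9, 2.1, 0.2, 0.8, 0.2]
h, m, f, c = contingency(obs, pred, threshold=1.0)
```

Sorting these metrics by driving conditions (as the reanalysis does) then shows whether the underprediction worsens during active times.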

  9. Progress Implementing a Model-Based Iterative Reconstruction Algorithm for Ultrasound Imaging of Thick Concrete

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Almansouri, Hani; Johnson, Christi R; Clayton, Dwight A

    All commercial nuclear power plants (NPPs) in the United States contain concrete structures. These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and the degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Concrete structures in NPPs are often inaccessible and contain large volumes of massively thick concrete. While acoustic imaging using the synthetic aperture focusing technique (SAFT) works adequately well for thin specimens of concrete such as concrete transportation structures, enhancements are needed for heavily reinforced, thick concrete. We argue that image reconstruction quality for acoustic imaging in thick concrete could be improved with Model-Based Iterative Reconstruction (MBIR) techniques. MBIR works by designing a probabilistic model for the measurements (forward model) and a probabilistic model for the object (prior model). Both models are used to formulate an objective function (cost function). The final step in MBIR is to optimize the cost function. Previously, we have demonstrated a first implementation of MBIR for an ultrasonic transducer array system. The original forward model has been upgraded to account for the direct arrival signal. Updates to the forward model will be documented and the new algorithm will be assessed with synthetic and empirical samples.

  10. Progress implementing a model-based iterative reconstruction algorithm for ultrasound imaging of thick concrete

    NASA Astrophysics Data System (ADS)

    Almansouri, Hani; Johnson, Christi; Clayton, Dwight; Polsky, Yarom; Bouman, Charles; Santos-Villalobos, Hector

    2017-02-01

    All commercial nuclear power plants (NPPs) in the United States contain concrete structures. These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and the degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Concrete structures in NPPs are often inaccessible and contain large volumes of massively thick concrete. While acoustic imaging using the synthetic aperture focusing technique (SAFT) works adequately well for thin specimens of concrete such as concrete transportation structures, enhancements are needed for heavily reinforced, thick concrete. We argue that image reconstruction quality for acoustic imaging in thick concrete could be improved with Model-Based Iterative Reconstruction (MBIR) techniques. MBIR works by designing a probabilistic model for the measurements (forward model) and a probabilistic model for the object (prior model). Both models are used to formulate an objective function (cost function). The final step in MBIR is to optimize the cost function. Previously, we have demonstrated a first implementation of MBIR for an ultrasonic transducer array system. The original forward model has been upgraded to account for direct arrival signal. Updates to the forward model will be documented and the new algorithm will be assessed with synthetic and empirical samples.
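The forward-model / prior-model / cost-function structure of MBIR can be sketched with a toy quadratic example; a real system would use an ultrasonic forward operator and an edge-preserving prior, both simplified here to keep the optimization step visible:

```python
# Minimal MBIR-style sketch: quadratic data-fit term plus quadratic prior,
# minimized by gradient descent. Operator, data, and weights are invented.
A = [[1.0, 0.5],
     [0.0, 1.0],
     [0.3, 0.2]]          # hypothetical forward operator (3 measurements, 2 voxels)
y = [2.0, 1.0, 0.8]       # measured data
lam = 0.01                # prior weight

def matvec(M, v):
    return [sum(m * vi for m, vi in zip(row, v)) for row in M]

def cost(x):
    r = [yi - ri for yi, ri in zip(y, matvec(A, x))]
    return sum(ri ** 2 for ri in r) + lam * sum(xi ** 2 for xi in x)

def grad(x):
    r = [yi - ri for yi, ri in zip(y, matvec(A, x))]
    At_r = [sum(A[i][j] * r[i] for i in range(len(A))) for j in range(len(x))]
    return [-2 * g + 2 * lam * xi for g, xi in zip(At_r, x)]

x = [0.0, 0.0]
for _ in range(500):      # "the final step in MBIR is to optimize the cost function"
    g = grad(x)
    x = [xi - 0.1 * gi for xi, gi in zip(x, g)]
```

Upgrading the forward model (e.g. to account for the direct arrival signal) changes only `A` in this picture; the prior and the optimization loop are unchanged.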

  11. An Optimal Free Energy Dissipation Strategy of the MinCDE Oscillator in Regulating Symmetric Bacterial Cell Division

    PubMed Central

    Xiong, Liping; Lan, Ganhui

    2015-01-01

    Sustained molecular oscillations are ubiquitous in biology. The obtained oscillatory patterns provide vital functions as timekeepers, pacemakers and spacemarkers. Models based on control theory have been introduced to explain how specific oscillatory behaviors stem from protein interaction feedbacks, whereas the energy dissipation through the oscillating processes and its role in the regulatory function remain unexplored. Here we developed a general framework to assess an oscillator’s regulation performance at different dissipation levels. Using the Escherichia coli MinCDE oscillator as a model system, we showed that a sufficient amount of energy dissipation is needed to switch on the oscillation, which is tightly coupled to the system’s regulatory performance. Once the dissipation level is beyond this threshold, unlike stationary regulators’ monotonic performance-to-cost relation, excess dissipation at certain steps in the oscillating process damages the oscillator’s regulatory performance. We further discovered that the chemical free energy from ATP hydrolysis has to be strategically assigned to the MinE-aided MinD release and the MinD immobilization steps for optimal performance, and a higher energy budget improves the robustness of the oscillator. These results unfold a novel mode by which living systems trade energy for regulatory function. PMID:26317492

  12. Sage Simulation Model for Technology Demonstration Convertor by a Step-by-Step Approach

    NASA Technical Reports Server (NTRS)

    Demko, Rikako; Penswick, L. Barry

    2006-01-01

    The development of a Stirling model using the 1-D Sage design code was completed using a step-by-step approach. This is a method of gradually increasing the complexity of the Sage model while observing the energy balance and energy losses at each step of the development. This step-by-step model development and energy-flow analysis can clarify where the losses occur and their impact, and suggest possible opportunities for design improvement.

  13. Minimum Performance on Clinical Tests of Physical Function to Predict Walking 6,000 Steps/Day in Knee Osteoarthritis: An Observational Study.

    PubMed

    Master, Hiral; Thoma, Louise M; Christiansen, Meredith B; Polakowski, Emily; Schmitt, Laura A; White, Daniel K

    2018-07-01

    Evidence of physical function difficulties, such as difficulty rising from a chair, may limit daily walking for people with knee osteoarthritis (OA). The purpose of this study was to identify minimum performance thresholds on clinical tests of physical function predictive of walking ≥6,000 steps/day. This benchmark is known to discriminate people with knee OA who develop functional limitation over time from those who do not. Using data from the Osteoarthritis Initiative, we quantified daily walking as average steps/day from an accelerometer (Actigraph GT1M) worn for ≥10 hours/day over 1 week. Physical function was quantified using 3 performance-based clinical tests: the 5 times sit-to-stand test, walking speed (tested over 20 meters), and the 400-meter walk test. To identify minimum performance thresholds for daily walking, we calculated physical function values corresponding to high specificity (80-95%) to predict walking ≥6,000 steps/day. Among 1,925 participants (mean ± SD age 65.1 ± 9.1 years, mean ± SD body mass index 28.4 ± 4.8 kg/m², and 55% female) with valid accelerometer data, 54.9% walked ≥6,000 steps/day. High specificity thresholds of physical function for walking ≥6,000 steps/day ranged from 11.4 to 14.0 seconds on the 5 times sit-to-stand test, 1.13 to 1.26 meters/second for walking speed, or 315 to 349 seconds on the 400-meter walk test. Not meeting these minimum performance thresholds on clinical tests of physical function may indicate inadequate physical ability to walk ≥6,000 steps/day for people with knee OA. Rehabilitation may be indicated to address underlying impairments limiting physical function. © 2017, American College of Rheumatology.
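The high-specificity threshold construction can be sketched as follows, with invented sit-to-stand times: scan candidate cutoffs and keep the most lenient one whose specificity for the ≥6,000 steps/day outcome meets the target (the study's actual data and 80-95% range are not reproduced here).

```python
# Toy sketch: choose a cutoff on a clinical test (5x sit-to-stand time,
# where lower is better) so that specificity for walking >= 6,000 steps/day
# reaches a target level. All values below are invented for illustration.
times   = [9.0, 10.5, 11.0, 12.0, 12.5, 13.0, 14.5, 15.0, 16.0, 18.0]
walks6k = [1,   1,    1,    1,    0,    1,    0,    0,    0,    0]  # 1 = walks >= 6,000 steps/day

def specificity(cutoff):
    # predict "walks >= 6,000 steps/day" when time <= cutoff;
    # specificity = fraction of non-walkers correctly predicted negative
    tn = sum(1 for t, w in zip(times, walks6k) if w == 0 and t > cutoff)
    negatives = sum(1 for w in walks6k if w == 0)
    return tn / negatives

# scan candidate cutoffs, keep the largest (most lenient) one meeting the target
target = 0.8
cutoff = max(c for c in times if specificity(c) >= target)
```

Anyone slower than the chosen cutoff is flagged as unlikely to reach the daily-walking benchmark, which is how such thresholds translate into a rehabilitation screen.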

  14. Step climbing capacity in patients with pulmonary hypertension.

    PubMed

    Fox, Benjamin Daniel; Langleben, David; Hirsch, Andrew; Boutet, Kim; Shimony, Avi

    2013-01-01

    Patients with pulmonary hypertension (PH) typically have exercise intolerance and limitation in climbing steps. Our aim was to explore the exercise physiology of step climbing in PH patients using a laboratory-based step test. We built a step oximetry system from an 'aerobics' step equipped with pressure sensors and a pulse oximeter linked to a computer. Subjects mounted and dismounted from the step until their maximal exercise capacity or 200 steps was achieved. Step count, SpO(2) and heart rate were monitored throughout exercise and recovery. We derived indices of exercise performance, desaturation and heart rate. A 6-min walk test and serum N-terminal pro-brain natriuretic peptide (NT-proBNP) level were measured. Lung function tests and hemodynamic parameters were extracted from the medical record. Eighty-six subjects [52 pulmonary arterial hypertension (PAH), 14 chronic thromboembolic PH (CTEPH), 20 controls] were recruited. Exercise performance (climbing time, height gained, velocity, energy expenditure, work-rate and climbing index) on the step test was significantly worse with PH and/or worsening WHO functional class (ANOVA, p < 0.001). There was a good correlation between exercise performance on the step and 6-min walking distance-climb index (r = -0.77, p < 0.0001). The saturation deviation (mean of SpO(2) values <95 %) on the step test correlated with the diffusion capacity of the lung (ρ = -0.49, p = 0.001). No correlations were found between the step test indices and other lung function tests, hemodynamic parameters or NT-proBNP levels. Patients with PAH/CTEPH have significant limitation in step climbing ability that correlates with functional class and 6-min walking distance. This is a significant impediment to their daily activities.

  15. Computational principles underlying recognition of acoustic signals in grasshoppers and crickets.

    PubMed

    Ronacher, Bernhard; Hennig, R Matthias; Clemens, Jan

    2015-01-01

    Grasshoppers and crickets independently evolved hearing organs and acoustic communication. They differ considerably in the organization of their auditory pathways, and the complexity of their songs, which are essential for mate attraction. Recent approaches aimed at describing the behavioral preference functions of females in both taxa by a simple modeling framework. The basic structure of the model consists of three processing steps: (1) feature extraction with a bank of 'LN models'-each containing a linear filter followed by a nonlinearity, (2) temporal integration, and (3) linear combination. The specific properties of the filters and nonlinearities were determined using a genetic learning algorithm trained on a large set of different song features and the corresponding behavioral response scores. The model showed an excellent prediction of the behavioral responses to the tested songs. Most remarkably, in both taxa the genetic algorithm found Gabor-like functions as the optimal filter shapes. By slight modifications of Gabor filters several types of preference functions could be modeled, which are observed in different cricket species. Furthermore, this model was able to explain several so far enigmatic results in grasshoppers. The computational approach offered a remarkably simple framework that can account for phenotypically rather different preference functions across several taxa.
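The three processing steps can be sketched as a minimal LN-model pipeline; the Gabor parameters, the rectifying nonlinearity, and the pulse-train "song" below are illustrative choices, not the fitted filters from the genetic algorithm:

```python
import math

# Sketch of the three-step model: (1) Gabor-filter the song envelope,
# (2) pass through a static nonlinearity, (3) integrate over time.
def gabor(t, sigma=5.0, freq=0.05, phase=0.0):
    return math.exp(-t * t / (2 * sigma ** 2)) * math.cos(2 * math.pi * freq * t + phase)

kernel = [gabor(t - 20) for t in range(41)]     # 41-tap filter, centered

def ln_response(envelope):
    out = []
    for i in range(len(envelope) - len(kernel) + 1):
        lin = sum(k * envelope[i + j] for j, k in enumerate(kernel))  # (1) linear filter
        out.append(max(0.0, lin))                                     # (2) rectifying nonlinearity
    return sum(out) / len(out)                                        # (3) temporal integration

# a periodic pulse-train envelope, as a crude stand-in for a song pattern
song = [1.0 if (i % 20) < 5 else 0.0 for i in range(200)]
score = ln_response(song)
```

Varying the Gabor width, frequency, and phase reshapes the preference function, which is the sense in which slight modifications of Gabor filters model the preferences of different cricket species.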

  16. The use of experimental structures to model protein dynamics.

    PubMed

    Katebi, Ataur R; Sankar, Kannan; Jia, Kejue; Jernigan, Robert L

    2015-01-01

    The number of solved protein structures submitted in the Protein Data Bank (PDB) has increased dramatically in recent years. For some specific proteins, this number is very high; for example, there are over 550 solved structures for HIV-1 protease, a protein that is essential for the life cycle of human immunodeficiency virus (HIV), which causes acquired immunodeficiency syndrome (AIDS) in humans. The large number of structures for the same protein and its variants includes a sample of different conformational states of the protein. A rich set of structures solved experimentally for the same protein has information buried within the dataset that can explain the functional dynamics and structural mechanism of the protein. To extract the dynamics information and functional mechanism from the experimental structures, this chapter focuses on two methods: Principal Component Analysis (PCA) and Elastic Network Models (ENM). PCA is a widely used statistical dimensionality reduction technique to classify and visualize high-dimensional data. ENMs, on the other hand, are a well-established, simple biophysical method for modeling the functionally important global motions of proteins. This chapter covers the basics of these two methods. Moreover, an improved ENM version that utilizes the variations found within a given set of structures for a protein is described. As a practical example, we have extracted the functional dynamics and mechanism of the HIV-1 protease dimeric structure by using a set of 329 PDB structures of this protein. We describe, step by step, how to select a set of protein structures, how to extract the needed information from the PDB files for PCA, how to extract the dynamics information using PCA, how to calculate ENM modes, how to measure the congruency between the dynamics computed from the principal components (PCs) and the ENM modes, and how to compute entropies using the PCs. We provide the computer programs or references to software tools to accomplish each step and show how to use these programs and tools. We also include computer programs to generate movies based on the PCs and ENM modes and describe how to visualize them.
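The PCA step on a structural ensemble can be sketched with toy coordinates; here the ensemble is built to vary mostly along one known direction (a stand-in for a functional motion), and power iteration on the covariance matrix recovers it as the top principal component. Sizes and noise levels are invented.

```python
import random

# Rows are "structures", columns are coordinates.
random.seed(2)
n_struct, n_coord = 50, 6
base = [random.gauss(0, 1) for _ in range(n_coord)]
direction = [1, -1, 1, 0, 0, 0]        # the dominant collective motion
X = []
for _ in range(n_struct):
    amp = random.gauss(0, 1)
    X.append([b + amp * d + random.gauss(0, 0.1) for b, d in zip(base, direction)])

# center the ensemble and form the coordinate covariance matrix
means = [sum(col) / n_struct for col in zip(*X)]
Xc = [[x - m for x, m in zip(row, means)] for row in X]
C = [[sum(Xc[k][i] * Xc[k][j] for k in range(n_struct)) / (n_struct - 1)
      for j in range(n_coord)] for i in range(n_coord)]

# power iteration for the top principal component
v = [1.0] * n_coord
for _ in range(200):
    w = [sum(C[i][j] * v[j] for j in range(n_coord)) for i in range(n_coord)]
    norm = sum(wi ** 2 for wi in w) ** 0.5
    v = [wi / norm for wi in w]
```

In the chapter's workflow the rows would be aligned PDB structures; the congruency check against ENM modes is then a dot product between `v` and each normalized ENM mode vector.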

  17. Coupled-cluster sum-frequency generation nonlinear susceptibilities of methyl (CH3) and methylene (CH2) groups.

    PubMed

    Tetsassi Feugmo, Conrard Giresse; Liégeois, Vincent; Champagne, Benoît

    2017-11-15

    The first vibrational sum frequency generation (SFG) spectra based on molecular properties calculated at the coupled cluster singles and doubles (CCSD) level of approximation have been simulated for interfacial model alkyl chains, providing benchmark data for comparisons with approximate methods, including density functional theory (DFT). The approach proceeds in three steps. In the first two steps, the molecular spectral properties are determined: the vibrational normal modes and frequencies, and then the derivatives of the dipole moment and of the polarizability with respect to the normal coordinates. These derivatives are evaluated with a numerical differentiation approach, of which the accuracy was monitored using Romberg's procedure. Then, in the last step, a three-layer model is employed to evaluate the macroscopic second-order nonlinear optical responses and thereby the simulated SFG spectra of the alkyl interface. Results emphasize the following facts: (i) the dipole and polarizability derivatives calculated at the DFT level with the B3LYP exchange-correlation functional can differ, with respect to CCSD, by as much as ±10 to 20% and ±20 to 50% for the CH3 and CH2 vibrations, respectively; (ii) these differences are enhanced when considering the SFG intensities as well as their variations as a function of the experimental configuration (ppp versus ssp) and as a function of the tilt and rotation angles defining the orientation of the alkyl chain at the interface; (iii) these differences originate from both the vibrational normal coordinates and the Cartesian derivatives of the dipole moment and polarizability; (iv) freezing the successive fragments of the alkyl chain strongly modifies the SFG spectrum and enables highlighting the delocalization effects between the terminal CH3 group and its neighboring CH2 units; and finally (v) going from the free chain to the free methyl model, and further to C3v constraints, leads to large variations of two ratios that are frequently used to probe the molecular orientation at the interface, the (r + r)/r+ ratio for both antisymmetric and symmetric CH3 vibrations and the Ippp/Issp ratio.
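The numerical-differentiation step with Romberg monitoring can be sketched as Richardson extrapolation of central differences; `mu` below is a hypothetical property curve standing in for a dipole-moment component along a normal coordinate, not a CCSD result:

```python
# Central differences at shrinking step sizes, combined by Richardson
# (Romberg) extrapolation to cancel the leading truncation errors.
def mu(q):
    return 0.3 * q + 0.05 * q ** 3          # hypothetical property curve

def romberg_derivative(f, q0, h0=0.1, levels=4):
    # D[k][0]: central difference with step h0 / 2**k (error O(h^2))
    D = [[(f(q0 + h0 / 2 ** k) - f(q0 - h0 / 2 ** k)) / (2 * h0 / 2 ** k)]
         for k in range(levels)]
    for j in range(1, levels):               # Richardson extrapolation columns
        for k in range(levels - j):
            D[k].append((4 ** j * D[k + 1][j - 1] - D[k][j - 1]) / (4 ** j - 1))
    return D[0][levels - 1]

d = romberg_derivative(mu, q0=0.5)
```

Monitoring how the extrapolated columns stabilize as the step shrinks is what "accuracy monitored using Romberg's procedure" refers to: when successive columns agree, truncation and round-off errors are balanced.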

  18. The Use of Experimental Structures to Model Protein Dynamics

    PubMed Central

    Katebi, Ataur R.; Sankar, Kannan; Jia, Kejue; Jernigan, Robert L.

    2014-01-01

    Summary The number of solved protein structures submitted in the Protein Data Bank (PDB) has increased dramatically in recent years. For some specific proteins, this number is very high; for example, there are over 550 solved structures for HIV-1 protease, a protein that is essential for the life cycle of human immunodeficiency virus (HIV), which causes acquired immunodeficiency syndrome (AIDS) in humans. The large number of structures for the same protein and its variants includes a sample of different conformational states of the protein. A rich set of structures solved experimentally for the same protein has information buried within the dataset that can explain the functional dynamics and structural mechanism of the protein. To extract the dynamics information and functional mechanism from the experimental structures, this chapter focuses on two methods: Principal Component Analysis (PCA) and Elastic Network Models (ENM). PCA is a widely used statistical dimensionality reduction technique to classify and visualize high-dimensional data. ENMs, on the other hand, are a well-established, simple biophysical method for modeling the functionally important global motions of proteins. This chapter covers the basics of these two methods. Moreover, an improved ENM version that utilizes the variations found within a given set of structures for a protein is described. As a practical example, we have extracted the functional dynamics and mechanism of the HIV-1 protease dimeric structure by using a set of 329 PDB structures of this protein. We describe, step by step, how to select a set of protein structures, how to extract the needed information from the PDB files for PCA, how to extract the dynamics information using PCA, how to calculate ENM modes, how to measure the congruency between the dynamics computed from the principal components (PCs) and the ENM modes, and how to compute entropies using the PCs. We provide the computer programs or references to software tools to accomplish each step and show how to use these programs and tools. We also include computer programs to generate movies based on the PCs and ENM modes and describe how to visualize them. PMID:25330965

  19. Integrating K-means Clustering with Kernel Density Estimation for the Development of a Conditional Weather Generation Downscaling Model

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Ho, C.; Chang, L.

    2011-12-01

    In recent decades, climate change caused by global warming has increased the occurrence frequency of extreme hydrological events. Water supply shortages caused by extreme events create great challenges for water resource management. To evaluate future climate variations, general circulation models (GCMs) are the most widely known tools; they show possible weather conditions under pre-defined CO2 emission scenarios announced by the IPCC. Because the study area of GCMs is the entire earth, the grid sizes of GCMs are much larger than the basin scale. To overcome this gap, a statistical downscaling technique can transform the regional-scale weather factors into basin-scale precipitations. Statistical downscaling techniques can be divided into three categories: transfer functions, weather generators, and weather typing. The first two categories describe the relationships between the weather factors and precipitations based on, respectively, deterministic algorithms, such as linear or nonlinear regression and ANNs, and stochastic approaches, such as Markov chain theory and statistical distributions. Weather typing clusters the weather factors, which are high-dimensional continuous variables, into weather types, which are a limited number of discrete states. In this study, the proposed downscaling model integrates weather typing, using the K-means clustering algorithm, with a weather generator, using kernel density estimation. The study area is the Shihmen basin in northern Taiwan. The research process contains two steps, a calibration step and a synthesis step. Three sub-steps were used in the calibration step. First, weather factors, such as pressures, humidities and wind speeds, obtained from NCEP, and the precipitations observed at rainfall stations were collected for downscaling. Second, K-means clustering grouped the weather factors into four weather types. Third, the Markov chain transition matrices and the conditional probability density function (PDF) of precipitations, approximated by kernel density estimation, are calculated for each weather type. In the synthesis step, 100 patterns of synthetic data are generated. First, the weather type of the n-th day is determined by the results of the K-means clustering. The associated transition matrix and PDF of the weather type are also determined for use in the next sub-step of the synthesis process. Second, the precipitation condition, dry or wet, is synthesized based on the transition matrix. If the synthesized condition is dry, the quantity of precipitation is zero; otherwise, the quantity is determined in the third sub-step. Third, the quantity of the synthesized precipitation is drawn as a random variable from the PDF defined above. Synthesis performance is evaluated by comparing the monthly mean curves and monthly standard deviation curves of the historical precipitation data with those of the 100 patterns of synthetic data.
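The synthesis sub-steps for one weather type can be sketched as a two-state Markov chain for precipitation occurrence plus sampling from a Gaussian kernel density estimate for wet-day amounts; the transition probabilities and historical amounts below are invented, not the Shihmen-basin values:

```python
import random

random.seed(3)
p_wet_given_dry, p_wet_given_wet = 0.3, 0.6     # hypothetical transition probabilities
wet_amounts = [2.1, 5.0, 3.3, 8.2, 1.4, 4.7]    # historical wet-day totals (mm)
bandwidth = 0.8                                  # KDE bandwidth (Gaussian kernels)

def sample_amount():
    # sampling from a Gaussian KDE = pick a data point, add kernel noise
    return max(0.0, random.choice(wet_amounts) + random.gauss(0, bandwidth))

def synthesize(n_days):
    series, wet = [], False
    for _ in range(n_days):
        p = p_wet_given_wet if wet else p_wet_given_dry
        wet = random.random() < p
        series.append(sample_amount() if wet else 0.0)
    return series

series = synthesize(1000)
```

In the full model, each weather type supplied by the K-means step carries its own transition matrix and KDE, so the generator switches parameter sets as the daily weather type changes.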

  20. The effect of Chinese Jinzhida recipe on the hippocampus in a rat model of diabetes-associated cognitive decline

    PubMed Central

    2013-01-01

    Background To investigate the effects of treatment with the multicomponent Chinese medicine Jinzhida (JZD) on behavioral deficits in diabetes-associated cognitive decline (DACD) rats and verify our hypothesis that JZD treatment improves cognitive function by suppressing endoplasmic reticulum stress (ERS) and improving insulin signaling transduction in the rats' hippocampus. Methods A rat model of type 2 diabetes mellitus (T2DM) was established using a high fat diet and streptozotocin (30 mg/kg, ip). Insulin sensitivity was evaluated by the oral glucose tolerance test and the insulin tolerance test. After 7 weeks, the T2DM rats were treated with JZD. The step-down test and Morris water maze were used to evaluate behavior in T2DM rats after 5 weeks of treatment with JZD. Levels of phosphorylated proteins involved in the ERS and in insulin signaling transduction pathways were assessed by Western blot in the T2DM rats' hippocampus. Results Compared to healthy control rats, T2DM rats initially showed insulin resistance and, after 12 weeks, had declines in acquisition and retrieval processes in the step-down test and in spatial memory in the Morris water maze. Performance on both the step-down test and Morris water maze tasks improved after JZD treatment. In T2DM rats, the ERS was activated and then inhibited the insulin signal transduction pathways through a Jun NH2-terminal kinase (JNK)-mediated mechanism. JZD treatment suppressed the ERS, increased insulin signal transduction, and improved insulin resistance in the rats' hippocampus. Conclusions Treatment with JZD improved cognitive function in the T2DM rat model. The possible mechanism for DACD may be related to ERS-induced dysfunction of insulin signal transduction in the T2DM rats' hippocampus. JZD could reduce the ERS, improve insulin signal transduction and insulin resistance in the T2DM rats' hippocampus, and as a result improve cognitive function. PMID:23829668

  1. Extracting the normal lung dose-response curve from clinical DVH data: a possible role for low dose hyper-radiosensitivity, increased radioresistance

    NASA Astrophysics Data System (ADS)

    Gordon, J. J.; Snyder, K.; Zhong, H.; Barton, K.; Sun, Z.; Chetty, I. J.; Matuszak, M.; Ten Haken, R. K.

    2015-09-01

    In conventionally fractionated radiation therapy for lung cancer, radiation pneumonitis (RP) dependence on the normal lung dose-volume histogram (DVH) is not well understood. Complication models alternatively make RP a function of a summary statistic, such as mean lung dose (MLD). This work searches over damage profiles, which quantify sub-volume damage as a function of dose. Profiles that achieve the best RP predictive accuracy on a clinical dataset are hypothesized to approximate the DVH dependence. Step function damage rate profiles R(D) are generated, having discrete steps at several dose points. A range of profiles is sampled by varying the step heights and dose point locations. Normal lung damage is the integral of R(D) with the cumulative DVH. Each profile is used in conjunction with a damage cutoff to predict grade 2 plus (G2+) RP for DVHs from a University of Michigan clinical trial dataset consisting of 89 CFRT patients, of which 17 were diagnosed with G2+ RP. Optimal profiles achieve a modest increase in predictive accuracy: erroneous RP predictions are reduced from 11 (using MLD) to 8. A novel result is that optimal profiles have a similar distinctive shape: an enhanced damage contribution from low doses (<20 Gy), a flat contribution from doses in the range ~20-40 Gy, then a further enhanced contribution from doses above 40 Gy. These features resemble the hyper-radiosensitivity / increased radioresistance (HRS/IRR) observed in some cell survival curves, which can be modeled using Joiner's induced repair model. A novel search strategy is employed, which has the potential to estimate RP dependence on the normal lung DVH. When applied to a clinical dataset, identified profiles share a characteristic shape, which resembles HRS/IRR. This suggests that normal lung may have enhanced sensitivity to low doses, and that this sensitivity can affect RP risk.
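The profile-times-DVH damage metric and cutoff classifier described above can be sketched in a few lines. This is a minimal sketch, not the study's implementation: the dose edges, step heights, DVH values, and cutoff below are illustrative stand-ins for the fitted quantities.

```python
import numpy as np

# Piecewise-constant damage-rate profile R(D): step_rates[i] applies on
# [dose_edges[i], dose_edges[i+1]). The HRS/IRR-like shape (enhanced <20 Gy,
# flat 20-40 Gy, enhanced >40 Gy) is illustrative only.
def damage_rate(dose, dose_edges, step_rates):
    """Evaluate the step profile R(D) at the given doses."""
    idx = np.searchsorted(dose_edges, dose, side="right") - 1
    return step_rates[np.clip(idx, 0, len(step_rates) - 1)]

def total_damage(dvh_dose, dvh_volume, dose_edges, step_rates):
    """Integrate R(D) against a cumulative DVH (fractional volume receiving >= dose)."""
    dvol = -np.diff(dvh_volume)                  # differential volume per dose bin
    dmid = 0.5 * (dvh_dose[:-1] + dvh_dose[1:])  # bin-centre doses
    return float(np.sum(damage_rate(dmid, dose_edges, step_rates) * dvol))

def predict_rp(dvh_dose, dvh_volume, dose_edges, step_rates, cutoff):
    """Predict grade 2+ RP when the integrated damage exceeds the cutoff."""
    return total_damage(dvh_dose, dvh_volume, dose_edges, step_rates) > cutoff
```

A search over step heights, edge locations and cutoff, scored by prediction errors on the clinical DVHs, would then approximate the optimisation described in the abstract.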

  2. From MIMO-OFDM Algorithms to a Real-Time Wireless Prototype: A Systematic Matlab-to-Hardware Design Flow

    NASA Astrophysics Data System (ADS)

    Weijers, Jan-Willem; Derudder, Veerle; Janssens, Sven; Petré, Frederik; Bourdoux, André

    2006-12-01

    To assess the performance of forthcoming 4th-generation wireless local area networks, the algorithmic functionality is usually modelled using a high-level mathematical software package, for instance, Matlab. In order to validate the modelling assumptions against the real physical world, the high-level functional model needs to be translated into a prototype. A systematic system design methodology proves very valuable, since it avoids, or at least reduces, numerous design iterations. In this paper, we propose a novel Matlab-to-hardware design flow, which allows the algorithmic functionality to be mapped onto the target prototyping platform in a systematic and reproducible way. The proposed design flow is partly manual and partly tool-assisted. It is shown that the proposed design flow allows the same testbench to be used throughout the whole design flow and avoids time-consuming and error-prone intermediate translation steps.

  3. Penalized spline estimation for functional coefficient regression models.

    PubMed

    Cao, Yanrong; Lin, Haiqun; Wu, Tracy Z; Yu, Yan

    2010-04-01

    The functional coefficient regression models assume that the regression coefficients vary with some "threshold" variable, providing appreciable flexibility in capturing the underlying dynamics in data and avoiding the so-called "curse of dimensionality" in multivariate nonparametric estimation. We first investigate the estimation, inference, and forecasting for the functional coefficient regression models with dependent observations via penalized splines. The P-spline approach, as a direct ridge regression shrinkage type global smoothing method, is computationally efficient and stable. With established fixed-knot asymptotics, inference is readily available. Exact inference can be obtained for a fixed smoothing parameter λ, which is most appealing for finite samples. Our penalized spline approach gives an explicit model expression, which also enables multi-step-ahead forecasting via simulation. Furthermore, we examine different methods of choosing the important smoothing parameter λ: modified multi-fold cross-validation (MCV), generalized cross-validation (GCV), and an extension of empirical bias bandwidth selection (EBBS) to P-splines. In addition, we implement smoothing parameter selection in a mixed model framework through restricted maximum likelihood (REML) for P-spline functional coefficient regression models with independent observations. The P-spline approach also easily allows different smoothness for different functional coefficients, by assigning a different penalty λ to each. We demonstrate the proposed approach with both simulation examples and a real data application.
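The core P-spline idea (a B-spline basis plus a ridge-type penalty on differences of adjacent coefficients) can be sketched for a simple scatterplot smoother. This is a minimal sketch under stated assumptions: the knot count, degree, penalty order and λ are illustrative, and the functional-coefficient structure, dependence handling, and data-driven λ selection discussed above are omitted.

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_basis(x, n_inner=12, degree=3):
    """Design matrix of B-splines on equally spaced knots, clamped at the ends."""
    xl, xr = float(x.min()), float(x.max())
    t = np.r_[[xl] * degree, np.linspace(xl, xr, n_inner), [xr] * degree]
    n_basis = len(t) - degree - 1
    B = np.empty((len(x), n_basis))
    for j in range(n_basis):
        coef = np.zeros(n_basis)
        coef[j] = 1.0                      # evaluate the j-th basis function
        B[:, j] = BSpline(t, coef, degree)(x)
    return B

def pspline_fit(x, y, lam=1.0, n_inner=12, degree=3, diff_order=2):
    """Solve (B'B + lam * D'D) beta = B'y and return the fitted curve B @ beta."""
    B = bspline_basis(x, n_inner, degree)
    D = np.diff(np.eye(B.shape[1]), diff_order, axis=0)   # difference penalty
    beta = np.linalg.solve(B.T @ B + lam * (D.T @ D), B.T @ y)
    return B @ beta
```

Because the fit is an explicit linear smoother, λ could then be tuned by criteria such as the MCV/GCV/EBBS or REML approaches mentioned above.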

  4. Timing paradox of stepping and falls in ageing: not so quick and quick(er) on the trigger

    PubMed Central

    Mille, Marie‐Laure

    2016-01-01

    Abstract Physiological and degenerative changes affecting human standing balance are major contributors to falls with ageing. During imbalance, stepping is a powerful protective action for preserving balance that may be voluntarily initiated in recognition of a balance threat, or be induced by an externally imposed mechanical or sensory perturbation. Paradoxically, with ageing and falls, initiation slowing of voluntary stepping is observed together with perturbation‐induced steps that are triggered as fast as or faster than for younger adults. While age‐associated changes in sensorimotor conduction, central neuronal processing and cognitive functions are linked to delayed voluntary stepping, alterations in the coupling of posture and locomotion may also prolong step triggering. It is less clear, however, how these factors may explain the accelerated triggering of induced stepping. We present a conceptual model that addresses this issue. For voluntary stepping, a disruption in the normal coupling between posture and locomotion may underlie step‐triggering delays through suppression of the locomotion network based on an estimation of the evolving mechanical state conditions for stability. During induced stepping, accelerated step initiation may represent an event‐triggering process whereby stepping is released according to the occurrence of a perturbation rather than to the specific sensorimotor information reflecting the evolving instability. In this case, errors in the parametric control of induced stepping and its effectiveness in stabilizing balance would be likely to occur. We further suggest that there is a residual adaptive capacity with ageing that could be exploited to improve paradoxical triggering and other changes in protective stepping to impact fall risk. PMID:26915664

  5. Transfer Relations Between Landscape Functions - The Hydrological Point of View

    NASA Astrophysics Data System (ADS)

    Fohrer, N.; Lenhart, T.; Eckhardt, K.; Frede, H.-G.

    EC market policies and regional subsidy programs have an enormous impact on local land use. This has far-reaching consequences for various landscape functions. In the joint research project SFB299 at Giessen University, the effects of land use options on economic, ecological and hydrological landscape functions are under investigation. The continuous time step model SWAT-G (Eckhardt et al., 2000; Arnold et al., 1998) is employed to characterize the influence of land use patterns on hydrological processes. The model was calibrated and validated employing a split sample approach. For two mesoscale watersheds (Aar, 60 km2; Dietzhölze, 81 km2) located in the Lahn-Dill-Bergland, Germany, different land use scenarios were analyzed with regard to their hydrological impact. Additionally, the effect of land use change was analyzed with an ecological and an agro-economic model. The impact of the stepwise changing land use was expressed as trade-off relations between different landscape functions.

  6. Consistency of internal fluxes in a hydrological model running at multiple time steps

    NASA Astrophysics Data System (ADS)

    Ficchi, Andrea; Perrin, Charles; Andréassian, Vazken

    2016-04-01

    Improving hydrological models remains a difficult task, and many avenues can be explored, including improved spatial representation, more robust parametrizations, better formulation of some processes, or modification of model structures by trial-and-error. Several past works indicate that model parameters and structure can depend on the modelling time step, so there is some rationale in investigating how a model behaves across various modelling time steps to find solutions for improvement. Here we analyse the impact of the data time step on the consistency of the internal fluxes of a rainfall-runoff model run at various time steps, using a large data set of 240 catchments. To this end, fine-time-step hydro-climatic information at sub-hourly resolution is used as input to a parsimonious rainfall-runoff model (GR) that is run at eight different model time steps (from 6 minutes to one day). The initial structure of the tested model (i.e. the baseline) corresponds to the daily model GR4J (Perrin et al., 2003), adapted to run at variable sub-daily time steps. The modelled fluxes considered are interception, actual evapotranspiration and intercatchment groundwater flows. Observations of these fluxes are not available, but comparing modelled fluxes at multiple time steps gives additional information for model identification. The joint analysis of flow simulation performance and consistency of internal fluxes at different time steps provides guidance for identifying the model components that should be improved. Our analysis indicates that the baseline model structure must be modified at sub-daily time steps to ensure the consistency and realism of the modelled fluxes. For the baseline model improvement, particular attention is devoted to the interception component, whose output flux showed the strongest sensitivity to the modelling time step.
The dependency of the optimal model complexity on time step is also analysed. References: Perrin, C., Michel, C., Andréassian, V., 2003. Improvement of a parsimonious model for streamflow simulation. Journal of Hydrology, 279(1-4): 275-289. DOI:10.1016/S0022-1694(03)00225-7
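The sensitivity of internal fluxes to the modelling time step can be illustrated with a toy model. This is a sketch, not GR4J: a single linear reservoir dS/dt = P − kS stepped with explicit Euler at daily versus 6-minute resolution, with all parameter values illustrative.

```python
import math

def run_reservoir(precip_per_day, k_per_day, dt_days, n_days, s0=0.0):
    """Total outflow from dS/dt = P - k*S, explicit Euler with step dt_days."""
    steps = int(round(n_days / dt_days))
    s, outflow = s0, 0.0
    for _ in range(steps):
        q = k_per_day * s * dt_days        # outflow flux over this step
        s += precip_per_day * dt_days - q  # mass-conservative storage update
        outflow += q
    return outflow

def analytic_outflow(p, k, t):
    """Exact total outflow for constant P and S(0)=0: p*t - S(t)."""
    return p * t - (p / k) * (1.0 - math.exp(-k * t))

coarse = run_reservoir(5.0, 0.5, 1.0, 30.0)         # daily step
fine = run_reservoir(5.0, 0.5, 1.0 / 240.0, 30.0)   # 6-minute step
exact = analytic_outflow(5.0, 0.5, 30.0)
```

Mass is conserved by construction at every step, so the coarse run's flux error comes purely from time discretisation, mirroring how fast components such as interception can drift with the modelling time step in a full model.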

  7. Considerations for the independent reaction times and step-by-step methods for radiation chemistry simulations

    NASA Astrophysics Data System (ADS)

    Plante, Ianik; Devroye, Luc

    2017-10-01

    Ionizing radiation interacts with the water molecules of the tissues mostly by ionizations and excitations, which result in the formation of the radiation track structure and the creation of radiolytic species such as H•, •OH, H2, H2O2, and e-aq. After their creation, these species diffuse and may chemically react with neighboring species and with the molecules of the medium. Radiation chemistry is therefore of great importance in radiation biology. As the chemical species are not distributed homogeneously, conventional models of homogeneous reactions cannot completely describe the reaction kinetics of the particles. In practice, many simulations of radiation chemistry use the Independent Reaction Time (IRT) method, a very fast technique for calculating radiochemical yields that does not, however, calculate the positions of the radiolytic species as a function of time. Step-by-step (SBS) methods, which are able to provide such information, have been used only sparsely because they are computationally expensive. Recent improvements in computer performance now allow the regular use of the SBS method in radiation chemistry. The SBS and IRT methods are both based on the Green's functions of the diffusion equation (GFDE). In this paper, several sampling algorithms for the GFDE and for the IRT method are presented. We show that the IRT and SBS methods are exactly equivalent for 2-particle systems for diffusion and partially diffusion-controlled reactions between non-interacting particles. We also show that the results obtained with the SBS simulation method with periodic boundary conditions agree with the predictions of classical reaction kinetics theory, which is an important step towards using this method for modelling biochemical networks and metabolic pathways involved in oxidative stress. Finally, the first simulation results obtained with the code RITRACKS (Relativistic Ion Tracks) are presented.

  8. Exploring image data assimilation in the prospect of high-resolution satellite oceanic observations

    NASA Astrophysics Data System (ADS)

    Durán Moro, Marina; Brankart, Jean-Michel; Brasseur, Pierre; Verron, Jacques

    2017-07-01

    Satellite sensors increasingly provide high-resolution (HR) observations of the ocean. They supply observations of sea surface height (SSH) and of tracers of the dynamics such as sea surface salinity (SSS) and sea surface temperature (SST). In particular, the Surface Water and Ocean Topography (SWOT) mission will provide measurements of the surface ocean topography at very high resolution, delivering unprecedented information on meso-scale and submeso-scale dynamics. This study investigates the feasibility of using these measurements to reconstruct meso-scale features simulated by numerical models, in particular along the vertical dimension. A methodology to reconstruct three-dimensional (3D) multivariate meso-scale scenes is developed using a HR numerical model of the Solomon Sea region. An inverse problem is defined in the framework of a twin experiment in which synthetic observations are used. A true state is chosen among the 3D multivariate states and is considered as the reference state. In order to correct a first guess of this true state, a two-step analysis is carried out. A probability distribution of the first guess is defined and updated at each step of the analysis: (i) the first step applies the analysis scheme of a reduced-order Kalman filter to update the first-guess probability distribution using the SSH observation; (ii) the second step minimizes a cost function using observations of HR image structure, and a new probability distribution is estimated. The analysis is extended to the vertical dimension using 3D multivariate empirical orthogonal functions (EOFs), and the probabilistic approach allows the probability distribution to be updated through the two-step analysis. Experiments show that the proposed technique succeeds in correcting a multivariate state using meso-scale and submeso-scale information contained in HR SSH and image structure observations.
It also demonstrates how the surface information can be used to reconstruct the ocean state below the surface.

  9. Construction of a 3D structural model based on balanced cross sections and borehole data to create a foundation for further geological and hydrological simulations

    NASA Astrophysics Data System (ADS)

    Donndorf, St.; Malz, A.; Kley, J.

    2012-04-01

    Cross section balancing is a generally accepted method for studying fault zone geometries. We show a method for the construction of structural 3D models of complex fault zones using a combination of gOcad modelling and balanced cross sections. In this work, a 3D model of the Schlotheim graben in the Thuringian basin was created from serial, parallel cross sections and existing borehole data. The Thuringian Basin was originally a part of the North German Basin, from which it was separated by the Harz uplift in the Late Cretaceous. It comprises several parallel NW-trending inversion structures. The Schlotheim graben is one example of these inverted graben zones, whose structure poses special challenges to 3D modelling. The fault zone extends 30 km in the NW-SE direction and 1 km in the NE-SW direction. This project was split into two parts: data management and model building. To manage the fundamental data, a central database was created in ESRI's ArcGIS, and a scripting interface was developed to handle the data exchange between the different steps of modelling. The first step is the pre-processing of the base data in ArcGIS, followed by cross section balancing with Midland Valley's Move software and finally the construction of the 3D model in Paradigm's gOcad. With the specific aim of constructing a 3D model based on cross sections, the functionality of the gOcad software had to be extended. These extensions include pre-processing functions to create a simplified and usable database for gOcad, as well as construction functions to create surfaces based on linearly distributed data and processing functions to create the 3D model from different surfaces. In order to use the model for further geological and hydrological simulations, special requirements apply to the surface properties. The first requirement is a quality mesh, which contains triangles with maximized internal angles. To achieve that, an external meshing tool was included in gOcad.
The second characteristic is that intersecting lines between two surfaces must be included in both surfaces and share nodes with them. To finish the modelling process 3D balancing was performed to further improve the model quality.

  10. Convection Regularization of High Wavenumbers in Turbulence and Shocks

    DTIC Science & Technology

    2011-07-31

    dynamics of particles that adhere to one another upon collision and has been studied as a simple cosmological model for describing the nonlinear formation of...solution we mean a solution to the Cauchy problem in the following sense. Definition 5.1. A function u : R × [0, T] → R^N is a weak solution of the...step 2 the limit function in the α → 0 limit is shown to satisfy the definition of a weak solution for the Cauchy problem. Without loss of generality

  11. Hydrological model parameter dimensionality is a weak measure of prediction uncertainty

    NASA Astrophysics Data System (ADS)

    Pande, S.; Arkesteijn, L.; Savenije, H.; Bastidas, L. A.

    2015-04-01

    This paper shows that the instability of a hydrological system representation in response to different pieces of information, and the associated prediction uncertainty, is a function of model complexity. After demonstrating the connection between unstable model representation and model complexity, complexity is analyzed in a step-by-step manner. This is done by measuring differences between simulations of a model under different realizations of input forcings. Algorithms are then suggested to estimate model complexity. Model complexities of two model structures, SAC-SMA (Sacramento Soil Moisture Accounting) and its simplified version SIXPAR (Six Parameter Model), are computed on resampled input data sets from basins that span the continental US. The model complexities for SIXPAR are estimated for various parameter ranges. It is shown that the complexity of SIXPAR increases with lower storage capacity and/or higher recession coefficients. Thus it is argued that a conceptually simple model structure, such as SIXPAR, can be more complex than an intuitively more complex model structure, such as SAC-SMA, for certain parameter ranges. We therefore contend that the magnitudes of feasible model parameters influence the complexity of the model selection problem just as parameter dimensionality (number of parameters) does, and that parameter dimensionality is an incomplete indicator of the stability of hydrological model selection and prediction problems.

  12. Evaluation of interpolation methods for TG-43 dosimetric parameters based on comparison with Monte Carlo data for high-energy brachytherapy sources.

    PubMed

    Pujades-Claumarchirant, Ma Carmen; Granero, Domingo; Perez-Calatayud, Jose; Ballester, Facundo; Melhus, Christopher; Rivard, Mark

    2010-03-01

    The aim of this work was to determine dose distributions for high-energy brachytherapy sources at spatial locations not included in the radial dose function g_L(r) and 2D anisotropy function F(r,θ) table entries for radial distance r and polar angle θ. The objectives of this study are as follows: 1) to evaluate interpolation methods in order to accurately derive g_L(r) and F(r,θ) from the reported data; 2) to determine the minimum number of entries in g_L(r) and F(r,θ) that allow reproduction of dose distributions with sufficient accuracy. Four high-energy photon-emitting brachytherapy sources were studied: 60Co model Co0.A86, 137Cs model CSM-3, 192Ir model Ir2.A85-2, and a hypothetical 169Yb model. The mesh used for r was: 0.25, 0.5, 0.75, 1, 1.5, 2-8 (integer steps) and 10 cm. Four different angular steps were evaluated for F(r,θ): 1°, 2°, 5° and 10°. Linear-linear and logarithmic-linear interpolation was evaluated for g_L(r). Linear-linear interpolation was used to obtain F(r,θ) with resolution of 0.05 cm and 1°. Results were compared with values obtained from the Monte Carlo (MC) calculations for the four sources with the same grid. Linear interpolation of g_L(r) provided differences ≤ 0.5% compared to MC for all four sources. Bilinear interpolation of F(r,θ) using 1° and 2° angular steps resulted in agreement ≤ 0.5% with MC for 60Co, 192Ir, and 169Yb, while 137Cs agreement was ≤ 1.5% for θ < 15°. The radial mesh studied was adequate for interpolating g_L(r) for high-energy brachytherapy sources, and was similar to commonly found examples in the published literature. For F(r,θ) close to the source longitudinal axis, polar angle step sizes of 1°-2° were sufficient to provide 2% accuracy for all sources.
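The interpolation schemes compared above can be sketched as follows. The tabulated g_L(r) and F(r,θ) values here are synthetic stand-ins (smooth exponential/cosine shapes), not published consensus data for any source; only the radial mesh follows the abstract.

```python
import math
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Radial mesh from the abstract; gL values are an illustrative stand-in
r_mesh = np.array([0.25, 0.5, 0.75, 1, 1.5, 2, 3, 4, 5, 6, 7, 8, 10], dtype=float)
gL = np.exp(-0.12 * r_mesh)

def gL_linear(r):
    """Linear-linear interpolation of g_L(r)."""
    return np.interp(r, r_mesh, gL)

def gL_loglinear(r):
    """Logarithmic-linear interpolation: linear in log g_L versus r."""
    return np.exp(np.interp(r, r_mesh, np.log(gL)))

# Bilinear interpolation of a synthetic anisotropy table on a 2-degree mesh
theta = np.arange(0.0, 181.0, 2.0)
F = 1.0 - 0.3 * np.cos(np.radians(theta))[None, :] * np.exp(-0.05 * r_mesh)[:, None]
F_bilinear = RegularGridInterpolator((r_mesh, theta), F)
```

For the synthetic exponential g_L here, log-linear interpolation is exact between mesh points while linear-linear slightly overestimates, the same qualitative difference that motivates benchmarking both schemes against MC grids.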

  13. Gene Function Hypotheses for the Campylobacter jejuni Glycome Generated by a Logic-Based Approach

    PubMed Central

    Sternberg, Michael J.E.; Tamaddoni-Nezhad, Alireza; Lesk, Victor I.; Kay, Emily; Hitchen, Paul G.; Cootes, Adrian; van Alphen, Lieke B.; Lamoureux, Marc P.; Jarrell, Harold C.; Rawlings, Christopher J.; Soo, Evelyn C.; Szymanski, Christine M.; Dell, Anne; Wren, Brendan W.; Muggleton, Stephen H.

    2013-01-01

    Increasingly, experimental data on biological systems are obtained from several sources and computational approaches are required to integrate this information and derive models for the function of the system. Here, we demonstrate the power of a logic-based machine learning approach to propose hypotheses for gene function integrating information from two diverse experimental approaches. Specifically, we use inductive logic programming that automatically proposes hypotheses explaining the empirical data with respect to logically encoded background knowledge. We study the capsular polysaccharide biosynthetic pathway of the major human gastrointestinal pathogen Campylobacter jejuni. We consider several key steps in the formation of capsular polysaccharide consisting of 15 genes of which 8 have assigned function, and we explore the extent to which functions can be hypothesised for the remaining 7. Two sources of experimental data provide the information for learning—the results of knockout experiments on the genes involved in capsule formation and the absence/presence of capsule genes in a multitude of strains of different serotypes. The machine learning uses the pathway structure as background knowledge. We propose assignments of specific genes to five previously unassigned reaction steps. For four of these steps, there was an unambiguous optimal assignment of gene to reaction, and to the fifth, there were three candidate genes. Several of these assignments were consistent with additional experimental results. We therefore show that the logic-based methodology provides a robust strategy to integrate results from different experimental approaches and propose hypotheses for the behaviour of a biological system. PMID:23103756

  15. Random regression analyses using B-splines functions to model growth from birth to adult age in Canchim cattle.

    PubMed

    Baldi, F; Alencar, M M; Albuquerque, L G

    2010-12-01

    The objective of this work was to estimate covariance functions using random regression models on B-spline functions of animal age, for weights from birth to adult age in Canchim cattle. Data comprised 49,011 records on 2435 females. The model of analysis included fixed effects of contemporary groups, age of dam as a quadratic covariable and the population mean trend, taken into account by a cubic regression on orthogonal polynomials of animal age. Residual variances were modelled through a step function with four classes. The direct and maternal additive genetic effects, and animal and maternal permanent environmental effects, were included as random effects in the model. A total of seventeen analyses, considering linear, quadratic and cubic B-spline functions and up to seven knots, were carried out. B-spline functions of the same order were considered for all random effects. Random regression models on B-spline functions were compared to a random regression model on Legendre polynomials and to a multitrait model. Results from the different models of analysis were compared using the REML form of the Akaike information criterion and Schwarz's Bayesian information criterion. In addition, the variance components and genetic parameters estimated for each random regression model were also used as criteria to choose the most adequate model to describe the covariance structure of the data. A model fitting quadratic B-splines, with four knots or three segments for the direct additive genetic effect and the animal permanent environmental effect, and two knots for the maternal additive genetic effect and the maternal permanent environmental effect, was the most adequate to describe the covariance structure of the data. Random regression models using B-spline functions as base functions fitted the data better than Legendre polynomials, especially at mature ages, but a higher number of parameters needs to be estimated with B-spline functions. © 2010 Blackwell Verlag GmbH.

  16. Functional Response and Matrix Population Model of Podisus nigrispinus (Dallas, 1851) (Hemiptera: Pentatomidae) fed on Chrysomya putoria (Wiedemann, 1818) (Diptera: Calliphoridae) as Alternative Prey.

    PubMed

    Botteon, V W; Neves, J A; Godoy, W A C

    2017-04-01

    Among the predators with high potential for use in biological control, the species of the genus Podisus (Hemiptera: Pentatomidae) have received special attention for laboratory rearing, since they feed on different agricultural and forestry pest insects. However, the type of diet offered to insects in the laboratory may affect the viability of populations, expressed essentially by demographic parameters such as survival and fecundity. This study assessed demographic and developmental aspects in experimental populations of Podisus nigrispinus (Dallas, 1851) fed on larvae of Chrysomya putoria (Wiedemann, 1818) (Diptera: Calliphoridae) as an alternative prey. The demographic parameters fecundity and survival were investigated across the life stages of P. nigrispinus with ecological modeling, by applying the Leslie matrix population model and producing histograms of life stages in successive time steps. The functional response of P. nigrispinus was also investigated at seven densities of C. putoria third-instar larvae at 24 and 48 h. The survival of predators that reached adulthood was 65% and the development time from egg to adult was 23.15 days. The predator showed a type III functional response for consumption of C. putoria at 24 and 48 h. The Leslie-matrix simulation of the age structure predicted persistence of the predator population over successive time steps, and the prey proved to be feasible for use in rearing and maintenance of P. nigrispinus in the laboratory.
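The Leslie-matrix projection described above can be sketched in a few lines. The fecundity and stage-survival values below are illustrative placeholders, not the demographic parameters estimated for P. nigrispinus.

```python
import numpy as np

# Three-stage Leslie matrix: top row holds fecundities, sub-diagonal holds
# the probabilities of advancing to the next stage (values illustrative only).
fecundity = np.array([0.0, 0.0, 8.0])   # offspring per individual in each stage
survival = np.array([0.65, 0.80])       # probability of advancing one stage

L = np.zeros((3, 3))
L[0, :] = fecundity                     # reproduction into the first stage
L[1, 0] = survival[0]
L[2, 1] = survival[1]

def project(n0, steps):
    """Iterate n_{t+1} = L @ n_t, returning the stage vector at every time step."""
    out = [np.asarray(n0, dtype=float)]
    for _ in range(steps):
        out.append(L @ out[-1])
    return np.array(out)

history = project([100, 0, 0], 10)              # stage histograms per time step
growth_rate = max(abs(np.linalg.eigvals(L)))    # asymptotic growth rate (lambda)
```

A dominant eigenvalue above 1 corresponds to the persistence of the predator population reported above; plotting each column of `history` reproduces the stage histograms over successive time steps.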

  17. Chemical silicon surface modification and bioreceptor attachment to develop competitive integrated photonic biosensors.

    PubMed

    Escorihuela, Jorge; Bañuls, María José; García Castelló, Javier; Toccafondo, Veronica; García-Rupérez, Jaime; Puchades, Rosa; Maquieira, Ángel

    2012-12-01

    Methodology for the functionalization of silicon-based materials employed for the development of photonic label-free nanobiosensors is reported. The studied functionalization based on organosilane chemistry allowed the direct attachment of biomolecules in a single step, maintaining their bioavailability. Using this immobilization approach in probe microarrays, successful specific detection of bacterial DNA is achieved, reaching hybridization sensitivities of 10 pM. The utility of the immobilization approach for the functionalization of label-free nanobiosensors based on photonic crystals and ring resonators was demonstrated using bovine serum albumin (BSA)/anti-BSA as a model system.

  18. Crosstalk Cancellation for a Simultaneous Phase Shifting Interferometer

    NASA Technical Reports Server (NTRS)

    Olczak, Eugene (Inventor)

    2014-01-01

    A method of minimizing fringe print-through in a phase-shifting interferometer, includes the steps of: (a) determining multiple transfer functions of pixels in the phase-shifting interferometer; (b) computing a crosstalk term for each transfer function; and (c) displaying, to a user, a phase-difference map using the crosstalk terms computed in step (b). Determining a transfer function in step (a) includes measuring intensities of a reference beam and a test beam at the pixels, and measuring an optical path difference between the reference beam and the test beam at the pixels. Computing crosstalk terms in step (b) includes computing an N-dimensional vector, where N corresponds to the number of transfer functions, and the N-dimensional vector is obtained by minimizing a variance of a modulation function in phase shifted images.

  19. Conservative algorithms for non-Maxwellian plasma kinetics

    DOE PAGES

    Le, Hai P.; Cambier, Jean -Luc

    2017-12-08

Here, we present a numerical model and a set of conservative algorithms for non-Maxwellian plasma kinetics with inelastic collisions. These algorithms self-consistently solve for the time evolution of an isotropic electron energy distribution function interacting with an atomic state distribution function of an arbitrary number of levels through collisional excitation, deexcitation, as well as ionization and recombination. Electron-electron collisions, responsible for thermalization of the electron distribution, are also included in the model. The proposed algorithms guarantee mass/charge and energy conservation in a single step, and are applied to the case of non-uniform gridding of the energy axis in the phase space of the electron distribution function. Numerical test cases are shown to demonstrate the accuracy of the method and its conservation properties.

  20. Modeling non-linear growth responses to temperature and hydrology in wetland trees

    NASA Astrophysics Data System (ADS)

    Keim, R.; Allen, S. T.

    2016-12-01

Growth responses of wetland trees to flooding and climate variations are difficult to model because they depend on multiple, apparently interacting factors, but are a critical link in hydrological control of wetland carbon budgets. To understand tree growth responses to hydrological forcing more generally, we modeled non-linear responses of tree-ring growth to flooding and climate at sub-annual time steps, using Vaganov-Shashkin response functions. We calibrated the model to six baldcypress tree-ring chronologies from two hydrologically distinct sites in southern Louisiana, and tested several hypotheses about plasticity in wetland tree responses to interacting environmental variables. The model outperformed traditional multiple linear regression. More importantly, optimized response parameters were generally similar among sites with varying hydrological conditions, suggesting the functions generalize. Model forms that included interacting responses to multiple forcing factors were more effective than single response functions, indicating that the principle of a single limiting factor does not hold in wetlands and that both climatic and hydrological variables must be considered in predicting responses to hydrological or climate change.
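    The contrast between a single-limiting-factor model and an interacting-response model can be sketched as follows; the piecewise-linear responses and thresholds are hypothetical illustrations, not the calibrated Vaganov-Shashkin functions.

    ```python
    def ramp(x, lo, hi):
        """Piecewise-linear response rising from 0 at `lo` to 1 at `hi`."""
        return max(0.0, min(1.0, (x - lo) / (hi - lo)))

    def growth_single_limit(f_temp, f_water):
        """Classical single-limiting-factor (Liebig) growth response."""
        return min(f_temp, f_water)

    def growth_interacting(f_temp, f_water):
        """Multiplicative interaction: both factors always modulate growth."""
        return f_temp * f_water

    f_t = ramp(18.0, 5.0, 25.0)  # temperature response, hypothetical thresholds
    f_w = ramp(0.6, 0.2, 1.0)    # moisture response, hypothetical thresholds
    ```

    Under the interacting form, a sub-optimal second factor still reduces growth even when it is not the most limiting one, which is the behavior the calibration favored.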

  1. R package PRIMsrc: Bump Hunting by Patient Rule Induction Method for Survival, Regression and Classification

    PubMed Central

    Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J. Sunil

    2015-01-01

    PRIMsrc is a novel implementation of a non-parametric bump hunting procedure, based on the Patient Rule Induction Method (PRIM), offering a unified treatment of outcome variables, including censored time-to-event (Survival), continuous (Regression) and discrete (Classification) responses. To fit the model, it uses a recursive peeling procedure with specific peeling criteria and stopping rules depending on the response. To validate the model, it provides an objective function based on prediction-error or other specific statistic, as well as two alternative cross-validation techniques, adapted to the task of decision-rule making and estimation in the three types of settings. PRIMsrc comes as an open source R package, including at this point: (i) a main function for fitting a Survival Bump Hunting model with various options allowing cross-validated model selection to control model size (#covariates) and model complexity (#peeling steps) and generation of cross-validated end-point estimates; (ii) parallel computing; (iii) various S3-generic and specific plotting functions for data visualization, diagnostic, prediction, summary and display of results. It is available on CRAN and GitHub. PMID:26798326
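    The core of the recursive peeling procedure can be sketched for a single covariate as below; `peel_once` and the peeling fraction `alpha` are our own simplified illustration of PRIM-style peeling, not the PRIMsrc implementation.

    ```python
    def peel_once(covariate, response, alpha=0.1):
        """One peeling step for a single covariate: drop the alpha fraction
        from whichever end of the box raises the mean response more."""
        n = len(covariate)
        k = max(1, int(alpha * n))
        order = sorted(range(n), key=lambda i: covariate[i])
        candidates = (order[k:], order[:-k])  # peel low end / peel high end
        mean = lambda idx: sum(response[i] for i in idx) / len(idx)
        keep = max(candidates, key=mean)
        return ([covariate[i] for i in keep], [response[i] for i in keep])

    x = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
    y = [0, 0, 1, 1, 1, 1, 1, 1, 0, 0]  # a "bump" in the middle of x
    x2, y2 = peel_once(x, y, alpha=0.1)
    ```

    Repeating this step shrinks the box toward the high-response bump; the number of peeling steps is the model-complexity parameter that PRIMsrc selects by cross-validation.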

  2. Flight Dynamic Model Exchange using XML

    NASA Technical Reports Server (NTRS)

    Jackson, E. Bruce; Hildreth, Bruce L.

    2002-01-01

    The AIAA Modeling and Simulation Technical Committee has worked for several years to develop a standard by which the information needed to develop physics-based models of aircraft can be specified. The purpose of this standard is to provide a well-defined set of information, definitions, data tables and axis systems so that cooperating organizations can transfer a model from one simulation facility to another with maximum efficiency. This paper proposes using an application of the eXtensible Markup Language (XML) to implement the AIAA simulation standard. The motivation and justification for using a standard such as XML is discussed. Necessary data elements to be supported are outlined. An example of an aerodynamic model as an XML file is given. This example includes definition of independent and dependent variables for function tables, definition of key variables used to define the model, and axis systems used. The final steps necessary for implementation of the standard are presented. Software to take an XML-defined model and import/export it to/from a given simulation facility is discussed, but not demonstrated. That would be the next step in final implementation of standards for physics-based aircraft dynamic models.

  3. Velocity and stress autocorrelation decay in isothermal dissipative particle dynamics

    NASA Astrophysics Data System (ADS)

    Chaudhri, Anuj; Lukes, Jennifer R.

    2010-02-01

The velocity and stress autocorrelation decay in a dissipative particle dynamics ideal fluid model is analyzed in this paper. The autocorrelation functions are calculated at three different friction parameters and three different time steps using the well-known Groot/Warren algorithm and newer algorithms including self-consistent leap-frog, self-consistent velocity Verlet and Shardlow first and second order integrators. At low friction values, the velocity autocorrelation function decays exponentially at short times, shows slower-than-exponential decay at intermediate times, and approaches zero at long times for all five integrators. As friction value increases, the deviation from exponential behavior occurs earlier and is more pronounced. At small time steps, all the integrators give identical decay profiles. As time step increases, there are qualitative and quantitative differences between the integrators. The stress correlation behavior is markedly different for the algorithms. The self-consistent velocity Verlet and the Shardlow algorithms show very similar stress autocorrelation decay with change in friction parameter, whereas the Groot/Warren and leap-frog schemes show variations at higher friction factors. Diffusion coefficients and shear viscosities are calculated using Green-Kubo integration of the velocity and stress autocorrelation functions. The diffusion coefficients match well-known theoretical results in the low-friction limit. Although the stress autocorrelation function is different for each integrator, fluctuates rapidly, and gives poor statistics for most of the cases, the calculated shear viscosities still fall within the range of theoretical predictions and nonequilibrium studies.
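    A minimal sketch of the Green-Kubo step, computing a velocity autocorrelation function and integrating it to a diffusion coefficient; the function names are our own, and the synthetic exponential decay stands in for actual DPD trajectory data.

    ```python
    import math

    def vacf(vels, max_lag):
        """Velocity autocorrelation C(lag) averaged over time origins (1-D)."""
        n = len(vels)
        return [sum(vels[i] * vels[i + lag] for i in range(n - lag)) / (n - lag)
                for lag in range(max_lag)]

    def green_kubo_diffusion(corr, dt):
        """Green-Kubo: D = integral of C(t) dt (trapezoidal rule, 1-D)."""
        return dt * (corr[0] / 2.0 + sum(corr[1:-1]) + corr[-1] / 2.0)

    # Synthetic check: an exponential VACF C(t) = c0 * exp(-g * t)
    # integrates to c0 / g, which the numerical estimate should recover.
    c0, g, dt = 1.0, 2.0, 0.001
    corr = [c0 * math.exp(-g * k * dt) for k in range(20000)]
    D = green_kubo_diffusion(corr, dt)
    ```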

  4. Gait cycle analysis: parameters sensitive for functional evaluation of peripheral nerve recovery in rat hind limbs.

    PubMed

    Rui, Jing; Runge, M Brett; Spinner, Robert J; Yaszemski, Michael J; Windebank, Anthony J; Wang, Huan

    2014-10-01

Video-assisted gait kinetics analysis has been a sensitive method to assess rat sciatic nerve function after injury and repair. However, in conduit repair of sciatic nerve defects, previously reported kinematic measurements failed to be a sensitive indicator because of the inferior recovery and inevitable joint contracture. This study aimed to explore the role of physiotherapy in mitigating joint contracture and to seek motion analysis indices that can sensitively reflect motor function. Data were collected from 26 rats that underwent sciatic nerve transection and conduit repair. Regular postoperative physiotherapy was applied. Parameters regarding step length, phase duration, and ankle angle were acquired and analyzed from video recording of gait kinetics preoperatively and at regular postoperative intervals. Stride length ratio (step length of uninjured foot/step length of injured foot), percent swing of the normal paw (percentage of the total stride duration when the uninjured paw is in the air), propulsion angle (toe-off angle minus midstance angle), and clearance angle (ankle angle change from toe off to midswing) decreased postoperatively compared with baseline values. The gradual recovery of these measurements had a strong correlation with the post-nerve repair time course. Ankle joint contracture persisted despite rigorous physiotherapy. Parameters acquired from a 2-dimensional motion analysis system, that is, stride length ratio, percent swing of the normal paw, propulsion angle, and clearance angle, could sensitively reflect nerve function impairment and recovery in the rat sciatic nerve conduit repair model despite the existence of joint contractures.
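    The derived gait parameters reduce to simple arithmetic on the measured quantities; the sketch below encodes the definitions given in the abstract, with hypothetical input values.

    ```python
    def stride_length_ratio(step_uninjured, step_injured):
        """Step length of uninjured foot divided by step length of injured foot."""
        return step_uninjured / step_injured

    def propulsion_angle(toe_off_angle, midstance_angle):
        """Toe-off ankle angle minus midstance ankle angle (degrees)."""
        return toe_off_angle - midstance_angle

    def clearance_angle(midswing_angle, toe_off_angle):
        """Ankle angle change from toe off to midswing (degrees)."""
        return midswing_angle - toe_off_angle

    # Hypothetical measurements (cm and degrees), not values from the study.
    slr = stride_length_ratio(12.0, 10.0)
    prop = propulsion_angle(65.0, 40.0)
    clear = clearance_angle(75.0, 55.0)
    ```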

  5. Modelling nematode movement using time-fractional dynamics.

    PubMed

    Hapca, Simona; Crawford, John W; MacMillan, Keith; Wilson, Mike J; Young, Iain M

    2007-09-07

    We use a correlated random walk model in two dimensions to simulate the movement of the slug parasitic nematode Phasmarhabditis hermaphrodita in homogeneous environments. The model incorporates the observed statistical distributions of turning angle and speed derived from time-lapse studies of individual nematode trails. We identify strong temporal correlations between the turning angles and speed that preclude the case of a simple random walk in which successive steps are independent. These correlated random walks are appropriately modelled using an anomalous diffusion model, more precisely using a fractional sub-diffusion model for which the associated stochastic process is characterised by strong memory effects in the probability density function.
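    A minimal correlated random walk, in which each heading depends on the previous one, can be sketched as follows; the Gaussian turning-angle and speed distributions are illustrative stand-ins for the fitted nematode distributions.

    ```python
    import math, random

    def correlated_random_walk(n_steps, turn_sd=0.3, mean_speed=1.0, seed=1):
        """2-D correlated random walk: each heading equals the previous
        heading plus a Gaussian turning angle, so successive steps are
        not independent. Distributions are illustrative, not the fitted
        nematode turning-angle and speed distributions."""
        rng = random.Random(seed)
        x = y = 0.0
        heading = 0.0
        path = [(x, y)]
        for _ in range(n_steps):
            heading += rng.gauss(0.0, turn_sd)       # correlated turning angle
            speed = abs(rng.gauss(mean_speed, 0.2))  # non-negative step length
            x += speed * math.cos(heading)
            y += speed * math.sin(heading)
            path.append((x, y))
        return path

    path = correlated_random_walk(200)
    ```

    Introducing correlations between successive turning angles and speeds, as observed in the trails, is what pushes the ensemble behavior away from simple diffusion toward the sub-diffusive regime.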

  6. A 3D Kinematic Measurement of Knee Prosthesis Using X-ray Projection Images

    NASA Astrophysics Data System (ADS)

    Hirokawa, Shunji; Ariyoshi, Shogo; Hossain, Mohammad Abrar

We have developed a technique for estimating 3D motion of knee prosthesis from its 2D perspective projections. As Fourier descriptors were used for compact representation of library templates and contours extracted from the prosthetic X-ray images, the entire silhouette contour of each prosthetic component was required. This caused a problem: the algorithm did not function when the silhouettes of the tibial and femoral components overlapped. We devised a novel two-step method to overcome this. First, the part of the silhouette contour missing due to overlap was interpolated using a free-form curve such as a Bezier curve, and a first-step position/orientation estimation was performed. Next, a clipping window was set in the projective coordinate system to separate the overlapped silhouette drawn using the first-step estimates. A localized library whose templates were clipped in shape was then prepared, and the second-step estimation was performed. Computer model simulation demonstrated sufficient position/orientation estimation accuracy even for overlapped silhouettes, equivalent to that without overlap.

  7. Stepped care versus face-to-face cognitive behavior therapy for panic disorder and social anxiety disorder: Predictors and moderators of outcome.

    PubMed

    Haug, Thomas; Nordgreen, Tine; Öst, Lars-Göran; Kvale, Gerd; Tangen, Tone; Andersson, Gerhard; Carlbring, Per; Heiervang, Einar R; Havik, Odd E

    2015-08-01

To investigate predictors and moderators of treatment outcome by comparing immediate face-to-face cognitive behavioral therapy (FtF-CBT) to a Stepped Care treatment model comprising three steps: Psychoeducation, Internet-delivered CBT, and FtF-CBT for panic disorder (PD) and social anxiety disorder (SAD). Patients (N = 173) were recruited from nine public mental health out-patient clinics and randomized to immediate FtF-CBT or Stepped Care treatment. Characteristics related to social functioning, impairment from the anxiety disorder, and comorbidity were investigated as predictors and moderators by treatment format and diagnosis in multiple regression analyses. Lower social functioning, higher impairment from the anxiety disorder, and a comorbid cluster C personality disorder were associated with significantly less improvement, particularly among patients with PD. Furthermore, having a comorbid anxiety disorder was associated with a better treatment outcome among patients with PD but not patients with SAD. Patients with a comorbid depression had similar outcomes from the different treatments, but patients without comorbid depression had better outcomes from immediate FtF-CBT compared to guided self-help. In general, the same patient characteristics appear to be associated with the treatment outcome for CBT provided in low- and high-intensity formats when treated in public mental health care clinics. The findings suggest that patients with lower social functioning and higher impairment from their anxiety disorder benefit less from these treatments and may require more adapted and extensive treatment. CLINICALTRIALS.GOV: Identifier: NCT00619138. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Comparative Approaches to Understanding the Relation Between Aging and Physical Function

    PubMed Central

    Cesari, Matteo; Seals, Douglas R.; Shively, Carol A.; Carter, Christy S.

    2016-01-01

    Despite dedicated efforts to identify interventions to delay aging, most promising interventions yielding dramatic life-span extension in animal models of aging are often ineffective when translated to clinical trials. This may be due to differences in primary outcomes between species and difficulties in determining the optimal clinical trial paradigms for translation. Measures of physical function, including brief standardized testing batteries, are currently being proposed as biomarkers of aging in humans, are predictive of adverse health events, disability, and mortality, and are commonly used as functional outcomes for clinical trials. Motor outcomes are now being incorporated into preclinical testing, a positive step toward enhancing our ability to translate aging interventions to clinical trials. To further these efforts, we begin a discussion of physical function and disability assessment across species, with special emphasis on mice, rats, monkeys, and man. By understanding how physical function is assessed in humans, we can tailor measurements in animals to better model those outcomes to establish effective, standardized translational functional assessments with aging. PMID:25910845

  9. The uncertainty of crop yield projections is reduced by improved temperature response functions.

    PubMed

    Wang, Enli; Martre, Pierre; Zhao, Zhigan; Ewert, Frank; Maiorano, Andrea; Rötter, Reimund P; Kimball, Bruce A; Ottman, Michael J; Wall, Gerard W; White, Jeffrey W; Reynolds, Matthew P; Alderman, Phillip D; Aggarwal, Pramod K; Anothai, Jakarat; Basso, Bruno; Biernath, Christian; Cammarano, Davide; Challinor, Andrew J; De Sanctis, Giacomo; Doltra, Jordi; Fereres, Elias; Garcia-Vila, Margarita; Gayler, Sebastian; Hoogenboom, Gerrit; Hunt, Leslie A; Izaurralde, Roberto C; Jabloun, Mohamed; Jones, Curtis D; Kersebaum, Kurt C; Koehler, Ann-Kristin; Liu, Leilei; Müller, Christoph; Naresh Kumar, Soora; Nendel, Claas; O'Leary, Garry; Olesen, Jørgen E; Palosuo, Taru; Priesack, Eckart; Eyshi Rezaei, Ehsan; Ripoche, Dominique; Ruane, Alex C; Semenov, Mikhail A; Shcherbak, Iurii; Stöckle, Claudio; Stratonovitch, Pierre; Streck, Thilo; Supit, Iwan; Tao, Fulu; Thorburn, Peter; Waha, Katharina; Wallach, Daniel; Wang, Zhimin; Wolf, Joost; Zhu, Yan; Asseng, Senthold

    2017-07-17

    Increasing the accuracy of crop productivity estimates is a key element in planning adaptation strategies to ensure global food security under climate change. Process-based crop models are effective means to project climate impact on crop yield, but have large uncertainty in yield simulations. Here, we show that variations in the mathematical functions currently used to simulate temperature responses of physiological processes in 29 wheat models account for >50% of uncertainty in simulated grain yields for mean growing season temperatures from 14 °C to 33 °C. We derived a set of new temperature response functions that when substituted in four wheat models reduced the error in grain yield simulations across seven global sites with different temperature regimes by 19% to 50% (42% average). We anticipate the improved temperature responses to be a key step to improve modelling of crops under rising temperature and climate change, leading to higher skill of crop yield projections.
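    As background, one widely used beta-type temperature response is the Wang-Engel form, sketched below with illustrative wheat-like cardinal temperatures; these are not the new response functions derived in the study.

    ```python
    import math

    def wang_engel(T, t_min=0.0, t_opt=27.7, t_max=40.0):
        """Beta-type temperature response scaled to [0, 1] (Wang-Engel form).
        The cardinal temperatures are illustrative wheat-like defaults,
        not the calibrated values from this study."""
        if T <= t_min or T >= t_max:
            return 0.0
        a = math.log(2.0) / math.log((t_max - t_min) / (t_opt - t_min))
        x = (T - t_min) ** a
        xo = (t_opt - t_min) ** a
        return (2.0 * x * xo - x * x) / (xo * xo)

    resp_opt = wang_engel(27.7)   # response at the optimum temperature
    resp_cold = wang_engel(5.0)   # sub-optimal temperature
    ```

    Differences among the 29 wheat models largely come down to the shape of such functions between the cardinal temperatures, which is why harmonizing them reduced the spread in simulated yields.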

  10. The Uncertainty of Crop Yield Projections Is Reduced by Improved Temperature Response Functions

    NASA Technical Reports Server (NTRS)

Wang, Enli; Martre, Pierre; Zhao, Zhigan; Ewert, Frank; Maiorano, Andrea; Rotter, Reimund P.; Kimball, Bruce A.; Ottman, Michael J.; White, Jeffrey W.; Reynolds, Matthew P.; et al.

    2017-01-01

Increasing the accuracy of crop productivity estimates is a key element in planning adaptation strategies to ensure global food security under climate change. Process-based crop models are effective means to project climate impact on crop yield, but have large uncertainty in yield simulations. Here, we show that variations in the mathematical functions currently used to simulate temperature responses of physiological processes in 29 wheat models account for more than 50% of uncertainty in simulated grain yields for mean growing season temperatures from 14 °C to 33 °C. We derived a set of new temperature response functions that when substituted in four wheat models reduced the error in grain yield simulations across seven global sites with different temperature regimes by 19% to 50% (42% average). We anticipate the improved temperature responses to be a key step to improve modelling of crops under rising temperature and climate change, leading to higher skill of crop yield projections.

  11. Molecular mechanism for generation of antibody memory.

    PubMed

    Shivarov, Velizar; Shinkura, Reiko; Doi, Tomomitsu; Begum, Nasim A; Nagaoka, Hitoshi; Okazaki, Il-Mi; Ito, Satomi; Nonaka, Taichiro; Kinoshita, Kazuo; Honjo, Tasuku

    2009-03-12

Activation-induced cytidine deaminase (AID) is the essential enzyme inducing the DNA cleavage required for both somatic hypermutation and class switch recombination (CSR) of the immunoglobulin gene. We originally proposed the RNA-editing model for the mechanism of DNA cleavage by AID. We obtained evidence that fulfils three requirements for CSR by this model, namely (i) AID shuttling between nucleus and cytoplasm, (ii) de novo protein synthesis for CSR, and (iii) AID-RNA complex formation. The alternative hypothesis, designated as the DNA-deamination model, assumes that the in vitro DNA deamination activity of AID is representative of its physiological function in vivo. In this model, the resulting dU is removed by uracil DNA glycosylase (UNG) to generate an abasic site, followed by phosphodiester bond cleavage by AP endonuclease. We critically examined each of these provisional steps. We identified a cluster of mutants (H48A, L49A, R50A and N51A) that had particularly higher CSR activities than expected from their DNA deamination activities. The most striking was the N51A mutant that had no ability to deaminate DNA in vitro but retained approximately 50 per cent of the wild-type level of CSR activity. We also provide further evidence that UNG plays a non-canonical role in CSR, namely in the repair step of the DNA breaks. Taking these results together, we favour the RNA-editing model for the function of AID in CSR.

  12. A 3D particle Monte Carlo approach to studying nucleation

    NASA Astrophysics Data System (ADS)

    Köhn, Christoph; Enghoff, Martin Bødker; Svensmark, Henrik

    2018-06-01

    The nucleation of sulphuric acid molecules plays a key role in the formation of aerosols. We here present a three dimensional particle Monte Carlo model to study the growth of sulphuric acid clusters as well as its dependence on the ambient temperature and the initial particle density. We initiate a swarm of sulphuric acid-water clusters with a size of 0.329 nm with densities between 107 and 108 cm-3 at temperatures between 200 and 300 K and a relative humidity of 50%. After every time step, we update the position of particles as a function of size-dependent diffusion coefficients. If two particles encounter, we merge them and add their volumes and masses. Inversely, we check after every time step whether a polymer evaporates liberating a molecule. We present the spatial distribution as well as the size distribution calculated from individual clusters. We also calculate the nucleation rate of clusters with a radius of 0.85 nm as a function of time, initial particle density and temperature. The nucleation rates obtained from the presented model agree well with experimentally obtained values and those of a numerical model which serves as a benchmark of our code. In contrast to previous nucleation models, we here present for the first time a code capable of tracing individual particles and thus of capturing the physics related to the discrete nature of particles.
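    The merge step, which conserves total cluster volume, can be sketched in isolation as follows; the fixed merge probability and the bookkeeping are our own simplification of the full 3D encounter dynamics.

    ```python
    import random

    def coagulation_step(radii, p_merge, rng):
        """One illustrative Monte Carlo step: with probability p_merge,
        pick a random pair of clusters and merge them, adding volumes
        (the paper also moves particles in 3-D and merges on encounter;
        only the volume bookkeeping is kept here)."""
        if len(radii) < 2 or rng.random() >= p_merge:
            return radii
        i, j = rng.sample(range(len(radii)), 2)
        merged = (radii[i] ** 3 + radii[j] ** 3) ** (1.0 / 3.0)
        return [r for k, r in enumerate(radii) if k not in (i, j)] + [merged]

    rng = random.Random(42)
    radii = [0.329] * 100  # initial 0.329 nm clusters, as in the abstract
    for _ in range(500):
        radii = coagulation_step(radii, 0.5, rng)
    total_volume = sum(r ** 3 for r in radii)
    ```

    Tracking individual clusters this way, rather than a binned size distribution, is what lets the model capture the discrete nature of the particles.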

  13. Modelling the interaction between flooding events and economic growth

    NASA Astrophysics Data System (ADS)

    Grames, J.; Prskawetz, A.; Grass, D.; Blöschl, G.

    2015-06-01

Socio-hydrology describes the interaction between the socio-economy and water. Recent models analyze the interplay of community risk-coping culture, flooding damage and economic growth (Di Baldassarre et al., 2013; Viglione et al., 2014). These models descriptively explain the feedbacks between socio-economic development and natural disasters like floods. Contrary to these descriptive models, our approach develops an optimization model, where the intertemporal decision of an economic agent interacts with the hydrological system. In order to build this first economic growth model describing the interaction between the consumption and investment decisions of an economic agent and the occurrence of flooding events, we transform an existing descriptive stochastic model into a deterministic optimization model. The intermediate step is to formulate and simulate a descriptive deterministic model. We develop a periodic water function to approximate the former discrete stochastic time series of rainfall events. Because the exogenous periodic rainfall function makes the system non-autonomous, the long-term path of consumption and investment will be periodic.
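    A minimal periodic water function of the kind described, replacing a discrete stochastic rainfall series with a smooth periodic forcing, might look like this; the sinusoidal form and the parameters are assumptions for illustration.

    ```python
    import math

    def periodic_rainfall(t, mean=1.0, amplitude=0.5, period=1.0):
        """Smooth periodic water forcing standing in for a discrete
        stochastic rainfall series; form and parameters are illustrative."""
        return mean + amplitude * math.sin(2.0 * math.pi * t / period)

    # Averaging over one full period recovers the mean forcing.
    n = 1000
    avg = sum(periodic_rainfall(k / n) for k in range(n)) / n
    ```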

  14. A continuous damage model based on stepwise-stress creep rupture tests

    NASA Technical Reports Server (NTRS)

    Robinson, D. N.

    1985-01-01

A creep damage accumulation model is presented that makes use of the Kachanov damage rate concept with a provision accounting for damage that results from a variable stress history. This is accomplished through the introduction of an additional term in the Kachanov rate equation that is linear in the stress rate. Specification of the material functions and parameters in the model requires two types of tests constituting the data base: (1) standard constant-stress creep rupture tests, and (2) a sequence of two-step creep rupture tests.
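    The modified rate equation can be sketched as follows: the first term is the standard Kachanov form, and the coefficient B on the stress-rate term is our hypothetical notation for the additional linear term described in the abstract.

    ```latex
    % Kachanov creep damage rate with an added stress-rate term
    % (damage \omega, stress \sigma; A, n, B material parameters,
    %  B is hypothetical notation for the variable-stress provision):
    \frac{d\omega}{dt} = A\left(\frac{\sigma}{1-\omega}\right)^{n} + B\,\dot{\sigma}
    ```

    Under constant stress the stress-rate term vanishes and the model reduces to the classical Kachanov equation, which is why the two test types together suffice to identify the material functions.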

  15. Stepped care in the treatment of trichotillomania.

    PubMed

    Rogers, Kate; Banis, Maria; Falkenstein, Martha J; Malloy, Elizabeth J; McDonough, Lauren; Nelson, Samuel O; Rusch, Natalie; Haaga, David A F

    2014-04-01

    There are effective treatments of trichotillomania (TTM), but access to expert providers is limited. This study tested a stepped care model aimed at improving access. Participants were 60 (95% women, 75% Caucasian, 2% Hispanic) adults (M = 33.18 years) with TTM. They were randomly assigned to immediate versus waitlist (WL) conditions for Step 1 (10 weeks of web-based self-help via StopPulling.com). After Step 1, participants chose whether to engage in Step 2 (8 sessions of in-person habit reversal training [HRT]). In Step 1, the immediate condition had a small (d = .21) but significant advantage, relative to WL, in reducing TTM symptom ratings by interviewers (masked to experimental condition but not to assessment point); there were no differences in self-reported TTM symptoms, alopecia, functional impairment, or quality of life. Step 1 was more effective for those who used the site more often. Stepped care was highly acceptable: Motivation did not decrease during Step 1; treatment satisfaction was high, and 76% enrolled in Step 2. More symptomatic patients self-selected into HRT, and on average they improved significantly. Over one third (36%) made clinically significant improvement in self-reported TTM symptoms. Considering the entire stepped care program, participants significantly reduced symptoms, alopecia, and impairment, and increased quality of life. For quality of life and symptom severity, there was some relapse by 3-month follow-up. Stepped care is acceptable, and HRT was associated with improvement. Further work is needed to determine which patients with TTM can benefit from self-help and how to reduce relapse.

  16. On contact modelling in isogeometric analysis

    NASA Astrophysics Data System (ADS)

    Cardoso, R. P. R.; Adetoro, O. B.

    2017-11-01

IsoGeometric Analysis (IGA) has proved to be a reliable numerical tool for the simulation of structural behaviour and fluid mechanics. The main reasons for this popularity are essentially due to: (i) the possibility of using higher order polynomials for the basis functions; (ii) the high convergence rates possible to achieve; (iii) the possibility to operate directly on CAD geometry without the need to resort to a mesh of elements. The major drawback of IGA is the non-interpolatory characteristic of the basis functions, which adds a difficulty in handling essential boundary conditions and makes it particularly challenging for contact analysis. In this work, the IGA is expanded to include frictionless contact procedures for sheet metal forming analyses. Non-Uniform Rational B-Splines (NURBS) are used for the modelling of rigid tools as well as for the modelling of the deformable blank sheet. The contact methods developed are based on a two-step contact search scheme: in the first (global) step, a search algorithm allocates contact knots to potential contact faces; in the second (local) step, point inversion techniques are used to calculate the contact penetration gap. For completeness, elastoplastic procedures are also included for a proper description of the entire IGA of sheet metal forming processes.

  17. Modifications to the Conduit Flow Process Mode 2 for MODFLOW-2005

    USGS Publications Warehouse

    Reimann, T.; Birk, S.; Rehrl, C.; Shoemaker, W.B.

    2012-01-01

As a result of rock dissolution processes, karst aquifers exhibit highly conductive features such as caves and conduits. Within these structures, groundwater flow can become turbulent and therefore be described by nonlinear gradient functions. Some numerical groundwater flow models explicitly account for pipe hydraulics by coupling the continuum model with a pipe network that represents the conduit system. In contrast, the Conduit Flow Process Mode 2 (CFPM2) for MODFLOW-2005 approximates turbulent flow by reducing the hydraulic conductivity within the existing linear head gradient of the MODFLOW continuum model. This approach reduces the practical as well as numerical efforts for simulating turbulence. The original formulation was for large pore aquifers where the onset of turbulence is at low Reynolds numbers (1 to 100) and not for conduits or pipes. In addition, the existing code requires multiple time steps for convergence due to iterative adjustment of the hydraulic conductivity. Modifications to the existing CFPM2 were made by implementing a generalized power function with a user-defined exponent. This allows for matching turbulence in porous media or pipes and eliminates the time steps required for iterative adjustment of hydraulic conductivity. The modified CFPM2 successfully replicated simple benchmark test problems. © 2011 The Author(s). Ground Water © 2011, National Ground Water Association.
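    The idea of folding a nonlinear flux law into a linear-gradient solver by rescaling conductivity can be sketched as below; the functional form and exponent convention are illustrative assumptions, not the exact CFPM2 implementation.

    ```python
    def effective_conductivity(k_laminar, gradient, m=2.0):
        """Rescale hydraulic conductivity so a linear-gradient solver
        reproduces a nonlinear flux law q ~ i**(1/m); m=1 recovers
        laminar (Darcy) flow. Illustrative form, not the CFPM2 code."""
        i = abs(gradient)
        if i == 0.0 or m == 1.0:
            return k_laminar
        return k_laminar * i ** (1.0 / m - 1.0)

    def flux(k_laminar, gradient, m=2.0):
        """Flux computed with the linear law and the rescaled conductivity."""
        return -effective_conductivity(k_laminar, gradient, m) * gradient

    q_laminar = flux(1.0, 0.04, m=1.0)    # Darcy: q = -K * i
    q_turbulent = flux(1.0, 0.04, m=2.0)  # reproduces q = -K * i**0.5
    ```

    Because the rescaled conductivity is an explicit function of the gradient, no iterative adjustment over extra time steps is needed, which is the point of the generalized power-function modification.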

  18. The morphing of geographical features by Fourier transformation

    PubMed Central

    Liu, Pengcheng; Yu, Wenhao; Cheng, Xiaoqiang

    2018-01-01

    This paper presents a morphing model of vector geographical data based on Fourier transformation. This model involves three main steps. They are conversion from vector data to Fourier series, generation of intermediate function by combination of the two Fourier series concerning a large scale and a small scale, and reverse conversion from combination function to vector data. By mirror processing, the model can also be used for morphing of linear features. Experimental results show that this method is sensitive to scale variations and it can be used for vector map features’ continuous scale transformation. The efficiency of this model is linearly related to the point number of shape boundary and the interceptive value n of Fourier expansion. The effect of morphing by Fourier transformation is plausible and the efficiency of the algorithm is acceptable. PMID:29351344
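    The three steps (boundary to Fourier coefficients, blending, and inverse transform) can be sketched as follows; circles stand in for real feature outlines at two scales, and all function names are our own.

    ```python
    import cmath, math

    def fourier_coeffs(boundary, n_harmonics):
        """Low-order DFT coefficients of a closed boundary of complex points."""
        n = len(boundary)
        return [sum(z * cmath.exp(-2j * math.pi * k * t / n)
                    for t, z in enumerate(boundary)) / n
                for k in range(-n_harmonics, n_harmonics + 1)]

    def morph(coeffs_a, coeffs_b, w, n_points, n_harmonics):
        """Intermediate shape from a linear blend of two coefficient sets
        (w=0 reproduces shape A, w=1 reproduces shape B)."""
        blended = [(1 - w) * a + w * b for a, b in zip(coeffs_a, coeffs_b)]
        ks = range(-n_harmonics, n_harmonics + 1)
        return [sum(c * cmath.exp(2j * math.pi * k * t / n_points)
                    for c, k in zip(blended, ks))
                for t in range(n_points)]

    # Circles of radius 2 (large scale) and 1 (small scale) as stand-in
    # outlines; the halfway morph should be a circle of radius 1.5.
    n = 64
    circle = lambda r: [r * cmath.exp(2j * math.pi * t / n) for t in range(n)]
    mid = morph(fourier_coeffs(circle(2.0), 3), fourier_coeffs(circle(1.0), 3),
                0.5, n, 3)
    ```

    Varying the blend weight w continuously between 0 and 1 yields the continuous scale transformation described, at a cost linear in the number of boundary points and harmonics.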

  19. Lead facilitates foci formation in a Balb/c-3T3 two-step cell transformation model: role of Ape1 function.

    PubMed

    Hernández-Franco, Pablo; Silva, Martín; Franco, Rodrigo; Valverde, Mahara; Rojas, Emilio

    2018-04-01

Several possible mechanisms have been examined to gain an understanding of the carcinogenic properties of lead, including mitogenesis, alteration of gene expression, oxidative damage, and inhibition of DNA repair. The aim of the present study was to explore whether low concentrations of lead, relevant for human exposure, interfere with the function of Ape1, a base excision repair enzyme, and its role in cell transformation in Balb/c-3T3. Lead acetate at 5 and 30 μM induced APE1 mRNA and upregulation of protein expression. This increase in mRNA expression is consistent throughout the chronic exposure. Additionally, we found an impaired function of Ape1 through a molecular beacon-based assay. To evaluate the impact of lead on foci formation, a Balb/c-3T3 two-step transformation model was used. Balb/c-3T3 cells were pretreated for 1 week with low concentrations of lead before induction of transformation with n-methyl-n-nitrosoguanidine (MNNG) (0.5 μg/mL) and 12-O-tetradecanoylphorbol-13-acetate (TPA) (0.1 μg/mL) (a classical two-step protocol). Morphological cell transformation increased in response to lead pretreatment, paralleled by an increase in Ape1 mRNA and protein overexpression and an impairment of Ape1 activity that correlated with foci number. In addition, we found that lead pretreatment and MNNG (transformation initiator) increased DNA damage, determined by comet assay. Our data suggest that low lead concentrations (5, 30 μM) could play a facilitating role in cellular transformation, probably through the impaired function of housekeeping genes such as Ape1, leading to DNA damage accumulation and chromosomal instability, one of the most important hallmarks of cancer induced by chronic exposures.

  20. Stages of functional processing and the bihemispheric recognition of Japanese Kana script.

    PubMed

    Yoshizaki, K

    2000-04-01

Two experiments were carried out in order to examine the effects of functional steps on the benefits of interhemispheric integration. The purpose of Experiment 1 was to investigate the validity of the Banich (1995a) model, where the benefits of interhemispheric processing increase as the task involves more functional steps. The 16 right-handed subjects were given two types of Hiragana-Katakana script matching tasks. One was the Name Identity (NI) task, and the other was the vowel matching (VM) task, which involved more functional steps compared to the NI task. The VM task required subjects to decide whether or not a pair of Katakana-Hiragana scripts had a common vowel. In both tasks, a pair of Kana scripts (Katakana-Hiragana scripts) was tachistoscopically presented in the unilateral visual fields or the bilateral visual fields, where each letter was presented in each visual field. A bilateral visual fields advantage (BFA) was found in both tasks, and its size did not differ between the tasks, suggesting that these findings did not support the Banich model. The purpose of Experiment 2 was to examine the effects of imbalanced processing load between the hemispheres on the benefits of interhemispheric integration. In order to manipulate the balance of processing load across the hemispheres, the revised vowel matching (r-VM) task was developed by amending the VM task. The r-VM task was the same as the VM task in Experiment 1, except that a script that has only a vowel sound was presented as the counterpart of a pair of Kana scripts. The 24 right-handed subjects were given the r-VM and NI tasks. The results showed that although a BFA appeared in the NI task, it did not appear in the r-VM task. These results suggested that the balance of processing load between hemispheres has an influence on bilateral hemispheric processing.

  1. Summary of the white paper of DICOM WG24 'DICOM in Surgery'

    NASA Astrophysics Data System (ADS)

    Lemke, Heinz U.

    2007-03-01

Standards for creating and integrating information about patients, equipment, and procedures are vitally needed when planning for an efficient Operating Room (OR). The DICOM Working Group 24 (WG24) has been established to develop DICOM objects and services related to Image Guided Surgery (IGS). To determine these standards, it is important to define day-to-day, step-by-step surgical workflow practices and create surgical workflow models per procedure or per variable case. A well-defined workflow and a high-fidelity patient model will be the base of activities for both radiation therapy and surgery. Considering the present and future requirements for surgical planning and intervention, such a patient model must be n-dimensional, where n may include the spatial and temporal dimensions as well as a number of functional variables. As the boundaries between radiation therapy, surgery and interventional radiology become less well-defined, precise patient models will become the greatest common denominator for all therapeutic disciplines. In addition to imaging, the focus of WG24 should therefore also be to serve the therapeutic disciplines by enabling modelling technology to be based on standards.

  2. Defect interactions with stepped CeO₂/SrTiO₃ interfaces: implications for radiation damage evolution and fast ion conduction.

    PubMed

    Dholabhai, Pratik P; Aguiar, Jeffery A; Misra, Amit; Uberuaga, Blas P

    2014-05-21

    Due to reduced dimensions and increased interfacial content, nanocomposite oxides offer improved functionalities in a wide variety of advanced technological applications, including their potential use as radiation tolerant materials. To better understand the role of interface structures in influencing the radiation damage tolerance of oxides, we have conducted atomistic calculations to elucidate the behavior of radiation-induced point defects (vacancies and interstitials) at interface steps in a model CeO2/SrTiO3 system. We find that atomic-scale steps at the interface have substantial influence on the defect behavior, which ultimately dictate the material performance in hostile irradiation environments. Distinctive steps react dissimilarly to cation and anion defects, effectively becoming biased sinks for different types of defects. Steps also attract cation interstitials, leaving behind an excess of immobile vacancies. Further, defects introduce significant structural and chemical distortions primarily at the steps. These two factors are plausible origins for the enhanced amorphization at steps seen in our recent experiments. The present work indicates that comprehensive examination of the interaction of radiation-induced point defects with the atomic-scale topology and defect structure of heterointerfaces is essential to evaluate the radiation tolerance of nanocomposites. Finally, our results have implications for other applications, such as fast ion conduction.

  3. From actors to agents in socio-ecological systems models

    PubMed Central

    Rounsevell, M. D. A.; Robinson, D. T.; Murray-Rust, D.

    2012-01-01

    The ecosystem service concept has emphasized the role of people within socio-ecological systems (SESs). In this paper, we review and discuss alternative ways of representing people, their behaviour and decision-making processes in SES models using an agent-based modelling (ABM) approach. We also explore how ABM can be empirically grounded using information from social survey. The capacity for ABM to be generalized beyond case studies represents a crucial next step in modelling SESs, although this comes with considerable intellectual challenges. We propose the notion of human functional types, as an analogy of plant functional types, to support the expansion (scaling) of ABM to larger areas. The expansion of scope also implies the need to represent institutional agents in SES models in order to account for alternative governance structures and policy feedbacks. Further development in the coupling of human-environment systems would contribute considerably to better application and use of the ecosystem service concept. PMID:22144388

  4. From actors to agents in socio-ecological systems models.

    PubMed

    Rounsevell, M D A; Robinson, D T; Murray-Rust, D

    2012-01-19

    The ecosystem service concept has emphasized the role of people within socio-ecological systems (SESs). In this paper, we review and discuss alternative ways of representing people, their behaviour and decision-making processes in SES models using an agent-based modelling (ABM) approach. We also explore how ABM can be empirically grounded using information from social survey. The capacity for ABM to be generalized beyond case studies represents a crucial next step in modelling SESs, although this comes with considerable intellectual challenges. We propose the notion of human functional types, as an analogy of plant functional types, to support the expansion (scaling) of ABM to larger areas. The expansion of scope also implies the need to represent institutional agents in SES models in order to account for alternative governance structures and policy feedbacks. Further development in the coupling of human-environment systems would contribute considerably to better application and use of the ecosystem service concept.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, Raymond K. W.; Storlie, Curtis Byron; Lee, Thomas C. M.

The paper considers the computer model calibration problem and provides a general frequentist solution. Under the framework proposed, the data model is semiparametric with a non-parametric discrepancy function which accounts for any discrepancy between physical reality and the computer model. In an attempt to solve a fundamentally important (but often ignored) identifiability issue between the computer model parameters and the discrepancy function, the paper proposes a new and identifiable parameterization of the calibration problem. It also develops a two-step procedure for estimating all the relevant quantities under the new parameterization. This estimation procedure is shown to enjoy excellent rates of convergence and can be straightforwardly implemented with existing software. For uncertainty quantification, bootstrapping is adopted to construct confidence regions for the quantities of interest. Finally, the practical performance of the methodology is illustrated through simulation examples and an application to a computational fluid dynamics model.
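The two-step idea can be sketched on a toy 1-D problem (the simulator, smoother bandwidth, and all names below are illustrative assumptions, not the authors' code): first estimate the full mean function nonparametrically, then fit the calibration parameter so the computer model tracks that smooth fit, leaving the residual as the discrepancy estimate; bootstrapping the whole procedure gives a confidence interval.

```python
import numpy as np

rng = np.random.default_rng(0)

def computer_model(x, theta):
    # toy simulator eta(x, theta); a stand-in for the real computer model
    return theta * x

# synthetic "field" data: reality = simulator at theta = 2 plus a smooth
# discrepancy delta(x) and observation noise
x = np.linspace(0.0, 1.0, 50)
delta_true = 0.3 * np.sin(2 * np.pi * x)
y = computer_model(x, 2.0) + delta_true + 0.05 * rng.standard_normal(x.size)

def fit_two_step(x, y, bandwidth=0.1):
    # Step 1: nonparametric estimate of the mean function m(x)
    # via a Nadaraya-Watson kernel smoother
    def m_hat(x0):
        w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)
        return np.sum(w * y) / np.sum(w)
    m = np.array([m_hat(xi) for xi in x])
    # Step 2: least-squares fit of theta so the simulator tracks m(x);
    # the leftover m - eta(x, theta_hat) is the discrepancy estimate
    thetas = np.linspace(0.0, 4.0, 401)
    sse = [np.sum((m - computer_model(x, t)) ** 2) for t in thetas]
    theta_hat = thetas[int(np.argmin(sse))]
    return theta_hat, m - computer_model(x, theta_hat)

theta_hat, delta_hat = fit_two_step(x, y)

# bootstrap a confidence interval for theta, as the paper suggests
boot = []
for _ in range(200):
    idx = rng.integers(0, x.size, x.size)
    t, _ = fit_two_step(x[idx], y[idx])
    boot.append(t)
lo, hi = np.percentile(boot, [2.5, 97.5])
```

Note the identifiability point from the abstract: because the discrepancy here is whatever remains after the least-squares projection, theta is pinned down only under the chosen parameterization.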

  6. Novel functionalized pyridine-containing DTPA-like ligand. Synthesis, computational studies and characterization of the corresponding Gd(III) complex.

    PubMed

    Artali, Roberto; Botta, Mauro; Cavallotti, Camilla; Giovenzana, Giovanni B; Palmisano, Giovanni; Sisti, Massimo

    2007-08-07

    A novel pyridine-containing DTPA-like ligand, carrying additional hydroxymethyl groups on the pyridine side-arms, was synthesized in 5 steps. The corresponding Gd(III) complex, potentially useful as an MRI contrast agent, was prepared and characterized in detail by relaxometric methods and its structure modeled by computational methods.

  7. Perry's Scheme of Intellectual and Epistemological Development as a Framework for Describing Student Difficulties in Learning Organic Chemistry

    ERIC Educational Resources Information Center

    Grove, Nathaniel P.; Bretz, Stacey Lowery

    2010-01-01

    We have investigated student difficulties with the learning of organic chemistry. Using Perry's Model of Intellectual Development as a framework revealed that organic chemistry students who function as dualistic thinkers struggle with the complexity of the subject matter. Understanding substitution/elimination reactions and multi-step syntheses is…

  8. And never the twain shall meet? Integrating revenue cycle and supply chain functions.

    PubMed

    Matjucha, Karen A; Chung, Bianca

    2008-09-01

    Four initial steps to implementing a profit and loss management model are: Identify the supplies clinicians are using. Empower stakeholders to remove items that are not commonly used. Reduce factors driving wasted product. Review the chargemaster to ensure that supplies used in selected procedures are represented. Strategically set prices that optimize maximum allowable reimbursement.

  9. Public Relations for Brazilian Libraries: Process, Principles, Program Planning, Planning Techniques and Suggestions.

    ERIC Educational Resources Information Center

    Kies, Cosette N.

    A brief overview of the functions of public relations in libraries introduces this manual, which provides an explanation of the public relations (PR) process, including fact-finding, planning, communicating, evaluating, and marketing; some PR principles; a 10-step program that could serve as a model for planning a PR program; a discussion of PR…

  10. Dynamic data-driven integrated flare model based on self-organized criticality

    NASA Astrophysics Data System (ADS)

    Dimitropoulou, M.; Isliker, H.; Vlahos, L.; Georgoulis, M. K.

    2013-05-01

    Context. We interpret solar flares as events originating in active regions that have reached the self-organized critical state. We describe them with a dynamic integrated flare model whose initial conditions and driving mechanism are derived from observations. Aims: We investigate whether well-known scaling laws observed in the distribution functions of characteristic flare parameters are reproduced after the self-organized critical state has been reached. Methods: To investigate whether the distribution functions of total energy, peak energy, and event duration follow the expected scaling laws, we first applied the previously reported static cellular automaton model to a time series of seven solar vector magnetograms of the NOAA active region 8210 recorded by the Imaging Vector Magnetograph on May 1 1998 between 18:59 UT and 23:16 UT until the self-organized critical state was reached. We then evolved the magnetic field between these processed snapshots through spline interpolation, mimicking a natural driver in our dynamic model. We identified magnetic discontinuities that exceeded a threshold in the Laplacian of the magnetic field after each interpolation step. These discontinuities were relaxed in local diffusion events, implemented in the form of cellular automaton evolution rules. Subsequent interpolation and relaxation steps covered all transitions until the end of the processed magnetograms' sequence. We additionally advanced each magnetic configuration that has reached the self-organized critical state (SOC configuration) by the static model until 50 more flares were triggered, applied the dynamic model again to the new sequence, and repeated the same process sufficiently often to generate adequate statistics. Physical requirements, such as the divergence-free condition for the magnetic field, were approximately imposed. 
Results: We obtain robust power laws in the distribution functions of the modeled flaring events with scaling indices that agree well with observations. Peak and total flare energy obey single power laws with indices -1.65 ± 0.11 and -1.47 ± 0.13, while the flare duration is best fitted with a double power law (-2.15 ± 0.15 and -3.60 ± 0.09 for the flatter and steeper parts, respectively). Conclusions: We conclude that well-known statistical properties of flares are reproduced after active regions reach the state of self-organized criticality. A significant enhancement of our refined cellular automaton model is that it initiates and further drives the simulation from observed evolving vector magnetograms, thus facilitating energy calculation in physical units, while a separation between MHD and kinetic timescales is possible by assigning distinct MHD timestamps to each interpolation step.
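The cellular-automaton core of such self-organized-criticality models can be illustrated with a generic sandpile sketch (a BTW-style toy, not the authors' magnetogram-driven automaton): drive the lattice slowly, relax any super-critical site with a local redistribution rule, and record avalanche sizes, which become broadly (power-law-like) distributed once the critical state is reached.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 20
grid = np.zeros((N, N), dtype=int)
threshold = 4
avalanche_sizes = []

for step in range(20000):
    # slow driving: add one unit of "stress" at a random site
    i, j = rng.integers(0, N, 2)
    grid[i, j] += 1
    # relaxation: topple every super-critical site, letting the
    # instability spread to neighbours (open boundaries lose grains)
    size = 0
    unstable = [(i, j)] if grid[i, j] >= threshold else []
    while unstable:
        a, b = unstable.pop()
        if grid[a, b] < threshold:
            continue
        grid[a, b] -= 4
        size += 1
        for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            na, nb = a + da, b + db
            if 0 <= na < N and 0 <= nb < N:
                grid[na, nb] += 1
                if grid[na, nb] >= threshold:
                    unstable.append((na, nb))
    if size:
        avalanche_sizes.append(size)
```

In the flare model above, the random driving is replaced by interpolated vector-magnetogram snapshots and the threshold acts on the Laplacian of the magnetic field, but the drive-relax cycle is the same.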

  11. Effects of wide step walking on swing phase hip muscle forces and spatio-temporal gait parameters.

    PubMed

    Bajelan, Soheil; Nagano, Hanatsu; Sparrow, Tony; Begg, Rezaul K

    2017-07-01

Human walking can be viewed essentially as a continuum of anterior balance loss followed by a step that re-stabilizes balance. To secure balance an extended base of support can be assistive, but healthy young adults tend to walk with relatively narrower steps compared to vulnerable populations (e.g. older adults and patients). It was, therefore, hypothesized that wide step walking may enhance dynamic balance at the cost of disturbed optimum coupling of muscle functions, leading to additional muscle work and an associated reduction of gait economy. Young healthy adults may select relatively narrow steps for a more efficient gait. The current study focused on the effects of wide step walking on hip abductor and adductor muscles and spatio-temporal gait parameters. To this end, lower body kinematic data and ground reaction forces were obtained using an Optotrak motion capture system and AMTI force plates, respectively, while AnyBody software was employed for muscle force simulation. A single step of four healthy young male adults was captured during preferred walking and wide step walking. Based on preferred walking data, two parallel lines were drawn on the walkway to indicate 50% larger step width and participants targeted the lines with their heels as they walked. In addition to step width, which defined the walking conditions, other spatio-temporal gait parameters including step length, double support time and single support time were obtained. Average hip muscle forces during swing were modeled. Results showed that in wide step walking, step length increased and the Gluteus Minimus muscles were more active, while Gracilis and Adductor Longus revealed considerably reduced forces. In conclusion, greater use of abductors and loss of adductor forces were found in wide step walking. Further validation is needed in future studies involving older adults and pathological populations.

  12. Timing paradox of stepping and falls in ageing: not so quick and quick(er) on the trigger.

    PubMed

    Rogers, Mark W; Mille, Marie-Laure

    2016-08-15

    Physiological and degenerative changes affecting human standing balance are major contributors to falls with ageing. During imbalance, stepping is a powerful protective action for preserving balance that may be voluntarily initiated in recognition of a balance threat, or be induced by an externally imposed mechanical or sensory perturbation. Paradoxically, with ageing and falls, initiation slowing of voluntary stepping is observed together with perturbation-induced steps that are triggered as fast as or faster than for younger adults. While age-associated changes in sensorimotor conduction, central neuronal processing and cognitive functions are linked to delayed voluntary stepping, alterations in the coupling of posture and locomotion may also prolong step triggering. It is less clear, however, how these factors may explain the accelerated triggering of induced stepping. We present a conceptual model that addresses this issue. For voluntary stepping, a disruption in the normal coupling between posture and locomotion may underlie step-triggering delays through suppression of the locomotion network based on an estimation of the evolving mechanical state conditions for stability. During induced stepping, accelerated step initiation may represent an event-triggering process whereby stepping is released according to the occurrence of a perturbation rather than to the specific sensorimotor information reflecting the evolving instability. In this case, errors in the parametric control of induced stepping and its effectiveness in stabilizing balance would be likely to occur. We further suggest that there is a residual adaptive capacity with ageing that could be exploited to improve paradoxical triggering and other changes in protective stepping to impact fall risk. © 2016 The Authors. The Journal of Physiology © 2016 The Physiological Society.

  13. Modeling the Dynamics of Soil Structure and Water in Agricultural Soil

    NASA Astrophysics Data System (ADS)

    Weller, U.; Lang, B.; Rabot, E.; Stössel, B.; Urbanski, L.; Vogel, H. J.; Wiesmeier, M.; Wollschlaeger, U.

    2017-12-01

The impact of agricultural management on soil functions is manifold and severe. It has both positive and adverse influences. Our goal is to develop model tools quantifying the agricultural impact on soil functions based on a mechanistic understanding of soil processes to support farmers and decision makers. The modeling approach is based on defining relevant soil components, i.e. soil matrix, macropores, organisms, roots and organic matter. They interact and form the soil's macroscopic properties and functions including water and gas dynamics, and biochemical cycles. Based on existing literature information we derive functional interaction processes and combine them in a network of dynamic soil components. In agricultural soils, a major issue is linked to changes in soil structure and their influence on water dynamics. Compaction processes are well studied in the literature, but for the resilience due to root growth and the activity of soil organisms the information is scarcer. We implement structural dynamics into soil water and gas simulations using a lumped model that is coarse enough to allow extensive model runs while still preserving some important, yet rarely modeled phenomena such as preferential flow and hysteretic, dynamic behavior. For simulating water dynamics, at each depth, the model assumes water at different binding energies depending on soil structure, i.e. the pore size distribution. Non-equilibrium is postulated, meaning that free water may occur even if the soil is not fully saturated. All energy levels are interconnected, allowing water to move both within a spatial node and between neighboring nodes (adding gravity). Structural dynamics alters the capacity of these water compartments and the conductance of their connections. Connections are switched on and off depending on whether their sources contain water or their targets have free capacity. This leads to piecewise linear system behavior that allows fast calculation for extended time steps.
Based on this concept, the dynamics of soil structure can be directly linked to soil water dynamics as a main driver for other soil processes. Further steps will include integration of temperature and solute leaching as well as defining the feedback of the water regime on the structure forming processes.
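The compartment-and-connection idea can be reduced to a toy sketch (two energy compartments at one node, with made-up capacities and conductance, not the authors' parameterization): flow is linear in the source content, and a connection is active only while its source holds water and its target has free capacity, giving exactly the piecewise linear switching behavior described.

```python
def simulate(capacity, state, k, dt=0.1, steps=200):
    # two compartments: 0 = tightly bound water, 1 = free water;
    # water moves from free to bound via a linear conductance k, but
    # only while the bound compartment has spare capacity
    history = [tuple(state)]
    for _ in range(steps):
        flow = 0.0
        if state[1] > 0.0 and state[0] < capacity[0]:
            # clamp the flow so neither source nor target overshoots:
            # this clamp is what switches the connection on and off
            flow = min(k * state[1] * dt, state[1], capacity[0] - state[0])
        state[0] += flow
        state[1] -= flow
        history.append(tuple(state))
    return history

# bound compartment holds 1.0 and starts at 0.2; free water fills it,
# then the connection switches off and the remaining free water stays
hist = simulate(capacity=[1.0, 5.0], state=[0.2, 1.5], k=0.5)
final_bound, final_free = hist[-1]
```
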

  14. Identifying elderly people at risk for cognitive decline by using the 2-step test.

    PubMed

    Maruya, Kohei; Fujita, Hiroaki; Arai, Tomoyuki; Hosoi, Toshiki; Ogiwara, Kennichi; Moriyama, Shunnichiro; Ishibashi, Hideaki

    2018-01-01

[Purpose] The purpose is to verify the effectiveness of the 2-step test in predicting cognitive decline in elderly individuals. [Subjects and Methods] One hundred eighty-two participants aged over 65 years underwent the 2-step test, cognitive function tests and higher-level competence testing. Participants were classified as Robust, <1.3, or <1.1 using the locomotive syndrome risk-stage criteria for the 2-step test, and variables were compared between groups. In addition, ordered logistic analysis was used to analyze cognitive functions as independent variables in the three groups, using the 2-step test results as the dependent variable, with age, gender, etc. as adjustment factors. [Results] In the crude data, the <1.3 and <1.1 groups were older and displayed lower motor and cognitive functions than did the Robust group. Furthermore, the <1.3 group exhibited significantly lower memory retention than did the Robust group. The 2-step test was related to the Stroop test (β: 0.06, 95% confidence interval: 0.01-0.12). [Conclusion] These findings indicate that the risk stage of the 2-step test is related to cognitive functions, even at an initial risk stage. The 2-step test may help with earlier detection and implementation of prevention measures for locomotive syndrome and mild cognitive impairment.
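For reference, the risk staging used above reduces to thresholding the 2-step score (maximum two-step distance divided by body height) at 1.3 and 1.1; a minimal sketch with invented participant values (not the study's data):

```python
def two_step_group(distance_m, height_m):
    # 2-step score = maximum two-step distance / body height;
    # locomotive-syndrome risk stages: score < 1.3 and score < 1.1
    score = distance_m / height_m
    if score < 1.1:
        return "<1.1"
    if score < 1.3:
        return "<1.3"
    return "Robust"

# hypothetical participants for illustration only
participants = [
    {"id": 1, "distance": 2.60, "height": 1.60},  # score 1.625
    {"id": 2, "distance": 1.90, "height": 1.55},  # score ~1.23
    {"id": 3, "distance": 1.55, "height": 1.50},  # score ~1.03
]
groups = {p["id"]: two_step_group(p["distance"], p["height"])
          for p in participants}
```
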

  15. Considering dominance in reduced single-step genomic evaluations.

    PubMed

    Ertl, J; Edel, C; Pimentel, E C G; Emmerling, R; Götz, K-U

    2018-06-01

    Single-step models including dominance can be an enormous computational task and can even be prohibitive for practical application. In this study, we try to answer the question whether a reduced single-step model is able to estimate breeding values of bulls and breeding values, dominance deviations and total genetic values of cows with acceptable quality. Genetic values and phenotypes were simulated (500 repetitions) for a small Fleckvieh pedigree consisting of 371 bulls (180 thereof genotyped) and 553 cows (40 thereof genotyped). This pedigree was virtually extended for 2,407 non-genotyped daughters. Genetic values were estimated with the single-step model and with different reduced single-step models. Including more relatives of genotyped cows in the reduced single-step model resulted in a better agreement of results with the single-step model. Accuracies of genetic values were largest with single-step and smallest with reduced single-step when only the cows genotyped were modelled. The results indicate that a reduced single-step model is suitable to estimate breeding values of bulls and breeding values, dominance deviations and total genetic values of cows with acceptable quality. © 2018 Blackwell Verlag GmbH.

  16. In vivo RNAi in the Drosophila Follicular Epithelium: Analysis of Stem Cell Maintenance, Proliferation, and Differentiation.

    PubMed

    Riechmann, Veit

    2017-01-01

In vivo RNAi in Drosophila facilitates simple and rapid analysis of gene functions in a cell- or tissue-specific manner. The versatility of the UAS-GAL4 system makes it possible to control exactly where and when during development the function of a gene is depleted. The epithelium of the ovary is a particularly good model for studying, in a living animal, how stem cells are maintained and how their descendants proliferate and differentiate. Here I provide basic information about the publicly available reagents for in vivo RNAi, and I describe how the oogenesis system can be applied to analyze stem cells and epithelial development at a histological level. Moreover, I give helpful hints for optimizing the use of the UAS-GAL4 system for RNAi induction in the follicular epithelium. Finally, I provide detailed step-by-step protocols for ovary dissection, antibody stainings, and ovary mounting for microscopic analysis.

  17. Steady-State Density Functional Theory for Finite Bias Conductances.

    PubMed

    Stefanucci, G; Kurth, S

    2015-12-09

In the framework of density functional theory, a formalism to describe electronic transport in the steady state is proposed which uses the density on the junction and the steady current as basic variables. We prove that, in a finite window around zero bias, there is a one-to-one map between the basic variables and both the local potential on, and the bias across, the junction. The resulting Kohn-Sham system features two exchange-correlation (xc) potentials: a local xc potential and an xc contribution to the bias. For weakly coupled junctions the xc potentials exhibit steps in the density-current plane which are shown to be crucial to describe the Coulomb blockade diamonds. At small currents these steps emerge as the equilibrium xc discontinuity bifurcates. The formalism is applied to a model benzene junction, finding perfect agreement with the orthodox theory of Coulomb blockade.

  18. Recyclable crosslinked polymer networks with full property recovery made via one-step controlled radical polymerization

    NASA Astrophysics Data System (ADS)

    Jin, Kailong; Li, Lingqiao; Torkelson, John

Rubber tires illustrate well the problems, ranging from economic loss to environmental harm and sustainability concerns, that arise with spent, covalently crosslinked polymers. A nitroxide-mediated polymerization (NMP) strategy has been developed that allows for one-step synthesis of recyclable crosslinked polymers from monomers or polymers that contain carbon-carbon double bonds amenable to radical polymerization. Resulting materials possess dynamic alkoxyamine crosslinks that undergo reversible decrosslinking as a function of temperature. Using polybutadiene as the starting material, with styrene, an appropriate nitroxide molecule and a bifunctional initiator for initial crosslinking, a model for tire rubber can be produced by reaction at temperatures comparable to those employed in tire molding. Upon cooling, the crosslinks are made permanent due to the extraordinarily strong temperature dependence of the reversible nitroxide capping and uncapping reaction. Based on thermomechanical property characterization, when the original crosslinked model rubber is chopped into bits and remolded in the melt state, a well-consolidated material is obtained which exhibits full recovery of properties reflecting crosslink density after multiple recycling steps.

  19. Neural and Decision Theoretic Approaches for the Automated Segmentation of Radiodense Tissue in Digitized Mammograms

    NASA Astrophysics Data System (ADS)

    Eckert, R.; Neyhart, J. T.; Burd, L.; Polikar, R.; Mandayam, S. A.; Tseng, M.

    2003-03-01

    Mammography is the best method available as a non-invasive technique for the early detection of breast cancer. The radiographic appearance of the female breast consists of radiolucent (dark) regions due to fat and radiodense (light) regions due to connective and epithelial tissue. The amount of radiodense tissue can be used as a marker for predicting breast cancer risk. Previously, we have shown that the use of statistical models is a reliable technique for segmenting radiodense tissue. This paper presents improvements in the model that allow for further development of an automated system for segmentation of radiodense tissue. The segmentation algorithm employs a two-step process. In the first step, segmentation of tissue and non-tissue regions of a digitized X-ray mammogram image are identified using a radial basis function neural network. The second step uses a constrained Neyman-Pearson algorithm, developed especially for this research work, to determine the amount of radiodense tissue. Results obtained using the algorithm have been validated by comparing with estimates provided by a radiologist employing previously established methods.
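The second, decision-theoretic step can be illustrated generically: a Neyman-Pearson rule picks the segmentation threshold that bounds the false-positive rate on background (radiolucent) pixels. The scores below are synthetic stand-ins for RBF-network outputs, not data from the study, and the constraint level is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(2)

# surrogate per-pixel "radiodensity" scores: in the paper these would
# come from the radial basis function network of step one
background = rng.normal(0.3, 0.1, 5000)   # radiolucent (null) pixels
radiodense = rng.normal(0.7, 0.1, 1000)   # radiodense pixels

def neyman_pearson_threshold(null_scores, alpha=0.01):
    # smallest threshold whose false-positive rate on the null
    # (background) sample does not exceed alpha
    return float(np.quantile(null_scores, 1.0 - alpha))

tau = neyman_pearson_threshold(background, alpha=0.01)
fpr = float(np.mean(background > tau))   # constrained to ~alpha
tpr = float(np.mean(radiodense > tau))   # detection power at that tau
```

The design choice is the usual Neyman-Pearson one: fix the tolerable rate of falsely labeled background, then accept whatever detection power the score distribution allows.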

  20. Multi-step cure kinetic model of ultra-thin glass fiber epoxy prepreg exhibiting both autocatalytic and diffusion-controlled regimes under isothermal and dynamic-heating conditions

    NASA Astrophysics Data System (ADS)

    Kim, Ye Chan; Min, Hyunsung; Hong, Sungyong; Wang, Mei; Sun, Hanna; Park, In-Kyung; Choi, Hyouk Ryeol; Koo, Ja Choon; Moon, Hyungpil; Kim, Kwang J.; Suhr, Jonghwan; Nam, Jae-Do

    2017-08-01

As packaging technologies demand a reduced substrate assembly area, thin composite laminate substrates require extremely high performance in material properties such as the coefficient of thermal expansion (CTE) and stiffness. Accordingly, thermosetting resin systems, which consist of multiple fillers, monomers and/or catalysts in thermoset-based glass fiber prepregs, are extremely complicated and closely associated with rheological properties, which depend on the temperature cycles for cure. For the process control of these complex systems, it is usually required to obtain a reliable kinetic model that can be used for complex thermal cycles, which usually include both isothermal and dynamic-heating segments. In this study, an ultra-thin prepreg with highly loaded silica beads and glass fibers in an epoxy/amine resin system was investigated as a model system by isothermal/dynamic-heating experiments. The maximum degree of cure was obtained as a function of temperature. The curing kinetics of the model prepreg system exhibited a multi-step reaction and a limited conversion as a function of isothermal curing temperature, which are often observed in epoxy cure systems because of the rate-determining diffusion of polymer chain growth. The modified kinetic equation accurately described the isothermal behavior and the beginning of the dynamic-heating behavior by integrating the obtained maximum degree of cure into the kinetic model development.
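A hedged sketch of this kind of kinetics: an autocatalytic rate law whose driving term vanishes at a temperature-dependent maximum conversion, mimicking the diffusion-controlled plateau under isothermal cure. All constants (pre-exponential factor, activation energy, exponents, and the linear alpha_max(T)) are arbitrary illustrations, not the paper's fitted parameters.

```python
import numpy as np

def cure_profile(T_K, t_end=3600.0, dt=0.1):
    # Arrhenius rate constant and an assumed temperature-dependent
    # maximum conversion alpha_max(T): below full cure, vitrification
    # (diffusion control) stops the reaction
    A, Ea, R = 1e5, 60e3, 8.314
    k = A * np.exp(-Ea / (R * T_K))
    alpha_max = min(1.0, 0.5 + 0.0015 * (T_K - 300.0))
    m, n = 0.5, 1.5
    alpha = 1e-3          # small seed so the autocatalytic term can start
    t = 0.0
    while t < t_end:
        # autocatalytic kinetics with the driving force capped at
        # alpha_max, i.e. d(alpha)/dt = k * alpha^m * (alpha_max - alpha)^n
        dadt = k * alpha**m * max(alpha_max - alpha, 0.0)**n
        alpha += dadt * dt   # simple explicit Euler integration
        t += dt
    return alpha, alpha_max

# isothermal cures at two temperatures: the hotter cure is faster and
# also reaches a higher limiting conversion
a_low, amax_low = cure_profile(380.0)
a_high, amax_high = cure_profile(440.0)
```
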

  1. Observing health professionals' workflow patterns for diabetes care - First steps towards an ontology for EHR services.

    PubMed

    Schweitzer, M; Lasierra, N; Hoerbst, A

    2015-01-01

Increasing flexibility from a user perspective and enabling workflow-based interaction facilitates easy, user-friendly utilization of EHRs in healthcare professionals' daily work. To offer such versatile EHR functionality, our approach is based on the execution of clinical workflows by means of a composition of semantic web services. The backbone of such an architecture is an ontology which enables the representation of clinical workflows and facilitates the selection of suitable services. In this paper we present the methods and results of observations of routine diabetes consultations which were conducted in order to identify those workflows and the relations among the included tasks. These workflows were first modeled in BPMN and then generalized. As a following step in our study, interviews will be conducted with clinical personnel to validate the modeled workflows.

  2. Adaptive Shape Functions and Internal Mesh Adaptation for Modelling Progressive Failure in Adhesively Bonded Joints

    NASA Technical Reports Server (NTRS)

    Stapleton, Scott; Gries, Thomas; Waas, Anthony M.; Pineda, Evan J.

    2014-01-01

    Enhanced finite elements are elements with an embedded analytical solution that can capture detailed local fields, enabling more efficient, mesh independent finite element analysis. The shape functions are determined based on the analytical model rather than prescribed. This method was applied to adhesively bonded joints to model joint behavior with one element through the thickness. This study demonstrates two methods of maintaining the fidelity of such elements during adhesive non-linearity and cracking without increasing the mesh needed for an accurate solution. The first method uses adaptive shape functions, where the shape functions are recalculated at each load step based on the softening of the adhesive. The second method is internal mesh adaption, where cracking of the adhesive within an element is captured by further discretizing the element internally to represent the partially cracked geometry. By keeping mesh adaptations within an element, a finer mesh can be used during the analysis without affecting the global finite element model mesh. Examples are shown which highlight when each method is most effective in reducing the number of elements needed to capture adhesive nonlinearity and cracking. These methods are validated against analogous finite element models utilizing cohesive zone elements.

  3. Modeling interactions between political parties and electors

    NASA Astrophysics Data System (ADS)

    Bagarello, F.; Gargano, F.

    2017-09-01

In this paper we extend some recent results on an operatorial approach to the description of alliances between political parties interacting among themselves and with a basin of electors. In particular, we propose and compare three different models, deducing the dynamics of their related decision functions, i.e. each party's inclination to form an alliance or not. In the first model the interactions between each party and its electors are considered. We show that these interactions drive the decision functions toward certain asymptotic values that depend on the electors only: this is the perfect party, which behaves by following the electors' suggestions. The second model is an extension of the first in which we include a rule which modifies the status of the electors, and consequently the decision functions, at some specific time step. In the third model we neglect the interactions with the electors while considering cubic and quartic interactions between the parties, and we show that we get (slightly oscillating) asymptotic values for the decision functions, close to their initial values. This is the real party, which does not listen to the electors. Several explicit situations are considered in detail and numerical results are also shown.

  4. Retention and release of hydrogen isotopes in tungsten plasma-facing components: the role of grain boundaries and the native oxide layer from a joint experiment-simulation integrated approach

    NASA Astrophysics Data System (ADS)

    Hodille, E. A.; Ghiorghiu, F.; Addab, Y.; Založnik, A.; Minissale, M.; Piazza, Z.; Martin, C.; Angot, T.; Gallais, L.; Barthe, M.-F.; Becquart, C. S.; Markelj, S.; Mougenot, J.; Grisolia, C.; Bisson, R.

    2017-07-01

    Fusion fuel retention (trapping) and release (desorption) from plasma-facing components are critical issues for ITER and for any future industrial demonstration reactors such as DEMO. Therefore, understanding the fundamental mechanisms behind the retention of hydrogen isotopes in first wall and divertor materials is necessary. We developed an approach that couples dedicated experimental studies with modelling at all relevant scales, from microscopic elementary steps to macroscopic observables, in order to build a reliable and predictive fusion reactor wall model. This integrated approach is applied to the ITER divertor material (tungsten), and advances in the development of the wall model are presented. An experimental dataset, including focused ion beam scanning electron microscopy, isothermal desorption, temperature programmed desorption, nuclear reaction analysis and Auger electron spectroscopy, is exploited to initialize a macroscopic rate equation wall model. This model includes all elementary steps of modelled experiments: implantation of fusion fuel, fuel diffusion in the bulk or towards the surface, fuel trapping on defects and release of trapped fuel during a thermal excursion of materials. We were able to show that a single-trap-type single-detrapping-energy model is not able to reproduce an extended parameter space study of a polycrystalline sample exhibiting a single desorption peak. It is therefore justified to use density functional theory to guide the initialization of a more complex model. This new model still contains a single type of trap, but includes the density functional theory findings that the detrapping energy varies as a function of the number of hydrogen isotopes bound to the trap. A better agreement of the model with experimental results is obtained when grain boundary defects are included, as is consistent with the polycrystalline nature of the studied sample. 
Refinement of this grain boundary model is discussed as well as the inclusion in the model of a thin defective oxide layer following the experimental observation of the presence of an oxygen layer on the surface even after annealing to 1300 K.
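    The macroscopic rate-equation approach above can be illustrated with a minimal sketch: a single trap type with one detrapping energy, emptied during a linear thermal ramp (the temperature-programmed desorption step). The attempt frequency, detrapping energy, and heating rate below are illustrative assumptions, not the study's fitted values:

```python
import math

def tpd_desorption(n0, E_t, nu=1e13, T0=300.0, beta=5.0, dt=0.01, T_end=1300.0):
    """Desorption flux from a single trap type during a linear thermal ramp.

    n0   : initial trapped inventory (arbitrary units)
    E_t  : detrapping energy (eV)
    nu   : attempt frequency (1/s)
    beta : heating rate (K/s)
    """
    k_B = 8.617e-5                                   # Boltzmann constant, eV/K
    n, T = n0, T0
    temps, fluxes = [], []
    while T < T_end:
        rate = nu * math.exp(-E_t / (k_B * T)) * n   # first-order detrapping
        n = max(n - rate * dt, 0.0)
        temps.append(T)
        fluxes.append(rate)
        T += beta * dt
    return temps, fluxes

temps, fluxes = tpd_desorption(n0=1e19, E_t=1.0)
T_peak = temps[fluxes.index(max(fluxes))]            # single desorption peak
```

    A single detrapping energy yields a single desorption peak; the paper's point is that one such energy cannot reproduce the full parameter space, motivating occupancy-dependent detrapping energies from density functional theory.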

  5. Research on the effect of coverage rate on the surface quality in laser direct writing process

    NASA Astrophysics Data System (ADS)

    Pan, Xuetao; Tu, Dawei

    2017-07-01

    Direct writing is commonly used in femtosecond-laser two-photon micromachining. The size of the scanning step is an important factor affecting the surface quality and machining efficiency of micro devices. Based on the mechanism of two-photon polymerization, and combining the light-intensity distribution function with free-radical concentration theory, we establish a mathematical model of the coverage of the solidification unit, and then analyze the effect of coverage on machining quality and efficiency. Using the principle of exposure equivalence, we also obtain analytic expressions relating the surface-quality parameters of microdevices to the scanning step, and carry out numerical simulations and experiments. The results show that the scanning step has little influence on the surface quality of the line when it is much smaller than the size of the solidification unit. However, with increasing scanning step, the smoothness of the line surface degrades rapidly and the surface quality becomes much worse.
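    The geometric picture (overlapping solidification units leaving surface ripples that grow with the scanning step) can be sketched with a toy model, assuming idealized circular unit cross-sections of a hypothetical radius; this is not the paper's exposure-based expression:

```python
import math

def ripple_height(step, radius):
    """Cusp depth left on a written line by overlapping circular solidification
    units of the given radius, for a given scanning step (same length units).
    Returns the full radius when consecutive units no longer overlap."""
    if step >= 2.0 * radius:
        return radius
    return radius - math.sqrt(radius ** 2 - (step / 2.0) ** 2)
```

    For steps much smaller than the unit size the cusp depth is negligible, and it grows rapidly as the step approaches the unit diameter, mirroring the trend reported above.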

  6. Dynamic RSA for the evaluation of inducible micromotion of Oxford UKA during step-up and step-down motion.

    PubMed

    Horsager, Kristian; Kaptein, Bart L; Rømer, Lone; Jørgensen, Peter B; Stilling, Maiken

    2017-06-01

    Background and purpose - Implant inducible micromotions have been suggested to reflect the quality of the fixation interface. We investigated the usability of dynamic RSA for evaluation of inducible micromotions of the Oxford Unicompartmental Knee Arthroplasty (UKA) tibial component, and evaluated factors that have been suggested to compromise the fixation, such as fixation method, component alignment, and radiolucent lines (RLLs). Patients and methods - 15 patients (12 men) with a mean age of 69 (55-86) years, with an Oxford UKA (7 cemented), were studied after a mean time in situ of 4.4 (3.6-5.1) years. 4 had tibial RLLs. Each patient was recorded with dynamic RSA (10 frames/second) during a step-up/step-down motion. Inducible micromotions were calculated for the tibial component with respect to the tibia bone. Postoperative component alignment was measured with model-based RSA and RLLs were measured on screened radiographs. Results - All tibial components showed inducible micromotions as a function of the step-cycle motion with a mean subsidence of up to -0.06 mm (95% CI: -0.10 to -0.03). Tibial component inducible micromotions were similar for cemented fixation and cementless fixation. Patients with tibial RLLs had 0.5° (95% CI: 0.18-0.81) greater inducible medio-lateral tilt of the tibial component. There was a correlation between postoperative posterior slope of the tibial plateau and inducible anterior-posterior tilt. Interpretation - All patients had inducible micromotions of the tibial component during step-cycle motion. RLLs and a high posterior slope increased the magnitude of inducible micromotions. This suggests that dynamic RSA is a valuable clinical tool for the evaluation of functional implant fixation.

  7. Energy hyperspace for stacking interaction in AU/AU dinucleotide step: Dispersion-corrected density functional theory study.

    PubMed

    Mukherjee, Sanchita; Kailasam, Senthilkumar; Bansal, Manju; Bhattacharyya, Dhananjay

    2014-01-01

    Double helical structures of DNA and RNA are mostly determined by base pair stacking interactions, which give them the base sequence-directed features, such as small roll values for the purine-pyrimidine steps. Earlier attempts to characterize stacking interactions were mostly restricted to calculations on fiber diffraction geometries or ab initio optimized structures, which lack the geometric variation needed to explain the unusually large roll values observed in the AU/AU base pair step in crystal structures of RNA double helices. We have generated a stacking energy hyperspace by modeling geometries with variations along the important degrees of freedom, roll and slide, which were chosen via statistical analysis as maximally sequence dependent. Corresponding energy contours were constructed by several quantum chemical methods including dispersion corrections. This analysis established the most suitable methods for stacked base pair systems, despite the limitation that the number of atoms in a base pair step precludes employing a very high level of theory. All the methods predict a negative roll value and near-zero slide to be most favorable for the purine-pyrimidine steps, in agreement with Calladine's steric clash based rule. Successive base pairs in RNA are always linked by a sugar-phosphate backbone with C3'-endo sugars, and this demands a C1'-C1' distance of about 5.4 Å along the chains. Adding an energy penalty term for deviation of the C1'-C1' distance from the mean value to the recent DFT-D functionals, specifically ωB97X-D, appears to yield a reliable energy contour for the AU/AU step. Such a distance-based penalty improves the energy contours for the other purine-pyrimidine sequences as well. © 2013 Wiley Periodicals, Inc. Biopolymers 101: 107-120, 2014.

  8. Coincidence measurements following 2p photoionization in Mg

    NASA Astrophysics Data System (ADS)

    Sokell, E.; Bolognesi, P.; Safgren, S.; Avaldi, L.

    2014-04-01

    Triple Differential Cross-Section (TDCS) measurements have been made to investigate the 2p photoionization of magnesium. In the experiment the photoelectron and the L3-M1M1 Auger electron have been detected in coincidence at four distinct photon energies from 7 to 40 eV above the 2p threshold. Auger decay is usually treated as a two-step process, where the intermediate single-hole state links the photoionization and decay processes. These measurements allow the investigation of the process as a function of excess energy, and specifically a test of the validity of the two-step model as the ionization threshold is approached.

  9. Disruption of striatal-enriched protein tyrosine phosphatase (STEP) function in neuropsychiatric disorders

    PubMed Central

    Karasawa, Takatoshi; Lombroso, Paul J.

    2014-01-01

    Striatal-enriched protein tyrosine phosphatase (STEP) is a brain-specific tyrosine phosphatase that plays a major role in the development of synaptic plasticity. Recent findings have implicated STEP in several psychiatric and neurological disorders, including Alzheimer’s disease, schizophrenia, fragile X syndrome, Huntington’s disease, stroke/ischemia, and stress-related psychiatric disorders. In these disorders, STEP protein expression levels and activity are dysregulated, contributing to the cognitive deficits that are present. In this review, we focus on the most recent findings on STEP, discuss how STEP expression and activity are maintained during normal cognitive function, and how disruptions in STEP activity contribute to a number of illnesses. PMID:25218562

  10. GOMA: functional enrichment analysis tool based on GO modules

    PubMed Central

    Huang, Qiang; Wu, Ling-Yun; Wang, Yong; Zhang, Xiang-Sun

    2013-01-01

    Analyzing the function of gene sets is a critical step in interpreting the results of high-throughput experiments in systems biology. A variety of enrichment analysis tools have been developed in recent years, but most output a long list of significantly enriched terms that are often redundant, making it difficult to extract the most meaningful functions. In this paper, we present GOMA, a novel enrichment analysis method based on the new concept of enriched functional Gene Ontology (GO) modules. With this method, we systematically revealed functional GO modules, i.e., groups of functionally similar GO terms, via an optimization model and then ranked them by enrichment scores. Our new method simplifies enrichment analysis results by reducing redundancy, thereby preventing inconsistent enrichment results among functionally similar terms and providing more biologically meaningful results. PMID:23237213
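    The per-term test that enrichment tools build on is the hypergeometric tail probability; a minimal sketch is below (GOMA's module-level aggregation and optimization model are not reproduced here, and the gene counts are invented):

```python
from math import comb

def enrichment_pvalue(k, n, K, N):
    """Hypergeometric tail P(X >= k): chance of seeing at least k genes
    annotated to a GO term (K of N background genes) in a study set of n."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(n, K) + 1)) / comb(N, n)

# 8 of 20 study genes hit a term covering 5% of a 1000-gene background.
p = enrichment_pvalue(k=8, n=20, K=50, N=1000)
```

    Grouping functionally similar terms into modules, as GOMA does, then reduces the redundancy that a flat list of such per-term p-values produces.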

  11. On the shape of martian dust and water ice aerosols

    NASA Astrophysics Data System (ADS)

    Pitman, K. M.; Wolff, M. J.; Clancy, R. T.; Clayton, G. C.

    2000-10-01

    Researchers have often calculated radiative properties of Martian aerosols using either Mie theory for homogeneous spheres or semi-empirical theories. Given that these atmospheric particles are randomly oriented, this approach seems fairly reasonable. However, the idea that randomly oriented nonspherical particles have scattering properties equivalent to even a select subset of spheres is demonstrably false (Bohren and Huffman 1983; Bohren and Koh 1985, Appl. Optics, 24, 1023). Fortunately, recent computational developments now enable us to directly compute scattering properties for nonspherical particles. We have combined a numerical approach for axisymmetric particle shapes, i.e., cylinders, disks, spheroids (Waterman's T-Matrix approach as improved by Mishchenko and collaborators; cf. Mishchenko et al. 1997, JGR, 102, D14, 16,831), with a multiple-scattering radiative transfer algorithm to constrain the shape of water ice and dust aerosols. We utilize a two-stage iterative process. First, we empirically derive a scattering phase function for each aerosol component (starting with some "guess") from radiative transfer models of MGS Thermal Emission Spectrometer Emission Phase Function (EPF) sequences (for details on this step, see Clancy et al., DPS 2000). Next, we perform a series of scattering calculations, adjusting our parameters to arrive at a "best-fit" theoretical phase function. In this presentation, we provide details on the second step in our analysis, including the derived phase functions (for several characteristic EPF sequences) as well as the particle properties of the best-fit theoretical models. We provide a sensitivity analysis for the EPF model-data comparisons in terms of perturbations in the particle properties (i.e., range of axial ratios, sizes, refractive indices, etc.). This work is supported through NASA grant NAGS-9820 (MJW) and JPL contract no. 961471 (RTC).
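    Phase-function retrievals of this kind are often seeded with an analytic form; a standard one-parameter choice (not necessarily the "guess" used in this work) is the Henyey-Greenstein function:

```python
import math

def henyey_greenstein(mu, g):
    """Henyey-Greenstein phase function of mu = cos(scattering angle), with
    asymmetry parameter g, normalized so (1/2) * integral over mu equals 1."""
    return (1.0 - g * g) / (1.0 + g * g - 2.0 * g * mu) ** 1.5

# Midpoint-rule check of the normalization for a forward-scattering g = 0.6.
n = 200000
norm = sum(henyey_greenstein(-1.0 + (i + 0.5) * 2.0 / n, 0.6)
           for i in range(n)) * (2.0 / n) / 2.0
```

    Positive g gives the forward-peaked scattering typical of dust; the iterative fit above then replaces such an analytic seed with an empirically derived phase function.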

  12. A step-by-step guide to non-linear regression analysis of experimental data using a Microsoft Excel spreadsheet.

    PubMed

    Brown, A M

    2001-06-01

    The objective of this present study was to introduce a simple, easily understood method for carrying out non-linear regression analysis based on user input functions. While it is relatively straightforward to fit data with simple functions such as linear or logarithmic functions, fitting data with more complicated non-linear functions is more difficult. Commercial specialist programmes are available that will carry out this analysis, but these programmes are expensive and are not intuitive to learn. An alternative method described here is to use the SOLVER function of the ubiquitous spreadsheet programme Microsoft Excel, which employs an iterative least squares fitting routine to produce the optimal goodness of fit between data and function. The intent of this paper is to lead the reader through an easily understood step-by-step guide to implementing this method, which can be applied to any function in the form y=f(x), and is well suited to fast, reliable analysis of data in all fields of biology.
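    The same iterative least-squares idea can be sketched outside Excel. The snippet below minimizes the sum of squared errors for any user-supplied function y = f(x, params) with a simple derivative-free coordinate search; it is analogous in spirit to what SOLVER does, not a reproduction of SOLVER's actual GRG algorithm, and the example data are synthetic:

```python
import math

def sse(params, f, xs, ys):
    """Sum of squared errors between data and the model f(x, params)."""
    return sum((y - f(x, params)) ** 2 for x, y in zip(xs, ys))

def fit(f, xs, ys, params, step=1.0, tol=1e-9):
    """Minimise the SSE by cyclic coordinate search with step halving,
    iterating on the same goodness-of-fit criterion SOLVER optimises."""
    params = list(params)
    best = sse(params, f, xs, ys)
    while step > tol:
        improved = False
        for i in range(len(params)):
            for delta in (step, -step):
                trial = params[:]
                trial[i] += delta
                s = sse(trial, f, xs, ys)
                if s < best:
                    params, best, improved = trial, s, True
        if not improved:
            step /= 2.0
    return params, best

# Recover a = 2.0, b = 0.7 from noise-free y = a * exp(-b * x) data.
xs = [0, 1, 2, 3, 4, 5]
ys = [2.0 * math.exp(-0.7 * x) for x in xs]
params, best = fit(lambda x, p: p[0] * math.exp(-p[1] * x), xs, ys, [1.0, 0.1])
```

    As in the spreadsheet method, only the residual sum of squares and a starting guess are needed, so the same routine applies to any function of the form y = f(x).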

  13. Computational Phenotyping in Psychiatry: A Worked Example

    PubMed Central

    2016-01-01

    Computational psychiatry is a rapidly emerging field that uses model-based quantities to infer the behavioral and neuronal abnormalities that underlie psychopathology. If successful, this approach promises key insights into (pathological) brain function as well as a more mechanistic and quantitative approach to psychiatric nosology—structuring therapeutic interventions and predicting response and relapse. The basic procedure in computational psychiatry is to build a computational model that formalizes a behavioral or neuronal process. Measured behavioral (or neuronal) responses are then used to infer the model parameters of a single subject or a group of subjects. Here, we provide an illustrative overview of this process, starting from the modeling of choice behavior in a specific task, simulating data, and then inverting that model to estimate group effects. Finally, we illustrate cross-validation to assess whether between-subject variables (e.g., diagnosis) can be recovered successfully. Our worked example uses a simple two-step maze task and a model of choice behavior based on (active) inference and Markov decision processes. The procedural steps and routines we illustrate are not restricted to a specific field of research or particular computational model but can, in principle, be applied in many domains of computational psychiatry. PMID:27517087
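    The simulate-then-invert loop can be sketched with a much simpler choice model than the paper's Markov decision process: a softmax over option values, with the inverse temperature recovered by maximum likelihood. The option values, true parameter, and trial count below are arbitrary illustrations:

```python
import math, random

def softmax_choice(values, beta, rng):
    """Sample one choice from a softmax over option values
    (beta = inverse temperature, the parameter to be recovered)."""
    weights = [math.exp(beta * v) for v in values]
    r = rng.random() * sum(weights)
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(values) - 1

def neg_log_likelihood(beta, values, choices):
    """Negative log-likelihood of the observed choices under the model."""
    z = sum(math.exp(beta * v) for v in values)
    logp = [beta * v - math.log(z) for v in values]
    return -sum(logp[c] for c in choices)

rng = random.Random(0)
values = [0.0, 1.0]            # arbitrary option values for the toy task
true_beta = 2.0
choices = [softmax_choice(values, true_beta, rng) for _ in range(2000)]

# "Model inversion": maximum-likelihood estimate over a parameter grid.
grid = [b / 100.0 for b in range(1, 501)]
beta_hat = min(grid, key=lambda b: neg_log_likelihood(b, values, choices))
```

    Recovering the generating parameter from simulated data, as here, is the sanity check that precedes fitting real subjects' choices.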

  14. A Near-Wall Reynolds-Stress Closure without Wall Normals

    NASA Technical Reports Server (NTRS)

    Yuan, S. P.; So, R. M. C.

    1997-01-01

    With the aid of near-wall asymptotic analysis and results of direct numerical simulation, a new near-wall Reynolds stress model (NNWRS) is formulated based on the high-Reynolds-number SSG stress model with wall-independent near-wall corrections. Only one damping function is used for flows over a wide range of Reynolds numbers to ensure that the near-wall modifications diminish away from the walls. The model is able to reproduce complicated flow phenomena induced by complex geometry, such as flow recirculation, reattachment and boundary-layer redevelopment in backward-facing step flow and secondary flow in three-dimensional square duct flow. In simple flows, including fully developed channel/pipe flow, Couette flow and boundary-layer flow, the wall effects are dominant, and the NNWRS model predicts a lesser degree of turbulent anisotropy in the near-wall region than a wall-dependent near-wall Reynolds stress model (NWRS) developed by So and colleagues. The comparison of the predictions given by the two models rectifies the misconception that the overshooting of the skin friction coefficient in backward-facing step flow, prevalent in near-wall models with wall normals, is caused by the use of wall normals.

  15. Computational Phenotyping in Psychiatry: A Worked Example.

    PubMed

    Schwartenbeck, Philipp; Friston, Karl

    2016-01-01

    Computational psychiatry is a rapidly emerging field that uses model-based quantities to infer the behavioral and neuronal abnormalities that underlie psychopathology. If successful, this approach promises key insights into (pathological) brain function as well as a more mechanistic and quantitative approach to psychiatric nosology-structuring therapeutic interventions and predicting response and relapse. The basic procedure in computational psychiatry is to build a computational model that formalizes a behavioral or neuronal process. Measured behavioral (or neuronal) responses are then used to infer the model parameters of a single subject or a group of subjects. Here, we provide an illustrative overview of this process, starting from the modeling of choice behavior in a specific task, simulating data, and then inverting that model to estimate group effects. Finally, we illustrate cross-validation to assess whether between-subject variables (e.g., diagnosis) can be recovered successfully. Our worked example uses a simple two-step maze task and a model of choice behavior based on (active) inference and Markov decision processes. The procedural steps and routines we illustrate are not restricted to a specific field of research or particular computational model but can, in principle, be applied in many domains of computational psychiatry.

  16. Modeling cereal starch hydrolysis during simultaneous saccharification and lactic acid fermentation; case of a sorghum-based fermented beverage, gowé.

    PubMed

    Mestres, Christian; Bettencourt, Munanga de J C; Loiseau, Gérard; Matignon, Brigitte; Grabulos, Joël; Achir, Nawel

    2017-10-01

    Gowé is an acidic beverage obtained after simultaneous saccharification and fermentation (SSF) of sorghum. A previous paper focused on modeling the growth of lactic acid bacteria during gowé processing. This paper focuses on modeling starch amylolysis to build an aggregated SSF model. The activity of α-amylase was modeled as a function of temperature and pH, and the hydrolysis rates of both native and soluble starch were modeled via a Michaelis-Menten equation taking into account the maltose and glucose inhibition constants. The robustness of the parameter estimators was ensured by step-by-step identification in sets of experiments conducted with different proportions of native and gelatinized starch by modifying the pre-cooking temperature. The aggregated model was validated on experimental data and showed that both the pre-cooking and fermentation parameters, particularly temperature, are significant levers for controlling not only acid and sugar contents but also the expected viscosity of the final product. This generic approach could be used as a tool to optimize the sanitary and sensory quality of fermentation of other starchy products. Copyright © 2017 Elsevier Ltd. All rights reserved.
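    A minimal sketch of the saccharification part of such a model is a Michaelis-Menten rate with competitive product inhibition, integrated by forward Euler. The kinetic constants below are illustrative assumptions, not the paper's fitted estimators:

```python
def hydrolysis(S0, vmax, km, ki, dt=0.01, t_end=10.0):
    """Residual substrate after forward-Euler integration of
    dS/dt = -vmax * S / (km * (1 + P / ki) + S), with product P = S0 - S
    acting as a competitive (maltose-like) inhibitor."""
    S, t = S0, 0.0
    while t < t_end:
        P = S0 - S
        rate = vmax * S / (km * (1.0 + P / ki) + S)
        S = max(S - rate * dt, 0.0)
        t += dt
    return S

# Stronger product inhibition (smaller ki) leaves more unhydrolysed substrate.
residual_weak = hydrolysis(S0=100.0, vmax=20.0, km=10.0, ki=1000.0)
residual_strong = hydrolysis(S0=100.0, vmax=20.0, km=10.0, ki=5.0)
```

    In the full SSF model this rate law is coupled to temperature- and pH-dependent enzyme activity and to the bacterial growth model of the earlier paper.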

  17. Metal Transport across Biomembranes: Emerging Models for a Distinct Chemistry*

    PubMed Central

    Argüello, José M.; Raimunda, Daniel; González-Guerrero, Manuel

    2012-01-01

    Transition metals are essential components of important biomolecules, and their homeostasis is central to many life processes. Transmembrane transporters are key elements controlling the distribution of metals in various compartments. However, due to their chemical properties, transition elements require transporters with different structural-functional characteristics from those of alkali and alkali earth ions. Emerging structural information and functional studies have revealed distinctive features of metal transport. Among these are the relevance of multifaceted events involving metal transfer among participating proteins, the importance of coordination geometry at transmembrane transport sites, and the presence of the largely irreversible steps associated with vectorial transport. Here, we discuss how these characteristics shape novel transition metal ion transport models. PMID:22389499

  18. Metal transport across biomembranes: emerging models for a distinct chemistry.

    PubMed

    Argüello, José M; Raimunda, Daniel; González-Guerrero, Manuel

    2012-04-20

    Transition metals are essential components of important biomolecules, and their homeostasis is central to many life processes. Transmembrane transporters are key elements controlling the distribution of metals in various compartments. However, due to their chemical properties, transition elements require transporters with different structural-functional characteristics from those of alkali and alkali earth ions. Emerging structural information and functional studies have revealed distinctive features of metal transport. Among these are the relevance of multifaceted events involving metal transfer among participating proteins, the importance of coordination geometry at transmembrane transport sites, and the presence of the largely irreversible steps associated with vectorial transport. Here, we discuss how these characteristics shape novel transition metal ion transport models.

  19. Drought impact functions as intermediate step towards drought damage assessment

    NASA Astrophysics Data System (ADS)

    Bachmair, Sophie; Svensson, Cecilia; Prosdocimi, Ilaria; Hannaford, Jamie; Helm Smith, Kelly; Svoboda, Mark; Stahl, Kerstin

    2016-04-01

    While damage or vulnerability functions for floods and seismic hazards have gained considerable attention, there is comparably little knowledge on drought damage or loss. On the one hand this is due to the complexity of the drought hazard affecting different domains of the hydrological cycle and different sectors of human activity. Hence, a single hazard indicator is likely not able to fully capture this multifaceted hazard. On the other hand, drought impacts are often non-structural and hard to quantify or monetize. Examples are impaired navigability of streams, restrictions on domestic water use, reduced hydropower production, reduced tree growth, and irreversible deterioration/loss of wetlands. Apart from reduced crop yield, data about drought damage or loss with adequate spatial and temporal resolution is scarce, making the development of drought damage functions difficult. As an intermediate step towards drought damage functions we exploit text-based reports on drought impacts from the European Drought Impact report Inventory and the US Drought Impact Reporter to derive surrogate information for drought damage or loss. First, text-based information on drought impacts is converted into time series of absence versus presence of impacts, or number of impact occurrences. Second, meaningful hydro-meteorological indicators characterizing drought intensity are identified. Third, different statistical models are tested as link functions relating drought hazard indicators with drought impacts: 1) logistic regression for drought impacts coded as binary response variable; and 2) mixture/hurdle models (zero-inflated/zero-altered negative binomial regression) and an ensemble regression tree approach for modeling the number of drought impact occurrences. 
Testing the predictability of (number of) drought impact occurrences based on cross-validation revealed a good agreement between observed and modeled (number of) impacts for regions at the scale of federal states or provinces with good data availability. Impact functions representing localized drought impacts are more challenging to construct given that less data is available, yet may provide information that more directly addresses stakeholders' needs. Overall, our study contributes insights into how drought intensity translates into ecological and socioeconomic impacts, and how such information may be used for enhancing drought monitoring and early warning.
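    The first link function mentioned above, logistic regression of impact presence on a hazard indicator, can be sketched in a few lines, here fitted by plain gradient ascent. The indicator values and impact labels are invented for illustration, not taken from the impact inventories:

```python
import math

def fit_logistic(x, y, lr=0.1, epochs=5000):
    """Fit P(impact | indicator) = sigmoid(a + b * x) by batch gradient
    ascent on the Bernoulli log-likelihood."""
    a = b = 0.0
    n = float(len(x))
    for _ in range(epochs):
        ga = gb = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(a + b * xi)))
            ga += yi - p
            gb += (yi - p) * xi
        a += lr * ga / n
        b += lr * gb / n
    return a, b

# Invented data: impacts present when a standardized drought index is low.
x = [-3.0, -2.5, -2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5]
y = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
a, b = fit_logistic(x, y)
```

    A negative slope means impacts become more probable as the index falls, which is the direction the impact functions above are meant to capture.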

  20. Nonlinear mathematical modeling and sensitivity analysis of hydraulic drive unit

    NASA Astrophysics Data System (ADS)

    Kong, Xiangdong; Yu, Bin; Quan, Lingxiao; Ba, Kaixian; Wu, Liujie

    2015-09-01

    Previous sensitivity analyses are not sufficiently accurate and have limited reference value, because their mathematical models are relatively simple, changes in the load and in the initial displacement of the piston are ignored, and no experimental verification is conducted. Therefore, in view of the deficiencies above, a nonlinear mathematical model is established in this paper, including the dynamic characteristics of the servo valve, nonlinear pressure-flow characteristics, the initial displacement of the servo cylinder piston and friction nonlinearity. The transfer function block diagram is built for the hydraulic drive unit closed-loop position control, as well as the state equations. By deriving the time-varying coefficient matrix and the time-varying free-term matrix of the sensitivity equations, the expressions of the sensitivity equations based on the nonlinear mathematical model are obtained. According to the structure parameters of the hydraulic drive unit, working parameters, fluid transmission characteristics and measured friction-velocity curves, simulation analysis of the hydraulic drive unit is completed on the MATLAB/Simulink platform with displacement steps of 2 mm, 5 mm and 10 mm. Comparison of the experimental and simulated step-response curves under different constant loads indicates that the developed nonlinear mathematical model is adequate. The sensitivity function time-history curves of seventeen parameters are then obtained from the state-vector time-history curves of the step-response characteristic. The maximum displacement-variation percentage and the sum of the absolute displacement variations over the sampling time are both taken as sensitivity indexes. These sensitivity index values are calculated and shown in histograms under different working conditions, and their change rules are analyzed. 
The sensitivity index values of four measurable parameters, namely supply pressure, proportional gain, initial position of the servo cylinder piston and load force, are then verified experimentally on the hydraulic drive unit test platform, and the experiments show that the sensitivity analysis results obtained through simulation approximate the test results. This research identifies the sensitivity characteristics of each parameter of the hydraulic drive unit; the main and secondary performance-affecting parameters are identified under different working conditions, providing a theoretical foundation for control compensation and structure optimization of the hydraulic drive unit.
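    The perturbation idea behind such sensitivity indexes can be sketched on a stand-in first-order model (not the paper's full nonlinear hydraulic model): perturb one parameter, rerun the step response, and take the maximum displacement-variation percentage. The gain and time constant below are invented:

```python
import math

def step_response(K, tau, t_end=5.0, dt=0.01):
    """Step response of a first-order stand-in system: x(t) = K (1 - e^(-t/tau))."""
    n = int(t_end / dt)
    return [K * (1.0 - math.exp(-i * dt / tau)) for i in range(n + 1)]

def sensitivity_index(param, base, perturb=0.1):
    """Maximum displacement-variation percentage between the nominal step
    response and the response with one parameter perturbed by 10%."""
    nominal = step_response(**base)
    changed = dict(base)
    changed[param] *= 1.0 + perturb
    perturbed = step_response(**changed)
    ref = max(abs(v) for v in nominal)
    return max(abs(p - q) for p, q in zip(perturbed, nominal)) / ref * 100.0

base = {"K": 10.0, "tau": 0.5}        # invented gain and time constant
s_gain = sensitivity_index("K", base)
s_tau = sensitivity_index("tau", base)
```

    Ranking such indexes across parameters is how the main and secondary performance-affecting parameters are separated.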

  1. e-CBT (myCompass), Antidepressant Medication, and Face-to-Face Psychological Treatment for Depression in Australia: A Cost-Effectiveness Comparison

    PubMed Central

    2015-01-01

    Background: The economic cost of depression is becoming an ever more important determinant for health policy and decision makers. Internet-based interventions with and without therapist support have been found to be effective options for the treatment of mild to moderate depression. With increasing demands on health resources and shortages of mental health care professionals, the integration of cost-effective treatment options such as Internet-based programs into primary health care could increase efficiency in terms of resource use and costs. Objective: Our aim was to evaluate the cost-effectiveness of an Internet-based intervention (myCompass) for the treatment of mild-to-moderate depression compared to treatment as usual and cognitive behavior therapy in a stepped care model. Methods: A decision model was constructed using a cost utility framework to show both costs and health outcomes. In accordance with current treatment guidelines, a stepped care model included myCompass as the first low-intensity step in care for a proportion of the model cohort, with participants progressing from a low-intensity intervention to increasing levels of treatment. Model parameters were based on data from the recent randomized controlled trial of myCompass, which showed that the intervention reduced symptoms of depression, anxiety, and stress and improved work and social functioning for people with symptoms in the mild-to-moderate range. Results: The average net monetary benefit (NMB) was calculated, identifying myCompass as the strategy with the highest net benefit. The mean incremental NMB per individual for the myCompass group was AUD 1165.88 compared to treatment as usual and AUD 522.58 compared to the cognitive behavioral therapy model. Conclusions: Internet-based interventions can provide cost-effective access to treatment when provided as part of a stepped care model. Widespread dissemination of Internet-based programs can potentially reduce demands on primary and tertiary services and reduce unmet need. PMID:26561555
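    The net-monetary-benefit criterion used in such comparisons is simple arithmetic: health effects valued at a willingness-to-pay threshold, minus costs. A sketch with invented numbers (not the trial's figures, and an assumed AUD 50,000 threshold):

```python
def net_monetary_benefit(qalys, cost, wtp=50000.0):
    """NMB = health effect valued at a willingness-to-pay threshold minus cost;
    the strategy with the highest NMB is preferred."""
    return qalys * wtp - cost

# Incremental NMB of a hypothetical stepped-care arm over treatment as usual.
inmb = net_monetary_benefit(0.04, 900.0) - net_monetary_benefit(0.02, 500.0)
```

    A positive incremental NMB, as reported for myCompass, means the extra health gain outweighs the extra cost at the chosen threshold.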

  2. e-CBT (myCompass), Antidepressant Medication, and Face-to-Face Psychological Treatment for Depression in Australia: A Cost-Effectiveness Comparison.

    PubMed

    Solomon, Daniela; Proudfoot, Judith; Clarke, Janine; Christensen, Helen

    2015-11-11

    The economic cost of depression is becoming an ever more important determinant for health policy and decision makers. Internet-based interventions with and without therapist support have been found to be effective options for the treatment of mild to moderate depression. With increasing demands on health resources and shortages of mental health care professionals, the integration of cost-effective treatment options such as Internet-based programs into primary health care could increase efficiency in terms of resource use and costs. Our aim was to evaluate the cost-effectiveness of an Internet-based intervention (myCompass) for the treatment of mild-to-moderate depression compared to treatment as usual and cognitive behavior therapy in a stepped care model. A decision model was constructed using a cost utility framework to show both costs and health outcomes. In accordance with current treatment guidelines, a stepped care model included myCompass as the first low-intensity step in care for a proportion of the model cohort, with participants progressing from a low-intensity intervention to increasing levels of treatment. Model parameters were based on data from the recent randomized controlled trial of myCompass, which showed that the intervention reduced symptoms of depression, anxiety, and stress and improved work and social functioning for people with symptoms in the mild-to-moderate range. The average net monetary benefit (NMB) was calculated, identifying myCompass as the strategy with the highest net benefit. The mean incremental NMB per individual for the myCompass group was AUD 1165.88 compared to treatment as usual and AUD 522.58 compared to the cognitive behavioral therapy model. Internet-based interventions can provide cost-effective access to treatment when provided as part of a stepped care model. Widespread dissemination of Internet-based programs can potentially reduce demands on primary and tertiary services and reduce unmet need.

  3. Modeling bed load transport and step-pool morphology with a reduced-complexity approach

    NASA Astrophysics Data System (ADS)

    Saletti, Matteo; Molnar, Peter; Hassan, Marwan A.; Burlando, Paolo

    2016-04-01

Steep mountain channels are complex fluvial systems in which classical methods developed for lowland streams fail to capture the dynamics of sediment transport and bed morphology. Estimates of sediment transport based on average conditions carry more than an order of magnitude of uncertainty because of the wide grain-size distribution of the bed material, the small relative submergence of coarse grains, the episodic character of sediment supply, and the complex boundary conditions. Most notably, bed load transport is modulated by the structure of the bed, where grains are imbricated in steps and similar bedforms and are therefore much more stable than predicted. In this work we propose a new model based on a reduced-complexity (RC) approach focused on reproducing the step-pool morphology. In our 2-D cellular-automaton model, entrainment, transport and deposition of particles are handled via intuitive rules based on physical principles. A parsimonious set of parameters controls the behavior of the system, and the basic processes can be treated deterministically or stochastically. The probability of entrainment of grains (and, as a consequence, particle travel distances and resting times) is a function of flow conditions and bed topography. Sediment is fed at the upper boundary of the channel at a constant or variable rate. Our model yields realistic results in terms of longitudinal bed profiles and sediment transport trends. Phases of aggradation and degradation can be observed in the channel even under a constant input, and the memory of the morphology can be quantified with long-range persistence indicators. Sediment yield at the channel outlet shows intermittency, as observed in natural streams. Steps are self-formed in the channel and their stability is tested against the model parameters. Our results show the potential of RC models as complementary tools to more sophisticated models. They provide a realistic description of complex morphological systems and help to better identify the key physical principles that rule their dynamics.
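
    The entrainment-transport-deposition rules described above can be caricatured in a much-simplified 1-D sketch; the probability rule, parameters, and lattice below are illustrative assumptions, not the 2-D model of the paper:

    ```python
    import random

    # Much-simplified 1-D sketch of reduced-complexity bed-load rules: each
    # cell holds a column of grains; the entrainment probability grows with
    # the local downstream drop (a crude stand-in for flow conditions and
    # bed topography). All parameters are illustrative.

    def step(bed, p0=0.3, feed=1, rng=random):
        """One model iteration: feed at the inlet, entrain and move grains."""
        bed[0] += feed                      # constant sediment input upstream
        out = 0
        for i in range(len(bed)):
            if bed[i] == 0:
                continue
            drop = bed[i] - (bed[i + 1] if i + 1 < len(bed) else 0)
            p = min(1.0, p0 * (1.0 + max(drop, 0)))  # steeper drop -> easier entrainment
            if rng.random() < p:
                bed[i] -= 1
                if i + 1 < len(bed):
                    bed[i + 1] += 1        # deposit one cell downstream
                else:
                    out += 1               # grain leaves at the outlet
        return out

    rng = random.Random(42)
    bed = [0] * 20
    yield_series = [step(bed, rng=rng) for _ in range(500)]
    ```

    Even this toy version shows the intermittent outlet yield under constant input that the abstract describes, because grains spend variable resting times in the lattice.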

  4. Method of development of the program of forming of parametrical drawings of details in the AutoCAD software product

    NASA Astrophysics Data System (ADS)

    Alshakova, E. L.

    2017-01-01

A program written in the AutoLISP language can generate parametric drawings automatically during work in the AutoCAD software product. Students study the development of AutoLISP programs using a methodical complex containing instructions in which real examples of creating images and drawings are implemented. The instructions contain the reference information necessary to perform the proposed tasks. Training in AutoLISP programming is based on step-by-step development of the program: the program draws the elements of a detail drawing by means of a purpose-built function whose argument values are entered in the same sequence in which AutoCAD issues prompts when the corresponding command is executed in the editor. The program design process thus reduces to the step-by-step formation of functions and of the sequence of their calls. The author considers the development of AutoLISP programs for creating parametric drawings of details of a defined design, with the user entering the dimensions of the details' elements. These programs generate variants of the tasks for the graphic work performed in the "Engineering Graphics" and "Engineering and Computer Graphics" courses. Individual tasks allow students to develop skills of independent work in reading and creating drawings, as well as in 3D modeling.

  5. An impact of environmental changes on flows in the reach scale under a range of climatic conditions

    NASA Astrophysics Data System (ADS)

    Karamuz, Emilia; Romanowicz, Renata J.

    2016-04-01

The present paper combines detection and identification of the causes of changes in flow regime at cross-sections along the Middle River Vistula reach using different methods. Two main experimental set-ups (designs) were applied to study the changes: a moving three-year window and a low- and high-flow event-based approach. In the first experiment, a Stochastic Transfer Function (STF) model and a quantile-based statistical analysis of flow patterns were compared. These two methods are based on the analysis of changes in the STF model parameters and of standardised differences of flow quantile values. In the second experiment, in addition to the STF-based approach, a 1-D distributed model, MIKE11, was applied. The first step of the procedure used in the study is to define the river reaches that have recorded information on land use and water management changes. The second task is to perform the moving window analysis of standardised differences of flow quantiles and moving window optimisation of the STF model for flow routing. The third step consists of an optimisation of the STF and MIKE11 models for high- and low-flow events. The final step is to analyse the results and relate the standardised quantile changes and model parameter changes to historical land use changes and water management practices. Results indicate that both models give a consistent assessment of changes in the channel for medium and high flows. ACKNOWLEDGEMENTS This research was supported by the Institute of Geophysics Polish Academy of Sciences through the Young Scientist Grant no. 3b/IGF PAN/2015.
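
    The moving-window analysis of standardised flow-quantile differences (the second task above) can be sketched as follows; the window length, quantiles, and synthetic flow series are illustrative:

    ```python
    import statistics

    # Sketch of moving-window standardised quantile differences: for each
    # window, compute flow quantiles and express their change relative to a
    # reference period in units of the reference standard deviation.

    def quantile(xs, q):
        """Simple linear-interpolation quantile."""
        s = sorted(xs)
        pos = q * (len(s) - 1)
        lo, frac = int(pos), pos - int(pos)
        return s[lo] + frac * (s[min(lo + 1, len(s) - 1)] - s[lo])

    def standardised_quantile_changes(flows, window, ref, qs=(0.1, 0.5, 0.9)):
        """flows: flow series; ref: reference-period slice.
        Returns, per window start, the standardised difference per quantile."""
        ref_q = {q: quantile(ref, q) for q in qs}
        sd = statistics.stdev(ref)
        out = []
        for start in range(len(flows) - window + 1):
            win = flows[start:start + window]
            out.append({q: (quantile(win, q) - ref_q[q]) / sd for q in qs})
        return out

    ref = [10, 12, 11, 13, 9, 10, 12, 11]
    flows = ref + [14, 15, 16, 17, 18, 19]   # upward shift after the reference period
    changes = standardised_quantile_changes(flows, window=3, ref=ref)
    ```

    Windows inside the reference period show near-zero standardised differences, while the shifted tail produces large positive ones, which is the signal such an analysis relates to land-use or water-management changes.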

  6. Three Classes of Nonparametric Differential Step Functioning Effect Estimators

    ERIC Educational Resources Information Center

    Penfield, Randall D.

    2008-01-01

    The examination of measurement invariance in polytomous items is complicated by the possibility that the magnitude and sign of lack of invariance may vary across the steps underlying the set of polytomous response options, a concept referred to as differential step functioning (DSF). This article describes three classes of nonparametric DSF effect…

  7. The ac propulsion system for an electric vehicle, phase 1

    NASA Astrophysics Data System (ADS)

    Geppert, S.

    1981-08-01

A functional prototype of an electric vehicle ac propulsion system was built, consisting of an 18.65 kW rated ac induction traction motor, a pulse width modulated (PWM) transistorized inverter, a two-speed mechanically shifted automatic transmission, and an overall drive/vehicle controller. Design development steps and test results of individual components and of the complete system on an instrumented test frame are described. Computer models were developed for the inverter, motor and a representative vehicle. A preliminary reliability model and a failure modes and effects analysis are given.

  8. Theory of fiber-optic, evanescent-wave spectroscopy and sensors

    NASA Astrophysics Data System (ADS)

    Messica, A.; Greenstein, A.; Katzir, A.

    1996-05-01

A general theory of fiber-optic evanescent-wave spectroscopy and sensors is presented for straight, unclad, step-index, multimode fibers. A three-dimensional model is formulated within the framework of geometric optics. The model includes various launching conditions, input and output end-face Fresnel transmission losses, multiple Fresnel reflections, bulk absorption, and evanescent-wave absorption. The evanescent-wave sensor response is analyzed as a function of externally controlled parameters such as coupling angle, f number, fiber length, and diameter. Conclusions are drawn for several experimental apparatuses.
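
    Within the geometric-optics picture used here, the evanescent-field penetration depth at the core-cladding interface follows the standard expression d_p = λ / (2π·sqrt(n₁² sin²θ − n₂²)), valid above the critical angle. A small sketch; the fibre indices and wavelength below are illustrative, not taken from the paper:

    ```python
    import math

    # Standard evanescent-field penetration depth for a step-index fibre,
    # geometric-optics picture. theta is the angle of incidence at the
    # core-cladding interface, measured from the interface normal.

    def penetration_depth(wavelength, n1, n2, theta):
        arg = (n1 * math.sin(theta)) ** 2 - n2 ** 2
        if arg <= 0:
            raise ValueError("below the critical angle: no total internal reflection")
        return wavelength / (2 * math.pi * math.sqrt(arg))

    # Illustrative mid-IR values: core index 2.2, air cladding, 10.6 um light.
    dp_steep = penetration_depth(10.6e-6, n1=2.2, n2=1.0, theta=math.radians(80))
    dp_near_critical = penetration_depth(10.6e-6, n1=2.2, n2=1.0, theta=math.radians(30))
    ```

    The depth diverges as the ray approaches the critical angle (about 27 degrees for these indices), which is why launching conditions and coupling angle enter the sensor response.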

  9. The ac propulsion system for an electric vehicle, phase 1

    NASA Technical Reports Server (NTRS)

    Geppert, S.

    1981-01-01

A functional prototype of an electric vehicle ac propulsion system was built, consisting of an 18.65 kW rated ac induction traction motor, a pulse width modulated (PWM) transistorized inverter, a two-speed mechanically shifted automatic transmission, and an overall drive/vehicle controller. Design development steps and test results of individual components and of the complete system on an instrumented test frame are described. Computer models were developed for the inverter, motor and a representative vehicle. A preliminary reliability model and a failure modes and effects analysis are given.

  10. One-Dimensional Fast Transient Simulator for Modeling Cadmium Sulfide/Cadmium Telluride Solar Cells

    NASA Astrophysics Data System (ADS)

    Guo, Da

Solar energy, including solar heating, solar architecture, solar thermal electricity and solar photovoltaics, is one of the primary alternative energy sources to fossil fuel. Significant research has been conducted on solar cell efficiency improvement. Simulation of various solar cell structures and materials provides a deeper understanding of device operation and of ways to improve efficiency. Over the last two decades, polycrystalline thin-film Cadmium-Sulfide and Cadmium-Telluride (CdS/CdTe) solar cells fabricated on glass substrates have been considered one of the most promising candidates among photovoltaic technologies, owing to their comparable efficiency and lower cost relative to traditional silicon-based solar cells. In this work, a fast, one-dimensional, time-dependent/steady-state drift-diffusion simulator for modeling solar cells, accelerated by an adaptive non-uniform mesh and automatic time-step control, has been developed and used to simulate a CdS/CdTe solar cell. These models are used to reproduce transients of carrier transport in response to step-function signals of different bias and varied light intensity. The time-step control models are also used to help convergence in steady-state simulations where constrained material constants, such as carrier lifetimes on the order of nanoseconds and carrier mobilities on the order of 100 cm2/Vs, must be applied.
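
    Automatic time-step control of the general kind described can be sketched as a grow-on-success, shrink-on-failure loop; the solver stub and thresholds below are illustrative assumptions, not the simulator's actual scheme:

    ```python
    # Sketch of automatic time-step control for an implicit transient solve:
    # grow the step after easy (few-iteration) solves, shrink and retry after
    # convergence failures. 'advance' stands in for one implicit Newton step
    # of the drift-diffusion system; thresholds are illustrative.

    def adaptive_march(advance, t_end, dt0, dt_min=1e-15, dt_max=1e-6):
        """advance(t, dt) -> (converged, n_iterations) for one implicit step."""
        t, dt, history = 0.0, dt0, []
        while t < t_end:
            converged, iters = advance(t, dt)
            if not converged:
                dt = max(dt * 0.5, dt_min)      # reject: halve the step and retry
                continue
            t += dt
            history.append((t, dt))
            if iters <= 3:
                dt = min(dt * 2.0, dt_max)      # easy solve: grow the step
            elif iters >= 8:
                dt = max(dt * 0.5, dt_min)      # hard solve: be cautious
        return history

    # Stub solver: pretends steps above 1e-9 s fail to converge,
    # and that very small steps converge in few iterations.
    def fake_newton(t, dt):
        return (dt <= 1e-9, 2 if dt <= 1e-10 else 5)

    hist = adaptive_march(fake_newton, t_end=1e-8, dt0=1e-12)
    ```

    The payoff is that sharp transients (e.g. right after a step-function bias change) are resolved with tiny steps while quiescent stretches are crossed quickly.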

  11. Ankle Joint Intrinsic Dynamics is More Complex than a Mass-Spring-Damper Model.

    PubMed

    Sobhani Tehrani, Ehsan; Jalaleddini, Kian; Kearney, Robert E

    2017-09-01

This paper describes a new small-signal parametric model of ankle joint intrinsic mechanics in normal subjects. We found that intrinsic ankle mechanics is a third-order system and that the second-order mass-spring-damper model, referred to as IBK, used by many researchers in the literature cannot adequately represent ankle dynamics at all frequencies in a number of important tasks. This was demonstrated using experimental data from five healthy subjects with no voluntary muscle contraction and at seven ankle positions covering the range of motion. We showed that the difference between the new third-order model and the conventional IBK model increased from the dorsiflexed to the plantarflexed position. The new model was obtained using a multi-step identification procedure applied to experimental input/output data of the ankle joint. The procedure first identifies a non-parametric model of intrinsic joint stiffness, where ankle position is the input and torque is the output. Then, in several steps, the model is converted into a continuous-time transfer function of ankle compliance, which is the inverse of stiffness. Finally, we showed that the third-order model is structurally consistent with the agonist-antagonist musculoskeletal structure of the human ankle, which is not the case for the IBK model.
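
    One way to see how a third-order model can depart from the second-order IBK model only at higher frequencies is to cascade the IBK dynamics with an extra pole; the structure and parameter values below are illustrative, not the identified ankle model:

    ```python
    import math

    # Frequency responses of the second-order IBK stiffness model
    # (torque/position: I*s^2 + B*s + K) and a generic third-order variant
    # with one extra pole. Parameter values are illustrative only.

    def ibk_stiffness(w, I=0.01, B=0.5, K=150.0):
        s = complex(0.0, w)
        return I * s**2 + B * s + K

    def third_order_stiffness(w, I=0.01, B=0.5, K=150.0, tau=0.01):
        """IBK cascaded with a first-order lag (extra pole at 1/tau): one
        simple way a third-order system can match IBK at low frequency but
        diverge from it at high frequency."""
        s = complex(0.0, w)
        return (I * s**2 + B * s + K) / (tau * s + 1)

    w_lo = 2 * math.pi * 0.1   # low frequency: the two models nearly agree
    lo2, lo3 = abs(ibk_stiffness(w_lo)), abs(third_order_stiffness(w_lo))
    w_hi = 2 * math.pi * 50.0  # high frequency: the extra pole separates them
    hi2, hi3 = abs(ibk_stiffness(w_hi)), abs(third_order_stiffness(w_hi))
    ```

    This is why a second-order fit can look adequate over a narrow band yet fail "at all frequencies", as the abstract puts it.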

  12. Modified ADM1 disintegration/hydrolysis structures for modeling batch thermophilic anaerobic digestion of thermally pretreated waste activated sludge.

    PubMed

    Ramirez, Ivan; Mottet, Alexis; Carrère, Hélène; Déléris, Stéphane; Vedrenne, Fabien; Steyer, Jean-Philippe

    2009-08-01

Anaerobic digestion disintegration and hydrolysis have traditionally been modeled according to first-order kinetics, assuming that their rates do not depend on disintegration/hydrolytic biomass concentrations. However, the typical sigmoid-shaped increase in time of the disintegration/hydrolysis rates cannot be described with first-order models. For complex substrates, first-order kinetics should thus be modified to account for slowly degradable material. In this study, a slightly modified IWA ADM1 model is presented to simulate thermophilic anaerobic digestion of thermally pretreated waste activated sludge. A Contois model is first included for the disintegration and hydrolysis steps instead of first-order kinetics, and a Hill function is then used to model ammonia inhibition of aceticlastic methanogens instead of a non-competitive function. One batch experimental data set of anaerobic degradation of a raw waste activated sludge is used to calibrate the proposed model, and three additional data sets from similar sludge thermally pretreated at three different temperatures are used to validate the parameter values.
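
    The contrast between first-order and Contois hydrolysis kinetics can be sketched numerically; the rate constants and the constant-biomass simplification below are illustrative, not the calibrated ADM1 parameters:

    ```python
    # First-order hydrolysis (rate = k * S) versus the Contois form
    # (rate = km * X * S / (Ks * X + S)), with substrate S and hydrolytic
    # biomass X. Contois saturates in S and scales with biomass, which is
    # what lets it reproduce sigmoid degradation curves. Parameters are
    # illustrative; biomass is held constant here for clarity.

    def first_order_rate(S, k=0.5):
        return k * S

    def contois_rate(S, X, km=5.0, Ks=0.3):
        return km * X * S / (Ks * X + S)

    def deplete(rate, S0=10.0, dt=0.01, t_end=10.0):
        """Forward-Euler substrate depletion under the given rate law."""
        S, out = S0, []
        for _ in range(int(t_end / dt)):
            S = max(S - rate(S) * dt, 0.0)
            out.append(S)
        return out

    fo = deplete(first_order_rate)
    co = deplete(lambda S: contois_rate(S, X=1.0))
    ```

    With these numbers the Contois run depletes the substrate at a nearly constant rate while S is large, then exhausts it, whereas the first-order run decays exponentially and still retains measurable substrate at the end.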

  13. Secondary mediation and regression analyses of the PTClinResNet database: determining causal relationships among the International Classification of Functioning, Disability and Health levels for four physical therapy intervention trials.

    PubMed

    Mulroy, Sara J; Winstein, Carolee J; Kulig, Kornelia; Beneck, George J; Fowler, Eileen G; DeMuth, Sharon K; Sullivan, Katherine J; Brown, David A; Lane, Christianne J

    2011-12-01

    Each of the 4 randomized clinical trials (RCTs) hosted by the Physical Therapy Clinical Research Network (PTClinResNet) targeted a different disability group (low back disorder in the Muscle-Specific Strength Training Effectiveness After Lumbar Microdiskectomy [MUSSEL] trial, chronic spinal cord injury in the Strengthening and Optimal Movements for Painful Shoulders in Chronic Spinal Cord Injury [STOMPS] trial, adult stroke in the Strength Training Effectiveness Post-Stroke [STEPS] trial, and pediatric cerebral palsy in the Pediatric Endurance and Limb Strengthening [PEDALS] trial for children with spastic diplegic cerebral palsy) and tested the effectiveness of a muscle-specific or functional activity-based intervention on primary outcomes that captured pain (STOMPS, MUSSEL) or locomotor function (STEPS, PEDALS). The focus of these secondary analyses was to determine causal relationships among outcomes across levels of the International Classification of Functioning, Disability and Health (ICF) framework for the 4 RCTs. With the database from PTClinResNet, we used 2 separate secondary statistical approaches-mediation analysis for the MUSSEL and STOMPS trials and regression analysis for the STEPS and PEDALS trials-to test relationships among muscle performance, primary outcomes (pain related and locomotor related), activity and participation measures, and overall quality of life. Predictive models were stronger for the 2 studies with pain-related primary outcomes. Change in muscle performance mediated or predicted reductions in pain for the MUSSEL and STOMPS trials and, to some extent, walking speed for the STEPS trial. Changes in primary outcome variables were significantly related to changes in activity and participation variables for all 4 trials. Improvement in activity and participation outcomes mediated or predicted increases in overall quality of life for the 3 trials with adult populations. 
Variables included in the statistical models were limited to those measured in the 4 RCTs. It is possible that other variables also mediated or predicted the changes in outcomes. The relatively small sample size in the PEDALS trial limited statistical power for those analyses. Evaluating the mediators or predictors of change between each ICF level and for 2 fundamentally different outcome variables (pain versus walking) provided insights into the complexities inherent across 4 prevalent disability groups.

  14. A PDMS/paper/glass hybrid microfluidic biochip integrated with aptamer-functionalized graphene oxide nano-biosensors for one-step multiplexed pathogen detection.

    PubMed

    Zuo, Peng; Li, XiuJun; Dominguez, Delfina C; Ye, Bang-Ce

    2013-10-07

Infectious pathogens often cause serious public health concerns throughout the world. There is an increasing demand for simple, rapid and sensitive approaches for multiplexed pathogen detection. In this paper we have developed a polydimethylsiloxane (PDMS)/paper/glass hybrid microfluidic system integrated with aptamer-functionalized graphene oxide (GO) nano-biosensors for simple, one-step, multiplexed pathogen detection. The paper substrate used in this hybrid microfluidic system facilitated the integration of aptamer biosensors on the microfluidic biochip, and avoided the complicated surface treatment and aptamer probe immobilization required in a PDMS- or glass-only microfluidic system. Lactobacillus acidophilus was used as a bacterium model to develop the microfluidic platform, with a detection limit of 11.0 cfu mL(-1). We have also successfully extended this method to the simultaneous detection of two infectious pathogens - Staphylococcus aureus and Salmonella enterica. This method is simple and fast: the one-step 'turn on' pathogen assay in a ready-to-use microfluidic device takes only ~10 min to complete on the biochip. Furthermore, this microfluidic device has great potential for rapid detection of a wide variety of other bacterial and viral pathogens.

  15. A PDMS/paper/glass hybrid microfluidic biochip integrated with aptamer-functionalized graphene oxide nano-biosensors for one-step multiplexed pathogen detection

    PubMed Central

    Zuo, Peng; Dominguez, Delfina C.; Ye, Bang-Ce

    2014-01-01

Infectious pathogens often cause serious public health concerns throughout the world. There is an increasing demand for simple, rapid and sensitive approaches for multiplexed pathogen detection. In this paper we have developed a polydimethylsiloxane (PDMS)/paper/glass hybrid microfluidic system integrated with aptamer-functionalized graphene oxide (GO) nano-biosensors for simple, one-step, multiplexed pathogen detection. The paper substrate used in this hybrid microfluidic system facilitated the integration of aptamer biosensors on the microfluidic biochip, and avoided the complicated surface treatment and aptamer probe immobilization required in a PDMS- or glass-only microfluidic system. Lactobacillus acidophilus was used as a bacterium model to develop the microfluidic platform, with a detection limit of 11.0 cfu mL−1. We have also successfully extended this method to the simultaneous detection of two infectious pathogens - Staphylococcus aureus and Salmonella enterica. This method is simple and fast: the one-step ‘turn on’ pathogen assay in a ready-to-use microfluidic device takes only ~10 min to complete on the biochip. Furthermore, this microfluidic device has great potential for rapid detection of a wide variety of other bacterial and viral pathogens. PMID:23929394

  16. An Algorithm for Protein Helix Assignment Using Helix Geometry

    PubMed Central

    Cao, Chen; Xu, Shutan; Wang, Lincong

    2015-01-01

Helices are one of the most common, and were among the earliest recognized, secondary structure elements in proteins. The assignment of helices in a protein underlies the analysis of its structure and function. Though the mathematical expression for a helical curve is simple, no previous assignment programs have used a genuine helical curve as a model for helix assignment. In this paper we present a two-step assignment algorithm. The first step searches for a series of bona fide helical curves, each of which best fits the coordinates of four successive backbone Cα atoms. The second step uses the best-fit helical curves as input to make the helix assignment. Application to the protein structures in the PDB (Protein Data Bank) proves that the algorithm is able to assign accurately not only regular α-helices but also 3₁₀ and π helices, as well as their left-handed versions. One salient feature of the algorithm is that the assigned helices are structurally more uniform than those produced by previous programs. This structural uniformity should be useful for protein structure classification and prediction, while the accurate assignment of a helix to a particular type underlies the structure-function relationship in proteins. PMID:26132394
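
    A flavour of recovering helix geometry from four successive Cα positions, using second differences (a standard geometric trick, not the helical-curve fitting of the cited algorithm):

    ```python
    import math

    # For an ideal helix, the second-difference vectors of successive points
    # point radially inward, so their cross product gives the helix axis,
    # the angle between them the per-residue twist, and the projection of a
    # step onto the axis the rise.

    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
    def norm(a):
        m = math.sqrt(dot(a, a))
        return tuple(x / m for x in a)

    def helix_params(p1, p2, p3, p4):
        v1 = sub(sub(p1, p2), sub(p2, p3))   # p1 - 2*p2 + p3
        v2 = sub(sub(p2, p3), sub(p3, p4))   # p2 - 2*p3 + p4
        axis = norm(cross(v1, v2))
        twist = math.degrees(math.acos(dot(norm(v1), norm(v2))))
        rise = dot(sub(p3, p2), axis)
        return axis, twist, rise

    # Ideal alpha-helix geometry: radius ~2.3 A, twist ~100 deg, rise ~1.5 A.
    r, tw, d = 2.3, math.radians(100.0), 1.5
    pts = [(r*math.cos(i*tw), r*math.sin(i*tw), i*d) for i in range(4)]
    axis, twist, rise = helix_params(*pts)
    ```

    Real Cα coordinates are noisy, which is why a least-squares fit of a genuine helical curve, as in the paper, is preferable to this closed-form construction; but the four-atom window is the same.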

  17. Modular Bundle Adjustment for Photogrammetric Computations

    NASA Astrophysics Data System (ADS)

    Börlin, N.; Murtiyoso, A.; Grussenmeyer, P.; Menna, F.; Nocerino, E.

    2018-05-01

In this paper we investigate how the residuals in bundle adjustment can be split into a composition of simple functions. According to the chain rule, the Jacobian (linearisation) of the residual can be formed as a product of the Jacobians of the individual steps. When implemented, this enables a modularisation of the computation of the bundle adjustment residuals and Jacobians in which each component has limited responsibility. This allows simple replacement of components, e.g. to implement different projection or rotation models by exchanging a module. The technique has previously been used to implement bundle adjustment in the open-source package DBAT (Börlin and Grussenmeyer, 2013), based on the photogrammetric and computer vision interpretations of the Brown (1971) lens distortion model. In this paper, we applied the technique to investigate how affine distortions can be used to model the projection of a tilt-shift lens. Two extended distortion models were implemented to test the hypothesis that the ordering of the affine and lens distortion steps can be changed to reduce the size of the residuals of a tilt-shift lens calibration. Results on synthetic data confirm that the ordering of the affine and lens distortion steps matters and is detectable by DBAT. However, when applied to a real camera calibration data set of a tilt-shift lens, no difference between the extended models was seen. This suggests that the tested hypothesis is false and that other effects need to be modelled to better explain the projection. The relatively low implementation effort that was needed to generate the models suggests that the technique can be used to investigate other novel projection models in photogrammetry, including modelling changes in the 3D geometry, to better understand the tilt-shift lens.
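
    The chain-rule modularisation can be sketched with toy 1-D stages, each returning its value and local Jacobian; the stages below are illustrative stand-ins for the rotation/projection/distortion modules of a real bundle adjuster:

    ```python
    # Each stage returns (value, local Jacobian). Composing stages multiplies
    # the Jacobians per the chain rule, so swapping or reordering a module
    # (e.g. affine vs lens distortion) needs no change elsewhere.

    def scale_stage(x, f=2.0):
        return f * x, f                      # value, d(out)/d(in)

    def radial_stage(x, k=0.1):
        return x * (1 + k * x * x), 1 + 3 * k * x * x

    def compose(x, stages):
        """Run the stages in order; accumulate the chain-rule product."""
        J = 1.0
        for stage in stages:
            x, Jlocal = stage(x)
            J = Jlocal * J                   # chain rule: J_total = J_n ... J_1
        return x, J

    x0 = 0.5
    y, J = compose(x0, [scale_stage, radial_stage])

    # Finite-difference check: each module's Jacobian can be verified in
    # isolation, after which the product is correct automatically.
    eps = 1e-7
    y_eps, _ = compose(x0 + eps, [scale_stage, radial_stage])
    fd = (y_eps - y) / eps
    ```

    In a real adjuster the stages map higher-dimensional vectors and the local Jacobians are matrices, but the composition logic is identical.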

  18. Analysis of longitudinal marginal structural models.

    PubMed

    Bryan, Jenny; Yu, Zhuo; Van Der Laan, Mark J

    2004-07-01

In this article we construct and study estimators of the causal effect of a time-dependent treatment on survival in longitudinal studies. We employ a particular marginal structural model (MSM), proposed by Robins (2000), and follow a general methodology for constructing estimating functions in censored data models. The inverse probability of treatment weighted (IPTW) estimator of Robins et al. (2000) is used as an initial estimator and forms the basis for an improved, one-step estimator that is consistent and asymptotically linear when the treatment mechanism is consistently estimated. We extend these methods to handle informative censoring. The proposed methodology is employed to estimate the causal effect of exercise on mortality in a longitudinal study of seniors in Sonoma County. A simulation study demonstrates the bias of naive estimators in the presence of time-dependent confounders and also shows the efficiency gain of the IPTW estimator, even in the absence of such confounding. The efficiency gain of the improved, one-step estimator is demonstrated through simulation.
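
    The IPTW idea itself is compact: weight each subject by the inverse of the probability of the treatment actually received, given confounders, so the weighted sample mimics a randomised one. A toy sketch with hand-set propensities (in practice these are estimated, e.g. by logistic regression); all numbers are illustrative:

    ```python
    # Inverse-probability-of-treatment weighting on a toy cohort.

    def iptw_weights(treated, propensity):
        """treated: 0/1 indicators; propensity: P(treatment=1 | confounders)."""
        return [1.0 / p if a == 1 else 1.0 / (1.0 - p)
                for a, p in zip(treated, propensity)]

    def weighted_mean(values, weights):
        return sum(v * w for v, w in zip(values, weights)) / sum(weights)

    # Toy cohort with confounded treatment assignment.
    treated    = [1, 1, 1, 0, 0, 0]
    propensity = [0.8, 0.8, 0.4, 0.8, 0.4, 0.4]
    outcome    = [5.0, 4.0, 3.0, 4.0, 2.0, 1.0]

    w = iptw_weights(treated, propensity)
    effect = (weighted_mean([y for y, a in zip(outcome, treated) if a == 1],
                            [wi for wi, a in zip(w, treated) if a == 1])
              - weighted_mean([y for y, a in zip(outcome, treated) if a == 0],
                              [wi for wi, a in zip(w, treated) if a == 0]))
    ```

    The one-step improvement studied in the article starts from exactly this kind of IPTW estimate and adds a correction term to gain efficiency.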

  19. To address surface reaction network complexity using scaling relations machine learning and DFT calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ulissi, Zachary W.; Medford, Andrew J.; Bligaard, Thomas

Surface reaction networks involving hydrocarbons exhibit enormous complexity, with thousands of species and reactions for all but the very simplest of chemistries. We present a framework for optimization under uncertainty for heterogeneous catalysis reaction networks using surrogate models that are trained on the fly. The surrogate model is constructed by teaching a Gaussian process adsorption energies based on group additivity fingerprints, combined with transition-state scaling relations and a simple classifier for determining the rate-limiting step. The surrogate model is iteratively used to predict the most important reaction step to be calculated explicitly with computationally demanding electronic structure theory. Applying these methods to the reaction of syngas on rhodium(111), we identify the most likely reaction mechanism. Lastly, propagating uncertainty throughout this process yields the likelihood that the final mechanism is complete given measurements on only a subset of the entire network and uncertainty in the underlying density functional theory calculations.
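
    The iterative "predict the most important step, then compute it explicitly" loop can be sketched with the Gaussian-process surrogate replaced by a trivial mean/uncertainty table; `expensive_dft` and all numbers below are illustrative stand-ins:

    ```python
    import random

    # Acquisition loop of surrogate-driven network exploration: repeatedly
    # pick the step whose uncertainty-weighted predicted importance is
    # highest, evaluate it with the expensive calculation, update the
    # surrogate. A real implementation would use a Gaussian process over
    # group-additivity fingerprints; here a plain table stands in.

    def expensive_dft(step, rng):
        """Stand-in for an electronic-structure calculation."""
        return {"A": 1.2, "B": 0.4, "C": 0.9}[step] + rng.gauss(0.0, 0.01)

    def explore(steps, n_calcs, rng):
        surrogate = {s: {"mean": 1.0, "sigma": 1.0} for s in steps}  # broad prior
        evaluated = []
        for _ in range(n_calcs):
            # acquisition: predicted importance plus exploration bonus
            pick = max(surrogate,
                       key=lambda s: surrogate[s]["mean"] + surrogate[s]["sigma"])
            energy = expensive_dft(pick, rng)
            surrogate[pick] = {"mean": energy, "sigma": 0.01}  # now well known
            evaluated.append(pick)
        return surrogate, evaluated

    rng = random.Random(0)
    surrogate, evaluated = explore(["A", "B", "C"], n_calcs=3, rng=rng)
    ```

    With a real GP, the uncertainty of unevaluated steps also shrinks as correlated steps are computed, which is what lets the full network be characterised from a subset of explicit calculations.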

  20. To address surface reaction network complexity using scaling relations machine learning and DFT calculations

    DOE PAGES

    Ulissi, Zachary W.; Medford, Andrew J.; Bligaard, Thomas; ...

    2017-03-06

Surface reaction networks involving hydrocarbons exhibit enormous complexity, with thousands of species and reactions for all but the very simplest of chemistries. We present a framework for optimization under uncertainty for heterogeneous catalysis reaction networks using surrogate models that are trained on the fly. The surrogate model is constructed by teaching a Gaussian process adsorption energies based on group additivity fingerprints, combined with transition-state scaling relations and a simple classifier for determining the rate-limiting step. The surrogate model is iteratively used to predict the most important reaction step to be calculated explicitly with computationally demanding electronic structure theory. Applying these methods to the reaction of syngas on rhodium(111), we identify the most likely reaction mechanism. Lastly, propagating uncertainty throughout this process yields the likelihood that the final mechanism is complete given measurements on only a subset of the entire network and uncertainty in the underlying density functional theory calculations.
