Sample records for normalization process model

  1. Performance analysis of no-vent fill process for liquid hydrogen tank in terrestrial and on-orbit environments

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Li, Yanzhong; Zhang, Feini; Ma, Yuan

    2015-12-01

    Two finite difference computer models, aimed at predicting the no-vent fill process in normal gravity and microgravity environments respectively, are developed to investigate the filling performance in a liquid hydrogen (LH2) tank. In the normal gravity model, the tank/fluid system is divided into five control volumes: ullage, bulk liquid, gas-liquid interface, ullage-adjacent wall, and liquid-adjacent wall. In the microgravity model, a vapor-liquid thermal equilibrium state is maintained throughout the process, and only two nodes, representing the fluid and wall regions, are applied. To capture the liquid-wall heat transfer accurately, a series of heat transfer mechanisms are considered and modeled successively, including film boiling, transition boiling, nucleate boiling and liquid natural convection. The two models are validated by comparing their predictions with experimental data, which show good agreement. The models are then used to investigate the performance of no-vent fill under different conditions, and several conclusions are obtained. In the normal gravity environment the no-vent fill experiences a continuous pressure rise during the whole process and the maximum pressure occurs at the end of the operation, while in microgravity the maximum pressure occurs in the early stage of the process. Moreover, increasing the inlet mass flux has a pronounced influence on the pressure evolution of the no-vent fill process in normal gravity but little influence in microgravity. A higher initial wall temperature brings about more liquid evaporation during the filling operation and hence a higher pressure rise, whether the filling occurs under normal gravity or microgravity conditions. Reducing the inlet liquid temperature improves the filling performance in normal gravity but cannot significantly reduce the maximum pressure in microgravity. The presented work improves understanding of no-vent fill performance and may guide the design of on-orbit no-vent fill systems.

  2. Design of a linear projector for use with the normal modes of the GLAS 4th order GCM

    NASA Technical Reports Server (NTRS)

    Bloom, S. C.

    1984-01-01

    The design of a linear projector for use with the normal modes of a model of atmospheric circulation is discussed. A central element in any normal mode initialization scheme is the process by which a set of data fields - winds, temperatures or geopotentials, and surface pressures - are expressed ("projected") in terms of the coefficients of a model's normal modes. This process is completely analogous to the Fourier decomposition of a single field (indeed, an FFT applied in the zonal direction is part of the process). Complete separability in all three spatial dimensions is assumed. The basis functions for the modal expansion are given. An important feature of the normal modes is their coupling of the structures of different fields; thus a coefficient in a normal mode expansion contains both mass and momentum information.

  3. An order insertion scheduling model of logistics service supply chain considering capacity and time factors.

    PubMed

    Liu, Weihua; Yang, Yi; Wang, Shuqing; Liu, Yang

    2014-01-01

    Order insertion often occurs in the scheduling process of a logistics service supply chain (LSSC) and disturbs normal time scheduling, especially in the environment of mass customization logistics service. This study analyses the order similarity coefficient and the order insertion operation process and then establishes an order insertion scheduling model of LSSC that considers service capacity and time factors. The model aims to minimize the average unit volume operation cost of the logistics service integrator and maximize the average satisfaction degree of the functional logistics service providers. To verify the viability and effectiveness of the model, a specific example is numerically analyzed, and some interesting conclusions are obtained. First, as the completion time delay coefficient permitted by customers increases, the possible inserted order volume first increases and then levels off. Second, supply chain performance is best when the volume of the inserted order equals the surplus volume of the normal operation capacity of the mass service process. Third, the larger the normal operation capacity of the mass service process, the larger the possible inserted order volume. Moreover, compared with increasing the completion time delay coefficient, improving the normal operation capacity of the mass service process is more useful.

  4. Fault detection and diagnosis using neural network approaches

    NASA Technical Reports Server (NTRS)

    Kramer, Mark A.

    1992-01-01

    Neural networks can be used to detect and identify abnormalities in real-time process data. Two basic approaches can be used: the first is based on training networks with data representing both normal and abnormal modes of process behavior, and the second on statistical characterization of the normal mode only. Given data representative of process faults, radial basis function networks can effectively identify failures. This approach is often limited by the lack of fault data, but can be facilitated by process simulation. The second approach employs elliptical and radial basis function neural networks and other models to learn the statistical distributions of process observables under normal conditions. Analytical models of failure modes can then be applied in combination with the neural network models to identify faults. Special methods can be applied to compensate for sensor failures, producing real-time estimates of missing or failed sensor values based on the correlations codified in the neural network.
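
    The second, normal-mode-only approach lends itself to a compact sketch. The example below is a minimal stand-in for the statistical characterization step (not the elliptical/radial basis function networks themselves, and not the author's implementation): it fits a Gaussian model to observables collected during normal operation and flags observations whose Mahalanobis distance exceeds an empirically chosen threshold. All data and thresholds are hypothetical.

```python
import numpy as np

def fit_normal_model(X):
    """Estimate the mean and inverse covariance of observables under normal operation."""
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    return mu, cov_inv

def mahalanobis(x, mu, cov_inv):
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

# Hypothetical normal-mode data: 1000 samples of 3 correlated process observables.
rng = np.random.default_rng(0)
normal_data = rng.multivariate_normal([10.0, 50.0, 3.0],
                                      [[1.0, 0.3, 0.0],
                                       [0.3, 2.0, 0.1],
                                       [0.0, 0.1, 0.5]], size=1000)
mu, cov_inv = fit_normal_model(normal_data)

# Alarm threshold taken from the empirical distribution of normal-mode distances.
threshold = np.quantile([mahalanobis(x, mu, cov_inv) for x in normal_data], 0.999)

new_sample = np.array([10.5, 49.0, 6.5])   # drift on the third observable
if mahalanobis(new_sample, mu, cov_inv) > threshold:
    print("possible fault: observation inconsistent with normal-mode statistics")
```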

  5. Dynamic Divisive Normalization Predicts Time-Varying Value Coding in Decision-Related Circuits

    PubMed Central

    LoFaro, Thomas; Webb, Ryan; Glimcher, Paul W.

    2014-01-01

    Normalization is a widespread neural computation, mediating divisive gain control in sensory processing and implementing a context-dependent value code in decision-related frontal and parietal cortices. Although decision-making is a dynamic process with complex temporal characteristics, most models of normalization are time-independent and little is known about the dynamic interaction of normalization and choice. Here, we show that a simple differential equation model of normalization explains the characteristic phasic-sustained pattern of cortical decision activity and predicts specific normalization dynamics: value coding during initial transients, time-varying value modulation, and delayed onset of contextual information. Empirically, we observe these predicted dynamics in saccade-related neurons in monkey lateral intraparietal cortex. Furthermore, such models naturally incorporate a time-weighted average of past activity, implementing an intrinsic reference-dependence in value coding. These results suggest that a single network mechanism can explain both transient and sustained decision activity, emphasizing the importance of a dynamic view of normalization in neural coding. PMID:25429145
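
    For readers who want to see what a "simple differential equation model of normalization" can look like, here is a minimal coupled-ODE sketch: each unit's output relaxes quickly toward its value input divided by a normalization signal, and the normalization signal integrates the summed recent output more slowly, which yields a phasic peak followed by a lower sustained level. The equations, time constants, and inputs are illustrative assumptions, not the authors' published parameterization.

```python
import numpy as np

def simulate(values, tau_r=0.01, tau_g=0.1, sigma=1.0, dt=0.0005, t_max=1.0):
    """Euler integration of coupled rate (r) and normalization-pool (g) dynamics."""
    v = np.asarray(values, dtype=float)
    r = np.zeros_like(v)
    g = 0.0
    out = []
    for _ in range(int(t_max / dt)):
        dr = (-r + v / (sigma + g)) / tau_r    # value input, divisively normalized
        dg = (-g + r.sum()) / tau_g            # pool integrates recent summed activity
        r, g = r + dt * dr, g + dt * dg
        out.append(r.copy())
    return np.array(out)

resp = simulate([8.0, 2.0])                    # two targets with different values
print("phasic peak responses :", resp.max(axis=0))
print("sustained responses   :", resp[-1])
```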

  6. A menu-driven software package of Bayesian nonparametric (and parametric) mixed models for regression analysis and density estimation.

    PubMed

    Karabatsos, George

    2017-02-01

    Most of applied statistics involves regression analysis of data. In practice, it is important to specify a regression model that has minimal assumptions which are not violated by data, to ensure that statistical inferences from the model are informative and not misleading. This paper presents a stand-alone and menu-driven software package, Bayesian Regression: Nonparametric and Parametric Models, constructed from MATLAB Compiler. Currently, this package gives the user a choice from 83 Bayesian models for data analysis. They include 47 Bayesian nonparametric (BNP) infinite-mixture regression models; 5 BNP infinite-mixture models for density estimation; and 31 normal random effects models (HLMs), including normal linear models. Each of the 78 regression models handles either a continuous, binary, or ordinal dependent variable, and can handle multi-level (grouped) data. All 83 Bayesian models can handle the analysis of weighted observations (e.g., for meta-analysis), and the analysis of left-censored, right-censored, and/or interval-censored data. Each BNP infinite-mixture model has a mixture distribution assigned one of various BNP prior distributions, including priors defined by either the Dirichlet process, Pitman-Yor process (including the normalized stable process), beta (two-parameter) process, normalized inverse-Gaussian process, geometric weights prior, dependent Dirichlet process, or the dependent infinite-probits prior. The software user can mouse-click to select a Bayesian model and perform data analysis via Markov chain Monte Carlo (MCMC) sampling. After the sampling completes, the software automatically opens text output that reports MCMC-based estimates of the model's posterior distribution and model predictive fit to the data. Additional text and/or graphical output can be generated by mouse-clicking other menu options. This includes output of MCMC convergence analyses, and estimates of the model's posterior predictive distribution, for selected functionals and values of covariates. The software is illustrated through the BNP regression analysis of real data.
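
    As a concrete, package-independent illustration of what assigning a BNP prior to a mixture distribution involves, the sketch below draws a truncated stick-breaking approximation to a Dirichlet process mixture of normals and samples data from it. The concentration parameter, truncation level, and base measure are arbitrary assumptions, and the code is not part of the software described here.

```python
import numpy as np

rng = np.random.default_rng(42)

def draw_dp_mixture(alpha=2.0, truncation=50, n=1000):
    """Truncated stick-breaking draw from a DP mixture of normals, then sample data."""
    # Stick-breaking weights: w_k = v_k * prod_{j<k} (1 - v_j), with v_k ~ Beta(1, alpha).
    v = rng.beta(1.0, alpha, size=truncation)
    w = v * np.cumprod(np.concatenate(([1.0], 1.0 - v[:-1])))
    # Component parameters drawn from an assumed base measure.
    means = rng.normal(0.0, 5.0, size=truncation)
    sds = np.sqrt(1.0 / rng.gamma(2.0, 1.0, size=truncation))
    labels = rng.choice(truncation, size=n, p=w / w.sum())
    return rng.normal(means[labels], sds[labels]), w

data, weights = draw_dp_mixture()
print("components with weight > 1%:", int((weights > 0.01).sum()))
print("sample mean / sd:", round(float(data.mean()), 2), round(float(data.std()), 2))
```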

  7. Dissociative Functions in the Normal Mourning Process.

    ERIC Educational Resources Information Center

    Kauffman, Jeffrey

    1994-01-01

    Sees dissociative functions in mourning process as occurring in conjunction with integrative trends. Considers initial shock reaction in mourning as model of normal dissociation in mourning process. Dissociation is understood to be related to traumatic significance of death in human consciousness. Discerns four psychological categories of…

  8. Proposal: A Hybrid Dictionary Modelling Approach for Malay Tweet Normalization

    NASA Astrophysics Data System (ADS)

    Muhamad, Nor Azlizawati Binti; Idris, Norisma; Arshi Saloot, Mohammad

    2017-02-01

    Malay Twitter messages deviate markedly from the standard language. Malay Tweets are widely used by Twitter users, especially in the Malay Archipelago. Thus, it is important to build a normalization system that can translate Malay Tweet language into standard Malay. Research in natural language processing has mainly focused on normalizing English Twitter messages, while few studies have addressed the normalization of Malay Tweets. This paper proposes an approach to normalize Malay Twitter messages based on hybrid dictionary modelling methods. The approach normalizes noisy Malay Twitter messages such as colloquial language, novel words, and interjections into standard Malay. The research will use a language model and an n-gram model.
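
    A minimal sketch of the hybrid idea - dictionary lookup to generate candidate standard forms, an n-gram language model to choose among them - is shown below with a toy lexicon and toy bigram counts. The dictionary entries, counts, and smoothing are invented placeholders, not the authors' resources.

```python
# Toy normalization dictionary: noisy token -> candidate standard forms (hypothetical).
lexicon = {"sy": ["saya"], "x": ["tidak", "x"], "nk": ["nak", "nk"], "gi": ["pergi", "gi"]}

# Toy unigram/bigram counts over standard Malay text (hypothetical).
unigram = {"saya": 20, "tidak": 15, "nak": 12, "pergi": 10}
bigram = {("saya", "tidak"): 8, ("tidak", "nak"): 6, ("saya", "nak"): 4, ("nak", "pergi"): 9}

def score(prev, word):
    """Add-one-smoothed bigram probability P(word | prev)."""
    return (bigram.get((prev, word), 0) + 1) / (unigram.get(prev, 0) + len(unigram) + 1)

def normalize(tokens):
    out, prev = [], "<s>"
    for tok in tokens:
        best = max(lexicon.get(tok, [tok]), key=lambda cand: score(prev, cand))
        out.append(best)
        prev = best
    return out

print(normalize(["sy", "x", "nk", "gi"]))   # -> ['saya', 'tidak', 'nak', 'pergi']
```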

  9. Measurement-based reliability/performability models

    NASA Technical Reports Server (NTRS)

    Hsueh, Mei-Chen

    1987-01-01

    Measurement-based models based on real error-data collected on a multiprocessor system are described. Model development from the raw error-data to the estimation of cumulative reward is also described. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.
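
    The remark that holding times are not simple exponentials is what forces the semi-Markov treatment, and the distinction is easy to see in a simulation. The sketch below uses hypothetical states and holding-time distributions (not the measured IBM 3081 data): transitions are Markovian, but the normal-operation dwell time is Weibull rather than exponential, and availability is estimated from the simulated history.

```python
import numpy as np

rng = np.random.default_rng(1)

# States: 0 = normal operation, 1 = error handling, 2 = recovery.
transitions = {0: [1], 1: [0, 2], 2: [0]}
probs       = {0: [1.0], 1: [0.7, 0.3], 2: [1.0]}

def holding_time(state):
    """Non-exponential holding times are what make the process semi-Markov."""
    if state == 0:
        return rng.weibull(0.7) * 100.0          # heavy-tailed normal-operation periods
    if state == 1:
        return rng.lognormal(mean=0.0, sigma=1.0)
    return rng.gamma(shape=2.0, scale=1.5)

state, t = 0, 0.0
time_in_state = {0: 0.0, 1: 0.0, 2: 0.0}
while t < 1e5:
    dwell = holding_time(state)
    time_in_state[state] += dwell
    t += dwell
    state = int(rng.choice(transitions[state], p=probs[state]))

availability = time_in_state[0] / sum(time_in_state.values())
print("estimated fraction of time in normal operation:", round(availability, 3))
```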

  10. An Order Insertion Scheduling Model of Logistics Service Supply Chain Considering Capacity and Time Factors

    PubMed Central

    Yang, Yi; Wang, Shuqing; Liu, Yang

    2014-01-01

    Order insertion often occurs in the scheduling process of a logistics service supply chain (LSSC) and disturbs normal time scheduling, especially in the environment of mass customization logistics service. This study analyses the order similarity coefficient and the order insertion operation process and then establishes an order insertion scheduling model of LSSC that considers service capacity and time factors. The model aims to minimize the average unit volume operation cost of the logistics service integrator and maximize the average satisfaction degree of the functional logistics service providers. To verify the viability and effectiveness of the model, a specific example is numerically analyzed, and some interesting conclusions are obtained. First, as the completion time delay coefficient permitted by customers increases, the possible inserted order volume first increases and then levels off. Second, supply chain performance is best when the volume of the inserted order equals the surplus volume of the normal operation capacity of the mass service process. Third, the larger the normal operation capacity of the mass service process, the larger the possible inserted order volume. Moreover, compared with increasing the completion time delay coefficient, improving the normal operation capacity of the mass service process is more useful. PMID:25276851

  11. Industrial process surveillance system

    DOEpatents

    Gross, Kenneth C.; Wegerich, Stephan W.; Singer, Ralph M.; Mott, Jack E.

    1998-01-01

    A system and method for monitoring an industrial process and/or industrial data source. The system includes generating time varying data from industrial data sources, processing the data to obtain time correlation of the data, determining the range of data, determining learned states of normal operation and using these states to generate expected values, comparing the expected values to current actual values to identify a current state of the process closest to a learned, normal state; generating a set of modeled data, and processing the modeled data to identify a data pattern and generating an alarm upon detecting a deviation from normalcy.
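
    A heavily simplified sketch of the monitoring loop this claim describes follows: learned normal operating states are stored as reference vectors, the expected values for a new observation are taken from the closest learned state, and an alarm is raised when the deviation from normalcy exceeds a limit. The state-selection rule, the alarm limit, and the data are illustrative stand-ins, not the patented estimation and decision logic.

```python
import numpy as np

class SurveillanceModel:
    def __init__(self, learned_states, alarm_limit):
        self.states = np.asarray(learned_states, dtype=float)   # learned normal states
        self.alarm_limit = alarm_limit

    def expected(self, observation):
        """Return the learned normal state closest to the current observation."""
        d = np.linalg.norm(self.states - observation, axis=1)
        return self.states[d.argmin()]

    def check(self, observation):
        residual = np.linalg.norm(observation - self.expected(observation))
        return residual > self.alarm_limit, residual

# Hypothetical learned states for three correlated sensors during normal operation.
model = SurveillanceModel(learned_states=[[100.0, 5.0, 0.8],
                                          [110.0, 5.5, 0.9],
                                          [ 95.0, 4.8, 0.7]],
                          alarm_limit=3.0)

alarm, residual = model.check(np.array([118.0, 9.0, 0.9]))
print("alarm:", alarm, "| deviation from closest learned state:", round(residual, 2))
```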

  12. Industrial process surveillance system

    DOEpatents

    Gross, K.C.; Wegerich, S.W.; Singer, R.M.; Mott, J.E.

    1998-06-09

    A system and method are disclosed for monitoring an industrial process and/or industrial data source. The system includes generating time varying data from industrial data sources, processing the data to obtain time correlation of the data, determining the range of data, determining learned states of normal operation and using these states to generate expected values, comparing the expected values to current actual values to identify a current state of the process closest to a learned, normal state; generating a set of modeled data, and processing the modeled data to identify a data pattern and generating an alarm upon detecting a deviation from normalcy. 96 figs.

  13. Industrial Process Surveillance System

    DOEpatents

    Gross, Kenneth C.; Wegerich, Stephan W; Singer, Ralph M.; Mott, Jack E.

    2001-01-30

    A system and method for monitoring an industrial process and/or industrial data source. The system includes generating time varying data from industrial data sources, processing the data to obtain time correlation of the data, determining the range of data, determining learned states of normal operation and using these states to generate expected values, comparing the expected values to current actual values to identify a current state of the process closest to a learned, normal state; generating a set of modeled data, and processing the modeled data to identify a data pattern and generating an alarm upon detecting a deviation from normalcy.

  14. Dynamic divisive normalization predicts time-varying value coding in decision-related circuits.

    PubMed

    Louie, Kenway; LoFaro, Thomas; Webb, Ryan; Glimcher, Paul W

    2014-11-26

    Normalization is a widespread neural computation, mediating divisive gain control in sensory processing and implementing a context-dependent value code in decision-related frontal and parietal cortices. Although decision-making is a dynamic process with complex temporal characteristics, most models of normalization are time-independent and little is known about the dynamic interaction of normalization and choice. Here, we show that a simple differential equation model of normalization explains the characteristic phasic-sustained pattern of cortical decision activity and predicts specific normalization dynamics: value coding during initial transients, time-varying value modulation, and delayed onset of contextual information. Empirically, we observe these predicted dynamics in saccade-related neurons in monkey lateral intraparietal cortex. Furthermore, such models naturally incorporate a time-weighted average of past activity, implementing an intrinsic reference-dependence in value coding. These results suggest that a single network mechanism can explain both transient and sustained decision activity, emphasizing the importance of a dynamic view of normalization in neural coding. Copyright © 2014 the authors 0270-6474/14/3416046-12$15.00/0.

  15. Negative Binomial Process Count and Mixture Modeling.

    PubMed

    Zhou, Mingyuan; Carin, Lawrence

    2015-02-01

    The seemingly disjoint problems of count and mixture modeling are united under the negative binomial (NB) process. A gamma process is employed to model the rate measure of a Poisson process, whose normalization provides a random probability measure for mixture modeling and whose marginalization leads to an NB process for count modeling. A draw from the NB process consists of a Poisson distributed finite number of distinct atoms, each of which is associated with a logarithmic distributed number of data samples. We reveal relationships between various count- and mixture-modeling distributions and construct a Poisson-logarithmic bivariate distribution that connects the NB and Chinese restaurant table distributions. Fundamental properties of the models are developed, and we derive efficient Bayesian inference. It is shown that with augmentation and normalization, the NB process and gamma-NB process can be reduced to the Dirichlet process and hierarchical Dirichlet process, respectively. These relationships highlight theoretical, structural, and computational advantages of the NB process. A variety of NB processes, including the beta-geometric, beta-NB, marked-beta-NB, marked-gamma-NB and zero-inflated-NB processes, with distinct sharing mechanisms, are also constructed. These models are applied to topic modeling, with connections made to existing algorithms under Poisson factor analysis. Example results show the importance of inferring both the NB dispersion and probability parameters.
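
    The gamma-Poisson construction at the heart of this abstract can be checked numerically in a few lines: draw a Poisson rate from a gamma distribution, draw a count from the resulting Poisson, and compare the empirical moments with the negative binomial values r p/(1-p) and r p/(1-p)^2. The parameter values below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
r, p = 3.0, 0.4        # NB dispersion and probability parameters (illustrative)
n = 200_000

# Gamma-mixed Poisson: lambda ~ Gamma(shape=r, scale=p/(1-p)), x | lambda ~ Poisson(lambda).
lam = rng.gamma(shape=r, scale=p / (1 - p), size=n)
x = rng.poisson(lam)

print("empirical mean, var:", round(x.mean(), 3), round(x.var(), 3))
print("NB mean, var       :", r * p / (1 - p), round(r * p / (1 - p) ** 2, 3))
```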

  16. A model of the normal and null states of pulsars

    NASA Astrophysics Data System (ADS)

    Jones, P. B.

    1981-12-01

    A solvable three-dimensional polar cap model of pair creation and charged particle acceleration has been derived. There are no free parameters of significance apart from the polar surface magnetic flux density. The parameter determining the acceleration potential difference has been obtained by calculation of elementary nuclear and electromagnetic processes. Solutions of the model exist for both normal and null states of a pulsar, and the instability in the normal state leading to the normal to null transition has been identified. The predicted necessary condition for the transition is entirely consistent with observation.

  17. A model of the normal and null states of pulsars

    NASA Astrophysics Data System (ADS)

    Jones, P. B.

    A solvable three-dimensional polar cap model of pair creation and charged particle acceleration is derived. There are no free parameters of significance apart from the polar surface magnetic flux density. The parameter CO determining the acceleration potential difference was obtained by calculation of elementary nuclear and electromagnetic processes. Solutions of the model exist for both normal and null states of a pulsar, and the instability in the normal state leading to the normal to null transition is identified. The predicted necessary condition for the transition is entirely consistent with observation.

  18. Sampling intensity and normalizations: Exploring cost-driving factors in nationwide mapping of tree canopy cover

    Treesearch

    John Tipton; Gretchen Moisen; Paul Patterson; Thomas A. Jackson; John Coulston

    2012-01-01

    There are many factors that will determine the final cost of modeling and mapping tree canopy cover nationwide. For example, applying a normalization process to Landsat data used in the models is important in standardizing reflectance values among scenes and eliminating visual seams in the final map product. However, normalization at the national scale is expensive and...

  19. [Monitoring method for macroporous resin column chromatography process of salvianolic acids based on near infrared spectroscopy].

    PubMed

    Hou, Xiang-Mei; Zhang, Lei; Yue, Hong-Shui; Ju, Ai-Chun; Ye, Zheng-Liang

    2016-07-01

    The aim of this work was to study and establish a monitoring method for the macroporous resin column chromatography process of salvianolic acids using near infrared spectroscopy (NIR) as a process analytical technology (PAT). The multivariate statistical process control (MSPC) model was developed based on 7 normal operation batches, and 2 test batches (one normal and one abnormal) were used to verify the monitoring performance of this model. The results showed that the MSPC model had a good monitoring ability for the column chromatography process. Meanwhile, an NIR quantitative calibration model was established for three key quality indexes (rosmarinic acid, lithospermic acid and salvianolic acid B) using the partial least squares (PLS) algorithm. The verification results demonstrated that this model had satisfactory prediction performance. The combined application of the above two models can effectively achieve real-time monitoring of the macroporous resin column chromatography process of salvianolic acids and can be used for on-line analysis of the key quality indexes. This process monitoring method could serve as a reference for developing process analytical technology for traditional Chinese medicine manufacturing. Copyright © by the Chinese Pharmaceutical Association.
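
    The MSPC idea summarized above - build a latent-variable model from normal operation data, then flag observations whose statistics leave the control region - can be sketched with a principal component model and Hotelling's T2 and SPE statistics. The sketch uses simulated spectra and empirical control limits; it is a generic illustration, not the authors' MSPC or PLS calibration.

```python
import numpy as np

rng = np.random.default_rng(7)

def fit_pca(X, n_comp):
    mu = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:n_comp].T, (s[:n_comp] ** 2) / (len(X) - 1)   # mean, loadings, score variances

def t2_spe(x, mu, P, var):
    t = (x - mu) @ P                         # scores in the model plane
    resid = (x - mu) - t @ P.T               # part of the spectrum the model cannot explain
    return np.sum(t ** 2 / var), np.sum(resid ** 2)

# Synthetic "normal operation" spectra: 350 spectra (7 batches x 50 time points), 100 wavelengths.
normal = rng.normal(size=(350, 3)) @ rng.normal(size=(3, 100)) + rng.normal(scale=0.1, size=(350, 100))
mu, P, var = fit_pca(normal, n_comp=3)

# Empirical 99% control limits from the normal-operation data (simplified).
stats = np.array([t2_spe(x, mu, P, var) for x in normal])
t2_lim, spe_lim = np.quantile(stats[:, 0], 0.99), np.quantile(stats[:, 1], 0.99)

test = normal[0] + 0.3                       # a shifted, abnormal spectrum
t2, spe = t2_spe(test, mu, P, var)
print("out of control:", bool(t2 > t2_lim or spe > spe_lim))
```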

  20. An analytical elastic plastic contact model with strain hardening and frictional effects for normal and oblique impacts

    DOE PAGES

    Brake, M. R. W.

    2015-02-17

    Impact between metallic surfaces is a phenomenon that is ubiquitous in the design and analysis of mechanical systems. To model this phenomenon, a new formulation for frictional elastic-plastic contact between two surfaces is developed. The formulation considers both frictional, oblique contact (of which normal, frictionless contact is a limiting case) and strain hardening effects. The constitutive model for normal contact is developed as two contiguous loading domains: the elastic regime and a transitionary region in which the plastic response of the materials develops and the elastic response abates. For unloading, the constitutive model is based on an elastic process. Moreover, the normal contact model is assumed to couple only one-way with the frictional/tangential contact model, which makes the normal contact model independent of frictional effects. Frictional, tangential contact is modeled using a microslip model that is developed to consider the pressure distribution that develops from the elastic-plastic normal contact. This model is validated through comparisons with experimental results reported in the literature, and is demonstrated to be significantly more accurate than 10 other normal contact models and three other tangential contact models found in the literature.

  1. A numerical insight into elastomer normally closed micro valve actuation with cohesive interfacial cracking modelling

    NASA Astrophysics Data System (ADS)

    Wang, Dongyang; Ba, Dechun; Hao, Ming; Duan, Qihui; Liu, Kun; Mei, Qi

    2018-05-01

    Pneumatic NC (normally closed) valves are widely used in high density microfluidic systems. To improve actuation reliability, the actuation pressure needs to be reduced. In this work, we utilize 3D FEM (finite element method) modelling to gain numerical insight into the valve actuation process. Specifically, the progressive debonding process at the elastomer interface is simulated with the CZM (cohesive zone model) method. To minimize the actuation pressure, a V-shape design has been investigated and compared with a normal straight design. The geometrical effect of valve shape on actuation pressure has been elaborated. Based on our simulated results, we formulate the main concerns for micro valve design and fabrication, which is significant for minimizing actuation pressures and ensuring reliable operation.

  2. Attention and normalization circuits in macaque V1

    PubMed Central

    Sanayei, M; Herrero, J L; Distler, C; Thiele, A

    2015-01-01

    Attention affects neuronal processing and improves behavioural performance. In extrastriate visual cortex these effects have been explained by normalization models, which assume that attention influences the circuit that mediates surround suppression. While normalization models have been able to explain attentional effects, their validity has rarely been tested against alternative models. Here we investigate how attention and surround/mask stimuli affect neuronal firing rates and orientation tuning in macaque V1. Surround/mask stimuli provide an estimate of the extent to which V1 neurons are affected by normalization, which was compared against the effects of spatial top-down attention. For some attention/surround effect comparisons, the strength of attentional modulation was correlated with the strength of surround modulation, suggesting that attention and surround/mask stimulation (i.e. normalization) might use a common mechanism. To explore this in detail, we fitted multiplicative and additive models of attention to our data. In one class of models, attention contributed to normalization mechanisms, whereas in a different class of models it did not. Model selection based on Akaike's and on Bayesian information criteria demonstrated that in most cells the effects of attention were best described by models where attention did not contribute to normalization mechanisms. This demonstrates that attentional influences on neuronal responses in primary visual cortex often bypass normalization mechanisms. PMID:25757941
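
    The two classes of models being compared can be written down compactly: in one, the attentional gain enters the normalization denominator; in the other, it scales the response after normalization. The sketch below uses a generic divisive-normalization form with made-up parameters, intended only to show how the two hypotheses differ, not to reproduce the authors' fitted models.

```python
def attention_in_normalization(c_center, c_surround, a=1.5, sigma=0.2, w=0.6):
    """Attention (a) boosts the target drive and is included in the normalization pool."""
    return (a * c_center) / (sigma + a * c_center + w * c_surround)

def attention_outside_normalization(c_center, c_surround, a=1.5, sigma=0.2, w=0.6):
    """Attention scales the output; the normalization pool is unaffected by attention."""
    return a * c_center / (sigma + c_center + w * c_surround)

for c_s in (0.0, 0.5, 1.0):    # increasing surround/mask contrast
    print(f"surround contrast {c_s:.1f}: "
          f"in-normalization {attention_in_normalization(0.5, c_s):.3f}, "
          f"outside {attention_outside_normalization(0.5, c_s):.3f}")
```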

  3. A stress-induced phase transition model for semi-crystalline shape memory polymer

    NASA Astrophysics Data System (ADS)

    Guo, Xiaogang; Zhou, Bo; Liu, Liwu; Liu, Yanju; Leng, Jinsong

    2014-03-01

    The development of constitutive models for shape memory polymers (SMPs) has been motivated by their increasing applications. During cooling or heating, a phase transition, which is a continuous time-dependent process, occurs in semi-crystalline SMP, and the individual phases form at different temperatures and in different configurations. Transformations between these phases then take place and the shape memory effect emerges. In addition, the stress applied to the SMP is an important factor for crystal melting during the phase transition. In this theory, an ideal phase transition model that accounts for stress or pre-strain is the key to describing the shape memory effect. A normally distributed model was therefore established in this research to characterize the volume fraction of each phase in the SMP during the phase transition. In practice, the experimental results lag (during heating) or lead (during cooling) the ideal behavior because of delay effects during the phase transition, so a correction to the normally distributed model is needed. Furthermore, a nonlinear relationship between stress and the phase transition temperature Tg is also taken into account to establish an accurate normally distributed phase transition model. Finally, a constitutive model that takes stress as an influence factor on the phase transition was also established. Compared with other expressions, this new model has fewer parameters and is more accurate. To verify the rationality and accuracy of the new phase transition and constitutive models, comparisons between simulated and experimental results were carried out.
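
    A minimal numerical reading of the normally distributed volume-fraction idea is sketched below: the frozen (crystalline) phase fraction is a cumulative normal function of temperature, centered on a transition temperature that shifts nonlinearly with applied stress. The functional form of the stress dependence and all parameter values are illustrative assumptions, not the paper's calibrated model.

```python
import math

def frozen_phase_fraction(T, stress, Tg0=320.0, spread=8.0, k=2.0):
    """Volume fraction of the frozen phase as a cumulative normal of temperature.

    Tg0    : stress-free transition temperature [K] (assumed)
    spread : standard deviation of the transition [K] (assumed)
    k      : hypothetical nonlinear stress-shift coefficient [K/MPa^0.5]
    """
    Tg = Tg0 + k * math.sqrt(stress)              # stress-shifted transition temperature
    z = (Tg - T) / (spread * math.sqrt(2.0))
    return 0.5 * (1.0 + math.erf(z))              # Phi((Tg - T) / spread)

for T in (300.0, 315.0, 330.0, 345.0):
    print(f"T = {T:5.1f} K  frozen fraction = {frozen_phase_fraction(T, stress=10.0):.3f}")
```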

  4. Hierarchical Multinomial Processing Tree Models: A Latent-Trait Approach

    ERIC Educational Resources Information Center

    Klauer, Karl Christoph

    2010-01-01

    Multinomial processing tree models are widely used in many areas of psychology. A hierarchical extension of the model class is proposed, using a multivariate normal distribution of person-level parameters with the mean and covariance matrix to be estimated from the data. The hierarchical model allows one to take variability between persons into…

  5. Developing Visualization Support System for Teaching/Learning Database Normalization

    ERIC Educational Resources Information Center

    Folorunso, Olusegun; Akinwale, AdioTaofeek

    2010-01-01

    Purpose: In tertiary institutions, some students find it hard to learn database design theory, in particular database normalization. The purpose of this paper is to develop a visualization tool to give students an interactive, hands-on experience of the database normalization process. Design/methodology/approach: The model-view-controller architecture…

  6. On the generation of log-Lévy distributions and extreme randomness

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo; Klafter, Joseph

    2011-10-01

    The log-normal distribution is prevalent across the sciences, as it emerges from the combination of multiplicative processes and the central limit theorem (CLT). The CLT, beyond yielding the normal distribution, also yields the class of Lévy distributions. The log-Lévy distributions are the Lévy counterparts of the log-normal distribution; they appear in the context of ultraslow diffusion processes, and they are categorized by Mandelbrot as belonging to the class of extreme randomness. In this paper, we present a natural stochastic growth model from which both the log-normal distribution and the log-Lévy distributions emerge universally: the former in the case of a deterministic underlying setting, and the latter in the case of a stochastic underlying setting. In particular, we establish a stochastic growth model which universally generates Mandelbrot's extreme randomness.
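
    The first half of the mechanism described here - multiplicative growth plus the CLT yielding a log-normal - is easy to reproduce numerically. The sketch below multiplies many independent positive factors and checks that the logarithm of the product is approximately normal; the log-Lévy counterpart would require drawing log-increments from a heavy-tailed stable law instead, which is omitted. The factor distribution and sample sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)

# Multiplicative growth: X = product of many i.i.d. positive factors (uniform here).
n_steps, n_paths = 400, 20_000
factors = rng.uniform(0.90, 1.12, size=(n_paths, n_steps))
log_x = np.log(factors).sum(axis=1)            # log of the product

# CLT: the sum of i.i.d. log-factors is approximately normal, so X is approximately log-normal.
z = (log_x - log_x.mean()) / log_x.std()
print("skewness of log X (normal -> 0):       ", round(float((z ** 3).mean()), 3))
print("excess kurtosis of log X (normal -> 0):", round(float((z ** 4).mean() - 3.0), 3))
```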

  7. Collective thermal transport in pure and alloy semiconductors.

    PubMed

    Torres, Pol; Mohammed, Amr; Torelló, Àlvar; Bafaluy, Javier; Camacho, Juan; Cartoixà, Xavier; Shakouri, Ali; Alvarez, F Xavier

    2018-03-07

    Conventional models for predicting the thermal conductivity of alloys usually assume a pure kinetic regime, since alloy scattering dominates normal processes. However, some discrepancies between these models and experiments at very small alloy concentrations have been reported. In this work, we use the full first principles kinetic collective model (KCM) to calculate the thermal conductivity of Si1-xGex and InxGa1-xAs alloys. The calculated thermal conductivities match well with the experimental data for all alloy concentrations. The model shows that the collective contribution must be taken into account at very low impurity concentrations. For higher concentrations, the collective contribution is suppressed, but normal collisions have the effect of significantly reducing the kinetic contribution. The study thus shows the importance of properly including normal processes, even for alloys, for accurate modeling of thermal transport. Furthermore, the phonon spectral distribution of the thermal conductivity is studied in the framework of KCM, providing insights to interpret the superdiffusive regime introduced in the truncated Lévy flight framework.

  8. Quasi-normal modes from non-commutative matrix dynamics

    NASA Astrophysics Data System (ADS)

    Aprile, Francesco; Sanfilippo, Francesco

    2017-09-01

    We explore similarities between the process of relaxation in the BMN matrix model and the physics of black holes in AdS/CFT. Focusing on Dyson-fluid solutions of the matrix model, we perform numerical simulations of the real time dynamics of the system. By quenching the equilibrium distribution we study quasi-normal oscillations of scalar single trace observables, we isolate the lowest quasi-normal mode, and we determine its frequencies as a function of the energy. Considering the BMN matrix model as a truncation of N=4 SYM, we also compute the frequencies of the quasi-normal modes of the dual scalar fields in the AdS5-Schwarzschild background. We compare the results, and we find a surprising similarity.

  9. Attention and normalization circuits in macaque V1.

    PubMed

    Sanayei, M; Herrero, J L; Distler, C; Thiele, A

    2015-04-01

    Attention affects neuronal processing and improves behavioural performance. In extrastriate visual cortex these effects have been explained by normalization models, which assume that attention influences the circuit that mediates surround suppression. While normalization models have been able to explain attentional effects, their validity has rarely been tested against alternative models. Here we investigate how attention and surround/mask stimuli affect neuronal firing rates and orientation tuning in macaque V1. Surround/mask stimuli provide an estimate of the extent to which V1 neurons are affected by normalization, which was compared against the effects of spatial top-down attention. For some attention/surround effect comparisons, the strength of attentional modulation was correlated with the strength of surround modulation, suggesting that attention and surround/mask stimulation (i.e. normalization) might use a common mechanism. To explore this in detail, we fitted multiplicative and additive models of attention to our data. In one class of models, attention contributed to normalization mechanisms, whereas in a different class of models it did not. Model selection based on Akaike's and on Bayesian information criteria demonstrated that in most cells the effects of attention were best described by models where attention did not contribute to normalization mechanisms. This demonstrates that attentional influences on neuronal responses in primary visual cortex often bypass normalization mechanisms. © 2015 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  10. A Digital Image-Based Discrete Fracture Network Model and Its Numerical Investigation of Direct Shear Tests

    NASA Astrophysics Data System (ADS)

    Wang, Peitao; Cai, Meifeng; Ren, Fenhua; Li, Changhong; Yang, Tianhong

    2017-07-01

    This paper develops a numerical approach to determine the mechanical behavior of discrete fracture network (DFN) models based on a digital image processing technique and the particle flow code (PFC2D). A series of direct shear tests of jointed rocks were numerically performed to study the effects of normal stress, friction coefficient and joint bond strength on the mechanical behavior of jointed rock and to evaluate the influence of micro-parameters on the shear properties of jointed rocks using the proposed approach. The complete shear stress-displacement curve of the DFN model under direct shear tests is presented to evaluate the failure processes of jointed rock. The results show that the peak and residual strengths are sensitive to normal stress. A higher normal stress has a greater effect on the initiation and propagation of cracks. Additionally, an increase in the bond strength ratio results in an increase in the number of both shear and normal cracks. The friction coefficient was also found to have a significant influence on the shear strength and shear cracks; increasing the friction coefficient resulted in a decrease in the initiation of normal cracks. The unique contribution of this paper is the proposed modeling technique to simulate the mechanical behavior of jointed rock masses based on particle mechanics approaches.

  11. An ordinary differential equation model for full thickness wounds and the effects of diabetes.

    PubMed

    Bowden, L G; Maini, P K; Moulton, D E; Tang, J B; Wang, X T; Liu, P Y; Byrne, H M

    2014-11-21

    Wound healing is a complex process in which a sequence of interrelated phases contributes to a reduction in wound size. For diabetic patients, many of these processes are compromised, so that wound healing slows down. In this paper we present a simple ordinary differential equation model for wound healing in which attention focusses on the dominant processes that contribute to closure of a full thickness wound. Asymptotic analysis of the resulting model reveals that normal healing occurs in stages: the initial and rapid elastic recoil of the wound is followed by a longer proliferative phase during which growth in the dermis dominates healing. At longer times, fibroblasts exert contractile forces on the dermal tissue, the resulting tension stimulating further dermal tissue growth and enhancing wound closure. By fitting the model to experimental data we find that the major difference between normal and diabetic healing is a marked reduction in the rate of dermal tissue growth for diabetic patients. The model is used to estimate the breakdown of dermal healing into two processes: tissue growth and contraction, the proportions of which provide information about the quality of the healed wound. We show further that increasing dermal tissue growth in the diabetic wound produces closure times similar to those associated with normal healing and we discuss the clinical implications of this hypothesised treatment. Copyright © 2014 Elsevier Ltd. All rights reserved.
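
    To make the structure of such a model concrete, the toy sketch below integrates a two-variable system in which the wound radius shrinks through brief elastic recoil, tissue growth, and fibroblast-driven contraction, and the diabetic case is obtained simply by lowering the growth-rate constant. The equations and parameters are invented for illustration; they are not the model fitted in the paper.

```python
import numpy as np

def heal(growth_rate, contraction_rate=0.05, k_recoil=0.7, days=40.0, dt=0.01):
    """Toy wound-healing ODE: fast elastic recoil, then growth and contraction."""
    r, f = 1.0, 0.0                          # normalized wound radius, fibroblast activity
    history = []
    for step in np.arange(0.0, days, dt):
        recoil = k_recoil * r if step < 1.0 else 0.0      # brief elastic recoil phase
        df = 0.2 * (1.0 - f)                              # fibroblasts ramp up
        dr = -(recoil + growth_rate + contraction_rate * f) * r
        f += dt * df
        r = max(r + dt * dr, 0.0)
        history.append(r)
    return np.array(history), dt

for label, rate in (("normal", 0.12), ("diabetic", 0.03)):   # diabetic: reduced tissue growth
    h, dt = heal(growth_rate=rate)
    closed = h < 0.05                                         # radius reduced by 95%
    days_to_close = closed.argmax() * dt if closed.any() else float("inf")
    print(f"{label:8s}: wound radius below 5% after {days_to_close:.1f} days")
```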

  12. A speech processing study using an acoustic model of a multiple-channel cochlear implant

    NASA Astrophysics Data System (ADS)

    Xu, Ying

    1998-10-01

    A cochlear implant is an electronic device designed to provide sound information for adults and children who have bilateral profound hearing loss. The task of representing speech signals as electrical stimuli is central to the design and performance of cochlear implants. Studies have shown that the current speech-processing strategies provide significant benefits to cochlear implant users. However, the evaluation and development of speech-processing strategies have been complicated by hardware limitations and large variability in user performance. To alleviate these problems, an acoustic model of a cochlear implant with the SPEAK strategy is implemented in this study, in which a set of acoustic stimuli whose psychophysical characteristics are as close as possible to those produced by a cochlear implant are presented to normal-hearing subjects. To test the effectiveness and feasibility of this acoustic model, a psychophysical experiment was conducted to match the performance of a normal-hearing listener using model-processed signals to that of a cochlear implant user. Good agreement was found between an implanted patient and an age-matched normal-hearing subject in a dynamic signal discrimination experiment, indicating that this acoustic model is a reasonably good approximation of a cochlear implant with the SPEAK strategy. The acoustic model was then used to examine the potential of the SPEAK strategy in terms of its temporal and frequency encoding of speech. It was hypothesized that better temporal and frequency encoding of speech can be accomplished by higher stimulation rates and a larger number of activated channels. Vowel and consonant recognition tests were conducted on normal-hearing subjects using speech tokens processed by the acoustic model, with different combinations of stimulation rate and number of activated channels. The results showed that vowel recognition was best at 600 pps and 8 activated channels, but further increases in stimulation rate and channel numbers were not beneficial. Manipulations of stimulation rate and number of activated channels did not appreciably affect consonant recognition. These results suggest that overall speech performance may improve by appropriately increasing stimulation rate and number of activated channels. Future revision of this acoustic model is necessary to provide more accurate amplitude representation of speech.

  13. Deconstructing Interocular Suppression: Attention and Divisive Normalization

    PubMed Central

    Li, Hsin-Hung; Carrasco, Marisa; Heeger, David J.

    2015-01-01

    In interocular suppression, a suprathreshold monocular target can be rendered invisible by a salient competitor stimulus presented in the other eye. Despite decades of research on interocular suppression and related phenomena (e.g., binocular rivalry, flash suppression, continuous flash suppression), the neural processing underlying interocular suppression is still unknown. We developed and tested a computational model of interocular suppression. The model included two processes that contributed to the strength of interocular suppression: divisive normalization and attentional modulation. According to the model, the salient competitor induced a stimulus-driven attentional modulation selective for the location and orientation of the competitor, thereby increasing the gain of neural responses to the competitor and reducing the gain of neural responses to the target. Additional suppression was induced by divisive normalization in the model, similar to other forms of visual masking. To test the model, we conducted psychophysics experiments in which both the size and the eye-of-origin of the competitor were manipulated. For small and medium competitors, behavioral performance was consonant with a change in the response gain of neurons that responded to the target. But large competitors induced a contrast-gain change, even when the competitor was split between the two eyes. The model correctly predicted these results and outperformed an alternative model in which the attentional modulation was eye specific. We conclude that both stimulus-driven attention (selective for location and feature) and divisive normalization contribute to interocular suppression. PMID:26517321

  14. Deconstructing Interocular Suppression: Attention and Divisive Normalization.

    PubMed

    Li, Hsin-Hung; Carrasco, Marisa; Heeger, David J

    2015-10-01

    In interocular suppression, a suprathreshold monocular target can be rendered invisible by a salient competitor stimulus presented in the other eye. Despite decades of research on interocular suppression and related phenomena (e.g., binocular rivalry, flash suppression, continuous flash suppression), the neural processing underlying interocular suppression is still unknown. We developed and tested a computational model of interocular suppression. The model included two processes that contributed to the strength of interocular suppression: divisive normalization and attentional modulation. According to the model, the salient competitor induced a stimulus-driven attentional modulation selective for the location and orientation of the competitor, thereby increasing the gain of neural responses to the competitor and reducing the gain of neural responses to the target. Additional suppression was induced by divisive normalization in the model, similar to other forms of visual masking. To test the model, we conducted psychophysics experiments in which both the size and the eye-of-origin of the competitor were manipulated. For small and medium competitors, behavioral performance was consonant with a change in the response gain of neurons that responded to the target. But large competitors induced a contrast-gain change, even when the competitor was split between the two eyes. The model correctly predicted these results and outperformed an alternative model in which the attentional modulation was eye specific. We conclude that both stimulus-driven attention (selective for location and feature) and divisive normalization contribute to interocular suppression.

  15. A normalization model suggests that attention changes the weighting of inputs between visual areas

    PubMed Central

    Cohen, Marlene R.

    2017-01-01

    Models of divisive normalization can explain the trial-averaged responses of neurons in sensory, association, and motor areas under a wide range of conditions, including how visual attention changes the gains of neurons in visual cortex. Attention, like other modulatory processes, is also associated with changes in the extent to which pairs of neurons share trial-to-trial variability. We showed recently that in addition to decreasing correlations between similarly tuned neurons within the same visual area, attention increases correlations between neurons in primary visual cortex (V1) and the middle temporal area (MT) and that an extension of a classic normalization model can account for this correlation increase. One of the benefits of having a descriptive model that can account for many physiological observations is that it can be used to probe the mechanisms underlying processes such as attention. Here, we use electrical microstimulation in V1 paired with recording in MT to provide causal evidence that the relationship between V1 and MT activity is nonlinear and is well described by divisive normalization. We then use the normalization model and recording and microstimulation experiments to show that the attention dependence of V1–MT correlations is better explained by a mechanism in which attention changes the weights of connections between V1 and MT than by a mechanism that modulates responses in either area. Our study shows that normalization can explain interactions between neurons in different areas and provides a framework for using multiarea recording and stimulation to probe the neural mechanisms underlying neuronal computations. PMID:28461501
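
    One compact way to see the two mechanisms being contrasted is to write the MT response as a weighted sum of V1 inputs passed through divisive normalization, and then let attention either rescale the MT response (a within-area gain change) or reallocate the V1-to-MT weights. The numbers and functional form below are illustrative assumptions, not the fitted model from the paper.

```python
import numpy as np

v1 = np.array([4.0, 2.0, 1.0])     # drives of three V1 inputs (arbitrary units)
w = np.array([0.5, 0.3, 0.2])      # baseline V1 -> MT connection weights
sigma = 1.0

def mt_response(weights, gain=1.0):
    """MT response as a normalized, weighted sum of V1 inputs."""
    return gain * (weights @ v1) / (sigma + v1.sum())

# Hypothesis A: attention modulates responses within an area (an output gain change).
gain_change = mt_response(w, gain=1.3)

# Hypothesis B: attention changes the weighting of inputs between areas
# (boost the attended V1 input's weight, keep the total weight fixed).
w_att = w * np.array([1.6, 1.0, 1.0])
weight_change = mt_response(w_att / w_att.sum() * w.sum())

print("baseline:", round(mt_response(w), 3),
      "| gain change:", round(gain_change, 3),
      "| weight change:", round(weight_change, 3))
```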

  16. A normalization model suggests that attention changes the weighting of inputs between visual areas.

    PubMed

    Ruff, Douglas A; Cohen, Marlene R

    2017-05-16

    Models of divisive normalization can explain the trial-averaged responses of neurons in sensory, association, and motor areas under a wide range of conditions, including how visual attention changes the gains of neurons in visual cortex. Attention, like other modulatory processes, is also associated with changes in the extent to which pairs of neurons share trial-to-trial variability. We showed recently that in addition to decreasing correlations between similarly tuned neurons within the same visual area, attention increases correlations between neurons in primary visual cortex (V1) and the middle temporal area (MT) and that an extension of a classic normalization model can account for this correlation increase. One of the benefits of having a descriptive model that can account for many physiological observations is that it can be used to probe the mechanisms underlying processes such as attention. Here, we use electrical microstimulation in V1 paired with recording in MT to provide causal evidence that the relationship between V1 and MT activity is nonlinear and is well described by divisive normalization. We then use the normalization model and recording and microstimulation experiments to show that the attention dependence of V1-MT correlations is better explained by a mechanism in which attention changes the weights of connections between V1 and MT than by a mechanism that modulates responses in either area. Our study shows that normalization can explain interactions between neurons in different areas and provides a framework for using multiarea recording and stimulation to probe the neural mechanisms underlying neuronal computations.

  17. Pricing foreign equity option under stochastic volatility tempered stable Lévy processes

    NASA Astrophysics Data System (ADS)

    Gong, Xiaoli; Zhuang, Xintian

    2017-10-01

    Considering that financial asset returns exhibit leptokurtosis and asymmetry as well as clustering and heteroskedasticity effects, this paper replaces the log-normal jumps in the Heston stochastic volatility model with the classical tempered stable (CTS) distribution and the normal tempered stable (NTS) distribution to construct a stochastic volatility tempered stable Lévy process (TSSV) model. The TSSV model framework permits the infinite activity jump behavior of return dynamics and the time varying volatility consistently observed in financial markets by subordinating a tempered stable process to a stochastic volatility process, capturing the leptokurtosis, fat-tailedness and asymmetry of returns. By employing the analytical characteristic function and the fast Fourier transform (FFT) technique, the formula for the probability density function (PDF) of TSSV returns is derived, making an analytical formula for foreign equity option (FEO) pricing available. High frequency financial returns data are employed to verify the effectiveness of the proposed models in reflecting the stylized facts of financial markets. Numerical analysis is performed to investigate the relationship between the corresponding parameters and the implied volatility of the foreign equity option.

  18. Aggregate and Individual Replication Probability within an Explicit Model of the Research Process

    ERIC Educational Resources Information Center

    Miller, Jeff; Schwarz, Wolf

    2011-01-01

    We study a model of the research process in which the true effect size, the replication jitter due to changes in experimental procedure, and the statistical error of effect size measurement are all normally distributed random variables. Within this model, we analyze the probability of successfully replicating an initial experimental result by…
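
    A Monte Carlo sketch of the kind of model described (all three quantities normally distributed) is given below: it estimates the probability that a replication attempt reaches a criterion given that the original experiment did, under assumed values for the true effect size, replication jitter, and measurement error. The significance rule and every number are illustrative assumptions, not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 1_000_000

delta  = 0.4    # true effect size (assumed)
jitter = 0.15   # SD of replication jitter from procedural changes (assumed)
error  = 0.25   # SD of effect-size measurement error (assumed)
crit   = 0.3    # measured effect size required to call a result "significant" (assumed)

# The original study measures the true effect plus measurement error.
original = delta + rng.normal(0.0, error, n)
# A replication targets a jittered true effect and adds its own measurement error.
replication = delta + rng.normal(0.0, jitter, n) + rng.normal(0.0, error, n)

sig_orig = original > crit
print("P(original significant)                  :", round(sig_orig.mean(), 3))
print("aggregate P(replication significant)     :", round((replication > crit).mean(), 3))
print("P(replication significant | original sig):", round((replication[sig_orig] > crit).mean(), 3))
```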

  19. Surface morphology of active normal faults in hard rock: Implications for the mechanics of the Asal Rift, Djibouti

    NASA Astrophysics Data System (ADS)

    Pinzuti, Paul; Mignan, Arnaud; King, Geoffrey C. P.

    2010-10-01

    Tectonic-stretching models have been previously proposed to explain the process of continental break-up through the example of the Asal Rift, Djibouti, one of the few places where the early stages of seafloor spreading can be observed. In these models, deformation is distributed starting at the base of a shallow seismogenic zone, in which sub-vertical normal faults are responsible for subsidence whereas cracks accommodate extension. Alternative models suggest that extension results from localised magma intrusion, with normal faults accommodating extension and subsidence only above the maximum reach of the magma column. In these magmatic rifting models, or so-called magmatic intrusion models, normal faults have dips of 45-55° and root into dikes. Vertical profiles of normal fault scarps from a levelling campaign in the Asal Rift, where normal faults seem sub-vertical at surface level, have been analysed to discuss the creation and evolution of normal faults in massive fractured rocks (basalt lava flows), using mechanical and kinematic concepts. We show that the studied normal fault planes actually have an average dip ranging between 45° and 65° and are characterised by an irregular stepped form. We suggest that these normal fault scarps correspond to sub-vertical en echelon structures, and that, at greater depth, these scarps combine and give birth to dipping normal faults. The results of our analysis are compatible with the magmatic intrusion models instead of tectonic-stretching models. The geometry of faulting between the Fieale volcano and Lake Asal in the Asal Rift can be simply related to the depth of diking, which in turn can be related to magma supply. This new view supports the magmatic intrusion model of the early stages of continental break-up.

  20. Thorough specification of the neurophysiologic processes underlying behavior and of their manifestation in EEG - demonstration with the go/no-go task.

    PubMed

    Shahaf, Goded; Pratt, Hillel

    2013-01-01

    In this work we demonstrate the principles of a systematic approach to modeling the neurophysiologic processes underlying a behavioral function. The modeling is based upon a flexible simulation tool, which enables parametric specification of the underlying neurophysiologic characteristics. While the impact of selecting specific parameters is of interest, in this work we focus on the insights that emerge from rather widely accepted assumptions regarding neuronal representation. We show that harnessing even such simple assumptions enables the derivation of significant insights regarding the nature of the neurophysiologic processes underlying behavior. We demonstrate our approach in some detail by modeling the behavioral go/no-go task. We further demonstrate the practical significance of this simplified modeling approach in interpreting experimental data - the manifestation of these processes in the EEG and ERP literature of normal and abnormal (ADHD) function - as well as with a comprehensive analysis of relevant ERP data. In fact, we show that from the model-based spatiotemporal segregation of the processes it is possible to derive simple yet effective, theory-based EEG markers differentiating normal and ADHD subjects. We conclude by claiming that the neurophysiologic processes modeled for the go/no-go task are part of a limited set of neurophysiologic processes which underlie, in a variety of combinations, any behavioral function with a measurable operational definition. Such neurophysiologic processes could be sampled directly from EEG on the basis of model-based spatiotemporal segregation.

  1. Neyman, Markov processes and survival analysis.

    PubMed

    Yang, Grace

    2013-07-01

    J. Neyman used stochastic processes extensively in his applied work. One example is the Fix and Neyman (F-N) competing risks model (1951) that uses finite homogeneous Markov processes to analyse clinical trials with breast cancer patients. We revisit the F-N model, and compare it with the Kaplan-Meier (K-M) formulation for right censored data. The comparison offers a way to generalize the K-M formulation to include risks of recovery and relapses in the calculation of a patient's survival probability. The generalization is to extend the F-N model to a nonhomogeneous Markov process. Closed-form solutions of the survival probability are available in special cases of the nonhomogeneous processes, like the popular multiple decrement model (including the K-M model) and Chiang's staging model, but these models do not consider recovery and relapses while the F-N model does. An analysis of sero-epidemiology current status data with recurrent events is illustrated. Fix and Neyman used Neyman's RBAN (regular best asymptotic normal) estimates for the risks, and provided a numerical example showing the importance of considering both the survival probability and the length of time of a patient living a normal life in the evaluation of clinical trials. The said extension would result in a complicated model and it is unlikely to find analytical closed-form solutions for survival analysis. With ever increasing computing power, numerical methods offer a viable way of investigating the problem.
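
    The homogeneous-Markov machinery behind the F-N model is easy to illustrate with a small generator matrix: state occupation probabilities follow p(t) = p(0) exp(Qt), and the probability of being alive, or of leading a normal life, at time t is read off the transient states. The states, rates, and time points below are invented for illustration and are not the clinical-trial data discussed in the paper.

```python
import numpy as np
from scipy.linalg import expm

# States: 0 = normal life, 1 = relapse/under treatment, 2 = recovered, 3 = dead (absorbing).
# Q[i, j] is the transition rate from state i to state j; each row sums to zero (rates per year).
Q = np.array([[-0.30,  0.20,  0.00,  0.10],
              [ 0.05, -0.55,  0.30,  0.20],
              [ 0.02,  0.10, -0.17,  0.05],
              [ 0.00,  0.00,  0.00,  0.00]])

p0 = np.array([1.0, 0.0, 0.0, 0.0])       # everyone starts in the "normal life" state

for t in (1.0, 2.0, 5.0):
    p = p0 @ expm(Q * t)                  # state occupation probabilities at time t
    print(f"t = {t:3.1f} y   P(alive) = {1.0 - p[3]:.3f}   P(normal life) = {p[0]:.3f}")
```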

  2. Weakly coupled map lattice models for multicellular patterning and collective normalization of abnormal single-cell states

    NASA Astrophysics Data System (ADS)

    García-Morales, Vladimir; Manzanares, José A.; Mafe, Salvador

    2017-04-01

    We present a weakly coupled map lattice model for patterning that explores the effects exerted by weakening the local dynamic rules on model biological and artificial networks composed of two-state building blocks (cells). To this end, we use two cellular automata models based on (i) a smooth majority rule (model I) and (ii) a set of rules similar to those of Conway's Game of Life (model II). The normal and abnormal cell states evolve according to local rules that are modulated by a parameter κ. This parameter quantifies the effective weakening of the prescribed rules due to the limited coupling of each cell to its neighborhood and can be experimentally controlled by appropriate external agents. The emergent spatiotemporal maps of single-cell states should be of significance for positional information processes as well as for intercellular communication in tumorigenesis, where the collective normalization of abnormal single-cell states by a predominantly normal neighborhood may be crucial.
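
    The smooth-majority variant (model I) can be sketched in a few lines: each cell updates toward the majority state of its neighborhood, but the update is attenuated by the coupling parameter κ, so weak coupling lets abnormal islands persist while stronger coupling lets a predominantly normal neighborhood normalize them. Grid size, neighborhood, and the exact update rule below are illustrative choices, not the published model.

```python
import numpy as np

def step(grid, kappa):
    """One update of a weakly coupled smooth-majority rule on a 2D torus.

    grid  : array of cell states in [0, 1] (0 = abnormal, 1 = normal)
    kappa : coupling strength in [0, 1]; kappa = 1 recovers the full majority rule
    """
    # Mean state of the 8-cell Moore neighborhood, with periodic boundaries.
    neigh = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
                for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)) / 8.0
    target = (neigh > 0.5).astype(float)            # majority vote of the neighborhood
    return (1.0 - kappa) * grid + kappa * target    # weak coupling attenuates the rule

rng = np.random.default_rng(5)
grid = (rng.random((64, 64)) < 0.9).astype(float)   # mostly normal cells, 10% abnormal
for _ in range(50):
    grid = step(grid, kappa=0.3)
print("fraction of (near-)normal cells after 50 steps:", round(float((grid > 0.5).mean()), 3))
```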

  3. Weakly coupled map lattice models for multicellular patterning and collective normalization of abnormal single-cell states.

    PubMed

    García-Morales, Vladimir; Manzanares, José A; Mafe, Salvador

    2017-04-01

    We present a weakly coupled map lattice model for patterning that explores the effects exerted by weakening the local dynamic rules on model biological and artificial networks composed of two-state building blocks (cells). To this end, we use two cellular automata models based on (i) a smooth majority rule (model I) and (ii) a set of rules similar to those of Conway's Game of Life (model II). The normal and abnormal cell states evolve according to local rules that are modulated by a parameter κ. This parameter quantifies the effective weakening of the prescribed rules due to the limited coupling of each cell to its neighborhood and can be experimentally controlled by appropriate external agents. The emergent spatiotemporal maps of single-cell states should be of significance for positional information processes as well as for intercellular communication in tumorigenesis, where the collective normalization of abnormal single-cell states by a predominantly normal neighborhood may be crucial.
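
    A minimal sketch of the kind of two-state lattice dynamics described above: cells follow a smoothed majority rule whose influence is weakened by a coupling parameter kappa. The specific neighborhood, flip rule, and parameter values are assumptions for illustration, not the authors' exact models I and II.

    import numpy as np

    rng = np.random.default_rng(0)

    def step(grid, kappa):
        """One update of a two-state lattice under a smoothed majority rule.
        kappa in [0, 1] weakens the pull of the neighborhood majority
        (kappa = 1 -> strict majority vote, kappa = 0 -> cells ignore neighbors)."""
        # Moore-neighborhood sum via periodic shifts (the (0, 0) shift is removed).
        nbr = sum(np.roll(np.roll(grid, i, 0), j, 1)
                  for i in (-1, 0, 1) for j in (-1, 0, 1)) - grid
        majority = (nbr > 4).astype(float)           # 8 neighbors -> majority if > 4
        p_flip = kappa * np.abs(majority - grid)     # weakened tendency to follow majority
        flips = rng.random(grid.shape) < p_flip
        return np.where(flips, 1 - grid, grid)

    grid = (rng.random((64, 64)) < 0.2).astype(float)   # 20% "abnormal" cells initially
    for _ in range(50):
        grid = step(grid, kappa=0.8)
    print("abnormal fraction after 50 steps:", grid.mean())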

  4. Direct analysis in real time mass spectrometry, a process analytical technology tool for real-time process monitoring in botanical drug manufacturing.

    PubMed

    Wang, Lu; Zeng, Shanshan; Chen, Teng; Qu, Haibin

    2014-03-01

    A promising process analytical technology (PAT) tool has been introduced for batch process monitoring. Direct analysis in real time mass spectrometry (DART-MS), a means of rapid fingerprint analysis, was applied to a percolation process with multi-constituent substances for an anti-cancer botanical preparation. Fifteen batches were carried out, including ten normal operations and five abnormal batches with artificial variations. The obtained multivariate data were analyzed by a multi-way partial least squares (MPLS) model. Control trajectories were derived from eight normal batches, and the qualification was tested by R(2) and Q(2). Accuracy and diagnostic capability of the batch model were then validated on the remaining batches. Assisted by high performance liquid chromatography (HPLC) determination, process faults were explained by the corresponding variable contributions. Furthermore, a batch-level model was developed to compare and assess the model performance. The present study has demonstrated that DART-MS is very promising for process monitoring in botanical manufacturing. Compared with general PAT tools, DART-MS offers a particular account of the effective compositions and can potentially be used to improve batch quality and process consistency of samples in complex matrices. Copyright © 2014 Elsevier B.V. All rights reserved.
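
    A minimal sketch of the general multivariate batch-monitoring idea behind such models: a latent-variable model is fitted to fingerprints of normal batches and a Hotelling T-squared statistic with an empirical control limit flags abnormal ones. Plain PCA and random stand-in data are used here instead of the study's MPLS model and DART-MS spectra.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(1)

    # Rows = unfolded batch fingerprints (random stand-ins for DART-MS spectra).
    normal_batches = rng.normal(size=(8, 200))
    test_batch     = rng.normal(size=(1, 200)) + 0.8   # shifted -> simulated fault

    pca = PCA(n_components=3).fit(normal_batches)

    def hotelling_t2(model, X):
        """Hotelling T^2 of samples X in the score space of a fitted PCA model."""
        scores = model.transform(X)
        return np.sum(scores**2 / model.explained_variance_, axis=1)

    t2_normal = hotelling_t2(pca, normal_batches)
    limit = np.percentile(t2_normal, 95)               # crude empirical control limit
    print("test batch T2 =", hotelling_t2(pca, test_batch)[0], "limit =", limit)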

  5. Surface Morphology of Active Normal Faults in Hard Rock: Implications for the Mechanics of the Asal Rift, Djibouti

    NASA Astrophysics Data System (ADS)

    Pinzuti, P.; Mignan, A.; King, G. C.

    2009-12-01

    Mechanical stretching models have been previously proposed to explain the process of continental break-up through the example of the Asal Rift, Djibouti, one of the few places where the early stages of seafloor spreading can be observed. In these models, deformation is distributed starting at the base of a shallow seismogenic zone, in which sub-vertical normal faults are responsible for subsidence whereas cracks accommodate extension. Alternative models suggest that extension results from localized magma injection, with normal faults accommodating extension and subsidence above the maximum reach of the magma column. In these magmatic intrusion models, normal faults have dips of 45-55° and root into dikes. Using mechanical and kinematics concepts and vertical profiles of normal fault scarps from an Asal Rift campaign, where normal faults are sub-vertical on surface level, we discuss the creation and evolution of normal faults in massive fractured rocks (basalt). We suggest that the observed fault scarps correspond to sub-vertical en echelon structures and that at greater depth, these scarps combine and give birth to dipping normal faults. Finally, the geometry of faulting between the Fieale volcano and Lake Asal in the Asal Rift can be simply related to the depth of diking, which in turn can be related to magma supply. This new view supports the magmatic intrusion model of early stages of continental breaking.

  6. Replication of Cancellation Orders Using First-Passage Time Theory in Foreign Currency Market

    NASA Astrophysics Data System (ADS)

    Boilard, Jean-François; Kanazawa, Kiyoshi; Takayasu, Hideki; Takayasu, Misako

    Our research focuses on the annihilation dynamics of limit orders in a spot foreign currency market for various currency pairs. We analyze the cancellation order distribution conditioned on the normalized distance from the mid-price, where the normalized distance is defined as the final distance divided by the initial distance. To reproduce the real data, we introduce two simple models that assume the market price moves randomly and cancellation occurs either after a fixed time t or following a Poisson process. Our model qualitatively reproduces the basic statistical properties of cancellation orders in the data when limit orders are cancelled according to the Poisson process. We briefly discuss the implications of our findings for the construction of more detailed microscopic models.
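
    A minimal Monte Carlo sketch of the second (Poisson-cancellation) model described above: the mid-price follows a Gaussian random walk and each limit order is cancelled after an exponentially distributed lifetime, at which point the normalized distance (final over initial) is recorded. The parameter values are illustrative, not fitted to market data.

    import numpy as np

    rng = np.random.default_rng(2)

    def normalized_distance(d0=10.0, sigma=1.0, rate=0.05, n_orders=10_000):
        """Final/initial distance of a limit order from the mid-price at cancellation,
        with a random-walk mid-price and Poisson (exponential-lifetime) cancellation."""
        lifetimes = rng.exponential(1.0 / rate, size=n_orders).astype(int) + 1
        out = np.empty(n_orders)
        for k, T in enumerate(lifetimes):
            walk = np.cumsum(rng.normal(0.0, sigma, size=T))   # mid-price displacement
            out[k] = abs(d0 - walk[-1]) / d0                   # normalized final distance
        return out

    d = normalized_distance()
    print("median normalized distance at cancellation:", np.median(d))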

  7. EGSIEM: Combination of GRACE monthly gravity models on normal equation level

    NASA Astrophysics Data System (ADS)

    Meyer, Ulrich; Jean, Yoomin; Jäggi, Adrian; Mayer-Gürr, Torsten; Neumayer, Hans; Lemoine, Jean-Michel

    2016-04-01

    One of the three geodetic services to be realized in the frame of the EGSIEM project is a scientific combination service. Each associated processing center (AC) will follow a set of common processing standards but will apply its own, independent analysis method. Therefore the quality, robustness and reliability of the combined monthly gravity fields are expected to improve significantly compared to the individual solutions. The monthly GRACE gravity fields of all ACs are combined on normal equation level. The individual normal equations are weighted depending on pairwise comparisons of the individual gravity field solutions. To derive these weights, and for quality control of the individual contributions, a combination of the monthly gravity fields on solution level is performed first. The concept of weighting and of the combination on normal equation level is introduced, and the formats used for normal equation exchange and gravity field solutions are described. First results of the combination on normal equation level are presented and compared to the corresponding combinations on solution level. EGSIEM has an open data policy and all processing centers of GRACE gravity fields are invited to participate in the combination.
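
    A hedged numerical sketch of combination on normal equation level: each center contributes a pair (N_i, b_i), and a weighted sum is solved for the combined solution, x = (sum_i w_i N_i)^(-1) sum_i w_i b_i. The weights below are simply inverse noise variances of synthetic data; EGSIEM derives them from pairwise comparisons of the individual solutions instead.

    import numpy as np

    rng = np.random.default_rng(3)
    x_true = rng.normal(size=5)                 # stand-in for gravity field coefficients

    # Each "analysis center" contributes a normal-equation pair (N_i, b_i).
    centers = []
    for noise in (0.01, 0.05, 0.10):
        A = rng.normal(size=(50, 5))            # design matrix of that center
        y = A @ x_true + rng.normal(0, noise, 50)
        centers.append((A.T @ A, A.T @ y))

    # Weighted combination on normal-equation level.
    weights = [1 / 0.01**2, 1 / 0.05**2, 1 / 0.10**2]
    N = sum(w * Ni for w, (Ni, _) in zip(weights, centers))
    b = sum(w * bi for w, (_, bi) in zip(weights, centers))
    x_combined = np.linalg.solve(N, b)
    print("max coefficient error of combined solution:", np.abs(x_combined - x_true).max())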

  8. Analyzing Environmental Policies for Chlorinated Solvents with a Model of Markets and Regulations

    DTIC Science & Technology

    1991-01-01

    electronics, aerospace, fabricated metal products, and dry cleaning depend heavily on chlorinated solvents in their production processes. For example... production processes. The second of the model's components is a group of economic equations that represents all of the solvent substitutions in... Instead, the process for numerically specifying the substitution parameters involves eliciting expert judgments and then normalizing the parameters

  9. Prediction of normalized biodiesel properties by simulation of multiple feedstock blends.

    PubMed

    García, Manuel; Gonzalo, Alberto; Sánchez, José Luis; Arauzo, Jesús; Peña, José Angel

    2010-06-01

    A continuous process for biodiesel production has been simulated using Aspen HYSYS V7.0 software. Feedstocks with a mild acid content have been used as fresh feed. The process flowsheet follows a traditional alkaline transesterification scheme constituted by esterification, transesterification and purification stages. Kinetic models taking into account the concentration of the different species have been employed in order to simulate the behavior of the CSTR reactors and the product distribution within the process. The comparison between experimental data found in the literature and the predicted normalized properties is discussed. Additionally, a comparison between different thermodynamic packages has been performed, and the NRTL activity model has been selected as the most reliable of them. The combination of these models allows the prediction of 13 out of 25 parameters included in standard EN-14214:2003, and confers on simulators great value as a predictive as well as an optimization tool. (c) 2010 Elsevier Ltd. All rights reserved.

  10. Review of Knowledge Enhanced Electronic Logic (KEEL) Technology

    DTIC Science & Technology

    2016-09-01

    compiled. Two KEEL Engine processing models are available for most languages: the “Normal Model” processes information as if it was processed on an... language also makes it easy to “see” the functional relationships and the dynamic (interactive) nature of the language, allows one to interact with... for the Accelerated Processing Model (Patent number 7,512,581 (3/31/2009)). In June 2006, application US 11/446/801 was submitted to support

  11. Near infrared spectroscopy combined with multivariate analysis for monitoring the ethanol precipitation process of fraction I + II + III supernatant in human albumin separation

    NASA Astrophysics Data System (ADS)

    Li, Can; Wang, Fei; Zang, Lixuan; Zang, Hengchang; Alcalà, Manel; Nie, Lei; Wang, Mingyu; Li, Lian

    2017-03-01

    Nowadays, as a powerful process analytical tool, near infrared spectroscopy (NIRS) has been widely applied in process monitoring. In the present work, NIRS combined with multivariate analysis was used to monitor the ethanol precipitation process of the fraction I + II + III (FI + II + III) supernatant in human albumin (HA) separation, to achieve qualitative and quantitative monitoring at the same time and assure the product's quality. First, a qualitative model was established by using principal component analysis (PCA) with 6 of 8 normal batch samples, and evaluated with the remaining 2 normal batches and 3 abnormal batches. The results showed that the first principal component (PC1) score chart could be successfully used for fault detection and diagnosis. Then, two quantitative models were built with 6 of 8 normal batches to determine the content of total protein (TP) and HA separately by using a partial least squares regression (PLS-R) strategy, and the models were validated with the 2 remaining normal batches. The determination coefficient of validation (Rp2), root mean square error of cross validation (RMSECV), root mean square error of prediction (RMSEP) and ratio of performance deviation (RPD) were 0.975, 0.501 g/L, 0.465 g/L and 5.57 for TP, and 0.969, 0.530 g/L, 0.341 g/L and 5.47 for HA, respectively. The results showed that the established models could give a rapid and accurate measurement of the content of TP and HA. The results of this study indicated that NIRS is an effective tool and could be successfully used for simultaneous qualitative and quantitative monitoring of the ethanol precipitation process of the FI + II + III supernatant. This research has significant reference value for assuring the quality and improving the recovery ratio of HA at industrial scale by using NIRS.

  12. Near infrared spectroscopy combined with multivariate analysis for monitoring the ethanol precipitation process of fraction I+II+III supernatant in human albumin separation.

    PubMed

    Li, Can; Wang, Fei; Zang, Lixuan; Zang, Hengchang; Alcalà, Manel; Nie, Lei; Wang, Mingyu; Li, Lian

    2017-03-15

    Nowadays, as a powerful process analytical tool, near infrared spectroscopy (NIRS) has been widely applied in process monitoring. In the present work, NIRS combined with multivariate analysis was used to monitor the ethanol precipitation process of the fraction I+II+III (FI+II+III) supernatant in human albumin (HA) separation, to achieve qualitative and quantitative monitoring at the same time and assure the product's quality. First, a qualitative model was established by using principal component analysis (PCA) with 6 of 8 normal batch samples, and evaluated with the remaining 2 normal batches and 3 abnormal batches. The results showed that the first principal component (PC1) score chart could be successfully used for fault detection and diagnosis. Then, two quantitative models were built with 6 of 8 normal batches to determine the content of total protein (TP) and HA separately by using a partial least squares regression (PLS-R) strategy, and the models were validated with the 2 remaining normal batches. The determination coefficient of validation (Rp2), root mean square error of cross validation (RMSECV), root mean square error of prediction (RMSEP) and ratio of performance deviation (RPD) were 0.975, 0.501 g/L, 0.465 g/L and 5.57 for TP, and 0.969, 0.530 g/L, 0.341 g/L and 5.47 for HA, respectively. The results showed that the established models could give a rapid and accurate measurement of the content of TP and HA. The results of this study indicated that NIRS is an effective tool and could be successfully used for simultaneous qualitative and quantitative monitoring of the ethanol precipitation process of the FI+II+III supernatant. This research has significant reference value for assuring the quality and improving the recovery ratio of HA at industrial scale by using NIRS. Copyright © 2016 Elsevier B.V. All rights reserved.
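
    A small sketch of the validation figures of merit quoted above (RMSEP, determination coefficient of prediction, and RPD) computed for a set of reference and predicted concentrations. The data below are synthetic stand-ins, not the NIRS measurements.

    import numpy as np

    def validation_metrics(y_ref, y_pred):
        """RMSEP, R^2 of prediction, and ratio of performance to deviation (RPD)."""
        resid = y_ref - y_pred
        rmsep = np.sqrt(np.mean(resid**2))
        r2 = 1.0 - np.sum(resid**2) / np.sum((y_ref - y_ref.mean())**2)
        rpd = np.std(y_ref, ddof=1) / rmsep
        return rmsep, r2, rpd

    rng = np.random.default_rng(4)
    y_ref = rng.uniform(5, 25, 40)                 # e.g. total protein in g/L
    y_pred = y_ref + rng.normal(0, 0.5, 40)        # hypothetical PLS-R predictions
    print("RMSEP = %.3f g/L, R2 = %.3f, RPD = %.1f" % validation_metrics(y_ref, y_pred))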

  13. Constant strain rate experiments and constitutive modeling for a class of bitumen

    NASA Astrophysics Data System (ADS)

    Reddy, Kommidi Santosh; Umakanthan, S.; Krishnan, J. Murali

    2012-08-01

    The mechanical properties of bitumen vary with the nature of the crude source and the processing methods employed. To understand the role that processing conditions play in the mechanical properties, bitumen samples derived from the same crude source but processed differently (blown and blended) are investigated. The samples are subjected to constant strain rate experiments in a parallel plate rheometer. The torque applied to realize the prescribed angular velocity for the top plate and the normal force applied to maintain the gap between the top and bottom plate are measured. It is found that when the top plate is held stationary, the time taken by the torque to be reduced by a certain percentage of its maximum value is different from the time taken by the normal force to decrease by the same percentage of its maximum value. Further, the time at which the maximum torque occurs is different from the time at which the maximum normal force occurs. Since the existing constitutive relations for bitumen cannot capture the difference in the relaxation times for the torque and normal force, a new rate type constitutive model, incorporating this response, is proposed. Although the blended and blown bitumen samples used in this study correspond to the same grade, the mechanical responses of the two samples are not the same. This is also reflected in the difference in the values of the material parameters in the model proposed. The differences in the mechanical properties between the differently processed bitumen samples increase further with aging. This has implications for the long-term performance of the pavement.

  14. Trade off between variable and fixed size normalization in orthogonal polynomials based iris recognition system.

    PubMed

    Krishnamoorthi, R; Anna Poorani, G

    2016-01-01

    Iris normalization is an important stage in any iris biometric, as it has a propensity to trim down the consequences of iris distortion. To compensate for the variation in the size of the iris owing to the stretching or enlarging of the pupil during the iris acquisition process and to the camera-to-eyeball distance, two normalization schemes have been proposed in this work. In the first method, the iris region of interest is normalized by converting the iris into a variable-size rectangular model in order to avoid under-sampling near the limbus border. In the second method, the iris region of interest is normalized by converting the iris region into a fixed-size rectangular model in order to avoid dimensional discrepancies between the eye images. The performance of the proposed normalization methods is evaluated with orthogonal polynomials based iris recognition in terms of FAR, FRR, GAR, CRR and EER.
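
    A hedged sketch of the fixed-size variant of such normalization: the annulus between assumed circular pupil and limbus boundaries is unwrapped onto a fixed rectangular grid by sampling along radial lines (a Daugman-style rubber sheet). Boundary detection, the variable-size scheme, and the orthogonal-polynomial features are outside this sketch, and all parameters are illustrative.

    import numpy as np

    def unwrap_iris(image, cx, cy, r_pupil, r_iris, height=64, width=512):
        """Map the annulus between the pupil and limbus circles onto a fixed
        height x width rectangle by nearest-neighbour sampling."""
        thetas = np.linspace(0, 2 * np.pi, width, endpoint=False)
        radii = np.linspace(0, 1, height)
        rect = np.zeros((height, width), dtype=image.dtype)
        for i, rho in enumerate(radii):
            r = r_pupil + rho * (r_iris - r_pupil)          # radial interpolation
            xs = np.clip((cx + r * np.cos(thetas)).astype(int), 0, image.shape[1] - 1)
            ys = np.clip((cy + r * np.sin(thetas)).astype(int), 0, image.shape[0] - 1)
            rect[i] = image[ys, xs]
        return rect

    eye = np.random.default_rng(5).integers(0, 255, (300, 400), dtype=np.uint8)
    print(unwrap_iris(eye, cx=200, cy=150, r_pupil=30, r_iris=90).shape)   # (64, 512)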

  15. Modeling Pulse Transmission in the Monterey Bay Using Parabolic Equation Methods

    DTIC Science & Technology

    1991-12-01

    Collins 9-13 was chosen for this purpose due to its energy conservation scheme, and its ability to efficiently incorporate higher order terms in its... pressure field generated by the PE model into normal modes. Additionally, this process provides increased physical understanding of mode coupling and... separation of variables (i.e. normal modes or fast field), as well as pure numerical schemes such as the parabolic equation methods, can be used. However, as

  16. Relating memory to functional performance in normal aging to dementia using hierarchical Bayesian cognitive processing models.

    PubMed

    Shankle, William R; Pooley, James P; Steyvers, Mark; Hara, Junko; Mangrola, Tushar; Reisberg, Barry; Lee, Michael D

    2013-01-01

    Determining how cognition affects functional abilities is important in Alzheimer disease and related disorders. A total of 280 patients (normal or with Alzheimer disease and related disorders) received 1514 assessments using the functional assessment staging test (FAST) procedure and the MCI Screen. A hierarchical Bayesian cognitive processing model was created by embedding a signal detection theory model of the MCI Screen delayed recognition memory task into a hierarchical Bayesian framework. The signal detection theory model used latent parameters of discriminability (memory process) and response bias (executive function) to predict, simultaneously, recognition memory performance for each patient and each FAST severity group. The observed recognition memory data did not distinguish the 6 FAST severity stages, but the latent parameters completely separated them. The latent parameters were also used successfully to transform the ordinal FAST measure into a continuous measure reflecting the underlying continuum of functional severity. Hierarchical Bayesian cognitive processing models applied to recognition memory data from clinical practice settings accurately translated a latent measure of cognition into a continuous measure of functional severity for both individuals and FAST groups. Such a translation links 2 levels of brain information processing and may enable more accurate correlations with other levels, such as those characterized by biomarkers.
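
    A minimal, non-hierarchical sketch of the signal detection quantities underlying such a model: equal-variance estimates of discriminability (d') and response bias (criterion) from hit and false-alarm counts. The paper embeds these latent parameters in a hierarchical Bayesian framework rather than computing them per subject as done here; the counts below are made up.

    from scipy.stats import norm

    def sdt_parameters(hits, misses, false_alarms, correct_rejections):
        """Equal-variance signal detection estimates of discriminability (d')
        and response bias (criterion c), with a 0.5 correction for empty cells."""
        hit_rate = (hits + 0.5) / (hits + misses + 1.0)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
        return z_hit - z_fa, -0.5 * (z_hit + z_fa)

    d_prime, bias = sdt_parameters(hits=8, misses=2, false_alarms=3, correct_rejections=7)
    print(f"d' = {d_prime:.2f}, criterion = {bias:.2f}")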

  17. The sea urchin larva, a suitable model for biomineralisation studies in space (IML-2 ESA Biorack experiment '24-F urchin').

    PubMed

    Marthy, H J; Gasset, G; Tixador, R; Schatt, P; Eche, B; Dessommes, A; Giacomini, T; Tap, G; Gorand, D

    1996-06-27

    In the ESA Biorack 'F-24 urchin' experiment of the IML-2 mission, the biomineralisation process in developing sea urchin larvae could be studied under real microgravity conditions for the first time. The main objectives were to determine whether in microgravity the process of skeleton formation occurs correctly compared to normal gravity conditions and whether larvae with differentiated skeletons 'de-mineralise'. These objectives have been essentially achieved. Postflight studies on the recovered 'sub-normal' skeletons focused on qualitative, statistical and quantitative aspects. Clear evidence is obtained that the basic biomineralisation process does actually occur normally in microgravity. No significant differences are observed between flight and ground samples. The sub-normal skeleton architectures indicate, however, that the process of positioning of the skeletogenic cells (which primarily determines the shape and size of the skeleton) is particularly sensitive to modifications of environmental factors, potentially including gravity. The anatomical heterogeneity of the recovered skeletons, interpreted as a long-term effect of an accidental thermal shock during artificial egg fertilisation (break of climatisation at LSSF), masks possible effects of microgravity. No pronounced demineralisation appears to occur in microgravity; the magnesium component of the skeleton nevertheless seems less stable than the calcium. On the basis of these results, a continuation of biomineralisation studies in space, with the sea urchin larva as a model system, appears well justified and desirable.

  18. Three-Dimensional Coculture Of Human Small-Intestine Cells

    NASA Technical Reports Server (NTRS)

    Wolf, David; Spaulding, Glen; Goodwin, Thomas J.; Prewett, Tracy

    1994-01-01

    Complex three-dimensional masses of normal human epithelial and mesenchymal small-intestine cells cocultured in a process involving specially designed bioreactors. Useful as tissue models for studies of growth, regulatory, and differentiation processes in normal intestinal tissues; diseases of the small intestine; and interactions between cells of the small intestine and viruses causing disease both in the small intestine and elsewhere in the body. Process used to produce other tissue models, leading to advances in understanding of growth and differentiation in developing organisms, of renewal of tissue, and of treatment of a myriad of clinical conditions. Prior articles describing design and use of rotating-wall culture vessels include "Growing And Assembling Cells Into Tissues" (MSC-21559), "High-Aspect-Ratio Rotating Cell-Culture Vessel" (MSC-21662), and "In Vitro, Matrix-Free Formation Of Solid Tumor Spheroids" (MSC-21843).

  19. Normalization of time-series satellite reflectance data to a standard sun-target-sensor geometry using a semi-empirical model

    NASA Astrophysics Data System (ADS)

    Zhao, Yongguang; Li, Chuanrong; Ma, Lingling; Tang, Lingli; Wang, Ning; Zhou, Chuncheng; Qian, Yonggang

    2017-10-01

    Time series of satellite reflectance data have been widely used to characterize environmental phenomena, describe trends in vegetation dynamics and study climate change. However, several sensors with wide spatial coverage and high observation frequency are designed to have a large field of view (FOV), which causes variations in the sun-target-sensor geometry in time-series reflectance data. In this study, on the basis of the semi-empirical kernel-driven BRDF model, a new semi-empirical model was proposed to normalize the sun-target-sensor geometry of remote sensing images. To evaluate the proposed model, bidirectional reflectance under different canopy growth conditions simulated by the Discrete Anisotropic Radiative Transfer (DART) model was used. The semi-empirical model was first fitted by using all simulated bidirectional reflectance. The experimental result showed a good fit between the bidirectional reflectance estimated by the proposed model and the simulated values. Then, MODIS time-series reflectance data were normalized to a common sun-target-sensor geometry by the proposed model. The experimental results showed the proposed model yielded good fits between the observed and estimated values. The noise-like fluctuations in the time-series reflectance data were also reduced after the sun-target-sensor normalization process.
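
    A hedged sketch of the general kernel-driven idea behind such normalization: reflectance is modeled as a linear combination of an isotropic term and geometry-dependent kernels, the coefficients are fitted by least squares, and the fitted model is then evaluated at a standard sun-target-sensor geometry. The kernel values below are random stand-ins, not the Ross-Li kernels or the paper's semi-empirical form.

    import numpy as np

    rng = np.random.default_rng(6)

    # One row per observation: [1, K_vol, K_geo] evaluated at that observation's
    # sun-target-sensor geometry (random stand-ins for real kernel values).
    n_obs = 30
    K = np.column_stack([np.ones(n_obs), rng.normal(size=n_obs), rng.normal(size=n_obs)])
    f_true = np.array([0.35, 0.08, -0.05])           # isotropic, volumetric, geometric
    reflectance = K @ f_true + rng.normal(0, 0.01, n_obs)

    # Fit the kernel coefficients by ordinary least squares.
    f_hat, *_ = np.linalg.lstsq(K, reflectance, rcond=None)

    # Normalize: predict reflectance at a standard geometry (e.g. nadir view, fixed sun
    # angle), where the kernels take reference values (again stand-ins here).
    K_standard = np.array([1.0, 0.2, -0.1])
    print("normalized reflectance at standard geometry:", K_standard @ f_hat)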

  20. Varying acoustic-phonemic ambiguity reveals that talker normalization is obligatory in speech processing.

    PubMed

    Choi, Ja Young; Hu, Elly R; Perrachione, Tyler K

    2018-04-01

    The nondeterministic relationship between speech acoustics and abstract phonemic representations imposes a challenge for listeners to maintain perceptual constancy despite the highly variable acoustic realization of speech. Talker normalization facilitates speech processing by reducing the degrees of freedom for mapping between encountered speech and phonemic representations. While this process has been proposed to facilitate the perception of ambiguous speech sounds, it is currently unknown whether talker normalization is affected by the degree of potential ambiguity in acoustic-phonemic mapping. We explored the effects of talker normalization on speech processing in a series of speeded classification paradigms, parametrically manipulating the potential for inconsistent acoustic-phonemic relationships across talkers for both consonants and vowels. Listeners identified words with varying potential acoustic-phonemic ambiguity across talkers (e.g., beet/boat vs. boot/boat) spoken by single or mixed talkers. Auditory categorization of words was always slower when listening to mixed talkers compared to a single talker, even when there was no potential acoustic ambiguity between target sounds. Moreover, the processing cost imposed by mixed talkers was greatest when words had the most potential acoustic-phonemic overlap across talkers. Models of acoustic dissimilarity between target speech sounds did not account for the pattern of results. These results suggest (a) that talker normalization incurs the greatest processing cost when disambiguating highly confusable sounds and (b) that talker normalization appears to be an obligatory component of speech perception, taking place even when the acoustic-phonemic relationships across sounds are unambiguous.

  1. Inhibition: Mental Control Process or Mental Resource?

    ERIC Educational Resources Information Center

    Im-Bolter, Nancie; Johnson, Janice; Ling, Daphne; Pascual-Leone, Juan

    2015-01-01

    The current study tested 2 models of inhibition in 45 children with language impairment and 45 children with normally developing language; children were aged 7 to 12 years. Of interest was whether a model of inhibition as a mental-control process (i.e., executive function) or as a mental resource would more accurately reflect the relations among…

  2. Multivariate Generalizations of Student's t-Distribution. ONR Technical Report. [Biometric Lab Report No. 90-3.

    ERIC Educational Resources Information Center

    Gibbons, Robert D.; And Others

    In the process of developing a conditionally-dependent item response theory (IRT) model, the problem arose of modeling an underlying multivariate normal (MVN) response process with general correlation among the items. Without the assumption of conditional independence, for which the underlying MVN cdf takes on comparatively simple forms and can be…

  3. A morphological study of the pacemaker cells of the aganglionic intestine in Hirschsprung's disease utilizing ls/ls model mice.

    PubMed

    Taniguchi, Kan; Matsuura, Kimio; Matsuoka, Takanori; Nakatani, Hajime; Nakano, Takumi; Furuya, Yasuo; Sugimoto, Takeki; Kobayashi, Michiya; Araki, Keijiro

    2005-06-01

    Hirschsprung's disease is a congenital aganglionic neural disorder of the segmental distal intestine whose pathogenesis remains unsettled. The relationship between Hirschsprung's disease and pacemaker cells (PMC), which largely correspond to the interstitial cells of Cajal (ICC), was morphologically observed at the level of the intermuscular layer corresponding to Auerbach's plexus using ls/ls mice. These mice are an ideal model because of their large intestinal aganglionosis and gene abnormalities, which are similar to the human form of the disease. Immunostaining using anti-c-kit receptor antibody (ACK2), a marker of PMC, applied to whole-mount muscle-layer specimens, revealed the presence of c-kit immunopositive multipolar cells with many cytoplasmic processes in normal mice. For ls/ls mice, however, there were significantly fewer processes. The average number of processes per positive cell was 2.5 in the aganglionic large intestine, fewer than the 3.5 observed in the large and small intestine of normal mice, indicating an inability to form connections between nerves and PMC in the aganglionic intestine. For normal mice with an Auerbach's plexus, the attachment of ICC processes to the Auerbach's plexus was observed by scanning electron microscopy. For ls/ls mice, however, no attachment to the intermuscular nerve lacking an Auerbach's plexus was found, although transmission electron microscopy showed no difference in the cell structure and organelles of the c-kit immunopositive cells between the normal and ls/ls mice. These findings suggest that in the aganglionic intestine of Hirschsprung's disease, aplasia of enteric ganglia induces secondary disturbances during the normal development of intestinal PMC.

  4. Optimal filtering and Bayesian detection for friction-based diagnostics in machines.

    PubMed

    Ray, L R; Townsend, J R; Ramasubramanian, A

    2001-01-01

    Non-model-based diagnostic methods typically rely on measured signals that must be empirically related to process behavior or incipient faults. The difficulty in interpreting a signal that is indirectly related to the fundamental process behavior is significant. This paper presents an integrated non-model and model-based approach to detecting when process behavior varies from a proposed model. The method, which is based on nonlinear filtering combined with maximum likelihood hypothesis testing, is applicable to dynamic systems whose constitutive model is well known, and whose process inputs are poorly known. Here, the method is applied to friction estimation and diagnosis during motion control in a rotating machine. A nonlinear observer estimates friction torque in a machine from shaft angular position measurements and the known input voltage to the motor. The resulting friction torque estimate can be analyzed directly for statistical abnormalities, or it can be directly compared to friction torque outputs of an applicable friction process model in order to diagnose faults or model variations. Nonlinear estimation of friction torque provides a variable on which to apply diagnostic methods that is directly related to model variations or faults. The method is evaluated experimentally by its ability to detect normal load variations in a closed-loop controlled motor driven inertia with bearing friction and an artificially-induced external line contact. Results show an ability to detect statistically significant changes in friction characteristics induced by normal load variations over a wide range of underlying friction behaviors.

  5. Continuation-like semantics for modeling structural process anomalies

    PubMed Central

    2012-01-01

    Background Biomedical ontologies usually encode knowledge that applies always or at least most of the time, that is in normal circumstances. But for some applications like phenotype ontologies it is becoming increasingly important to represent information about aberrations from a norm. These aberrations may be modifications of physiological structures, but also modifications of biological processes. Methods To facilitate precise definitions of process-related phenotypes, such as delayed eruption of the primary teeth or disrupted ocular pursuit movements, I introduce a modeling approach that draws inspiration from the use of continuations in the analysis of programming languages and apply a similar idea to ontological modeling. This approach characterises processes by describing their outcome up to a certain point and the way they will continue in the canonical case. Definitions of process types are then given in terms of their continuations, and anomalous phenotypes are defined by their differences from the canonical definitions. Results The resulting model is capable of accurately representing structural process anomalies. It allows distinguishing between different kinds of anomaly (delays, interruptions), gives identity criteria for interrupted processes, and explains why normal and anomalous process instances can be subsumed under a common type, thus establishing the connection between canonical and anomalous process-related phenotypes. Conclusion This paper shows how to give semantically rich definitions of process-related phenotypes. These allow the application areas of phenotype ontologies to be expanded beyond literature annotation and the establishment of genotype-phenotype associations to the detection of anomalies in suitably encoded datasets. PMID:23046705

  6. Time-independent models of asset returns revisited

    NASA Astrophysics Data System (ADS)

    Gillemot, L.; Töyli, J.; Kertesz, J.; Kaski, K.

    2000-07-01

    In this study we investigate various well-known time-independent models of asset returns, namely the simple normal distribution, Student t-distribution, Lévy, truncated Lévy, general stable distribution, mixed diffusion-jump, and compound normal distribution. For this we use Standard and Poor's 500 index data of the New York Stock Exchange, Helsinki Stock Exchange index data describing a small volatile market, and artificial data. The results indicate that all models, excluding the simple normal distribution, are at least quite reasonable descriptions of the data. Furthermore, the use of differences instead of logarithmic returns tends to make the data look visually more Lévy-type distributed than they are. This phenomenon is especially evident in the artificial data that have been generated by an inflated random walk process.
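
    A small sketch comparing two of the candidate time-independent models above, the normal and the Student t-distribution, by maximum-likelihood fits to a series of logarithmic returns. The returns here are simulated fat-tailed data, not the S&P 500 or Helsinki index series.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    returns = stats.t.rvs(df=4, scale=0.01, size=2000, random_state=rng)  # fat-tailed stand-in

    # Maximum-likelihood fits of the two candidate models.
    mu, sigma = stats.norm.fit(returns)
    df, loc, scale = stats.t.fit(returns)

    ll_norm = stats.norm.logpdf(returns, mu, sigma).sum()
    ll_t = stats.t.logpdf(returns, df, loc, scale).sum()
    print(f"log-likelihood: normal {ll_norm:.1f}, Student-t {ll_t:.1f} (fitted df ~ {df:.1f})")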

  7. Developing a Signature Based Safeguards Approach for the Electrorefiner and Salt Cleanup Unit Operations in Pyroprocessing Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murphy, Chantell Lynne-Marie

    Traditional nuclear materials accounting does not work well for safeguards when applied to pyroprocessing. Alternate methods such as Signature Based Safeguards (SBS) are being investigated. The goal of SBS is real-time/near-real-time detection of anomalous events in the pyroprocessing facility as they could indicate loss of special nuclear material. In high-throughput reprocessing facilities, metric tons of separated material are processed that must be accounted for. Even with very low uncertainties of accountancy measurements (<0.1%) the uncertainty of the material balances is still greater than the desired level. Novel contributions of this work are as follows: (1) significant enhancement of SBS development for the salt cleanup process by creating a new gas sparging process model, selecting sensors to monitor normal operation, identifying safeguards-significant off-normal scenarios, and simulating those off-normal events and generating sensor output; (2) further enhancement of SBS development for the electrorefiner by simulating off-normal events caused by changes in salt concentration and identifying which conditions lead to Pu and Cm not tracking throughout the rest of the system; and (3) new contribution in applying statistical techniques to analyze the signatures gained from these two models to help draw real-time conclusions on anomalous events.

  8. Comprehensive Experiment--Clinical Biochemistry: Determination of Blood Glucose and Triglycerides in Normal and Diabetic Rats

    ERIC Educational Resources Information Center

    Jiao, Li; Xiujuan, Shi; Juan, Wang; Song, Jia; Lei, Xu; Guotong, Xu; Lixia, Lu

    2015-01-01

    For second year medical students, we redesigned an original laboratory experiment and developed a combined research-teaching clinical biochemistry experiment. Using an established diabetic rat model to detect blood glucose and triglycerides, the students participate in the entire experimental process, which is not normally experienced during a…

  9. Social Breakdown and Competence. A Model of Normal Aging

    ERIC Educational Resources Information Center

    Kuypers, J. A.; Bengtson, V. L.

    1973-01-01

    Presents a model emphasizing the interactions between reorganization of social systems and individual competencies in old age. The model suggests the process by which loss of coping abilities and feelings of worthlessness develop. Implications for effective intervention with the elderly are discussed. (DP)

  10. Nonparametric Bayesian models through probit stick-breaking processes

    PubMed Central

    Rodríguez, Abel; Dunson, David B.

    2013-01-01

    We describe a novel class of Bayesian nonparametric priors based on stick-breaking constructions where the weights of the process are constructed as probit transformations of normal random variables. We show that these priors are extremely flexible, allowing us to generate a great variety of models while preserving computational simplicity. Particular emphasis is placed on the construction of rich temporal and spatial processes, which are applied to two problems in finance and ecology. PMID:24358072

  11. Nonparametric Bayesian models through probit stick-breaking processes.

    PubMed

    Rodríguez, Abel; Dunson, David B

    2011-03-01

    We describe a novel class of Bayesian nonparametric priors based on stick-breaking constructions where the weights of the process are constructed as probit transformations of normal random variables. We show that these priors are extremely flexible, allowing us to generate a great variety of models while preserving computational simplicity. Particular emphasis is placed on the construction of rich temporal and spatial processes, which are applied to two problems in finance and ecology.
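
    A minimal sketch of the construction described above, truncated to a finite number of atoms: the stick-breaking fractions are probit (standard normal CDF) transforms of normal random variables, and the resulting weights define a discrete random measure that can be sampled from. The truncation level and base measure below are illustrative assumptions.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(8)

    def probit_stick_breaking(n_atoms=50, mu=0.0, sigma=1.0):
        """Truncated stick-breaking weights with sticks V_k = Phi(z_k), z_k ~ N(mu, sigma^2)."""
        z = rng.normal(mu, sigma, n_atoms)
        v = norm.cdf(z)                                   # probit transform -> sticks in (0, 1)
        remaining = np.concatenate([[1.0], np.cumprod(1.0 - v[:-1])])
        w = v * remaining
        w[-1] = 1.0 - w[:-1].sum()                        # absorb the truncation remainder
        return w

    weights = probit_stick_breaking()
    atoms = rng.normal(0, 3, size=weights.size)           # atom locations from a base measure
    sample = rng.choice(atoms, size=5, p=weights)         # draws from the random measure
    print(weights[:5].round(3), sample.round(2))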

  12. The impact of fluid topology on residual saturations - A pore-network model study

    NASA Astrophysics Data System (ADS)

    Doster, F.; Kallel, W.; van Dijke, R.

    2014-12-01

    In two-phase flow in porous media only fractions of the resident fluid are mobilised during a displacement process and, in general, a significant amount of the resident fluid remains permanently trapped. Depending on the application, entrapment is desirable (geological carbon storage), or it should be obviated (enhanced oil recovery, contaminant remediation). Despite its utmost importance for these applications, predictions of trapped fluid saturations for macroscopic systems, in particular under changing displacement conditions, remain challenging. The models that aim to represent trapping phenomena are typically empirical and require tracking of the history of the state variables. This exacerbates the experimental verification and the design of sophisticated displacement technologies that enhance or impede trapping. Recently, experiments [1] have suggested that a macroscopic normalized Euler number, quantifying the topology of fluid distributions, could serve as a parameter to predict residual saturations based on state variables. In these experiments the entrapment of fluids was visualised through 3D micro CT imaging. However, the experiments are notoriously time consuming and therefore only allow for a sparse sampling of the parameter space. Pore-network models represent porous media through an equivalent network structure of pores and throats. Under quasi-static capillary dominated conditions displacement processes can be modeled through simple invasion percolation rules. Hence, in contrast to experiments, pore-network models are fast and therefore allow full sampling of the parameter space. Here, we use pore-network modeling [2] to critically investigate the knowledge gained through observing and tracking the normalized Euler number. More specifically, we identify conditions under which (a) systems with the same saturations but different normalized Euler numbers lead to different residual saturations and (b) systems with the same saturations and the same normalized Euler numbers but different process histories yield the same residual saturations. Special attention is given to contact angle and process histories with varying drainage and imbibition periods. [1] Herring et al., Adv. Water. Resour., 62, 47-58 (2013) [2] Ryazanov et al., Transp. Porous Media, 80, 79-99 (2009).

  13. CONFIG - Adapting qualitative modeling and discrete event simulation for design of fault management systems

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Basham, Bryan D.

    1989-01-01

    CONFIG is a modeling and simulation tool prototype for analyzing the normal and faulty qualitative behaviors of engineered systems. Qualitative modeling and discrete-event simulation have been adapted and integrated, to support early development, during system design, of software and procedures for management of failures, especially in diagnostic expert systems. Qualitative component models are defined in terms of normal and faulty modes and processes, which are defined by invocation statements and effect statements with time delays. System models are constructed graphically by using instances of components and relations from object-oriented hierarchical model libraries. Extension and reuse of CONFIG models and analysis capabilities in hybrid rule- and model-based expert fault-management support systems are discussed.

  14. Generalization of the normal-exponential model: exploration of a more accurate parametrisation for the signal distribution on Illumina BeadArrays.

    PubMed

    Plancade, Sandra; Rozenholc, Yves; Lund, Eiliv

    2012-12-11

    Illumina BeadArray technology includes non specific negative control features that allow a precise estimation of the background noise. As an alternative to the background subtraction proposed in BeadStudio which leads to an important loss of information by generating negative values, a background correction method modeling the observed intensities as the sum of the exponentially distributed signal and normally distributed noise has been developed. Nevertheless, Wang and Ye (2012) display a kernel-based estimator of the signal distribution on Illumina BeadArrays and suggest that a gamma distribution would represent a better modeling of the signal density. Hence, the normal-exponential modeling may not be appropriate for Illumina data and background corrections derived from this model may lead to wrong estimation. We propose a more flexible modeling based on a gamma distributed signal and a normal distributed background noise and develop the associated background correction, implemented in the R-package NormalGamma. Our model proves to be markedly more accurate to model Illumina BeadArrays: on the one hand, it is shown on two types of Illumina BeadChips that this model offers a more correct fit of the observed intensities. On the other hand, the comparison of the operating characteristics of several background correction procedures on spike-in and on normal-gamma simulated data shows high similarities, reinforcing the validation of the normal-gamma modeling. The performance of the background corrections based on the normal-gamma and normal-exponential models are compared on two dilution data sets, through testing procedures which represent various experimental designs. Surprisingly, we observe that the implementation of a more accurate parametrisation in the model-based background correction does not increase the sensitivity. These results may be explained by the operating characteristics of the estimators: the normal-gamma background correction offers an improvement in terms of bias, but at the cost of a loss in precision. This paper addresses the lack of fit of the usual normal-exponential model by proposing a more flexible parametrisation of the signal distribution as well as the associated background correction. This new model proves to be considerably more accurate for Illumina microarrays, but the improvement in terms of modeling does not lead to a higher sensitivity in differential analysis. Nevertheless, this realistic modeling makes way for future investigations, in particular to examine the characteristics of pre-processing strategies.

  15. Uraemic hyperparathyroidism causes a reversible inflammatory process of aortic valve calcification in rats

    PubMed Central

    Shuvy, Mony; Abedat, Suzan; Beeri, Ronen; Danenberg, Haim D.; Planer, David; Ben-Dov, Iddo Z.; Meir, Karen; Sosna, Jacob; Lotan, Chaim

    2008-01-01

    Aims Renal failure is associated with aortic valve calcification (AVC). Our aim was to develop an animal model for exploring the pathophysiology and reversibility of AVC, utilizing rats with diet-induced kidney disease. Methods and results Sprague–Dawley rats (n = 23) were fed a phosphate-enriched, uraemia-inducing diet for 7 weeks followed by a normal diet for 2 weeks (‘diet group’). These rats were compared with normal controls (n = 10) and with uraemic controls fed with phosphate-depleted diet (‘low-phosphate group’, n = 10). Clinical investigations included serum creatinine, phosphate and parathyroid hormone (PTH) levels, echocardiography, and multislice computed tomography. Pathological examinations of the valves included histological characterization, Von Kossa staining, and antigen and gene expression analyses. Eight diet group rats were further assessed for reversibility of valve calcification following normalization of their kidney function. At 4 weeks, all diet group rats developed renal failure and hyperparathyroidism. At week 9, renal failure resolved with improvement in the hyperparathyroid state. Echocardiography demonstrated valve calcifications only in diet group rats. Tomographic calcium scores were significantly higher in the diet group compared with controls. Von Kossa stain in diet group valves revealed calcium deposits, positive staining for osteopontin, and CD68. Gene expression analyses revealed overexpression of osteoblast genes and nuclear factor κB activation. Valve calcification resolved after diet cessation in parallel with normalization of PTH levels. Resolution was associated with down-regulation of inflammation and osteoblastic features. Low-phosphate group rats developed kidney dysfunction similar to that of the diet group but with normal levels of PTH. Calcium scores and histology showed only minimal valve calcification. Conclusion We developed an animal model for AVC. The process is related to disturbed mineral metabolism. It is associated with inflammation and osteoblastic features. Furthermore, the process is reversible upon normalization of the mineral homeostasis. Thus, our model constitutes a convenient platform for studying AVC and potential remedies. PMID:18390899

  16. Uraemic hyperparathyroidism causes a reversible inflammatory process of aortic valve calcification in rats.

    PubMed

    Shuvy, Mony; Abedat, Suzan; Beeri, Ronen; Danenberg, Haim D; Planer, David; Ben-Dov, Iddo Z; Meir, Karen; Sosna, Jacob; Lotan, Chaim

    2008-08-01

    Renal failure is associated with aortic valve calcification (AVC). Our aim was to develop an animal model for exploring the pathophysiology and reversibility of AVC, utilizing rats with diet-induced kidney disease. Sprague-Dawley rats (n = 23) were fed a phosphate-enriched, uraemia-inducing diet for 7 weeks followed by a normal diet for 2 weeks ('diet group'). These rats were compared with normal controls (n = 10) and with uraemic controls fed with phosphate-depleted diet ('low-phosphate group', n = 10). Clinical investigations included serum creatinine, phosphate and parathyroid hormone (PTH) levels, echocardiography, and multislice computed tomography. Pathological examinations of the valves included histological characterization, Von Kossa staining, and antigen and gene expression analyses. Eight diet group rats were further assessed for reversibility of valve calcification following normalization of their kidney function. At 4 weeks, all diet group rats developed renal failure and hyperparathyroidism. At week 9, renal failure resolved with improvement in the hyperparathyroid state. Echocardiography demonstrated valve calcifications only in diet group rats. Tomographic calcium scores were significantly higher in the diet group compared with controls. Von Kossa stain in diet group valves revealed calcium deposits, positive staining for osteopontin, and CD68. Gene expression analyses revealed overexpression of osteoblast genes and nuclear factor kappaB activation. Valve calcification resolved after diet cessation in parallel with normalization of PTH levels. Resolution was associated with down-regulation of inflammation and osteoblastic features. Low-phosphate group rats developed kidney dysfunction similar to that of the diet group but with normal levels of PTH. Calcium scores and histology showed only minimal valve calcification. We developed an animal model for AVC. The process is related to disturbed mineral metabolism. It is associated with inflammation and osteoblastic features. Furthermore, the process is reversible upon normalization of the mineral homeostasis. Thus, our model constitutes a convenient platform for studying AVC and potential remedies.

  17. Transient Properties of Probability Distribution for a Markov Process with Size-dependent Additive Noise

    NASA Astrophysics Data System (ADS)

    Yamada, Yuhei; Yamazaki, Yoshihiro

    2018-04-01

    This study considered a stochastic model for cluster growth in a Markov process with a cluster size dependent additive noise. According to this model, the probability distribution of the cluster size transiently becomes an exponential or a log-normal distribution depending on the initial condition of the growth. In this letter, a master equation is obtained for this model, and derivation of the distributions is discussed.
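
    A Monte Carlo sketch of growth with size-dependent additive noise: each cluster receives an additive noise term whose amplitude depends on its current size, and with amplitude proportional to size the transient distribution becomes close to log-normal. The update rule and parameters are illustrative assumptions, not the letter's exact model, which ties the transient form to the initial condition.

    import numpy as np

    rng = np.random.default_rng(9)

    def grow(n_clusters=50_000, steps=200, alpha=1.0, drift=0.01, amp=0.05):
        """x_{t+1} = x_t + drift*x_t + amp * x_t**alpha * xi_t, kept positive."""
        x = np.ones(n_clusters)
        for _ in range(steps):
            x = x + drift * x + amp * x**alpha * rng.normal(size=n_clusters)
            x = np.maximum(x, 1e-9)                      # keep cluster sizes positive
        return x

    sizes = grow(alpha=1.0)                              # noise proportional to size
    logs = np.log(sizes)
    skew = float(((logs - logs.mean())**3).mean() / logs.std()**3)
    print("skewness of log-size (near 0 for a log-normal):", skew)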

  18. The imbalanced brain: from normal behavior to schizophrenia.

    PubMed

    Grossberg, S

    2000-07-15

    An outstanding problem in psychiatry concerns how to link discoveries about the pharmacological, neurophysiological, and neuroanatomical substrates of mental disorders to the abnormal behaviors that they control. A related problem concerns how to understand abnormal behaviors on a continuum with normal behaviors. During the past few decades, neural models have been developed of how normal cognitive and emotional processes learn from the environment, focus attention and act upon motivationally important events, and cope with unexpected events. When arousal or volitional signals in these models are suitably altered, they give rise to symptoms that strikingly resemble negative and positive symptoms of schizophrenia, including flat affect, impoverishment of will, attentional problems, loss of a theory of mind, thought derailment, hallucinations, and delusions. This article models how emotional centers of the brain, such as the amygdala, interact with sensory and prefrontal cortices (notably ventral, or orbital, prefrontal cortex) to generate affective states, attend to motivationally salient sensory events, and elicit motivated behaviors. Closing this feedback loop between cognitive and emotional centers is predicted to generate a cognitive-emotional resonance that can support conscious awareness. When such emotional centers become depressed, negative symptoms of schizophrenia emerge in the model. Such emotional centers are modeled as opponent affective processes, such as fear and relief, whose response amplitude and sensitivity are calibrated by an arousal level and chemical transmitters that slowly inactivate, or habituate, in an activity-dependent way. These opponent processes exhibit an Inverted-U, whereby behavior becomes depressed if the arousal level is chosen too large or too small. The negative symptoms are owing to the way in which the depressed opponent process interacts with other circuits throughout the brain.

  19. Nested Incremental Modeling in the Development of Computational Theories: The CDP+ Model of Reading Aloud

    ERIC Educational Resources Information Center

    Perry, Conrad; Ziegler, Johannes C.; Zorzi, Marco

    2007-01-01

    At least 3 different types of computational model have been shown to account for various facets of both normal and impaired single word reading: (a) the connectionist triangle model, (b) the dual-route cascaded model, and (c) the connectionist dual process model. Major strengths and weaknesses of these models are identified. In the spirit of…

  20. Delta-Isobar Production in the Hard Photodisintegration of a Deuteron

    NASA Astrophysics Data System (ADS)

    Granados, Carlos; Sargsian, Misak

    2010-02-01

    Hard photodisintegration of the deuteron in delta-isobar production channels is proposed as a useful process for identifying the quark structure of hadrons and of hadronic interactions at large momentum and energy transfer. The reactions are modeled using the hard rescattering model (HRM), following previous works on the hard breakup of a nucleon-nucleon (NN) system in light nuclei. Here, quantitative predictions through the HRM require the numerical input of fits of experimental NN hard elastic scattering cross sections. Because of the lack of data on hard NN scattering into delta-isobar channels, the cross section of the corresponding photodisintegration processes cannot be predicted in the same way. Instead, the corresponding NN scattering process is modeled through the quark interchange mechanism (QIM), leaving an unknown normalization parameter. The observables of interest are ratios of differential cross sections of delta-isobar production channels to NN breakup in deuteron photodisintegration. Both entries in these ratios are derived through the HRM and QIM so that normalization parameters cancel out and numerical predictions can be obtained.

  1. Relating Memory To Functional Performance In Normal Aging to Dementia Using Hierarchical Bayesian Cognitive Processing Models

    PubMed Central

    Shankle, William R.; Pooley, James P.; Steyvers, Mark; Hara, Junko; Mangrola, Tushar; Reisberg, Barry; Lee, Michael D.

    2012-01-01

    Determining how cognition affects functional abilities is important in Alzheimer’s disease and related disorders (ADRD). 280 patients (normal or ADRD) received a total of 1,514 assessments using the Functional Assessment Staging Test (FAST) procedure and the MCI Screen (MCIS). A hierarchical Bayesian cognitive processing (HBCP) model was created by embedding a signal detection theory (SDT) model of the MCIS delayed recognition memory task into a hierarchical Bayesian framework. The SDT model used latent parameters of discriminability (memory process) and response bias (executive function) to predict, simultaneously, recognition memory performance for each patient and each FAST severity group. The observed recognition memory data did not distinguish the six FAST severity stages, but the latent parameters completely separated them. The latent parameters were also used successfully to transform the ordinal FAST measure into a continuous measure reflecting the underlying continuum of functional severity. HBCP models applied to recognition memory data from clinical practice settings accurately translated a latent measure of cognition to a continuous measure of functional severity for both individuals and FAST groups. Such a translation links two levels of brain information processing, and may enable more accurate correlations with other levels, such as those characterized by biomarkers. PMID:22407225

  2. Instrumented roll technology for the design space development of roller compaction process.

    PubMed

    Nesarikar, Vishwas V; Vatsaraj, Nipa; Patel, Chandrakant; Early, William; Pandey, Preetanshu; Sprockel, Omar; Gao, Zhihui; Jerzewski, Robert; Miller, Ronald; Levin, Michael

    2012-04-15

    Instrumented roll technology on the Alexanderwerk WP120 roller compactor was developed and utilized successfully for the measurement of normal stress on the ribbon during the process. The effects of process parameters such as roll speed (4-12 rpm), feed screw speed (19-53 rpm), and hydraulic roll pressure (40-70 bar) on normal stress and ribbon density were studied using placebo and active pre-blends. The placebo blend consisted of a 1:1 ratio of microcrystalline cellulose PH102 and anhydrous lactose with sodium croscarmellose, colloidal silicon dioxide, and magnesium stearate. The active pre-blends were prepared using various combinations of one active ingredient (3-17%, w/w) and lubricant (0.1-0.9%, w/w) levels, with the remaining excipients the same as in the placebo. Three force transducers (load cells) were installed linearly along the width of the roll, equidistant from each other, with one transducer located in the center. Normal stress values recorded by the side sensors were lower than those recorded by the middle sensor and showed greater variability. Normal stress was found to be directly proportional to hydraulic pressure and inversely proportional to the screw-to-roll speed ratio. For active pre-blends, normal stress was also a function of compressibility. For placebo pre-blends, ribbon density increased as normal stress increased. For active pre-blends, in addition to normal stress, ribbon density was also a function of gap. Models developed using placebo were found to predict ribbon densities of active blends with good accuracy, and the prediction error decreased as the drug concentration of the active blend decreased. The effective angle of internal friction and compressibility properties of the active pre-blend may be used as key indicators for predicting ribbon densities of the active blend using the placebo ribbon density model. The feasibility of on-line prediction of ribbon density during roller compaction was demonstrated using porosity-pressure data of the pre-blend and normal stress measurements. The effect of vacuum to de-aerate the pre-blend prior to entering the nip zone was studied. Varying levels of vacuum for de-aeration of the placebo pre-blend did not affect the normal stress values. However, turning off the vacuum completely caused an increase in normal stress with a subsequent decrease in gap. Use of the instrumented roll demonstrated the potential to reduce the number of DOE runs by enhancing fundamental understanding of the relationship between normal stress on the ribbon and process parameters. Copyright © 2012 Elsevier B.V. All rights reserved.
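
    A hedged sketch of the kind of empirical ribbon-density model described above: ribbon density is fitted as a function of the measured normal stress by least squares on placebo calibration data and then used to predict density at a new stress reading. The logarithmic form, coefficients, and data are all made up for illustration; the study's models also include gap and blend properties.

    import numpy as np

    rng = np.random.default_rng(10)

    # Hypothetical placebo calibration data: normal stress (MPa) vs ribbon density (g/cm^3).
    stress = rng.uniform(20, 80, 25)
    density = 0.9 + 0.35 * np.log(stress) + rng.normal(0, 0.01, 25)

    # Fit density = a + b*ln(stress), a compressibility-style form assumed here.
    X = np.column_stack([np.ones_like(stress), np.log(stress)])
    coef, *_ = np.linalg.lstsq(X, density, rcond=None)

    new_stress = 55.0                                    # stress reported by the load cells
    predicted = coef[0] + coef[1] * np.log(new_stress)
    print(f"predicted ribbon density at {new_stress} MPa: {predicted:.3f} g/cm^3")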

  3. A model for the flux-r.m.s. correlation in blazar variability or the minijets-in-a-jet statistical model

    NASA Astrophysics Data System (ADS)

    Biteau, J.; Giebels, B.

    2012-12-01

    Very high energy gamma-ray variability of blazar emission remains of puzzling origin. Fast flux variations down to the minute time scale, as observed with H.E.S.S. during flares of the blazar PKS 2155-304, suggest that variability originates from the jet, where Doppler boosting can be invoked to relax causal constraints on the size of the emission region. The observation of log-normality in the flux distributions should rule out additive processes, such as those resulting from uncorrelated multiple-zone emission models, and favour an origin of the variability from multiplicative processes not unlike those observed in a broad class of accreting systems. We show, using a simple kinematic model, that Doppler boosting of randomly oriented emitting regions generates flux distributions following a Pareto law, that the linear flux-r.m.s. relation found for a single zone holds for a large number of emitting regions, and that the skewed distribution of the total flux is close to a log-normal, despite arising from an additive process.
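
    The kinematic argument can be illustrated with a short Monte Carlo sketch (synthetic, with an assumed bulk Lorentz factor and a delta^4 boosting law; not the authors' code): a single randomly oriented zone yields a heavy-tailed, Pareto-like flux, while the sum over many zones is strongly skewed and close to log-normal.

      import numpy as np

      rng = np.random.default_rng(0)
      gamma = 10.0                                  # assumed bulk Lorentz factor
      beta = np.sqrt(1.0 - 1.0 / gamma**2)

      def total_flux(n_zones, n_samples):
          # Isotropic orientations; Doppler factor delta = 1 / (gamma * (1 - beta*cos(theta))),
          # and each zone's flux is boosted as delta**4 before the zones are summed.
          cos_theta = rng.uniform(-1.0, 1.0, size=(n_samples, n_zones))
          delta = 1.0 / (gamma * (1.0 - beta * cos_theta))
          return (delta**4).sum(axis=1)

      single = total_flux(1, 100_000)               # heavy-tailed, Pareto-like
      many = total_flux(1000, 100_000)              # skewed sum, close to log-normal
      print(np.mean(single), np.median(single), np.std(np.log(many)))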

  4. Finding Groups Using Model-Based Cluster Analysis: Heterogeneous Emotional Self-Regulatory Processes and Heavy Alcohol Use Risk

    ERIC Educational Resources Information Center

    Mun, Eun Young; von Eye, Alexander; Bates, Marsha E.; Vaschillo, Evgeny G.

    2008-01-01

    Model-based cluster analysis is a new clustering procedure to investigate population heterogeneity utilizing finite mixture multivariate normal densities. It is an inferentially based, statistically principled procedure that allows comparison of nonnested models using the Bayesian information criterion to compare multiple models and identify the…
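
    A hedged illustration of the procedure, using scikit-learn's Gaussian mixtures on synthetic data (the original work was not necessarily implemented this way): candidate mixtures of multivariate normals with different numbers of components are fitted, and the Bayesian information criterion selects among them.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(1)
      X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (80, 2))])  # two synthetic groups

      bics = {}
      for k in range(1, 6):
          gmm = GaussianMixture(n_components=k, covariance_type="full", random_state=0).fit(X)
          bics[k] = gmm.bic(X)                  # lower BIC indicates the preferred model
      print(bics, "selected k =", min(bics, key=bics.get))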

  5. Hot Isostatic Press Manufacturing Process Development for Fabrication of RERTR Monolithic Fuel Plates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crapps, Justin M.; Clarke, Kester D.; Katz, Joel D.

    2012-06-06

    We use experimentation and finite element modeling to study a Hot Isostatic Press (HIP) manufacturing process for U-10Mo Monolithic Fuel Plates. Finite element simulations are used to identify the material properties affecting the process and improve the process geometry. Accounting for the high temperature material properties and plasticity is important to obtain qualitative agreement between model and experimental results. The model allows us to improve the process geometry and provide guidance on selection of material and finish conditions for the process strongbacks. We conclude that the HIP can (canister) must be fully filled to provide uniform normal stress across the bonding interface.

  6. A combined approach of generalized additive model and bootstrap with small sample sets for fault diagnosis in fermentation process of glutamate.

    PubMed

    Liu, Chunbo; Pan, Feng; Li, Yun

    2016-07-29

    Glutamate is of great importance in the food and pharmaceutical industries. There is still a lack of effective statistical approaches for fault diagnosis in the fermentation process of glutamate. To date, the statistical approach based on the generalized additive model (GAM) and bootstrap has not been used for fault diagnosis in fermentation processes, much less the fermentation process of glutamate with small sample sets. A combined approach of GAM and bootstrap was developed for online fault diagnosis in the fermentation process of glutamate with small sample sets. GAM was first used to model the relationship between glutamate production and different fermentation parameters using online data from four normal fermentation experiments of glutamate. The fitted GAM with fermentation time, dissolved oxygen, oxygen uptake rate and carbon dioxide evolution rate captured 99.6 % of the variance in glutamate production during the fermentation process. Bootstrap was then used to quantify the uncertainty of the estimated production of glutamate from the fitted GAM using a 95 % confidence interval. The proposed approach was then used for online fault diagnosis in abnormal fermentation processes of glutamate, with a fault defined as the estimated production of glutamate falling outside the 95 % confidence interval. The online fault diagnosis based on the proposed approach identified not only the start of a fault in the fermentation process, but also its end, when the fermentation conditions were back to normal. The proposed approach used only small sample sets from normal fermentation experiments to establish the model, and then required only online recorded data on fermentation parameters for fault diagnosis in the fermentation process of glutamate. The proposed approach based on GAM and bootstrap provides a new and effective way for fault diagnosis in the fermentation process of glutamate with small sample sets.
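
    The sketch below illustrates the bootstrap-interval idea on synthetic data, with a spline regression standing in for the GAM (a stand-in, not the authors' implementation); a fault is flagged whenever an observation falls outside the 95 % bootstrap band built from normal-operation data.

      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import SplineTransformer
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(2)
      t = np.linspace(0, 30, 120)[:, None]                   # fermentation time (h), assumed predictor
      y = 0.5 * t.ravel() + rng.normal(0, 0.4, t.shape[0])   # synthetic normal-batch production data

      def fit_predict(ti, yi, t_new):
          # Spline regression as a stand-in additive smoother (not a full GAM).
          model = make_pipeline(SplineTransformer(n_knots=8, degree=3), LinearRegression())
          return model.fit(ti, yi).predict(t_new)

      boot = []
      for _ in range(200):                                   # case-resampling bootstrap
          idx = rng.integers(0, len(y), len(y))
          boot.append(fit_predict(t[idx], y[idx], t))
      lower, upper = np.percentile(np.array(boot), [2.5, 97.5], axis=0)

      y_new = y.copy()
      y_new[60:70] -= 3.0                                    # inject an abnormal segment
      fault = (y_new < lower) | (y_new > upper)
      print("faulty time points:", np.where(fault)[0])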

  7. Data preprocessing methods of FT-NIR spectral data for the classification cooking oil

    NASA Astrophysics Data System (ADS)

    Ruah, Mas Ezatul Nadia Mohd; Rasaruddin, Nor Fazila; Fong, Sim Siong; Jaafar, Mohd Zuli

    2014-12-01

    This work describes the data pre-processing of FT-NIR spectroscopy datasets of cooking oil and its quality parameters with chemometric methods. Pre-processing of near-infrared (NIR) spectral data has become an integral part of chemometrics modelling. Hence, this work is dedicated to investigating the utility and effectiveness of pre-processing algorithms, namely row scaling, column scaling and a single scaling process with Standard Normal Variate (SNV). The combinations of these scaling methods have an impact on exploratory analysis and classification via Principal Component Analysis (PCA) plots. The samples were divided into palm oil and non-palm cooking oil. The classification model was built using FT-NIR cooking oil spectra datasets in absorbance mode in the range of 4000-14,000 cm-1. A Savitzky-Golay derivative was applied before developing the classification model. The data were then separated into a training set and a test set using the Duplex method, with the number of samples in each class kept equal to 2/3 of the class with the minimum number of samples. The t-statistic was then employed as a variable selection method to identify the variables significant for the classification models. The data pre-processing methods were evaluated using the modified silhouette width (mSW), PCA, and the percentage correctly classified (%CC). The results show that different pre-processing strategies lead to substantially different model performance. The effects of the pre-processing methods, i.e. row scaling, column standardisation and the single scaling process with Standard Normal Variate, are indicated by mSW and %CC. With a two-PC model, all five classifiers gave high %CC except Quadratic Distance Analysis.
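
    Two of the named pre-processing steps are simple to state in code; the sketch below (synthetic spectra, with assumed window and polynomial settings) shows Standard Normal Variate scaling of each spectrum and a Savitzky-Golay first derivative.

      import numpy as np
      from scipy.signal import savgol_filter

      def snv(spectra):
          """Standard Normal Variate: centre and scale each spectrum (row) individually."""
          spectra = np.asarray(spectra, dtype=float)
          mean = spectra.mean(axis=1, keepdims=True)
          std = spectra.std(axis=1, keepdims=True)
          return (spectra - mean) / std

      def savgol_first_derivative(spectra, window=11, polyorder=2):
          """Savitzky-Golay first derivative along the wavenumber axis."""
          return savgol_filter(spectra, window_length=window, polyorder=polyorder,
                               deriv=1, axis=1)

      X = np.random.default_rng(3).normal(size=(5, 500))   # five dummy spectra
      X_pre = savgol_first_derivative(snv(X))
      print(X_pre.shape)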

  8. [Muscle regeneration in mdx mouse, and a trial of normal myoblast transfer into regenerating dystrophic muscle].

    PubMed

    Takemitsu, M; Arahata, K; Nonaka, I

    1990-10-01

    The most promising therapeutic approach to Duchenne muscular dystrophy (DMD) is the transfer of normal myoblasts into dystrophic muscle, which has been attempted in animal models at several institutes. In the process of muscle regeneration, the transferred normal myoblasts are expected to incorporate into the regenerating fibers of the host dystrophic mouse. To assess the capacity for muscle regeneration in dystrophic muscle, we chronologically compared the regeneration process of normal muscle with that of dystrophic muscle after myonecrosis induced by 0.25% bupivacaine hydrochloride (BPVC). In the present study, the C57BL/10ScSn-mdx (mdx) mouse was used as an animal model of DMD and the C57BL/10ScSn (B10) mouse as a control. There was no definite difference in muscle fiber regeneration between normal and dystrophic muscles. The dystrophic muscle regenerated rapidly, at a tempo similar to that of normal muscle with respect to fiber size and fiber type differentiation. The variation in fiber diameter of dystrophic muscle, however, was more obvious than that of normal muscle. To promote successful myoblast transfer from B10 mice into dystrophic mdx mice at a higher ratio, cultured normal myoblasts were transferred into the regenerating dystrophic muscle on the first and second day after myonecrosis induced by BPVC. Two weeks after the myoblast injection, the muscles were examined immunohistochemically using an anti-dystrophin antibody. Although dystrophin-positive fibers appeared in the dystrophic muscle, the positive fibers were unexpectedly few in number (3.86 +/- 1.50%). (ABSTRACT TRUNCATED AT 250 WORDS)

  9. Integration of local motion is normal in amblyopia

    NASA Astrophysics Data System (ADS)

    Hess, Robert F.; Mansouri, Behzad; Dakin, Steven C.; Allen, Harriet A.

    2006-05-01

    We investigate the global integration of local motion direction signals in amblyopia, in a task where performance is equated between normal and amblyopic eyes at the single element level. We use an equivalent noise model to derive the parameters of internal noise and number of samples, both of which we show are normal in amblyopia for this task. This result is in apparent conflict with a previous study in amblyopes showing that global motion processing is defective in global coherence tasks [Vision Res. 43, 729 (2003)]. A similar discrepancy between the normalcy of signal integration [Vision Res. 44, 2955 (2004)] and anomalous global coherence form processing has also been reported [Vision Res. 45, 449 (2005)]. We suggest that these discrepancies for form and motion processing in amblyopia point to a selective problem in separating signal from noise in the typical global coherence task.
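
    For reference, the standard equivalent-noise formulation expresses the observed threshold variance in terms of the internal noise, the external (stimulus) noise, and the number of integrated samples n; the exact parameterization used by the authors may differ.

      \sigma_{\mathrm{obs}}^{2} \;=\; \frac{\sigma_{\mathrm{int}}^{2} + \sigma_{\mathrm{ext}}^{2}}{n}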

  10. Identification of nonlinear normal modes of engineering structures under broadband forcing

    NASA Astrophysics Data System (ADS)

    Noël, Jean-Philippe; Renson, L.; Grappasonni, C.; Kerschen, G.

    2016-06-01

    The objective of the present paper is to develop a two-step methodology integrating system identification and numerical continuation for the experimental extraction of nonlinear normal modes (NNMs) under broadband forcing. The first step processes acquired input and output data to derive an experimental state-space model of the structure. The second step converts this state-space model into a model in modal space from which NNMs are computed using shooting and pseudo-arclength continuation. The method is demonstrated using noisy synthetic data simulated on a cantilever beam with a hardening-softening nonlinearity at its free end.

  11. A self-organized criticality model for ion temperature gradient mode driven turbulence in confined plasma

    NASA Astrophysics Data System (ADS)

    Isliker, H.; Pisokas, Th.; Strintzi, D.; Vlahos, L.

    2010-08-01

    A new self-organized criticality (SOC) model is introduced in the form of a cellular automaton (CA) for ion temperature gradient (ITG) mode driven turbulence in fusion plasmas. Main characteristics of the model are that it is constructed in terms of the actual physical variable, the ion temperature, and that the temporal evolution of the CA, which necessarily is in the form of rules, mimics actual physical processes as they are considered to be active in the system, i.e., a heating process and a local diffusive process that sets in if a threshold in the normalized ITG R/LT is exceeded. The model reaches the SOC state and yields ion temperature profiles of exponential shape, which exhibit very high stiffness, in that they basically are independent of the loading pattern applied. This implies that there is anomalous heat transport present in the system, despite the fact that diffusion at the local level is imposed to be of a normal kind. The distributions of the heat fluxes in the system and of the heat out-fluxes are of power-law shape. The basic properties of the model are in good qualitative agreement with experimental results.

  12. On compensatory strategies and computational models: the case of pure alexia.

    PubMed

    Shallice, Tim

    2014-01-01

    The article is concerned with inferences from the behaviour of neurological patients to models of normal function. It takes the letter-by-letter reading strategy common in pure alexic patients as an example of the methodological problems that compensatory strategies produce when making such inferences. Evidence is discussed on three possible ways the letter-by-letter reading process might operate: "reversed spelling"; the use of the phonological input buffer as a temporary holding store during word building; and the use of serial input to the visual word-form system entirely within the visual-orthographic domain, as in the model of Plaut [1999. A connectionist approach to word reading and acquired dyslexia: Extension to sequential processing. Cognitive Science, 23, 543-568]. The compensatory strategy used by at least one pure alexic patient does not fit with the third of these possibilities. On the more general question, it is argued that even if compensatory strategies are being used, the behaviour of neurological patients can be useful for the development and assessment of first-generation information-processing models of normal function, but it is not likely to be useful for the development and assessment of second-generation computational models.

  13. Two-part models with stochastic processes for modelling longitudinal semicontinuous data: Computationally efficient inference and modelling the overall marginal mean.

    PubMed

    Yiu, Sean; Tom, Brian Dm

    2017-01-01

    Several researchers have described two-part models with patient-specific stochastic processes for analysing longitudinal semicontinuous data. In theory, such models can offer greater flexibility than the standard two-part model with patient-specific random effects. In practice, however, the high-dimensional integrations involved in the marginal likelihood (i.e. integrated over the stochastic processes) significantly complicate model fitting, and only non-standard, computationally intensive procedures based on simulating the marginal likelihood have so far been proposed. In this paper, we describe an efficient method of implementation by demonstrating how the high-dimensional integrations involved in the marginal likelihood can be computed efficiently. Specifically, by using a property of the multivariate normal distribution and the standard marginal cumulative distribution function identity, we transform the marginal likelihood so that the high-dimensional integrations are contained in the cumulative distribution function of a multivariate normal distribution, which can then be efficiently evaluated. Hence, maximum likelihood estimation can be used to obtain parameter estimates and asymptotic standard errors (from the observed information matrix) of model parameters. We describe our proposed efficient implementation procedure for the standard two-part model parameterisation and for when it is of interest to directly model the overall marginal mean. The methodology is applied to a psoriatic arthritis data set concerning functional disability.
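
    The computational trick can be sketched in a few lines: a rectangular probability over correlated normal variables, which would otherwise require high-dimensional numerical integration, is evaluated directly as a multivariate normal CDF (the dimensions, correlation structure, and thresholds below are illustrative assumptions).

      import numpy as np
      from scipy.stats import multivariate_normal

      T = 6                                        # number of longitudinal visits (assumed)
      lags = np.abs(np.subtract.outer(np.arange(T), np.arange(T)))
      cov = 0.6 ** lags                            # AR(1)-like correlation across visits
      thresholds = np.full(T, 0.5)                 # illustrative probit thresholds for the binary part

      # P(Z_1 < t_1, ..., Z_T < t_T) for Z ~ N(0, cov): the rectangular integral that
      # would otherwise need T-dimensional quadrature or simulation.
      p = multivariate_normal(mean=np.zeros(T), cov=cov).cdf(thresholds)
      print(p)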

  14. Methodological study of affine transformations of gene expression data with proposed robust non-parametric multi-dimensional normalization method.

    PubMed

    Bengtsson, Henrik; Hössjer, Ola

    2006-03-01

    Low-level processing and normalization of microarray data are most important steps in microarray analysis, which have profound impact on downstream analysis. Multiple methods have been suggested to date, but it is not clear which is the best. It is therefore important to further study the different normalization methods in detail and the nature of microarray data in general. A methodological study of affine models for gene expression data is carried out. Focus is on two-channel comparative studies, but the findings generalize also to single- and multi-channel data. The discussion applies to spotted as well as in-situ synthesized microarray data. Existing normalization methods such as curve-fit ("lowess") normalization, parallel and perpendicular translation normalization, and quantile normalization, but also dye-swap normalization are revisited in the light of the affine model and their strengths and weaknesses are investigated in this context. As a direct result from this study, we propose a robust non-parametric multi-dimensional affine normalization method, which can be applied to any number of microarrays with any number of channels either individually or all at once. A high-quality cDNA microarray data set with spike-in controls is used to demonstrate the power of the affine model and the proposed normalization method. We find that an affine model can explain non-linear intensity-dependent systematic effects in observed log-ratios. Affine normalization removes such artifacts for non-differentially expressed genes and assures that symmetry between negative and positive log-ratios is obtained, which is fundamental when identifying differentially expressed genes. In addition, affine normalization makes the empirical distributions in different channels more equal, which is the purpose of quantile normalization, and may also explain why dye-swap normalization works or fails. All methods are made available in the aroma package, which is a platform-independent package for R.

  15. Empirical analysis and modeling of manual turnpike tollbooths in China

    NASA Astrophysics Data System (ADS)

    Zhang, Hao

    2017-03-01

    To address low levels of service satisfaction at tollbooths of many turnpikes in China, we conduct an empirical study and use a queueing model to investigate performance measures. In this paper, we collect archived data from six tollbooths of a turnpike in China. An empirical analysis of the vehicles' time-dependent arrival process and the collectors' time-dependent service times is conducted. It shows that the vehicle arrival process follows a non-homogeneous Poisson process while the collector service time follows a log-normal distribution. Further, we model the process of collecting tolls at tollbooths with a MAP/PH/1/FCFS queue for mathematical tractability and present some numerical examples.
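
    A small sketch (with synthetic numbers, not the collected tollbooth data) of the two empirical ingredients: a maximum-likelihood log-normal fit to service times and a piecewise-constant rate estimate for the non-homogeneous Poisson arrival process.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)
      service = rng.lognormal(mean=2.5, sigma=0.4, size=2000)   # service times in seconds (synthetic)
      shape, loc, scale = stats.lognorm.fit(service, floc=0)    # log-normal maximum-likelihood fit
      print("sigma =", round(shape, 3), "median service time =", round(scale, 1), "s")

      arrivals = np.sort(rng.uniform(0, 24, size=5000))         # arrival times over a day (h), synthetic
      counts, edges = np.histogram(arrivals, bins=24)
      rate_per_hour = counts / np.diff(edges)                   # piecewise-constant estimate of lambda(t)
      print(rate_per_hour[:6])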

  16. Fault detection and diagnosis in an industrial fed-batch cell culture process.

    PubMed

    Gunther, Jon C; Conner, Jeremy S; Seborg, Dale E

    2007-01-01

    A flexible process monitoring method was applied to industrial pilot plant cell culture data for the purpose of fault detection and diagnosis. Data from 23 batches, 20 normal operating conditions (NOC) and three abnormal, were available. A principal component analysis (PCA) model was constructed from 19 NOC batches, and the remaining NOC batch was used for model validation. Subsequently, the model was used to successfully detect (both offline and online) abnormal process conditions and to diagnose the root causes. This research demonstrates that data from a relatively small number of batches (approximately 20) can still be used to monitor for a wide range of process faults.
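
    A rough sketch of the monitoring scheme on synthetic data (the actual batch unfolding and control limits in the study may differ): a PCA model is built from normal-operating-condition (NOC) observations, and new observations are flagged when Hotelling's T2 or the squared prediction error exceeds limits derived from the NOC data.

      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(5)
      X_noc = rng.normal(size=(400, 12))           # unfolded NOC batch observations (synthetic)
      mu, sd = X_noc.mean(axis=0), X_noc.std(axis=0)

      pca = PCA(n_components=3).fit((X_noc - mu) / sd)

      def t2_spe(x):
          z = (x - mu) / sd
          scores = pca.transform(z)
          t2 = (scores**2 / pca.explained_variance_).sum(axis=1)      # Hotelling's T^2
          spe = ((z - pca.inverse_transform(scores))**2).sum(axis=1)  # squared prediction error (Q)
          return t2, spe

      t2_lim, spe_lim = (np.percentile(v, 99) for v in t2_spe(X_noc))  # empirical 99% limits

      x_new = rng.normal(size=(1, 12))
      x_new[0, 3] += 6.0                           # simulated sensor fault
      t2, spe = t2_spe(x_new)
      print("T2 alarm:", bool(t2 > t2_lim), "SPE alarm:", bool(spe > spe_lim))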

  17. The study on the nanomachining property and cutting model of single-crystal sapphire by atomic force microscopy.

    PubMed

    Huang, Jen-Ching; Weng, Yung-Jin

    2014-01-01

    This study focused on the nanomachining property and cutting model of single-crystal sapphire during nanomachining. The coated diamond probe is used to as a tool, and the atomic force microscopy (AFM) is as an experimental platform for nanomachining. To understand the effect of normal force on single-crystal sapphire machining, this study tested nano-line machining and nano-rectangular pattern machining at different normal force. In nano-line machining test, the experimental results showed that the normal force increased, the groove depth from nano-line machining also increased. And the trend is logarithmic type. In nano-rectangular pattern machining test, it is found when the normal force increases, the groove depth also increased, but rather the accumulation of small chips. This paper combined the blew by air blower, the cleaning by ultrasonic cleaning machine and using contact mode probe to scan the surface topology after nanomaching, and proposed the "criterion of nanomachining cutting model," in order to determine the cutting model of single-crystal sapphire in the nanomachining is ductile regime cutting model or brittle regime cutting model. After analysis, the single-crystal sapphire substrate is processed in small normal force during nano-linear machining; its cutting modes are ductile regime cutting model. In the nano-rectangular pattern machining, due to the impact of machined zones overlap, the cutting mode is converted into a brittle regime cutting model. © 2014 Wiley Periodicals, Inc.

  18. Computational Study of Thrombus Formation and Clotting Factor Effects under Venous Flow Conditions

    PubMed Central

    Govindarajan, Vijay; Rakesh, Vineet; Reifman, Jaques; Mitrophanov, Alexander Y.

    2016-01-01

    A comprehensive understanding of thrombus formation as a physicochemical process that has evolved to protect the integrity of the human vasculature is critical to our ability to predict and control pathological states caused by a malfunctioning blood coagulation system. Despite numerous investigations, the spatial and temporal details of thrombus growth as a multicomponent process are not fully understood. Here, we used computational modeling to investigate the temporal changes in the spatial distributions of the key enzymatic (i.e., thrombin) and structural (i.e., platelets and fibrin) components within a growing thrombus. Moreover, we investigated the interplay between clot structure and its mechanical properties, such as hydraulic resistance to flow. Our model relied on the coupling of computational fluid dynamics and biochemical kinetics, and was validated using flow-chamber data from a previous experimental study. The model allowed us to identify the distinct patterns characterizing the spatial distributions of thrombin, platelets, and fibrin accumulating within a thrombus. Our modeling results suggested that under the simulated conditions, thrombin kinetics was determined predominantly by prothrombinase. Furthermore, our simulations showed that thrombus resistance imparted by fibrin was ∼30-fold higher than that imparted by platelets. Yet, thrombus-mediated bloodflow occlusion was driven primarily by the platelet deposition process, because the height of the platelet accumulation domain was approximately twice that of the fibrin accumulation domain. Fibrinogen supplementation in normal blood resulted in a nonlinear increase in thrombus resistance, and for a supplemented fibrinogen level of 48%, the thrombus resistance increased by ∼2.7-fold. Finally, our model predicted that restoring the normal levels of clotting factors II, IX, and X while simultaneously restoring fibrinogen (to 88% of its normal level) in diluted blood can restore fibrin generation to ∼78% of its normal level and hence improve clot formation under dilution. PMID:27119646

  19. Mutual-friction induced instability of normal-fluid vortex tubes in superfluid helium-4

    NASA Astrophysics Data System (ADS)

    Kivotides, Demosthenes

    2018-06-01

    It is shown that, as a result of its interactions with superfluid vorticity, a normal-fluid vortex tube in helium-4 becomes unstable and disintegrates. The superfluid vorticity acquires only a small (few percents of normal-fluid tube strength) polarization, whilst expanding in a front-like manner in the intervortex space of the normal-fluid, forming a dense, unstructured tangle in the process. The accompanied energy spectra scalings offer a structural explanation of analogous scalings in fully developed finite-temperature superfluid turbulence. A macroscopic mutual-friction model incorporating these findings is proposed.

  20. Preneoplastic lesion growth driven by the death of adjacent normal stem cells

    PubMed Central

    Chao, Dennis L.; Eck, J. Thomas; Brash, Douglas E.; Maley, Carlo C.; Luebeck, E. Georg

    2008-01-01

    Clonal expansion of premalignant lesions is an important step in the progression to cancer. This process is commonly considered to be a consequence of sustaining a proliferative mutation. Here, we investigate whether the growth trajectory of clones can be better described by a model in which clone growth does not depend on a proliferative advantage. We developed a simple computer model of clonal expansion in an epithelium in which mutant clones can only colonize space left unoccupied by the death of adjacent normal stem cells. In this model, competition for space occurs along the frontier between mutant and normal territories, and both the shapes and the growth rates of lesions are governed by the differences between mutant and normal cells' replication or apoptosis rates. The behavior of this model of clonal expansion along a mutant clone's frontier, when apoptosis of both normal and mutant cells is included, matches the growth of UVB-induced p53-mutant clones in mouse dorsal epidermis better than a standard exponential growth model that does not include tissue architecture. The model predicts precancer cell mutation and death rates that agree with biological observations. These results support the hypothesis that clonal expansion of premalignant lesions can be driven by agents, such as ionizing or nonionizing radiation, that cause cell killing but do not directly stimulate cell replication. PMID:18815380
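
    A toy re-implementation in the spirit of the described model (illustrative rates and grid size, not the published parameters): mutant clones expand only by colonizing sites vacated by the death of adjacent cells, so growth is driven by the difference in death rates rather than by a proliferative advantage.

      import numpy as np

      rng = np.random.default_rng(6)
      N = 100
      grid = np.zeros((N, N), dtype=int)           # 0 = normal stem cell, 1 = mutant
      grid[N // 2, N // 2] = 1                     # single initiated mutant cell
      death = {0: 0.05, 1: 0.02}                   # per-step apoptosis probability (illustrative)
      moves = np.array([(-1, 0), (1, 0), (0, -1), (0, 1)])

      for step in range(500):
          p_death = np.where(grid == 1, death[1], death[0])
          dies = rng.random((N, N)) < p_death
          for i, j in zip(*np.nonzero(dies)):
              di, dj = moves[rng.integers(4)]      # a random adjacent cell refills the gap
              grid[i, j] = grid[(i + di) % N, (j + dj) % N]

      print("mutant clone size after 500 steps:", int(grid.sum()))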

  1. Numerical modeling of overland flow due to rainfall-runoff

    USDA-ARS?s Scientific Manuscript database

    Runoff is a basic hydrologic process that can be influenced by management activities in agricultural watersheds. Better description of runoff patterns through modeling will help to understand and predict watershed sediment transport and water quality. Normally, runoff is studied with kinematic wave ...

  2. A transition-based joint model for disease named entity recognition and normalization.

    PubMed

    Lou, Yinxia; Zhang, Yue; Qian, Tao; Li, Fei; Xiong, Shufeng; Ji, Donghong

    2017-08-01

    Disease named entities play a central role in many areas of biomedical research, and automatic recognition and normalization of such entities have received increasing attention in biomedical research communities. Existing methods typically used pipeline models with two independent phases: (i) a disease named entity recognition (DER) system is used to find the boundaries of mentions in text and (ii) a disease named entity normalization (DEN) system is used to connect the mentions recognized to concepts in a controlled vocabulary. The main problems of such models are: (i) there is error propagation from DER to DEN and (ii) DEN is useful for DER, but pipeline models cannot utilize this. We propose a transition-based model to jointly perform disease named entity recognition and normalization, casting the output construction process into an incremental state transition process, learning sequences of transition actions globally, which correspond to joint structural outputs. Beam search and online structured learning are used, with learning being designed to guide search. Compared with the only existing method for joint DEN and DER, our method allows non-local features to be used, which significantly improves the accuracies. We evaluate our model on two corpora: the BioCreative V Chemical Disease Relation (CDR) corpus and the NCBI disease corpus. Experiments show that our joint framework achieves significantly higher performances compared to competitive pipeline baselines. Our method compares favourably to other state-of-the-art approaches. Data and code are available at https://github.com/louyinxia/jointRN. dhji@whu.edu.cn. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  3. Dynamic response characteristics of high temperature superconducting maglev systems: Comparison between Halbach-type and normal permanent magnet guideways

    NASA Astrophysics Data System (ADS)

    Wang, B.; Zheng, J.; Che, T.; Zheng, B. T.; Si, S. S.; Deng, Z. G.

    2015-12-01

    The permanent magnet guideway (PMG) is very important for the performance of the high temperature superconducting (HTS) system in terms of electromagnetic force and operational stability. The dynamic response characteristics of a HTS maglev model levitating on two types of PMG, which are the normal PMG with iron flux concentration and Halbach-type PMG, were investigated by experiments. The dynamic signals for different field-cooling heights (FCHs) and loading/unloading processes were acquired and analyzed by a vibration analyzer and laser displacement sensors. The resonant frequency, stiffness and levitation height of the model were discussed. It was found that the maglev model on the Halbach-type PMG has higher resonant frequency and higher vertical stiffness compared with the normal PMG. However, the low lateral stiffness of the model on the Halbach-type PMG indicates poor lateral stability. Besides, the Halbach-type PMG has better loading capacity than the normal PMG. These results are helpful to design a suitable PMG for the HTS system in practical applications.

  4. Modeling and forecasting foreign exchange daily closing prices with normal inverse Gaussian

    NASA Astrophysics Data System (ADS)

    Teneng, Dean

    2013-09-01

    We fit the normal inverse Gaussian (NIG) distribution to foreign exchange closing prices using the open-source software package R and select the best models by the strategy proposed by Käärik and Umbleja (2011). We observe that daily closing prices (12/04/2008 - 07/08/2012) of CHF/JPY, AUD/JPY, GBP/JPY, NZD/USD, QAR/CHF, QAR/EUR, SAR/CHF, SAR/EUR, TND/CHF and TND/EUR are excellent fits while EGP/EUR and EUR/GBP are good fits, with Kolmogorov-Smirnov test p-values of 0.062 and 0.08, respectively. It was impossible to estimate the normal inverse Gaussian parameters for JPY/CHF (by maximum likelihood; a computational problem), although CHF/JPY was an excellent fit. Thus, while the stochastic properties of an exchange rate can be completely modeled with a probability distribution in one direction, it may be impossible in the other. We also demonstrate that foreign exchange closing prices can be forecasted with the normal inverse Gaussian (NIG) Lévy process, both in cases where the daily closing prices can and cannot be modeled by the NIG distribution.
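
    The fitting-and-testing workflow can be sketched as follows; the paper used R, so scipy's norminvgauss is used here as an equivalent, and the data are synthetic rather than the exchange-rate series analysed above.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(7)
      prices_like = stats.norminvgauss(a=2.0, b=0.3).rvs(size=1000, random_state=rng)  # synthetic series

      a, b, loc, scale = stats.norminvgauss.fit(prices_like)        # maximum-likelihood fit
      ks_stat, p_value = stats.kstest(prices_like, "norminvgauss", args=(a, b, loc, scale))
      print(f"KS statistic = {ks_stat:.3f}, p-value = {p_value:.3f}")  # large p-value: NIG acceptable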

  5. Extending BPM Environments of Your Choice with Performance Related Decision Support

    NASA Astrophysics Data System (ADS)

    Fritzsche, Mathias; Picht, Michael; Gilani, Wasif; Spence, Ivor; Brown, John; Kilpatrick, Peter

    What-if Simulations have been identified as one solution for business performance related decision support. Such support is especially useful in cases where it can be automatically generated out of Business Process Management (BPM) Environments from the existing business process models and performance parameters monitored from the executed business process instances. Currently, some of the available BPM Environments offer basic-level performance prediction capabilities. However, these functionalities are normally too limited to be generally useful for performance related decision support at business process level. In this paper, an approach is presented which allows the non-intrusive integration of sophisticated tooling for what-if simulations, analytic performance prediction tools, process optimizations or a combination of such solutions into already existing BPM environments. The approach abstracts from process modelling techniques which enable automatic decision support spanning processes across numerous BPM Environments. For instance, this enables end-to-end decision support for composite processes modelled with the Business Process Modelling Notation (BPMN) on top of existing Enterprise Resource Planning (ERP) processes modelled with proprietary languages.

  6. Mimicking Aphasic Semantic Errors in Normal Speech Production: Evidence from a Novel Experimental Paradigm

    ERIC Educational Resources Information Center

    Hodgson, Catherine; Lambon Ralph, Matthew A.

    2008-01-01

    Semantic errors are commonly found in semantic dementia (SD) and some forms of stroke aphasia and provide insights into semantic processing and speech production. Low error rates are found in standard picture naming tasks in normal controls. In order to increase error rates and thus provide an experimental model of aphasic performance, this study…

  7. Engineering model for ultrafast laser microprocessing

    NASA Astrophysics Data System (ADS)

    Audouard, E.; Mottay, E.

    2016-03-01

    Ultrafast laser micro-machining relies on complex laser-matter interaction processes, leading to virtually athermal laser ablation. The development of industrial ultrafast laser applications benefits from a better understanding of these processes. To this end, a number of sophisticated scientific models have been developed, providing valuable insights into the physics of the interaction. Yet, from an engineering point of view, they are often difficult to use and require a number of adjustable parameters. We present a simple engineering model for ultrafast laser processing, applied to various real-life applications: percussion drilling, line engraving, and non-normal incidence trepanning. The model requires only two global parameters. Analytical results are derived for single-pulse percussion drilling and simple-pass engraving. Simple assumptions allow the effect of non-normal incidence beams to be predicted and key parameters for trepanning drilling to be obtained. The model is compared to experimental data on stainless steel with a wide range of laser characteristics (pulse duration, repetition rate, pulse energy) and machining conditions (sample or beam speed). Ablation depth and volume ablation rate are modeled for pulse durations from 100 fs to 1 ps. A trepanning time of 5.4 s with a conicity of 0.15° is obtained for a hole of 900 μm depth and 100 μm diameter.
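
    One commonly used two-parameter ablation law, with an effective penetration depth and a threshold fluence as the global parameters, is sketched below; whether this is exactly the formulation used by the authors is an assumption, and the numbers are illustrative.

      import numpy as np

      def depth_per_pulse(fluence, delta=0.03, f_threshold=0.2):
          """delta: effective penetration depth (um); f_threshold: ablation threshold (J/cm^2)."""
          fluence = np.asarray(fluence, dtype=float)
          return np.where(fluence > f_threshold, delta * np.log(fluence / f_threshold), 0.0)

      n_pulses = 500                                       # illustrative percussion-drilling estimate
      print("estimated hole depth:", float(n_pulses * depth_per_pulse(2.0)), "um")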

  8. Renormalized vibrations and normal energy transport in 1d FPU-like discrete nonlinear Schrödinger equations.

    PubMed

    Li, Simeng; Li, Nianbei

    2018-03-28

    For one-dimensional (1d) nonlinear atomic lattices, models with on-site nonlinearities such as the Frenkel-Kontorova (FK) and ϕ4 lattices have normal energy transport, while models with inter-site nonlinearities such as the Fermi-Pasta-Ulam-β (FPU-β) lattice exhibit anomalous energy transport. The 1d Discrete Nonlinear Schrödinger (DNLS) equations with on-site nonlinearities have been previously studied, and normal energy transport has also been found. Here, we investigate the energy transport of 1d FPU-like DNLS equations with inter-site nonlinearities. Extending from the FPU-β lattice, the renormalized vibration theory is developed for the FPU-like DNLS models, and the predicted renormalized vibrations are verified by direct numerical simulations, as for the FPU-β lattice. However, when the energy diffusion processes are explored, normal energy transport is observed for the 1d FPU-like DNLS models, which is different from their atomic-lattice counterpart, the FPU-β lattice. The reason might be that, unlike nonlinear atomic lattices, where models with on-site nonlinearities have one less conserved quantity than models with inter-site nonlinearities, the DNLS models with on-site or inter-site nonlinearities have the same number of conserved quantities as a result of gauge transformation.

  9. A Bayesian Semiparametric Item Response Model with Dirichlet Process Priors

    ERIC Educational Resources Information Center

    Miyazaki, Kei; Hoshino, Takahiro

    2009-01-01

    In Item Response Theory (IRT), item characteristic curves (ICCs) are illustrated through logistic models or normal ogive models, and the probability that examinees give the correct answer is usually a monotonically increasing function of their ability parameters. However, since only limited patterns of shapes can be obtained from logistic models…

  10. Nuclear test ban treaty verification: Improving test ban monitoring with empirical and model-based signal processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harris, David B.; Gibbons, Steven J.; Rodgers, Arthur J.

    In this approach, small scale-length medium perturbations not modeled in the tomographic inversion might be described as random fields, characterized by particular distribution functions (e.g., normal with specified spatial covariance). Conceivably, random field parameters (scatterer density or scale length) might themselves be the targets of tomographic inversions of the scattered wave field. As a result, such augmented models may provide processing gain through the use of probabilistic signal sub spaces rather than deterministic waveforms.

  11. Nuclear test ban treaty verification: Improving test ban monitoring with empirical and model-based signal processing

    DOE PAGES

    Harris, David B.; Gibbons, Steven J.; Rodgers, Arthur J.; ...

    2012-05-01

    In this approach, small scale-length medium perturbations not modeled in the tomographic inversion might be described as random fields, characterized by particular distribution functions (e.g., normal with specified spatial covariance). Conceivably, random field parameters (scatterer density or scale length) might themselves be the targets of tomographic inversions of the scattered wave field. As a result, such augmented models may provide processing gain through the use of probabilistic signal sub spaces rather than deterministic waveforms.

  12. Microscopic prediction of speech recognition for listeners with normal hearing in noise using an auditory model.

    PubMed

    Jürgens, Tim; Brand, Thomas

    2009-11-01

    This study compares the phoneme recognition performance in speech-shaped noise of a microscopic model for speech recognition with the performance of normal-hearing listeners. "Microscopic" is defined in terms of this model twofold. First, the speech recognition rate is predicted on a phoneme-by-phoneme basis. Second, microscopic modeling means that the signal waveforms to be recognized are processed by mimicking elementary parts of human's auditory processing. The model is based on an approach by Holube and Kollmeier [J. Acoust. Soc. Am. 100, 1703-1716 (1996)] and consists of a psychoacoustically and physiologically motivated preprocessing and a simple dynamic-time-warp speech recognizer. The model is evaluated while presenting nonsense speech in a closed-set paradigm. Averaged phoneme recognition rates, specific phoneme recognition rates, and phoneme confusions are analyzed. The influence of different perceptual distance measures and of the model's a-priori knowledge is investigated. The results show that human performance can be predicted by this model using an optimal detector, i.e., identical speech waveforms for both training of the recognizer and testing. The best model performance is yielded by distance measures which focus mainly on small perceptual distances and neglect outliers.
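
    The recognizer back-end can be illustrated with a minimal dynamic-time-warp distance between two feature sequences (random placeholders below); the model's psychoacoustic preprocessing is not reproduced here.

      import numpy as np

      def dtw_distance(a, b):
          """a, b: arrays of shape (time, features); returns the accumulated DTW cost."""
          n, m = len(a), len(b)
          cost = np.full((n + 1, m + 1), np.inf)
          cost[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  d = np.linalg.norm(a[i - 1] - b[j - 1])   # local Euclidean distance
                  cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
          return cost[n, m]

      rng = np.random.default_rng(8)
      template = rng.normal(size=(40, 13))                  # e.g. a stored phoneme representation
      test = rng.normal(size=(55, 13))                      # e.g. the processed test utterance
      print(dtw_distance(template, test))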

  13. Exploration of a physiologically-inspired hearing-aid algorithm using a computer model mimicking impaired hearing.

    PubMed

    Jürgens, Tim; Clark, Nicholas R; Lecluyse, Wendy; Meddis, Ray

    2016-01-01

    To use a computer model of impaired hearing to explore the effects of a physiologically-inspired hearing-aid algorithm on a range of psychoacoustic measures. A computer model of a hypothetical impaired listener's hearing was constructed by adjusting parameters of a computer model of normal hearing. Absolute thresholds, estimates of compression, and frequency selectivity (summarized to a hearing profile) were assessed using this model with and without pre-processing the stimuli by a hearing-aid algorithm. The influence of different settings of the algorithm on the impaired profile was investigated. To validate the model predictions, the effect of the algorithm on hearing profiles of human impaired listeners was measured. A computer model simulating impaired hearing (total absence of basilar membrane compression) was used, and three hearing-impaired listeners participated. The hearing profiles of the model and the listeners showed substantial changes when the test stimuli were pre-processed by the hearing-aid algorithm. These changes consisted of lower absolute thresholds, steeper temporal masking curves, and sharper psychophysical tuning curves. The hearing-aid algorithm affected the impaired hearing profile of the model to approximate a normal hearing profile. Qualitatively similar results were found with the impaired listeners' hearing profiles.

  14. An updated concept of coagulation with clinical implications.

    PubMed

    Romney, Gregory; Glick, Michael

    2009-05-01

    Over the past century, a series of models have been put forth to explain the coagulation mechanism. The coagulation cascade/waterfall model has gained the most widespread acceptance. This model, however, has problems when it is used in different clinical scenarios. A more recently proposed cell-based model better describes the coagulation process in vivo and provides oral health care professionals (OHCPs) with a better understanding of the clinical implications of providing dental care to patients with potentially increased bleeding tendencies. The authors conducted a literature search using the PubMed database. They searched for key words including "coagulation," "hemostasis," "bleeding," "coagulation factors," "models," "prothrombin time," "activated partial thromboplastin time," "international normalized ratio," "anticoagulation therapy" and "hemophilia" separately and in combination. The coagulation cascade/waterfall model is insufficient to explain coagulation in vivo, predict a patient's bleeding tendency, or correlate clinical outcomes with specific laboratory screening tests such as prothrombin time, activated partial thromboplastin time and international normalized ratio. However, the cell-based model of coagulation that reflects the in vivo process of coagulation provides insight into the clinical ramifications of treating dental patients with specific coagulation factor deficiencies. Understanding the in vivo coagulation process will help OHCPs better predict a patient's bleeding tendency. In addition, applying the theoretical concept of the cell-based model of coagulation to commonly used laboratory screening tests for coagulation and bleeding will result in safer and more appropriate dental care.

  15. Complex Networks in Psychological Models

    NASA Astrophysics Data System (ADS)

    Wedemann, R. S.; Carvalho, L. S. A. V. D.; Donangelo, R.

    We develop schematic, self-organizing, neural-network models to describe mechanisms associated with mental processes, by a neurocomputational substrate. These models are examples of real world complex networks with interesting general topological structures. Considering dopaminergic signal-to-noise neuronal modulation in the central nervous system, we propose neural network models to explain development of cortical map structure and dynamics of memory access, and unify different mental processes into a single neurocomputational substrate. Based on our neural network models, neurotic behavior may be understood as an associative memory process in the brain, and the linguistic, symbolic associative process involved in psychoanalytic working-through can be mapped onto a corresponding process of reconfiguration of the neural network. The models are illustrated through computer simulations, where we varied dopaminergic modulation and observed the self-organizing emergent patterns at the resulting semantic map, interpreting them as different manifestations of mental functioning, from psychotic through to normal and neurotic behavior, and creativity.

  16. A 3% Measurement of the Beam Normal Single Spin Asymmetry in Forward Angle Elastic Electron-Proton Scattering using the Qweak Setup

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waidyawansa, Dinayadura Buddhini

    2013-08-01

    The beam normal single spin asymmetry generated in the scattering of transversely polarized electrons from unpolarized nucleons is an observable of the imaginary part of the two-photon exchange process. Moreover, it is a potential source of false asymmetry in parity-violating electron scattering experiments. The Qweak experiment uses parity-violating electron scattering to make a direct measurement of the weak charge of the proton. The targeted 4% measurement of the weak charge of the proton probes for parity-violating new physics beyond the Standard Model. The beam normal single spin asymmetry at Qweak kinematics is at least three orders of magnitude larger than the 5 ppb precision of the parity-violating asymmetry. To better understand this parity-conserving background, the Qweak Collaboration has performed elastic scattering measurements with a fully transversely polarized electron beam on the proton and aluminum. This dissertation presents the analysis of the 3% measurement (1.3% statistical and 2.6% systematic) of the beam normal single spin asymmetry in electron-proton scattering at a Q2 of 0.025 (GeV/c)2. It is the most precise measurement of the beam normal single spin asymmetry available at the time. A measurement of this precision helps to improve theoretical models of the beam normal single spin asymmetry and thereby our understanding of the doubly virtual Compton scattering process.

  17. Impaired receptor-mediated catabolism of low density lipoprotein in the WHHL rabbit, an animal model of familial hypercholesterolemia

    PubMed Central

    Bilheimer, David W.; Watanabe, Yoshio; Kita, Toru

    1982-01-01

    The homozygous WHHL (Watanabe heritable hyperlipidemic) rabbit displays either no or only minimal low density lipoprotein (LDL) receptor activity on cultured fibroblasts and liver membranes and has therefore been proposed as an animal model for human familial hypercholesterolemia. To assess the impact of this mutation on LDL metabolism in vivo, we performed lipoprotein turnover studies in normal and WHHL rabbits using both native rabbit LDL and chemically modified LDL (i.e., methyl-LDL) that does not bind to LDL receptors. The total fractional catabolic rate (FCR) for LDL in the normal rabbit was 3.5-fold greater than in the WHHL rabbit. Sixty-seven percent of the total FCR for LDL in the normal rabbit was due to LDL receptor-mediated clearance and 33% was attributable to receptor-independent processes; in the WHHL rabbit, essentially all of the LDL was catabolized via receptor-independent processes. Despite a 17.5-fold elevated plasma pool size of LDL apoprotein (apo-LDL) in WHHL as compared to normal rabbits, the receptor-independent FCR—as judged by the turnover of methyl-LDL—was similar in the two strains. Thus, the receptor-independent catabolic processes are not influenced by the mutation affecting the LDL receptor. The WHHL rabbits also exhibited a 5.6-fold increase in the absolute rate of apo-LDL synthesis and catabolism. In absolute terms, the WHHL rabbit cleared 19-fold more apo-LDL via receptor-independent processes than did the normal rabbit and cleared virtually none by the receptor-dependent pathway. These results indicate that the homozygous WHHL rabbit shares a number of metabolic features in common with human familial hypercholesterolemia and should serve as a useful model for the study of altered lipoprotein metabolism associated with receptor abnormalities. We also noted that the in vivo metabolic behavior of human and rabbit LDL in the normal rabbit differed such that the mean total FCR for human LDL was only 64% of the mean total FCR for rabbit LDL, whereas human and rabbit methyl-LDL were cleared at identical rates. Thus, if human LDL and methyl-LDL had been used in these studies, the magnitude of both the total and receptor-dependent FCR would have been underestimated. PMID:6285345

  18. Hard to “tune in”: neural mechanisms of live face-to-face interaction with high-functioning autistic spectrum disorder

    PubMed Central

    Tanabe, Hiroki C.; Kosaka, Hirotaka; Saito, Daisuke N.; Koike, Takahiko; Hayashi, Masamichi J.; Izuma, Keise; Komeda, Hidetsugu; Ishitobi, Makoto; Omori, Masao; Munesue, Toshio; Okazawa, Hidehiko; Wada, Yuji; Sadato, Norihiro

    2012-01-01

    Persons with autism spectrum disorders (ASD) are known to have difficulty in eye contact (EC). This may make it difficult for their partners during face to face communication with them. To elucidate the neural substrates of live inter-subject interaction of ASD patients and normal subjects, we conducted hyper-scanning functional MRI with 21 subjects with autistic spectrum disorder (ASD) paired with typically-developed (normal) subjects, and with 19 pairs of normal subjects as a control. Baseline EC was maintained while subjects performed real-time joint-attention task. The task-related effects were modeled out, and inter-individual correlation analysis was performed on the residual time-course data. ASD–Normal pairs were less accurate at detecting gaze direction than Normal–Normal pairs. Performance was impaired both in ASD subjects and in their normal partners. The left occipital pole (OP) activation by gaze processing was reduced in ASD subjects, suggesting that deterioration of eye-cue detection in ASD is related to impairment of early visual processing of gaze. On the other hand, their normal partners showed greater activity in the bilateral occipital cortex and the right prefrontal area, indicating a compensatory workload. Inter-brain coherence in the right IFG that was observed in the Normal-Normal pairs (Saito et al., 2010) during EC diminished in ASD–Normal pairs. Intra-brain functional connectivity between the right IFG and right superior temporal sulcus (STS) in normal subjects paired with ASD subjects was reduced compared with in Normal–Normal pairs. This functional connectivity was positively correlated with performance of the normal partners on the eye-cue detection. Considering the integrative role of the right STS in gaze processing, inter-subject synchronization during EC may be a prerequisite for eye cue detection by the normal partner. PMID:23060772

  19. Batch Statistical Process Monitoring Approach to a Cocrystallization Process.

    PubMed

    Sarraguça, Mafalda C; Ribeiro, Paulo R S; Dos Santos, Adenilson O; Lopes, João A

    2015-12-01

    Cocrystals are defined as crystalline structures composed of two or more compounds that are solid at room temperature and held together by noncovalent bonds. Their main advantages are increased solubility, bioavailability, permeability, and stability, while retaining the bioactivity of the active pharmaceutical ingredient. The cocrystallization of furosemide and nicotinamide by solvent evaporation was monitored on-line using near-infrared spectroscopy (NIRS) as a process analytical technology tool. The near-infrared spectra were analyzed using principal component analysis. Batch statistical process monitoring was used to create control charts to follow the process trajectory and define control limits. Normal and non-normal operating condition batches were performed and monitored with NIRS. The use of NIRS associated with batch statistical process models allowed the detection of abnormal variations in critical process parameters, such as the amount of solvent or the amount of initial components present in the cocrystallization. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.

  20. Dimensional modeling: beyond data processing constraints.

    PubMed

    Bunardzic, A

    1995-01-01

    The focus of information processing requirements is shifting from the on-line transaction processing (OLTP) issues to the on-line analytical processing (OLAP) issues. While the former serves to ensure the feasibility of the real-time on-line transaction processing (which has already exceeded a level of up to 1,000 transactions per second under normal conditions), the latter aims at enabling more sophisticated analytical manipulation of data. The OLTP requirements, or how to efficiently get data into the system, have been solved by applying the Relational theory in the form of Entity-Relation model. There is presently no theory related to OLAP that would resolve the analytical processing requirements as efficiently as Relational theory provided for the transaction processing. The "relational dogma" also provides the mathematical foundation for the Centralized Data Processing paradigm in which mission-critical information is incorporated as 'one and only one instance' of data, thus ensuring data integrity. In such surroundings, the information that supports business analysis and decision support activities is obtained by running predefined reports and queries that are provided by the IS department. In today's intensified competitive climate, businesses are finding that this traditional approach is not good enough. The only way to stay on top of things, and to survive and prosper, is to decentralize the IS services. The newly emerging Distributed Data Processing, with its increased emphasis on empowering the end user, does not seem to find enough merit in the relational database model to justify relying upon it. Relational theory proved too rigid and complex to accommodate the analytical processing needs. In order to satisfy the OLAP requirements, or how to efficiently get the data out of the system, different models, metaphors, and theories have been devised. All of them are pointing to the need for simplifying the highly non-intuitive mathematical constraints found in the relational databases normalized to their 3rd normal form. Object-oriented approach insists on the importance of the common sense component of the data processing activities. But, particularly interesting, is the approach that advocates the necessity of 'flattening' the structure of the business models as we know them today. This discipline is called Dimensional Modeling and it enables users to form multidimensional views of the relevant facts which are stored in a 'flat' (non-structured), easy-to-comprehend and easy-to-access database. When using dimensional modeling, we relax many of the axioms inherent in a relational model. We focus on the knowledge of the relevant facts which are reflecting the business operations and are the real basis for the decision support and business analysis. At the core of the dimensional modeling are fact tables that contain the non-discrete, additive data. To determine the level of aggregation of these facts, we use granularity tables that specify the resolution, or the level/detail, that the user is allowed to entertain. The third component is dimension tables that embody the knowledge of the constraints to be used to form the views.
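
    A small, hypothetical star schema in pandas illustrates the idea: additive facts live in a flat fact table, descriptive attributes live in dimension tables, and a multidimensional view is obtained by joining and grouping.

      import pandas as pd

      sales_fact = pd.DataFrame({                 # flat fact table: additive, non-discrete facts
          "date_key": [1, 1, 2, 2],
          "product_key": [10, 11, 10, 11],
          "units": [3, 5, 2, 7],
          "revenue": [30.0, 75.0, 20.0, 105.0],
      })
      date_dim = pd.DataFrame({"date_key": [1, 2], "month": ["Jan", "Feb"]})
      product_dim = pd.DataFrame({"product_key": [10, 11], "category": ["A", "B"]})

      view = (sales_fact
              .merge(date_dim, on="date_key")
              .merge(product_dim, on="product_key")
              .groupby(["month", "category"], as_index=False)[["units", "revenue"]]
              .sum())
      print(view)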

  1. Normalization and standardization of electronic health records for high-throughput phenotyping: the SHARPn consortium

    PubMed Central

    Pathak, Jyotishman; Bailey, Kent R; Beebe, Calvin E; Bethard, Steven; Carrell, David S; Chen, Pei J; Dligach, Dmitriy; Endle, Cory M; Hart, Lacey A; Haug, Peter J; Huff, Stanley M; Kaggal, Vinod C; Li, Dingcheng; Liu, Hongfang; Marchant, Kyle; Masanz, James; Miller, Timothy; Oniki, Thomas A; Palmer, Martha; Peterson, Kevin J; Rea, Susan; Savova, Guergana K; Stancl, Craig R; Sohn, Sunghwan; Solbrig, Harold R; Suesse, Dale B; Tao, Cui; Taylor, David P; Westberg, Les; Wu, Stephen; Zhuo, Ning; Chute, Christopher G

    2013-01-01

    Research objective To develop scalable informatics infrastructure for normalization of both structured and unstructured electronic health record (EHR) data into a unified, concept-based model for high-throughput phenotype extraction. Materials and methods Software tools and applications were developed to extract information from EHRs. Representative and convenience samples of both structured and unstructured data from two EHR systems—Mayo Clinic and Intermountain Healthcare—were used for development and validation. Extracted information was standardized and normalized to meaningful use (MU) conformant terminology and value set standards using Clinical Element Models (CEMs). These resources were used to demonstrate semi-automatic execution of MU clinical-quality measures modeled using the Quality Data Model (QDM) and an open-source rules engine. Results Using CEMs and open-source natural language processing and terminology services engines—namely, Apache clinical Text Analysis and Knowledge Extraction System (cTAKES) and Common Terminology Services (CTS2)—we developed a data-normalization platform that ensures data security, end-to-end connectivity, and reliable data flow within and across institutions. We demonstrated the applicability of this platform by executing a QDM-based MU quality measure that determines the percentage of patients between 18 and 75 years with diabetes whose most recent low-density lipoprotein cholesterol test result during the measurement year was <100 mg/dL on a randomly selected cohort of 273 Mayo Clinic patients. The platform identified 21 and 18 patients for the denominator and numerator of the quality measure, respectively. Validation results indicate that all identified patients meet the QDM-based criteria. Conclusions End-to-end automated systems for extracting clinical information from diverse EHR systems require extensive use of standardized vocabularies and terminologies, as well as robust information models for storing, discovering, and processing that information. This study demonstrates the application of modular and open-source resources for enabling secondary use of EHR data through normalization into standards-based, comparable, and consistent format for high-throughput phenotyping to identify patient cohorts. PMID:24190931

  2. Estimating the concentration of urea and creatinine in the human serum of normal and dialysis patients through Raman spectroscopy.

    PubMed

    de Almeida, Maurício Liberal; Saatkamp, Cassiano Junior; Fernandes, Adriana Barrinha; Pinheiro, Antonio Luiz Barbosa; Silveira, Landulfo

    2016-09-01

    Urea and creatinine are commonly used as biomarkers of renal function. Abnormal concentrations of these biomarkers are indicative of pathological processes such as renal failure. This study aimed to develop a model based on Raman spectroscopy to estimate the concentration values of urea and creatinine in human serum. Blood sera from 55 clinically normal subjects and 47 patients with chronic kidney disease undergoing dialysis were collected, and concentrations of urea and creatinine were determined by spectrophotometric methods. A Raman spectrum was obtained with a high-resolution dispersive Raman spectrometer (830 nm). A spectral model was developed based on partial least squares (PLS), where the concentrations of urea and creatinine were correlated with the Raman features. Principal components analysis (PCA) was used to discriminate dialysis patients from normal subjects. The PLS model showed r = 0.97 and r = 0.93 for urea and creatinine, respectively. The root mean square errors of cross-validation (RMSECV) for the model were 17.6 and 1.94 mg/dL, respectively. PCA showed high discrimination between dialysis and normality (95 % accuracy). The Raman technique was able to determine the concentrations with low error and to discriminate dialysis from normal subjects, consistent with a rapid and low-cost test.
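
    A minimal sketch of the kind of chemometric pipeline described above, assuming the spectra are rows of a matrix X and the reference concentrations are a vector y (the data here are synthetic, and scikit-learn's PLSRegression and PCA stand in for the authors' actual implementation):

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.decomposition import PCA
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(0)
        X = rng.normal(size=(102, 600))                          # 102 spectra x 600 Raman shifts (synthetic)
        y = X[:, 100] * 3.0 + rng.normal(scale=0.1, size=102)    # surrogate "urea concentration"

        # PLS calibration with cross-validated prediction error (an RMSECV analogue).
        pls = PLSRegression(n_components=5)
        y_cv = cross_val_predict(pls, X, y, cv=10).ravel()
        rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
        r = np.corrcoef(y, y_cv)[0, 1]
        print(f"r = {r:.3f}, RMSECV = {rmsecv:.3f}")

        # PCA scores of the spectra, which could then be used to discriminate two groups.
        scores = PCA(n_components=2).fit_transform(X)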

  3. Advanced Reactors-Intermediate Heat Exchanger (IHX) Coupling: Theoretical Modeling and Experimental Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Utgikar, Vivek; Sun, Xiaodong; Christensen, Richard

    2016-12-29

    The overall goal of the research project was to model the behavior of the advanced reactor-intermediate heat exchange system and to develop advanced control techniques for off-normal conditions. The specific objectives defined for the project were: 1. To develop the steady-state thermal hydraulic design of the intermediate heat exchanger (IHX); 2. To develop mathematical models to describe the advanced nuclear reactor-IHX-chemical process/power generation coupling during normal and off-normal operations, and to simulate models using multiphysics software; 3. To develop control strategies using genetic algorithm or neural network techniques and couple these techniques with the multiphysics software; 4. To validate the models experimentally. The project objectives were accomplished by defining and executing four different tasks corresponding to these specific objectives. The first task involved selection of IHX candidates and developing steady state designs for those. The second task involved modeling of the transient and off-normal operation of the reactor-IHX system. The subsequent task dealt with the development of control strategies and involved algorithm development and simulation. The last task involved experimental validation of the thermal hydraulic performances of the two prototype heat exchangers designed and fabricated for the project at steady state and transient conditions to simulate the coupling of the reactor-IHX-process plant system. The experimental work utilized the two test facilities at The Ohio State University (OSU) including one existing High-Temperature Helium Test Facility (HTHF) and the newly developed high-temperature molten salt facility.

  4. A mathematical model of breast cancer development, local treatment and recurrence.

    PubMed

    Enderling, Heiko; Chaplain, Mark A J; Anderson, Alexander R A; Vaidya, Jayant S

    2007-05-21

    Cancer development is a stepwise process through which normal somatic cells acquire mutations which enable them to escape their normal function in the tissue and become self-sufficient in survival. The number of mutations depends on the patient's age, genetic susceptibility and on the exposure of the patient to carcinogens throughout their life. It is believed that in every malignancy 4-6 crucial similar mutations have to occur on cancer-related genes. These genes are classified as oncogenes and tumour suppressor genes (TSGs) which gain or lose their function respectively, after they have received one mutative hit or both of their alleles have been knocked out. With the acquisition of each of the necessary mutations the transformed cell gains a selective advantage over normal cells, and the mutation will spread throughout the tissue via clonal expansion. We present a simplified model of this mutation and expansion process, in which we assume that the loss of two TSGs is sufficient to give rise to a cancer. Our mathematical model of the stepwise development of breast cancer verifies the idea that the normal mutation rate in genes is only sufficient to give rise to a tumour within a clinically observable time if a high number of breast stem cells and TSGs exist or genetic instability is involved as a driving force of the mutation pathway. Furthermore, our model shows that if a mutation occurred in stem cells pre-puberty, and formed a field of cells with this mutation through clonal formation of the breast, it is most likely that a tumour will arise from within this area. We then apply different treatment strategies, namely surgery and adjuvant external beam radiotherapy and targeted intraoperative radiotherapy (TARGIT) and use the model to identify different sources of local recurrence and analyse their prevention.

  5. Reading Orthographically Strange Nonwords: Modelling Backup Strategies in Reading

    ERIC Educational Resources Information Center

    Perry, Conrad

    2018-01-01

    The latest version of the connectionist dual process model of reading (CDP++.parser) was tested on a set of nonwords, many of which were orthographically strange (e.g., PSIZ). A grapheme-by-grapheme read-out strategy was used because the normal strategy produced many poor responses. The new strategy allowed the model to produce results similar to…

  6. Chaos Theory as a Model for Life Transitions Counseling: Nonlinear Dynamics and Life's Changes

    ERIC Educational Resources Information Center

    Bussolari, Cori J.; Goodell, Judith A.

    2009-01-01

    Chaos theory is presented for counselors working with clients experiencing life transitions. It is proposed as a model that considers disorder, unpredictability, and lack of control as normal parts of transition processes. Nonlinear constructs from physics are adapted for use in counseling. The model provides a method clients can use to…

  7. Is Word Shape Still in Poor Shape for the Race to the Lexicon?

    ERIC Educational Resources Information Center

    Hill, Jessica C.

    2010-01-01

    Current models of normal reading behavior emphasize not only the recognition and processing of the word being fixated (n) but also processing of the upcoming parafoveal word (n + 1). Gaze contingent displays employing the boundary paradigm often mask words in order to understand how much and what type of processing is completed on the parafoveal…

  8. Effects of Pump-turbine S-shaped Characteristics on Transient Behaviours: Model Setup

    NASA Astrophysics Data System (ADS)

    Zeng, Wei; Yang, Jiandong; Hu, Jinhong

    2017-04-01

    Pumped storage stations undergo numerous transition processes, which make the pump turbines go through the unstable S-shaped region. The hydraulic transient in S-shaped region has normally been investigated through numerical simulations, while field experiments generally involve high risks and are difficult to perform. In this research, a pumped storage model composed of a piping system, two model units, two electrical control systems, a measurement system and a collection system was set up to study the transition processes. The model platform can be applied to simulate almost any hydraulic transition process that occurs in real power stations, such as load rejection, startup, frequency control and grid connection.

  9. Modeling of First-Passage Processes in Financial Markets

    NASA Astrophysics Data System (ADS)

    Inoue, Jun-Ichi; Hino, Hikaru; Sazuka, Naoya; Scalas, Enrico

    2010-03-01

    In this talk, we attempt a microscopic modeling of the first-passage process (or the first-exit process) of the BUND future by a minority game with market history. We find that the first-passage process of the minority game with appropriate history length generates the same properties as the BTP future (the middle and long term Italian Government bonds with fixed interest rates); namely, both first-passage time distributions have a crossover at some specific time scale, as is the case for the Mittag-Leffler function. We also provide a macroscopic (or phenomenological) modeling of the first-passage process of the BTP future and show analytically that the first-passage time distribution of the simplest mixture of normal compound Poisson processes does not have such a crossover.

  10. Amplitude and Phase Characteristics of Signals at the Output of Spatially Separated Antennas for Paths with Scattering

    NASA Astrophysics Data System (ADS)

    Anikin, A. S.

    2018-06-01

    Conditional statistical characteristics of the phase difference are considered depending on the ratio of instantaneous output signal amplitudes of spatially separated weakly directional antennas for the normal field model for paths with radio-wave scattering. The dependences obtained are related to the physical processes on the radio-wave propagation path. The normal model parameters are established at which the statistical characteristics of the phase difference depend on the ratio of the instantaneous amplitudes and hence can be used to measure the phase difference. Using Shannon's formula, the amount of information on the phase difference of signals contained in the ratio of their amplitudes is calculated depending on the parameters of the normal field model. Approaches are suggested to reduce the shift of phase difference measured for paths with radio-wave scattering. A comparison with results of computer simulation by the Monte Carlo method is performed.

  11. The Processing and Interpretation of Verb Phrase Ellipsis Constructions by Children at Normal and Slowed Speech Rates

    PubMed Central

    Callahan, Sarah M.; Walenski, Matthew; Love, Tracy

    2013-01-01

    Purpose: To examine children’s comprehension of verb phrase (VP) ellipsis constructions in light of their automatic, online structural processing abilities and conscious, metalinguistic reflective skill. Method: Forty-two children ages 5 through 12 years listened to VP ellipsis constructions involving the strict/sloppy ambiguity (e.g., “The janitor untangled himself from the rope and the fireman in the elementary school did too after the accident.”) in which the ellipsis phrase (“did too”) had 2 interpretations: (a) strict (“untangled the janitor”) and (b) sloppy (“untangled the fireman”). We examined these sentences at a normal speech rate with an online cross-modal picture priming task (n = 14) and an offline sentence–picture matching task (n = 11). Both tasks were also given with slowed speech input (n = 17). Results: Children showed priming for both the strict and sloppy interpretations at a normal speech rate but only for the strict interpretation with slowed input. Offline, children displayed an adultlike preference for the sloppy interpretation with normal-rate input but a divergent pattern with slowed speech. Conclusions: Our results suggest that children and adults rely on a hybrid syntax-discourse model for the online comprehension and offline interpretation of VP ellipsis constructions. This model incorporates a temporally sensitive syntactic process of VP reconstruction (disrupted with slow input) and a temporally protracted discourse effect attributed to parallelism (preserved with slow input). PMID:22223886

  12. Online Deviation Detection for Medical Processes

    PubMed Central

    Christov, Stefan C.; Avrunin, George S.; Clarke, Lori A.

    2014-01-01

    Human errors are a major concern in many medical processes. To help address this problem, we are investigating an approach for automatically detecting when performers of a medical process deviate from the acceptable ways of performing that process as specified by a detailed process model. Such deviations could represent errors and, thus, detecting and reporting deviations as they occur could help catch errors before harm is done. In this paper, we identify important issues related to the feasibility of the proposed approach and empirically evaluate the approach for two medical procedures, chemotherapy and blood transfusion. For the evaluation, we use the process models to generate sample process executions that we then seed with synthetic errors. The process models describe the coordination of activities of different process performers in normal, as well as in exceptional situations. The evaluation results suggest that the proposed approach could be applied in clinical settings to help catch errors before harm is done. PMID:25954343

  13. Confounding environmental colour and distribution shape leads to underestimation of population extinction risk.

    PubMed

    Fowler, Mike S; Ruokolainen, Lasse

    2013-01-01

    The colour of environmental variability influences the size of population fluctuations when filtered through density dependent dynamics, driving extinction risk through dynamical resonance. Slow fluctuations (low frequencies) dominate in red environments, rapid fluctuations (high frequencies) in blue environments and white environments are purely random (no frequencies dominate). Two methods are commonly employed to generate the coloured spatial and/or temporal stochastic (environmental) series used in combination with population (dynamical feedback) models: autoregressive [AR(1)] and sinusoidal (1/f) models. We show that changing environmental colour from white to red with 1/f models, and from white to red or blue with AR(1) models, generates coloured environmental series that are not normally distributed at finite time-scales, potentially confounding comparison with normally distributed white noise models. Increasing variability of sample Skewness and Kurtosis and decreasing mean Kurtosis of these series alter the frequency distribution shape of the realised values of the coloured stochastic processes. These changes in distribution shape alter patterns in the probability of single and series of extreme conditions. We show that the reduced extinction risk for undercompensating (slow growing) populations in red environments previously predicted with traditional 1/f methods is an artefact of changes in the distribution shapes of the environmental series. This is demonstrated by comparison with coloured series controlled to be normally distributed using spectral mimicry. Changes in the distribution shape that arise using traditional methods lead to underestimation of extinction risk in normally distributed, red 1/f environments. AR(1) methods also underestimate extinction risks in traditionally generated red environments. This work synthesises previous results and provides further insight into the processes driving extinction risk in model populations. We must let the characteristics of known natural environmental covariates (e.g., colour and distribution shape) guide us in our choice of how to best model the impact of coloured environmental variation on population dynamics.
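
    A small sketch of the issue raised above, assuming an AR(1) generator for the environmental series with Gaussian innovations: across replicate finite-length series, the variability of sample skewness and the mean sample kurtosis can be inspected as the series is reddened (all parameter values below are illustrative).

        import numpy as np
        from scipy import stats

        def ar1_series(kappa, n, rng):
            """AR(1) noise x[t] = kappa*x[t-1] + sqrt(1-kappa^2)*eps[t]."""
            eps = rng.normal(size=n)
            x = np.empty(n)
            x[0] = eps[0]
            for t in range(1, n):
                x[t] = kappa * x[t - 1] + np.sqrt(1.0 - kappa ** 2) * eps[t]
            return x

        rng = np.random.default_rng(1)
        for kappa in (0.0, 0.5, 0.9):          # white -> increasingly red
            reps = [ar1_series(kappa, 100, rng) for _ in range(500)]
            skews = [stats.skew(r) for r in reps]
            kurts = [stats.kurtosis(r) for r in reps]
            print(kappa, np.std(skews), np.mean(kurts))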

  14. MULTI: a shared memory approach to cooperative molecular modeling.

    PubMed

    Darden, T; Johnson, P; Smith, H

    1991-03-01

    A general purpose molecular modeling system, MULTI, based on the UNIX shared memory and semaphore facilities for interprocess communication is described. In addition to the normal querying or monitoring of geometric data, MULTI also provides processes for manipulating conformations, and for displaying peptide or nucleic acid ribbons, Connolly surfaces, close nonbonded contacts, crystal-symmetry related images, least-squares superpositions, and so forth. This paper outlines the basic techniques used in MULTI to ensure cooperation among these specialized processes, and then describes how they can work together to provide a flexible modeling environment.

  15. Plant calendar pattern based on rainfall forecast and the probability of its success in Deli Serdang regency of Indonesia

    NASA Astrophysics Data System (ADS)

    Darnius, O.; Sitorus, S.

    2018-03-01

    The objective of this study was to determine the plant calendar pattern of three types of crops, namely palawija, rice, and banana, based on rainfall in Deli Serdang Regency. In the first stage, we forecasted rainfall using time series analysis and obtained an appropriate ARIMA(1,0,0)(1,1,1)12 model. Based on the forecast results, we designed a plant calendar pattern for the three types of plant. Furthermore, the probability of success of the plant types following the plant calendar pattern was calculated using a Markov process, discretizing the continuous rainfall data into three categories, namely Below Normal (BN), Normal (N), and Above Normal (AN), to form the probability transition matrix. Finally, the combination of the rainfall forecasting model and the Markov process was used to determine the pattern of cropping calendars and the probability of success of the three crops. This research used rainfall data for Deli Serdang Regency taken from the office of BMKG (Meteorology, Climatology and Geophysics Agency), Sampali Medan, Indonesia.
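
    A minimal sketch of the discretisation and transition-matrix step described above; the monthly rainfall values and the tercile-based category thresholds are hypothetical, not the study's data.

        import numpy as np

        rain = np.array([120, 80, 210, 150, 60, 300, 90, 170, 220, 110, 95, 260], dtype=float)

        # Discretise into Below Normal (0), Normal (1), Above Normal (2) using terciles.
        low, high = np.quantile(rain, [1 / 3, 2 / 3])
        states = np.digitize(rain, [low, high])          # 0, 1 or 2 per month

        # Count one-step transitions and normalise each row to get the transition matrix.
        P = np.zeros((3, 3))
        for a, b in zip(states[:-1], states[1:]):
            P[a, b] += 1
        P = P / P.sum(axis=1, keepdims=True)
        print(P)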

  16. Stability of Retained Austenite in High-Al, Low-Si TRIP-Assisted Steels Processed via Continuous Galvanizing Heat Treatments

    NASA Astrophysics Data System (ADS)

    McDermid, J. R.; Zurob, H. S.; Bian, Y.

    2011-12-01

    Two galvanizable high-Al, low-Si transformation-induced plasticity (TRIP)-assisted steels were subjected to isothermal bainitic transformation (IBT) temperatures compatible with the continuous galvanizing (CGL) process, and the kinetics of the retained austenite (RA) to martensite transformation during room-temperature deformation were studied as a function of heat treatment parameters. It was determined that there was a direct relationship between the rate of strain-induced transformation and optimal mechanical properties, with more gradual transformation rates being favored. The RA to martensite transformation kinetics were successfully modeled using two methodologies: (1) the strain-based model of Olsen and Cohen and (2) a simple relationship with the normalized flow stress, (σ_flow − σ_YS)/σ_YS. For the strain-based model, it was determined that the model parameters were a strong function of strain and alloy thermal processing history and a weak function of alloy chemistry. It was verified that the strain-based model in the present work agrees well with those derived by previous workers using TRIP-assisted steels of similar composition. It was further determined that the RA to martensite transformation kinetics for all alloys and heat treatments could be described using a simple model vs the normalized flow stress, indicating that the RA to martensite transformation is stress-induced rather than strain-induced for temperatures above M_s^σ.

  17. Spatiotemporal variability of snow depletion curves derived from SNODAS for the conterminous United States, 2004-2013

    USGS Publications Warehouse

    Driscoll, Jessica; Hay, Lauren E.; Bock, Andrew R.

    2017-01-01

    Assessment of water resources at a national scale is critical for understanding their vulnerability to future change in policy and climate. Representation of the spatiotemporal variability in snowmelt processes in continental-scale hydrologic models is critical for assessment of water resource response to continued climate change. Continental-extent hydrologic models such as the U.S. Geological Survey National Hydrologic Model (NHM) represent snowmelt processes through the application of snow depletion curves (SDCs). SDCs relate normalized snow water equivalent (SWE) to normalized snow covered area (SCA) over a snowmelt season for a given modeling unit. SDCs were derived using output from the operational Snow Data Assimilation System (SNODAS) snow model as daily 1-km gridded SWE over the conterminous United States. Daily SNODAS output were aggregated to a predefined watershed-scale geospatial fabric and used to also calculate SCA from October 1, 2004 to September 30, 2013. The spatiotemporal variability in SNODAS output at the watershed scale was evaluated through the spatial distribution of the median and standard deviation for the time period. Representative SDCs for each watershed-scale modeling unit over the conterminous United States (n = 54,104) were selected using a consistent methodology and used to create categories of snowmelt based on SDC shape. The relation of SDC categories to the topographic and climatic variables allow for national-scale categorization of snowmelt processes.
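
    A minimal sketch of how an SDC can be formed for one modeling unit from a daily gridded SWE series, assuming SCA is taken as the fraction of grid cells with nonzero SWE; the data below are synthetic and the SNODAS-specific handling is omitted.

        import numpy as np

        rng = np.random.default_rng(2)
        # Daily SWE (mm) for 50 1-km cells in one watershed-scale unit over a melt season (synthetic).
        days, cells = 120, 50
        peak = rng.uniform(100, 400, size=cells)
        melt = np.linspace(0, 1, days)[:, None]
        swe = np.clip(peak * (1 - melt * rng.uniform(0.8, 1.2, size=cells)), 0, None)

        mean_swe = swe.mean(axis=1)
        sca = (swe > 0).mean(axis=1)          # snow covered area fraction per day

        # Normalise both series over the melt season to obtain the depletion curve (SDC).
        sdc_x = mean_swe / mean_swe.max()
        sdc_y = sca / sca.max()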

  18. Chronic PTSD Treated with Metacognitive Therapy: An Open Trial

    ERIC Educational Resources Information Center

    Wells, Adrian; Welford, Mary; Fraser, Janelle; King, Paul; Mendel, Elizabeth; Wisely, Julie; Knight, Alice; Rees, David

    2008-01-01

    This paper reports on an open trial of metacognitive therapy (MCT) for chronic PTSD. MCT does not require imaginal reliving, prolonged exposure, or challenging of thoughts about trauma. It is based on an information-processing model of factors that impede normal and in-built recovery processes. It is targeted at modifying maladaptive styles of…

  19. A comparative study of the characterization of miR-155 in knockout mice

    PubMed Central

    Zhang, Dong; Cui, Yongchun; Li, Bin; Luo, Xiaokang; Li, Bo; Tang, Yue

    2017-01-01

    miR-155 is one of the most important miRNAs and plays a very important role in numerous biological processes. However, few studies have characterized this miRNA in mice under normal physiological conditions. We aimed to characterize miR-155 in vivo by using a comparative analysis. In our study, we compared miR-155 knockout (KO) mice with C57BL/6 wild type (WT) mice in order to characterize miR-155 in mice under normal physiological conditions using many evaluation methods, including a reproductive performance analysis, growth curve, ultrasonic estimation, haematological examination, and histopathological analysis. These analyses showed no significant differences between groups in the main evaluation indices. The growth and development were nearly normal for all mice and did not differ between the control and model groups. Using a comparative analysis and a summary of related studies published in recent years, we found that miR-155 was not essential for normal physiological processes in 8-week-old mice. miR-155 deficiency did not affect the development and growth of naturally ageing mice during the 42 days after birth. Thus, studying the complex biological functions of miR-155 requires the further use of KO mouse models. PMID:28278287

  20. Predicting durations of online collective actions based on Peaks' heights

    NASA Astrophysics Data System (ADS)

    Lu, Peng; Nie, Shizhao; Wang, Zheng; Jing, Ziwei; Yang, Jianwu; Qi, Zhongxiang; Pujia, Wangmo

    2018-02-01

    Capturing the whole process of collective actions, the peak model contains four stages: Prepare, Outbreak, Peak, and Vanish. Based on the peak model, this paper further investigates one of its key quantities, the ratio between peak height and span (duration). Although durations and peak heights are highly diversified, the ratio between them appears quite stable. If the regularity of this ratio is discovered, we can predict how long a collective action lasts and when it ends based on the peak's height. In this work, we combined mathematical simulations with empirical big data of 148 cases to explore the regularity of the ratio's distribution. Simulation results indicate that the ratio has a regular distribution, but that this distribution is not normal. The big data were collected from 148 online collective actions, and the whole process of participation was recorded for each. The empirical results indicate that the ratio is closer to being log-normally distributed. This rule holds true both for the full set and for subgroups of the 148 online collective actions. A Q-Q plot is applied to check normality of the ratio's logarithm, and the logarithm does follow the normal distribution.
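
    A minimal sketch of the log-normality check described above, assuming the peak/span ratios are collected in an array; the data here are synthetic placeholders, not the 148 observed cases.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        ratio = rng.lognormal(mean=0.5, sigma=0.4, size=148)   # stand-in for peak/duration ratios

        # If the ratio is log-normal, its logarithm should pass a normality test
        # and fall on a straight line in a normal Q-Q plot.
        log_ratio = np.log(ratio)
        print(stats.shapiro(log_ratio))                        # normality test on the logarithm
        (osm, osr), (slope, intercept, r) = stats.probplot(log_ratio, dist="norm")
        print("Q-Q correlation:", r)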

  1. Basic Geometric Support of Systems for Earth Observation from Geostationary and Highly Elliptical Orbits

    NASA Astrophysics Data System (ADS)

    Gektin, Yu. M.; Egoshkin, N. A.; Eremeev, V. V.; Kuznecov, A. E.; Moskatinyev, I. V.; Smelyanskiy, M. B.

    2017-12-01

    A set of standardized models and algorithms for geometric normalization and georeferencing images from geostationary and highly elliptical Earth observation systems is considered. The algorithms can process information from modern scanning multispectral sensors with two-coordinate scanning and represent normalized images in optimal projection. Problems of the high-precision ground calibration of the imaging equipment using reference objects, as well as issues of the flight calibration and refinement of geometric models using the absolute and relative reference points, are considered. Practical testing of the models, algorithms, and technologies is performed in the calibration of sensors for spacecrafts of the Electro-L series and during the simulation of the Arktika prospective system.

  2. Simulating the impact of dust cooling on the statistical properties of the intra-cluster medium

    NASA Astrophysics Data System (ADS)

    Pointecouteau, Etienne; da Silva, Antonio; Catalano, Andrea; Montier, Ludovic; Lanoux, Joseph; Roncarelli, Mauro; Giard, Martin

    2009-08-01

    From the first stages of star and galaxy formation, non-gravitational processes such as ram pressure stripping, supernovae, galactic winds, AGNs, galaxy-galaxy mergers, etc. lead to the enrichment of the IGM in stars, metals and dust, via the ejection of galactic material into the IGM. We now know that these processes shape, side by side with gravitation, the formation and evolution of structures. We present here hydrodynamic simulations of structure formation implementing the effect of cooling by dust on large-scale structure formation. We focus on the scale of galaxy clusters and study the statistical properties of clusters. Here, we present our results on the TX-M and LX-M scaling relations, which exhibit changes in both slope and normalization when cooling by dust is added to the standard radiative cooling model. For example, the normalization of the TX-M relation changes only by a maximum of 2% at M = 10^14 M⊙, whereas the normalization of the LX-TX relation changes by as much as 10% at TX = 1 keV for models that include dust cooling. Our study shows that dust is an added non-gravitational process that contributes to shaping the thermodynamical state of the hot ICM gas.

  3. Inhibitory Control in Mind and Brain: An Interactive Race Model of Countermanding Saccades

    ERIC Educational Resources Information Center

    Boucher, Leanne; Palmeri, Thomas J.; Logan, Gordon D.; Schall, Jeffrey D.

    2007-01-01

    The stop-signal task has been used to study normal cognitive control and clinical dysfunction. Its utility is derived from a race model that accounts for performance and provides an estimate of the time it takes to stop a movement. This model posits a race between go and stop processes with stochastically independent finish times. However,…

  4. Source-independent full waveform inversion of seismic data

    DOEpatents

    Lee, Ki Ha

    2006-02-14

    A set of seismic trace data is collected in an input data set that is first Fourier transformed in its entirety into the frequency domain. A normalized wavefield is obtained for each trace of the input data set in the frequency domain. Normalization is done with respect to the frequency response of a reference trace selected from the set of seismic trace data. The normalized wavefield is source independent, complex, and dimensionless. The normalized wavefield is shown to be uniquely defined as the normalized impulse response, provided that a certain condition is met for the source. This property allows construction of the inversion algorithm disclosed herein, without any source or source coupling information. The algorithm minimizes the error between data normalized wavefield and the model normalized wavefield. The methodology is applicable to any 3-D seismic problem, and damping may be easily included in the process.
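
    A minimal numerical sketch of the normalization step described above, assuming the traces are rows of an array and one trace is chosen as the reference; the data and the choice of reference are illustrative, not the patented algorithm itself.

        import numpy as np

        rng = np.random.default_rng(4)
        n_traces, n_samples = 8, 256
        traces = rng.normal(size=(n_traces, n_samples))        # stand-in seismic traces

        # Fourier transform the whole data set into the frequency domain.
        spectra = np.fft.rfft(traces, axis=1)

        # Normalize every trace by the frequency response of a chosen reference trace.
        ref = spectra[0]
        eps = 1e-12                                            # guard against division by zero
        normalized = spectra / (ref + eps)                     # complex, dimensionless, source independent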

  5. Orientation of chain molecules in ionotropic gels: a Brownian dynamics model

    NASA Astrophysics Data System (ADS)

    Woelki, Stefan; Kohler, Hans-Helmut

    2003-09-01

    As is known from birefringence measurements, polysaccharide molecules of ionotropic gels are preferentially orientated normal to the direction of gel growth. In this paper the orientation effect is investigated by means of an off-lattice Brownian dynamics model simulating the gel formation process. The model describes the integration of a single coarse grained phantom chain into the growing gel. The equations of motion of the chain are derived. The computer simulations show that, during the process of integration, the chain is contracting normal to the direction of gel growth. A scaling relation is obtained for the degree of contraction as a function of the length parameters of the chain, the velocity of the gel formation front and the rate constant of the crosslinking reaction. It is shown that the scaling relation, if applied to the example of ionotropic copper alginate gel, leads to reasonable predictions of the time course of the degree of contraction of the alginate chains.

  6. Modeling target normal sheath acceleration using handoffs between multiple simulations

    NASA Astrophysics Data System (ADS)

    McMahon, Matthew; Willis, Christopher; Mitchell, Robert; King, Frank; Schumacher, Douglass; Akli, Kramer; Freeman, Richard

    2013-10-01

    We present a technique to model the target normal sheath acceleration (TNSA) process using full-scale LSP PIC simulations. The technique allows for a realistic laser, full size target and pre-plasma, and sufficient propagation length for the accelerated ions and electrons. A first simulation using a 2D Cartesian grid models the laser-plasma interaction (LPI) self-consistently and includes field ionization. Electrons accelerated by the laser are imported into a second simulation using a 2D cylindrical grid optimized for the initial TNSA process and incorporating an equation of state. Finally, all of the particles are imported to a third simulation optimized for the propagation of the accelerated ions and utilizing a static field solver for initialization. We also show use of 3D LPI simulations. Simulation results are compared to recent ion acceleration experiments using SCARLET laser at The Ohio State University. This work was performed with support from ASOFR under contract # FA9550-12-1-0341, DARPA, and allocations of computing time from the Ohio Supercomputing Center.

  7. Cell competition with normal epithelial cells promotes apical extrusion of transformed cells through metabolic changes.

    PubMed

    Kon, Shunsuke; Ishibashi, Kojiro; Katoh, Hiroto; Kitamoto, Sho; Shirai, Takanobu; Tanaka, Shinya; Kajita, Mihoko; Ishikawa, Susumu; Yamauchi, Hajime; Yako, Yuta; Kamasaki, Tomoko; Matsumoto, Tomohiro; Watanabe, Hirotaka; Egami, Riku; Sasaki, Ayana; Nishikawa, Atsuko; Kameda, Ikumi; Maruyama, Takeshi; Narumi, Rika; Morita, Tomoko; Sasaki, Yoshiteru; Enoki, Ryosuke; Honma, Sato; Imamura, Hiromi; Oshima, Masanobu; Soga, Tomoyoshi; Miyazaki, Jun-Ichi; Duchen, Michael R; Nam, Jin-Min; Onodera, Yasuhito; Yoshioka, Shingo; Kikuta, Junichi; Ishii, Masaru; Imajo, Masamichi; Nishida, Eisuke; Fujioka, Yoichiro; Ohba, Yusuke; Sato, Toshiro; Fujita, Yasuyuki

    2017-05-01

    Recent studies have revealed that newly emerging transformed cells are often apically extruded from epithelial tissues. During this process, normal epithelial cells can recognize and actively eliminate transformed cells, a process called epithelial defence against cancer (EDAC). Here, we show that mitochondrial membrane potential is diminished in RasV12-transformed cells when they are surrounded by normal cells. In addition, glucose uptake is elevated, leading to higher lactate production. The mitochondrial dysfunction is driven by upregulation of pyruvate dehydrogenase kinase 4 (PDK4), which positively regulates elimination of RasV12-transformed cells. Furthermore, EDAC from the surrounding normal cells, involving filamin, drives the Warburg-effect-like metabolic alteration. Moreover, using a cell-competition mouse model, we demonstrate that PDK-mediated metabolic changes promote the elimination of RasV12-transformed cells from intestinal epithelia. These data indicate that non-cell-autonomous metabolic modulation is a crucial regulator for cell competition, shedding light on the unexplored events at the initial stage of carcinogenesis.

  8. Parameters or Cues?

    ERIC Educational Resources Information Center

    MacWhinney, Brian

    2004-01-01

    Truscott and Sharwood Smith (henceforth T&SS) attempt to show how second language acquisition can occur without any learning. In their APT model, change depends only on the tuning of innate principles through the normal course of processing of L2. There are some features of their model that I find attractive. Specifically, their acceptance of the…

  9. Advances in sepsis research derived from animal models.

    PubMed

    Männel, Daniela N

    2007-09-01

    Inflammation is the basic process by which tissues of the body respond to infection. Activation of the immune system normally leads to removal of microbial pathogens, and after resolution of the inflammation immune homeostasis is restored. This controlled process, however, can be disturbed resulting in disease. Therefore, many studies using infection models have investigated the participating immune mechanisms aiming at possible therapeutic interventions. Defined model substances such as bacterial lipopolysaccharide (endotoxin) have been used to mimic bacterial infections and analyze their immune stimulating functions. A complex network of molecular mechanisms involved in the recognition and activation processes of bacterial infections and their regulation has developed from these studies. More complex infection models will now help to interpret earlier observations leading to the design of relevant new infection models.

  10. Normalization regulates competition for visual awareness

    PubMed Central

    Ling, Sam; Blake, Randolph

    2012-01-01

    Summary: Signals in our brain are in a constant state of competition, including those that vie for motor control, sensory dominance and awareness. To shed light on the mechanisms underlying neural competition, we exploit binocular rivalry, a phenomenon that allows us to probe the competitive process that ordinarily transpires outside of our awareness. By measuring psychometric functions under different states of rivalry, we discovered a pattern of gain changes that are consistent with a model of competition in which attention interacts with normalization processes, thereby driving the ebb and flow between states of awareness. Moreover, we reveal that attention plays a crucial role in modulating competition; without attention, rivalry suppression for high-contrast stimuli is negligible. We propose a framework whereby our visual awareness of competing sensory representations is governed by a common neural computation: normalization. PMID:22884335
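
    A minimal sketch of a divisive normalization computation of the kind invoked above, using a simplified contrast-response form in which an attentional gain scales the excitatory drive; the equation and parameter values are illustrative assumptions, not the authors' model.

        import numpy as np

        def normalized_response(contrast, attn_gain=1.0, n=2.0, sigma=0.1):
            """Divisive normalization: attended drive divided by a pooled suppressive drive."""
            drive = attn_gain * contrast ** n
            return drive / (contrast ** n + sigma ** n)

        contrasts = np.logspace(-2, 0, 5)
        print(normalized_response(contrasts, attn_gain=1.0))   # unattended
        print(normalized_response(contrasts, attn_gain=2.0))   # attention boosts effective gain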

  11. Numerical Simulation of Dry Granular Flow Impacting a Rigid Wall Using the Discrete Element Method

    PubMed Central

    Wu, Fengyuan; Fan, Yunyun; Liang, Li; Wang, Chao

    2016-01-01

    This paper presents a clump model based on the Discrete Element Method. The clump model is closer to a real particle than a spherical particle. Numerical simulations of several tests of dry granular flow in an inclined chute impacting a rigid wall have been performed. Five clump models with different sphericity were used in the simulations. By comparing the simulated normal force on the rigid wall with the experimental results, a clump model with better sphericity was selected for the subsequent numerical analysis and discussion. The calculated normal force showed good agreement with the experimental results, which verifies the effectiveness of the clump model. The total normal force and bending moment on the rigid wall and the motion of the granular flow were then further analyzed. Finally, the numerical simulations using clump models with different grain compositions were compared. By observing the normal force on the rigid wall and the particle-size distribution at the front of the rigid wall in the final state, the effect of grain composition on the force on the rigid wall was revealed. It mainly showed that, as particle size increases, the peak force at the retaining wall also increases. The results can provide a basis for research on related disasters and the design of protective structures. PMID:27513661

  12. Regulation of DMT1 on Bone Microstructure in Type 2 Diabetes

    PubMed Central

    Zhang, Wei-Lin; Meng, Hong-Zheng; Yang, Mao-Wei

    2015-01-01

    Diabetic osteoporosis has gradually attracted attention. However, the process of bone microstructure change in diabetic patients and the exact mechanism of osteoblast iron overload remain unclear. Therefore, the present study aimed to explore the function of DMT1 in the pathological process of diabetic osteoporosis. We built type 2 diabetes osteoporosis models with SD rats and Belgrade rats, respectively. Differential expression of DMT1 was detected using immunohistochemistry and western blotting. Bone microstructure, biomechanics, and iron content were measured for each group of samples. We found that DMT1 expression in type 2 diabetic rats was higher than that in normal rats. The bone biomechanical indices and bone microstructure in the rat model deficient in DMT1 were significantly better than those in the normal diabetic model. The loss of DMT1 can reduce the iron content in bone. These findings indicate that DMT1 expression is enhanced in the bone tissue of type 2 diabetic rats and plays an important role in the pathological process of diabetic osteoporosis. Moreover, DMT1 may be a potential therapeutic target for diabetic osteoporosis. PMID:26078704

  13. Neural coordination can be enhanced by occasional interruption of normal firing patterns: a self-optimizing spiking neural network model.

    PubMed

    Woodward, Alexander; Froese, Tom; Ikegami, Takashi

    2015-02-01

    The state space of a conventional Hopfield network typically exhibits many different attractors of which only a small subset satisfies constraints between neurons in a globally optimal fashion. It has recently been demonstrated that combining Hebbian learning with occasional alterations of normal neural states avoids this problem by means of self-organized enlargement of the best basins of attraction. However, so far it is not clear to what extent this process of self-optimization is also operative in real brains. Here we demonstrate that it can be transferred to more biologically plausible neural networks by implementing a self-optimizing spiking neural network model. In addition, by using this spiking neural network to emulate a Hopfield network with Hebbian learning, we attempt to make a connection between rate-based and temporal coding based neural systems. Although further work is required to make this model more realistic, it already suggests that the efficacy of the self-optimizing process is independent from the simplifying assumptions of a conventional Hopfield network. We also discuss natural and cultural processes that could be responsible for occasional alteration of neural firing patterns in actual brains. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. [Monitoring method of extraction process for Schisandrae Chinensis Fructus based on near infrared spectroscopy and multivariate statistical process control].

    PubMed

    Xu, Min; Zhang, Lei; Yue, Hong-Shui; Pang, Hong-Wei; Ye, Zheng-Liang; Ding, Li

    2017-10-01

    The aim was to establish an on-line monitoring method for the extraction process of Schisandrae Chinensis Fructus, a formula medicinal material of Yiqi Fumai lyophilized injection, by combining near infrared spectroscopy with multivariate data analysis. A multivariate statistical process control (MSPC) model was established based on 5 normal production batches, and 2 test batches were monitored by PC scores, DModX and Hotelling T2 control charts. The results showed that the MSPC model had good monitoring ability for the extraction process. Applying the MSPC model to the actual production process can effectively achieve on-line monitoring of the extraction process of Schisandrae Chinensis Fructus and reflect changes in material properties during production in real time. This process monitoring method could serve as a reference for the application of process analytical technology in the quality control of traditional Chinese medicine injections. Copyright© by the Chinese Pharmaceutical Association.
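
    A minimal sketch of the MSPC idea described above: fit a PCA model on spectra from normal batches and monitor new data with Hotelling's T2 computed from the PC scores. The spectra below are synthetic, and DModX and formal control limits are omitted for brevity.

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(5)
        normal_batches = rng.normal(size=(50, 200))      # NIR spectra from normal batches (synthetic)
        test_batch = rng.normal(size=(10, 200)) + 0.5    # a batch drifting away from normal operation

        pca = PCA(n_components=3).fit(normal_batches)

        def hotelling_t2(X):
            # T2 = sum over components of (score^2 / component variance)
            scores = pca.transform(X)
            return np.sum(scores ** 2 / pca.explained_variance_, axis=1)

        print(hotelling_t2(normal_batches).mean())
        print(hotelling_t2(test_batch).mean())           # expected to exceed the normal-batch level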

  15. Mathematical model with autoregressive process for electrocardiogram signals

    NASA Astrophysics Data System (ADS)

    Evaristo, Ronaldo M.; Batista, Antonio M.; Viana, Ricardo L.; Iarosz, Kelly C.; Szezech, José D., Jr.; Godoy, Moacir F. de

    2018-04-01

    The cardiovascular system is composed of the heart, blood and blood vessels. Cardiac conditions are assessed through the electrocardiogram, a noninvasive medical procedure. In this work, we introduce an autoregressive process into a mathematical model based on coupled differential equations in order to obtain the tachograms and electrocardiogram signals of young adults with normal heartbeats. Our results are compared with experimental tachograms by means of Poincaré plots and detrended fluctuation analysis. We verify that results from the model with the autoregressive process show good agreement with experimental tachogram measures generated by the electrical activity of the heartbeat. From the tachogram we then build the electrocardiogram by means of coupled differential equations.
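
    A minimal sketch of the autoregressive ingredient described above: an AR(1)-perturbed RR-interval (tachogram) series that could then drive an ECG waveform model. The parameter values are illustrative, not those of the paper.

        import numpy as np

        rng = np.random.default_rng(6)
        n_beats, mean_rr, phi, noise_sd = 300, 0.8, 0.7, 0.02   # seconds, AR coefficient, noise

        rr = np.empty(n_beats)
        rr[0] = mean_rr
        for k in range(1, n_beats):
            # AR(1) fluctuation of the RR interval around its mean value.
            rr[k] = mean_rr + phi * (rr[k - 1] - mean_rr) + rng.normal(scale=noise_sd)

        beat_times = np.cumsum(rr)          # tachogram: beat-to-beat timing of the heart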

  16. Brain extraction from normal and pathological images: A joint PCA/Image-Reconstruction approach.

    PubMed

    Han, Xu; Kwitt, Roland; Aylward, Stephen; Bakas, Spyridon; Menze, Bjoern; Asturias, Alexander; Vespa, Paul; Van Horn, John; Niethammer, Marc

    2018-08-01

    Brain extraction from 3D medical images is a common pre-processing step. A variety of approaches exist, but they are frequently only designed to perform brain extraction from images without strong pathologies. Extracting the brain from images exhibiting strong pathologies, for example, the presence of a brain tumor or of a traumatic brain injury (TBI), is challenging. In such cases, tissue appearance may substantially deviate from normal tissue appearance and hence violates algorithmic assumptions for standard approaches to brain extraction; consequently, the brain may not be correctly extracted. This paper proposes a brain extraction approach which can explicitly account for pathologies by jointly modeling normal tissue appearance and pathologies. Specifically, our model uses a three-part image decomposition: (1) normal tissue appearance is captured by principal component analysis (PCA), (2) pathologies are captured via a total variation term, and (3) the skull and surrounding tissue is captured by a sparsity term. Due to its convexity, the resulting decomposition model allows for efficient optimization. Decomposition and image registration steps are alternated to allow statistical modeling of normal tissue appearance in a fixed atlas coordinate system. As a beneficial side effect, the decomposition model allows for the identification of potentially pathological areas and the reconstruction of a quasi-normal image in atlas space. We demonstrate the effectiveness of our approach on four datasets: the publicly available IBSR and LPBA40 datasets which show normal image appearance, the BRATS dataset containing images with brain tumors, and a dataset containing clinical TBI images. We compare the performance with other popular brain extraction models: ROBEX, BEaST, MASS, BET, BSE and a recently proposed deep learning approach. Our model performs better than these competing approaches on all four datasets. Specifically, our model achieves the best median (97.11) and mean (96.88) Dice scores over all datasets. The two best performing competitors, ROBEX and MASS, achieve scores of 96.23/95.62 and 96.67/94.25 respectively. Hence, our approach is an effective method for high quality brain extraction for a wide variety of images. Copyright © 2018 Elsevier Inc. All rights reserved.

  17. Modeling operators' emergency response time for chemical processing operations.

    PubMed

    Murray, Susan L; Harputlu, Emrah; Mentzer, Ray A; Mannan, M Sam

    2014-01-01

    Operators have a crucial role during emergencies at a variety of facilities such as chemical processing plants. When an abnormality occurs in the production process, the operator often has limited time to either take corrective actions or evacuate before the situation becomes deadly. It is crucial that system designers and safety professionals can estimate the time required for a response before procedures and facilities are designed and operations are initiated. There are existing industrial engineering techniques to establish time standards for tasks performed at a normal working pace. However, it is reasonable to expect that the time required to take action in an emergency will differ from that at a normal production pace. It is possible that in an emergency, operators will act faster than at a normal pace. It would be useful for system designers to be able to establish a time range for operators' response times in emergency situations. This article develops a modeling approach to estimate the time standard range for operators taking corrective actions or following evacuation procedures in emergency situations. This will aid engineers and managers in establishing time requirements for operators in emergency situations. The methodology used for this study combines a well-established industrial engineering technique for determining time requirements (predetermined time standard system) and adjustment coefficients for emergency situations developed by the authors. Numerous videos of workers performing well-established tasks at a maximum pace were studied. As an example, one of the tasks analyzed was pit crew workers changing tires as quickly as they could during a race. The operations in these videos were decomposed into basic, fundamental motions (such as walking, reaching for a tool, and bending over) by studying the videos frame by frame. A comparison analysis was then performed between the emergency pace and the normal working pace operations to determine performance coefficients. These coefficients represent the decrease in time required for various basic motions in emergency situations and were used to model an emergency response. This approach will make hazardous operations requiring operator response, alarm management, and evacuation processes easier to design and predict. An application of this methodology is included in the article. The time required for an emergency response was roughly one-third faster than the normal-pace response time.
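
    A minimal arithmetic sketch of the approach described above: normal-pace element times from a predetermined time standard are scaled by emergency adjustment coefficients and summed. All element names, times and coefficients below are hypothetical, not the authors' values.

        # Normal-pace element times (seconds) for an evacuation sequence, hypothetical values.
        normal_times = {"stand_and_turn": 1.2, "walk_10_m": 8.0, "open_door": 1.5, "descend_stairs": 12.0}

        # Hypothetical emergency coefficients (< 1.0 means the motion is performed faster).
        emergency_coeff = {"stand_and_turn": 0.7, "walk_10_m": 0.6, "open_door": 0.8, "descend_stairs": 0.7}

        normal_total = sum(normal_times.values())
        emergency_total = sum(t * emergency_coeff[k] for k, t in normal_times.items())
        print(normal_total, emergency_total, 1 - emergency_total / normal_total)  # roughly a one-third reduction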

  18. The enhancement mechanism of wine-processed Radix Scutellaria on NTG-induced migraine rats.

    PubMed

    Cui, Cheng-Long; He, Xin; Dong, Cui-Lan; Song, Zi-Jing; Ji, Jun; Wang, Xue; Wang, Ling; Wang, Jiao-Ying; Du, Wen-Juan; Wang, Chong-Zhi; Yuan, Chun-Su; Guo, Chang-Run; Zhang, Chun-Feng

    2017-07-01

    The aim was to elucidate, using fractal theory, the mechanism by which wine processing increases dissolution and enhances the effect of Radix Scutellaria (RS) in nitroglycerin (NTG)-induced migraine rats. We prepared three RS samples processed with 10% (S1), 15% (S2), and 20% (S3) (v/m) rice wine. Mercury intrusion porosimetry and scanning electron microscopy were employed to explore the internal structure of RS, and the dissolution of RS components was analyzed by HPLC. Rats were randomly allocated into the following groups and orally given the corresponding solutions for 10 days: normal group (NOR, normal saline), model group (MOD, normal saline), Tianshu capsule group (TSC, 0.425 mg/kg), ibuprofen group (IBU, 0.0821 mg/kg), crude RS group (CRU, 1.04 mg/kg) and wine-processed RS group (WP, 1.04 mg/kg), followed by a bolus subcutaneous injection of NTG (10 mg/kg) to induce the migraine model in all groups except NOR. Biochemical indexes (nitric oxide, NO; calcitonin-gene-related peptide, CGRP; and endothelin, ET) and c-fos positive cells were measured with commercial kits and an immunohistochemical method, respectively. Total surface area significantly increased in wine-processed RS (p<0.05) while fractal dimension markedly decreased (p<0.05) compared with crude RS. Additionally, S3 showed the highest increase in dissolution, including the percentage increase of total extract, total flavonoids and main compounds (all p<0.05 vs S1 and S2). Pharmacodynamic data showed that c-fos positive cells significantly decreased (p<0.05) in WP compared with MOD, and the levels of NO, CGRP and ET in WP were better than those of CRU. Wine-processed RS could be a promising candidate medicine for migraine treatment due to its increased component dissolution. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  19. Surrogate modelling for the prediction of spatial fields based on simultaneous dimensionality reduction of high-dimensional input/output spaces.

    PubMed

    Crevillén-García, D

    2018-04-01

    Time-consuming numerical simulators for solving groundwater flow and dissolution models of physico-chemical processes in deep aquifers normally require some of the model inputs to be defined in high-dimensional spaces in order to return realistic results. Sometimes, the outputs of interest are spatial fields leading to high-dimensional output spaces. Although Gaussian process emulation has been satisfactorily used for computing faithful and inexpensive approximations of complex simulators, these have been mostly applied to problems defined in low-dimensional input spaces. In this paper, we propose a method for simultaneously reducing the dimensionality of very high-dimensional input and output spaces in Gaussian process emulators for stochastic partial differential equation models while retaining the qualitative features of the original models. This allows us to build a surrogate model for the prediction of spatial fields in such time-consuming simulators. We apply the methodology to a model of convection and dissolution processes occurring during carbon capture and storage.
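
    A minimal sketch of the surrogate-modelling idea described above, assuming PCA is used to reduce both the high-dimensional input field and the output field before fitting one Gaussian process per retained output component; scikit-learn and the synthetic data stand in for the authors' actual implementation.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        rng = np.random.default_rng(7)
        X = rng.normal(size=(80, 1000))                    # 80 simulator runs x 1000-dim input field (synthetic)
        Y = np.tanh(X[:, :5]) @ rng.normal(size=(5, 500))  # 80 runs x 500-dim output field (synthetic)

        pca_in, pca_out = PCA(n_components=5), PCA(n_components=3)
        Z_in, Z_out = pca_in.fit_transform(X), pca_out.fit_transform(Y)

        # One GP per retained output component, trained on the reduced input coordinates.
        gps = [GaussianProcessRegressor(kernel=RBF()).fit(Z_in, Z_out[:, j])
               for j in range(Z_out.shape[1])]

        # Predict a new spatial field: predict reduced outputs, then map back to full dimension.
        z_new = pca_in.transform(rng.normal(size=(1, 1000)))
        y_new = pca_out.inverse_transform(np.column_stack([gp.predict(z_new) for gp in gps]))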

  20. Stereopsis, vertical disparity and relief transformations.

    PubMed

    Gårding, J; Porrill, J; Mayhew, J E; Frisby, J P

    1995-03-01

    The pattern of retinal binocular disparities acquired by a fixating visual system depends on both the depth structure of the scene and the viewing geometry. This paper treats the problem of interpreting the disparity pattern in terms of scene structure without relying on estimates of fixation position from eye movement control and proprioception mechanisms. We propose a sequential decomposition of this interpretation process into disparity correction, which is used to compute three-dimensional structure up to a relief transformation, and disparity normalization, which is used to resolve the relief ambiguity to obtain metric structure. We point out that the disparity normalization stage can often be omitted, since relief transformations preserve important properties such as depth ordering and coplanarity. Based on this framework we analyse three previously proposed computational models of disparity processing; the Mayhew and Longuet-Higgins model, the deformation model and the polar angle disparity model. We show how these models are related, and argue that none of them can account satisfactorily for available psychophysical data. We therefore propose an alternative model, regional disparity correction. Using this model we derive predictions for a number of experiments based on vertical disparity manipulations, and compare them to available experimental data. The paper is concluded with a summary and a discussion of the possible architectures and mechanisms underling stereopsis in the human visual system.

  1. Associations Between Changes in Normal Personality Traits and Borderline Personality Disorder Symptoms over 16 years

    PubMed Central

    Wright, Aidan G.C.; Hopwood, Christopher J.; Zanarini, Mary C.

    2014-01-01

    There has been significant movement toward conceptualizing borderline personality disorder (BPD) with normal personality traits. However one critical assumption underlying this transition, that longitudinal trajectories of BPD symptoms and normal traits track together, has not been tested. We evaluated the prospective longitudinal associations of changes in five-factor model traits and BPD symptoms over the course of 16 years using parallel process latent growth curve models in 362 patients with BPD (N=290) or other PDs (N=72). Moderate to strong cross-sectional and longitudinal associations were observed between BPD symptoms and Neuroticism, Extraversion, Agreeableness, and Conscientiousness. This study is the first to demonstrate a longitudinal link between changes in BPD symptoms and changes in traits over an extended interval in a clinical sample. These findings imply that changes in BPD symptoms occur in concert with changes in normal traits, and support the proposed transition to conceptualizing BPD, at least in part, with trait dimensions. PMID:25364942

  2. Associations between changes in normal personality traits and borderline personality disorder symptoms over 16 years.

    PubMed

    Wright, Aidan G C; Hopwood, Christopher J; Zanarini, Mary C

    2015-01-01

    There has been significant movement toward conceptualizing borderline personality disorder (BPD) with normal personality traits. However, 1 critical assumption underlying this transition, that longitudinal trajectories of BPD symptoms and normal traits track together, has not been tested. We evaluated the prospective longitudinal associations of changes in Five-Factor Model traits and BPD symptoms over the course of 16 years using parallel process latent growth curve models in 362 patients with BPD (n = 290) or other PDs (n = 72). Moderate to strong cross-sectional and longitudinal associations were observed between BPD symptoms and Neuroticism, Extraversion, Agreeableness, and Conscientiousness. This study is the first to demonstrate a longitudinal link between changes in BPD symptoms and changes in traits over an extended interval in a clinical sample. These findings imply that changes in BPD symptoms occur in concert with changes in normal traits, and support the proposed transition to conceptualizing BPD, at least in part, with trait dimensions. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  3. Reaction times of moderate and severe stutterers to monaural verbal stimuli: some implications for neurolinguistic organization.

    PubMed

    Rastatter, M P; Dell, C W

    1987-03-01

    Fourteen right-handed stutterers and 14 normal speakers (7 men & 7 women) responded to monaurally presented stimuli with their right and left hands. Results of an ANOVA with repeated measures showed that a significant ear-hand interaction existed in the normal subjects' data, with the right-ear, right-hand configuration producing the fastest responses. These findings were in concert with an efficiency model of neurolinguistic organization that suggests that the left hemisphere is dominant for language processing with the right hemisphere being capable of performing less efficient auditory-verbal analysis. Results of a similar ANOVA procedure showed that all main effects and interactions were nonsignificant for the stutterers. From these data a bilateral model of neurolinguistic organization was derived for the stutterers where both hemispheres must participate simultaneously in the decoding process. This held true regardless of sex or severity of stuttering.

  4. Neuronal network models of epileptogenesis

    PubMed Central

    Abdullahi, Aminu T.; Adamu, Lawan H.

    2017-01-01

    Epilepsy is a chronic neurological condition, following some trigger, transforming a normal brain to one that produces recurrent unprovoked seizures. In the search for the mechanisms that best explain the epileptogenic process, there is a growing body of evidence suggesting that the epilepsies are network level disorders. In this review, we briefly describe the concept of neuronal networks and highlight 2 methods used to analyse such networks. The first method, graph theory, is used to describe general characteristics of a network to facilitate comparison between normal and abnormal networks. The second, dynamic causal modelling, is useful in the analysis of the pathways of seizure spread. We concluded that the end results of the epileptogenic process are best understood as abnormalities of neuronal circuitry and not simply as molecular or cellular abnormalities. The network approach promises to generate new understanding and more targeted treatment of epilepsy. PMID:28416779
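
    Since the review contrasts graph-theoretic summaries of normal and abnormal networks, here is a small, hedged sketch of that kind of comparison using networkx. The surrogate Watts-Strogatz graphs, node counts, and rewiring probabilities are purely illustrative stand-ins, not derived from patient connectivity data.

        import networkx as nx

        # Hypothetical surrogate networks standing in for "normal" vs. heavily rewired connectivity.
        normal  = nx.connected_watts_strogatz_graph(n=64, k=6, p=0.1, seed=1)
        rewired = nx.connected_watts_strogatz_graph(n=64, k=6, p=0.9, seed=1)

        for name, g in [("normal-like", normal), ("rewired", rewired)]:
            print(name,
                  " clustering:", round(nx.average_clustering(g), 3),
                  " characteristic path length:", round(nx.average_shortest_path_length(g), 3))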

  5. Use of technetium-99m methylene diphosphonate and gallium-67 citrate scans after intraarticular injection of Staphylococcus aureus into knee joints of rabbits with chronic antigen-induced arthritis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mahowald, M.L.; Raskind, J.R.; Peterson, L.

    1986-08-01

    Numerous clinical studies have questioned the ability of radionuclide scans to differentiate septic from aseptic joint inflammation. A clinical study may not be able to document an underlying disease process or duration of infection and, thus, may make conclusions about the accuracy of scan interpretations open to debate. In this study, the Dumonde-Glynn model of antigen-induced arthritis in rabbits was used as the experimental model to study technetium and gallium scans in Staphylococcus aureus infection of arthritic and normal joints. Gallium scans were negative in normal rabbits, usually negative in antigen-induced arthritis, but positive in septic arthritis. The bone scan was usually negative in early infection but positive in late septic arthritis, a finding reflecting greater penetration of bacteria into subchondral bone because of the underlying inflammatory process.

  6. A stochastic model for the normal tissue complication probability (NTCP) and applications.

    PubMed

    Stocks, Theresa; Hillen, Thomas; Gong, Jiafen; Burger, Martin

    2017-12-11

    The normal tissue complication probability (NTCP) is a measure of the estimated side effects of a given radiation treatment schedule. Here we use a stochastic logistic birth-death process to define an organ-specific and patient-specific NTCP. We emphasize an asymptotic simplification which relates the NTCP to the solution of a logistic differential equation. This framework is based on simple modelling assumptions and prepares the way for the use of the NTCP model in clinical practice. As an example, we consider side effects of prostate cancer brachytherapy such as increased urinary frequency, urinary retention and acute rectal dysfunction. © The authors 2016. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.
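
    The following is a minimal Monte Carlo sketch of the kind of stochastic logistic birth-death process the abstract refers to, with an NTCP-like quantity defined as the probability that the functional cell count ever drops below a tolerance threshold. The rates, carrying capacity, threshold, and the threshold-crossing definition are illustrative assumptions, not the paper's parameters or its exact NTCP definition.

        import numpy as np

        rng = np.random.default_rng(1)

        def falls_below(n0, threshold, b=0.4, d=0.1, K=200, t_end=10.0):
            """Gillespie simulation of a logistic birth-death process; returns True if the
            population ever drops below `threshold` before t_end.  Birth rate b*n and death
            rate d*n + (b-d)*n**2/K give the mean-field logistic ODE dn/dt = (b-d)*n*(1-n/K)."""
            n, t = n0, 0.0
            while t < t_end and n > 0:
                if n < threshold:
                    return True
                birth = b * n
                death = d * n + (b - d) * n * n / K
                total = birth + death
                t += rng.exponential(1.0 / total)
                n += 1 if rng.random() < birth / total else -1
            return n < threshold

        # NTCP-like estimate for an organ compartment left with few functional cells after irradiation.
        n_surviving, tolerance, trials = 12, 10, 500
        ntcp = sum(falls_below(n_surviving, tolerance) for _ in range(trials)) / trials
        print("estimated NTCP-like probability:", round(ntcp, 3))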

  7. One Dimension Analytical Model of Normal Ballistic Impact on Ceramic/Metal Gradient Armor

    NASA Astrophysics Data System (ADS)

    Liu, Lisheng; Zhang, Qingjie; Zhai, Pengcheng; Cao, Dongfeng

    2008-02-01

    An analytical model of normal ballistic impact on ceramic/metal gradient armor, based on modified Alekseevskii-Tate equations, has been developed. In this model, the process of the gradient armour being impacted by the long rod is divided into four stages: the first covers the projectile's mass erosion (its flowing, mushrooming, and rigid phases); the second is the formation of the comminuted ceramic conoid; the third is the penetration of the gradient layer; and the last is the penetration of the metal back-up plate. The equations of the third stage have been developed by assuming rigid-plastic behavior of the gradient layer and considering the effect of strain rate on the dynamic yield strength.
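
    For orientation, the sketch below integrates the classical Alekseevskii-Tate relations that this four-stage model builds on (interface pressure balance, rod erosion, and rod deceleration). It is a generic illustration only: the material constants are hypothetical, and the gradient-layer and back-up-plate stages of the paper are not reproduced.

        import numpy as np

        # Illustrative (hypothetical) data: dense long rod into a hard target.
        rho_p, Y_p = 17600.0, 1.0e9        # projectile density [kg/m^3] and strength [Pa]
        rho_t, R_t = 3700.0, 4.0e9         # target density [kg/m^3] and penetration resistance [Pa]
        v, L, P, dt = 1500.0, 0.08, 0.0, 1e-8   # impact velocity [m/s], rod length [m], depth [m], step [s]

        def interface_velocity(v):
            """Physical root (0 <= u <= v) of 0.5*rho_p*(v-u)**2 + Y_p = 0.5*rho_t*u**2 + R_t."""
            a = 0.5 * (rho_p - rho_t)                 # assumes rho_p != rho_t
            b = -rho_p * v
            c = 0.5 * rho_p * v**2 + Y_p - R_t
            disc = b * b - 4.0 * a * c
            if disc < 0.0:
                return 0.0                            # rod can no longer defeat the target resistance
            u = (-b - np.sqrt(disc)) / (2.0 * a)      # smaller root is the physical one for rho_p > rho_t
            return min(max(u, 0.0), v)

        while v > 0.0 and L > 1e-4:
            u = interface_velocity(v)
            if v <= u:
                break                                 # erosion phase over (rigid-body phase not modelled)
            P += u * dt                               # penetration advances at the interface velocity
            L -= (v - u) * dt                         # rod erodes at the relative velocity
            v -= Y_p / (rho_p * L) * dt               # rod decelerates through its own strength
        print(f"penetration depth ~ {P*1000:.1f} mm, residual rod length ~ {L*1000:.1f} mm")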

  8. One Dimension Analytical Model of Normal Ballistic Impact on Ceramic/Metal Gradient Armor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu Lisheng; Zhang Qingjie; Zhai Pengcheng

    2008-02-15

    An analytical model of normal ballistic impact on ceramic/metal gradient armor, based on modified Alekseevskii-Tate equations, has been developed. In this model, the process of the gradient armour being impacted by the long rod is divided into four stages: the first covers the projectile's mass erosion (its flowing, mushrooming, and rigid phases); the second is the formation of the comminuted ceramic conoid; the third is the penetration of the gradient layer; and the last is the penetration of the metal back-up plate. The equations of the third stage have been developed by assuming rigid-plastic behavior of the gradient layer and considering the effect of strain rate on the dynamic yield strength.

  9. A Comparison of Normal Forgetting, Psychopathology, and Information-Processing Models of Reported Amnesia for Recent Sexual Trauma

    PubMed Central

    Mechanic, Mindy B.; Resick, Patricia A.; Griffin, Michael G.

    2010-01-01

    This study assessed memories for sexual trauma in a nontreatment-seeking sample of recent rape victims and considered competing explanations for failed recall. Participants were 92 female rape victims assessed within 2 weeks of the rape; 62 were also assessed 3 months postassault. Memory deficits for parts of the rape were common 2 weeks postassault (37%) but improved over the 3-month window studied (16% still partially amnesic). Hypotheses evaluated competing models of explanation that may account for reported recall deficits. Results are most consistent with information-processing models of traumatic memory. PMID:9874908

  10. Importance of vesicle release stochasticity in neuro-spike communication.

    PubMed

    Ramezani, Hamideh; Akan, Ozgur B

    2017-07-01

    The aim of this paper is to propose a stochastic model for the vesicle release process, a part of neuro-spike communication. Hence, we study the biological events occurring in this process and use microphysiological simulations to observe the functionality of these events. Since the most important source of variability in vesicle release probability is the opening of voltage-dependent calcium channels (VDCCs) followed by the influx of calcium ions through these channels, we propose a stochastic model for this event, while using a deterministic model for other variability sources. To capture the stochasticity of the calcium influx into the pre-synaptic neuron in our model, we study its statistics and find that it can be modeled by a distribution defined in terms of Normal and Logistic distributions.
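
    As a loose illustration of the chain of events the abstract describes (stochastic VDCC opening, calcium influx, vesicle release), here is a generic Monte Carlo sketch. The Boltzmann open-probability curve, the Hill-type release sensor, and every parameter value are textbook-style assumptions for illustration; they are not the Normal/Logistic-based distribution proposed in the paper.

        import numpy as np

        rng = np.random.default_rng(2)

        def release_probability(v_peak_mV, n_channels=30, trials=5000):
            """Monte Carlo sketch: stochastic VDCC opening -> Ca influx -> vesicle release."""
            p_open = 1.0 / (1.0 + np.exp(-(v_peak_mV - (-10.0)) / 6.0))   # Boltzmann-style activation
            n_open = rng.binomial(n_channels, p_open, size=trials)        # stochastic channel opening
            ca = np.clip(n_open * 1.0 + rng.normal(0.0, 0.5, size=trials), 0.0, None)   # influx (a.u.)
            p_release = ca**4 / (ca**4 + 8.0**4)                          # cooperative (Hill n=4) sensor
            return (rng.random(trials) < p_release).mean()

        for v in (-20.0, 0.0, 20.0):
            print(f"peak Vm {v:+.0f} mV -> estimated release probability {release_probability(v):.2f}")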

  11. Active simultaneous uplift and margin-normal extension in a forearc high, Crete, Greece

    NASA Astrophysics Data System (ADS)

    Gallen, S. F.; Wegmann, K. W.; Bohnenstiehl, D. R.; Pazzaglia, F. J.; Brandon, M. T.; Fassoulas, C.

    2014-07-01

    The island of Crete occupies a forearc high in the central Hellenic subduction zone and is characterized by sustained exhumation, surface uplift and extension. The processes governing orogenesis and topographic development here remain poorly understood. Dramatic topographic relief (2-6 km) astride the southern coastline of Crete is associated with large margin-parallel faults responsible for deep bathymetric depressions known as the Hellenic troughs. These structures have been interpreted as both active and inactive with either contractional, strike-slip, or extensional movement histories. Distinguishing between these different structural styles and kinematic histories here allows us to explore more general models for improving our global understanding of the tectonic and geodynamic processes of syn-convergent extension. We present new observations from the south-central coastline of Crete that clarify the role of these faults in the late Cenozoic evolution of the central Hellenic margin and the processes controlling Quaternary surface uplift. Pleistocene marine terraces are used in conjunction with optically stimulated luminescence dating and correlation to the Quaternary eustatic curve to document coastal uplift and identify active faults. Two south-dipping normal faults are observed, which extend offshore, offset these marine terrace deposits and indicate active N-S (margin-normal) extension. Further, marine terraces preserved in the footwall and hanging wall of both faults demonstrate that regional net uplift of Crete is occurring despite active extension. Field mapping and geometric reconstructions of an active onshore normal fault reveal that the subaqueous range-front fault of south-central Crete is synthetic to the south-dipping normal faults on shore. These findings are inconsistent with models of active horizontal shortening in the upper crust of the Hellenic forearc. Rather, they are consistent with topographic growth of the forearc in a viscous orogenic wedge, where crustal thickening and uplift are a result of basal underplating of material that is accompanied by extension in the upper portions of the wedge. Within this framework a new conceptual model is presented for the late Cenozoic vertical tectonics of the Hellenic forearc.

  12. Taking the Missing Propensity Into Account When Estimating Competence Scores

    PubMed Central

    Pohl, Steffi; Carstensen, Claus H.

    2014-01-01

    When competence tests are administered, subjects frequently omit items. These missing responses pose a threat to correctly estimating the proficiency level. Newer model-based approaches aim to take nonignorable missing data processes into account by incorporating a latent missing propensity into the measurement model. Two assumptions are typically made when using these models: (1) The missing propensity is unidimensional and (2) the missing propensity and the ability are bivariate normally distributed. These assumptions may, however, be violated in real data sets and could, thus, pose a threat to the validity of this approach. The present study focuses on modeling competencies in various domains, using data from a school sample (N = 15,396) and an adult sample (N = 7,256) from the National Educational Panel Study. Our interest was to investigate whether violations of unidimensionality and the normal distribution assumption severely affect the performance of the model-based approach in terms of differences in ability estimates. We propose a model with a competence dimension, a unidimensional missing propensity and a distributional assumption more flexible than a multivariate normal. Using this model for ability estimation results in different ability estimates compared with a model ignoring missing responses. Implications for ability estimation in large-scale assessments are discussed. PMID:29795844
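
    The sketch below simulates the nonignorable missing-data process the article is concerned with: a latent missing propensity that is correlated with ability. It shows, on synthetic data (not NEPS data), how the apparent performance of high-omission examinees depends on whether omitted items are ignored or scored as wrong; the logistic response and omission models and all parameter values are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(3)
        n_persons, n_items = 5000, 30

        # Ability theta and missing propensity xi, bivariate normal with negative correlation
        # (lower-ability examinees tend to omit more), i.e. a nonignorable missing-data process.
        theta, xi = rng.multivariate_normal([0.0, 0.0], [[1.0, -0.5], [-0.5, 1.0]], size=n_persons).T

        b = rng.normal(0.0, 1.0, n_items)                              # item difficulties
        p_correct = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
        correct = rng.random((n_persons, n_items)) < p_correct
        p_omit = 1.0 / (1.0 + np.exp(-(xi[:, None] - 1.0)))            # higher xi -> more omissions
        omitted = rng.random((n_persons, n_items)) < p_omit

        prop_ignored = np.nanmean(np.where(omitted, np.nan, correct).astype(float), axis=1)
        prop_wrong = np.where(omitted, 0.0, correct).mean(axis=1)      # omits scored as wrong

        high_omit = omitted.mean(axis=1) > 0.3
        print("high-omission group: true mean ability            ", round(theta[high_omit].mean(), 2))
        print("high-omission group: prop. correct, omits ignored ", round(prop_ignored[high_omit].mean(), 2))
        print("high-omission group: prop. correct, omits = wrong ", round(prop_wrong[high_omit].mean(), 2))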

  13. Brain activation upon ideal-body media exposure and peer feedback in late adolescent girls.

    PubMed

    van der Meulen, Mara; Veldhuis, Jolanda; Braams, Barbara R; Peters, Sabine; Konijn, Elly A; Crone, Eveline A

    2017-08-01

    Media's prevailing thin-body ideal plays a vital role in adolescent girls' body image development, but the co-occurring impact of peer feedback is understudied. The present study used functional magnetic resonance imaging (fMRI) to test media imagery and peer feedback combinations on neural activity related to thin-body ideals. Twenty-four healthy female late adolescents rated precategorized body sizes of bikini models (too thin or normal), directly followed by ostensible peer feedback (too thin or normal). Consistent with prior studies on social feedback processing, results showed increased brain activity in the dorsal medial prefrontal cortex (dmPFC)/anterior cingulate cortex (ACC) and bilateral insula in incongruent situations: when participants rated media models' body size as normal while peer feedback indicated the models as too thin (or vice versa). This effect was stronger for girls with lower self-esteem. A subsequent behavioral study (N = 34 female late adolescents, separate sample) demonstrated that participants changed behavior in the direction of the peer feedback: precategorized normal sized models were rated as too thin more often after receiving too thin peer feedback. This suggests that the neural responses upon peer feedback may influence subsequent choice. Our results show that media-by-peer interactions have pronounced effects on girls' body ideals.

  14. Real-time physiological monitoring with distributed networks of sensors and object-oriented programming techniques

    NASA Astrophysics Data System (ADS)

    Wiesmann, William P.; Pranger, L. Alex; Bogucki, Mary S.

    1998-05-01

    Remote monitoring of physiologic data from individual high-risk workers distributed over time and space is a considerable challenge. This is often due to an inadequate capability to accurately integrate large amounts of data into usable information in real time. In this report, we have used the vertical and horizontal organization of the 'fireground' as a framework to design a distributed network of sensors. In this system, sensor output is linked through a hierarchical object-oriented programming process to accurately interpret physiological data, incorporate these data into a synchronous model and relay processed data, trends and predictions to members of the fire incident command structure. There are several unique aspects to this approach. The first includes a process to account for variability in vital parameter values for each individual's normal physiologic response by including an adaptive network in each data process. This information is used by the model in an iterative process to baseline a 'normal' physiologic response to a given stress for each individual and to detect deviations that indicate dysfunction or a significant insult. The second unique capability of the system orders the information for each user including the subject, local company officers, medical personnel and the incident commanders. Information can be retrieved and used for training exercises and after-action analysis. Finally, this system can easily be adapted to existing communication and processing links along with incorporating the best parts of current models through the use of object-oriented programming techniques. These modern software techniques are well suited to handling multiple data processes independently over time in a distributed network.
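
    A minimal sketch of the per-individual adaptive baselining idea described above is given below: each wearer gets a running estimate of their own "normal" vital-sign level and variability, and deviations are flagged relative to that personal baseline. The class name, learning rate, starting values, and the z-score alert threshold are hypothetical choices, not the fireground system's actual design.

        from dataclasses import dataclass

        @dataclass
        class AdaptiveBaseline:
            """Per-individual running baseline of a vital sign (e.g., heart rate)."""
            alpha: float = 0.05          # adaptation rate of the running mean/variance
            mean: float = 80.0           # starting guess; adapts to the individual
            var: float = 100.0

            def update(self, x: float) -> float:
                """Fold in a new reading and return its z-score against this person's baseline."""
                z = (x - self.mean) / (self.var ** 0.5 + 1e-9)
                self.mean += self.alpha * (x - self.mean)
                self.var += self.alpha * ((x - self.mean) ** 2 - self.var)
                return z

        # One monitor per firefighter; an alert fires only on deviation from *their* normal response.
        monitors = {"ff_01": AdaptiveBaseline(), "ff_02": AdaptiveBaseline()}
        readings = {"ff_01": [82, 85, 88, 90, 140], "ff_02": [110, 112, 115, 118, 121]}
        for person, series in readings.items():
            for hr in series:
                z = monitors[person].update(hr)
                if abs(z) > 3.0:
                    print(f"ALERT {person}: heart rate {hr} deviates from personal baseline (z = {z:.1f})")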

  15. Human Language Technology: Opportunities and Challenges

    DTIC Science & Technology

    2005-01-01

    ... because of the connections to and reliance on signal processing. Audio diarization critically includes indexing of speakers [12], since speaker ... to reduce inter-speaker variability in training. Standard techniques include vocal-tract length normalization, adaptation of acoustic models using ... maximum likelihood linear regression (MLLR), and speaker-adaptive training based on MLLR. The acoustic models are mixtures of Gaussians, typically with ...

  16. Protein Biomarkers Associated With Growth And Synaptogenesis In a cell culture model of neuronal development

    EPA Science Inventory

    Cerebellar granule cells (CGC) provide a homogenous population of cells which can be used as an in vitro model for studying the cellular processes involved in the normal development of the CNS. They may also be useful for hazard identification as in vitro screens fo...

  17. Class and Home Problems. Modeling an Explosion: The Devil Is in the Details

    ERIC Educational Resources Information Center

    Hart, Peter W.; Rudie, Alan W.

    2011-01-01

    Within the past 15 years, three North American pulp mills experienced catastrophic equipment failures while using 50 wt% hydrogen peroxide. In two cases, explosions occurred when normal pulp flow was interrupted due to other process problems. To understand the accidents, a kinetic model of alkali-catalyzed decomposition of peroxide was developed.…

  18. The Source of Adult Age Differences in Event-Based Prospective Memory: A Multinomial Modeling Approach

    ERIC Educational Resources Information Center

    Smith, Rebekah E.; Bayen, Ute J.

    2006-01-01

    Event-based prospective memory involves remembering to perform an action in response to a particular future event. Normal younger and older adults performed event-based prospective memory tasks in 2 experiments. The authors applied a formal multinomial processing tree model of prospective memory (Smith & Bayen, 2004) to disentangle age differences…

  19. An Ensemble System Based on Hybrid EGARCH-ANN with Different Distributional Assumptions to Predict S&P 500 Intraday Volatility

    NASA Astrophysics Data System (ADS)

    Lahmiri, S.; Boukadoum, M.

    2015-10-01

    Accurate forecasting of stock market volatility is an important issue in portfolio risk management. In this paper, an ensemble system for stock market volatility is presented. It is composed of three different models that hybridize the exponential generalized autoregressive conditional heteroscedasticity (EGARCH) process and an artificial neural network trained with the backpropagation algorithm (BPNN) to forecast stock market volatility under normal, Student's t, and generalized error distribution (GED) assumptions separately. The goal is to design an ensemble system in which each single hybrid model is capable of capturing normality, excess skewness, or excess kurtosis in the data to achieve complementarity. The performance of each EGARCH-BPNN and of the ensemble system is evaluated by the closeness of the volatility forecasts to realized volatility. Based on mean absolute error and mean squared error, the experimental results show that the proposed ensemble model, which captures normality, skewness, and kurtosis in the data, is more accurate than the individual EGARCH-BPNN models in forecasting S&P 500 intraday volatility at one- and five-minute time horizons.
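
    To make the EGARCH component concrete, the sketch below runs a bare EGARCH(1,1) volatility filter under the normal-innovation assumption. The coefficients and the toy return series are illustrative; the Student-t and GED variants, the backpropagation neural network stage, and the ensembling step of the paper are deliberately omitted.

        import numpy as np

        def egarch_volatility(returns, omega=-0.46, alpha=0.1, gamma=-0.05, beta=0.95):
            """EGARCH(1,1) with normal innovations:
            ln(sigma_t^2) = omega + beta*ln(sigma_{t-1}^2) + alpha*(|z_{t-1}| - E|z|) + gamma*z_{t-1}."""
            e_abs_z = np.sqrt(2.0 / np.pi)                 # E|z| for a standard normal innovation
            log_var = np.empty(len(returns))
            log_var[0] = np.log(np.var(returns))
            for t in range(1, len(returns)):
                z = returns[t - 1] / np.exp(0.5 * log_var[t - 1])
                log_var[t] = omega + beta * log_var[t - 1] + alpha * (abs(z) - e_abs_z) + gamma * z
            return np.exp(0.5 * log_var)

        # Toy intraday-like return series; in the hybrid scheme these sigma estimates would feed an ANN.
        rng = np.random.default_rng(4)
        returns = rng.normal(0.0, 0.01, 2000) * (1.0 + 0.5 * np.sin(np.linspace(0, 20, 2000)) ** 2)
        sigma = egarch_volatility(returns)
        print("mean fitted volatility:", round(float(sigma.mean()), 4), " last:", round(float(sigma[-1]), 4))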

  20. A branching process model for the analysis of abortive colony size distributions in carbon ion-irradiated normal human fibroblasts.

    PubMed

    Sakashita, Tetsuya; Hamada, Nobuyuki; Kawaguchi, Isao; Hara, Takamitsu; Kobayashi, Yasuhiko; Saito, Kimiaki

    2014-05-01

    A single cell can form a colony, and ionizing irradiation has long been known to reduce such a cellular clonogenic potential. Analysis of abortive colonies unable to continue to grow should provide important information on the reproductive cell death (RCD) following irradiation. Our previous analysis with a branching process model showed that the RCD in normal human fibroblasts can persist over 16 generations following irradiation with low linear energy transfer (LET) γ-rays. Here we further set out to evaluate the RCD persistency in abortive colonies arising from normal human fibroblasts exposed to high-LET carbon ions (18.3 MeV/u, 108 keV/µm). We found that the abortive colony size distribution determined by biological experiments follows a linear relationship on the log-log plot, and that the Monte Carlo simulation using the RCD probability estimated from such a linear relationship well simulates the experimentally determined surviving fraction and the relative biological effectiveness (RBE). We identified the short-term phase and long-term phase for the persistent RCD following carbon-ion irradiation, which were similar to those previously identified following γ-irradiation. Taken together, our results suggest that subsequent secondary or tertiary colony formation would be invaluable for understanding the long-lasting RCD. Altogether, our framework for analysis with a branching process model and a colony formation assay is applicable to determination of cellular responses to low- and high-LET radiation, and suggests that the long-lasting RCD is a pivotal determinant of the surviving fraction and the RBE.
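
    The essence of the branching-process analysis can be sketched in a few lines: starting from a single cell, each cell in each generation either divides or undergoes reproductive cell death with some probability, and colonies that fail to reach the conventional 50-cell criterion are scored as abortive. The RCD probability, the ten-generation horizon, and the removal of dead cells from the count are simplifying assumptions made here for illustration, not the values estimated in the paper.

        import numpy as np
        from collections import Counter

        rng = np.random.default_rng(5)

        def grow_colony(q_rcd=0.25, generations=10):
            """Each cell divides into two with probability (1 - q_rcd); otherwise it dies."""
            n = 1
            for _ in range(generations):
                if n == 0:
                    break
                n = 2 * rng.binomial(n, 1.0 - q_rcd)
            return n

        sizes = np.array([grow_colony() for _ in range(20000)])
        surviving_fraction = (sizes >= 50).mean()          # conventional clonogenic criterion
        abortive = sizes[(sizes > 0) & (sizes < 50)]
        smallest = {int(k): int(v) for k, v in sorted(Counter(abortive).items())[:8]}
        print("surviving fraction:", round(float(surviving_fraction), 3))
        print("counts of the smallest abortive colony sizes:", smallest)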

  1. Why the chameleon has spiral-shaped muscle fibres in its tongue

    PubMed Central

    Leeuwen, J. L. van

    1997-01-01

    The intralingual accelerator muscle is the primary actuator for the remarkable ballistic tongue projection of the chameleon. At rest, this muscle envelops the elongated entoglossal process, a cylindrically shaped bone with a tapering distal end. During tongue projection, the accelerator muscle elongates and slides forward along the entoglossal process until the entire muscle extends beyond the distal end of the process. The accelerator muscle fibres are arranged in transverse planes (small deviations are possible), and form (hitherto unexplained) spiral-shaped arcs from the peripheral to the internal boundary. To initiate tongue projection, the muscle fibres probably generate a high intramuscular pressure. The resulting negative pressure gradient (from base to tip) causes the muscle to elongate and to accelerate forward. Effective forward sliding is made possible by a lubricant and a relatively low normal stress exerted on the proximal cylindrical part of the entoglossal process. A relatively high normal stress is, however, probably required for an effective acceleration of muscle tissue over the tapered end of the process. For optimal performance, the fast extension movement should occur without significant (energy absorbing) torsional motion of the tongue. In addition, the tongue extension movement is aided by a close packing of the muscle fibres (required for a high power density) and a uniform strain and work output in every cross-section of the muscle. A quantitative model of the accelerator muscle was developed that predicts internal muscle fibre arrangements based on the functional requirements above and the physical principle of mechanical stability. The curved shapes and orientations of the muscle fibres typically found in the accelerator muscle were accurately predicted by the model. Furthermore, the model predicts that the reduction of the entoglossal radius towards the tip (and thus the internal radius of the muscle) tends to increase the normal stress on the entoglossal bone.

  2. Specificity and timescales of cortical adaptation as inferences about natural movie statistics.

    PubMed

    Snow, Michoel; Coen-Cagli, Ruben; Schwartz, Odelia

    2016-10-01

    Adaptation is a phenomenological umbrella term under which a variety of temporal contextual effects are grouped. Previous models have shown that some aspects of visual adaptation reflect optimal processing of dynamic visual inputs, suggesting that adaptation should be tuned to the properties of natural visual inputs. However, the link between natural dynamic inputs and adaptation is poorly understood. Here, we extend a previously developed Bayesian modeling framework for spatial contextual effects to the temporal domain. The model learns temporal statistical regularities of natural movies and links these statistics to adaptation in primary visual cortex via divisive normalization, a ubiquitous neural computation. In particular, the model divisively normalizes the present visual input by the past visual inputs only to the degree that these are inferred to be statistically dependent. We show that this flexible form of normalization reproduces classical findings on how brief adaptation affects neuronal selectivity. Furthermore, prior knowledge acquired by the Bayesian model from natural movies can be modified by prolonged exposure to novel visual stimuli. We show that this updating can explain classical results on contrast adaptation. We also simulate the recent finding that adaptation maintains population homeostasis, namely, a balanced level of activity across a population of neurons with different orientation preferences. Consistent with previous disparate observations, our work further clarifies the influence of stimulus-specific and neuronal-specific normalization signals in adaptation.
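
    The divisive-normalization computation at the core of the model can be illustrated with a toy recursion in which the present drive is divided by a recency-weighted pool of past drive, and the division is applied only to the extent that past and present are judged statistically dependent. The constants, the squared-drive pooling, and the exponential recency weighting are illustrative choices, not the fitted model.

        import numpy as np

        def temporally_normalized_response(drive, dependence=0.8, decay=0.7, sigma=0.5):
            """Divide the present drive by (sigma^2 + dependence * recency-weighted past drive)."""
            past = 0.0
            out = np.empty_like(drive, dtype=float)
            for t, x in enumerate(drive):
                out[t] = x**2 / (sigma**2 + dependence * past)
                past = decay * past + x**2            # recency-weighted pool of past (squared) inputs
            return out

        stimulus = np.concatenate([np.zeros(5), np.full(15, 2.0), np.zeros(5)])   # prolonged stimulus
        adapted = temporally_normalized_response(stimulus)
        unadapted = temporally_normalized_response(stimulus, dependence=0.0)      # past judged independent
        print("with normalization by the past:   ", np.round(adapted, 2))
        print("without (dependence set to zero): ", np.round(unadapted, 2))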

  3. Specificity and timescales of cortical adaptation as inferences about natural movie statistics

    PubMed Central

    Snow, Michoel; Coen-Cagli, Ruben; Schwartz, Odelia

    2016-01-01

    Adaptation is a phenomenological umbrella term under which a variety of temporal contextual effects are grouped. Previous models have shown that some aspects of visual adaptation reflect optimal processing of dynamic visual inputs, suggesting that adaptation should be tuned to the properties of natural visual inputs. However, the link between natural dynamic inputs and adaptation is poorly understood. Here, we extend a previously developed Bayesian modeling framework for spatial contextual effects to the temporal domain. The model learns temporal statistical regularities of natural movies and links these statistics to adaptation in primary visual cortex via divisive normalization, a ubiquitous neural computation. In particular, the model divisively normalizes the present visual input by the past visual inputs only to the degree that these are inferred to be statistically dependent. We show that this flexible form of normalization reproduces classical findings on how brief adaptation affects neuronal selectivity. Furthermore, prior knowledge acquired by the Bayesian model from natural movies can be modified by prolonged exposure to novel visual stimuli. We show that this updating can explain classical results on contrast adaptation. We also simulate the recent finding that adaptation maintains population homeostasis, namely, a balanced level of activity across a population of neurons with different orientation preferences. Consistent with previous disparate observations, our work further clarifies the influence of stimulus-specific and neuronal-specific normalization signals in adaptation. PMID:27699416

  4. Counter-extrapolation method for conjugate interfaces in computational heat and mass transfer.

    PubMed

    Le, Guigao; Oulaid, Othmane; Zhang, Junfeng

    2015-03-01

    In this paper, a conjugate interface method is developed by performing extrapolations along the normal direction. Compared to other existing conjugate models, our method has several technical advantages, including the simple and straightforward algorithm, accurate representation of the interface geometry, applicability to any interface-lattice relative orientation, and availability of the normal gradient. The model is validated by simulating the steady and unsteady convection-diffusion system with a flat interface and the steady diffusion system with a circular interface, and good agreement is observed when comparing the lattice Boltzmann results with respective analytical solutions. A more general system with unsteady convection-diffusion process and a curved interface, i.e., the cooling process of a hot cylinder in a cold flow, is also simulated as an example to illustrate the practical usefulness of our model, and the effects of the cylinder heat capacity and thermal diffusivity on the cooling process are examined. Results show that the cylinder with a larger heat capacity can release more heat energy into the fluid and the cylinder temperature cools down more slowly, while the enhanced heat conduction inside the cylinder can facilitate the cooling process of the system. Although these findings appear obvious from physical principles, these confirming results demonstrate the application potential of our method in more complex systems. In addition, the basic idea and algorithm of the counter-extrapolation procedure presented here can be readily extended to other lattice Boltzmann models and even other computational technologies for heat and mass transfer systems.

  5. Effect of roll compaction on granule size distribution of microcrystalline cellulose–mannitol mixtures: computational intelligence modeling and parametric analysis

    PubMed Central

    Kazemi, Pezhman; Khalid, Mohammad Hassan; Pérez Gago, Ana; Kleinebudde, Peter; Jachowicz, Renata; Szlęk, Jakub; Mendyk, Aleksander

    2017-01-01

    Dry granulation using roll compaction is a typical unit operation for producing solid dosage forms in the pharmaceutical industry. Dry granulation is commonly used if the powder mixture is sensitive to heat and moisture and has poor flow properties. The output of roll compaction is compacted ribbons that exhibit different properties based on the adjusted process parameters. These ribbons are then milled into granules and finally compressed into tablets. The properties of the ribbons directly affect the granule size distribution (GSD) and the quality of final products; thus, it is imperative to study the effect of roll compaction process parameters on GSD. The understanding of how the roll compactor process parameters and material properties interact with each other will allow accurate control of the process, leading to the implementation of quality by design practices. Computational intelligence (CI) methods have a great potential for being used within the scope of quality by design approach. The main objective of this study was to show how the computational intelligence techniques can be useful to predict the GSD by using different process conditions of roll compaction and material properties. Different techniques such as multiple linear regression, artificial neural networks, random forest, Cubist and k-nearest neighbors algorithm assisted by sevenfold cross-validation were used to present generalized models for the prediction of GSD based on roll compaction process setting and material properties. The normalized root-mean-squared error and the coefficient of determination (R2) were used for model assessment. The best fit was obtained by Cubist model (normalized root-mean-squared error =3.22%, R2=0.95). Based on the results, it was confirmed that the material properties (true density) followed by compaction force have the most significant effect on GSD. PMID:28176905
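
    The model-comparison workflow described above (several learners, sevenfold cross-validation, NRMSE and R2 scoring) can be sketched with scikit-learn as below. The synthetic features and target merely stand in for the roll-compaction settings, material properties, and granule size fractions; the Cubist model is omitted because no standard scikit-learn implementation exists, and the range-normalized NRMSE used here is one common convention.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.neighbors import KNeighborsRegressor
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import KFold, cross_val_predict
        from sklearn.metrics import r2_score, mean_squared_error

        rng = np.random.default_rng(6)
        # Synthetic stand-ins: columns ~ [compaction force, gap width, true density, ...].
        X = rng.uniform(0, 1, (200, 5))
        y = 3*X[:, 0] + 2*X[:, 2] + 0.5*np.sin(6*X[:, 1]) + rng.normal(0, 0.1, 200)  # GSD fraction proxy

        cv = KFold(n_splits=7, shuffle=True, random_state=0)
        models = {"MLR": LinearRegression(),
                  "Random forest": RandomForestRegressor(n_estimators=200, random_state=0),
                  "kNN": KNeighborsRegressor(n_neighbors=5)}

        for name, model in models.items():
            pred = cross_val_predict(model, X, y, cv=cv)
            nrmse = np.sqrt(mean_squared_error(y, pred)) / (y.max() - y.min())
            print(f"{name:14s} NRMSE = {100*nrmse:5.2f}%   R2 = {r2_score(y, pred):.3f}")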

  6. Effect of roll compaction on granule size distribution of microcrystalline cellulose-mannitol mixtures: computational intelligence modeling and parametric analysis.

    PubMed

    Kazemi, Pezhman; Khalid, Mohammad Hassan; Pérez Gago, Ana; Kleinebudde, Peter; Jachowicz, Renata; Szlęk, Jakub; Mendyk, Aleksander

    2017-01-01

    Dry granulation using roll compaction is a typical unit operation for producing solid dosage forms in the pharmaceutical industry. Dry granulation is commonly used if the powder mixture is sensitive to heat and moisture and has poor flow properties. The output of roll compaction is compacted ribbons that exhibit different properties based on the adjusted process parameters. These ribbons are then milled into granules and finally compressed into tablets. The properties of the ribbons directly affect the granule size distribution (GSD) and the quality of final products; thus, it is imperative to study the effect of roll compaction process parameters on GSD. The understanding of how the roll compactor process parameters and material properties interact with each other will allow accurate control of the process, leading to the implementation of quality by design practices. Computational intelligence (CI) methods have a great potential for being used within the scope of quality by design approach. The main objective of this study was to show how the computational intelligence techniques can be useful to predict the GSD by using different process conditions of roll compaction and material properties. Different techniques such as multiple linear regression, artificial neural networks, random forest, Cubist and k-nearest neighbors algorithm assisted by sevenfold cross-validation were used to present generalized models for the prediction of GSD based on roll compaction process setting and material properties. The normalized root-mean-squared error and the coefficient of determination (R2) were used for model assessment. The best fit was obtained by Cubist model (normalized root-mean-squared error =3.22%, R2=0.95). Based on the results, it was confirmed that the material properties (true density) followed by compaction force have the most significant effect on GSD.

  7. Temporary dietary iron restriction affects the process of thrombus resolution in a rat model of deep vein thrombosis.

    PubMed

    Oboshi, Makiko; Naito, Yoshiro; Sawada, Hisashi; Hirotani, Shinichi; Iwasaku, Toshihiro; Okuhara, Yoshitaka; Morisawa, Daisuke; Eguchi, Akiyo; Nishimura, Koichi; Fujii, Kenichi; Mano, Toshiaki; Ishihara, Masaharu; Masuyama, Tohru

    2015-01-01

    Deep vein thrombosis (DVT) is a major cause of pulmonary thromboembolism and sudden death. Thus, it is important to consider the pathophysiology of DVT. Recently, iron has been reported to be associated with thrombotic diseases. Hence, in this study, we investigate the effects of dietary iron restriction on the process of thrombus resolution in a rat model of DVT. We induced DVT in 8-week-old male Sprague-Dawley rats by performing ligations of their inferior venae cavae. The rats were then given either a normal diet (DVT group) or an iron-restricted diet (DVT+IR group). Thrombosed inferior venae cavae were harvested at 5 days after ligation. The iron-restricted diet reduced venous thrombus size compared to the normal diet. Intrathrombotic collagen content was diminished in the DVT+IR group compared to the DVT group. In addition, intrathrombotic gene expression and the activity of matrix metalloproteinase-9 were increased in the DVT+IR group compared to the DVT group. Furthermore, the DVT+IR group had greater intrathrombotic neovascularization as well as higher gene expression levels of urokinase-type plasminogen activator and tissue-type plasminogen activator than the DVT group. The iron-restricted diet decreased intrathrombotic superoxide production compared to the normal diet. These results suggest that dietary iron restriction affects the process of thrombus resolution in DVT.

  8. Temporary Dietary Iron Restriction Affects the Process of Thrombus Resolution in a Rat Model of Deep Vein Thrombosis

    PubMed Central

    Oboshi, Makiko; Naito, Yoshiro; Sawada, Hisashi; Hirotani, Shinichi; Iwasaku, Toshihiro; Okuhara, Yoshitaka; Morisawa, Daisuke; Eguchi, Akiyo; Nishimura, Koichi; Fujii, Kenichi; Mano, Toshiaki; Ishihara, Masaharu; Masuyama, Tohru

    2015-01-01

    Background: Deep vein thrombosis (DVT) is a major cause of pulmonary thromboembolism and sudden death. Thus, it is important to consider the pathophysiology of DVT. Recently, iron has been reported to be associated with thrombotic diseases. Hence, in this study, we investigate the effects of dietary iron restriction on the process of thrombus resolution in a rat model of DVT. Methods: We induced DVT in 8-week-old male Sprague-Dawley rats by performing ligations of their inferior venae cavae. The rats were then given either a normal diet (DVT group) or an iron-restricted diet (DVT+IR group). Thrombosed inferior venae cavae were harvested at 5 days after ligation. Results: The iron-restricted diet reduced venous thrombus size compared to the normal diet. Intrathrombotic collagen content was diminished in the DVT+IR group compared to the DVT group. In addition, intrathrombotic gene expression and the activity of matrix metalloproteinase-9 were increased in the DVT+IR group compared to the DVT group. Furthermore, the DVT+IR group had greater intrathrombotic neovascularization as well as higher gene expression levels of urokinase-type plasminogen activator and tissue-type plasminogen activator than the DVT group. The iron-restricted diet decreased intrathrombotic superoxide production compared to the normal diet. Conclusions: These results suggest that dietary iron restriction affects the process of thrombus resolution in DVT. PMID:25962140

  9. Thermodynamic Model of Spatial Memory

    NASA Astrophysics Data System (ADS)

    Kaufman, Miron; Allen, P.

    1998-03-01

    We develop and test a thermodynamic model of spatial memory. Our model is an application of statistical thermodynamics to cognitive science. It is related to applications of the statistical mechanics framework in parallel distributed processes research. Our macroscopic model allows us to evaluate an entropy associated with spatial memory tasks. We find that older adults exhibit higher levels of entropy than younger adults. Thurstone's Law of Categorical Judgment, according to which the discriminal processes along the psychological continuum produced by presentations of a single stimulus are normally distributed, is explained by using a Hooke spring model of spatial memory. We have also analyzed a nonlinear modification of the ideal spring model of spatial memory. This work is supported by NIH/NIA grant AG09282-06.
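
    Two quantitative pieces mentioned above, the entropy of spatial recall and the Hooke-spring picture, can be sketched jointly: a harmonic "spring" of stiffness k at temperature T gives a Gaussian (Boltzmann) distribution of recalled positions with variance T/k, so a weaker spring yields broader recall and higher entropy. The stiffness values, the temperature, the binning, and the labels "younger-like"/"older-like" are illustrative assumptions only.

        import numpy as np

        rng = np.random.default_rng(7)

        def recall_entropy(errors, bins=np.linspace(-6.0, 6.0, 41)):
            """Shannon entropy (bits) of the binned distribution of spatial recall errors."""
            counts, _ = np.histogram(errors, bins=bins)
            p = counts[counts > 0] / counts.sum()
            return float(-(p * np.log2(p)).sum())

        def simulate_recall(k_spring, T=1.0, n=5000):
            """Boltzmann distribution of a Hooke spring (energy 0.5*k*x^2) is Gaussian, var = T/k."""
            return rng.normal(0.0, np.sqrt(T / k_spring), n)

        print("entropy, stiff spring (younger-like):", round(recall_entropy(simulate_recall(4.0)), 2), "bits")
        print("entropy, soft spring (older-like):   ", round(recall_entropy(simulate_recall(1.0)), 2), "bits")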

  10. Social neuroscience and its potential contribution to psychiatry

    PubMed Central

    Cacioppo, John T; Cacioppo, Stephanie; Dulawa, Stephanie; Palmer, Abraham A

    2014-01-01

    Most mental disorders involve disruptions of normal social behavior. Social neuroscience is an interdisciplinary field devoted to understanding the biological systems underlying social processes and behavior, and the influence of the social environment on biological processes, health and well-being. Research in this field has grown dramatically in recent years. Active areas of research include brain imaging studies in normal children and adults, animal models of social behavior, studies of stroke patients, imaging studies of psychiatric patients, and research on social determinants of peripheral neural, neuroendocrine and immunological processes. Although research in these areas is proceeding along largely independent trajectories, there is increasing evidence for connections across these trajectories. We focus here on the progress and potential of social neuroscience in psychiatry, including illustrative evidence for a rapid growth of neuroimaging and genetic studies of mental disorders. We also argue that neuroimaging and genetic research focused on specific component processes underlying social living is needed. PMID:24890058

  11. Principles of Temporal Processing Across the Cortical Hierarchy.

    PubMed

    Himberger, Kevin D; Chien, Hsiang-Yun; Honey, Christopher J

    2018-05-02

    The world is richly structured on multiple spatiotemporal scales. In order to represent spatial structure, many machine-learning models repeat a set of basic operations at each layer of a hierarchical architecture. These iterated spatial operations - including pooling, normalization and pattern completion - enable these systems to recognize and predict spatial structure, while remaining robust to changes in the spatial scale, contrast and noisiness of the input signal. Because our brains also process temporal information that is rich and occurs across multiple time scales, might the brain employ an analogous set of operations for temporal information processing? Here we define a candidate set of temporal operations, and we review evidence that they are implemented in the mammalian cerebral cortex in a hierarchical manner. We conclude that multiple consecutive stages of cortical processing can be understood to perform temporal pooling, temporal normalization and temporal pattern completion. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
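
    A toy rendering of the three candidate temporal operations named above (temporal pooling, temporal normalization, temporal pattern completion) applied to a one-dimensional signal is given below. The moving-average pooling window, the divisive running baseline, and the linear-trend "completion" step are simple illustrative choices, not claims about the cortical implementation.

        import numpy as np

        def temporal_pool(x, window=5):
            """Temporal pooling: summarize the recent past (here, a running mean)."""
            return np.convolve(x, np.ones(window) / window, mode="same")

        def temporal_normalize(x, window=20, eps=1e-6):
            """Temporal normalization: divide by a slow running estimate of recent magnitude."""
            baseline = np.convolve(np.abs(x), np.ones(window) / window, mode="same")
            return x / (baseline + eps)

        def temporal_complete(x, horizon=5):
            """Temporal pattern completion: extrapolate the recent trend a few steps ahead."""
            slope = x[-1] - x[-2]
            return np.concatenate([x, x[-1] + slope * np.arange(1, horizon + 1)])

        t = np.linspace(0, 4 * np.pi, 200)
        signal = np.sin(t) * np.linspace(0.5, 3.0, 200)    # rhythm whose amplitude slowly grows
        pooled = temporal_pool(signal)
        normalized = temporal_normalize(signal)            # slow amplitude drift largely removed
        completed = temporal_complete(normalized)
        print(len(signal), "->", len(completed), "samples; late normalized amplitude ~",
              round(float(np.abs(normalized[-50:]).mean()), 2))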

  12. Feature-constrained surface reconstruction approach for point cloud data acquired with 3D laser scanner

    NASA Astrophysics Data System (ADS)

    Wang, Yongbo; Sheng, Yehua; Lu, Guonian; Tian, Peng; Zhang, Kai

    2008-04-01

    Surface reconstruction is an important task in the fields of 3D GIS, computer-aided design and computer graphics (CAD & CG), virtual simulation and so on. Based on available incremental surface reconstruction methods, a feature-constrained surface reconstruction approach for point clouds is presented. Firstly, features are extracted from the point cloud under the rules of curvature extremes and a minimum spanning tree. By projecting local sample points onto fitted tangent planes and using the extracted features to guide and constrain the process of local triangulation and surface propagation, the topological relationships among sample points can be recovered. For the constructed models, a process named consistent normal adjustment and regularization is adopted to adjust the normal of each face so that a correct surface model is achieved. Experiments show that the presented approach inherits the convenient implementation and high efficiency of traditional incremental surface reconstruction methods while avoiding improper propagation of normals across sharp edges, which greatly improves the applicability of incremental surface reconstruction. Moreover, an appropriate k-neighborhood helps to recognize insufficiently sampled areas and boundary parts, and the presented approach can be used to reconstruct both open and closed surfaces without additional interference.
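
    Two ingredients of such pipelines, tangent-plane (normal) estimation from a k-neighborhood and a consistency pass over the normal directions, are sketched below. The brute-force neighbor search, the k value, and the single-viewpoint flip rule are simplifications for illustration; the feature extraction, minimum-spanning-tree guidance, and propagation steps of the presented approach are not reproduced.

        import numpy as np

        def estimate_normals(points, k=12):
            """Per-point unit normals from PCA of the k nearest neighbours (smallest eigenvector)."""
            d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)   # brute-force kNN
            normals = np.empty_like(points)
            for i, row in enumerate(d2):
                nbrs = points[np.argsort(row)[:k]]
                w, v = np.linalg.eigh(np.cov((nbrs - nbrs.mean(0)).T))
                normals[i] = v[:, 0]                      # eigenvector of the smallest eigenvalue
            return normals

        def orient_towards(points, normals, viewpoint=(0.0, 0.0, 10.0)):
            """Crude consistency pass: flip normals that point away from a scanner viewpoint."""
            flip = ((np.asarray(viewpoint) - points) * normals).sum(1) < 0
            normals[flip] *= -1.0
            return normals

        rng = np.random.default_rng(8)
        theta, phi = rng.uniform(0, np.pi, 400), rng.uniform(0, 2 * np.pi, 400)
        sphere = np.c_[np.sin(theta) * np.cos(phi), np.sin(theta) * np.sin(phi), np.cos(theta)]
        n = orient_towards(sphere, estimate_normals(sphere))
        # On a unit sphere the true normal is the radial direction, so this should be close to 1.
        print("mean |normal . radial| =", round(float(np.abs((n * sphere).sum(1)).mean()), 3))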

  13. Working memory, situation models, and synesthesia

    DOE PAGES

    Radvansky, Gabriel A.; Gibson, Bradley S.; McNerney, M. Windy

    2013-03-04

    Research on language comprehension suggests a strong relationship between working memory span measures and language comprehension. However, there is also evidence that this relationship weakens at higher levels of comprehension, such as the situation model level. The current study explored this relationship by comparing 10 grapheme–color synesthetes who have additional color experiences when they read words that begin with different letters and 48 normal controls on a number of tests of complex working memory capacity and processing at the situation model level. On all tests of working memory capacity, the synesthetes outperformed the controls. Importantly, there was no carryover benefit for the synesthetes for processing at the situation model level. This reinforces the idea that although some aspects of language comprehension are related to working memory span scores, this applies less directly to situation model levels. As a result, this suggests that theories of working memory must take into account this limitation, and the working memory processes that are involved in situation model construction and processing must be derived.

  14. Working memory, situation models, and synesthesia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radvansky, Gabriel A.; Gibson, Bradley S.; McNerney, M. Windy

    Research on language comprehension suggests a strong relationship between working memory span measures and language comprehension. However, there is also evidence that this relationship weakens at higher levels of comprehension, such as the situation model level. The current study explored this relationship by comparing 10 grapheme–color synesthetes who have additional color experiences when they read words that begin with different letters and 48 normal controls on a number of tests of complex working memory capacity and processing at the situation model level. On all tests of working memory capacity, the synesthetes outperformed the controls. Importantly, there was no carryover benefit for the synesthetes for processing at the situation model level. This reinforces the idea that although some aspects of language comprehension are related to working memory span scores, this applies less directly to situation model levels. As a result, this suggests that theories of working memory must take into account this limitation, and the working memory processes that are involved in situation model construction and processing must be derived.

  15. [Reason and emotion: integration of cognitive-behavioural and experiential interventions in the treatment of long-standing eating disorders].

    PubMed

    Vilariño Besteiro, M P; Pérez Franco, C; Gallego Morales, L; Calvo Sagardoy, R; García de Lorenzo, A

    2009-01-01

    This paper aims to show how therapeutic strategies are combined in the treatment of long-standing eating disorders. This way of working, named the "Modelo Santa Cristina", is based on several theoretical paradigms: the Enabling Model, the Action Control Model, the Transtheoretical Model of the change process, and the Cognitive-Behavioural Model (cognitive restructuring and learning theories), together with techniques from the Gestalt, systemic and psychodrama orientations. The purpose of the treatment is both the normalization of eating patterns and an increase in patients' self-knowledge, self-acceptance and self-efficacy. The main areas of intervention include exploration of ambivalence to change, discovery of the functions of symptoms and the search for alternative behaviours, normalization of eating patterns, body image, cognitive restructuring, decision making, communication skills and elaboration of traumatic experiences.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Kandler A; Santhanagopalan, Shriram; Yang, Chuanbo

    Computer models are helping to accelerate the design and validation of next generation batteries and provide valuable insights not possible through experimental testing alone. Validated 3-D physics-based models exist for predicting electrochemical performance, thermal and mechanical response of cells and packs under normal and abuse scenarios. The talk describes present efforts to make the models better suited for engineering design, including improving their computation speed, developing faster processes for model parameter identification including under aging, and predicting the performance of a proposed electrode material recipe a priori using microstructure models.

  17. Relating normalization to neuronal populations across cortical areas.

    PubMed

    Ruff, Douglas A; Alberts, Joshua J; Cohen, Marlene R

    2016-09-01

    Normalization, which divisively scales neuronal responses to multiple stimuli, is thought to underlie many sensory, motor, and cognitive processes. In every study where it has been investigated, neurons measured in the same brain area under identical conditions exhibit a range of normalization, ranging from suppression by nonpreferred stimuli (strong normalization) to additive responses to combinations of stimuli (no normalization). Normalization has been hypothesized to arise from interactions between neuronal populations, either in the same or different brain areas, but current models of normalization are not mechanistic and focus on trial-averaged responses. To gain insight into the mechanisms underlying normalization, we examined interactions between neurons that exhibit different degrees of normalization. We recorded from multiple neurons in three cortical areas while rhesus monkeys viewed superimposed drifting gratings. We found that neurons showing strong normalization shared less trial-to-trial variability with other neurons in the same cortical area and more variability with neurons in other cortical areas than did units with weak normalization. Furthermore, the cortical organization of normalization was not random: neurons recorded on nearby electrodes tended to exhibit similar amounts of normalization. Together, our results suggest that normalization reflects a neuron's role in its local network and that modulatory factors like normalization share the topographic organization typical of sensory tuning properties. Copyright © 2016 the American Physiological Society.

  18. Relating normalization to neuronal populations across cortical areas

    PubMed Central

    Alberts, Joshua J.; Cohen, Marlene R.

    2016-01-01

    Normalization, which divisively scales neuronal responses to multiple stimuli, is thought to underlie many sensory, motor, and cognitive processes. In every study where it has been investigated, neurons measured in the same brain area under identical conditions exhibit a range of normalization, ranging from suppression by nonpreferred stimuli (strong normalization) to additive responses to combinations of stimuli (no normalization). Normalization has been hypothesized to arise from interactions between neuronal populations, either in the same or different brain areas, but current models of normalization are not mechanistic and focus on trial-averaged responses. To gain insight into the mechanisms underlying normalization, we examined interactions between neurons that exhibit different degrees of normalization. We recorded from multiple neurons in three cortical areas while rhesus monkeys viewed superimposed drifting gratings. We found that neurons showing strong normalization shared less trial-to-trial variability with other neurons in the same cortical area and more variability with neurons in other cortical areas than did units with weak normalization. Furthermore, the cortical organization of normalization was not random: neurons recorded on nearby electrodes tended to exhibit similar amounts of normalization. Together, our results suggest that normalization reflects a neuron's role in its local network and that modulatory factors like normalization share the topographic organization typical of sensory tuning properties. PMID:27358313

  19. A dual-task investigation of automaticity in visual word processing

    NASA Technical Reports Server (NTRS)

    McCann, R. S.; Remington, R. W.; Van Selst, M.

    2000-01-01

    An analysis of activation models of visual word processing suggests that frequency-sensitive forms of lexical processing should proceed normally while unattended. This hypothesis was tested by having participants perform a speeded pitch discrimination task followed by lexical decisions or word naming. As the stimulus onset asynchrony between the tasks was reduced, lexical-decision and naming latencies increased dramatically. Word-frequency effects were additive with the increase, indicating that frequency-sensitive processing was subject to postponement while attention was devoted to the other task. Either (a) the same neural hardware shares responsibility for lexical processing and central stages of choice reaction time task processing and cannot perform both computations simultaneously, or (b) lexical processing is blocked in order to optimize performance on the pitch discrimination task. Either way, word processing is not as automatic as activation models suggest.

  20. Qualitative and Quantitative Distinctions in Personality Disorder

    PubMed Central

    Wright, Aidan G. C.

    2011-01-01

    The “categorical-dimensional debate” has catalyzed a wealth of empirical advances in the study of personality pathology. However, this debate is merely one articulation of a broader conceptual question regarding whether to define and describe psychopathology as a quantitatively extreme expression of normal functioning or as qualitatively distinct in its process. In this paper I argue that dynamic models of personality (e.g., object-relations, cognitive-affective processing system) offer the conceptual scaffolding to reconcile these seemingly incompatible approaches to characterizing the relationship between normal and pathological personality. I propose that advances in personality assessment that sample behavior and experiences intensively provide the empirical techniques, whereas interpersonal theory offers an integrative theoretical framework, for accomplishing this goal. PMID:22804676

  1. [Effect of Small Knife Needle on β-endorphin and Enkephalin Contents in Transverse Process Syndrome of the Third Vertebra].

    PubMed

    Liu, Nai-gang; Guo, Chang-qing; Sun, Hong-mei; Li, Xiao-hong; Wu, Hai-xia; Xu, Hong

    2016-04-01

    To explore the analgesic mechanism of small knife needle for treating transverse process syndrome of the third vertebra (TPSTV) by observing peripheral and central changes of β-endorphin (β-EP) and enkephalin (ENK) contents. Totally 30 Japanese white big-ear rabbits of clean grade were divided into 5 groups according to a random digit table, i.e., the normal control group, the model group, the small knife needle group, the electroacupuncture (EA) group, and the small knife needle plus EA group, 6 in each group. The TPSTV model was established by inserting a piece of gelatin sponge into the left transverse process of the 3rd lumbar vertebra. Rabbits in the small knife needle group were intervened by small knife needle. Those in the EA group were intervened by EA at bilateral Weizhong (BL40). Those in the small knife needle plus EA group were intervened by small knife needle and EA at bilateral Weizhong (BL40). Contents of β-EP and ENK in plasma, muscle, spinal cord, and hypothalamus were determined after sample collection on day 28 after modeling. Compared with the normal control group, contents of β-EP and ENK in plasma and muscle increased significantly, and contents of β-EP and ENK in spinal cord and hypothalamus decreased significantly in the model group (P < 0.05, P < 0.01). Contents of β-EP and ENK approximated normal levels in the three treatment groups after respective treatment. Compared with the model group, the content of β-EP in muscle decreased, and contents of β-EP and ENK in hypothalamus increased in the three treatment groups after respective treatment (P < 0.05). There was no significant difference among the three treatment groups (P > 0.05). Small knife needle treatment and EA had a benign regulatory effect on peripheral and central β-EP and ENK in TPSTV rabbits. Small knife needle treatment showed a better effect than that of EA.

  2. Electrochemical oxidation of ampicillin antibiotic at boron-doped diamond electrodes and process optimization using response surface methodology.

    PubMed

    Körbahti, Bahadır K; Taşyürek, Selin

    2015-03-01

    Electrochemical oxidation and process optimization of the antibiotic ampicillin at boron-doped diamond (BDD) electrodes were investigated in a batch electrochemical reactor. The influence of operating parameters, such as ampicillin concentration, electrolyte concentration, current density, and reaction temperature, on ampicillin removal, COD removal, and energy consumption was analyzed in order to optimize the electrochemical oxidation process under specified cost-driven constraints using response surface methodology. Quadratic models for the responses satisfied the assumptions of the analysis of variance, as judged from normal probability, studentized residual, and outlier t plots. The residuals followed a normal distribution, and the outlier t values indicated that the fitted quadratic response surfaces approximated the data very well. Optimum operating conditions were determined as 618 mg/L ampicillin concentration, 3.6 g/L electrolyte concentration, 13.4 mA/cm2 current density, and 36 °C reaction temperature. Under response-surface-optimized conditions, ampicillin removal, COD removal, and energy consumption were obtained as 97.1%, 92.5%, and 71.7 kWh/kg CODr, respectively.
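
    The response-surface step can be illustrated in a few lines: fit a two-factor quadratic model by least squares and read off its stationary point. The synthetic data below stand in for two of the four factors (say, current density and temperature, in coded units) and are not the study's measurements.

        import numpy as np

        rng = np.random.default_rng(9)
        x1, x2 = rng.uniform(-1, 1, 60), rng.uniform(-1, 1, 60)        # coded factor levels
        y = 90 - 8*(x1 - 0.3)**2 - 5*(x2 - 0.1)**2 + 2*x1*x2 + rng.normal(0, 1, 60)   # e.g. COD removal (%)

        # Quadratic response surface y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2.
        A = np.column_stack([np.ones_like(x1), x1, x2, x1*x2, x1**2, x2**2])
        b0, b1, b2, b12, b11, b22 = np.linalg.lstsq(A, y, rcond=None)[0]

        # Stationary point: solve [2*b11, b12; b12, 2*b22] @ x = -[b1, b2] (set both partials to zero).
        H = np.array([[2*b11, b12], [b12, 2*b22]])
        x_opt = np.linalg.solve(H, -np.array([b1, b2]))
        y_opt = (b0 + b1*x_opt[0] + b2*x_opt[1] + b12*x_opt[0]*x_opt[1]
                 + b11*x_opt[0]**2 + b22*x_opt[1]**2)
        print("stationary point (coded units):", np.round(x_opt, 2), " predicted response:", round(y_opt, 1))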

  3. A Near-Wall Reynolds-Stress Closure Without Wall Normals

    NASA Technical Reports Server (NTRS)

    Yuan, S. P.; So, R. M. C.

    1997-01-01

    Turbulent wall-bounded complex flows are commonly encountered in engineering practice and are of considerable interest in a variety of industrial applications. The presence of a wall significantly affects turbulence characteristics. In addition to the wall effects, turbulent wall-bounded flows become more complicated by the presence of additional body forces (e.g. centrifugal force and Coriolis force) and complex geometry. Most near-wall Reynolds stress models are developed from a high-Reynolds-number model which assumes turbulence is homogenous (or quasi-homogenous). Near-wall modifications are proposed to include wall effects in near-wall regions. In this process, wall normals are introduced. Good predictions could be obtained by Reynolds stress models with wall normals. However, ambiguity arises when the models are applied in flows with multiple walls. Many models have been proposed to model turbulent flows. Among them, Reynolds stress models, in which turbulent stresses are obtained by solving the Reynolds stress transport equations, have been proved to be the most successful ones. To apply the Reynolds stress models to wall-bounded flows, near-wall corrections accounting for the wall effects are needed, and the resulting models are called near-wall Reynolds stress models. In most of the existing near-wall models, the near-wall corrections invoke wall normals. These wall-dependent near-wall models are difficult to implement for turbulent flows with complex geometry and may give inaccurate predictions due to the ambiguity of wall normals at corners connecting multiple walls. The objective of this study is to develop a more general and flexible near-wall Reynolds stress model without using any wall-dependent variable for wall-bounded turbulent flows. With the aid of near-wall asymptotic analysis and results of direct numerical simulation, a new near-wall Reynolds stress model (NNWRS) is formulated based on Speziale et al.'s high-Reynolds-stress model with wall-independent near-wall corrections. Moreover, only one damping function is used for flows with a wide range of Reynolds numbers to ensure that the near-wall modifications diminish away from the walls.

  4. Dissociation of doubly charged clusters of lithium acetate: Asymmetric fission and breakdown of the liquid drop model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shukla, Anil

    2016-06-08

    Unimolecular and collision-induced dissociation of doubly charged lithium acetate clusters, (CH₃COOLi)ₙLi₂²⁺, demonstrated that Coulomb fission via charge separation is the dominant dissociation process, with no contribution from neutral evaporation, for all such ions from the critical limit to larger cluster ions, although the latter process has normally been observed in all earlier studies. These results are clearly in disagreement with Rayleigh's liquid drop model, which has been used successfully to predict the critical size and explain the fragmentation behavior of multiply charged clusters.

  5. [Topographological-anatomic changes in the structure of temporo-mandibular joint in case of fracture of the mandible condylar process at cervical level].

    PubMed

    Volkov, S I; Bazhenov, D V; Semkin, V A

    2011-01-01

    Pathological changes in the soft tissues surrounding the fracture site, as well as in the structural elements of the temporo-mandibular joint, always occurred in condylar process fractures with displacement at the cervical level of the mandible. Changes were also seen in the joint on the opposite, normal side. Modelling of a condylar process fracture at the mandibular cervical level by means of a three-dimensional computer model of the temporo-mandibular joint contributed to a proper understanding of how this pathology emerges, as well as to the prediction and elimination of disorders arising in the tissues adjacent to the fracture site.

  6. Impaired Albumin Uptake and Processing Promote Albuminuria in OVE26 Diabetic Mice

    PubMed Central

    Long, Y. S.; Zheng, S.; Kralik, P. M.; Benz, F. W.

    2016-01-01

    The importance of proximal tubule dysfunction to diabetic albuminuria is uncertain. OVE26 mice have the most severe albuminuria of all diabetic mouse models, but it is not known whether impaired tubule uptake and processing are contributing factors. In the current study, fluorescent albumin was used to follow the fate of albumin in OVE26 and normal mice. Compared to normal urine, OVE26 urine contained at least 23 times more intact fluorescent albumin but only 3-fold more 70 kD fluorescent dextran, indicating that a function other than size-selective glomerular sieving contributed to OVE26 albuminuria. Imaging of albumin was similar in normal and diabetic tubules for 3 hrs after injection. However, 3 days after injection a subset of OVE26 tubules retained strong albumin fluorescence, which was never observed in normal mice. OVE26 tubules with prolonged retention of injected albumin lost the capacity to take up albumin, and there was a significant correlation between tubules unable to eliminate fluorescent albumin and total albuminuria. TUNEL staining revealed a 76-fold increase in cell death in OVE26 tubules that retained fluorescent albumin. These results indicate that failure to process and dispose of internalized albumin leads to impaired albumin uptake, increased albuminuria, and tubule cell apoptosis. PMID:27822483

  7. Modelling stock order flows with non-homogeneous intensities from high-frequency data

    NASA Astrophysics Data System (ADS)

    Gorshenin, Andrey K.; Korolev, Victor Yu.; Zeifman, Alexander I.; Shorgin, Sergey Ya.; Chertok, Andrey V.; Evstafyev, Artem I.; Korchagin, Alexander Yu.

    2013-10-01

    A micro-scale model is proposed for the evolution of an information system such as the limit order book in financial markets. Within this model, the flows of orders (claims) are described by doubly stochastic Poisson processes, taking account of the stochastic character of the intensities of buy and sell orders that determine the price discovery mechanism. The proposed multiplicative model of stochastic intensities makes it possible to analyze the characteristics of the order flows as well as the instantaneous proportion of the forces of buyers and sellers, that is, the imbalance process, without modelling the external information background. The proposed model gives the opportunity to link the micro-scale (high-frequency) dynamics of the limit order book with macro-scale models of stock price processes of the form of subordinated Wiener processes by means of limit theorems of probability theory and, hence, to use the normal variance-mean mixture models of the corresponding heavy-tailed distributions. The approach can be useful in different areas with similar properties (e.g., in plasma physics).
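
    A minimal sketch of the doubly stochastic (Cox) order-flow idea follows: buy and sell arrivals are Poisson conditional on a common stochastic activity factor, and an imbalance process is formed from the two counting streams. The log-OU activity factor, baseline intensities, and smoothing window are assumptions for illustration, not the authors' calibrated model.

```python
import numpy as np

rng = np.random.default_rng(1)
T, dt = 1.0, 1e-3                      # one trading period discretized in steps dt
n = int(T / dt)

# Common stochastic activity factor (log-OU) multiplying buy/sell base intensities:
# a crude stand-in for a multiplicative doubly stochastic intensity model.
lam_buy, lam_sell = 800.0, 800.0       # baseline orders per unit time (assumed)
x = np.zeros(n)
for t in range(1, n):
    x[t] = x[t-1] - 2.0 * x[t-1] * dt + 0.5 * np.sqrt(dt) * rng.normal()
activity = np.exp(x)

# Conditional on the intensity path, counts in each time slice are Poisson.
buys = rng.poisson(lam_buy * activity * dt)
sells = rng.poisson(lam_sell * activity * dt)

# Imbalance process: instantaneous proportion of buying vs. selling pressure.
window = 100
kernel = np.ones(window)
b = np.convolve(buys, kernel, mode="same")
s = np.convolve(sells, kernel, mode="same")
imbalance = (b - s) / np.maximum(b + s, 1)
print("mean |imbalance| over the period:", np.abs(imbalance).mean().round(3))
```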

  8. Uranium Pyrophoricity Phenomena and Prediction (FAI/00-39)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    PLYS, M.G.

    2000-10-10

    The purpose of this report is to provide a topical reference on the phenomena and prediction of uranium pyrophoricity for the Hanford Spent Nuclear Fuel (SNF) Project with specific applications to SNF Project processes and situations. Spent metallic uranium nuclear fuel is currently stored underwater at the K basins in the Hanford 100 area, and planned processing steps include: (1) At the basins, cleaning and placing fuel elements and scrap into stainless steel multi-canister overpacks (MCOs) holding about 6 MT of fuel apiece; (2) At nearby cold vacuum drying (CVD) stations, draining, vacuum drying, and mechanically sealing the MCOs; (3) Shipping the MCOs to the Canister Storage Building (CSB) on the 200 Area plateau; and (4) Welding shut and placing the MCOs for interim (40 year) dry storage in closed CSB storage tubes cooled by natural air circulation through the surrounding vault. Damaged fuel elements have exposed and corroded fuel surfaces, which can exothermically react with water vapor and oxygen during normal process steps and in off-normal situations. A key process safety concern is the rate of reaction of damaged fuel and the potential for self-sustaining or runaway reactions, also known as uranium fires or fuel ignition. Uranium metal and one of its corrosion products, uranium hydride, are potentially pyrophoric materials. Dangers of pyrophoricity of uranium and its hydride have long been known in the U.S. Department of Energy (Atomic Energy Commission/DOE) complex and will be discussed more below; it is sufficient here to note that there are numerous documented instances of uranium fires during normal operations. The motivation for this work is to place the safety of the present process in proper perspective given past operational experience. Steps in development of such a perspective are: (1) Description of underlying physical causes for runaway reactions, (2) Modeling physical processes to explain runaway reactions, (3) Validation of the method against experimental data, (4) Application of the method to plausibly explain operational experience, and (5) Application of the method to present process steps to demonstrate process safety and margin. Essentially, the logic above is used to demonstrate that runaway reactions cannot occur during normal SNF Project process steps, and to illustrate the depth of the technical basis for such a conclusion. Some off-normal conditions are identified here that could potentially lead to runaway reactions. However, this document is not intended to provide an exhaustive analysis of such cases. In summary, this report provides a "toolkit" of models and approaches for analysis of pyrophoricity safety issues at Hanford, and the technical basis for the recommended approaches. A summary of recommended methods appears in Section 9.0.

  9. WEST-3 wind turbine simulator development

    NASA Technical Reports Server (NTRS)

    Hoffman, J. A.; Sridhar, S.

    1985-01-01

    The software developed for WEST-3, a new, all digital, and fully programmable wind turbine simulator is given. The process of wind turbine simulation on WEST-3 is described in detail. The major steps are the processing of the mathematical models, the preparation of the constant data, and the use of system software generated executable code for running on WEST-3. The mechanics of reformulation, normalization, and scaling of the mathematical models are discussed in detail, in particular the significance of reformulation, which leads to accurate simulations. Descriptions are given for the preprocessor computer programs which are used to prepare the constant data needed in the simulation. These programs, in addition to scaling and normalizing all the constants, relieve the user from having to generate a large number of constants used in the simulation. Also given are brief descriptions of the components of the WEST-3 system software: Translator, Assembler, Linker, and Loader. Also included are details of the aeroelastic rotor analysis, which is the center of a wind turbine simulation model; an analysis of the gimbal subsystem; and listings of the variables, constants, and equations used in the simulation.

  10. Using Multi-scale Dynamic Rupture Models to Improve Ground Motion Estimates: ALCF-2 Early Science Program Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ely, Geoffrey P.

    2013-10-31

    This project uses dynamic rupture simulations to investigate high-frequency seismic energy generation. The relevant phenomena (frictional breakdown, shear heating, effective normal-stress fluctuations, material damage, etc.) controlling rupture are strongly interacting and span many orders of magnitude in spatial scale, requiring high-resolution simulations that couple disparate physical processes (e.g., elastodynamics, thermal weakening, pore-fluid transport, and heat conduction). Compounding the computational challenge, we know that natural faults are not planar, but instead have roughness that can be approximated by power laws, potentially leading to large, multiscale fluctuations in normal stress. The capacity to perform 3D rupture simulations that couple these processes will provide guidance for constructing appropriate source models for high-frequency ground motion simulations. The improved rupture models from our multi-scale dynamic rupture simulations will be used to conduct physics-based (3D waveform modeling-based) probabilistic seismic hazard analysis (PSHA) for California. These calculations will provide numerous important seismic hazard results, including a state-wide extended earthquake rupture forecast with rupture variations for all significant events, a synthetic seismogram catalog for thousands of scenario events, and more than 5000 physics-based seismic hazard curves for California.
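
    To illustrate the power-law roughness idea mentioned above, the sketch below synthesizes a self-affine fault profile from a power-law spectrum and converts local slopes into first-order normal-stress perturbations about a nominal background. The Hurst exponent, roughness amplitude, background stress, and slope-to-stress scaling are all assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)
N, dx = 4096, 25.0                    # profile samples and spacing in metres (assumed)
k = np.fft.rfftfreq(N, d=dx)          # spatial frequencies (cycles per metre)

# Self-affine profile: power spectrum P(k) ~ k^(-2H-1) with Hurst exponent H (assumed).
H = 0.8
amp = np.zeros_like(k)
amp[1:] = k[1:] ** (-(2 * H + 1) / 2)
phase = rng.uniform(0, 2 * np.pi, k.size)
profile = np.fft.irfft(amp * np.exp(1j * phase), n=N)
profile *= (1e-3 * N * dx) / profile.std()   # RMS roughness = 0.1% of profile length (assumed)

# First-order estimate: local slope perturbs the resolved normal stress about a
# nominal 40 MPa background; the linearized factor of 2 is purely illustrative.
slope = np.gradient(profile, dx)
sigma_n = 40e6 * (1.0 + 2.0 * slope)
print(f"normal stress range: {sigma_n.min()/1e6:.1f} to {sigma_n.max()/1e6:.1f} MPa")
```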

  11. Feedforward Inhibition and Synaptic Scaling – Two Sides of the Same Coin?

    PubMed Central

    Lücke, Jörg

    2012-01-01

    Feedforward inhibition and synaptic scaling are important adaptive processes that control the total input a neuron can receive from its afferents. While often studied in isolation, the two have been reported to co-occur in various brain regions. The functional implications of their interactions remain unclear, however. Based on a probabilistic modeling approach, we show here that fast feedforward inhibition and synaptic scaling interact synergistically during unsupervised learning. In technical terms, we model the input to a neural circuit using a normalized mixture model with Poisson noise. We demonstrate analytically and numerically that, in the presence of lateral inhibition introducing competition between different neurons, Hebbian plasticity and synaptic scaling approximate the optimal maximum likelihood solutions for this model. Our results suggest that, beyond its conventional use as a mechanism to remove undesired pattern variations, input normalization can make typical neural interaction and learning rules optimal on the stimulus subspace defined through feedforward inhibition. Furthermore, learning within this subspace is more efficient in practice, as it helps avoid locally optimal solutions. Our results suggest a close connection between feedforward inhibition and synaptic scaling which may have important functional implications for general cortical processing. PMID:22457610

  12. Feedforward inhibition and synaptic scaling--two sides of the same coin?

    PubMed

    Keck, Christian; Savin, Cristina; Lücke, Jörg

    2012-01-01

    Feedforward inhibition and synaptic scaling are important adaptive processes that control the total input a neuron can receive from its afferents. While often studied in isolation, the two have been reported to co-occur in various brain regions. The functional implications of their interactions remain unclear, however. Based on a probabilistic modeling approach, we show here that fast feedforward inhibition and synaptic scaling interact synergistically during unsupervised learning. In technical terms, we model the input to a neural circuit using a normalized mixture model with Poisson noise. We demonstrate analytically and numerically that, in the presence of lateral inhibition introducing competition between different neurons, Hebbian plasticity and synaptic scaling approximate the optimal maximum likelihood solutions for this model. Our results suggest that, beyond its conventional use as a mechanism to remove undesired pattern variations, input normalization can make typical neural interaction and learning rules optimal on the stimulus subspace defined through feedforward inhibition. Furthermore, learning within this subspace is more efficient in practice, as it helps avoid locally optimal solutions. Our results suggest a close connection between feedforward inhibition and synaptic scaling which may have important functional implications for general cortical processing.
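
    A minimal sketch of the generative model referred to in the two abstracts above: a mixture of Poisson distributions over input patterns whose rates are normalized to a fixed total (the feedforward-inhibition analogue). The EM updates below show the maximum-likelihood target that Hebbian plasticity plus synaptic scaling is argued to approximate; the neural dynamics themselves are not reproduced, and all sizes and rates are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
D, K, N, A = 16, 3, 500, 50          # input dims, components, samples, total input (normalization)

# Toy data: each sample drawn from one of K Poisson "templates", each template
# normalized so its rates sum to A.
templates = rng.uniform(0.5, 2.0, (K, D))
templates *= A / templates.sum(axis=1, keepdims=True)
z = rng.integers(0, K, N)
X = rng.poisson(templates[z])

# EM for a mixture of Poissons (weights pi, rates W).
pi = np.full(K, 1.0 / K)
W = rng.uniform(1.0, 3.0, (K, D))
for _ in range(50):
    logp = X @ np.log(W).T - W.sum(axis=1) + np.log(pi)   # E-step (unnormalized log responsibilities)
    logp -= logp.max(axis=1, keepdims=True)
    r = np.exp(logp)
    r /= r.sum(axis=1, keepdims=True)
    pi = r.mean(axis=0)                                    # M-step
    W = (r.T @ X) / r.sum(axis=0)[:, None]
    W = np.maximum(W, 1e-6)

print("learned rate sums per component:", W.sum(axis=1).round(1))  # each should end up near A
```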

  13. Bayesian soft X-ray tomography using non-stationary Gaussian Processes

    NASA Astrophysics Data System (ADS)

    Li, Dong; Svensson, J.; Thomsen, H.; Medina, F.; Werner, A.; Wolf, R.

    2013-08-01

    In this study, a Bayesian based non-stationary Gaussian Process (GP) method for the inference of the soft X-ray emissivity distribution, along with its associated uncertainties, has been developed. For the investigation of equilibrium conditions and fast magnetohydrodynamic behaviors in nuclear fusion plasmas, it is of importance to infer, especially in the plasma center, spatially resolved soft X-ray profiles from a limited number of noisy line integral measurements. For this ill-posed inversion problem, Bayesian probability theory can provide a posterior probability distribution over all possible solutions under given model assumptions. Specifically, the use of a non-stationary GP to model the emission allows the model to adapt to the varying length scales of the underlying diffusion process. In contrast to other conventional methods, the prior regularization is realized in probabilistic form, which enhances the capability of uncertainty analysis; in consequence, scientists concerned with the reliability of their results will benefit from it. Under the assumption of normally distributed noise, the posterior distribution evaluated at a discrete number of points becomes a multivariate normal distribution whose mean and covariance are analytically available, making inversions and the calculation of uncertainty fast. Additionally, the hyper-parameters embedded in the model assumption can be optimized through a Bayesian Occam's Razor formalism and thereby automatically adjust the model complexity. This method is shown to produce convincing reconstructions and good agreement with independently calculated results from the Maximum Entropy and Equilibrium-Based Iterative Tomography Algorithm methods.
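
    The closed-form Gaussian posterior mentioned above can be illustrated with a small linear-inverse sketch: a 1D emissivity profile observed through noisy "line integrals" under a GP prior. A stationary squared-exponential kernel is used here for brevity (the paper's kernel is non-stationary), and the grid, chord geometry, and hyperparameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Discretized 1D emissivity profile f on a grid, observed through a linear
# operator R (rows = chords/line integrals) with Gaussian noise.
n = 100
x = np.linspace(0.0, 1.0, n)
true_f = np.exp(-((x - 0.5) / 0.12) ** 2)           # toy peaked emission profile
m = 20
R = np.zeros((m, n))
for i in range(m):                                   # crude "chords": boxcar averages
    a = rng.integers(0, n - 15)
    R[i, a:a + 15] = 1.0 / 15
sigma = 0.02
y = R @ true_f + sigma * rng.normal(size=m)

# Squared-exponential GP prior on f (assumed hyperparameters).
ell, s2 = 0.08, 1.0
K = s2 * np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / ell**2)

# Gaussian posterior: mean and covariance are available in closed form.
S = R @ K @ R.T + sigma**2 * np.eye(m)
G = K @ R.T @ np.linalg.solve(S, np.eye(m))
post_mean = G @ y
post_cov = K - G @ R @ K
print("max |error| of posterior mean:", np.abs(post_mean - true_f).max().round(3))
```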

  14. Bayesian soft X-ray tomography using non-stationary Gaussian Processes.

    PubMed

    Li, Dong; Svensson, J; Thomsen, H; Medina, F; Werner, A; Wolf, R

    2013-08-01

    In this study, a Bayesian based non-stationary Gaussian Process (GP) method for the inference of the soft X-ray emissivity distribution, along with its associated uncertainties, has been developed. For the investigation of equilibrium conditions and fast magnetohydrodynamic behaviors in nuclear fusion plasmas, it is of importance to infer, especially in the plasma center, spatially resolved soft X-ray profiles from a limited number of noisy line integral measurements. For this ill-posed inversion problem, Bayesian probability theory can provide a posterior probability distribution over all possible solutions under given model assumptions. Specifically, the use of a non-stationary GP to model the emission allows the model to adapt to the varying length scales of the underlying diffusion process. In contrast to other conventional methods, the prior regularization is realized in probabilistic form, which enhances the capability of uncertainty analysis; in consequence, scientists concerned with the reliability of their results will benefit from it. Under the assumption of normally distributed noise, the posterior distribution evaluated at a discrete number of points becomes a multivariate normal distribution whose mean and covariance are analytically available, making inversions and the calculation of uncertainty fast. Additionally, the hyper-parameters embedded in the model assumption can be optimized through a Bayesian Occam's Razor formalism and thereby automatically adjust the model complexity. This method is shown to produce convincing reconstructions and good agreement with independently calculated results from the Maximum Entropy and Equilibrium-Based Iterative Tomography Algorithm methods.

  15. Contact resistance and normal zone formation in coated yttrium barium copper oxide superconductors

    NASA Astrophysics Data System (ADS)

    Duckworth, Robert Calvin

    2001-11-01

    This project presents a systematic study of contact resistance and normal zone formation in silver-coated YBa2Cu3Ox (YBCO) superconductors. A unique opportunity exists in YBCO superconductors because of the ability to use oxygen annealing to influence the interfacial properties, and because the planar geometry of this type of superconductor makes it possible to characterize the contact resistance between the silver and the YBCO. The interface represents a region that current must cross when normal zones form in the superconductor, and a high contact resistance could impede the current transfer or produce excess Joule heating that would result in premature quench or damage of the sample. While it has been shown for single-crystalline YBCO processing methods that the contact resistance of the silver/YBCO interface can be influenced by post-process oxygen annealing, this has not previously been confirmed for high-density films, nor for samples with complete layers of silver deposited on top of the YBCO. Both the influence of contact resistance and knowledge of normal zone formation in conductor-sized samples are essential for their successful implementation into superconducting applications such as transmission lines and magnets. While normal zone formation and propagation have been studied in other high temperature superconductors, the amount of information with respect to YBCO has been very limited. This study establishes that the processing method for the YBCO does not affect the contact resistance and mirrors the dependence of contact resistance on oxygen annealing temperature observed in earlier work. It has also been experimentally confirmed that the current transfer length provides an effective representation of the contact resistance when compared to more direct measurements using the traditional four-wire method. Finally, for samples with low contact resistance, a combination of experiments and modeling demonstrates an accurate understanding of the key role of silver thickness and substrate thickness in the stability of silver-coated YBCO Rolling Assisted Bi-Axially Textured Substrates conductors. Both the experimental measurements and the one-dimensional model show that increasing the silver thickness results in an increased thermal runaway current; that is, the current above which normal zones continue to grow due to insufficient local cooling.

  16. A reduced order, test verified component mode synthesis approach for system modeling applications

    NASA Astrophysics Data System (ADS)

    Butland, Adam; Avitabile, Peter

    2010-05-01

    Component mode synthesis (CMS) is a very common approach used for the generation of large system models. In general, these modeling techniques can be separated into two categories: those utilizing a combination of constraint modes and fixed interface normal modes and those based on a combination of free interface normal modes and residual flexibility terms. The major limitation of the methods utilizing constraint modes and fixed interface normal modes is the inability to easily obtain the required information from testing; the result of this limitation is that constraint mode-based techniques are primarily used with numerical models. An alternate approach is proposed which utilizes frequency and shape information acquired from modal testing to update reduced order finite element models using exact analytical model improvement techniques. The connection degrees of freedom are then rigidly constrained in the test verified, reduced order model to provide the boundary conditions necessary for constraint modes and fixed interface normal modes. The CMS approach is then used with this test verified, reduced order model to generate the system model for further analysis. A laboratory structure is used to show the application of the technique with both numerical and simulated experimental components to describe the system and validate the proposed approach. Actual test data are then used in the proposed approach. Due to typical measurement data contaminants that are always included in any test, the measured data is further processed to remove contaminants and is then used in the proposed approach. The final case using improved data with the reduced order, test verified components is shown to produce very acceptable results from the Craig-Bampton component mode synthesis approach. Use of the technique, along with its strengths and weaknesses, is discussed.
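
    The Craig-Bampton ingredients named above (fixed-interface normal modes plus constraint modes) can be sketched on a toy lumped-parameter chain. This is not the paper's test-verified workflow; the mass/stiffness values, number of kept modes, and interface choice are assumptions for illustration.

```python
import numpy as np
from scipy.linalg import eigh

# Small chain of unit masses and springs; the last DOF is the interface (boundary) DOF.
n, k = 6, 1000.0
M = np.eye(n)
K = np.zeros((n, n))
for i in range(n - 1):
    K[i:i+2, i:i+2] += k * np.array([[1, -1], [-1, 1]])
K[0, 0] += k                                   # ground spring at the far end (assumed)

b = [n - 1]                                    # interface DOFs
ii = [i for i in range(n) if i not in b]       # interior DOFs

# Fixed-interface normal modes: eigenmodes of the interior with the boundary clamped.
n_modes = 2
_, vecs = eigh(K[np.ix_(ii, ii)], M[np.ix_(ii, ii)])
Phi = vecs[:, :n_modes]

# Constraint modes: static interior deflection for a unit boundary displacement.
Psi = -np.linalg.solve(K[np.ix_(ii, ii)], K[np.ix_(ii, b)])

# Craig-Bampton transformation u = T q with q = [modal coordinates, boundary DOFs].
T = np.zeros((n, n_modes + len(b)))
T[np.ix_(ii, range(n_modes))] = Phi
T[np.ix_(ii, range(n_modes, n_modes + len(b)))] = Psi
T[np.ix_(b, range(n_modes, n_modes + len(b)))] = np.eye(len(b))

M_cb, K_cb = T.T @ M @ T, T.T @ K @ T
w_full = np.sqrt(eigh(K, M, eigvals_only=True))
w_cb = np.sqrt(eigh(K_cb, M_cb, eigvals_only=True))
print("lowest natural frequencies (full vs. reduced):",
      w_full[:3].round(2), w_cb[:3].round(2))
```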

  17. In vitro experimental investigation of voice production

    PubMed Central

    Horáček, Jaromír; Brücker, Christoph; Becker, Stefan

    2012-01-01

    The process of human phonation involves a complex interaction between the physical domains of structural dynamics, fluid flow, and acoustic sound production and radiation. Given the high degree of nonlinearity of these processes, even small anatomical or physiological disturbances can significantly affect the voice signal. In the worst cases, patients can lose their voice and hence the normal mode of speech communication. To improve medical therapies and surgical techniques it is very important to understand better the physics of the human phonation process. Due to the limited experimental access to the human larynx, alternative strategies, including artificial vocal folds, have been developed. The following review gives an overview of experimental investigations of artificial vocal folds within the last 30 years. The models are sorted into three groups: static models, externally driven models, and self-oscillating models. The focus is on the different models of the human vocal folds and on the ways in which they have been applied. PMID:23181007

  18. A novel framework to simulating non-stationary, non-linear, non-Normal hydrological time series using Markov Switching Autoregressive Models

    NASA Astrophysics Data System (ADS)

    Birkel, C.; Paroli, R.; Spezia, L.; Tetzlaff, D.; Soulsby, C.

    2012-12-01

    In this paper we present a novel model framework using the class of Markov Switching Autoregressive Models (MSARMs) to examine catchments as complex stochastic systems that exhibit non-stationary, non-linear and non-Normal rainfall-runoff and solute dynamics. MSARMs are pairs of stochastic processes, one observed and one unobserved, or hidden. We model the unobserved process as a finite state Markov chain and assume that the observed process, given the hidden Markov chain, is conditionally autoregressive, which means that the current observation depends on its recent past (system memory). The model is fully embedded in a Bayesian analysis based on Markov Chain Monte Carlo (MCMC) algorithms for model selection and uncertainty assessment. Within this framework, the autoregressive order and the dimension of the hidden Markov chain state-space are essentially self-selected. The hidden states of the Markov chain represent unobserved levels of variability in the observed process that may result from complex interactions of hydroclimatic variability on the one hand and catchment characteristics affecting water and solute storage on the other. To deal with non-stationarity, additional meteorological and hydrological time series along with a periodic component can be included in the MSARMs as covariates. This extension allows identification of potential underlying drivers of temporal rainfall-runoff and solute dynamics. We applied the MSAR model framework to streamflow and conservative tracer (deuterium and oxygen-18) time series from an intensively monitored 2.3 km² experimental catchment in eastern Scotland. Statistical time series analysis, in the form of MSARMs, suggested that the streamflow and isotope tracer time series are not controlled by simple linear rules. MSARMs showed that the dependence of current observations on past inputs, which transport models often capture in the form of long-tailed travel time and residence time distributions, can be efficiently explained by non-stationarity of the system input (climatic variability) and/or the complexity of catchment storage characteristics. The statistical model is also capable of reproducing short-term (event) and longer-term (inter-event), wet and dry dynamical "hydrological states". These reflect the non-linear transport mechanisms of flow pathways induced by transient climatic and hydrological variables and modified by catchment characteristics. We conclude that MSARMs are a powerful tool to analyze the temporal dynamics of hydrological data, allowing for explicit integration of non-stationary, non-linear and non-Normal characteristics.
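
    A minimal sketch of the observation model described above: a two-state hidden Markov chain switching between two AR(1) regimes (for example wet and dry "hydrological states"). The transition matrix and the regime means, autoregressive coefficients, and noise scales are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two hidden "hydrological states" with different AR(1) dynamics (all values assumed).
P = np.array([[0.95, 0.05],       # state transition matrix
              [0.10, 0.90]])
mu    = np.array([1.0, 3.0])      # state-dependent mean level
phi   = np.array([0.6, 0.9])      # state-dependent autoregressive coefficient
sigma = np.array([0.2, 0.6])      # state-dependent noise scale

n = 2000
s = np.zeros(n, dtype=int)
y = np.zeros(n)
y[0] = mu[0]
for t in range(1, n):
    s[t] = rng.choice(2, p=P[s[t-1]])                       # hidden Markov chain
    y[t] = mu[s[t]] + phi[s[t]] * (y[t-1] - mu[s[t]]) \
           + sigma[s[t]] * rng.normal()                     # conditional AR(1) observation

print("time share in state 1:", (s == 1).mean().round(2))
print("lag-1 autocorrelation of the mixed series:",
      np.corrcoef(y[:-1], y[1:])[0, 1].round(2))
```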

  19. The social architecture of capitalism

    NASA Astrophysics Data System (ADS)

    Wright, Ian

    2005-02-01

    A dynamic model of the social relations between workers and capitalists is introduced. The model self-organises into a dynamic equilibrium with statistical properties that are in close qualitative and in many cases quantitative agreement with a broad range of known empirical distributions of developed capitalism, including the power-law firm size distribution, the Laplace firm and GDP growth distribution, the lognormal firm-demise distribution, the exponential recession duration distribution, the lognormal-Pareto income distribution, and the gamma-like firm rate-of-profit distribution. Normally these distributions are studied in isolation, but this model unifies and connects them within a single causal framework. The model also generates business cycle phenomena, including fluctuating wage and profit shares in national income about values consistent with empirical studies. The generation of an approximately lognormal-Pareto income distribution and an exponential-Pareto wealth distribution demonstrates that the power-law regime of the income distribution can be explained by an additive process on a power-law network that models the social relation between employers and employees organised in firms, rather than a multiplicative process that models returns to investment in financial markets. A testable consequence of the model is the conjecture that the rate-of-profit distribution is consistent with a parameter-mix of a ratio of normal variates with means and variances that depend on a firm size parameter that is distributed according to a power-law.
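
    The conjectured parameter mix in the final sentence can be sketched directly: draw a firm-size parameter from a power law, then a ratio of normal variates whose moments depend on it. The specific moment dependence below is purely illustrative and is not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000

# Firm-size parameter from a Pareto (power-law) distribution with tail exponent alpha (assumed).
alpha, smin = 2.0, 1.0
size = smin * (1.0 - rng.uniform(size=n)) ** (-1.0 / alpha)

# Rate of profit as a ratio of normal variates whose means and variances depend on
# firm size; the functional forms are invented for illustration.
profit  = rng.normal(loc=0.1 * size, scale=0.5 * np.sqrt(size))
capital = rng.normal(loc=size, scale=0.2 * np.sqrt(size))
keep = np.abs(capital) > 1e-3                   # guard against near-zero denominators
rate = profit[keep] / capital[keep]

print("median and IQR of the simulated rate-of-profit distribution:",
      np.round(np.percentile(rate, [50, 25, 75]), 3))
```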

  20. Normalization is a general neural mechanism for context-dependent decision making

    PubMed Central

    Louie, Kenway; Khaw, Mel W.; Glimcher, Paul W.

    2013-01-01

    Understanding the neural code is critical to linking brain and behavior. In sensory systems, divisive normalization seems to be a canonical neural computation, observed in areas ranging from retina to cortex and mediating processes including contrast adaptation, surround suppression, visual attention, and multisensory integration. Recent electrophysiological studies have extended these insights beyond the sensory domain, demonstrating an analogous algorithm for the value signals that guide decision making, but the effects of normalization on choice behavior are unknown. Here, we show that choice models using normalization generate significant (and classically irrational) choice phenomena driven by either the value or number of alternative options. In value-guided choice experiments, both monkey and human choosers show novel context-dependent behavior consistent with normalization. These findings suggest that the neural mechanism of value coding critically influences stochastic choice behavior and provide a generalizable quantitative framework for examining context effects in decision making. PMID:23530203
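
    The divisive normalization computation referred to above can be written in one line: each option's value is divided by a semi-saturation constant plus the summed value of all options. The form below is the standard normalization equation with an assumed constant, not necessarily the exact model fit in the study.

```python
import numpy as np

def normalized_value(values, sigma=1.0):
    """Divisive normalization: each option's value divided by sigma plus the sum of all values."""
    values = np.asarray(values, dtype=float)
    return values / (sigma + values.sum())

# Two equally attractive targets, then the same pair with a lower-valued third option added.
print(normalized_value([10.0, 8.0]))        # relative coding of the original pair
print(normalized_value([10.0, 8.0, 6.0]))   # both normalized values drop; their ratio is
                                            # preserved but their difference shrinks, the
                                            # kind of context effect discussed in the abstract
```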

  1. A predictive processing theory of sensorimotor contingencies: Explaining the puzzle of perceptual presence and its absence in synesthesia.

    PubMed

    Seth, Anil K

    2014-01-01

    Normal perception involves experiencing objects within perceptual scenes as real, as existing in the world. This property of "perceptual presence" has motivated "sensorimotor theories" which understand perception to involve the mastery of sensorimotor contingencies. However, the mechanistic basis of sensorimotor contingencies and their mastery has remained unclear. Sensorimotor theory also struggles to explain instances of perception, such as synesthesia, that appear to lack perceptual presence and for which relevant sensorimotor contingencies are difficult to identify. On alternative "predictive processing" theories, perceptual content emerges from probabilistic inference on the external causes of sensory signals; however, this view has addressed neither the problem of perceptual presence nor synesthesia. Here, I describe a theory of predictive perception of sensorimotor contingencies which (1) accounts for perceptual presence in normal perception, as well as its absence in synesthesia, and (2) operationalizes the notion of sensorimotor contingencies and their mastery. The core idea is that generative models underlying perception incorporate explicitly counterfactual elements related to how sensory inputs would change on the basis of a broad repertoire of possible actions, even if those actions are not performed. These "counterfactually-rich" generative models encode sensorimotor contingencies related to repertoires of sensorimotor dependencies, with counterfactual richness determining the degree of perceptual presence associated with a stimulus. While the generative models underlying normal perception are typically counterfactually rich (reflecting a large repertoire of possible sensorimotor dependencies), those underlying synesthetic concurrents are hypothesized to be counterfactually poor. In addition to accounting for the phenomenology of synesthesia, the theory naturally accommodates phenomenological differences between a range of experiential states including dreaming, hallucination, and the like. It may also lead to a new view of the (in)determinacy of normal perception.

  2. Spatio-temporal dynamics and laterality effects of face inversion, feature presence and configuration, and face outline

    PubMed Central

    Marinkovic, Ksenija; Courtney, Maureen G.; Witzel, Thomas; Dale, Anders M.; Halgren, Eric

    2014-01-01

    Although a crucial role of the fusiform gyrus (FG) in face processing has been demonstrated with a variety of methods, converging evidence suggests that face processing involves an interactive and overlapping processing cascade in distributed brain areas. Here we examine the spatio-temporal stages and their functional tuning to face inversion, presence and configuration of inner features, and face contour in healthy subjects during passive viewing. Anatomically-constrained magnetoencephalography (aMEG) combines high-density whole-head MEG recordings and distributed source modeling with high-resolution structural MRI. Each person's reconstructed cortical surface served to constrain noise-normalized minimum norm inverse source estimates. The earliest activity was estimated to the occipital cortex at ~100 ms after stimulus onset and was sensitive to an initial coarse level visual analysis. Activity in the right-lateralized ventral temporal area (inclusive of the FG) peaked at ~160 ms and was largest to inverted faces. Images containing facial features in the veridical and rearranged configuration irrespective of the facial outline elicited intermediate level activity. The M160 stage may provide structural representations necessary for downstream distributed areas to process identity and emotional expression. However, inverted faces additionally engaged the left ventral temporal area at ~180 ms and were uniquely subserved by bilateral processing. This observation is consistent with the dual route model and spared processing of inverted faces in prosopagnosia. The subsequent deflection, peaking at ~240 ms in the anterior temporal areas bilaterally, was largest to normal, upright faces. It may reflect initial engagement of the distributed network subserving individuation and familiarity. These results support dynamic models suggesting that processing of unfamiliar faces in the absence of a cognitive task is subserved by a distributed and interactive neural circuit. PMID:25426044

  3. Dynamic Rupture Simulations of 11 March 2011 Tohoku Earthquake

    NASA Astrophysics Data System (ADS)

    Kozdon, J. E.; Dunham, E. M.

    2012-12-01

    There is strong observational evidence that the 11 March 2011 Tohoku earthquake rupture reached the seafloor. This was unexpected because the shallow portion of the plate interface is believed to be frictionally stable and thus not capable of sustaining coseismic rupture. In order to explore this seeming inconsistency we have developed a two-dimensional dynamic rupture model of the Tohoku earthquake. The model uses a complex fault, seafloor, and material interface structure as derived from seismic surveys. We use a rate-and-state friction model with steady state shear strength depending logarithmically on slip velocity, i.e., there is no dynamic weakening in the model. The frictional parameters are depth dependent with the shallowest portions of the fault beneath the accretionary prism being velocity strengthening. The total normal stress on the fault is taken to be lithostatic and the pore pressure is hydrostatic until a maximum effective normal stress is reached (40 MPa in our preferred model) after which point the pore pressure follows the lithostatic gradient. We also account for poroelastic buffering of effective normal stress changes on the fault. The off-fault response is linear elastic. Using this model we find that large stress changes are dynamically transmitted to the shallowest portions of the fault by waves released by deep slip that are reflected off the seafloor. These stress changes are significant enough to drive the rupture through a velocity strengthening region that is tens of kilometers long. Rupture to the trench is therefore consistent with standard assumptions about depth-dependence of subduction zone properties, and does not require extreme dynamic weakening, shallow high stress drop asperities, or other exceptional processes. We also make direct comparisons with measured seafloor deformation and onshore 1-Hz GPS data from the Tohoku earthquake. Through these comparisons we are able to determine the sensitivity of these data to several dynamic source parameters (prestress, seismogenic depth, and the extent and frictional properties of the shallow plate interface). We find that there is a trade-off between the near-trench frictional properties and effective normal stress, particularly for onshore measurements. That is, the data can be equally well fit by either a velocity strengthening or velocity weakening near-trench fault segment, provided that compensating adjustments are also made to the maximum effective normal stress on the fault. On the other hand, the seismogenic depth is fairly well constrained from the static displacement field, independent of effective normal stress and near-trench properties. Finally, we show that a water layer (modeled as an isotropic linear acoustic material) has a negligible effect on the rupture process. That said, the inclusion of a water layer allows us to make important predictions concerning hydroacoustic signals that were observed by ocean bottom pressure sensors.
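
    The stress model stated in the abstract (lithostatic total normal stress, hydrostatic pore pressure up to a 40 MPa effective-stress cap, logarithmic steady-state rate-and-state strength) can be sketched as below. The densities, friction parameters f0, a-b, V0, and the depth of the velocity-strengthening region are nominal assumptions, not the study's calibrated values.

```python
import numpy as np

g = 9.8
rho_rock, rho_water = 2700.0, 1000.0        # nominal densities (kg/m^3)
z = np.linspace(0.0, 30e3, 301)             # depth below seafloor (m)

sigma_total = rho_rock * g * z              # lithostatic total normal stress
p_hydro = rho_water * g * z                 # hydrostatic pore pressure
sigma_eff = np.minimum(sigma_total - p_hydro, 40e6)   # capped at 40 MPa; below the cap
# depth the pore pressure effectively follows the lithostatic gradient.

# Steady-state rate-and-state strength with logarithmic velocity dependence:
#   tau_ss = sigma_eff * [f0 + (a - b) * ln(V / V0)]
f0, V0 = 0.6, 1e-6
a_minus_b = np.where(z < 10e3, +0.004, -0.004)   # velocity strengthening updip (assumed)
V = 1.0                                          # coseismic slip rate of ~1 m/s
tau_ss = sigma_eff * (f0 + a_minus_b * np.log(V / V0))

cap_depth = z[np.argmax(sigma_eff >= 40e6)] / 1e3
print(f"effective normal stress reaches the 40 MPa cap near {cap_depth:.1f} km depth")
print(f"steady-state strength at 5 km / 20 km: "
      f"{np.interp(5e3, z, tau_ss)/1e6:.1f} / {np.interp(20e3, z, tau_ss)/1e6:.1f} MPa")
```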

  4. Dynamic Bayesian network modeling for longitudinal brain morphometry

    PubMed Central

    Chen, Rong; Resnick, Susan M; Davatzikos, Christos; Herskovits, Edward H

    2011-01-01

    Identifying interactions among brain regions from structural magnetic-resonance images presents one of the major challenges in computational neuroanatomy. We propose a Bayesian data-mining approach to the detection of longitudinal morphological changes in the human brain. Our method uses a dynamic Bayesian network to represent evolving inter-regional dependencies. The major advantage of dynamic Bayesian network modeling is that it can represent complicated interactions among temporal processes. We validated our approach by analyzing a simulated atrophy study, and found that this approach requires only a small number of samples to detect the ground-truth temporal model. We further applied dynamic Bayesian network modeling to a longitudinal study of normal aging and mild cognitive impairment — the Baltimore Longitudinal Study of Aging. We found that interactions among regional volume-change rates for the mild cognitive impairment group are different from those for the normal-aging group. PMID:21963916

  5. Neurophysiological model of the normal and abnormal human pupil

    NASA Technical Reports Server (NTRS)

    Krenz, W.; Robin, M.; Barez, S.; Stark, L.

    1985-01-01

    Anatomical, experimental, and computer simulation studies were used to determine the structure of the neurophysiological model of the pupil size control system. The computer simulation of this model demonstrates the role played by each of the elements in the neurological pathways influencing the size of the pupil. Simulations of the effect of drugs and common abnormalities in the system help to illustrate the workings of the pathways and processes involved. The simulation program allows the user to select pupil condition (normal or an abnormality), specific site along the neurological pathway (retina, hypothalamus, etc.) drug class input (barbiturate, narcotic, etc.), stimulus/response mode, display mode, stimulus type and input waveform, stimulus or background intensity and frequency, the input and output conditions, and the response at the neuroanatomical site. The model can be used as a teaching aid or as a tool for testing hypotheses regarding the system.

  6. Designing management strategies for carbon dioxide storage and utilization under uncertainty using inexact modelling

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong

    2017-06-01

    Effective application of carbon capture, utilization and storage (CCUS) systems could help to alleviate the influence of climate change by reducing carbon dioxide (CO2) emissions. The research objective of this study is to develop an equilibrium chance-constrained programming model with bi-random variables (ECCP model) for supporting the CCUS management system under random circumstances. The major advantage of the ECCP model is that it tackles random variables as bi-random variables with a normal distribution, where the mean values follow a normal distribution. This could avoid irrational assumptions and oversimplifications in the process of parameter design and enrich the theory of stochastic optimization. The ECCP model is solved by an equilibrium chance-constrained programming algorithm, which makes it convenient for decision makers to rank the solution set using the natural order of real numbers. The ECCP model is applied to a CCUS management problem, and the solutions could be useful in helping managers to design and generate rational CO2-allocation patterns under complexities and uncertainties.
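
    The standard building block that the bi-random formulation generalizes is the deterministic equivalent of a single normal chance constraint, sketched below. The activities, CO2 intensities, cap statistics, and reliability level are invented, and the paper's equilibrium solution algorithm is not reproduced; the example only shows that a cap whose mean is itself normally distributed is marginally normal with the two variances added.

```python
import numpy as np
from scipy.stats import norm

# Single CO2-allocation budget constraint a·x <= B that must hold with probability
# 1 - eps, where the emission cap B is random. All numbers are illustrative.
a = np.array([1.2, 0.8, 1.5])        # CO2 intensity of three hypothetical activities
eps = 0.05

# Bi-random cap: B | mu ~ Normal(mu, sd_B^2) with mu ~ Normal(m, sd_mu^2);
# marginally B is again normal with the two variances added.
m, sd_mu, sd_B = 100.0, 5.0, 8.0
mu_B = m
sd_total = np.hypot(sd_B, sd_mu)

# Deterministic equivalent of P(a·x <= B) >= 1 - eps:
#   a·x <= mu_B + sd_total * Phi^{-1}(eps)
budget = mu_B + sd_total * norm.ppf(eps)
print(f"usable budget at {1-eps:.0%} reliability: {budget:.1f} (vs. mean cap {mu_B:.1f})")

x = np.array([30.0, 20.0, 15.0])     # a candidate allocation
print("allocation feasible:", a @ x <= budget)
```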

  7. Development of the Elastic Rebound Strike-slip (ERS) Fault Model for Teaching Earthquake Science to Non-science Students

    NASA Astrophysics Data System (ADS)

    Glesener, G. B.; Peltzer, G.; Stubailo, I.; Cochran, E. S.; Lawrence, J. F.

    2009-12-01

    The Modeling and Educational Demonstrations Laboratory (MEDL) at the University of California, Los Angeles has developed a fourth version of the Elastic Rebound Strike-slip (ERS) Fault Model to be used to educate students and the general public about the process and mechanics of earthquakes from strike-slip faults. The ERS Fault Model is an interactive hands-on teaching tool which produces failure on a predefined fault embedded in an elastic medium, with adjustable normal stress. With the addition of an accelerometer sensor, called the Joy Warrior, the user can experience what it is like for a field geophysicist to collect and observe ground shaking data from an earthquake without having to experience a real earthquake. Two knobs on the ERS Fault Model control the normal and shear stress on the fault. Adjusting the normal stress knob will increase or decrease the friction on the fault. The shear stress knob displaces one side of the elastic medium parallel to the strike of the fault, resulting in changing shear stress on the fault surface. When the shear stress exceeds the threshold defined by the static friction of the fault, an earthquake on the model occurs. The accelerometer sensor then sends the data to a computer where the shaking of the model due to the sudden slip on the fault can be displayed and analyzed by the student. The experiment clearly illustrates the relationship between earthquakes and seismic waves. One of the major benefits to using the ERS Fault Model in undergraduate courses is that it helps to connect non-science students with the work of scientists. When students that are not accustomed to scientific thought are able to experience the scientific process first hand, a connection is made between the scientists and students. Connections like this might inspire a student to become a scientist, or promote the advancement of scientific research through public policy.

  8. Impaired activity-dependent neural circuit assembly and refinement in autism spectrum disorder genetic models

    PubMed Central

    Doll, Caleb A.; Broadie, Kendal

    2014-01-01

    Early-use activity during circuit-specific critical periods refines brain circuitry by the coupled processes of eliminating inappropriate synapses and strengthening maintained synapses. We theorize these activity-dependent (A-D) developmental processes are specifically impaired in autism spectrum disorders (ASDs). ASD genetic models in both mouse and Drosophila have pioneered our insights into normal A-D neural circuit assembly and consolidation, and how these developmental mechanisms go awry in specific genetic conditions. The monogenic fragile X syndrome (FXS), a common cause of heritable ASD and intellectual disability, has been particularly well linked to defects in A-D critical period processes. The fragile X mental retardation protein (FMRP) is positively activity-regulated in expression and function, in turn regulates excitability and activity in a negative feedback loop, and appears to be required for the A-D remodeling of synaptic connectivity during early-use critical periods. The Drosophila FXS model has been shown to functionally conserve the roles of human FMRP in synaptogenesis, and has been centrally important in generating our current mechanistic understanding of the FXS disease state. Recent advances in Drosophila optogenetics, transgenic calcium reporters, highly-targeted transgenic drivers for individually-identified neurons, and a vastly improved connectome of the brain are now being combined to provide unparalleled opportunities to both manipulate and monitor A-D processes during critical period brain development in defined neural circuits. The field is now poised to exploit this new Drosophila transgenic toolbox for the systematic dissection of A-D mechanisms in normal versus ASD brain development, particularly utilizing the well-established Drosophila FXS disease model. PMID:24570656

  9. Using normalization 3D model for automatic clinical brain quantative analysis and evaluation

    NASA Astrophysics Data System (ADS)

    Lin, Hong-Dun; Yao, Wei-Jen; Hwang, Wen-Ju; Chung, Being-Tau; Lin, Kang-Ping

    2003-05-01

    Functional medical imaging, such as PET or SPECT, is capable of revealing physiological functions of the brain, and has been broadly used in diagnosing brain disorders by clinically quantitative analysis for many years. In routine procedures, physicians manually select desired ROIs from structural MR images and then obtain physiological information from the corresponding functional PET or SPECT images. The accuracy of quantitative analysis thus relies on that of the subjectively selected ROIs. Therefore, standardizing the analysis procedure is fundamental and important in improving the analysis outcome. In this paper, we propose and evaluate a normalization procedure with a standard 3D brain model to achieve precise quantitative analysis. In the normalization process, the mutual information registration technique was applied to realign functional medical images to standard structural medical images. Then, the standard 3D brain model, which shows well-defined brain regions, was used to replace the manual ROIs in the objective clinical analysis. To validate the performance, twenty cases of I-123 IBZM SPECT images were used in practical clinical evaluation. The results show that the quantitative analysis outcomes obtained from this automated method agree with the clinical diagnosis evaluation score with less than 3% error on average. To sum up, the method automatically obtains precise VOI information from the well-defined standard 3D brain model, sparing the manual slice-by-slice drawing of ROIs on structural medical images required by the traditional procedure. That is, the method not only provides precise analysis results but also improves the processing rate for large volumes of clinical images.
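
    The mutual-information similarity measure used for the registration step above can be estimated from a joint intensity histogram, as in the sketch below. The toy "MR" and "SPECT" slices are synthetic, no optimizer is included, and the histogram binning is an assumption; the point is only that MI is highest when the images are aligned.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """MI between two equally sized images, estimated from their joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Toy example: a smooth structural "MR" slice, a noisy intensity-remapped "SPECT"
# slice, and a shifted copy of it.
rng = np.random.default_rng(7)
mr = np.clip(rng.normal(size=(128, 128)), -3, 3)
mr = np.cumsum(np.cumsum(mr, axis=0), axis=1)        # smooth, structured field
spect_aligned = np.tanh(mr / mr.std()) + 0.1 * rng.normal(size=mr.shape)
spect_shifted = np.roll(spect_aligned, 12, axis=1)

print("MI aligned :", round(mutual_information(mr, spect_aligned), 3))
print("MI shifted :", round(mutual_information(mr, spect_shifted), 3))
```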

  10. Comparison of different modelling approaches of drive train temperature for the purposes of wind turbine failure detection

    NASA Astrophysics Data System (ADS)

    Tautz-Weinert, J.; Watson, S. J.

    2016-09-01

    Effective condition monitoring techniques for wind turbines are needed to improve maintenance processes and reduce operational costs. Normal behaviour modelling of temperatures with information from other sensors can help to detect wear processes in drive trains. In a case study, modelling of bearing and generator temperatures is investigated with operational data from the SCADA systems of more than 100 turbines. The focus here is on automated training and testing at the farm level to enable an on-line system that detects failures without human interpretation. Modelling based on linear combinations, artificial neural networks, adaptive neuro-fuzzy inference systems, support vector machines and Gaussian process regression is compared. The selection of suitable modelling inputs is discussed with cross-correlation analyses and a sensitivity study, which reveals that the investigated modelling techniques react in different ways to an increased number of inputs. The case study highlights advantages of modelling with linear combinations and artificial neural networks in a feedforward configuration.
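
    A minimal sketch of the linear-combination variant of normal behaviour modelling follows: regress a bearing temperature on other SCADA signals over healthy data and flag residuals that exceed a simple threshold. The signal names, coefficients, and 3-sigma alarm level are assumptions for illustration, not the study's configuration.

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic "healthy" SCADA history: power, ambient temperature and rotor speed
# explaining a bearing temperature (all signals and coefficients invented).
n = 5000
power   = rng.uniform(0, 2000, n)          # kW
ambient = rng.uniform(-5, 25, n)           # deg C
speed   = rng.uniform(8, 16, n)            # rpm
bearing = 30 + 0.012*power + 0.6*ambient + 0.8*speed + rng.normal(0, 1.0, n)

# Normal behaviour model: ordinary least squares on healthy data.
X = np.column_stack([np.ones(n), power, ambient, speed])
beta, *_ = np.linalg.lstsq(X, bearing, rcond=None)
resid = bearing - X @ beta
threshold = 3.0 * resid.std()              # simple 3-sigma residual alarm level (assumed)

# New operating point with an extra 6 deg C of unexplained heating (a "wear" signature).
x_new = np.array([1.0, 1500.0, 10.0, 12.0])
t_new = x_new @ beta + 6.0
residual_new = float(t_new - x_new @ beta)
print("residual:", round(residual_new, 2), "alarm:", bool(residual_new > threshold))
```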

  11. Simulating large-scale pedestrian movement using CA and event driven model: Methodology and case study

    NASA Astrophysics Data System (ADS)

    Li, Jun; Fu, Siyao; He, Haibo; Jia, Hongfei; Li, Yanzhong; Guo, Yi

    2015-11-01

    Large-scale regional evacuation is an important part of national security emergency response plans. The emergency evacuation of large commercial shopping areas, as typical service systems, is an active research topic. A systematic methodology based on Cellular Automata with a Dynamic Floor Field and an event-driven model has been proposed, and the methodology has been examined in the context of a case study involving the evacuation of a commercial shopping mall. Pedestrian walking is based on the Cellular Automata and event-driven model. In this paper, the event-driven model is adopted to simulate pedestrian movement patterns, and the simulation process is divided into a normal situation and an emergency evacuation. The model is composed of four layers: an environment layer, a customer layer, a clerk layer and a trajectory layer. For simulating the movement routes of pedestrians, the model takes into account the purchase intention of customers and the density of pedestrians. The combined evacuation model of Cellular Automata with a Dynamic Floor Field and the event-driven model reflects the behavioral characteristics of customers and clerks in both normal and emergency situations. The distribution of individual evacuation times as a function of initial positions and the dynamics of the evacuation process are studied. Our results indicate that the evacuation model combining Cellular Automata with a Dynamic Floor Field and event-driven scheduling can be used to simulate the evacuation of pedestrian flows in indoor areas with complicated surroundings and to investigate the layout of a shopping mall.

  12. Cognitive Function in Normal-Weight, Overweight, and Obese Older Adults: An Analysis of the Advanced Cognitive Training for Independent and Vital Elderly Cohort

    PubMed Central

    Kuo, Hsu-Ko; Jones, Richard N.; Milberg, William P.; Tennstedt, Sharon; Talbot, Laura; Morris, John N.; Lipsitz, Lewis A.

    2010-01-01

    OBJECTIVES To assess how elevated body mass index (BMI) affects cognitive function in elderly people. DESIGN Cross-sectional study. SETTING Data for this cross-sectional study were taken from a multicenter randomized controlled trial, the Advanced Cognitive Training for Independent and Vital Elderly trial. PARTICIPANTS The analytic sample included 2,684 normal-weight, overweight, or obese subjects aged 65 to 94. MEASUREMENTS Evaluation of cognitive abilities was performed in several domains: global cognition, memory, reasoning, and speed of processing. Cross-sectional association between body weight status and cognitive functions was analyzed using multiple linear regression. RESULTS Overweight subjects had better performance on a reasoning task (β = 0.23, standard error (SE) = 0.11, P = .04) and the Useful Field of View (UFOV) measure (β = −39.46, SE = 12.95, P = .002), a test of visuospatial speed of processing, after controlling for age, sex, race, years of education, intervention group, study site, and cardiovascular risk factors. Subjects with class I (BMI 30.0–34.9 kg/m²) and class II (BMI > 35.0 kg/m²) obesity had better UFOV measure scores (β = −38.98, SE = 14.77, P = .008; β = −35.75, SE = 17.65, P = .04, respectively) in the multivariate model than normal-weight subjects. The relationships between BMI and individual cognitive domains were nonlinear. CONCLUSION Overweight participants had better cognitive performance in terms of reasoning and visuospatial speed of processing than normal-weight participants. Obesity was associated with better performance in visuospatial speed of processing than normal weight. The relationship between BMI and cognitive function should be studied prospectively. PMID:16420204

  13. [The Application of Grief Theories to Bereaved Family Members].

    PubMed

    Wu, Lee-Jen Suen; Chou, Chuan-Chiang; Lin, Yen-Chun

    2017-12-01

    Loss is an inevitable experience for humans for which grief is a natural response. Nurses must have an adequate understanding of grief and bereavement in order to be more sensitive to these painful emotions and to provide appropriate care to families who have lost someone they love deeply. This article introduces four important grief theories: Freud's grief theory, Bowlby's attachment theory, Stroebe and Schut's dual process model, and Neimeyer's meaning reconstruction model. Freud's grief theory holds that the process of grief adaptation involves a bereaved family adopting alternative ways to connect with the death of a loved one and to restore their self-ego. Attachment theory holds that individuals who undergo grieving that is caused by separation from significant others and that triggers the process of grief adaptation will fail to adapt if they resist change. The dual process model holds that bereaved families undergo grief adaptation not only as a way to face their loss but also to restore normality in their lives. Finally, the meaning reconstruction model holds that the grief-adaptation strength of bereaved families comes from their meaning reconstruction in response to encountered events. It is hoped that these theories offer nurses different perspectives on the grieving process and provide a practical framework for grief assessment and interventions. Additionally, specific interventions that are based on these four grief theories are recommended. Furthermore, theories of grief may help nurses gain insight into their own practice-related reactions and healing processes, which is an important part of caring for the grieving. Although the grieving process is time consuming, nurses who better understand grief will be better able to help family members prepare in advance for the death of a loved one and, in doing so, help facilitate their healing, with a view to the future and to finally returning to normal daily life.

  14. Murine Electrophysiological Models of Cardiac Arrhythmogenesis

    PubMed Central

    2016-01-01

    Cardiac arrhythmias can follow disruption of the normal cellular electrophysiological processes underlying excitable activity and their tissue propagation as coherent wavefronts from the primary sinoatrial node pacemaker, through the atria, conducting structures and ventricular myocardium. These physiological events are driven by interacting, voltage-dependent, processes of activation, inactivation, and recovery in the ion channels present in cardiomyocyte membranes. Generation and conduction of these events are further modulated by intracellular Ca2+ homeostasis, and metabolic and structural change. This review describes experimental studies on murine models for known clinical arrhythmic conditions in which these mechanisms were modified by genetic, physiological, or pharmacological manipulation. These exemplars yielded molecular, physiological, and structural phenotypes often directly translatable to their corresponding clinical conditions, which could be investigated at the molecular, cellular, tissue, organ, and whole animal levels. Arrhythmogenesis could be explored during normal pacing activity, regular stimulation, following imposed extra-stimuli, or during progressively incremented steady pacing frequencies. Arrhythmic substrate was identified with temporal and spatial functional heterogeneities predisposing to reentrant excitation phenomena. These could arise from abnormalities in cardiac pacing function, tissue electrical connectivity, and cellular excitation and recovery. Triggering events during or following recovery from action potential excitation could thereby lead to sustained arrhythmia. These surface membrane processes were modified by alterations in cellular Ca2+ homeostasis and energetics, as well as cellular and tissue structural change. Study of murine systems thus offers major insights into both our understanding of normal cardiac activity and its propagation, and their relationship to mechanisms generating clinical arrhythmias. PMID:27974512

  15. Memory and Learning--Using Mouse to Model Neurobiological and Behavioural Aspects of Down Syndrome and Assess Pharmacotherapeutics

    ERIC Educational Resources Information Center

    Gardiner, Katheleen

    2009-01-01

    Mouse models are a standard tool in the study of many human diseases, providing insights into the normal functions of a gene, how these are altered in disease and how they contribute to a disease process, as well as information on drug action, efficacy and side effects. Our knowledge of human genes, their genetics, functions, interactions and…

  16. Disordered models of acquired dyslexia

    NASA Astrophysics Data System (ADS)

    Virasoro, M. A.

    We show that certain specific correlations in the probability of errors observed in dyslexic patients, which are normally explained by introducing additional complexity into the model of the reading process, are typical of any neural network system that has learned to deal with a quasiregular environment. We also show that in neural networks the more regular behavior does not naturally become the default behavior.

  17. The Effects of Modeled Microgravity on Nucleocytoplasmic Localization of Human Apurinic/Apyrimidinic

    NASA Technical Reports Server (NTRS)

    Gonda, Steve; Jackson, E.B.

    2004-01-01

    Humans are exposed to space radiation and microgravity during space flight. In order to obtain accurate risk estimates, it is important to determine whether the increased DNA damage seen during space flight is modified by microgravity. Several studies have examined whether intracellular repair of radiation-induced DNA lesions is modified by microgravity. Results from these studies show no modification of the repair processes due to microgravity. However, it is known from studies not involving radiation that microgravity interferes with normal development. Interestingly, there are no data that examine the possible effects of microgravity on the trafficking of DNA repair proteins. In this study, we analyze the effects of modeled microgravity on nucleocytoplasmic shuttling of the human DNA repair enzyme apurinic/apyrimidinic endonuclease 1 (APE1/Ref1), which is involved in base excision repair. We examined nuclear translocation of APE1 using enhanced green fluorescent protein (EGFP) fused to APE1 as a reporter. While APE1 under normal gravity showed normal nuclear localization, APE1 nuclear localization under modeled microgravity was decreased. These results suggest that nucleocytoplasmic translocation of APE1 is modified under modeled microgravity.

  18. Extinction models for cancer stem cell therapy

    PubMed Central

    Sehl, Mary; Zhou, Hua; Sinsheimer, Janet S.; Lange, Kenneth L.

    2012-01-01

    Cells with stem cell-like properties are now viewed as initiating and sustaining many cancers. This suggests that cancer can be cured by driving these cancer stem cells to extinction. The problem with this strategy is that ordinary stem cells are apt to be killed in the process. This paper sets bounds on the killing differential (difference between death rates of cancer stem cells and normal stem cells) that must exist for the survival of an adequate number of normal stem cells. Our main tools are birth–death Markov chains in continuous time. In this framework, we investigate the extinction times of cancer stem cells and normal stem cells. Application of extreme value theory from mathematical statistics yields an accurate asymptotic distribution and corresponding moments for both extinction times. We compare these distributions for the two cell populations as a function of the killing rates. Perhaps a more telling comparison involves the number of normal stem cells NH at the extinction time of the cancer stem cells. Conditioning on the asymptotic time to extinction of the cancer stem cells allows us to calculate the asymptotic mean and variance of NH. The full distribution of NH can be retrieved by the finite Fourier transform and, in some parameter regimes, by an eigenfunction expansion. Finally, we discuss the impact of quiescence (the resting state) on stem cell dynamics. Quiescence can act as a sanctuary for cancer stem cells and imperils the proposed therapy. We approach the complication of quiescence via multitype branching process models and stochastic simulation. Improvements to the τ-leaping method of stochastic simulation make it a versatile tool in this context. We conclude that the proposed therapy must target quiescent cancer stem cells as well as actively dividing cancer stem cells. The current cancer models demonstrate the virtue of attacking the same quantitative questions from a variety of modeling, mathematical, and computational perspectives. PMID:22001354
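
    As an illustration of the kind of continuous-time birth-death framework the abstract describes, the minimal Gillespie-style sketch below simulates extinction times for two independent stem-cell populations under a hypothetical killing differential. All rates and population sizes are assumed for illustration and are not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def extinction_time(n0, birth, death, t_max=1e6):
    """Simulate a linear birth-death process (Gillespie algorithm) and return
    the time at which the population first reaches zero (or t_max)."""
    n, t = n0, 0.0
    while n > 0 and t < t_max:
        rate = n * (birth + death)            # total event rate
        t += rng.exponential(1.0 / rate)      # time to next birth or death
        n += 1 if rng.random() < birth / (birth + death) else -1
    return t

# Hypothetical rates: therapy kills cancer stem cells slightly faster than normal ones.
birth = 1.0
death_cancer, death_normal = 1.20, 1.10      # killing differential = 0.10

t_cancer = [extinction_time(50, birth, death_cancer) for _ in range(100)]
t_normal = [extinction_time(200, birth, death_normal) for _ in range(100)]

print("mean extinction time, cancer stem cells:", np.mean(t_cancer))
print("mean extinction time, normal stem cells:", np.mean(t_normal))
```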

  19. Extinction models for cancer stem cell therapy.

    PubMed

    Sehl, Mary; Zhou, Hua; Sinsheimer, Janet S; Lange, Kenneth L

    2011-12-01

    Cells with stem cell-like properties are now viewed as initiating and sustaining many cancers. This suggests that cancer can be cured by driving these cancer stem cells to extinction. The problem with this strategy is that ordinary stem cells are apt to be killed in the process. This paper sets bounds on the killing differential (difference between death rates of cancer stem cells and normal stem cells) that must exist for the survival of an adequate number of normal stem cells. Our main tools are birth-death Markov chains in continuous time. In this framework, we investigate the extinction times of cancer stem cells and normal stem cells. Application of extreme value theory from mathematical statistics yields an accurate asymptotic distribution and corresponding moments for both extinction times. We compare these distributions for the two cell populations as a function of the killing rates. Perhaps a more telling comparison involves the number of normal stem cells NH at the extinction time of the cancer stem cells. Conditioning on the asymptotic time to extinction of the cancer stem cells allows us to calculate the asymptotic mean and variance of NH. The full distribution of NH can be retrieved by the finite Fourier transform and, in some parameter regimes, by an eigenfunction expansion. Finally, we discuss the impact of quiescence (the resting state) on stem cell dynamics. Quiescence can act as a sanctuary for cancer stem cells and imperils the proposed therapy. We approach the complication of quiescence via multitype branching process models and stochastic simulation. Improvements to the τ-leaping method of stochastic simulation make it a versatile tool in this context. We conclude that the proposed therapy must target quiescent cancer stem cells as well as actively dividing cancer stem cells. The current cancer models demonstrate the virtue of attacking the same quantitative questions from a variety of modeling, mathematical, and computational perspectives. Copyright © 2011 Elsevier Inc. All rights reserved.

  20. Feed-Forward Neural Network Prediction of the Mechanical Properties of Sandcrete Materials

    PubMed Central

    Asteris, Panagiotis G.; Roussis, Panayiotis C.; Douvika, Maria G.

    2017-01-01

    This work presents a soft-sensor approach for estimating critical mechanical properties of sandcrete materials. Feed-forward (FF) artificial neural network (ANN) models are employed for building soft-sensors able to predict the 28-day compressive strength and the modulus of elasticity of sandcrete materials. To this end, a new normalization technique for the pre-processing of data is proposed. The comparison of the derived results with the available experimental data demonstrates the capability of FF ANNs to predict with pinpoint accuracy the mechanical properties of sandcrete materials. Furthermore, the proposed normalization technique has been proven effective and robust compared to other normalization techniques available in the literature. PMID:28598400
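
    A minimal sketch of the workflow the abstract outlines, using plain min-max normalization and a small feed-forward network from scikit-learn; the paper proposes its own normalization variant, and the mix-design features and synthetic strengths below are assumptions for illustration only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Hypothetical mix-design features: cement/sand ratio, water/cement ratio, density.
X = rng.uniform([0.1, 0.4, 1800.0], [0.4, 0.8, 2200.0], size=(200, 3))
# Synthetic 28-day compressive strength (MPa) standing in for laboratory data.
y = 30 * X[:, 0] - 20 * (X[:, 1] - 0.5) + 0.01 * (X[:, 2] - 2000) + rng.normal(0, 0.5, 200)

# Plain min-max normalization to [0, 1] (the paper proposes a different technique).
X_min, X_max = X.min(axis=0), X.max(axis=0)
Xn = (X - X_min) / (X_max - X_min)

Xtr, Xte, ytr, yte = train_test_split(Xn, y, test_size=0.25, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(Xtr, ytr)
print("R^2 on held-out data:", net.score(Xte, yte))
```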

  1. An efficient reliable method to estimate the vaporization enthalpy of pure substances according to the normal boiling temperature and critical properties

    PubMed Central

    Mehmandoust, Babak; Sanjari, Ehsan; Vatani, Mostafa

    2013-01-01

    The heat of vaporization of a pure substance at its normal boiling temperature is a very important property in many chemical processes. In this work, a new empirical method was developed to predict vaporization enthalpy of pure substances. This equation is a function of normal boiling temperature, critical temperature, and critical pressure. The presented model is simple to use and provides an improvement over the existing equations for 452 pure substances in wide boiling range. The results showed that the proposed correlation is more accurate than the literature methods for pure substances in a wide boiling range (20.3–722 K). PMID:25685493
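
    The abstract does not reproduce the paper's new correlation. As a sketch of this class of estimators, which use the same three inputs, the widely cited Riedel equation is shown below; it is not the correlation proposed in the paper, and the critical constants for water are quoted from memory.

```python
from math import log

R = 8.314  # J/(mol*K)

def riedel_hvap(Tb, Tc, Pc_bar):
    """Riedel-type estimate of the vaporization enthalpy at the normal boiling
    point (J/mol) from Tb, Tc in K and Pc in bar."""
    Tbr = Tb / Tc
    return 1.093 * R * Tc * Tbr * (log(Pc_bar) - 1.013) / (0.930 - Tbr)

# Water: Tb = 373.15 K, Tc = 647.1 K, Pc = 220.6 bar
print(riedel_hvap(373.15, 647.1, 220.6) / 1000, "kJ/mol")  # ~42 (experiment ~40.7)
```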

  2. An efficient reliable method to estimate the vaporization enthalpy of pure substances according to the normal boiling temperature and critical properties.

    PubMed

    Mehmandoust, Babak; Sanjari, Ehsan; Vatani, Mostafa

    2014-03-01

    The heat of vaporization of a pure substance at its normal boiling temperature is a very important property in many chemical processes. In this work, a new empirical method was developed to predict vaporization enthalpy of pure substances. This equation is a function of normal boiling temperature, critical temperature, and critical pressure. The presented model is simple to use and provides an improvement over the existing equations for 452 pure substances in wide boiling range. The results showed that the proposed correlation is more accurate than the literature methods for pure substances in a wide boiling range (20.3-722 K).

  3. Absence of diabetes and pancreatic exocrine dysfunction in a transgenic model of carboxyl-ester lipase-MODY (maturity-onset diabetes of the young).

    PubMed

    Ræder, Helge; Vesterhus, Mette; El Ouaamari, Abdelfattah; Paulo, Joao A; McAllister, Fiona E; Liew, Chong Wee; Hu, Jiang; Kawamori, Dan; Molven, Anders; Gygi, Steven P; Njølstad, Pål R; Kahn, C Ronald; Kulkarni, Rohit N

    2013-01-01

    CEL-MODY is a monogenic form of diabetes with exocrine pancreatic insufficiency caused by mutations in CARBOXYL-ESTER LIPASE (CEL). The pathogenic processes underlying CEL-MODY are poorly understood, and the global knockout mouse model of the CEL gene (CELKO) did not recapitulate the disease. We therefore aimed to create and phenotype a mouse model specifically over-expressing mutated CEL in the pancreas. We established a monotransgenic floxed (flanking LOX sequences) mouse line carrying the human CEL mutation c.1686delT and crossed it with an elastase-Cre mouse to derive a bitransgenic mouse line with pancreas-specific over-expression of CEL carrying this disease-associated mutation (TgCEL). Following confirmation of murine pancreatic expression of the human transgene by real-time quantitative PCR, we phenotyped the mouse model fed normal chow, compared it with mice fed a 60% high-fat diet (HFD), and examined the effects of short-term and long-term cerulein exposure. Pancreatic exocrine function was normal in TgCEL mice on normal chow as assessed by serum lipid and lipid-soluble vitamin levels, fecal elastase and fecal fat absorption, and the normoglycemic mice exhibited normal pancreatic morphology. On 60% HFD, the mice gained weight to the same extent as controls and had normal pancreatic exocrine function and comparable glucose tolerance, even after resuming a normal diet with follow-up up to 22 months of age. The cerulein-exposed TgCEL mice gained weight and remained glucose tolerant, and there were no detectable mutation-specific differences in serum amylase, islet hormones or the extent of pancreatic tissue inflammation. In this murine model of human CEL-MODY diabetes, we did not detect mutation-specific endocrine or exocrine pancreatic phenotypes in response to altered diets or exposure to cerulein.

  4. [Stress analysis of the mandible by 3D FEA in normal human being under three loading conditions].

    PubMed

    Sun, Jian; Zhang, Fu-qiang; Wang, Dong-wei; Yu, Jia; Wang, Cheng-tao

    2004-02-01

    The condition and character of stress distribution in the mandible of a normal human during centric, protrusive and laterotrusive occlusion were analysed. A three-dimensional finite element model of the mandible was developed from helical CT scans and CAD/CAM software, and three-dimensional finite element stress analysis was performed with ANSYS. Under the three occlusal conditions, stress was distributed unequally across the various regions of the mandible and its character differed between regions, while the stress in corresponding regions on the two sides of the mandible was symmetrically distributed. Stress values were high at the condylar neck, the posterior surface of the coronoid process and the mandibular angle. The material properties of the mandible were closely correlated with the stress values. Stress distributions were similar for the three loading patterns but had different effects on the temporomandibular joint. The areas of stress concentration were the condylar neck, the posterior surface of the coronoid process and the mandibular angle.

  5. Modelling the impact, spreading and freezing of a water droplet on horizontal and inclined superhydrophobic cooled surfaces

    NASA Astrophysics Data System (ADS)

    Yao, Yina; Li, Cong; Zhang, Hui; Yang, Rui

    2017-10-01

    It is quite important to clearly understand the dynamic and freezing process of water droplets impacting a cold substrate for the prevention of ice accretion. In this study, a three-dimensional model including an extended phase change method was developed on the OpenFOAM platform to simulate the impact, spreading and freezing of a water droplet on a cooled solid substrate. Both normal and oblique impact conditions were studied numerically. The evolution of the droplet shape and dynamic characteristics such as area ratio and spread factor were compared between numerical and experimental results, and good agreement was obtained. The effects of the Weber number and Ohnesorge number on the oblique impact and freezing process were investigated. A regime map which depicts the different responses of droplets as a function of the normal Weber number and Ohnesorge number was obtained. Moreover, the impact, spreading and freezing behaviour of water droplets were analyzed in detail from the numerical results.
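
    For reference, the two dimensionless groups used in the regime map have standard definitions; the short sketch below evaluates them for a hypothetical water droplet, with the normal Weber number taken from the velocity component normal to the surface (the droplet size, speed and impact angle are assumptions for illustration).

```python
import math

def weber(rho, v, D, sigma):
    """Weber number: inertia vs. surface tension, We = rho*v^2*D/sigma."""
    return rho * v**2 * D / sigma

def ohnesorge(mu, rho, sigma, D):
    """Ohnesorge number: Oh = mu / sqrt(rho*sigma*D)."""
    return mu / math.sqrt(rho * sigma * D)

# Hypothetical water droplet: 2 mm diameter, 2 m/s impact, 30 deg from the surface normal.
rho, mu, sigma, D, v, theta = 1000.0, 1.0e-3, 0.072, 2.0e-3, 2.0, math.radians(30)
v_normal = v * math.cos(theta)          # normal velocity component for oblique impact
print("We_n =", weber(rho, v_normal, D, sigma))
print("Oh   =", ohnesorge(mu, rho, sigma, D))
```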

  6. Lorenz system in the thermodynamic modelling of leukaemia malignancy.

    PubMed

    Alexeev, Igor

    2017-05-01

    The core idea of the proposed thermodynamic modelling of malignancy in leukaemia is entropy arising within normal haematopoiesis. Mathematically its description is supposed to be similar to the Lorenz system of ordinary differential equations for simplified processes of heat flow in fluids. The hypothetical model provides a description of remission and relapse in leukaemia as two hierarchical and qualitatively different states of normal haematopoiesis with their own phase spaces. Phase space transition is possible through pitchfork bifurcation, which is considered the common symmetrical scenario for relapse, induced remission and the spontaneous remission of leukaemia. Cytopenia is regarded as an adaptive reaction of haematopoiesis to an increase in entropy caused by leukaemia clones. The following predictions are formulated: a) the percentage of leukaemia cells in marrow as a criterion of remission or relapse is not necessarily constant but is a variable value; b) the probability of remission depends upon normal haematopoiesis reaching bifurcation; c) the duration of remission depends upon the eradication of leukaemia cells through induction or consolidation therapies; d) excessively high doses of chemotherapy in consolidation may induce relapse. Copyright © 2017 Elsevier Ltd. All rights reserved.
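
    The abstract draws an analogy with the classical Lorenz system; as a point of reference, the sketch below simply integrates the standard Lorenz equations with the textbook parameters. The mapping of these variables onto haematopoietic quantities is the paper's contribution and is not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Classical Lorenz system of three coupled ODEs."""
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

sol = solve_ivp(lorenz, (0.0, 50.0), [1.0, 1.0, 1.0], max_step=0.01)
print(sol.y[:, -1])   # final (x, y, z) point on the attractor
```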

  7. The effect of signal variability on the histograms of anthropomorphic channel outputs: factors resulting in non-normally distributed data

    NASA Astrophysics Data System (ADS)

    Elshahaby, Fatma E. A.; Ghaly, Michael; Jha, Abhinav K.; Frey, Eric C.

    2015-03-01

    Model Observers are widely used in medical imaging for the optimization and evaluation of instrumentation, acquisition parameters and image reconstruction and processing methods. The channelized Hotelling observer (CHO) is a commonly used model observer in nuclear medicine and has seen increasing use in other modalities. An anthropomorphic CHO consists of a set of channels that model some aspects of the human visual system and the Hotelling Observer, which is the optimal linear discriminant. The optimality of the CHO is based on the assumption that the channel outputs for data with and without the signal present have a multivariate normal distribution with equal class covariance matrices. The channel outputs result from the dot product of channel templates with input images and are thus the sum of a large number of random variables. The central limit theorem is thus often used to justify the assumption that the channel outputs are normally distributed. In this work, we aim to examine this assumption for realistically simulated nuclear medicine images when various types of signal variability are present.
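
    A minimal sketch of the quantity under study: channel outputs are the dot products of channel templates with images, and their per-channel normality can be checked with a standard test. The random templates and independent lognormal backgrounds below are placeholders (real anthropomorphic channels and correlated backgrounds, which are what can break normality, would be used in practice).

```python
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(2)

n_pix, n_channels, n_images = 64 * 64, 4, 500
# Placeholder channel templates; anthropomorphic frequency channels would be used in practice.
U = rng.standard_normal((n_pix, n_channels))

# Signal-absent images with a non-Gaussian (lognormal) background to mimic variability.
g = rng.lognormal(mean=0.0, sigma=0.5, size=(n_images, n_pix))

v = g @ U                       # channel outputs: one vector of 4 values per image
for k in range(n_channels):
    stat, p = shapiro(v[:, k])  # Shapiro-Wilk test of normality, channel by channel
    print(f"channel {k}: Shapiro-Wilk p = {p:.3f}")
```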

  8. Supra-salt normal fault growth during the rise and fall of a diapir: Perspectives from 3D seismic reflection data, Norwegian North Sea

    NASA Astrophysics Data System (ADS)

    Tvedt, Anette B. M.; Rotevatn, Atle; Jackson, Christopher A.-L.

    2016-10-01

    Normal faulting and the deep subsurface flow of salt are key processes controlling the structural development of many salt-bearing sedimentary basins. However, our detailed understanding of the spatial and temporal relationship between normal faulting and salt movement is poor due to a lack of natural examples constraining their geometric and kinematic relationship in three-dimensions. To improve our understanding of these processes, we here use 3D seismic reflection and borehole data from the Egersund Basin, offshore Norway, to determine the structure and growth of a normal fault array formed during the birth, growth and decay of an array of salt structures. We show that the fault array and salt structures developed in response to: (i) Late Triassic-to-Middle Jurassic extension, which involved thick-skinned, sub-salt and thin-skinned supra-salt faulting with the latter driving reactive diapirism; (ii) Early Cretaceous extensional collapse of the walls; and (iii) Jurassic-to-Neogene, active and passive diapirism, which was at least partly coeval with and occurred along-strike from areas of reactive diapirism and wall collapse. Our study supports physical model predictions, showcasing a three-dimensional example of how protracted, multiphase salt diapirism can influence the structure and growth of normal fault arrays.

  9. Nonpoint Source Solute Transport Normal to Aquifer Bedding in Heterogeneous, Markov Chain Random Fields

    NASA Astrophysics Data System (ADS)

    Zhang, H.; Harter, T.; Sivakumar, B.

    2005-12-01

    Facies-based geostatistical models have become important tools for the stochastic analysis of flow and transport processes in heterogeneous aquifers. However, little is known about the dependency of these processes on the parameters of facies-based geostatistical models. This study examines the nonpoint source solute transport normal to the major bedding plane in the presence of interconnected high conductivity (coarse-textured) facies in the aquifer medium and the dependence of the transport behavior upon the parameters of the constitutive facies model. A facies-based Markov chain geostatistical model is used to quantify the spatial variability of the aquifer system hydrostratigraphy. It is integrated with a groundwater flow model and a random walk particle transport model to estimate the solute travel time probability distribution functions (pdfs) for solute flux from the water table to the bottom boundary (production horizon) of the aquifer. The cases examined include two-, three-, and four-facies models with horizontal to vertical facies mean length anisotropy ratios, ek, from 25:1 to 300:1, and with a wide range of facies volume proportions (e.g., from 5% to 95% coarse-textured facies). Predictions of travel time pdfs are found to be significantly affected by the number of hydrostratigraphic facies identified in the aquifer, the proportions of coarse-textured sediments, the mean length of the facies (particularly the ratio of length to thickness of coarse materials), and - to a lesser degree - the juxtapositional preference among the hydrostratigraphic facies. In transport normal to the sedimentary bedding plane, travel time pdfs are not log-normally distributed as is often assumed. Also, macrodispersive behavior (variance of the travel time pdf) was found to not be a unique function of the conductivity variance. The skewness of the travel time pdf varied from negatively skewed to strongly positively skewed within the parameter range examined. We also show that the Markov chain approach may give significantly different travel time pdfs when compared to the more commonly used Gaussian random field approach even though the first and second order moments in the geostatistical distribution of the lnK field are identical. The choice of the appropriate geostatistical model is therefore critical in the assessment of nonpoint source transport.
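
    A toy sketch of the facies-generation step only: a first-order Markov chain along a vertical column whose staying probability over a lag dz decays with the facies mean length. The two facies, their mean lengths and proportions are assumptions, and juxtapositional preference (which the full transition-probability approach includes) is deliberately omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two hypothetical facies: 0 = coarse (high K), 1 = fine (low K)
mean_len = np.array([2.0, 8.0])      # vertical mean lengths (m)
proportion = np.array([0.2, 0.8])    # target volume proportions
dz, n_cells = 0.25, 400              # grid spacing (m) and number of cells

def facies_column():
    """One vertical facies column from a first-order Markov chain: stay in the
    current facies over a lag dz with probability exp(-dz/L); otherwise switch
    to another facies drawn in proportion to its volume fraction."""
    col = np.empty(n_cells, dtype=int)
    col[0] = rng.choice(2, p=proportion)
    for i in range(1, n_cells):
        k = col[i - 1]
        if rng.random() < np.exp(-dz / mean_len[k]):
            col[i] = k
        else:
            p = proportion.copy(); p[k] = 0.0
            col[i] = rng.choice(2, p=p / p.sum())
    return col

col = facies_column()
print("simulated coarse-facies proportion:", (col == 0).mean())
```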

  10. Future requirements in surface modeling and grid generation

    NASA Technical Reports Server (NTRS)

    Cosner, Raymond R.

    1995-01-01

    The past ten years have seen steady progress in surface modeling procedures, and wholesale changes in grid generation technology. Today, it seems fair to state that a satisfactory grid can be developed to model nearly any configuration of interest. The issues at present focus on operational concerns such as cost and quality. Continuing evolution of the engineering process is placing new demands on the technologies of surface modeling and grid generation. In the evolution toward a multidisciplinary analysis-based design environment, methods developed for Computational Fluid Dynamics are finding acceptance in many additional applications. These two trends, the normal evolution of the process and a watershed shift toward concurrent and multidisciplinary analysis, will be considered in assessing current capabilities and needed technological improvements.

  11. An Engineering Solution for Solving Mesh Size Effects in the Simulation of Delamination with Cohesive Zone Models

    NASA Technical Reports Server (NTRS)

    Turon, A.; Davila, C. G.; Camanho, P. P.; Costa, J.

    2007-01-01

    This paper presents a methodology to determine the parameters to be used in the constitutive equations of Cohesive Zone Models employed in the simulation of delamination in composite materials by means of decohesion finite elements. A closed-form expression is developed to define the stiffness of the cohesive layer. A novel procedure that allows the use of coarser meshes of decohesion elements in large-scale computations is also proposed. The procedure ensures that the energy dissipated by the fracture process is computed correctly. It is shown that coarse-meshed models defined using the approach proposed here yield the same results as the models with finer meshes normally used for the simulation of fracture processes.
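
    The sketch below evaluates the kind of closed-form quantities the abstract refers to: a Rice-type cohesive zone length, an interface strength lowered so that a target number of coarse elements spans that zone, and a penalty stiffness proportional to the through-thickness modulus. These expressions are commonly quoted forms from the cohesive-zone literature rather than a verbatim transcription of the paper, and all material values are assumed.

```python
import math

def cohesive_zone_length(E, Gc, tau0, M=9 * math.pi / 32):
    """Cohesive zone length estimate, l_cz = M*E*Gc/tau0**2 (Rice-type)."""
    return M * E * Gc / tau0**2

def adjusted_strength(E, Gc, le, Ne=3, M=9 * math.pi / 32):
    """Interface strength lowered so that about Ne elements of length le
    span the cohesive zone (the coarse-mesh strategy the abstract describes)."""
    return math.sqrt(M * E * Gc / (Ne * le))

def interface_stiffness(E3, t, alpha=50.0):
    """Penalty stiffness of the cohesive layer, K = alpha*E3/t."""
    return alpha * E3 / t

# Hypothetical carbon/epoxy values: E3 = 10 GPa, GIc = 300 J/m^2, tau0 = 60 MPa,
# element length le = 0.5 mm, adjacent sublaminate thickness t = 1 mm.
E3, Gc, tau0, le, t = 10e9, 300.0, 60e6, 0.5e-3, 1.0e-3
print("cohesive zone length: %.2f mm" % (cohesive_zone_length(E3, Gc, tau0) * 1e3))
print("adjusted strength   : %.1f MPa" % (adjusted_strength(E3, Gc, le) / 1e6))
print("interface stiffness : %.1e N/m^3" % interface_stiffness(E3, t))
```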

  12. Opacity from two-photon processes

    DOE PAGES

    More, Richard M.; Hansen, Stephanie B.; Nagayama, Taisuke

    2017-07-22

    Here, the recent iron opacity measurements performed at Sandia National Laboratory by Bailey and collaborators have raised questions about the completeness of the physical models normally used to understand partially ionized hot dense plasmas. We describe calculations of two-photon absorption, which is a candidate for the observed extra opacity. Our calculations do not yet match the experiments but show that the two-photon absorption process is strong enough to require careful consideration.

  13. Performance analysis of different tuning rules for an isothermal CSTR using integrated EPC and SPC

    NASA Astrophysics Data System (ADS)

    Roslan, A. H.; Karim, S. F. Abd; Hamzah, N.

    2018-03-01

    This paper demonstrates the integration of Engineering Process Control (EPC) and Statistical Process Control (SPC) for the control of product concentration in an isothermal CSTR. The objectives of this study are to evaluate the performance of the Ziegler-Nichols (Z-N), Direct Synthesis (DS) and Internal Model Control (IMC) tuning methods and to determine the most effective method for this process. The simulation model was obtained from past literature and re-constructed in MATLAB SIMULINK to evaluate the process response. Additionally, the process stability, capability and normality were analyzed using Process Capability Sixpack reports in Minitab. Based on the results, DS gives the best response, having the smallest rise time, settling time, overshoot, undershoot, Integral Time Absolute Error (ITAE) and Integral Square Error (ISE). Also, based on the statistical analysis, DS emerges as the best tuning method, as it exhibits the highest process stability and capability.
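
    To make the tuning-rule comparison concrete, the sketch below computes PI settings from two of the families mentioned: an IMC-style rule for a first-order-plus-dead-time model and the classical Ziegler-Nichols continuous-cycling rule. The process parameters are assumptions for illustration, and the specific rule variants used in the paper may differ.

```python
def imc_pi(K, tau, theta, lam):
    """IMC-based PI settings for a first-order-plus-dead-time model
    G(s) = K*exp(-theta*s)/(tau*s + 1), with filter time constant lam."""
    Kc = tau / (K * (lam + theta))
    tau_I = tau
    return Kc, tau_I

def ziegler_nichols_pi(Ku, Pu):
    """Classical Ziegler-Nichols PI settings from the ultimate gain and period."""
    return 0.45 * Ku, Pu / 1.2

# Hypothetical concentration loop: gain 0.8, time constant 5 min, dead time 0.5 min
print("IMC PI (Kc, tau_I):", imc_pi(0.8, 5.0, 0.5, lam=1.0))
print("Z-N PI (Kc, tau_I):", ziegler_nichols_pi(Ku=6.0, Pu=2.0))
```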

  14. Random walks exhibiting anomalous diffusion: elephants, urns and the limits of normality

    NASA Astrophysics Data System (ADS)

    Kearney, Michael J.; Martin, Richard J.

    2018-01-01

    A random walk model is presented which exhibits a transition from standard to anomalous diffusion as a parameter is varied. The model is a variant on the elephant random walk and differs in respect of the treatment of the initial state, which in the present work consists of a given number N of fixed steps. This also links the elephant random walk to other types of history dependent random walk. As well as being amenable to direct analysis, the model is shown to be asymptotically equivalent to a non-linear urn process. This provides fresh insights into the limiting form of the distribution of the walker’s position at large times. Although the distribution is intrinsically non-Gaussian in the anomalous diffusion regime, it gradually reverts to normal form when N is large under quite general conditions.
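
    A minimal simulation of the variant described: the first N steps are held fixed, and each subsequent step copies a uniformly chosen past step with probability p or reverses it otherwise. The step counts and the p values are illustrative; p = 3/4 is the classical superdiffusion threshold for the standard elephant random walk, not necessarily for this variant.

```python
import numpy as np

rng = np.random.default_rng(4)

def elephant_walk(p, n_steps, n_fixed=10, first_step=+1):
    """Elephant random walk whose first n_fixed steps are fixed; afterwards each
    step repeats a uniformly chosen earlier step with probability p and
    reverses it with probability 1 - p."""
    steps = [first_step] * n_fixed
    for _ in range(n_steps - n_fixed):
        past = steps[rng.integers(len(steps))]
        steps.append(past if rng.random() < p else -past)
    return np.cumsum(steps)

for p in (0.5, 0.9):
    finals = np.array([elephant_walk(p, 1000)[-1] for _ in range(100)])
    print(f"p = {p}: variance of final position = {finals.var():.0f}")
```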

  15. Sleep and Development in Genetically Tractable Model Organisms

    PubMed Central

    Kayser, Matthew S.; Biron, David

    2016-01-01

    Sleep is widely recognized as essential, but without a clear singular function. Inadequate sleep impairs cognition, metabolism, immune function, and many other processes. Work in genetic model systems has greatly expanded our understanding of basic sleep neurobiology as well as introduced new concepts for why we sleep. Among these is an idea with its roots in human work nearly 50 years old: sleep in early life is crucial for normal brain maturation. Nearly all known species that sleep do so more while immature, and this increased sleep coincides with a period of exuberant synaptogenesis and massive neural circuit remodeling. Adequate sleep also appears critical for normal neurodevelopmental progression. This article describes recent findings regarding molecular and circuit mechanisms of sleep, with a focus on development and the insights garnered from models amenable to detailed genetic analyses. PMID:27183564

  16. An improved model to estimate trapping parameters in polymeric materials and its application on normal and aged low-density polyethylenes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Ning, E-mail: nl4g12@soton.ac.uk; He, Miao; Alghamdi, Hisham

    2015-08-14

    Trapping parameters can be considered one of the important attributes describing polymeric materials. In the present paper, a more accurate charge dynamics model has been developed that includes charge dynamics in both the volts-on and volts-off stages in the simulation. By fitting the measured charge data with the highest R-square value, the trapping parameters and injection barriers of both normal and aged low-density polyethylene samples were estimated using the improved model. The results show that, after a long-term ageing process, the injection barriers for both electrons and holes are lowered, the overall trap depth is shallower, and the trap density becomes much greater. Additionally, the parameter changes for electrons are more sensitive than those for holes after ageing.

  17. Institute for Molecular Medicine Research Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phelps, Michael E

    2012-12-14

    The objectives of the project are the development of new Positron Emission Tomography (PET) imaging instrumentation, chemistry technology platforms and new molecular imaging probes to examine the transformations from normal cellular and biological processes to those of disease in pre-clinical animal models. These technology platforms and imaging probes provide the means to: 1. Study the biology of disease using pre-clinical mouse models and cells. 2. Develop molecular imaging probes for imaging assays of proteins in pre-clinical models. 3. Develop imaging assays in pre-clinical models to provide to other scientists the means to guide and improve the processes for discovering new drugs. 4. Develop imaging assays in pre-clinical models for others to use in judging the impact of drugs on the biology of disease.

  18. Improving the performance of streamflow forecasting model using data-preprocessing technique in Dungun River Basin

    NASA Astrophysics Data System (ADS)

    Khai Tiu, Ervin Shan; Huang, Yuk Feng; Ling, Lloyd

    2018-03-01

    An accurate streamflow forecasting model is important for the development of a flood mitigation plan and for ensuring sustainable development of a river basin. This study adopted the Variational Mode Decomposition (VMD) data-preprocessing technique to process and denoise the rainfall data before it was fed into the Support Vector Machine (SVM) streamflow forecasting model, in order to improve the performance of the selected model. Rainfall data and river water level data for the period 1996-2016 were used for this purpose. Homogeneity tests (the Standard Normal Homogeneity Test, the Buishand Range Test, the Pettitt Test and the Von Neumann Ratio Test) and normality tests (the Shapiro-Wilk Test, Anderson-Darling Test, Lilliefors Test and Jarque-Bera Test) were carried out on the rainfall series; the series at all stations were found to be homogeneous and non-normally distributed, respectively. From the recorded rainfall data, it was observed that the Dungun River Basin received higher monthly rainfall from November to February, during the Northeast Monsoon. Thus, the monthly and seasonal rainfall series of this monsoon were the main focus of this research, as floods usually happen during the Northeast Monsoon period. The water levels predicted by the SVM model were assessed against the observed water levels using non-parametric statistical tests (the Biased Method, Kendall's Tau B Test and Spearman's Rho Test).
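
    A minimal sketch of the regression step only, fitting a support vector machine to lagged rainfall to predict water level on synthetic data. The VMD denoising stage is omitted here (it would be applied to the rainfall series first, as the abstract describes), and the data, lags and hyperparameters are assumptions for illustration.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)

# Synthetic stand-ins for monthly rainfall (mm) and river water level (m).
rain = rng.gamma(shape=2.0, scale=80.0, size=240)
level = 1.5 + 0.004 * rain + 0.002 * np.roll(rain, 1) + rng.normal(0, 0.1, 240)

# Features: current and one-month-lagged rainfall.
X = np.column_stack([rain[1:], rain[:-1]])
y = level[1:]
X_train, X_test, y_train, y_test = X[:200], X[200:], y[:200], y[200:]

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
model.fit(X_train, y_train)
print("R^2 on the hold-out period:", model.score(X_test, y_test))
```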

  19. Study of sensor spectral responses and data processing algorithms and architectures for onboard feature identification

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Davis, R. E.; Fales, C. L.; Aherron, R. M.

    1982-01-01

    A computational model of the deterministic and stochastic processes involved in remote sensing is used to study spectral feature identification techniques for real-time onboard processing of data acquired with advanced earth-resources sensors. Preliminary results indicate that: Narrow spectral responses are advantageous; signal normalization improves mean-square distance (MSD) classification accuracy but tends to degrade maximum-likelihood (MLH) classification accuracy; and MSD classification of normalized signals performs better than the computationally more complex MLH classification when imaging conditions change appreciably from those conditions during which reference data were acquired. The results also indicate that autonomous categorization of TM signals into vegetation, bare land, water, snow and clouds can be accomplished with adequate reliability for many applications over a reasonably wide range of imaging conditions. However, further analysis is required to develop computationally efficient boundary approximation algorithms for such categorization.
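
    A small sketch of the two ideas the results hinge on, signal normalization and minimum mean-square-distance classification: normalizing each spectral signature to unit length removes overall brightness changes, after which a pixel is assigned to the nearest class mean. The four-band signatures below are hypothetical, not TM reference data.

```python
import numpy as np

def normalize(sig):
    """Normalize each spectral signal vector to unit length, removing overall
    brightness differences caused by changing imaging conditions."""
    return sig / np.linalg.norm(sig, axis=-1, keepdims=True)

def msd_classify(signal, class_means):
    """Minimum mean-square-distance classification against class mean signatures."""
    d2 = ((class_means - signal) ** 2).sum(axis=1)
    return int(np.argmin(d2))

# Hypothetical 4-band mean signatures for vegetation, bare land and water.
means = normalize(np.array([[0.05, 0.08, 0.45, 0.30],
                            [0.20, 0.25, 0.30, 0.35],
                            [0.08, 0.06, 0.03, 0.01]]))
pixel = normalize(np.array([0.06, 0.09, 0.40, 0.28]) * 1.8)  # brighter illumination
print("class index:", msd_classify(pixel, means))            # -> 0 (vegetation)
```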

  20. Kinetic models for batch ethanol production from sweet sorghum juice under normal and high gravity fermentations: Logistic and modified Gompertz models.

    PubMed

    Phukoetphim, Niphaphat; Salakkam, Apilak; Laopaiboon, Pattana; Laopaiboon, Lakkana

    2017-02-10

    The aim of this study was to model batch ethanol production from sweet sorghum juice (SSJ), under normal gravity (NG, 160g/L of total sugar) and high gravity (HG, 240g/L of total sugar) conditions with and without nutrient supplementation (9g/L of yeast extract), by Saccharomyces cerevisiae NP 01. Growth and ethanol production increased with increasing initial sugar concentration, and the addition of yeast extract enhanced both cell growth and ethanol production. From the results, either logistic or a modified Gompertz equation could be used to describe yeast growth, depending on information required. Furthermore, the modified Gompertz model was suitable for modeling ethanol production. Both the models fitted the data very well with coefficients of determination exceeding 0.98. The results clearly showed that these models can be employed in the development of ethanol production processes using SSJ under both NG and HG conditions. The models were also shown to be applicable to other ethanol fermentation systems employing pure and mixed sugars as carbon sources. Copyright © 2016 Elsevier B.V. All rights reserved.
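
    For reference, the sketch below evaluates the standard logistic growth curve and the Zwietering-style modified Gompertz product curve, the two model forms named in the abstract. The kinetic parameter values are assumptions for illustration and are not the fitted values from the study.

```python
import numpy as np

E = np.e

def logistic_growth(t, x0, x_max, mu_max):
    """Logistic model for biomass: solution of dX/dt = mu_max*X*(1 - X/X_max)."""
    return x0 * x_max * np.exp(mu_max * t) / (x_max - x0 + x0 * np.exp(mu_max * t))

def modified_gompertz(t, p_max, r_pm, lam):
    """Modified Gompertz model for product (ethanol) formation with lag time lam
    and maximum production rate r_pm."""
    return p_max * np.exp(-np.exp(r_pm * E / p_max * (lam - t) + 1.0))

t = np.linspace(0, 60, 7)                                    # fermentation time, h
print(logistic_growth(t, x0=0.3, x_max=9.0, mu_max=0.25))    # cell mass, g/L
print(modified_gompertz(t, p_max=70.0, r_pm=2.5, lam=6.0))   # ethanol, g/L
```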

  1. Transcriptional network control of normal and leukaemic haematopoiesis

    PubMed Central

    Sive, Jonathan I.; Göttgens, Berthold

    2014-01-01

    Transcription factors (TFs) play a key role in determining the gene expression profiles of stem/progenitor cells, and defining their potential to differentiate into mature cell lineages. TF interactions within gene-regulatory networks are vital to these processes, and dysregulation of these networks by TF overexpression, deletion or abnormal gene fusions have been shown to cause malignancy. While investigation of these processes remains a challenge, advances in genome-wide technologies and growing interactions between laboratory and computational science are starting to produce increasingly accurate network models. The haematopoietic system provides an attractive experimental system to elucidate gene regulatory mechanisms, and allows experimental investigation of both normal and dysregulated networks. In this review we examine the principles of TF-controlled gene regulatory networks and the key experimental techniques used to investigate them. We look in detail at examples of how these approaches can be used to dissect out the regulatory mechanisms controlling normal haematopoiesis, as well as the dysregulated networks associated with haematological malignancies. PMID:25014893

  2. Transcriptional network control of normal and leukaemic haematopoiesis.

    PubMed

    Sive, Jonathan I; Göttgens, Berthold

    2014-12-10

    Transcription factors (TFs) play a key role in determining the gene expression profiles of stem/progenitor cells, and defining their potential to differentiate into mature cell lineages. TF interactions within gene-regulatory networks are vital to these processes, and dysregulation of these networks by TF overexpression, deletion or abnormal gene fusions have been shown to cause malignancy. While investigation of these processes remains a challenge, advances in genome-wide technologies and growing interactions between laboratory and computational science are starting to produce increasingly accurate network models. The haematopoietic system provides an attractive experimental system to elucidate gene regulatory mechanisms, and allows experimental investigation of both normal and dysregulated networks. In this review we examine the principles of TF-controlled gene regulatory networks and the key experimental techniques used to investigate them. We look in detail at examples of how these approaches can be used to dissect out the regulatory mechanisms controlling normal haematopoiesis, as well as the dysregulated networks associated with haematological malignancies. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  3. Multi-objective optimization of a continuous bio-dissimilation process of glycerol to 1, 3-propanediol.

    PubMed

    Xu, Gongxian; Liu, Ying; Gao, Qunwang

    2016-02-10

    This paper deals with multi-objective optimization of continuous bio-dissimilation process of glycerol to 1, 3-propanediol. In order to maximize the production rate of 1, 3-propanediol, maximize the conversion rate of glycerol to 1, 3-propanediol, maximize the conversion rate of glycerol, and minimize the concentration of by-product ethanol, we first propose six new multi-objective optimization models that can simultaneously optimize any two of the four objectives above. Then these multi-objective optimization problems are solved by using the weighted-sum and normal-boundary intersection methods respectively. Both the Pareto filter algorithm and removal criteria are used to remove those non-Pareto optimal points obtained by the normal-boundary intersection method. The results show that the normal-boundary intersection method can successfully obtain the approximate Pareto optimal sets of all the proposed multi-objective optimization problems, while the weighted-sum approach cannot achieve the overall Pareto optimal solutions of some multi-objective problems. Copyright © 2015 Elsevier B.V. All rights reserved.
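
    A toy sketch of the weighted-sum scalarization discussed in the abstract: two placeholder objectives are combined with a sweep of weights and each scalarized problem is solved numerically. The bioprocess objectives themselves are not reproduced; the abstract's point that weighted sums can miss parts of a non-convex Pareto front (unlike normal-boundary intersection) is not addressed by this sketch.

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder objectives standing in for the bioprocess criteria.
f1 = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2       # e.g. negative productivity
f2 = lambda x: x[0] ** 2 + (x[1] - 1.0) ** 2       # e.g. by-product concentration

pareto = []
for w in np.linspace(0.0, 1.0, 11):
    res = minimize(lambda x: w * f1(x) + (1 - w) * f2(x), x0=[0.5, 0.5])
    pareto.append((f1(res.x), f2(res.x)))

for p in pareto:
    print(f"f1 = {p[0]:.3f}   f2 = {p[1]:.3f}")
```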

  4. Empirical validation of the triple-code model of numerical processing for complex math operations using functional MRI and group Independent Component Analysis of the mental addition and subtraction of fractions.

    PubMed

    Schmithorst, Vincent J; Brown, Rhonda Douglas

    2004-07-01

    The suitability of a previously hypothesized triple-code model of numerical processing, involving analog magnitude, auditory verbal, and visual Arabic codes of representation, was investigated for the complex mathematical task of the mental addition and subtraction of fractions. Functional magnetic resonance imaging (fMRI) data from 15 normal adult subjects were processed using exploratory group Independent Component Analysis (ICA). Separate task-related components were found with activation in bilateral inferior parietal, left perisylvian, and ventral occipitotemporal areas. These results support the hypothesized triple-code model corresponding to the activated regions found in the individual components and indicate that the triple-code model may be a suitable framework for analyzing the neuropsychological bases of the performance of complex mathematical tasks. Copyright 2004 Elsevier Inc.

  5. Hillslope Evolution by Bedrock Landslides

    PubMed

    Densmore; Anderson; McAdoo; Ellis

    1997-01-17

    Bedrock landsliding is a dominant geomorphic process in a number of high-relief landscapes, yet is neglected in landscape evolution models. A physical model of sliding in beans is presented, in which incremental lowering of one wall simulates baselevel fall and generates slides. Frequent small slides produce irregular hillslopes, on which steep toes and head scarps persist until being cleared by infrequent large slides. These steep segments are observed on hillslopes in high-relief landscapes and have been interpreted as evidence for increases in tectonic or climatic process rates. In certain cases, they may instead reflect normal hillslope evolution by landsliding.

  6. Statistical analysis of experimental data for mathematical modeling of physical processes in the atmosphere

    NASA Astrophysics Data System (ADS)

    Karpushin, P. A.; Popov, Yu B.; Popova, A. I.; Popova, K. Yu; Krasnenko, N. P.; Lavrinenko, A. V.

    2017-11-01

    In this paper, the probabilities of faultless operation of aerologic stations are analyzed, the hypothesis of normality of the empirical data required for using the Kalman filter algorithms is tested, and the spatial correlation functions of distributions of meteorological parameters are determined. The results of a statistical analysis of two-term (0, 12 GMT) radiosonde observations of the temperature and wind velocity components at some preset altitude ranges in the troposphere in 2001-2016 are presented. These data can be used in mathematical modeling of physical processes in the atmosphere.

  7. Illumination normalization of face image based on illuminant direction estimation and improved Retinex.

    PubMed

    Yi, Jizheng; Mao, Xia; Chen, Lijiang; Xue, Yuli; Rovetta, Alberto; Caleanu, Catalin-Daniel

    2015-01-01

    Illumination normalization of face images for face recognition and facial expression recognition is one of the most frequent and difficult problems in image processing. In order to obtain a face image with normal illumination, our method first divides the input face image into sixteen local regions and calculates the edge level percentage in each of them. Second, three local regions that meet the requirements of lower complexity and larger average gray value are selected to calculate the final illuminant direction, according to the error function between the measured and calculated intensities and the constraint function of an infinite light source model. Once the final illuminant direction of the input face image is known, the Retinex algorithm is improved in two aspects: (1) we optimize the surround function; (2) we clip the values at both ends of the face-image histogram, determine the remaining range of gray levels, and stretch that range to the dynamic range of the display device. Finally, we achieve illumination normalization and obtain the final face image. Unlike previous illumination normalization approaches, the method proposed in this paper does not require any training step or any knowledge of a 3D face or reflective surface model. Experimental results on the extended Yale face database B and CMU-PIE show that our method achieves a better normalization effect compared with existing techniques.
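
    A minimal sketch of the histogram clipping and stretching step only (point (2) above): extreme tails are clipped at assumed percentiles and the remaining gray levels are stretched to the display range. The percentile choices and the synthetic dark image are assumptions; the illuminant-direction estimation and Retinex surround optimization are not shown.

```python
import numpy as np

def clip_and_stretch(img, low_pct=1.0, high_pct=99.0, out_max=255):
    """Clip the extreme tails of the image histogram and stretch the remaining
    gray-level range onto the full dynamic range of the display device."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    out = np.clip(img, lo, hi)
    return ((out - lo) / (hi - lo) * out_max).astype(np.uint8)

# Hypothetical poorly lit face image: gray levels crowded near the bottom of the range.
rng = np.random.default_rng(6)
face = (40 * rng.beta(2.0, 8.0, size=(64, 64))).astype(np.uint8)
enhanced = clip_and_stretch(face)
print("input range :", face.min(), "-", face.max())
print("output range:", enhanced.min(), "-", enhanced.max())
```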

  8. Can a combination of average of normals and "real time" External Quality Assurance replace Internal Quality Control?

    PubMed

    Badrick, Tony; Graham, Peter

    2018-03-28

    Internal Quality Control and External Quality Assurance are separate but related processes that have developed independently in laboratory medicine over many years. They differ in sample frequency, statistical interpretation and immediacy. Both processes have evolved, absorbing new understandings of laboratory error, sample material matrix and assay capability. However, at the coalface, we do not believe that either process has led to much recent improvement in patient outcomes. It is the increasing reliability and automation of analytical platforms, along with the improved stability of reagents, that has reduced systematic and random error, which in turn has minimised the risk of running IQC less frequently. We suggest that it is time to rethink the role of both these processes and unite them into a single approach using an Average of Normals model supported by more frequent External Quality Assurance samples. This new paradigm may lead to less confusion for laboratory staff and quicker identification of, and responses to, out-of-control situations.

  9. Deformation associated with continental normal faults

    NASA Astrophysics Data System (ADS)

    Resor, Phillip G.

    Deformation associated with normal fault earthquakes and geologic structures provide insights into the seismic cycle as it unfolds over time scales from seconds to millions of years. Improved understanding of normal faulting will lead to more accurate seismic hazard assessments and prediction of associated structures. High-precision aftershock locations for the 1995 Kozani-Grevena earthquake (Mw 6.5), Greece image a segmented master fault and antithetic faults. This three-dimensional fault geometry is typical of normal fault systems mapped from outcrop or interpreted from reflection seismic data and illustrates the importance of incorporating three-dimensional fault geometry in mechanical models. Subsurface fault slip associated with the Kozani-Grevena and 1999 Hector Mine (Mw 7.1) earthquakes is modeled using a new method for slip inversion on three-dimensional fault surfaces. Incorporation of three-dimensional fault geometry improves the fit to the geodetic data while honoring aftershock distributions and surface ruptures. GPS Surveying of deformed bedding surfaces associated with normal faulting in the western Grand Canyon reveals patterns of deformation that are similar to those observed by interferometric satellite radar interferometry (InSAR) for the Kozani Grevena earthquake with a prominent down-warp in the hanging wall and a lesser up-warp in the footwall. However, deformation associated with the Kozani-Grevena earthquake extends ˜20 km from the fault surface trace, while the folds in the western Grand Canyon only extend 500 m into the footwall and 1500 m into the hanging wall. A comparison of mechanical and kinematic models illustrates advantages of mechanical models in exploring normal faulting processes including incorporation of both deformation and causative forces, and the opportunity to incorporate more complex fault geometry and constitutive properties. Elastic models with antithetic or synthetic faults or joints in association with a master normal fault illustrate how these secondary structures influence the deformation in ways that are similar to fault/fold geometry mapped in the western Grand Canyon. Specifically, synthetic faults amplify hanging wall bedding dips, antithetic faults reduce dips, and joints act to localize deformation. The distribution of aftershocks in the hanging wall of the Kozani-Grevena earthquake suggests that secondary structures may accommodate strains associated with slip on a master fault during postseismic deformation.

  10. Using partially labeled data for normal mixture identification with application to class definition

    NASA Technical Reports Server (NTRS)

    Shahshahani, Behzad M.; Landgrebe, David A.

    1992-01-01

    The problem of estimating the parameters of a normal mixture density when, in addition to the unlabeled samples, sets of partially labeled samples are available is addressed. The density of the multidimensional feature space is modeled with a normal mixture. It is assumed that the set of components of the mixture can be partitioned into several classes and that training samples are available from each class. Since for any training sample the class of origin is known but the exact component of origin within the corresponding class is unknown, the training samples are considered to be partially labeled. The EM iterative equations are derived for estimating the parameters of the normal mixture in the presence of partially labeled samples. These equations can be used to combine the supervised and nonsupervised learning processes.
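
    A simplified sketch of the idea: an EM loop for a one-dimensional mixture in which unlabeled samples contribute through posterior responsibilities while labeled samples are fixed to their class. This collapses the paper's setting to one Gaussian component per class; the multi-component-per-class case, synthetic data and initial guesses here are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

# Two classes, one Gaussian component each (the paper allows several per class).
x_unlab = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(3.0, 1.0, 700)])
x_lab = np.array([-0.2, 0.1, 2.8, 3.3]); y_lab = np.array([0, 0, 1, 1])

# Initial guesses for mixing proportions, means and standard deviations.
pi, mu, sd = np.array([0.5, 0.5]), np.array([-1.0, 4.0]), np.array([1.0, 1.0])

for _ in range(50):
    # E-step: responsibilities for unlabeled data; labeled data keep their class.
    lik = pi * norm.pdf(x_unlab[:, None], mu, sd)
    r_unlab = lik / lik.sum(axis=1, keepdims=True)
    r = np.vstack([r_unlab, np.eye(2)[y_lab]])
    x = np.concatenate([x_unlab, x_lab])
    # M-step: weighted updates of proportions, means and variances.
    nk = r.sum(axis=0)
    pi = nk / nk.sum()
    mu = (r * x[:, None]).sum(axis=0) / nk
    sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

print("mixing proportions:", pi)
print("means:", mu, " standard deviations:", sd)
```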

  11. Combustion of solid carbon rods in zero and normal gravity

    NASA Technical Reports Server (NTRS)

    Spuckler, C. M.; Kohl, F. J.; Miller, R. A.; Stearns, C. A.; Dewitt, K. J.

    1979-01-01

    In order to investigate the mechanism of carbon combustion, spectroscopic carbon rods were resistance ignited and burned in an oxygen environment in normal and zero gravity. Direct mass spectrometric sampling was used in the normal gravity tests to obtain concentration profiles of CO2, CO, and O2 as a function of distance from the carbon surface. The experimental concentrations were compared to those predicted by a stagnant film model. Zero gravity droptower tests were conducted in order to assess the effect of convection on the normal gravity combustion process. The ratio of flame diameter to rod diameter as a function of time for oxygen pressures of 5, 10, 15, and 20 psia was obtained for three different diameter rods. It was found that this ratio was inversely proportional to both the oxygen pressure and the rod diameter.

  12. Wall jet analysis for circulation control aerodynamics. Part 1: Fundamental CFD and turbulence modeling concepts

    NASA Technical Reports Server (NTRS)

    Dash, S. M.; York, B. J.; Sinha, N.; Dvorak, F. A.

    1987-01-01

    An overview of parabolic and PNS (Parabolized Navier-Stokes) methodology developed to treat highly curved sub and supersonic wall jets is presented. The fundamental data base to which these models were applied is discussed in detail. The analysis of strong curvature effects was found to require a semi-elliptic extension of the parabolic modeling to account for turbulent contributions to the normal pressure variations, as well as an extension to the turbulence models utilized, to account for the highly enhanced mixing rates observed in situations with large convex curvature. A noniterative, pressure split procedure is shown to extend parabolic models to account for such normal pressure variations in an efficient manner, requiring minimal additional run time over a standard parabolic approach. A new PNS methodology is presented to solve this problem which extends parabolic methodology via the addition of a characteristic base wave solver. Applications of this approach to analyze the interaction of wave and turbulence processes in wall jets is presented.

  13. Stochastic differential equation (SDE) model of opening gold share price of bursa saham malaysia

    NASA Astrophysics Data System (ADS)

    Hussin, F. N.; Rahman, H. A.; Bahar, A.

    2017-09-01

    The Black-Scholes option pricing model is one of the most recognized stochastic differential equation models in mathematical finance. Two parameter estimation methods have been utilized for the geometric Brownian motion (GBM) model: the historical method and the discrete method. The historical method is a statistical approach that uses the independence and normality of logarithmic returns, giving the simplest parameter estimates. The discrete method considers the transition density function of the lognormal diffusion process, with estimates derived by the maximum likelihood method. These two methods are used to obtain parameter estimates for Malaysian gold share price data, namely the Financial Times Stock Exchange (FTSE) Bursa Malaysia Emas and FTSE Bursa Malaysia Emas Shariah indices. Modelling the gold share price is important because fluctuations in gold affect the worldwide economy, including Malaysia. It is found that the discrete method gives better parameter estimates than the historical method, owing to its smaller Root Mean Square Error (RMSE).
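
    A minimal sketch of the "historical" estimation step for a GBM: the volatility is the scaled standard deviation of log returns and the drift is recovered from their mean. The price series below is synthetic, generated from known parameters so the estimates can be checked, and stands in for the FTSE Bursa Malaysia Emas data.

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic daily prices from a GBM with known parameters (annualized).
mu_true, sigma_true, dt, n = 0.08, 0.20, 1.0 / 252.0, 2520
log_ret_true = (mu_true - 0.5 * sigma_true**2) * dt + sigma_true * np.sqrt(dt) * rng.standard_normal(n)
S = 100.0 * np.exp(np.cumsum(log_ret_true))

# Historical method: estimate sigma and mu from observed log returns.
r = np.diff(np.log(S))
sigma_hat = r.std(ddof=1) / np.sqrt(dt)
mu_hat = r.mean() / dt + 0.5 * sigma_hat**2

print(f"estimated mu    = {mu_hat:.3f}  (true {mu_true})")
print(f"estimated sigma = {sigma_hat:.3f}  (true {sigma_true})")
```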

  14. Universal Behavior of a Cyclic Oxidation Model

    NASA Technical Reports Server (NTRS)

    Smialek, James L.

    2003-01-01

    A mathematical model has been generated to represent the iterative, discrete growth and spallation processes associated with cyclic oxidation. Parabolic growth kinetics (k_p) and a constant spall area fraction (F_A) were assumed, with spalling occurring interfacially at the thickest regions of the scale. Although most models require numerical techniques, the regularity and simplicity of this progression permitted an approximation by algebraic expressions. Normalization could now be performed to reflect all parametric effects, and a universal cyclic oxidation response was generated: W_u = (1/2)[3·J_u^(1/2) - J_u^(3/2)], where W_u is the weight change normalized by its maximum and J_u is the cycle number normalized by the number of cycles required to reach that maximum. Similarly, the total amount of metal consumed was represented by a single normalized curve. The factor (S_c - 1)·sqrt(F_A·k_p·Δt) was identified as a general figure of merit, where S_c is the mass ratio of oxide to oxygen and Δt is the cycle duration. A cyclic oxidation failure map was constructed, in normalized k_p-F_A space, as defined by the locus of points corresponding to a critical amount of metal consumption in a given time. All three constructions describe behavior for every value of growth rate, spall fraction, and cycle duration by means of single curves, but with two branches corresponding to the times before and after steady state is achieved.
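
    The sketch below simply evaluates the universal curve as reconstructed above (the sign convention follows from W_u reaching its maximum of 1 at J_u = 1, so treat it as an interpretation of the abstract rather than a verbatim reproduction).

```python
import numpy as np

def universal_weight_change(J_u):
    """Normalized cyclic-oxidation weight change W_u as a function of the
    normalized cycle number J_u; W_u peaks at 1 when J_u = 1."""
    return 0.5 * (3.0 * np.sqrt(J_u) - J_u ** 1.5)

J = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 3.0])
print(universal_weight_change(J))   # rises to 1 at J_u = 1, then falls back toward 0
```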

  15. Intravesical dosimetry applied to laser positioning in photodynamic therapy

    NASA Astrophysics Data System (ADS)

    Beslon, Guillaume; Ambroise, Philippe; Heit, Bernard; Bremont, Jacques; Guillemin, Francois H.

    1996-12-01

    Superficial bladder tumor is a challenging indication for photodynamic therapy. Due to the lack of specificity of the sensitizers, the light delivered by an isotropic source has to be precisely monitored over the bladder surface to restrict the cytotoxic effect to the tumor without affecting the normal epithelium. In order to assist the surgeon while delivering the therapy, an urothelium illumination model is proposed. It is computed by spline interpolation on the basis of 12 intravesical sensors. This paper presents the overall system architecture and details the modelling and visualization processes. With this model, the surgeon is able to control the source displacement inside the bladder and to homogenize the tissue exposure.

  16. a Heuristic Approach for Multi Objective Distribution Feeder Reconfiguration: Using Fuzzy Sets in Normalization of Objective Functions

    NASA Astrophysics Data System (ADS)

    Milani, Armin Ebrahimi; Haghifam, Mahmood Reza

    2008-10-01

    Reconfiguration is an operational process used for optimization with specific objectives by changing the status of switches in a distribution network. In this paper, each objective is normalized, with inspiration from fuzzy sets, to make the optimization more flexible, and the objectives are formulated as a single multi-objective function. The genetic algorithm is used to solve the suggested model, for which non-linear objective functions and constraints pose no difficulty. The effectiveness of the proposed method is demonstrated through examples.

  17. Numerical analysis of stress effects on Frank loop evolution during irradiation in austenitic Fe&z.sbnd;Cr&z.sbnd;Ni alloy

    NASA Astrophysics Data System (ADS)

    Tanigawa, Hiroyasu; Katoh, Yutai; Kohyama, Akira

    1995-08-01

    Effects of applied stress on the early stages of interstitial-type Frank loop evolution were investigated by both numerical calculation and irradiation experiments. The final objective of this research is to propose a comprehensive model of complex stress effects on microstructural evolution under various conditions. In the experimental part of this work, the microstructural analysis revealed that differences in resolved normal stress caused differences in the nucleation rates of Frank loops on the {111} family of crystallographic planes, and that with increasing external applied stress the total nucleation rate of Frank loops increased. A numerical calculation was carried out primarily to evaluate the validity of models of stress effects on the nucleation processes of Frank loop evolution. The calculation is based on rate equations that describe the evolution of point defects, small point defect clusters and Frank loops. The rate equations of Frank loop evolution were formulated for {111} planes, considering the effects of resolved normal stress on the clustering processes of small point defects and on the growth processes of Frank loops separately. The experimental results and the predictions of the numerical calculation coincided well qualitatively.

  18. Disadvantages of interfragmentary shear on fracture healing--mechanical insights through numerical simulation.

    PubMed

    Steiner, Malte; Claes, Lutz; Ignatius, Anita; Simon, Ulrich; Wehner, Tim

    2014-07-01

    The outcome of secondary fracture healing processes is strongly influenced by interfragmentary motion. Shear movement is assumed to be more disadvantageous than axial movement; however, experimental results are contradictory. Numerical fracture healing models allow simulation of the fracture healing process with variation of single input parameters and under comparable, normalized mechanical conditions. Thus, a comparison of the influence of different loading directions on the healing process is possible. In this study we simulated fracture healing under several axial compressive, translational shear and torsional shear movement scenarios and compared their respective healing times. To this end, we used a calibrated numerical model of fracture healing in sheep. Numerous variations of movement amplitudes and musculoskeletal loads were simulated for the three loading directions. Our results show that isolated axial compression was more beneficial for fracture healing success than either isolated shearing condition, for load and displacement magnitudes that were identical as well as physiologically different, and even for strain-based, normalized comparable conditions. Additionally, torsional shear movements had less impeding effects than translational shear movements. Therefore, our findings suggest that osteosynthesis implants can be optimized, in particular, to limit translational interfragmentary shear under musculoskeletal loading. © 2014 Orthopaedic Research Society. Published by Wiley Periodicals, Inc.

  19. An optimal state estimation model of sensory integration in human postural balance

    NASA Astrophysics Data System (ADS)

    Kuo, Arthur D.

    2005-09-01

    We propose a model for human postural balance, combining state feedback control with optimal state estimation. State estimation uses an internal model of body and sensor dynamics to process sensor information and determine body orientation. Three sensory modalities are modeled: joint proprioception, vestibular organs in the inner ear, and vision. These are mated with a two degree-of-freedom model of body dynamics in the sagittal plane. Linear quadratic optimal control is used to design state feedback and estimation gains. Nine free parameters define the control objective and the signal-to-noise ratios of the sensors. The model predicts statistical properties of human sway in terms of covariance of ankle and hip motion. These predictions are compared with normal human responses to alterations in sensory conditions. With a single parameter set, the model successfully reproduces the general nature of postural motion as a function of sensory environment. Parameter variations reveal that the model is highly robust under normal sensory conditions, but not when two or more sensors are inaccurate. This behavior is similar to that of normal human subjects. We propose that age-related sensory changes may be modeled with decreased signal-to-noise ratios, and compare the model's behavior with degraded sensors against experimental measurements from older adults. We also examine removal of the model's vestibular sense, which leads to instability similar to that observed in bilateral vestibular loss subjects. The model may be useful for predicting which sensors are most critical for balance, and how much they can deteriorate before posture becomes unstable.
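    A minimal sketch of the combination of state feedback with optimal state estimation (an LQG-style design, as named above) is shown below for a toy one-segment stand-in for the body model; the dynamics matrices, cost weights and noise covariances are placeholder assumptions, not the paper's parameters.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy single-segment inverted pendulum standing in for the sagittal-plane body
# model; all matrices and noise levels are illustrative assumptions.
A = np.array([[0.0, 1.0], [9.81, 0.0]])   # linearized dynamics (angle, angular rate)
B = np.array([[0.0], [1.0]])              # ankle torque input
C = np.array([[1.0, 0.0]])                # proprioceptive angle measurement

# State feedback gain from linear quadratic optimal control: u = -K x_hat
Q, R = np.diag([10.0, 1.0]), np.array([[0.1]])
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# Optimal state estimation (Kalman) gain from assumed process/sensor noise levels;
# lowering a sensor's signal-to-noise ratio (raising V) weakens that sensor's role.
W, V = np.diag([0.01, 0.01]), np.array([[0.001]])
S = solve_continuous_are(A.T, C.T, W, V)
L = S @ C.T @ np.linalg.inv(V)

print("feedback gain K:", K)
print("estimator gain L:", L.ravel())
```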

  20. Design and simulation of optoelectronic complementary dual neural elements for realizing a family of normalized vector 'equivalence-nonequivalence' operations

    NASA Astrophysics Data System (ADS)

    Krasilenko, Vladimir G.; Nikolsky, Aleksandr I.; Lazarev, Alexander A.; Magas, Taras E.

    2010-04-01

    The advantages of equivalence models (EM) of neural networks (NN) are shown in this paper. EMs are based on vector-matrix procedures with basic operations of continuous neurologic: the normalized vector operations "equivalence", "nonequivalence", "autoequivalence" and "autononequivalence". The capacity of NNs based on EM and its modifications, including auto- and heteroassociative memories for 2D images, exceeds the number of neurons several times over. Such neuroparadigms are very promising for processing, recognizing and storing large, strongly correlated images. A family of "normalized equivalence-nonequivalence" neuro-fuzzy logic operations is elaborated on the basis of the generalized operations fuzzy negation, t-norm and s-norm. A biologically motivated concept and time-pulse encoding principles of continuous-logic photocurrent reflections and sample-storage devices with pulse-width photoconverters have allowed us to design generalized structures for realizing the family of normalized linear vector "equivalence"-"nonequivalence" operations. Simulation results show that the processing time in such circuits does not exceed a few microseconds. The circuits are simple, have low supply voltage (1-3 V), low power consumption (milliwatts) and low input signal levels (microwatts), are amenable to integrated construction, and address the problems of interconnection and cascading.
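    As a rough sketch of what a normalized vector "equivalence" operation can look like in software, the snippet below averages an elementwise continuous-logic equivalence of two signals in [0, 1]; the particular elementwise form (1 - |a - b|) is one common choice assumed here for illustration, not necessarily the paper's definition.

```python
import numpy as np

def normalized_equivalence(a, b):
    """Normalized vector 'equivalence' of two signals with values in [0, 1]:
    elementwise equivalence (here 1 - |a - b|, one common continuous-logic choice)
    averaged over the vector. Returns a value in [0, 1]."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.mean(1.0 - np.abs(a - b)))

def normalized_nonequivalence(a, b):
    """Complement of the normalized equivalence."""
    return 1.0 - normalized_equivalence(a, b)

x = np.array([0.9, 0.2, 0.7, 0.5])
y = np.array([0.8, 0.1, 0.9, 0.5])
print(normalized_equivalence(x, y), normalized_nonequivalence(x, y))
```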

  1. Fast Edge Detection and Segmentation of Terrestrial Laser Scans Through Normal Variation Analysis

    NASA Astrophysics Data System (ADS)

    Che, E.; Olsen, M. J.

    2017-09-01

    Terrestrial Laser Scanning (TLS) utilizes light detection and ranging (lidar) to effectively and efficiently acquire point cloud data for a wide variety of applications. Segmentation is a common procedure of post-processing to group the point cloud into a number of clusters to simplify the data for the sequential modelling and analysis needed for most applications. This paper presents a novel method to rapidly segment TLS data based on edge detection and region growing. First, by computing the projected incidence angles and performing the normal variation analysis, the silhouette edges and intersection edges are separated from the smooth surfaces. Then a modified region growing algorithm groups the points lying on the same smooth surface. The proposed method efficiently exploits the gridded scan pattern utilized during acquisition of TLS data from most sensors and takes advantage of parallel programming to process approximately 1 million points per second. Moreover, the proposed segmentation does not require estimation of the normal at each point, which limits the errors in normal estimation propagating to segmentation. Both an indoor and outdoor scene are used for an experiment to demonstrate and discuss the effectiveness and robustness of the proposed segmentation method.
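    The grouping step of such a segmentation can be sketched as a simple 4-neighbour region growing over the scan grid once an edge mask has been produced by the normal-variation test; the sketch below shows only that grouping step, with the edge mask treated as given.

```python
import numpy as np
from collections import deque

def region_grow(grid_pts, is_edge):
    """Label connected non-edge pixels of a gridded scan (H x W x 3 XYZ array) by
    4-neighbour region growing. `is_edge` is an H x W boolean mask produced by
    whatever edge test is used (e.g. the normal-variation test described above);
    this sketch shows only the grouping step."""
    H, W, _ = grid_pts.shape
    labels = -np.ones((H, W), dtype=int)      # -1 marks edges / unlabelled points
    next_label = 0
    for sy in range(H):
        for sx in range(W):
            if is_edge[sy, sx] or labels[sy, sx] >= 0:
                continue
            labels[sy, sx] = next_label
            queue = deque([(sy, sx)])
            while queue:
                y, x = queue.popleft()
                for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
                    if 0 <= ny < H and 0 <= nx < W and labels[ny, nx] < 0 \
                            and not is_edge[ny, nx]:
                        labels[ny, nx] = next_label
                        queue.append((ny, nx))
            next_label += 1
    return labels

# Tiny demo: a 4 x 4 grid split into two segments by an 'edge' column
pts = np.zeros((4, 4, 3))
edge = np.zeros((4, 4), dtype=bool)
edge[:, 2] = True
print(region_grow(pts, edge))
```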

  2. Evaluating concentration estimation errors in ELISA microarray experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daly, Don S.; White, Amanda M.; Varnum, Susan M.

    Enzyme-linked immunosorbent assay (ELISA) is a standard immunoassay to predict a protein concentration in a sample. Deploying ELISA in a microarray format permits simultaneous prediction of the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Evaluating prediction error is critical to interpreting biological significance and improving the ELISA microarray process. Evaluating prediction error must be automated to realize a reliable high-throughput ELISA microarray system. Methods: In this paper, we present a statistical method based on propagation of error to evaluate prediction errors in the ELISA microarray process. Although propagation of error is central to this method, it is effective only when comparable data are available. Therefore, we briefly discuss the roles of experimental design, data screening, normalization and statistical diagnostics when evaluating ELISA microarray prediction errors. We use an ELISA microarray investigation of breast cancer biomarkers to illustrate the evaluation of prediction errors. The illustration begins with a description of the design and resulting data, followed by a brief discussion of data screening and normalization. In our illustration, we fit a standard curve to the screened and normalized data, review the modeling diagnostics, and apply propagation of error.
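    A hedged sketch of first-order propagation of error through a standard curve is shown below: a four-parameter logistic curve is inverted to predict concentration, and the uncertainty of the measured response is propagated through a numerical derivative of that inverse. The curve form, parameter values and noise level are illustrative assumptions, not the study's calibration.

```python
import numpy as np

def four_pl(conc, a, b, c, d):
    """Four-parameter logistic standard curve: response as a function of concentration."""
    return d + (a - d) / (1.0 + (conc / c) ** b)

def predict_conc(y, a, b, c, d):
    """Invert the 4PL curve to predict concentration from an observed response."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

def conc_std_error(y, y_sd, a, b, c, d, eps=1e-6):
    """First-order propagation of error: sd(conc) ~ |d conc / d y| * sd(y),
    with the derivative taken numerically."""
    dcdy = (predict_conc(y + eps, a, b, c, d) - predict_conc(y - eps, a, b, c, d)) / (2 * eps)
    return abs(dcdy) * y_sd

# Illustrative curve parameters and a single measured response (not from the study)
a, b, c, d = 0.05, 1.2, 10.0, 2.0
y_obs, y_sd = 1.0, 0.05
print(predict_conc(y_obs, a, b, c, d), conc_std_error(y_obs, y_sd, a, b, c, d))
```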

  3. Dynamics of ultrasonic additive manufacturing.

    PubMed

    Hehr, Adam; Dapino, Marcelo J

    2017-01-01

    Ultrasonic additive manufacturing (UAM) is a solid-state technology for joining similar and dissimilar metal foils near room temperature by scrubbing them together with ultrasonic vibrations under pressure. Structural dynamics of the welding assembly and work piece influence how energy is transferred during the process and, ultimately, part quality. To understand the effect of structural dynamics during UAM, a linear time-invariant model is proposed to relate the inputs of shear force and electric current to the resultant welder velocity and voltage. The measured frequency response and operating performance of the welder under no load are used to identify model parameters. Using this model and in-situ measurements, shear force and welder efficiency are estimated to be near 2000 N and 80%, respectively, when welding Al 6061-H18 weld foil. Shear force and welder efficiency have never been estimated before in UAM. The influence of processing conditions, i.e., welder amplitude, normal force, and weld speed, on shear force and welder efficiency is investigated. Welder velocity was found to strongly influence the shear force magnitude and efficiency, while normal force and weld speed showed little to no influence. The proposed model is used to describe high-frequency harmonic content in the velocity response of the welder during welding operations and coupling of the UAM build with the welder. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Applying the LANL Statistical Pattern Recognition Paradigm for Structural Health Monitoring to Data from a Surface-Effect Fast Patrol Boat

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoon Sohn; Charles Farrar; Norman Hunter

    2001-01-01

    This report summarizes the analysis of fiber-optic strain gauge data obtained from a surface-effect fast patrol boat being studied by the staff at the Norwegian Defense Research Establishment (NDRE) in Norway and the Naval Research Laboratory (NRL) in Washington D.C. Data from two different structural conditions were provided to the staff at Los Alamos National Laboratory. The problem was then approached from a statistical pattern recognition paradigm. This paradigm can be described as a four-part process: (1) operational evaluation, (2) data acquisition & cleansing, (3) feature extraction and data reduction, and (4) statistical model development for feature discrimination. Given that the first two portions of this paradigm were mostly completed by the NDRE and NRL staff, this study focused on data normalization, feature extraction, and statistical modeling for feature discrimination. The feature extraction process began by looking at relatively simple statistics of the signals and progressed to using the residual errors from auto-regressive (AR) models fit to the measured data as the damage-sensitive features. Data normalization proved to be the most challenging portion of this investigation. A novel approach to data normalization, where the residual errors in the AR model are considered to be an unmeasured input and an auto-regressive model with exogenous inputs (ARX) is then fit to portions of the data exhibiting similar waveforms, was successfully applied to this problem. With this normalization procedure, a clear distinction between the two different structural conditions was obtained. A false-positive study was also run, and the procedure developed herein did not yield any false-positive indications of damage. Finally, the results must be qualified by the fact that this procedure has only been applied to very limited data samples. A more complete analysis of additional data taken under various operational and environmental conditions as well as other structural conditions is necessary before one can definitively state that the procedure is robust enough to be used in practice.
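    The AR-residual feature idea can be sketched as follows: fit an AR model to reference-condition data, then score new data by the variance of its residuals under that reference model. This sketch omits the ARX-based normalization step and uses a plain least-squares AR fit with synthetic signals; the signals and model orders are illustrative assumptions.

```python
import numpy as np

def lag_matrix(x, order):
    """Columns are x delayed by 1..order samples, aligned with x[order:]."""
    return np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])

def fit_ar(x, order):
    """Fit an AR(order) model by least squares; return coefficients and residuals."""
    X, y = lag_matrix(x, order), x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs, y - X @ coeffs

def ar_residuals(x, coeffs):
    """Residual errors of a new signal under a previously fitted AR model."""
    order = len(coeffs)
    return x[order:] - lag_matrix(x, order) @ coeffs

def simulate_ar2(a1, a2, n, rng):
    """Synthetic stand-in for a measured strain response with AR(2) dynamics."""
    x, e = np.zeros(n), rng.standard_normal(n)
    for i in range(2, n):
        x[i] = a1 * x[i - 1] + a2 * x[i - 2] + e[i]
    return x

rng = np.random.default_rng(0)
baseline = simulate_ar2(1.5, -0.75, 4000, rng)   # reference structural condition
changed = simulate_ar2(1.3, -0.60, 4000, rng)    # condition with altered dynamics

coeffs, res_ref = fit_ar(baseline, order=10)
res_new = ar_residuals(changed, coeffs)
# Larger residual variance under the reference AR model flags a change of condition.
print(np.std(res_new) / np.std(res_ref))
```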

  5. Diagnosing a Strong-Fault Model by Conflict and Consistency

    PubMed Central

    Zhou, Gan; Feng, Wenquan

    2018-01-01

    The diagnosis method for a weak-fault model, with only normal behaviors of each component, has evolved over decades. However, many systems now demand strong-fault models, the fault modes of which have specific behaviors as well. It is difficult to diagnose a strong-fault model due to its non-monotonicity. Currently, diagnosis methods usually employ conflicts to isolate possible faults, and the process can be expedited when some observed output is consistent with the model's prediction, where the consistency indicates probably normal components. This paper solves the problem of efficiently diagnosing a strong-fault model by proposing a novel Logic-based Truth Maintenance System (LTMS) with two search approaches based on conflict and consistency. At the beginning, the original strong-fault model is encoded by Boolean variables and converted into Conjunctive Normal Form (CNF). Then the proposed LTMS is employed to reason over the CNF and find multiple minimal conflicts and maximal consistencies when a fault exists. The search approaches efficiently offer the best candidates based on the reasoning result until the diagnosis results are obtained. The completeness, coverage, correctness and complexity of the proposals are analyzed theoretically to show their strengths and weaknesses. Finally, the proposed approaches are demonstrated by applying them to a real-world domain, the heat control unit of a spacecraft, where the proposed methods are significantly better than best-first and conflict-directed A* search methods. PMID:29596302

  6. The Monash University Interactive Simple Climate Model

    NASA Astrophysics Data System (ADS)

    Dommenget, D.

    2013-12-01

    The Monash University interactive simple climate model is a web-based interface that allows students and the general public to explore the physical simulation of the climate system with a real global climate model. It is based on the Globally Resolved Energy Balance (GREB) model, which is a climate model published by Dommenget and Floeter [2011] in the international peer-reviewed science journal Climate Dynamics. The model simulates most of the main physical processes in the climate system in a very simplistic way and therefore allows very fast and simple climate model simulations on a normal PC. Despite its simplicity the model simulates the climate response to external forcings, such as doubling of the CO2 concentrations, very realistically (similar to state-of-the-art climate models). The Monash simple climate model web interface allows you to study the results of more than 2000 different model experiments in an interactive way, and it allows you to study a number of tutorials on the interactions of physical processes in the climate system and solve some puzzles. By switching physical processes OFF/ON you can deconstruct the climate and learn how all the different processes interact to generate the observed climate and how the processes interact to generate the IPCC-predicted climate change for anthropogenic CO2 increase. The presentation will illustrate how this web-based tool works and what the possibilities are for teaching students with this tool.

  7. An adapted yield criterion for the evolution of subsequent yield surfaces

    NASA Astrophysics Data System (ADS)

    Küsters, N.; Brosius, A.

    2017-09-01

    In numerical analysis of sheet metal forming processes, the anisotropic material behaviour is often modelled with isotropic work hardening and an average Lankford coefficient. In contrast, experimental observations show an evolution of the Lankford coefficients, which can be associated with a yield surface change due to kinematic and distortional hardening. Commonly, extensive efforts are carried out to describe these phenomena. In this paper an isotropic material model based on the Yld2000-2d criterion is adapted with an evolving yield exponent in order to change the yield surface shape. The yield exponent is linked to the accumulative plastic strain. This change has the effect of a rotating yield surface normal. As the normal is directly related to the Lankford coefficient, the change can be used to model the evolution of the Lankford coefficient during yielding. The paper will focus on the numerical implementation of the adapted material model for the FE-code LS-Dyna, mpi-version R7.1.2-d. A recently introduced identification scheme [1] is used to obtain the parameters for the evolving yield surface and will be briefly described for the proposed model. The suitability for numerical analysis will be discussed for deep drawing processes in general. Efforts for material characterization and modelling will be compared to other common yield surface descriptions. Besides experimental efforts and achieved accuracy, the potential of flexibility in material models and the risk of ambiguity during identification are of major interest in this paper.

  8. Multi-parameters monitoring during traditional Chinese medicine concentration process with near infrared spectroscopy and chemometrics

    NASA Astrophysics Data System (ADS)

    Liu, Ronghua; Sun, Qiaofeng; Hu, Tian; Li, Lian; Nie, Lei; Wang, Jiayue; Zhou, Wanhui; Zang, Hengchang

    2018-03-01

    As a powerful process analytical technology (PAT) tool, near infrared (NIR) spectroscopy has been widely used in real-time monitoring. In this study, NIR spectroscopy was applied to monitor multiple parameters of the traditional Chinese medicine (TCM) Shenzhiling oral liquid during the concentration process to guarantee the quality of the product. Five lab-scale batches were employed to construct quantitative models to determine five chemical ingredients and a physical property (sample density) during the concentration process. Paeoniflorin, albiflorin, liquiritin and sample density were modeled by partial least squares regression (PLSR), while the contents of glycyrrhizic acid and cinnamic acid were modeled by support vector machine regression (SVMR). Standard normal variate (SNV) and/or Savitzky-Golay (SG) smoothing with derivative methods were adopted for spectral pretreatment. Variable selection methods including correlation coefficient (CC), competitive adaptive reweighted sampling (CARS) and interval partial least squares regression (iPLS) were performed to optimize the models. The results indicated that NIR spectroscopy is an effective tool for successfully monitoring the concentration process of Shenzhiling oral liquid.
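    A minimal sketch of the SNV pretreatment followed by a PLSR calibration (using scikit-learn) is given below; the spectra, reference values and number of latent variables are synthetic placeholders, not the study's data or settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def snv(spectra):
    """Standard normal variate: centre and scale each spectrum (row) by its own
    mean and standard deviation."""
    spectra = np.asarray(spectra, float)
    return (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

# Synthetic stand-ins for NIR spectra and a reference assay value (e.g. a
# paeoniflorin content); shapes and numbers are illustrative only.
rng = np.random.default_rng(1)
X = rng.random((40, 200))                         # 40 samples x 200 wavelengths
y = 2.0 * X[:, 50] + rng.normal(0.0, 0.05, 40)    # reference values

pls = PLSRegression(n_components=5)
pls.fit(snv(X), y)
print(pls.predict(snv(X[:3])).ravel())
```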

  9. Possible Effects of Synaptic Imbalances on Oligodendrocyte–Axonic Interactions in Schizophrenia: A Hypothetical Model

    PubMed Central

    Mitterauer, Bernhard J.; Kofler-Westergren, Birgitta

    2011-01-01

    A model of glial–neuronal interactions is proposed that could be explanatory for the demyelination identified in brains with schizophrenia. It is based on two hypotheses: (1) that glia–neuron systems are functionally viable and important for normal brain function, and (2) that disruption of this postulated function disturbs the glial categorization function, as shown by formal analysis. According to this model, in schizophrenia receptors on astrocytes in glial–neuronal synaptic units are not functional, losing their modulatory influence on synaptic neurotransmission. Hence, an unconstrained neurotransmission flux occurs that hyperactivates the axon and floods the cognate receptors of neurotransmitters on oligodendrocytes. The excess of neurotransmitters may have a toxic effect on oligodendrocytes and myelin, causing demyelination. In parallel, an increasing impairment of axons may disconnect neuronal networks. It is formally shown how oligodendrocytes normally categorize axonic information processing via their processes. Demyelination decomposes the oligodendrocyte–axonic system, making it incapable of generating categories of information. This incoherence may be responsible for symptoms of disorganization in schizophrenia, such as thought disorder, inappropriate affect and incommunicable motor behavior. In parallel, the loss of oligodendrocytes affects gap junctions in the panglial syncytium, presumably responsible for memory impairment in schizophrenia. PMID:21647404

  10. A theory for modeling ground-water flow in heterogeneous media

    USGS Publications Warehouse

    Cooley, Richard L.

    2004-01-01

    Construction of a ground-water model for a field area is not a straightforward process. Data are virtually never complete or detailed enough to allow substitution into the model equations and direct computation of the results of interest. Formal model calibration through optimization, statistical, and geostatistical methods is being applied to an increasing extent to deal with this problem and provide for quantitative evaluation and uncertainty analysis of the model. However, these approaches are hampered by two pervasive problems: 1) nonlinearity of the solution of the model equations with respect to some of the model (or hydrogeologic) input variables (termed in this report system characteristics) and 2) detailed and generally unknown spatial variability (heterogeneity) of some of the system characteristics such as log hydraulic conductivity, specific storage, recharge and discharge, and boundary conditions. A theory is developed in this report to address these problems. The theory allows construction and analysis of a ground-water model of flow (and, by extension, transport) in heterogeneous media using a small number of lumped or smoothed system characteristics (termed parameters). The theory fully addresses both nonlinearity and heterogeneity in such a way that the parameters are not assumed to be effective values. The ground-water flow system is assumed to be adequately characterized by a set of spatially and temporally distributed discrete values, β, of the system characteristics. This set contains both small-scale variability that cannot be described in a model and large-scale variability that can. The spatial and temporal variability in β are accounted for by imagining β to be generated by a stochastic process wherein β is normally distributed, although normality is not essential. Because β has too large a dimension to be estimated using the data normally available, for modeling purposes β is replaced by a smoothed or lumped approximation yβ* (where y is a spatial and temporal interpolation matrix). The set yβ* has the same form as the expected value of β, yβ̄, where β̄ is the set of drift parameters of the stochastic process; β* is a best-fit vector to β. A model function f(β), such as a computed hydraulic head or flux, is assumed to accurately represent an actual field quantity, but the same function written using yβ*, f(yβ*), contains error from lumping or smoothing of β using yβ*. Thus, the replacement of β by yβ* yields nonzero mean model errors of the form E[f(β) - f(yβ*)] throughout the model and covariances between model errors at points throughout the model. These nonzero means and covariances are evaluated through third- and fifth-order accuracy, respectively, using Taylor series expansions. They can have a significant effect on construction and interpretation of a model that is calibrated by estimating β*. Vector β* is estimated as β̂ using weighted nonlinear least squares techniques to fit a set of model functions f(yβ̂) to a corresponding set of observations of f(β), Y. These observations are assumed to be corrupted by zero-mean, normally distributed observation errors, although, as for β, normality is not essential. An analytical approximation of the nonlinear least squares solution is obtained using Taylor series expansions and perturbation techniques that assume model and observation errors to be small. This solution is used to evaluate biases and other results to second-order accuracy in the errors. The correct weight matrix to use in the analysis is shown to be the inverse of the second-moment matrix E[(Y - f(yβ*))(Y - f(yβ*))'], but the weight matrix is assumed to be arbitrary in most developments. The best diagonal approximation is the inverse of the matrix of diagonal elements of E[(Y - f(yβ*))(Y - f(yβ*))'], and a method of estimating this diagonal matrix when it is unknown is developed using a special objective function to compute β̂. When considered to be an estimate of f
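    A small sketch of weighted nonlinear least squares of the general kind described above is given below, using scipy's least_squares with residuals scaled by the square roots of diagonal weights; the model function, data and weights are toy stand-ins, not the report's ground-water model.

```python
import numpy as np
from scipy.optimize import least_squares

def model(beta, x):
    """Toy stand-in for a model function f (e.g. drawdown vs. distance)."""
    return beta[0] * (1.0 - np.exp(-beta[1] * x))

x_obs = np.linspace(0.1, 5.0, 20)
rng = np.random.default_rng(2)
sigma = 0.05 + 0.02 * x_obs                       # heteroscedastic observation error
y_obs = model([3.0, 0.8], x_obs) + rng.normal(0.0, sigma)

# Diagonal weight matrix = inverse observation variances; weighting enters by
# scaling each residual by the square root of its weight.
w = 1.0 / sigma**2
fit = least_squares(lambda b: np.sqrt(w) * (y_obs - model(b, x_obs)), x0=[1.0, 1.0])
print(fit.x)    # estimated parameters (the 'hat' values)
```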

  11. Adaptive clutter rejection filters for airborne Doppler weather radar applied to the detection of low altitude windshear

    NASA Technical Reports Server (NTRS)

    Keel, Byron M.

    1989-01-01

    An optimum adaptive clutter rejection filter for use with airborne Doppler weather radar is presented. The radar system is being designed to operate at low altitudes for the detection of windshear in an airport terminal area, where ground clutter returns may mask the weather return. The coefficients of the adaptive clutter rejection filter are obtained using a complex form of a square root normalized recursive least squares lattice estimation algorithm which models the clutter return data as an autoregressive process. The normalized lattice structure implementation of the adaptive modeling process for determining the filter coefficients assures that the resulting coefficients will yield a stable filter and offers possible fixed-point implementation. A 10th-order FIR clutter rejection filter indexed by geographical location is designed through autoregressive modeling of simulated clutter data. Filtered data, containing simulated dry microburst and clutter returns, are analyzed using pulse-pair estimation techniques. To measure the ability of the clutter rejection filters to remove the clutter, results are compared to pulse-pair estimates of windspeed within a simulated dry microburst without clutter. In the filter evaluation process, post-filtered pulse-pair width estimates and power levels are also used to measure the effectiveness of the filters. The results support the use of an adaptive clutter rejection filter for reducing the clutter-induced bias in pulse-pair estimates of windspeed.
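    The idea of deriving a clutter rejection filter from an autoregressive model of the clutter can be sketched with a prediction-error (whitening) FIR filter, as below. This sketch fits the AR model by plain least squares rather than the square-root normalized recursive least squares lattice algorithm described in the report, and the clutter and weather signals are synthetic assumptions.

```python
import numpy as np
from scipy.signal import lfilter

def ar_whitening_filter(clutter, order=10):
    """Fit an AR model to clutter-only data by least squares and return the
    prediction-error (whitening) FIR filter [1, -a1, ..., -ap], which suppresses
    the clutter spectrum when applied to radar returns."""
    X = np.column_stack([clutter[order - k - 1:len(clutter) - k - 1] for k in range(order)])
    a, *_ = np.linalg.lstsq(X, clutter[order:], rcond=None)
    return np.concatenate(([1.0], -a))

# Toy clutter: strong narrowband component near zero Doppler plus receiver noise.
rng = np.random.default_rng(3)
n = np.arange(4096)
clutter = np.cos(2 * np.pi * 0.01 * n) + 0.1 * rng.standard_normal(n.size)
weather = 0.3 * np.cos(2 * np.pi * 0.15 * n)      # simulated weather return

h = ar_whitening_filter(clutter, order=10)
filtered = lfilter(h, [1.0], clutter + weather)
# The clutter line near zero Doppler is attenuated while the weather component
# largely survives, so the filtered return has much less power than the raw one.
print(np.std(clutter + weather), np.std(filtered))
```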

  12. Normalized Temperature Contrast Processing in Infrared Flash Thermography

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M.

    2016-01-01

    The paper presents further development in normalized contrast processing used in the flash infrared thermography method. Methods of computing normalized image or pixel intensity contrast and normalized temperature contrast are provided. Methods of converting image contrast to temperature contrast and vice versa are provided. Normalized contrast processing in flash thermography is useful in quantitative analysis of flash thermography data, including flaw characterization and comparison of experimental results with simulation. Computation of normalized temperature contrast involves use of a flash thermography data acquisition set-up with a high-reflectivity foil and high-emissivity tape such that the foil, tape and test object are imaged simultaneously. Methods of assessing other quantitative parameters such as the emissivity of the object, afterglow heat flux, reflection temperature change and surface temperature during flash thermography are also provided. Temperature imaging and normalized temperature contrast processing provide certain advantages over normalized image contrast processing by reducing the effect of reflected energy in images and measurements, therefore providing better quantitative data. Examples of incorporating afterglow heat flux and reflection temperature evolution in flash thermography simulation are also discussed.
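    One plausible form of normalized contrast, a pixel's post-flash rise divided by the rise of a sound reference region at the same frame, is sketched below; definitions of normalized contrast vary, and the curves and formula here are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def normalized_contrast(pixel_seq, ref_seq, pre_flash):
    """One plausible normalized contrast: the pixel's rise above its pre-flash
    level divided by the rise of a sound reference region at the same frame.
    Definitions vary; this is an illustrative sketch, not the paper's formula."""
    pixel_seq = np.asarray(pixel_seq, float)
    ref_seq = np.asarray(ref_seq, float)
    return (pixel_seq - pre_flash) / (ref_seq - pre_flash)

# Synthetic post-flash cooling curves (arbitrary units): the sound region follows
# a 1/sqrt(t) decay, while a pixel over a shallow flaw cools more slowly for a while.
t = np.arange(1, 50)
ambient = 20.0
ref = ambient + 5.0 / np.sqrt(t)
flaw = ambient + 5.0 / np.sqrt(t) + 0.4 * np.exp(-t / 10.0)
print(normalized_contrast(flaw, ref, pre_flash=ambient)[:5])
```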

  13. 10 CFR 490.204 - Process for granting exemptions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) Alternative fuels that meet the normal requirements and practices of the principal business of the State fleet... requirements and practices of the principal business of the State fleet are not available for purchase or lease... must be accompanied with supporting documentation. (c) Exemptions are granted for one model year only...

  14. The Heterogeneity of Picture-Supported Narratives in Alzheimer's Disease

    ERIC Educational Resources Information Center

    Duong, A.; Giroux, F.; Tardif, A.; Ska, B.

    2005-01-01

    This study describes discourse patterns produced by 46 Alzheimer disease (AD) patients and 53 normal elderly subjects in two picture-supported narratives. Nine measures derived from a cognitive model of discourse processing were obtained and submitted to cluster analysis. Results indicate that discourse patterns elicited from both stimuli were…

  15. Simulating Limb Formation in the U.S. EPA Virtual Embryo - Risk Assessment Project

    EPA Science Inventory

    The U.S. EPA’s Virtual Embryo project (v-Embryo™) is a computer model simulation of morphogenesis that integrates cell and molecular level data from mechanistic and in vitro assays with knowledge about normal development processes to assess in silico the effects of chemicals on d...

  16. Qualitative Features Extraction from Sensor Data using Short-time Fourier Transform

    NASA Technical Reports Server (NTRS)

    Amini, Abolfazl M.; Figueroa, Fernando

    2004-01-01

    The information gathered from sensors is used to determine the health of a sensor. Once a normal mode of operation is established, any deviation from the normal behavior indicates a change. This change may be due to a malfunction of the sensor(s) or the system (or process). The step-up and step-down features, as well as sensor disturbances, are assumed to be exponential. An RC network is used to model the main process, which is defined by a step-up (charging), drift, and step-down (discharging). The sensor disturbances and a spike are added while the system is in drift. The system runs for a period of at least three time-constants of the main process every time a process feature occurs (e.g. a step change). The short-time Fourier transform of the signal is taken using the Hamming window. Three window widths are used. The DC value is removed from the windowed data prior to taking the FFT. The resulting three-dimensional spectral plots provide good time-frequency resolution. The results indicate distinct shapes corresponding to each process.
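    A minimal sketch of this short-time Fourier transform processing, Hamming-windowed frames with the DC value removed before the FFT, applied to a toy step-up/drift/step-down signal, is given below; the window length, hop size and synthetic signal are illustrative assumptions.

```python
import numpy as np

def stft_hamming(x, win_len, hop):
    """Short-time Fourier transform with a Hamming window; the mean (DC value) is
    removed from each windowed frame before the FFT, as described above."""
    win = np.hamming(win_len)
    frames = []
    for start in range(0, len(x) - win_len + 1, hop):
        seg = x[start:start + win_len]
        seg = (seg - seg.mean()) * win          # remove DC, then apply the window
        frames.append(np.fft.rfft(seg))
    return np.array(frames)                      # shape: (n_frames, win_len // 2 + 1)

# Toy sensor signal: exponential step-up, slow drift, then exponential step-down,
# loosely mimicking the RC-network process described above.
t = np.linspace(0.0, 10.0, 2000)
signal = np.where(t < 4, 1 - np.exp(-t),
                  np.where(t < 7, 1.0 + 0.02 * (t - 4), np.exp(-(t - 7))))
spec = stft_hamming(signal, win_len=256, hop=64)
print(spec.shape)
```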

  17. Neuro-fuzzy model for estimating race and gender from geometric distances of human face across pose

    NASA Astrophysics Data System (ADS)

    Nanaa, K.; Rahman, M. N. A.; Rizon, M.; Mohamad, F. S.; Mamat, M.

    2018-03-01

    Classifying the human face based on race and gender is a vital process in face recognition. It contributes to an indexed database and eases 3D synthesis of the human face. Identifying race and gender based on intrinsic factors is problematic, which makes a nonlinear model more fitting for the estimation process. In this paper, we aim to estimate race and gender under varied head pose. For this purpose, we collect a dataset from the PICS and CAS-PEAL databases, detect the landmarks and rotate them to the frontal pose. After geometric distances are calculated, all distance values are normalized. Implementation is carried out using a Neural Network Model and a Fuzzy Logic Model. These models are combined using an Adaptive Neuro-Fuzzy Model. The experimental results showed that the adaptive neuro-fuzzy model, which optimizes the fuzzy membership functions, gives a better assessment rate, and that estimating race contributes to a more accurate gender assessment.

  18. Digital signal processing based on inverse scattering transform.

    PubMed

    Turitsyna, Elena G; Turitsyn, Sergei K

    2013-10-15

    Through numerical modeling, we illustrate the possibility of a new approach to digital signal processing in coherent optical communications based on the application of the so-called inverse scattering transform. Considering without loss of generality a fiber link with normal dispersion and quadrature phase shift keying signal modulation, we demonstrate how an initial information pattern can be recovered (without direct backward propagation) through the calculation of nonlinear spectral data of the received optical signal.

  19. Characterizing structural association alterations within brain networks in normal aging using Gaussian Bayesian networks.

    PubMed

    Guo, Xiaojuan; Wang, Yan; Chen, Kewei; Wu, Xia; Zhang, Jiacai; Li, Ke; Jin, Zhen; Yao, Li

    2014-01-01

    Recent multivariate neuroimaging studies have revealed aging-related alterations in brain structural networks. However, the sensory/motor networks such as the auditory, visual and motor networks, have obtained much less attention in normal aging research. In this study, we used Gaussian Bayesian networks (BN), an approach investigating possible inter-regional directed relationship, to characterize aging effects on structural associations between core brain regions within each of these structural sensory/motor networks using volumetric MRI data. We then further examined the discriminability of BN models for the young (N = 109; mean age =22.73 years, range 20-28) and old (N = 82; mean age =74.37 years, range 60-90) groups. The results of the BN modeling demonstrated that structural associations exist between two homotopic brain regions from the left and right hemispheres in each of the three networks. In particular, compared with the young group, the old group had significant connection reductions in each of the three networks and lesser connection numbers in the visual network. Moreover, it was found that the aging-related BN models could distinguish the young and old individuals with 90.05, 73.82, and 88.48% accuracy for the auditory, visual, and motor networks, respectively. Our findings suggest that BN models can be used to investigate the normal aging process with reliable statistical power. Moreover, these differences in structural inter-regional interactions may help elucidate the neuronal mechanism of anatomical changes in normal aging.

  20. Average of delta: a new quality control tool for clinical laboratories.

    PubMed

    Jones, Graham R D

    2016-01-01

    Average of normals is a tool used to control assay performance using the average of a series of results from patients' samples. Delta checking is a process of identifying errors in individual patient results by reviewing the difference from previous results of the same patient. This paper introduces a novel alternate approach, average of delta, which combines these concepts to use the average of a number of sequential delta values to identify changes in assay performance. Models for average of delta and average of normals were developed in a spreadsheet application. The model assessed the expected scatter of average of delta and average of normals functions and the effect of assay bias for different values of analytical imprecision and within- and between-subject biological variation and the number of samples included in the calculations. The final assessment was the number of patients' samples required to identify an added bias with 90% certainty. The model demonstrated that with larger numbers of delta values, the average of delta function was tighter (lower coefficient of variation). The optimal number of samples for bias detection with average of delta was likely to be between 5 and 20 for most settings and that average of delta outperformed average of normals when the within-subject biological variation was small relative to the between-subject variation. Average of delta provides a possible additional assay quality control tool which theoretical modelling predicts may be more valuable than average of normals for analytes where the group biological variation is wide compared with within-subject variation and where there is a high rate of repeat testing in the laboratory patient population. © The Author(s) 2015.
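    The average-of-delta calculation can be sketched as below: compute each patient's delta against their previous result, then track a rolling mean of recent deltas, which drifts away from zero when an assay bias first appears. The stream of results, noise levels and window size are illustrative assumptions, not the paper's spreadsheet model.

```python
import numpy as np

def delta_values(results):
    """Delta for each patient: current result minus that patient's previous result.
    `results` is an ordered iterable of (patient_id, value) pairs."""
    last, deltas = {}, []
    for pid, value in results:
        if pid in last:
            deltas.append(value - last[pid])
        last[pid] = value
    return deltas

def average_of_delta(deltas, window=10):
    """Rolling mean of the most recent `window` delta values; a drift of this
    average away from zero suggests an added assay bias."""
    return float(np.mean(deltas[-window:]))

# Illustrative stream: 50 patients tested repeatedly; a +2 unit bias appears
# part-way through the run.
rng = np.random.default_rng(4)
stream = [(i % 50, 100 + rng.normal(0, 1) + (2.0 if i > 200 else 0.0))
          for i in range(400)]
deltas = delta_values(stream)
rolling = [average_of_delta(deltas[:k]) for k in range(10, len(deltas) + 1)]
print(max(rolling))   # peaks near +2 when the bias first appears
```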

  1. Multiple Concurrent Visual-Motor Mappings: Implications for Models of Adaptation

    NASA Technical Reports Server (NTRS)

    Cunningham, H. A.; Welch, Robert B.

    1994-01-01

    Previous research on adaptation to visual-motor rearrangement suggests that the central nervous system represents accurately only 1 visual-motor mapping at a time. This idea was examined in 3 experiments where subjects tracked a moving target under repeated alternations between 2 initially interfering mappings (the 'normal' mapping characteristic of computer input devices and a 108° rotation of the normal mapping). Alternation between the 2 mappings led to significant reduction in error under the rotated mapping and significant reduction in the adaptation aftereffect ordinarily caused by switching between mappings. Color as a discriminative cue, interference versus decay in adaptation aftereffect, and intermanual transfer were also examined. The results reveal a capacity for multiple concurrent visual-motor mappings, possibly controlled by a parametric process near the motor output stage of processing.

  2. A normal tissue dose response model of dynamic repair processes.

    PubMed

    Alber, Markus; Belka, Claus

    2006-01-07

    A model is presented for serial, critical element complication mechanisms for irradiated volumes from length scales of a few millimetres up to the entire organ. The central element of the model is the description of radiation complication as the failure of a dynamic repair process. The nature of the repair process is seen as reestablishing the structural organization of the tissue, rather than mere replenishment of lost cells. The interactions between the cells, such as migration, involved in the repair process are assumed to have finite ranges, which limits the repair capacity and is the defining property of a finite-sized reconstruction unit. Since the details of the repair processes are largely unknown, the development aims to make the most general assumptions about them. The model employs analogies and methods from thermodynamics and statistical physics. An explicit analytical form of the dose response of the reconstruction unit for total, partial and inhomogeneous irradiation is derived. The use of the model is demonstrated with data from animal spinal cord experiments and clinical data about heart, lung and rectum. The three-parameter model lends a new perspective to the equivalent uniform dose formalism and the established serial and parallel complication models. Its implications for dose optimization are discussed.

  3. Role of erosion and isostasy in the Cordillera Blanca uplift: insights from Low-T thermochronology and landscape evolution modeling (northern Peru, Andes)

    NASA Astrophysics Data System (ADS)

    Margirier, A.; Robert, X.; Braun, J.; Laurence, A.

    2017-12-01

    The uplift and exhumation of the highest Peruvian peaks seem closely linked to the Cordillera Blanca normal fault, which delimits and shapes the western flank of the Cordillera Blanca. Two models have previously been proposed to explain the occurrence of extension and the presence of this active normal fault in a compressional setting, but the Cordillera Blanca normal fault and the uplift and exhumation of the Cordillera Blanca remain enigmatic. Recent studies suggested an increase of exhumation rates during the Quaternary in the Cordillera Blanca and related this increase to a change in climate and erosion process (glacial erosion vs. fluvial erosion). The Cordillera Blanca granite has been significantly eroded since its emplacement (12-5 Ma), indicating removal of a significant mass of rock. Whereas it has been demonstrated recently that eroding denser rocks can contribute to an increase in uplift rate, the impact of erosion and isostasy on the increase of the Cordillera Blanca uplift rates has never been explored. Based on numerical modeling of landscape evolution, we address the role of erosion and isostasy in the uplift and exhumation of the Cordillera Blanca. We performed inversions of the present-day topography, total exhumation and thermochronological data using a landscape evolution model (FastScape). Our results demonstrate the contribution of erosion and the associated flexural rebound to the uplift of the Cordillera Blanca. Our models suggest that the erosion of the dense Cordillera Blanca intrusion since 3 Ma could also explain the Quaternary exhumation rate increase in this area. Finally, our results allow us to question the previous models proposed for the formation of the Cordillera Blanca normal fault.

  4. Impact of Indian ocean dipole on the coastal upwelling features off the southwest coast of India

    NASA Astrophysics Data System (ADS)

    Nigam, Tanuja; Pant, Vimlesh; Prakash, Kumar Ravi

    2018-05-01

    A three-dimensional regional ocean model is used to examine the impact of positive Indian ocean dipole (pIOD) events on the coastal upwelling features at the southwest coast of India (SWCI). Two model experiments are carried out with different surface boundary conditions that prevailed in the normal and pIOD years from 1982 to 2010. Model experiments demonstrate the weakening of coastal upwelling at the SWCI in the pIOD years. The reduced southward meridional wind stress off the SWCI leads to comparatively lower offshore Ekman transport during August-October in the pIOD years to that in normal years. The suppressed coastal upwelling results in warmer sea surface temperature and deeper thermocline in the pIOD years during June-September. The offshore spatial extent of upwelled colder (< 22 °C) water was up to 75.5° E in August-September in normal years that was limited up to 76.2° E in pIOD years. The heat budget analysis reveals the decreased contribution of vertical entrainment process to the mixed layer cooling in pIOD years which is almost half of that of normal years in October. The net heat flux term shows warming tendency during May-November with a higher magnitude (+ 0.4 °C day-1) in normal years than pIOD years (+ 0.28 °C day-1). The biological productivity is found to reduce during the pIOD years as the concentration of phytoplankton and zooplankton decreases over the region of coastal upwelling at SWCI. Nitrate concentration in the pIOD years dropped by half during August-September and dropped by an order of magnitude in October as compared to its ambient concentration of 13 μmol L-1 in normal years.

  6. Pulsatile flows and wall-shear stresses in models simulating normal and stenosed aortic arches

    NASA Astrophysics Data System (ADS)

    Huang, Rong Fung; Yang, Ten-Fang; Lan, Y.-K.

    2010-03-01

    Pulsatile aqueous glycerol solution flows in the models simulating normal and stenosed human aortic arches are measured by means of particle image velocimetry. Three transparent models were used: normal, 25% stenosed, and 50% stenosed aortic arches. The Womersley parameter, Dean number, and time-averaged Reynolds number are 17.31, 725, and 1,081, respectively. The Reynolds numbers based on the peak velocities of the normal, 25% stenosed, and 50% stenosed aortic arches are 2,484, 3,456, and 3,931, respectively. The study presents the temporal/spatial evolution processes of the flow pattern, velocity distribution, and wall-shear stress during the systolic and diastolic phases. It is found that the flow pattern evolving in the central plane of normal and stenosed aortic arches exhibits (1) a separation bubble around the inner arch, (2) a recirculation vortex around the outer arch wall upstream of the junction of the brachiocephalic artery, (3) an accelerated main stream around the outer arch wall near the junctions of the left carotid and the left subclavian arteries, and (4) the vortices around the entrances of the three main branches. The study identifies and discusses the reasons for the flow physics’ contribution to the formation of these features. The oscillating wall-shear stress distributions are closely related to the featured flow structures. On the outer wall of normal and slightly stenosed aortas, large wall-shear stresses appear in the regions upstream of the junction of the brachiocephalic artery as well as the corner near the junctions of the left carotid artery and the left subclavian artery. On the inner wall, the largest wall-shear stress appears in the region where the boundary layer separates.

  7. Physiological modeling for detecting degree of perception of a color-deficient person.

    PubMed

    Rajalakshmi, T; Prince, Shanthi

    2017-04-01

    Physiological modeling of the retina plays a vital role in the development of high-performance image processing methods to produce better visual perception. People with normal vision have the ability to discern different colors. The situation is different in the case of people with color blindness. The aim of this work is to develop a human visual system model for detecting the level of perception of people with red, green and blue deficiency by considering properties like luminance and spatial and temporal frequencies. Simulation results show that in the photoreceptor, outer plexiform and inner plexiform layers, the energy and intensity levels of the red, green and blue components for a normal person are significantly higher than for dichromats. The proposed method shows, with appropriate results, that people with red and blue color blindness cannot completely perceive red and blue colors.

  8. Joint Segmentation and Deformable Registration of Brain Scans Guided by a Tumor Growth Model

    PubMed Central

    Gooya, Ali; Pohl, Kilian M.; Bilello, Michel; Biros, George; Davatzikos, Christos

    2011-01-01

    This paper presents an approach for joint segmentation and deformable registration of brain scans of glioma patients to a normal atlas. The proposed method is based on the Expectation Maximization (EM) algorithm that incorporates a glioma growth model for atlas seeding, a process which modifies the normal atlas into one with a tumor and edema. The modified atlas is registered into the patient space and utilized for the posterior probability estimation of various tissue labels. EM iteratively refines the estimates of the registration parameters, the posterior probabilities of tissue labels and the tumor growth model parameters. We have applied this approach to 10 glioma scans acquired with four Magnetic Resonance (MR) modalities (T1, T1-CE, T2 and FLAIR ) and validated the result by comparing them to manual segmentations by clinical experts. The resulting segmentations look promising and quantitatively match well with the expert provided ground truth. PMID:21995070

  10. Anxiety, social skills, friendship quality, and peer victimization: an integrated model.

    PubMed

    Crawford, A Melissa; Manassis, Katharina

    2011-10-01

    This cross-sectional study investigated whether anxiety and social functioning interact in their prediction of peer victimization. A structural equation model linking anxiety, social skills, and friendship quality to victimization was tested separately for children with anxiety disorders and normal comparison children to explore whether the processes involved in victimization differ for these groups. Participants were 8-14 year old children: 55 (34 boys, 21 girls) diagnosed with an anxiety disorder and 85 (37 boys, 48 girls) normal comparison children. The final models for both groups yielded two independent pathways to victimization: (a) anxiety independently predicted being victimized; and (b) poor social skills predicted lower friendship quality, which in turn, placed a child at risk for victimization. These findings have important implications for the treatment of childhood anxiety disorders and for school-based anti-bullying interventions, but replication with larger samples is indicated. Copyright © 2011 Elsevier Ltd. All rights reserved.

  11. The emergence of asymmetric normal fault systems under symmetric boundary conditions

    NASA Astrophysics Data System (ADS)

    Schöpfer, Martin P. J.; Childs, Conrad; Manzocchi, Tom; Walsh, John J.; Nicol, Andrew; Grasemann, Bernhard

    2017-11-01

    Many normal fault systems and, on a smaller scale, fracture boudinage often exhibit asymmetry with one fault dip direction dominating. It is a common belief that the formation of domino and shear band boudinage with a monoclinic symmetry requires a component of layer parallel shearing. Moreover, domains of parallel faults are frequently used to infer the presence of a décollement. Using Distinct Element Method (DEM) modelling we show, that asymmetric fault systems can emerge under symmetric boundary conditions. A statistical analysis of DEM models suggests that the fault dip directions and system polarities can be explained using a random process if the strength contrast between the brittle layer and the surrounding material is high. The models indicate that domino and shear band boudinage are unreliable shear-sense indicators. Moreover, the presence of a décollement should not be inferred on the basis of a domain of parallel faults alone.

  12. Sleep and Development in Genetically Tractable Model Organisms.

    PubMed

    Kayser, Matthew S; Biron, David

    2016-05-01

    Sleep is widely recognized as essential, but without a clear singular function. Inadequate sleep impairs cognition, metabolism, immune function, and many other processes. Work in genetic model systems has greatly expanded our understanding of basic sleep neurobiology as well as introduced new concepts for why we sleep. Among these is an idea with its roots in human work nearly 50 years old: sleep in early life is crucial for normal brain maturation. Nearly all known species that sleep do so more while immature, and this increased sleep coincides with a period of exuberant synaptogenesis and massive neural circuit remodeling. Adequate sleep also appears critical for normal neurodevelopmental progression. This article describes recent findings regarding molecular and circuit mechanisms of sleep, with a focus on development and the insights garnered from models amenable to detailed genetic analyses. Copyright © 2016 by the Genetics Society of America.

  13. Modulation of the inter-hemispheric processing of semantic information during normal aging. A divided visual field experiment.

    PubMed

    Hoyau, E; Cousin, E; Jaillard, A; Baciu, M

    2016-12-01

    We evaluated the effect of normal aging on the inter-hemispheric processing of semantic information by using the divided visual field (DVF) method, with words and pictures. Two main theoretical models have been considered: (a) the HAROLD model, which posits that aging is associated with supplementary recruitment of the right hemisphere (RH) and decreased hemispheric specialization, and (b) the RH decline theory, which assumes that the RH becomes less efficient with aging, associated with increased LH specialization. Two groups of subjects were examined, a Young Group (YG) and an Old Group (OG), while participants performed a semantic categorization task (living vs. non-living) on words and pictures. The DVF was realized in two steps: (a) unilateral DVF presentation, with stimuli presented separately in each visual field, left or right, allowing for their initial processing by only one hemisphere, right or left, respectively; and (b) bilateral DVF presentation (BVF), with stimuli presented simultaneously in both visual fields, followed by their processing by both hemispheres. These two types of presentation permitted the evaluation of two main characteristics of the inter-hemispheric processing of information, the hemispheric specialization (HS) and the inter-hemispheric cooperation (IHC). Moreover, the BVF allowed determination of the driver hemisphere for processing information presented in the BVF. Results obtained in the OG indicated that: (a) semantic categorization was performed as accurately as in the YG, even if more slowly, (b) a non-semantic RH decline was observed, and (c) the LH controls the semantic processing during the BVF, suggesting an increased role of the LH in aging. However, despite the stronger involvement of the LH in the OG, the RH is not completely devoid of semantic abilities. As discussed in the paper, neither the HAROLD model nor the RH decline theory fully explains this pattern of results. We rather suggest that the effect of aging on the hemispheric specialization and inter-hemispheric cooperation during semantic processing is explained not by only one model, but by an interaction between several complementary mechanisms and models. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Characterization of the canine urinary proteome.

    PubMed

    Brandt, Laura E; Ehrhart, E J; Scherman, Hataichanok; Olver, Christine S; Bohn, Andrea A; Prenni, Jessica E

    2014-06-01

    Urine is an attractive biofluid for biomarker discovery as it is easy and minimally invasive to obtain. While numerous studies have focused on the characterization of human urine, much less research has focused on canine urine. The objectives of this study were to characterize the universal canine urinary proteome (both soluble and exosomal), to determine the overlap between the canine proteome and a representative human urinary proteome study, to generate a resource for future canine studies, and to determine the suitability of the dog as a large animal model for human diseases. The soluble and exosomal fractions of normal canine urine were characterized using liquid chromatography tandem mass spectrometry (LC-MS/MS). Biological Networks Gene Ontology (BiNGO) software was utilized to assign the canine urinary proteome to respective Gene Ontology categories, such as Cellular Component, Molecular Function, and Biological Process. Over 500 proteins were confidently identified in normal canine urine. Gene Ontology analysis revealed that exosomal proteins were largely derived from an intracellular location, while soluble proteins included both extracellular and membrane proteins. Exosome proteins were assigned to metabolic processes and localization, while soluble proteins were primarily annotated to specific localization processes. Several proteins identified in normal canine urine have previously been identified in human urine where these proteins are related to various extrarenal and renal diseases. The results of this study illustrate the potential of the dog as an animal model for human disease states and provide the framework for future studies of canine renal diseases. © 2014 American Society for Veterinary Clinical Pathology and European Society for Veterinary Clinical Pathology.

  15. Seismic and aseismic deformations and impact on reservoir permeability: The case of EGS stimulation at The Geysers, California, USA

    NASA Astrophysics Data System (ADS)

    Jeanne, Pierre; Rutqvist, Jonny; Rinaldi, Antonio Pio; Dobson, Patrick F.; Walters, Mark; Hartline, Craig; Garcia, Julio

    2015-11-01

    In this paper, we use the Seismicity-Based Reservoir Characterization approach to study the spatiotemporal dynamics of an injection-induced microseismic cloud, monitored during the stimulation of an enhanced geothermal system and associated with the Northwest Geysers Enhanced Geothermal System (EGS) Demonstration project (California). We identified the development of a seismically quiet domain around the injection well surrounded by a seismically active domain. We then compare these observations with the results of 3-D thermo-hydro-mechanical simulations of the EGS, which account for changes in permeability as a function of the effective normal stress and the plastic strain. The results of our modeling show that (1) the aseismic domain is caused both by the presence of the injected cold water and by thermal processes. These thermal processes cause a cooling-induced stress reduction, which prevents shear reactivation and favors fracture opening by reducing effective normal stress and locally increasing the permeability. This process is accompanied by aseismic plastic shear strain. (2) In the seismic domain, microseismicity is caused by the reactivation of preexisting fractures, resulting from an increase in injection-induced pore pressure. Our modeling indicates that in this domain, permeability evolves according to the effective normal stress acting on the shear zones, whereas shearing of preexisting fractures may have a low impact on permeability. We attribute this lack of permeability gain to the fact that the initial permeabilities of these preexisting fractures are already high (up to 2 orders of magnitude higher than the host rock) and may already be fully dilated by past tectonic straining.
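    The kind of stress-dependent permeability law referred to above can be sketched with a simple exponential dependence on effective normal stress, optionally enhanced by plastic shear strain; the functional form and constants below are generic assumptions for illustration and not the study's calibrated THM model.

```python
import numpy as np

def fracture_permeability(k0, sigma_n_eff, sigma_ref, plastic_strain=0.0, gamma=0.0):
    """Illustrative permeability law: permeability decreases exponentially with
    effective normal stress and may be enhanced by accumulated plastic shear
    strain. Functional form and constants are generic assumptions, not the
    study's calibrated model."""
    return k0 * np.exp(-sigma_n_eff / sigma_ref) * (1.0 + gamma * plastic_strain)

# Cooling and pressurization both lower the effective normal stress, which raises
# the permeability of the fracture zones under this kind of law.
sigma_n_eff = np.array([20e6, 10e6, 5e6])          # Pa
print(fracture_permeability(1e-15, sigma_n_eff, sigma_ref=10e6))
```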

  16. The auditory basis of language impairments: temporal processing versus processing efficiency hypotheses.

    PubMed

    Hartley, Douglas E H; Hill, Penny R; Moore, David R

    2003-12-01

    Claims have been made that language-impaired children have deficits processing rapidly presented or brief sensory information. These claims, known as the 'temporal processing hypothesis', are supported by demonstrations that language-impaired children have excess backward masking (BM). One explanation for these results is that BM is developmentally delayed in these children. However, little was known about how BM normally develops. Recently, we assessed BM in normally developing 6- and 8-year-old children and adults. Results showed that BM thresholds continue to improve over a comparatively protracted period (>10 years old). We also analysed reported deficits in BM in language-impaired and younger children, in terms of a model of temporal resolution. This analysis suggests that poor processing efficiency, rather than deficits in temporal resolution, can account for these results. This 'processing efficiency hypothesis' was recently tested in our laboratory. This experiment measured BM as a function of delays between the tone and the noise in children and adults. Results supported the processing efficiency hypothesis, and suggested that reduced processing efficiency alone could account for differences between adults and children. These findings provide a new perspective on the mechanisms underlying communication disorders, and imply that remediation strategies should be directed towards improving processing efficiency, not temporal resolution.

  17. Speech Processing to Improve the Perception of Speech in Background Noise for Children With Auditory Processing Disorder and Typically Developing Peers.

    PubMed

    Flanagan, Sheila; Zorilă, Tudor-Cătălin; Stylianou, Yannis; Moore, Brian C J

    2018-01-01

    Auditory processing disorder (APD) may be diagnosed when a child has listening difficulties but has normal audiometric thresholds. For adults with normal hearing and with mild-to-moderate hearing impairment, an algorithm called spectral shaping with dynamic range compression (SSDRC) has been shown to increase the intelligibility of speech when background noise is added after the processing. Here, we assessed the effect of such processing using 8 children with APD and 10 age-matched control children. The loudness of the processed and unprocessed sentences was matched using a loudness model. The task was to repeat back sentences produced by a female speaker when presented with either speech-shaped noise (SSN) or a male competing speaker (CS) at two signal-to-background ratios (SBRs). Speech identification was significantly better with SSDRC processing than without, for both groups. The benefit of SSDRC processing was greater for the SSN than for the CS background. For the SSN, scores were similar for the two groups at both SBRs. For the CS, the APD group performed significantly more poorly than the control group. The overall improvement produced by SSDRC processing could be useful for enhancing communication in a classroom where the teacher's voice is broadcast using a wireless system.

  18. Toward a normalized clinical drug knowledge base in China-applying the RxNorm model to Chinese clinical drugs.

    PubMed

    Wang, Li; Zhang, Yaoyun; Jiang, Min; Wang, Jingqi; Dong, Jiancheng; Liu, Yun; Tao, Cui; Jiang, Guoqian; Zhou, Yi; Xu, Hua

    2018-07-01

    In recent years, electronic health record systems have been widely implemented in China, making clinical data available electronically. However, little effort has been devoted to making drug information exchangeable among these systems. This study aimed to build a Normalized Chinese Clinical Drug (NCCD) knowledge base, by applying and extending the information model of RxNorm to Chinese clinical drugs. Chinese drugs were collected from 4 major resources-China Food and Drug Administration, China Health Insurance Systems, Hospital Pharmacy Systems, and China Pharmacopoeia-for integration and normalization in NCCD. Chemical drugs were normalized using the information model in RxNorm without much change. Chinese patent drugs (i.e., Chinese herbal extracts), however, were represented using an expanded RxNorm model to incorporate the unique characteristics of these drugs. A hybrid approach combining automated natural language processing technologies and manual review by domain experts was then applied to drug attribute extraction, normalization, and further generation of drug names at different specification levels. Lastly, we reported the statistics of NCCD, as well as the evaluation results using several sets of randomly selected Chinese drugs. The current version of NCCD contains 16 976 chemical drugs and 2663 Chinese patent medicines, resulting in 19 639 clinical drugs, 250 267 unique concepts, and 2 602 760 relations. By manual review of 1700 chemical drugs and 250 Chinese patent drugs randomly selected from NCCD (about 10%), we showed that the hybrid approach could achieve an accuracy of 98.60% for drug name extraction and normalization. Using a collection of 500 chemical drugs and 500 Chinese patent drugs from other resources, we showed that NCCD achieved coverages of 97.0% and 90.0% for chemical drugs and Chinese patent drugs, respectively. Evaluation results demonstrated the potential to improve interoperability across various electronic drug systems in China.

  19. Nuclear Criticality Safety Data Book

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hollenbach, D. F.

    The objective of this document is to support the revision of criticality safety process studies (CSPSs) for the Uranium Processing Facility (UPF) at the Y-12 National Security Complex (Y-12). This design analysis and calculation (DAC) document contains development and justification for generic inputs typically used in Nuclear Criticality Safety (NCS) DACs to model both normal and abnormal conditions of processes at UPF to support CSPSs. This will provide consistency between NCS DACs and efficiency in preparation and review of DACs, as frequently used data are provided in one reference source.

  20. The effectiveness of flipped classroom learning model in secondary physics classroom setting

    NASA Astrophysics Data System (ADS)

    Prasetyo, B. D.; Suprapto, N.; Pudyastomo, R. N.

    2018-03-01

    The research aimed to describe the effectiveness of the flipped classroom learning model in a secondary physics classroom setting during the Fall semester of 2017. The research object was the Secondary 3 Physics group of Singapore School Kelapa Gading. The research was initiated by giving a pre-test, followed by treatment with the flipped classroom learning model. At the end of the learning process, the pupils were given a post-test and a questionnaire to gauge their response to the flipped classroom learning model. Based on the data analysis, 89% of pupils passed the minimum standard criteria. The improvement in the students' marks was analysed with the normalized n-gain formula, yielding a normalized gain of 0.4, which falls within the medium category. The questionnaire showed that 93% of students became more motivated to study physics and 89% were very happy to carry out hands-on activities based on the flipped classroom learning model. Together, these three aspects support the conclusion that the flipped classroom learning model is effective in a secondary physics classroom setting.
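
    The "normalized n-gain" referred to here is commonly computed with Hake's formula, g = (post − pre) / (max − pre). A minimal sketch of the class-average form, assuming scores on a 0-100 scale (the pre/post values below are made up for illustration):

    ```python
    def normalized_gain(pre_mean, post_mean, max_score=100.0):
        """Hake's class-average normalized gain <g>."""
        return (post_mean - pre_mean) / (max_score - pre_mean)

    # A gain of 0.4 falls in the commonly used "medium" band (0.3 <= g < 0.7).
    print(normalized_gain(pre_mean=50.0, post_mean=70.0))  # 0.4
    ```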

  1. Physics of collisionless scrape-off-layer plasma during normal and off-normal Tokamak operating conditions.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hassanein, A.; Konkashbaev, I.

    1999-03-15

    The structure of a collisionless scrape-off-layer (SOL) plasma in tokamak reactors is being studied to define the electron distribution function and the corresponding sheath potential between the divertor plate and the edge plasma. The collisionless model is shown to be valid during the thermal phase of a plasma disruption, as well as during the newly desired low-recycling normal phase of operation with low-density, high-temperature, edge plasma conditions. An analytical solution is developed by solving the Fokker-Planck equation for electron distribution and balance in the SOL. The solution is in good agreement with numerical studies using Monte-Carlo methods. The analytical solutions provide insight into the role of different physical and geometrical processes in a collisionless SOL during disruptions and during the enhanced phase of normal operation over a wide range of parameters.

  2. Iris Segmentation and Normalization Algorithm Based on Zigzag Collarette

    NASA Astrophysics Data System (ADS)

    Rizky Faundra, M.; Ratna Sulistyaningrum, Dwi

    2017-01-01

    In this paper, we propose an iris segmentation and normalization algorithm based on the zigzag collarette. First, iris images are processed with Canny edge detection to detect the pupil edge, and the center and radius of the pupil are then found with the circular Hough transform. Next, the important part of the iris is isolated based on the zigzag collarette area. Finally, the Daugman rubber sheet model is applied to obtain a fixed-dimension (normalized) iris by transforming Cartesian coordinates into polar format, and a thresholding technique is used to remove the eyelid and eyelashes. The experiment was conducted with grayscale eye images taken from the iris database of the Chinese Academy of Sciences Institute of Automation (CASIA), a reliable dataset widely used in iris biometrics research. The results show that a threshold level of 0.3 gives better accuracy than other values, so the present algorithm can be used for zigzag collarette segmentation and normalization with an accuracy of 98.88%.
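
    A minimal OpenCV-style sketch of the pipeline described (Canny-based pupil localization, circular Hough transform, and a Daugman-style rubber-sheet unwrapping of the collarette band). Parameter values and the band width are illustrative assumptions, not the authors' settings:

    ```python
    import cv2
    import numpy as np

    def segment_and_normalize(gray, radial_res=64, angular_res=360, band=40):
        """Locate the pupil and unwrap an annular band around it (rubber sheet)."""
        # Pupil localization: cv2.HoughCircles applies Canny edge detection internally
        # (param1 is the upper Canny threshold) before the circular Hough transform.
        circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                                   param1=150, param2=30, minRadius=20, maxRadius=80)
        if circles is None:
            raise ValueError("no pupil candidate found")
        cx, cy, r_pupil = circles[0, 0]

        # Daugman-style rubber sheet: sample outward from the pupil boundary over a
        # fixed band (roughly the zigzag collarette region) into a polar grid.
        polar = np.zeros((radial_res, angular_res), dtype=gray.dtype)
        for j, theta in enumerate(np.linspace(0, 2 * np.pi, angular_res, endpoint=False)):
            for i, rho in enumerate(np.linspace(0, 1, radial_res)):
                r = r_pupil + rho * band
                x = int(round(cx + r * np.cos(theta)))
                y = int(round(cy + r * np.sin(theta)))
                if 0 <= y < gray.shape[0] and 0 <= x < gray.shape[1]:
                    polar[i, j] = gray[y, x]
        # A simple threshold (e.g., 0.3 of the maximum) can then mask eyelids/lashes.
        mask = polar > 0.3 * polar.max()
        return polar, mask
    ```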

  3. A new EEG synchronization strength analysis method: S-estimator based normalized weighted-permutation mutual information.

    PubMed

    Cui, Dong; Pu, Weiting; Liu, Jing; Bian, Zhijie; Li, Qiuli; Wang, Lei; Gu, Guanghua

    2016-10-01

    Synchronization is an important mechanism for understanding information processing in normal or abnormal brains. In this paper, we propose a new method called normalized weighted-permutation mutual information (NWPMI) for two-variable signal synchronization analysis and combine NWPMI with the S-estimator measure to generate a new method named S-estimator based normalized weighted-permutation mutual information (SNWPMI) for analyzing multi-channel electroencephalographic (EEG) synchronization strength. The performance of the NWPMI, including the effects of time delay, embedding dimension, coupling coefficients, signal-to-noise ratios (SNRs), and data length, is evaluated using a coupled Hénon map model. The results show that the NWPMI is superior in describing the synchronization compared with the normalized permutation mutual information (NPMI). Furthermore, the proposed SNWPMI method is applied to analyze scalp EEG data from 26 amnestic mild cognitive impairment (aMCI) subjects and 20 age-matched controls with normal cognitive function, all of whom suffer from type 2 diabetes mellitus (T2DM). The proposed methods NWPMI and SNWPMI are suggested to be effective indices for estimating synchronization strength. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. [Normal aging of frontal lobe functions].

    PubMed

    Calso, Cristina; Besnard, Jérémy; Allain, Philippe

    2016-03-01

    Normal aging in individuals is often associated with morphological, metabolic and cognitive changes, which particularly concern the cerebral frontal regions. Starting from the "frontal lobe hypothesis of cognitive aging" (West, 1996), the present review is based on the neuroanatomical model developed by Stuss (2008), introducing four categories of frontal lobe functions: executive control, behavioural and emotional self-regulation and decision-making, energization and meta-cognitive functions. The selected studies address changes in at least one of these functions. The results suggest a deterioration of several cognitive frontal abilities in normal aging: flexibility, inhibition, planning, verbal fluency, implicit decision-making, second-order and affective theory of mind. Normal aging also seems to be characterised by a general reduction in processing speed observed during neuropsychological assessment (Salthouse, 1996). Nevertheless, many cognitive functions remain preserved, such as automatic or non-conscious inhibition, specific capacities of flexibility and first-order theory of mind. Therefore, normal aging does not seem to be associated with a global cognitive decline but rather with a selective change in some frontal systems, a conclusion which should be taken into account when designing care programs in normal aging.

  5. Robust analysis of semiparametric renewal process models

    PubMed Central

    Lin, Feng-Chang; Truong, Young K.; Fine, Jason P.

    2013-01-01

    Summary A rate model is proposed for a modulated renewal process comprising a single long sequence, where the covariate process may not capture the dependencies in the sequence as in standard intensity models. We consider partial likelihood-based inferences under a semiparametric multiplicative rate model, which has been widely studied in the context of independent and identical data. Under an intensity model, gap times in a single long sequence may be used naively in the partial likelihood with variance estimation utilizing the observed information matrix. Under a rate model, the gap times cannot be treated as independent and studying the partial likelihood is much more challenging. We employ a mixing condition in the application of limit theory for stationary sequences to obtain consistency and asymptotic normality. The estimator's variance is quite complicated owing to the unknown gap times dependence structure. We adapt block bootstrapping and cluster variance estimators to the partial likelihood. Simulation studies and an analysis of a semiparametric extension of a popular model for neural spike train data demonstrate the practical utility of the rate approach in comparison with the intensity approach. PMID:24550568

  6. Experimental Studies on the Mechanical Behaviour of Rock Joints with Various Openings

    NASA Astrophysics Data System (ADS)

    Li, Y.; Oh, J.; Mitra, R.; Hebblewhite, B.

    2016-03-01

    The mechanical behaviour of rough joints is markedly affected by the degree of joint opening. A systematic experimental study was conducted to investigate the effect of the initial opening on both normal and shear deformations of rock joints. Two types of joints with triangular asperities were produced in the laboratory and subjected to compression tests and direct shear tests with different initial opening values. The results showed that opened rock joints allow much greater normal closure and result in much lower normal stiffness. A semi-logarithmic law incorporating the degree of interlocking is proposed to describe the normal deformation of opened rock joints. The proposed equation agrees well with the experimental results. Additionally, the results of direct shear tests demonstrated that shear strength and dilation are reduced because of reduced involvement of and increased damage to asperities in the process of shearing. The results indicate that constitutive models of rock joints that consider the true asperity contact area can be used to predict shear resistance along opened rock joints. Because rock masses are loosened and rock joints become open after excavation, the model suggested in this study can be incorporated into numerical procedures such as finite-element or discrete-element methods. Use of the model could then increase the accuracy and reliability of stability predictions for rock masses under excavation.

  7. Treatment of childhood traumatic grief.

    PubMed

    Cohen, Judith A; Mannarino, Anthony P

    2004-12-01

    Childhood traumatic grief (CTG) is a condition in which trauma symptoms impinge on children's ability to negotiate the normal grieving process. Clinical characteristics of CTG and their implications for treatment are discussed, and data from a small number of open-treatment studies of traumatically bereaved children are reviewed. An empirically derived treatment model for CTG is described; this model addresses both trauma and grief symptoms and includes a parental treatment component. Future research directions are also addressed.

  8. Accurate Modeling Method for Cu Interconnect

    NASA Astrophysics Data System (ADS)

    Yamada, Kenta; Kitahara, Hiroshi; Asai, Yoshihiko; Sakamoto, Hideo; Okada, Norio; Yasuda, Makoto; Oda, Noriaki; Sakurai, Michio; Hiroi, Masayuki; Takewaki, Toshiyuki; Ohnishi, Sadayuki; Iguchi, Manabu; Minda, Hiroyasu; Suzuki, Mieko

    This paper proposes an accurate modeling method for the copper interconnect cross-section in which the width and thickness dependence on layout patterns and density caused by processes (CMP, etching, sputtering, lithography, and so on) is fully incorporated and universally expressed. In addition, we have developed specific test patterns for model parameter extraction, and an efficient extraction flow. We have extracted the model parameters for 0.15μm CMOS using this method and confirmed that the 10% τpd error normally observed with conventional LPE (Layout Parameter Extraction) was completely eliminated. Moreover, it is verified that the model can be applied to more advanced technologies (90nm, 65nm and 55nm CMOS). Since the interconnect delay variations due to the processes constitute a significant part of what have conventionally been treated as random variations, use of the proposed model could enable one to greatly narrow the guardbands required to guarantee a desired yield, thereby facilitating design closure.

  9. Prediction of the filtrate particle size distribution from the pore size distribution in membrane filtration: Numerical correlations from computer simulations

    NASA Astrophysics Data System (ADS)

    Marrufo-Hernández, Norma Alejandra; Hernández-Guerrero, Maribel; Nápoles-Duarte, José Manuel; Palomares-Báez, Juan Pedro; Chávez-Rojo, Marco Antonio

    2018-03-01

    We present a computational model that describes the diffusion of a hard spheres colloidal fluid through a membrane. The membrane matrix is modeled as a series of flat parallel planes with circular pores of different sizes and random spatial distribution. This model was employed to determine how the size distribution of the colloidal filtrate depends on the size distributions of both, the particles in the feed and the pores of the membrane, as well as to describe the filtration kinetics. A Brownian dynamics simulation study considering normal distributions was developed in order to determine empirical correlations between the parameters that characterize these distributions. The model can also be extended to other distributions such as log-normal. This study could, therefore, facilitate the selection of membranes for industrial or scientific filtration processes once the size distribution of the feed is known and the expected characteristics in the filtrate have been defined.
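
    A much simplified Monte Carlo sketch of the geometric idea (a particle passes a membrane plane only if it meets a pore larger than itself). The normal-distribution parameters are hypothetical and the Brownian dynamics of the actual model is not reproduced:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def filtrate_sizes(n_particles=100_000, n_planes=5,
                       feed_mu=0.5, feed_sigma=0.15,   # particle diameter (um), hypothetical
                       pore_mu=0.6, pore_sigma=0.10):  # pore diameter (um), hypothetical
        """Return diameters of feed particles that pass all membrane planes."""
        particles = rng.normal(feed_mu, feed_sigma, n_particles)
        passed = particles[particles > 0]
        for _ in range(n_planes):
            # Each particle encounters an independently drawn pore on every plane.
            pores = rng.normal(pore_mu, pore_sigma, passed.size)
            passed = passed[passed < pores]
        return passed

    out = filtrate_sizes()
    print(f"mean feed size 0.50 um -> mean filtrate size {out.mean():.2f} um")
    ```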

  10. Pre-Flight Radiometric Model of Linear Imager on LAPAN-IPB Satellite

    NASA Astrophysics Data System (ADS)

    Hadi Syafrudin, A.; Salaswati, Sartika; Hasbi, Wahyudi

    2018-05-01

    The LAPAN-IPB satellite is a microsatellite-class spacecraft with a remote sensing experiment mission. The satellite carries a multispectral line imager to capture radiometric reflectance values of the Earth from space. The radiometric quality of the images is an important factor for object classification in remote sensing. Before the satellite's launch into orbit (pre-flight), the line imager was tested with a monochromator and an integrating sphere to obtain its spectral response and the radiometric response characteristics of every pixel. Pre-flight test data acquired under a variety of line imager instrument settings were used to examine the correlation between the input radiance and the digital number of the image output. This input-output correlation is described by a radiance conversion model incorporating the imager settings and radiometric characteristics. The modeling process, from the hardware level to the normalized radiance formula, is presented and discussed in this paper.

  11. Leaf optical system modeled as a stochastic process. [solar radiation interaction with terrestrial vegetation

    NASA Technical Reports Server (NTRS)

    Tucker, C. J.; Garratt, M. W.

    1977-01-01

    A stochastic leaf radiation model based upon physical and physiological properties of dicot leaves has been developed. The model accurately predicts the absorbed, reflected, and transmitted radiation of normal incidence as a function of wavelength resulting from the leaf-irradiance interaction over the spectral interval of 0.40-2.50 micron. The leaf optical system has been represented as a Markov process with a unique transition matrix at each 0.01-micron increment between 0.40 micron and 2.50 micron. Probabilities are calculated at every wavelength interval from leaf thickness, structure, pigment composition, and water content. Simulation results indicate that this approach gives accurate estimations of actual measured values for dicot leaf absorption, reflection, and transmission as a function of wavelength.
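
    A toy illustration of the Markov-chain formulation at a single wavelength: the photon moves between internal leaf states until it is captured by one of the absorbing states (reflected, absorbed, transmitted). The states and transition probabilities below are hypothetical, not the published per-wavelength values.

    ```python
    import numpy as np

    # States: 0 = photon in upper leaf layer, 1 = photon in lower leaf layer,
    #         2 = reflected, 3 = absorbed, 4 = transmitted (2-4 are absorbing).
    P = np.array([
        [0.00, 0.55, 0.25, 0.10, 0.10],
        [0.30, 0.00, 0.05, 0.15, 0.50],
        [0.00, 0.00, 1.00, 0.00, 0.00],
        [0.00, 0.00, 0.00, 1.00, 0.00],
        [0.00, 0.00, 0.00, 0.00, 1.00],
    ])

    # A normally incident photon starts in the upper layer; iterate the chain until
    # essentially all probability mass sits in the absorbing states.
    state = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
    for _ in range(200):
        state = state @ P
    print(dict(zip(["reflected", "absorbed", "transmitted"], np.round(state[2:], 3))))
    ```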

  12. DAMS: A Model to Assess Domino Effects by Using Agent-Based Modeling and Simulation.

    PubMed

    Zhang, Laobing; Landucci, Gabriele; Reniers, Genserik; Khakzad, Nima; Zhou, Jianfeng

    2017-12-19

    Historical data analysis shows that escalation accidents, so-called domino effects, have an important role in disastrous accidents in the chemical and process industries. In this study, an agent-based modeling and simulation approach is proposed to study the propagation of domino effects in the chemical and process industries. Different from the analytical or Monte Carlo simulation approaches, which normally study the domino effect at probabilistic network levels, the agent-based modeling technique explains the domino effects from a bottom-up perspective. In this approach, the installations involved in a domino effect are modeled as agents whereas the interactions among the installations (e.g., by means of heat radiation) are modeled via the basic rules of the agents. Application of the developed model to several case studies demonstrates the ability of the model not only in modeling higher-level domino effects and synergistic effects but also in accounting for temporal dependencies. The model can readily be applied to large-scale complicated cases. © 2017 Society for Risk Analysis.
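
    A minimal agent-based sketch of the bottom-up idea: each installation is an agent with a failure state, and failed (burning) agents expose their neighbours to heat radiation that may trigger escalation. The radiation law, thresholds, and layout are hypothetical placeholders, not the paper's physical models:

    ```python
    import random

    class Tank:
        def __init__(self, name, x, y, threshold_kw_m2=15.0):
            self.name, self.x, self.y = name, x, y
            self.threshold = threshold_kw_m2   # damage threshold (hypothetical)
            self.failed = False

        def received_flux(self, source, source_kw_m2=250.0):
            """Crude inverse-square decay of heat radiation from a burning tank."""
            d2 = (self.x - source.x) ** 2 + (self.y - source.y) ** 2
            return source_kw_m2 / max(d2, 1.0)

    def simulate(tanks, primary, steps=10):
        primary.failed = True                  # primary accident
        for _ in range(steps):
            for t in tanks:
                if t.failed:
                    continue
                flux = sum(t.received_flux(s) for s in tanks if s.failed)
                # Probabilistic escalation once the damage threshold is exceeded.
                if flux > t.threshold and random.random() < 0.5:
                    t.failed = True
        return [t.name for t in tanks if t.failed]

    tanks = [Tank("T1", 0, 0), Tank("T2", 3, 0), Tank("T3", 6, 0), Tank("T4", 20, 0)]
    print(simulate(tanks, primary=tanks[0]))   # higher-level effects reach T2/T3 but not T4
    ```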

  13. Using archetypes for defining CDA templates.

    PubMed

    Moner, David; Moreno, Alberto; Maldonado, José A; Robles, Montserrat; Parra, Carlos

    2012-01-01

    While HL7 CDA is a widely adopted standard for the documentation of clinical information, the archetype approach proposed by CEN/ISO 13606 and openEHR is gaining recognition as a means of describing domain models and medical knowledge. This paper describes our efforts in combining both standards. Using archetypes as an alternative way of defining CDA templates permits new possibilities, all based on the formal nature of archetypes and their ability to merge into the same artifact both medical knowledge and the technical requirements for semantic interoperability of electronic health records. We describe the process followed for the normalization of existing legacy data in a hospital environment, from the importation of the HL7 CDA model into an archetype editor, through the definition of CDA archetypes, to the application of those archetypes to obtain normalized CDA data instances.

  14. On dynamic tumor eradication conditions under combined chemical/anti-angiogenic therapies

    NASA Astrophysics Data System (ADS)

    Starkov, Konstantin E.

    2018-02-01

    In this paper, the ultimate dynamics of a five-dimensional cancer tumor growth model at the angiogenesis phase is studied. This model, elaborated by Pinho et al. in 2014, describes interactions between normal/cancer/endothelial cells under chemotherapy/anti-angiogenic agents in the tumor growth process. The author derives ultimate upper bounds for normal/tumor/endothelial cell concentrations and ultimate upper and lower bounds for chemotherapy/anti-angiogenic agent concentrations. Global asymptotic tumor clearance conditions are obtained for two settings: the use of chemotherapy alone and the combined application of chemotherapy and anti-angiogenic therapy. These conditions are established as attraction conditions to the maximal invariant set in the tumor-free plane, and furthermore, the case is examined in which this set consists only of tumor-free equilibrium points.

  15. Interactive learning in 2×2 normal form games by neural network agents

    NASA Astrophysics Data System (ADS)

    Spiliopoulos, Leonidas

    2012-11-01

    This paper models the learning process of populations of randomly rematched tabula rasa neural network (NN) agents playing randomly generated 2×2 normal form games of all strategic classes. This approach has greater external validity than the existing models in the literature, each of which is usually applicable to narrow subsets of classes of games (often a single game) and/or to fixed matching protocols. The learning prowess of NNs with hidden layers was impressive as they learned to play unique pure strategy equilibria with near certainty, adhered to principles of dominance and iterated dominance, and exhibited a preference for risk-dominant equilibria. In contrast, perceptron NNs were found to perform significantly worse than hidden layer NN agents and human subjects in experimental studies.

  16. Measurement of the Michel parameter {rho} in normal muon decay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tu, X.; Amann, J.F.; Bolton, R.D.

    1995-07-10

    A new measurement of the Michel parameter ρ in normal muon decay has been performed using the MEGA positron spectrometer. Over 500 million triggers were recorded and the data are currently being analyzed. The previous result has a precision on the value of ρ of ±0.0026. The present experiment expects to improve the precision to ±0.0008 or better. The improved result will be a precise test of the standard model of electroweak interactions for a purely leptonic process. It also will provide a better constraint on the W_R−W_L mixing angle in the left-right symmetric models. © 1995 American Institute of Physics.

  17. Non-normal perturbation growth in idealised island and headland wakes

    NASA Astrophysics Data System (ADS)

    Aiken, C. M.; Moore, A. M.; Middleton, J. H.

    2003-12-01

    Generalised linear stability theory is used to calculate the linear perturbations that furnish most rapid growth in energy in a model of a steady recirculating island wake. This optimal perturbation is found to be antisymmetric and to evolve into a von Kármán vortex street. Eigenanalysis of the linearised system reveals that the eigenmodes corresponding to vortex sheet formation are damped, so the growth of the perturbation is understood through the non-normality of the linearised system. Qualitatively similar perturbation growth is shown to occur in a non-linear model of stochastically-forced subcritical flow, resulting in transition to an unsteady wake. Free-stream variability with amplitude 8% of the mean inflow speed sustains vortex street structures in the non-linear model with perturbation velocities of the order of the inflow speed, suggesting that environmental stochastic forcing may similarly be capable of exciting growing disturbances in real island wakes. To support this, qualitatively similar perturbation growth is demonstrated in the straining wake of a realistic island obstacle. It is shown that for the case of an idealised headland, where the vortex street eigenmodes are lacking, vortex sheets are produced through a similar non-normal process.
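
    In generalized stability theory, the optimal perturbation at lead time t is the leading right singular vector of the propagator exp(At), and for a non-normal operator A large transient growth can occur even though every eigenvalue is damped. A self-contained numerical sketch (the 2x2 operator is an arbitrary illustration, not the wake model):

    ```python
    import numpy as np
    from scipy.linalg import expm, svd

    # A non-normal, asymptotically stable operator (both eigenvalues negative).
    A = np.array([[-0.1, 5.0],
                  [ 0.0, -0.2]])

    t = 5.0
    Phi = expm(A * t)                  # propagator over lead time t
    U, s, Vt = svd(Phi)

    print("eigenvalues of A:", np.linalg.eigvals(A))   # all damped
    print("max energy growth at t=5:", s[0] ** 2)      # >> 1 despite the damping
    print("optimal initial perturbation:", Vt[0])      # leading right singular vector
    ```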

  18. Numerical schemes for anomalous diffusion of single-phase fluids in porous media

    NASA Astrophysics Data System (ADS)

    Awotunde, Abeeb A.; Ghanam, Ryad A.; Al-Homidan, Suliman S.; Tatar, Nasser-eddine

    2016-10-01

    Simulation of fluid flow in porous media is an indispensable part of oil and gas reservoir management. Accurate prediction of reservoir performance and profitability of investment rely on our ability to model the flow behavior of reservoir fluids. Over the years, numerical reservoir simulation models have been based mainly on solutions to the normal diffusion of fluids in the porous reservoir. Recently, however, it has been documented that fluid flow in porous media does not always follow strictly the normal diffusion process. Small deviations from normal diffusion, called anomalous diffusion, have been reported in some experimental studies. Such deviations can be caused by different factors such as the viscous state of the fluid, the fractal nature of the porous media and the pressure pulse in the system. In this work, we present explicit and implicit numerical solutions to the anomalous diffusion of single-phase fluids in heterogeneous reservoirs. An analytical solution is used to validate the numerical solution to the simple homogeneous case. The conventional wellbore flow model is modified to account for anomalous behavior. Example applications are used to show the behavior of wellbore and wellblock pressures during the single-phase anomalous flow of fluids in the reservoirs considered.

  19. Bayesian spatiotemporal analysis of zero-inflated biological population density data by a delta-normal spatiotemporal additive model.

    PubMed

    Arcuti, Simona; Pollice, Alessio; Ribecco, Nunziata; D'Onghia, Gianfranco

    2016-03-01

    We evaluate the spatiotemporal changes in the density of a particular species of crustacean known as deep-water rose shrimp, Parapenaeus longirostris, based on biological sample data collected during trawl surveys carried out from 1995 to 2006 as part of the international project MEDITS (MEDiterranean International Trawl Surveys). As is the case for many biological variables, density data are continuous and characterized by unusually large amounts of zeros, accompanied by a skewed distribution of the remaining values. Here we analyze the normalized density data by a Bayesian delta-normal semiparametric additive model including the effects of covariates, using penalized regression with low-rank thin-plate splines for nonlinear spatial and temporal effects. Modeling the zero and nonzero values by two joint processes, as we propose in this work, allows to obtain great flexibility and easily handling of complex likelihood functions, avoiding inaccurate statistical inferences due to misclassification of the high proportion of exact zeros in the model. Bayesian model estimation is obtained by Markov chain Monte Carlo simulations, suitably specifying the complex likelihood function of the zero-inflated density data. The study highlights relevant nonlinear spatial and temporal effects and the influence of the annual Mediterranean oscillations index and of the sea surface temperature on the distribution of the deep-water rose shrimp density. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Cartilaginous development of the human craniovertebral junction as visualised by a new three-dimensional computer reconstruction technique.

    PubMed

    David, K M; McLachlan, J C; Aiton, J F; Whiten, S C; Smart, S D; Thorogood, P V; Crockard, H A

    1998-02-01

    Serial transverse histological sections of the human craniovertebral junction (CVJ) of 4 normal human embryos (aged 45 to 58 d) and of a fetus (77 d) were used to create 3-dimensional computer models of the CVJ. The main components modelled included the chondrified basioccipital, atlas and axis, notochord, the vertebrobasilar complex and the spinal cord. Chondrification of the component parts of CVJ had already begun at 45 d (Stage 18). The odontoid process appeared to develop from a short eminence of the axis forming a third occipital condyle with the caudal end of the basioccipital. The cartilaginous anterior arch of C1 appeared at 50-53 d (Stages 20-21). Neural arches of C1 and C2 showed gradual closure, but there was still a wide posterior spina bifida in the oldest reconstructed specimen (77 d fetus). The position of the notochord was constant throughout. The normal course of the vertebral arteries was already established and the chondrified vertebral foramina showed progressive closure. The findings confirm that the odontoid process is not derived solely from the centrum of C1 and that there is a 'natural basilar invagination' of C2 during normal embryonic development. On the basis of the observed shape and developmental pattern of structures of the cartilaginous human CVJ, we suggest that certain pathologies are likely to originate during the chondrification phase of development.

  1. Are your covariates under control? How normalization can re-introduce covariate effects.

    PubMed

    Pain, Oliver; Dudbridge, Frank; Ronald, Angelica

    2018-04-30

    Many statistical tests rely on the assumption that the residuals of a model are normally distributed. Rank-based inverse normal transformation (INT) of the dependent variable is one of the most popular approaches to satisfy the normality assumption. When covariates are included in the analysis, a common approach is to first adjust for the covariates and then normalize the residuals. This study investigated the effect of regressing covariates against the dependent variable and then applying rank-based INT to the residuals. The correlation between the dependent variable and covariates at each stage of processing was assessed. An alternative approach was tested in which rank-based INT was applied to the dependent variable before regressing covariates. Analyses based on both simulated and real data examples demonstrated that applying rank-based INT to the dependent variable residuals after regressing out covariates re-introduces a linear correlation between the dependent variable and covariates, increasing type-I errors and reducing power. On the other hand, when rank-based INT was applied prior to controlling for covariate effects, residuals were normally distributed and linearly uncorrelated with covariates. This latter approach is therefore recommended in situations where normality of the dependent variable is required.
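
    The two processing orders can be contrasted with a small simulation. The example below is a deliberately skewed toy construction (a binary covariate whose groups have equal-mean but differently shaped errors), not the simulation design of the paper; it only illustrates how INT applied to covariate-adjusted residuals can re-introduce a covariate correlation, while INT applied before adjustment does not:

    ```python
    import numpy as np
    from scipy.stats import rankdata, norm, pearsonr

    rng = np.random.default_rng(7)
    n = 20_000
    x = rng.integers(0, 2, n).astype(float)            # binary covariate

    # Mean-zero errors in both groups, but heavily right-skewed in group 1.
    err0 = rng.normal(0.0, 1.0, n)
    err1 = np.where(rng.random(n) < 0.9, rng.normal(-1.0, 0.3, n), rng.normal(9.0, 1.0, n))
    y = 2.0 * x + np.where(x == 0, err0, err1)

    def rank_int(v, c=0.375):
        """Rank-based inverse normal transformation (Blom offset)."""
        return norm.ppf((rankdata(v) - c) / (len(v) - 2 * c + 1))

    # Criticised order: adjust for the covariate first, then INT the residuals.
    resid = y - np.polyval(np.polyfit(x, y, 1), x)
    print("x vs raw residuals:  r = %+.3f" % pearsonr(x, resid)[0])            # ~0 by construction
    print("x vs INT(residuals): r = %+.3f" % pearsonr(x, rank_int(resid))[0])  # correlation re-introduced

    # Recommended order: INT the dependent variable first, then adjust for x.
    resid2 = rank_int(y) - np.polyval(np.polyfit(x, rank_int(y), 1), x)
    print("x vs residuals of the INT-first model: r = %+.3f" % pearsonr(x, resid2)[0])  # ~0
    ```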

  2. Cognitive aging on latent constructs for visual processing capacity: a novel structural equation modeling framework with causal assumptions based on a theory of visual attention.

    PubMed

    Nielsen, Simon; Wilms, L Inge

    2014-01-01

    We examined the effects of normal aging on visual cognition in a sample of 112 healthy adults aged 60-75. A test battery was designed to capture high-level measures of visual working memory and low-level measures of visuospatial attention and memory. To answer questions of how cognitive aging affects specific aspects of visual processing capacity, we used confirmatory factor analyses in Structural Equation Modeling (SEM; Model 2), informed by functional structures that were modeled with path analyses in SEM (Model 1). The results show that aging effects were selective to measures of visual processing speed compared to visual short-term memory (VSTM) capacity (Model 2). These results are consistent with some studies reporting selective aging effects on processing speed, and inconsistent with other studies reporting aging effects on both processing speed and VSTM capacity. In the discussion we argue that this discrepancy may be mediated by differences in age ranges and demographic variables. The study demonstrates that SEM is a sensitive method to detect cognitive aging effects even within a narrow age range, and a useful approach for structuring the relationships between measured variables and the cognitive functional foundations they supposedly represent.

  3. Midlife Divorce and Archetypes for Women.

    ERIC Educational Resources Information Center

    Bobo, Terry Skinner

    Midlife divorce for women can be a time for creative growth or divorce can lead to loneliness, bitterness, and depression. Middle-aged women appear to experience an inordinate amount of stress from divorce because of loss of roles and lack of new role models. Based upon role theory and divorce as a normal developmental process, a feminist…

  4. Research Review: Cholinergic Mechanisms, Early Brain Development, and Risk for Schizophrenia

    ERIC Educational Resources Information Center

    Ross, Randal G.; Stevens, Karen E.; Proctor, William R.; Leonard, Sherry; Kisley, Michael A.; Hunter, Sharon K.; Freedman, Robert; Adams, Catherine E.

    2010-01-01

    The onset of diagnostic symptomology for neuropsychiatric diseases is often the end result of a decades-long process of aberrant brain development. Identification of novel treatment strategies aimed at normalizing early brain development and preventing mental illness should be a major therapeutic goal. However, there are few models for how this…

  5. Doing Without Schema Hierarchies: A Recurrent Connectionist Approach to Normal and Impaired Routine Sequential Action

    ERIC Educational Resources Information Center

    Botvinick, Matthew; Plaut, David C.

    2004-01-01

    In everyday tasks, selecting actions in the proper sequence requires a continuously updated representation of temporal context. Previous models have addressed this problem by positing a hierarchy of processing units, mirroring the roughly hierarchical structure of naturalistic tasks themselves. The present study considers an alternative framework,…

  6. Interpersonal Relatedness and Self-Definition in Normal and Disrupted Personality Development: Retrospect and Prospect

    ERIC Educational Resources Information Center

    Luyten, Patrick; Blatt, Sidney J.

    2013-01-01

    Two-polarities models of personality propose that personality development evolves through a dialectic synergistic interaction between two fundamental developmental psychological processes across the life span--the development of interpersonal relatedness on the one hand and of self-definition on the other. This article offers a broad review of…

  7. A Continuum-Atomistic Analysis of Transgranular Crack Propagation in Aluminum

    NASA Technical Reports Server (NTRS)

    Yamakov, V.; Saether, E.; Glaessgen, E.

    2009-01-01

    A concurrent multiscale modeling methodology that embeds a molecular dynamics (MD) region within a finite element (FEM) domain is used to study plastic processes at a crack tip in a single crystal of aluminum. The case of mode I loading is studied. A transition from deformation twinning to full dislocation emission from the crack tip is found when the crack plane is rotated around the [111] crystallographic axis. When the crack plane normal coincides with the [112] twinning direction, the crack propagates through a twinning mechanism. When the crack plane normal coincides with the [011] slip direction, the crack propagates through the emission of full dislocations. In intermediate orientations, a transition from full dislocation emission to twinning is found to occur with an increase in the stress intensity at the crack tip. This finding confirms the suggestion that the very high strain rates, inherently present in MD simulations, which produce higher stress intensities at the crack tip, over-predict the tendency for deformation twinning compared to experiments. The present study, therefore, aims to develop a more realistic and accurate predictive modeling of fracture processes.

  8. Size Dependence of Residual Thermal Stresses in Micro Multilayer Ceramic Capacitors by Using Finite Element Unit Cell Model Including Strain Gradient Effect

    NASA Astrophysics Data System (ADS)

    Jiang, W. G.; Xiong, C. A.; Wu, X. G.

    2013-11-01

    The residual thermal stresses induced by the high-temperature sintering process in multilayer ceramic capacitors (MLCCs) are investigated by using a finite-element unit cell model, in which the strain gradient effect is considered. The numerical results show that the residual thermal stresses depend on the lateral margin length, the thickness ratio of the dielectric layer to the electrode layer, and the MLCC size. At a given thickness ratio, as the MLCC size is scaled down, the peak shear stress reduces significantly and the normal stresses along the length and thickness directions change slightly with the decrease in the ceramic layer thickness t_d as t_d > 1 μm, but as t_d < 1 μm, the normal stress components increase sharply with the increase in t_d. Thus, the residual thermal stresses induced by the sintering process exhibit strong size effects and, therefore, the strain gradient effect should be taken into account in the design and evaluation of MLCC devices.

  9. Patterns of glaucomatous visual field loss in sita fields automatically identified using independent component analysis.

    PubMed

    Goldbaum, Michael H; Jang, Gil-Jin; Bowd, Chris; Hao, Jiucang; Zangwill, Linda M; Liebmann, Jeffrey; Girkin, Christopher; Jung, Tzyy-Ping; Weinreb, Robert N; Sample, Pamela A

    2009-12-01

    To determine if the patterns uncovered with variational Bayesian-independent component analysis-mixture model (VIM) applied to a large set of normal and glaucomatous fields obtained with the Swedish Interactive Thresholding Algorithm (SITA) are distinct, recognizable, and useful for modeling the severity of the field loss. SITA fields were obtained with the Humphrey Visual Field Analyzer (Carl Zeiss Meditec, Inc, Dublin, California) on 1,146 normal eyes and 939 glaucoma eyes from subjects followed by the Diagnostic Innovations in Glaucoma Study and the African Descent and Glaucoma Evaluation Study. VIM modifies independent component analysis (ICA) to develop separate sets of ICA axes in the cluster of normal fields and the 2 clusters of abnormal fields. Of 360 models, the model with the best separation of normal and glaucomatous fields was chosen for creating the maximally independent axes. Grayscale displays of fields generated by VIM on each axis were compared. SITA fields most closely associated with each axis and displayed in grayscale were evaluated for consistency of pattern at all severities. The best VIM model had 3 clusters. Cluster 1 (1,193) was mostly normal (1,089, 95% specificity) and had 2 axes. Cluster 2 (596) contained mildly abnormal fields (513) and 2 axes; cluster 3 (323) held mostly moderately to severely abnormal fields (322) and 5 axes. Sensitivity for clusters 2 and 3 combined was 88.9%. The VIM-generated field patterns differed from each other and resembled glaucomatous defects (eg, nasal step, arcuate, temporal wedge). SITA fields assigned to an axis resembled each other and the VIM-generated patterns for that axis. Pattern severity increased in the positive direction of each axis by expansion or deepening of the axis pattern. VIM worked well on SITA fields, separating them into distinctly different yet recognizable patterns of glaucomatous field defects. The axis and pattern properties make VIM a good candidate as a preliminary process for detecting progression.

  10. [Construction of platform on the three-dimensional finite element model of the dentulous mandibular body of a normal person].

    PubMed

    Gong, Lu-Lu; Zhu, Jing; Ding, Zu-Quan; Li, Guo-Qiang; Wang, Li-Ming; Yan, Bo-Yong

    2008-04-01

    To develop a method to construct a three-dimensional finite element model of the dentulous mandibular body of a normal person. A series of pictures with the interval of 0.1 mm were taken by CT scanning. After extracting the coordinates of key points of some pictures by the procedure, we used a C program to process the useful data, and constructed a platform of the three-dimensional finite element model of the dentulous mandibular body with the Ansys software for finite element analysis. The experimental results showed that the platform of the three-dimensional finite element model of the dentulous mandibular body was more accurate and applicable. The exact three-dimensional shape of model was well constructed, and each part of this model, such as one single tooth, can be deleted, which can be used to emulate various tooth-loss clinical cases. The three-dimensional finite element model is constructed with life-like shapes of dental cusps. Each part of this model can be easily removed. In conclusion, this experiment provides a good platform of biomechanical analysis on various tooth-loss clinical cases.

  11. Advances in heat conduction models and approaches for the prediction of lattice thermal conductivity of dielectric materials

    NASA Astrophysics Data System (ADS)

    Saikia, Banashree

    2017-03-01

    An overview of predominant theoretical models used for predicting the thermal conductivities of dielectric materials is given. The criteria used for different theoretical models are explained. This overview highlights a unified theory based on temperature-dependent thermal-conductivity theories, and a drifting of the equilibrium phonon distribution function due to normal three-phonon scattering processes causes transfer of phonon momentum to (a) the same phonon modes (KK-S model) and (b) across the phonon modes (KK-H model). Estimates of the lattice thermal conductivities of LiF and Mg2Sn for the KK-H model are presented graphically.

  12. Multistage degradation modeling for BLDC motor based on Wiener process

    NASA Astrophysics Data System (ADS)

    Yuan, Qingyang; Li, Xiaogang; Gao, Yuankai

    2018-05-01

    Brushless DC motors are widely used, and their working temperatures, regarded as degradation processes, are nonlinear and multistage. It is therefore necessary to establish a nonlinear degradation model. This research was based on accelerated degradation data of motors, namely their working temperatures. A multistage Wiener model was established by using a transition function to modify the linear model. A normal weighted average filter (Gauss filter) was used to improve the estimation of the model parameters. Then, to maximize the likelihood function for parameter estimation, we used a numerical optimization method, the simplex method, in an iterative calculation. The modeling results show that the degradation mechanism changes during the degradation of the high-speed motor. The effectiveness and rationality of the model are verified by comparing its life distribution with that of the widely used nonlinear Wiener model, as well as by comparing Q-Q plots of the residuals. Finally, predictions of motor life are obtained from the life distributions at different times calculated by the multistage model.
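
    A minimal simulation sketch of the multistage idea: a Wiener degradation path whose drift changes at a change point, with failure defined by first passage of a threshold. Parameter values are hypothetical, and the paper's transition-function smoothing, Gauss filtering, and maximum-likelihood fitting are not reproduced:

    ```python
    import numpy as np

    def multistage_wiener(t, drift1=0.05, drift2=0.20, tau=50.0, sigma=0.3, rng=None):
        """Simulate a degradation path X(t) whose Wiener drift changes at time tau."""
        rng = rng or np.random.default_rng()
        dt = np.diff(t, prepend=t[0])
        drift = np.where(t < tau, drift1, drift2)      # piecewise-constant (two-stage) drift
        increments = drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(len(t))
        return np.cumsum(increments)

    t = np.linspace(0.0, 100.0, 1001)
    path = multistage_wiener(t, rng=np.random.default_rng(5))
    threshold = 8.0                                    # hypothetical failure threshold
    crossed = path >= threshold
    print("first passage time:", t[np.argmax(crossed)] if crossed.any() else "not reached")
    ```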

  13. Precise Synaptic Efficacy Alignment Suggests Potentiation Dominated Learning.

    PubMed

    Hartmann, Christoph; Miner, Daniel C; Triesch, Jochen

    2015-01-01

    Recent evidence suggests that parallel synapses from the same axonal branch onto the same dendritic branch have almost identical strength. It has been proposed that this alignment is only possible through learning rules that integrate activity over long time spans. However, learning mechanisms such as spike-timing-dependent plasticity (STDP) are commonly assumed to be temporally local. Here, we propose that the combination of temporally local STDP and a multiplicative synaptic normalization mechanism is sufficient to explain the alignment of parallel synapses. To address this issue, we introduce three increasingly complex models: First, we model the idealized interaction of STDP and synaptic normalization in a single neuron as a simple stochastic process and derive analytically that the alignment effect can be described by a so-called Kesten process. From this we can derive that synaptic efficacy alignment requires potentiation-dominated learning regimes. We verify these conditions in a single-neuron model with independent spiking activities but more realistic synapses. As expected, we only observe synaptic efficacy alignment for long-term potentiation-biased STDP. Finally, we explore how well the findings transfer to recurrent neural networks where the learning mechanisms interact with the correlated activity of the network. We find that due to the self-reinforcing correlations in recurrent circuits under STDP, alignment occurs for both long-term potentiation- and depression-biased STDP, because the learning will be potentiation dominated in both cases due to the potentiating events induced by correlated activity. This is in line with recent results demonstrating a dominance of potentiation over depression during waking and normalization during sleep. This leads us to predict that individual spine pairs will be more similar after sleep compared to after sleep deprivation. In conclusion, we show that synaptic normalization in conjunction with coordinated potentiation--in this case, from STDP in the presence of correlated pre- and post-synaptic activity--naturally leads to an alignment of parallel synapses.
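
    The Kesten-process reduction can be illustrated with a toy simulation of parallel synapses: both members of a pair receive the same additive (potentiation-dominated) update, and a multiplicative normalization rescales all weights to keep the total constant, so pairwise differences shrink geometrically. All constants are hypothetical and the full spiking network of the paper is not modeled:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_conn, steps = 100, 20_000
    w = rng.uniform(0.1, 1.0, (n_conn, 2))     # two parallel synapses per connection
    target_total = w.sum()
    initial_diff = np.abs(w[:, 0] - w[:, 1]).mean()

    for _ in range(steps):
        # Shared pre/post activity: parallel synapses of a connection get the same
        # additive, potentiation-dominated update (depression clipped to be weak).
        dw = np.maximum(rng.normal(0.02, 0.05, (n_conn, 1)), -0.01)
        w = np.maximum(w + dw, 0.0)
        # Multiplicative synaptic normalization keeps the summed weight constant;
        # together with the additive drive this gives Kesten-like dynamics w' = c(w + d).
        w *= target_total / w.sum()

    final_diff = np.abs(w[:, 0] - w[:, 1]).mean()
    print(f"mean |w1 - w2| across pairs: {initial_diff:.3f} -> {final_diff:.6f}")
    ```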

  14. Performance characteristics of a perforated shadow band under clear sky conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brooks, Michael J.

    2010-12-15

    A perforated, non-rotating shadow band is described for separating global solar irradiance into its diffuse and direct normal components using a single pyranometer. Whereas shadow bands are normally solid so as to occult the sensor of a pyranometer throughout the day, the proposed band has apertures cut from its circumference to intermittently expose the instrument sensor at preset intervals. Under clear sky conditions the device produces a saw tooth waveform of irradiance data from which it is possible to reconstruct separate global and diffuse curves. The direct normal irradiance may then be calculated giving a complete breakdown of the irradiance curves without need of a second instrument or rotating shadow band. This paper describes the principle of operation of the band and gives a mathematical model of its shading mask based on the results of an optical ray tracing study. An algorithm for processing the data from the perforated band system is described and evaluated. In an extended trial conducted at NREL's Solar Radiation Research Laboratory, the band coupled with a thermally corrected Eppley PSP produced independent curves for diffuse, global and direct normal irradiance with low mean bias errors of 5.6 W/m², 0.3 W/m² and -2.6 W/m² respectively, relative to collocated reference instruments. Random uncertainties were 9.7 W/m² (diffuse), 17.3 W/m² (global) and 19.0 W/m² (direct). When the data processing algorithm was modified to include the ray trace model of sensor exposure, uncertainties increased only marginally, confirming the effectiveness of the model. Deployment of the perforated band system can potentially increase the accuracy of data from ground stations in predominantly sunny areas where instrumentation is limited to a single pyranometer.
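
    Once the saw-tooth record has been reconstructed into global and diffuse curves, the direct normal component follows from the standard closure relation GHI = DHI + DNI * cos(theta_z). A minimal sketch of that final step, with made-up clear-sky values:

    ```python
    import math

    def direct_normal_irradiance(ghi, dhi, zenith_deg):
        """DNI (W/m^2) from the closure relation GHI = DHI + DNI * cos(zenith)."""
        cos_z = math.cos(math.radians(zenith_deg))
        if cos_z <= 0.05:           # guard against near-horizon blow-up
            return float("nan")
        return (ghi - dhi) / cos_z

    # Example: hypothetical clear-sky values (W/m^2) at a 40 degree solar zenith angle.
    print(round(direct_normal_irradiance(ghi=850.0, dhi=120.0, zenith_deg=40.0), 1))
    ```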

  15. Improvement of ALT decay kinetics by all-oral HCV treatment: Role of NS5A inhibitors and differences with IFN-based regimens

    PubMed Central

    Cento, Valeria; Nguyen, Thi Huyen Tram; Di Carlo, Domenico; Biliotti, Elisa; Gianserra, Laura; Lenci, Ilaria; Di Paolo, Daniele; Calvaruso, Vincenza; Teti, Elisabetta; Cerrone, Maddalena; Romagnoli, Dante; Melis, Michela; Danieli, Elena; Menzaghi, Barbara; Polilli, Ennio; Siciliano, Massimo; Nicolini, Laura Ambra; Di Biagio, Antonio; Magni, Carlo Federico; Bolis, Matteo; Antonucci, Francesco Paolo; Di Maio, Velia Chiara; Alfieri, Roberta; Sarmati, Loredana; Casalino, Paolo; Bernardini, Sergio; Micheli, Valeria; Rizzardini, Giuliano; Parruti, Giustino; Quirino, Tiziana; Puoti, Massimo; Babudieri, Sergio; D’Arminio Monforte, Antonella; Andreoni, Massimo; Craxì, Antonio; Angelico, Mario; Pasquazzi, Caterina; Taliani, Gloria; Guedj, Jeremie; Ceccherini-Silberstein, Francesca

    2017-01-01

    Background: Intracellular HCV-RNA reduction is a proposed mechanism of action of direct-acting antivirals (DAAs), alternative to hepatocyte elimination by pegylated-interferon plus ribavirin (PR). We modeled ALT and HCV-RNA kinetics in cirrhotic patients treated with currently-used all-DAA combinations to evaluate their mode of action and cytotoxicity compared with telaprevir (TVR)+PR. Study design: Mathematical modeling of ALT and HCV-RNA kinetics was performed in 111 HCV-1 cirrhotic patients, 81 treated with all-DAA regimens and 30 with TVR+PR. Kinetic models and Cox analysis were used to assess determinants of ALT decay and normalization. Results: HCV-RNA kinetics was biphasic, reflecting a mean effectiveness in blocking viral production >99.8%. The first phase of viral decline was faster in patients receiving NS5A inhibitors compared to TVR+PR or sofosbuvir+simeprevir (p<0.001), reflecting higher efficacy in blocking assembly/secretion. The second phase, noted δ and attributed to infected-cell loss, was faster in patients receiving TVR+PR or sofosbuvir+simeprevir compared to NS5A inhibitors (0.27 vs 0.21 d-1, respectively, p = 0.0012). In contrast, the rate of ALT normalization, noted λ, was slower in patients receiving TVR+PR or sofosbuvir+simeprevir compared to NS5A inhibitors (0.17 vs 0.27 d-1, respectively, p<0.001). There was no significant association between the second phase of viral decline and the ALT normalization rate and, for a given level of viral reduction, ALT normalization was more profound in patients receiving DAAs, and NS5A inhibitors in particular, than TVR+PR. Conclusions: Our data support a process of HCV clearance by all-DAA regimens potentiated by NS5A inhibitors, relying less upon hepatocyte death than IFN-containing regimens. This may underline a process of “cell-cure” by DAAs, leading to a fast improvement of liver homeostasis. PMID:28545127
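
    Biphasic HCV-RNA decline under antiviral therapy is commonly parameterized as a sum of two exponentials, V(t) = V0 [eps * exp(-lambda1 * t) + (1 - eps) * exp(-delta * t)], with a fast first phase reflecting the blocking of virion production/secretion and a slower second phase (delta) attributed to infected-cell loss. A minimal sketch with hypothetical parameter values (not the fitted estimates of this study):

    ```python
    import numpy as np

    def log10_viral_load(t, v0=6.5, eps=0.998, lambda1=4.0, delta=0.25):
        """log10 HCV-RNA under a generic biphasic decline model.

        v0: baseline log10 IU/mL; eps: fraction cleared in the fast phase;
        lambda1, delta: first- and second-phase rates (1/day). All values
        here are illustrative assumptions.
        """
        v = 10.0 ** v0 * (eps * np.exp(-lambda1 * t) + (1.0 - eps) * np.exp(-delta * t))
        return np.log10(v)

    for day in (0, 1, 7, 28):
        print(day, round(float(log10_viral_load(day)), 2))
    ```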

  16. Modeling of active transmembrane transport in a mixture theory framework.

    PubMed

    Ateshian, Gerard A; Morrison, Barclay; Hung, Clark T

    2010-05-01

    This study formulates governing equations for active transport across semi-permeable membranes within the framework of the theory of mixtures. In mixture theory, which models the interactions of any number of fluid and solid constituents, a supply term appears in the conservation of linear momentum to describe momentum exchanges among the constituents. In past applications, this momentum supply was used to model frictional interactions only, thereby describing passive transport processes. In this study, it is shown that active transport processes, which impart momentum to solutes or solvent, may also be incorporated in this term. By projecting the equation of conservation of linear momentum along the normal to the membrane, a jump condition is formulated for the mechano-electrochemical potential of fluid constituents which is generally applicable to nonequilibrium processes involving active transport. The resulting relations are simple and easy to use, and address an important need in the membrane transport literature.

  17. Positive versus negative perfectionism in psychopathology: a comment on Slade and Owens's dual process model.

    PubMed

    Flett, Gordon L; Hewitt, Paul L

    2006-07-01

    This article reviews the concepts of positive and negative perfectionism and the dual process model of perfectionism outlined by Slade and Owens (1998). The authors acknowledge that the dual process model represents a conceptual advance in the study of perfectionism and that Slade and Owens should be commended for identifying testable hypotheses and future research directions. However, the authors take issue with the notion that there are two types of perfectionism, with one type of perfectionism representing a "normal" or "healthy" form of perfectionism. They suggest that positive perfectionism is motivated, at least in part, by an avoidance orientation and fear of failure, and recent attempts to define and conceptualize positive perfectionism may have blurred the distinction between perfectionism and conscientiousness. Research findings that question the adaptiveness of positive forms of perfectionism are highlighted, and key issues for future research are identified.

  18. Using a theory-driven conceptual framework in qualitative health research.

    PubMed

    Macfarlane, Anne; O'Reilly-de Brún, Mary

    2012-05-01

    The role and merits of highly inductive research designs in qualitative health research are well established, and there has been a powerful proliferation of grounded theory method in the field. However, tight qualitative research designs informed by social theory can be useful to sensitize researchers to concepts and processes that they might not necessarily identify through inductive processes. In this article, we provide a reflexive account of our experience of using a theory-driven conceptual framework, the Normalization Process Model, in a qualitative evaluation of general practitioners' uptake of a free, pilot, language interpreting service in the Republic of Ireland. We reflect on our decisions about whether or not to use the Model, and describe our actual use of it to inform research questions, sampling, coding, and data analysis. We conclude with reflections on the added value that the Model and tight design brought to our research.

  19. Semiparametric Bayesian classification with longitudinal markers

    PubMed Central

    De la Cruz-Mesía, Rolando; Quintana, Fernando A.; Müller, Peter

    2013-01-01

    Summary We analyse data from a study involving 173 pregnant women. The data are observed values of the β human chorionic gonadotropin hormone measured during the first 80 days of gestational age, including from one up to six longitudinal responses for each woman. The main objective in this study is to predict normal versus abnormal pregnancy outcomes from data that are available at the early stages of pregnancy. We achieve the desired classification with a semiparametric hierarchical model. Specifically, we consider a Dirichlet process mixture prior for the distribution of the random effects in each group. The unknown random-effects distributions are allowed to vary across groups but are made dependent by using a design vector to select different features of a single underlying random probability measure. The resulting model is an extension of the dependent Dirichlet process model, with an additional probability model for group classification. The model is shown to perform better than an alternative model which is based on independent Dirichlet processes for the groups. Relevant posterior distributions are summarized by using Markov chain Monte Carlo methods. PMID:24368871

  20. Zero-state Markov switching count-data models: an empirical assessment.

    PubMed

    Malyshkina, Nataliya V; Mannering, Fred L

    2010-01-01

    In this study, a two-state Markov switching count-data model is proposed as an alternative to zero-inflated models to account for the preponderance of zeros sometimes observed in transportation count data, such as the number of accidents occurring on a roadway segment over some period of time. For this accident-frequency case, zero-inflated models assume the existence of two states: one of the states is a zero-accident count state, which has accident probabilities that are so low that they cannot be statistically distinguished from zero, and the other state is a normal-count state, in which counts can be non-negative integers that are generated by some counting process, for example, a Poisson or negative binomial. While zero-inflated models have come under some criticism with regard to accident-frequency applications - one fact is undeniable - in many applications they provide a statistically superior fit to the data. The Markov switching approach we propose seeks to overcome some of the criticism associated with the zero-accident state of the zero-inflated model by allowing individual roadway segments to switch between zero and normal-count states over time. An important advantage of this Markov switching approach is that it allows for the direct statistical estimation of the specific roadway-segment state (i.e., zero-accident or normal-count state) whereas traditional zero-inflated models do not. To demonstrate the applicability of this approach, a two-state Markov switching negative binomial model (estimated with Bayesian inference) and standard zero-inflated negative binomial models are estimated using five-year accident frequencies on Indiana interstate highway segments. It is shown that the Markov switching model is a viable alternative and results in a superior statistical fit relative to the zero-inflated models.
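    A minimal simulation of the two-state idea is sketched below: each roadway segment switches between a zero-accident state and a normal-count state according to a Markov transition matrix, and counts are drawn from a negative binomial only in the normal-count state. All parameter values are made up for illustration; the paper estimates the corresponding quantities with Bayesian inference.

        import numpy as np

        rng = np.random.default_rng(0)

        # Illustrative parameters: row-stochastic transition matrix between the
        # zero-accident state (0) and the normal-count state (1), plus a
        # negative binomial for counts generated in the normal-count state.
        P = np.array([[0.90, 0.10],
                      [0.20, 0.80]])
        n_nb, p_nb = 2.0, 0.5

        def simulate_segment(T, state=0):
            """Simulate T period counts for one roadway segment."""
            counts = []
            for _ in range(T):
                state = rng.choice(2, p=P[state])
                counts.append(0 if state == 0 else rng.negative_binomial(n_nb, p_nb))
            return counts

        print(simulate_segment(10))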

  1. One-dimensional wave bottom boundary layer model comparison: specific eddy viscosity and turbulence closure models

    USGS Publications Warehouse

    Puleo, J.A.; Mouraenko, O.; Hanes, D.M.

    2004-01-01

    Six one-dimensional-vertical wave bottom boundary layer models are analyzed based on different methods for estimating the turbulent eddy viscosity: Laminar, linear, parabolic, k—one equation turbulence closure, k−ε—two equation turbulence closure, and k−ω—two equation turbulence closure. Resultant velocity profiles, bed shear stresses, and turbulent kinetic energy are compared to laboratory data of oscillatory flow over smooth and rough beds. Bed shear stress estimates for the smooth bed case were most closely predicted by the k−ω model. Normalized errors between model predictions and measurements of velocity profiles over the entire computational domain collected at 15° intervals for one-half a wave cycle show that overall the linear model was most accurate. The least accurate were the laminar and k−ε models. Normalized errors between model predictions and turbulence kinetic energy profiles showed that the k−ω model was most accurate. Based on these findings, when the smallest overall velocity profile prediction error is required, the processing requirements and error analysis suggest that the linear eddy viscosity model is adequate. However, if accurate estimates of bed shear stress and TKE are required then, of the models tested, the k−ω model should be used.

  2. Model-Based Thermal System Design Optimization for the James Webb Space Telescope

    NASA Technical Reports Server (NTRS)

    Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.

    2017-01-01

    Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.

  3. Model-based thermal system design optimization for the James Webb Space Telescope

    NASA Astrophysics Data System (ADS)

    Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.

    2017-10-01

    Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.

  4. GOCE and Its Role in Combined Global High Resolution Gravity Field Determination

    NASA Astrophysics Data System (ADS)

    Fecher, T.; Pail, R.; Gruber, T.

    2013-12-01

    Combined high-resolution gravity field models serve as a mandatory basis for describing static and dynamic processes in the Earth system. Ocean dynamics can be modeled with reference to a highly accurate geoid as reference surface, and solid earth processes are initiated by the gravity field. Geodetic disciplines such as height system determination also depend on highly precise gravity field information. To fulfill the various requirements concerning resolution and accuracy, every kind of gravity field information, that is, satellite as well as terrestrial and altimetric gravity field observations, has to be included in one combination process. A key role is reserved here for GOCE observations, which contribute optimal signal content in the long- to medium-wavelength part and enable a more accurate gravity field determination than ever before, especially in areas where no highly accurate terrestrial gravity field observations are available, such as South America, Asia or Africa. For our contribution we prepare a combined high-resolution gravity field model up to d/o 720 based on full normal equations, including recent GOCE, GRACE and terrestrial/altimetric data. Normal equations are set up separately for all data sets, weighted relative to each other in the combination step, and solved. This procedure is computationally challenging and can only be performed on supercomputers. We put special emphasis on the combination process, for which we modified our procedure to include GOCE data optimally. Furthermore, we modified our terrestrial/altimetric data sets, which should result in an improved outcome. With our model, which includes the newest GOCE TIM4 gradiometry results, we can show how GOCE contributes to a combined gravity field solution, especially in areas of poor terrestrial data coverage. The model is validated against independent GPS leveling data in selected regions as well as by computing the mean dynamic topography over the oceans. Further, we analyze the statistical error estimates derived from full covariance propagation and compare them with the absolute validation against independent data sets.
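    The combination step described above amounts to accumulating weighted normal equations from the individual data sources and solving the summed system. The toy sketch below illustrates that mechanics on a three-parameter least-squares problem; the weights, data and dimensions are placeholders, and the real GOCE/GRACE/terrestrial systems are many orders of magnitude larger.

        import numpy as np

        def combine_normals(normals, weights):
            """Solve (sum_i w_i N_i) x = sum_i w_i b_i for relative weights w_i,
            where dataset i contributes a normal matrix N_i and right-hand side b_i."""
            N = sum(w * Ni for w, (Ni, bi) in zip(weights, normals))
            b = sum(w * bi for w, (Ni, bi) in zip(weights, normals))
            return np.linalg.solve(N, b)

        rng = np.random.default_rng(1)
        A1, A2 = rng.normal(size=(10, 3)), rng.normal(size=(8, 3))   # two "datasets"
        x_true = np.array([1.0, -2.0, 0.5])
        y1 = A1 @ x_true + 0.01 * rng.normal(size=10)
        y2 = A2 @ x_true + 0.10 * rng.normal(size=8)
        normals = [(A1.T @ A1, A1.T @ y1), (A2.T @ A2, A2.T @ y2)]
        print(combine_normals(normals, weights=[1.0, 0.01]))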

  5. Epigenetic and gene expression changes in the adolescent brain: What have we learned from animal models?

    PubMed

    Mychasiuk, Richelle; Metz, Gerlinde A S

    2016-11-01

    Adolescence is defined as the gradual period of transition between childhood and adulthood that is characterized by significant brain maturation, growth spurts, sexual maturation, and heightened social interaction. Although originally believed to be a uniquely human aspect of development, rodent and non-human primates demonstrate maturational patterns that distinctly support an adolescent stage. As epigenetic processes are essential for development and differentiation, but also transpire in mature cells in response to environmental influences, they are an important aspect of adolescent brain maturation. The purpose of this review article was to examine epigenetic programming in animal models of brain maturation during adolescence. The discussion focuses on animal models to examine three main concepts; epigenetic processes involved in normal adolescent brain maturation, the influence of fetal programming on adolescent brain development and the epigenome, and finally, postnatal experiences such as exercise and drugs that modify epigenetic processes important for adolescent brain maturation. This corollary emphasizes the utility of animal models to further our understanding of complex processes such as epigenetic regulation and brain development. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Eye-fixation behavior, lexical storage, and visual word recognition in a split processing model.

    PubMed

    Shillcock, R; Ellison, T M; Monaghan, P

    2000-10-01

    Some of the implications of a model of visual word recognition in which processing is conditioned by the anatomical splitting of the visual field between the two hemispheres of the brain are explored. The authors investigate the optimal processing of visually presented words within such an architecture, and, for a realistically sized lexicon of English, characterize a computationally optimal fixation point in reading. They demonstrate that this approach motivates a range of behavior observed in reading isolated words and text, including the optimal viewing position and its relationship with the preferred viewing location, the failure to fixate smaller words, asymmetries in hemisphere-specific processing, and the priority given to the exterior letters of words. The authors also show that split architectures facilitate the uptake of all the letter-position information necessary for efficient word recognition and that this information may be less specific than is normally assumed. A split model of word recognition captures a range of behavior in reading that is greater than that covered by existing models of visual word recognition.

  7. Tensor products of process matrices with indefinite causal structure

    NASA Astrophysics Data System (ADS)

    Jia, Ding; Sakharwade, Nitica

    2018-03-01

    Theories with indefinite causal structure have been studied from both the fundamental perspective of quantum gravity and the practical perspective of information processing. In this paper we point out a restriction in forming tensor products of objects with indefinite causal structure in certain models: there exist both classical and quantum objects the tensor products of which violate the normalization condition of probabilities, if all local operations are allowed. We obtain a necessary and sufficient condition for when such unrestricted tensor products of multipartite objects are (in)valid. This poses a challenge to extending communication theory to indefinite causal structures, as the tensor product is the fundamental ingredient in the asymptotic setting of communication theory. We discuss a few options to evade this issue. In particular, we show that the sequential asymptotic setting does not suffer the violation of normalization.

  8. An on-line modified least-mean-square algorithm for training neurofuzzy controllers.

    PubMed

    Tan, Woei Wan

    2007-04-01

    The problem hindering the use of data-driven modelling methods for training controllers on-line is the lack of control over the amount by which the plant is excited. As the operating schedule determines the information available on-line, the knowledge of the process may degrade if the setpoint remains constant for an extended period. This paper proposes an identification algorithm that alleviates "learning interference" by incorporating fuzzy theory into the normalized least-mean-square update rule. The ability of the proposed methodology to achieve faster learning is examined by employing the algorithm to train a neurofuzzy feedforward controller for controlling a liquid level process. Since the proposed identification strategy has similarities with the normalized least-mean-square update rule and the recursive least-square estimator, the on-line learning rates of these algorithms are also compared.
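    For reference, the plain normalized least-mean-square update that the proposed fuzzy-modified rule builds on is sketched below; the fuzzy weighting of the learning rate described in the paper is not reproduced here.

        import numpy as np

        def nlms_step(w, x, d, mu=0.5, eps=1e-6):
            """One normalized LMS update: the step size is divided by the
            instantaneous input power, making adaptation insensitive to the
            scale of the regressor x.
            w : current weight vector, x : input vector, d : desired output."""
            e = d - w @ x                           # prediction error
            return w + mu * e * x / (x @ x + eps), e

        w = np.zeros(3)
        for x, d in [(np.array([1.0, 0.5, -0.2]), 0.8),
                     (np.array([0.9, 0.4, -0.1]), 0.7)]:
            w, err = nlms_step(w, x, d)
        print(w)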

  9. High-frequency ultrasound measurements of the normal ciliary body and iris.

    PubMed

    Garcia, Julian P S; Spielberg, Leigh; Finger, Paul T

    2011-01-01

    To determine the normal ultrasonographic thickness of the iris and ciliary body. This prospective 35-MHz ultrasonographic study included 80 normal eyes of 40 healthy volunteers. The images were obtained at the 12-, 3-, 6-, and 9-o'clock radial meridians, measured at three locations along the radial length of the iris and at the thickest section of the ciliary body. Mixed model was used to estimate eye site-adjusted means and standard errors and to test the statistical difference of adjusted results. Parameters included mean thickness, standard deviation, and range. Mean thicknesses at the iris root, midway along the radial length of the iris, and at the juxtapupillary margin were 0.4 ± 0.1, 0.5 ± 0.1, and 0.6 ± 0.1 mm, respectively. Those of the ciliary body, ciliary processes, and ciliary body + ciliary processes were 0.7 ± 0.1, 0.6 ± 0.1, and 1.3 ± 0.2 mm, respectively. This study provides standard, normative thickness data for the iris and ciliary body in healthy adults using ultrasonographic imaging. Copyright 2011, SLACK Incorporated.

  10. Drosophila hematopoiesis under normal conditions and in response to immune stress.

    PubMed

    Letourneau, Manon; Lapraz, Francois; Sharma, Anurag; Vanzo, Nathalie; Waltzer, Lucas; Crozatier, Michèle

    2016-11-01

    The emergence of hematopoietic progenitors and their differentiation into various highly specialized blood cell types constitute a finely tuned process. Unveiling the genetic cascades that control blood cell progenitor fate and understanding how they are modulated in response to environmental changes are two major challenges in the field of hematopoiesis. In the last 20 years, many studies have established important functional analogies between blood cell development in vertebrates and in the fruit fly, Drosophila melanogaster. Thereby, Drosophila has emerged as a powerful genetic model for studying mechanisms that control hematopoiesis during normal development or in pathological situations. Moreover, recent advances in Drosophila have highlighted how intricate cell communication networks and microenvironmental cues regulate blood cell homeostasis. They have also revealed the striking plasticity of Drosophila mature blood cells and the presence of different sites of hematopoiesis in the larva. This review provides an overview of Drosophila hematopoiesis during development and summarizes our current knowledge on the molecular processes controlling larval hematopoiesis, both under normal conditions and in response to an immune challenge, such as wasp parasitism. © 2016 Federation of European Biochemical Societies.

  11. Strength of Gamma Rhythm Depends on Normalization

    PubMed Central

    Ray, Supratim; Ni, Amy M.; Maunsell, John H. R.

    2013-01-01

    Neuronal assemblies often exhibit stimulus-induced rhythmic activity in the gamma range (30–80 Hz), whose magnitude depends on the attentional load. This has led to the suggestion that gamma rhythms form dynamic communication channels across cortical areas processing the features of behaviorally relevant stimuli. Recently, attention has been linked to a normalization mechanism, in which the response of a neuron is suppressed (normalized) by the overall activity of a large pool of neighboring neurons. In this model, attention increases the excitatory drive received by the neuron, which in turn also increases the strength of normalization, thereby changing the balance of excitation and inhibition. Recent studies have shown that gamma power also depends on such excitatory–inhibitory interactions. Could modulation in gamma power during an attention task be a reflection of the changes in the underlying excitation–inhibition interactions? By manipulating the normalization strength independent of attentional load in macaque monkeys, we show that gamma power increases with increasing normalization, even when the attentional load is fixed. Further, manipulations of attention that increase normalization increase gamma power, even when they decrease the firing rate. Thus, gamma rhythms could be a reflection of changes in the relative strengths of excitation and normalization rather than playing a functional role in communication or control. PMID:23393427
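    The normalization mechanism referred to above is usually written as a divisive operation: a unit's driven response is divided by the pooled activity of neighboring units plus a semi-saturation constant. The sketch below is a generic instance of that equation, not the specific model fitted in the study; increasing the pool drive while holding the unit's own drive fixed mimics an increase in normalization strength.

        import numpy as np

        def normalized_response(drive, pool_drive, sigma=1.0, n=2.0):
            """Divisive normalization: R = drive^n / (sigma^n + sum(pool^n))."""
            return drive ** n / (sigma ** n + np.sum(np.asarray(pool_drive) ** n))

        print(normalized_response(10.0, [5.0, 5.0]))     # weak normalization pool
        print(normalized_response(10.0, [20.0, 20.0]))   # strong normalization pool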

  12. The model of drugs distribution dynamics in biological tissue

    NASA Astrophysics Data System (ADS)

    Ginevskij, D. A.; Izhevskij, P. V.; Sheino, I. N.

    2017-09-01

    In Neutron Capture Therapy, the dose distribution follows the distribution of 10B in the tissue. Modern pharmacokinetic models of drugs describe the processes occurring in notional "chambers" (blood-organ-tumor), but fail to describe the spatial distribution of the drug in the tumor and in normal tissue. A mathematical model of the dynamics of the spatial distribution of a drug in tissue, depending on the concentration of the drug in the blood, was developed. The modeling method represents the biological structure as a randomly inhomogeneous medium in which the 10B distribution takes place. The parameters of the model that cannot be determined rigorously in experiments are treated as random quantities governed by the laws of independent random processes. Estimates of the 10B distribution in the tumor and in healthy tissue, inside and outside the cells, are obtained.

  13. The IfE Global Gravity Field Model Recovered from GOCE Orbit and Gradiometer Data

    NASA Astrophysics Data System (ADS)

    Wu, Hu; Müller, Jürgen; Brieden, Phillip

    2015-03-01

    An independent global gravity field model is computed from the GOCE orbit and gradiometer data using our own IfE software. We analysed the same data period that was considered for the first released GOCE models. The Acceleration Approach is applied to process the orbit data. The gravity gradients are processed in the framework of the remove-restore technique, by which the low-frequency noise of the original gradients is removed. For the combined solution, the normal equations are accumulated with relative weights determined by the Variance Component Estimation Approach. The result, in terms of accumulated geoid height error calculated from the coefficient differences w.r.t. EGM2008, is about 11 cm at D/O 200, which corresponds to the accuracy level of the first released TIM and DIR solutions. This indicates that our IfE model has a performance comparable to the other official GOCE models.

  14. Effect of second to first normal stress difference ratio at the die exit on neck-in phenomenon in polymeric flat film production

    NASA Astrophysics Data System (ADS)

    Barborik, Tomas; Zatloukal, Martin

    2017-05-01

    In this study, viscoelastic modeling of the extrusion film casting process, based on the 1D membrane model and the modified Leonov constitutive equation, was conducted, and the effect of the viscoelastic stress state at the die exit (captured here via the second to first normal stress difference ratio) on the unwanted neck-in phenomenon was analyzed for a wide range of Deborah numbers and materials having different levels of uniaxial and planar extensional strain hardening. Relevant experimental data for LDPE and theoretical predictions based on the multimode eXtended Pom-Pom model taken from the open literature were used for validation purposes. It was found that, firstly, the predictive capabilities of both constitutive equations for the given material and processing conditions are comparable even though a single-mode modified Leonov model was used and, secondly, the agreement between theoretical and experimental data on neck-in is fairly good. Results of the theoretical study revealed that the viscoelastic stress state at the die exit (i.e. the -N2/N1 ratio) increases the level of neck-in if the uniaxial extensional strain hardening, the planar to uniaxial extensional viscosity ratio, and the Deborah number increase. It has also been revealed that there exists a threshold value of the Deborah number and of extensional strain hardening below which the neck-in becomes independent of the die exit stress state.

  15. Stochastic Model of Seasonal Runoff Forecasts

    NASA Astrophysics Data System (ADS)

    Krzysztofowicz, Roman; Watada, Leslie M.

    1986-03-01

    Each year the National Weather Service and the Soil Conservation Service issue a monthly sequence of five (or six) categorical forecasts of the seasonal snowmelt runoff volume. To describe uncertainties in these forecasts for the purposes of optimal decision making, a stochastic model is formulated. It is a discrete-time, finite, continuous-space, nonstationary Markov process. Posterior densities of the actual runoff conditional upon a forecast, and transition densities of forecasts are obtained from a Bayesian information processor. Parametric densities are derived for the process with a normal prior density of the runoff and a linear model of the forecast error. The structure of the model and the estimation procedure are motivated by analyses of forecast records from five stations in the Snake River basin, from the period 1971-1983. The advantages of supplementing the current forecasting scheme with a Bayesian analysis are discussed.
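    Under the normal prior and linear forecast-error model mentioned above, the posterior density of the runoff given a forecast is again normal and follows from a standard conjugate update. The sketch below shows that update; the coefficients and variances are invented for illustration and would be estimated from a station's forecast record.

        import numpy as np

        def posterior_runoff(f, m0, s0, a, b, tau):
            """Posterior mean and std of runoff W given forecast f, assuming
            a prior W ~ N(m0, s0^2) and forecast model F | W ~ N(a*W + b, tau^2)."""
            prec = 1.0 / s0 ** 2 + a ** 2 / tau ** 2
            mean = (m0 / s0 ** 2 + a * (f - b) / tau ** 2) / prec
            return mean, np.sqrt(1.0 / prec)

        # Illustrative numbers only (e.g. volumes in thousand acre-feet)
        print(posterior_runoff(f=900.0, m0=800.0, s0=150.0, a=1.0, b=0.0, tau=100.0))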

  16. The significance of the choice of radiobiological (NTCP) models in treatment plan objective functions.

    PubMed

    Miller, J; Fuller, M; Vinod, S; Suchowerska, N; Holloway, L

    2009-06-01

    A clinician's discrimination between radiation therapy treatment plans is traditionally a subjective process, based on experience and existing protocols. A more objective and quantitative approach to distinguishing between treatment plans is to use radiobiological or dosimetric objective functions, based on radiobiological or dosimetric models. The efficacy of these models is not well understood, nor is the correlation between the plan rankings they produce and the traditional subjective approach. One such radiobiological model is the Normal Tissue Complication Probability (NTCP). Dosimetric models or indicators are more accepted in clinical practice. In this study, three radiobiological models, the Lyman NTCP, critical-volume NTCP and relative-seriality NTCP, and three dosimetric indicators, mean lung dose (MLD) and the lung volumes irradiated at 10 Gy (V10) and 20 Gy (V20), were used to rank a series of treatment plans using harm to normal (lung) tissue as the objective criterion. None of the models considered in this study showed consistent correlation with the radiation oncologists' plan ranking. If radiobiological or dosimetric models are to be used in objective functions for lung treatments, based on this study it is recommended that the Lyman NTCP model be used because it provides the most consistency with traditional clinician ranking.
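    As an illustration of one of the radiobiological objective functions named above, the sketch below evaluates the Lyman (Lyman-Kutcher-Burman) NTCP from a differential dose-volume histogram via the generalized equivalent uniform dose. The DVH and the tissue parameters are placeholder values, not the data or parameter set used in the study.

        from math import erf, sqrt

        def lyman_ntcp(dvh_doses, dvh_volumes, td50, m, n):
            """Lyman-Kutcher-Burman NTCP from a differential DVH.
            dvh_doses   : dose levels (Gy)
            dvh_volumes : fractional organ volume at each dose level (sums to 1)
            td50, m, n  : tolerance dose, slope and volume-effect parameters"""
            eud = sum(v * d ** (1.0 / n) for d, v in zip(dvh_doses, dvh_volumes)) ** n
            t = (eud - td50) / (m * td50)
            return 0.5 * (1.0 + erf(t / sqrt(2.0)))   # standard normal CDF

        # Placeholder lung DVH and commonly quoted pneumonitis parameters
        print(lyman_ntcp([5.0, 15.0, 25.0], [0.5, 0.3, 0.2], td50=24.5, m=0.18, n=0.87))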

  17. Normalized Temperature Contrast Processing in Flash Infrared Thermography

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M.

    2016-01-01

    The paper presents a further development of the normalized-contrast processing for the flash infrared thermography method given by the author in US 8,577,120 B1. Methods of computing normalized image (pixel intensity) contrast and normalized temperature contrast are provided, including converting one from the other. Methods of assessing the emissivity of the object, the afterglow heat flux, the reflection temperature change, and temperature video imaging during flash thermography are provided. Temperature imaging and normalized temperature contrast imaging provide certain advantages over pixel-intensity normalized contrast processing by reducing the effect of reflected energy in images and measurements, providing better quantitative data. The subject matter of this paper mostly comes from US 9,066,028 B1 by the author. Examples of normalized image processing video images and normalized temperature processing video images are provided. Examples of surface temperature video images, surface temperature rise video images and simple contrast video images are also provided. Temperature video imaging in flash infrared thermography allows better comparison with flash thermography simulation using commercial software, which provides temperature video as its output. Temperature imaging also allows easy comparison of the surface temperature change with the camera temperature sensitivity, or noise equivalent temperature difference (NETD), to assess the probability of detection (POD) of anomalies.
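    One commonly used definition of normalized pixel-intensity contrast in flash thermography divides the post-flash intensity rise of a test pixel by the rise of a reference (sound-area) pixel, which removes much of the dependence on flash energy and surface properties. The sketch below implements that generic definition only; the exact formulations in the cited patents may differ.

        import numpy as np

        def normalized_contrast(I, I_ref, I_pre, I_ref_pre):
            """Normalized contrast = (I - I_pre) / (I_ref - I_ref_pre), i.e. the
            test-pixel rise over the reference-pixel rise (illustrative form)."""
            return (I - I_pre) / (I_ref - I_ref_pre + 1e-12)

        t = np.linspace(0.01, 2.0, 5)                  # seconds after the flash
        I_ref = 100.0 + 50.0 / np.sqrt(t)              # idealized sound-area cooling
        I_def = 100.0 + 55.0 / np.sqrt(t)              # pixel above a defect cools slower
        print(normalized_contrast(I_def, I_ref, 100.0, 100.0))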

  18. Integrating cognitive and peripheral factors in predicting hearing-aid processing effectiveness

    PubMed Central

    Kates, James M.; Arehart, Kathryn H.; Souza, Pamela E.

    2013-01-01

    Individual factors beyond the audiogram, such as age and cognitive abilities, can influence speech intelligibility and speech quality judgments. This paper develops a neural network framework for combining multiple subject factors into a single model that predicts speech intelligibility and quality for a nonlinear hearing-aid processing strategy. The nonlinear processing approach used in the paper is frequency compression, which is intended to improve the audibility of high-frequency speech sounds by shifting them to lower frequency regions where listeners with high-frequency loss have better hearing thresholds. An ensemble averaging approach is used for the neural network to avoid the problems associated with overfitting. Models are developed for two subject groups, one having nearly normal hearing and the other mild-to-moderate sloping losses. PMID:25669257

  19. A cortical neural prosthesis for restoring and enhancing memory

    NASA Astrophysics Data System (ADS)

    Berger, Theodore W.; Hampson, Robert E.; Song, Dong; Goonawardena, Anushka; Marmarelis, Vasilis Z.; Deadwyler, Sam A.

    2011-08-01

    A primary objective in developing a neural prosthesis is to replace neural circuitry in the brain that no longer functions appropriately. Such a goal requires artificial reconstruction of neuron-to-neuron connections in a way that can be recognized by the remaining normal circuitry, and that promotes appropriate interaction. In this study, the application of a specially designed neural prosthesis using a multi-input/multi-output (MIMO) nonlinear model is demonstrated by using trains of electrical stimulation pulses to substitute for MIMO model derived ensemble firing patterns. Ensembles of CA3 and CA1 hippocampal neurons, recorded from rats performing a delayed-nonmatch-to-sample (DNMS) memory task, exhibited successful encoding of trial-specific sample lever information in the form of different spatiotemporal firing patterns. MIMO patterns, identified online and in real-time, were employed within a closed-loop behavioral paradigm. Results showed that the model was able to predict successful performance on the same trial. Also, MIMO model-derived patterns, delivered as electrical stimulation to the same electrodes, improved performance under normal testing conditions and, more importantly, were capable of recovering performance when delivered to animals with ensemble hippocampal activity compromised by pharmacologic blockade of synaptic transmission. These integrated experimental-modeling studies show for the first time that, with sufficient information about the neural coding of memories, a neural prosthesis capable of real-time diagnosis and manipulation of the encoding process can restore and even enhance cognitive, mnemonic processes.

  20. [Application of support vector machine-recursive feature elimination algorithm in Raman spectroscopy for differential diagnosis of benign and malignant breast diseases].

    PubMed

    Zhang, Haipeng; Fu, Tong; Zhang, Zhiru; Fan, Zhimin; Zheng, Chao; Han, Bing

    2014-08-01

    To explore the value of application of support vector machine-recursive feature elimination (SVM-RFE) method in Raman spectroscopy for differential diagnosis of benign and malignant breast diseases. Fresh breast tissue samples of 168 patients (all female; ages 22-75) were obtained by routine surgical resection from May 2011 to May 2012 at the Department of Breast Surgery, the First Hospital of Jilin University. Among them, there were 51 normal tissues, 66 benign and 51 malignant breast lesions. All the specimens were assessed by Raman spectroscopy, and the SVM-RFE algorithm was used to process the data and build the mathematical model. Mahalanobis distance and spectral residuals were used as discriminating criteria to evaluate this data-processing method. 1800 Raman spectra were acquired from the fresh samples of human breast tissues. Based on spectral profiles, the presence of 1078, 1267, 1301, 1437, 1653, and 1743 cm(-1) peaks were identified in the normal tissues; and 1281, 1341, 1381, 1417, 1465, 1530, and 1637 cm(-1) peaks were found in the benign and malignant tissues. The main characteristic peaks differentiating benign and malignant lesions were 1340 and 1480 cm(-1). The accuracy of SVM-RFE in discriminating normal and malignant lesions was 100.0%, while that in the assessment of benign lesions was 93.0%. There are distinct differences among the Raman spectra of normal, benign and malignant breast tissues, and SVM-RFE method can be used to build differentiation model of breast lesions.
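    The classification step described above can be reproduced in outline with scikit-learn, wrapping a linear support vector machine in recursive feature elimination so that uninformative wavenumbers are pruned iteratively. The data below are synthetic stand-ins for preprocessed spectra; the study's spectra, labels and tuning are not reproduced.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.feature_selection import RFE
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 300))            # rows: spectra, cols: wavenumbers
        y = rng.integers(0, 2, size=120)           # 0 = normal, 1 = malignant (synthetic)
        X[y == 1, 50] += 1.0                       # make one "peak" informative

        svc = SVC(kernel="linear", C=1.0)
        selector = RFE(svc, n_features_to_select=20, step=10).fit(X, y)
        print("selected wavenumber indices:", np.where(selector.support_)[0])
        print("cv accuracy:", cross_val_score(selector, X, y, cv=5).mean())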

  1. Effect of Jianweiyuyang granule on gastric ulcer recurrence and expression of VEGF mRNA in the healing process of gastric ulcer in rats.

    PubMed

    Dai, Xing-Ping; Li, Jia-Bang; Liu, Zhao-Qian; Ding, Xiang; Huang, Cheng-Hui; Zhou, Bing

    2005-09-21

    To investigate the effect of Jianweiyuyang (JWYY) granule on gastric ulcer recurrence and its mechanism in the treatment of gastric ulcer in rats. Gastric ulcer in rats was induced according to Okabe's method with a minor modification, and the recurrence model was induced by IL-1beta. The expression of vascular endothelial growth factor mRNA (VEGF mRNA) was examined by reverse transcription polymerase chain reaction in gastric ulcer and microvessel density (MVD) adjacent to the ulcer margin was examined by immunohistochemistry. MVD was higher in the JWYY treatment group (14.0+/-2.62) compared with the normal, model and ranitidine treatment groups (2.2+/-0.84, 8.8+/-0.97, 10.4+/-0.97) in rats (P<0.01). The expression level of VEGF mRNA in gastric tissues during the healing process of JWYY treatment group rats significantly increased compared with other groups (normal group: 0.190+/-0.019, model group: 0.642+/-0.034, ranitidine group: 0.790+/-0.037, P<0.01). JWYY granules can stimulate angiogenesis and enhance the expression of VEGF mRNA in gastric ulcer rats. This might be the mechanism by which JWYY accelerates ulcer healing and prevents the recurrence of gastric ulcer.

  2. Exponential model normalization for electrical capacitance tomography with external electrodes under gap permittivity conditions

    NASA Astrophysics Data System (ADS)

    Baidillah, Marlin R.; Takei, Masahiro

    2017-06-01

    A nonlinear normalization model, called the exponential model, for electrical capacitance tomography (ECT) with external electrodes under gap-permittivity conditions has been developed. The exponential normalization model is proposed based on the inherently nonlinear relationship between the mixture permittivity and the measured capacitance caused by the gap permittivity of the inner wall. The parameters of the exponential equation are derived from an exponential curve fit based on simulation, and a scaling function is added to adjust for the experimental system conditions. The exponential model normalization was applied to two-dimensional low- and high-contrast dielectric distribution phantoms in simulation and experimental studies. The proposed normalization model has been compared with other normalization models, i.e. the parallel, series, Maxwell and Böttcher models. Based on the comparison of image reconstruction results, the exponential model reliably predicts the nonlinear normalization of the measured capacitance for both low- and high-contrast dielectric distributions.
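    For orientation, the sketch below contrasts the conventional linear ("parallel") normalization of a measured capacitance with a generic exponential variant in which the capacitance is assumed to grow exponentially with the normalized mixture permittivity and the relation is inverted. The functional form and the constant k are assumptions made for illustration; the paper's exponential model and its fitted parameters are not reproduced here.

        import numpy as np

        def parallel_normalization(c, c_low, c_high):
            """Linear normalization between empty-pipe (c_low) and full-pipe
            (c_high) calibration capacitances."""
            return (c - c_low) / (c_high - c_low)

        def exponential_normalization(c, c_low, c_high, k=3.0):
            """Inverts an assumed relation c = c_low + (c_high - c_low) *
            (exp(k*x) - 1) / (exp(k) - 1) for the normalized permittivity x."""
            ratio = (c - c_low) / (c_high - c_low)
            return np.log(ratio * (np.exp(k) - 1.0) + 1.0) / k

        c_meas = 0.55
        print(parallel_normalization(c_meas, 0.4, 0.9),
              exponential_normalization(c_meas, 0.4, 0.9))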

  3. Statistical Bayesian method for reliability evaluation based on ADT data

    NASA Astrophysics Data System (ADS)

    Lu, Dawei; Wang, Lizhi; Sun, Yusheng; Wang, Xiaohong

    2018-05-01

    Accelerated degradation testing (ADT) is frequently conducted in the laboratory to predict a product's reliability under normal operating conditions. Two kinds of methods, degradation path models and stochastic process models, are used to analyze degradation data, the latter being the more popular. However, limitations such as an imprecise solution process and imprecise estimation of the degradation rate remain, which may affect the accuracy of the acceleration model and the extrapolated values. Moreover, the usual remedy, the Bayesian method, loses key information when unifying the degradation data. In this paper, a new data-processing and parameter-inference method based on the Bayesian method is proposed to handle degradation data and address the problems above. First, a Wiener process and an acceleration model are chosen. Second, the initial values of the degradation model and the parameters of the prior and posterior distributions at each stress level are calculated, with the estimates updated iteratively. Third, lifetime and reliability are estimated on the basis of the estimated parameters. Finally, a case study is provided to demonstrate the validity of the proposed method. The results illustrate that the proposed method is effective and accurate in estimating the lifetime and reliability of a product.
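    A minimal sketch of the Wiener-process ingredient is given below: degradation paths X(t) = drift*t + sigma*B(t) are simulated at several stress levels, with the drift accelerated by a log-linear (Arrhenius-like) law, and the drift at each level is recovered from the increments. All numbers are illustrative assumptions; the paper's Bayesian updating of the prior and posterior parameters is not reproduced.

        import numpy as np

        rng = np.random.default_rng(42)

        def wiener_path(t, drift, sigma):
            """Degradation path X(t) = drift*t + sigma*B(t) sampled at times t."""
            dt = np.diff(t, prepend=0.0)
            return np.cumsum(drift * dt + sigma * np.sqrt(dt) * rng.normal(size=len(t)))

        t = np.linspace(0.5, 100.0, 200)
        stresses = [330.0, 350.0, 370.0]                    # e.g. temperatures in kelvin
        for s in stresses:
            x = wiener_path(t, drift=np.exp(8.0 - 3000.0 / s), sigma=0.05)
            inc, dt = np.diff(x), np.diff(t)
            print(s, "estimated drift:", np.sum(inc) / np.sum(dt))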

  4. Kullback-Leibler information function and the sequential selection of experiments to discriminate among several linear models

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1972-01-01

    The error variance of the process, prior multivariate normal distributions of the parameters of the models, and prior probabilities of the models being correct are assumed to be specified. A rule for termination of sampling is proposed. Upon termination, the model with the largest posterior probability is chosen as correct. If sampling is not terminated, posterior probabilities of the models and posterior distributions of the parameters are computed. The next experiment is chosen to maximize the expected Kullback-Leibler information function. Monte Carlo simulation experiments were performed to investigate the large- and small-sample behavior of the sequential adaptive procedure.
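    The Kullback-Leibler criterion mentioned above compares the rival models' predictive distributions at each candidate design point. The sketch below uses the closed-form divergence between two univariate normal predictives and picks the design point where they disagree most; it omits the weighting by posterior model probabilities that the full sequential procedure would apply, and all numbers are illustrative.

        import numpy as np

        def kl_normal(mu0, var0, mu1, var1):
            """KL( N(mu0, var0) || N(mu1, var1) ) for univariate normals."""
            return 0.5 * (np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

        xs = np.linspace(0.0, 2.0, 21)                          # candidate design points
        mean_a, var_a = 1.0 + 2.0 * xs, np.full_like(xs, 0.3)   # rival model A predictive
        mean_b, var_b = 0.5 + 3.0 * xs, np.full_like(xs, 0.3)   # rival model B predictive
        kl = kl_normal(mean_a, var_a, mean_b, var_b)
        print("most discriminating design point:", xs[np.argmax(kl)])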

  5. Development of the Word Auditory Recognition and Recall Measure: A Working Memory Test for Use in Rehabilitative Audiology.

    PubMed

    Smith, Sherri L; Pichora-Fuller, M Kathleen; Alexander, Genevieve

    The purpose of this study was to develop the Word Auditory Recognition and Recall Measure (WARRM) and to conduct the inaugural evaluation of the performance of younger adults with normal hearing, older adults with normal to near-normal hearing, and older adults with pure-tone hearing loss on the WARRM. The WARRM is a new test designed for concurrently assessing word recognition and auditory working memory performance in adults who may have pure-tone hearing loss. The test consists of 100 monosyllabic words based on widely used speech-recognition test materials. The 100 words are presented in recall set sizes of 2, 3, 4, 5, and 6 items, with 5 trials in each set size. The WARRM yields a word-recognition score and a recall score. The WARRM was administered to all participants in three listener groups under two processing conditions in a mixed model (between-subjects, repeated measures) design. The between-subjects factor was group, with 48 younger listeners with normal audiometric thresholds (younger listeners with normal hearing [YNH]), 48 older listeners with normal thresholds through 3000 Hz (older listeners with normal hearing [ONH]), and 48 older listeners with sensorineural hearing loss (older listeners with hearing loss [OHL]). The within-subjects factor was WARRM processing condition (no additional task or with an alphabet judgment task). The associations between results on the WARRM test and results on a battery of other auditory and memory measures were examined. Word-recognition performance on the WARRM was not affected by processing condition or set size and was near ceiling for the YNH and ONH listeners (99 and 98%, respectively) with both groups performing significantly better than the OHL listeners (83%). The recall results were significantly better for the YNH, ONH, and OHL groups with no processing (93, 84, and 75%, respectively) than with the alphabet processing (86, 77, and 70%). In both processing conditions, recall was best for YNH, followed by ONH, and worst for OHL listeners. WARRM recall scores were significantly correlated with other memory measures. In addition, WARRM recall scores were correlated with results on the Words-In-Noise (WIN) test for the OHL listeners in the no processing condition and for ONH listeners in the alphabet processing condition. Differences in the WIN and recall scores of these groups are consistent with the interpretation that the OHL listeners found listening to be sufficiently demanding to affect recall even in the no processing condition, whereas the ONH group listeners did not find it so demanding until the additional alphabet processing task was added. These findings demonstrate the feasibility of incorporating an auditory memory test into a word-recognition test to obtain measures of both word recognition and working memory simultaneously. The correlation of WARRM recall with scores from other memory measures is evidence of construct validity. The observation of correlations between the WIN thresholds with each of the older groups and recall scores in certain processing conditions suggests that recall depends on listeners' word-recognition abilities in noise in combination with the processing demands of the task. The recall score provides additional information beyond the pure-tone audiogram and word-recognition scores that may help rehabilitative audiologists assess the listening abilities of patients with hearing loss.

  6. Mapping of quantitative trait loci using the skew-normal distribution.

    PubMed

    Fernandes, Elisabete; Pacheco, António; Penha-Gonçalves, Carlos

    2007-11-01

    In standard interval mapping (IM) of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. When this assumption of normality is violated, the most commonly adopted strategy is to use the previous model after data transformation. However, an appropriate transformation may not exist or may be difficult to find. Also this approach can raise interpretation issues. An interesting alternative is to consider a skew-normal mixture model in standard IM, and the resulting method is here denoted as skew-normal IM. This flexible model that includes the usual symmetric normal distribution as a special case is important, allowing continuous variation from normality to non-normality. In this paper we briefly introduce the main peculiarities of the skew-normal distribution. The maximum likelihood estimates of parameters of the skew-normal distribution are obtained by the expectation-maximization (EM) algorithm. The proposed model is illustrated with real data from an intercross experiment that shows a significant departure from the normality assumption. The performance of the skew-normal IM is assessed via stochastic simulation. The results indicate that the skew-normal IM has higher power for QTL detection and better precision of QTL location as compared to standard IM and nonparametric IM.
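    The sketch below shows the skew-normal density that the mixture components above are built from, f(x) = (2/omega) * phi((x - xi)/omega) * Phi(alpha*(x - xi)/omega), and a direct maximum-likelihood fit with SciPy. The paper fits the mixture by EM within an interval-mapping scan, which is not reproduced here; the sample and parameter values are illustrative.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        sample = stats.skewnorm.rvs(a=4.0, loc=0.0, scale=1.0, size=2000,
                                    random_state=rng)   # synthetic skewed trait values

        # MLE of (shape alpha, location xi, scale omega); alpha = 0 recovers the
        # ordinary normal distribution as a special case.
        a_hat, loc_hat, scale_hat = stats.skewnorm.fit(sample)
        print(a_hat, loc_hat, scale_hat)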

  7. Mutual regulation of tumour vessel normalization and immunostimulatory reprogramming.

    PubMed

    Tian, Lin; Goldstein, Amit; Wang, Hai; Ching Lo, Hin; Sun Kim, Ik; Welte, Thomas; Sheng, Kuanwei; Dobrolecki, Lacey E; Zhang, Xiaomei; Putluri, Nagireddy; Phung, Thuy L; Mani, Sendurai A; Stossi, Fabio; Sreekumar, Arun; Mancini, Michael A; Decker, William K; Zong, Chenghang; Lewis, Michael T; Zhang, Xiang H-F

    2017-04-13

    Blockade of angiogenesis can retard tumour growth, but may also paradoxically increase metastasis. This paradox may be resolved by vessel normalization, which involves increased pericyte coverage, improved tumour vessel perfusion, reduced vascular permeability, and consequently mitigated hypoxia. Although these processes alter tumour progression, their regulation is poorly understood. Here we show that type 1 T helper (TH1) cells play a crucial role in vessel normalization. Bioinformatic analyses revealed that gene expression features related to vessel normalization correlate with immunostimulatory pathways, especially T lymphocyte infiltration or activity. To delineate the causal relationship, we used various mouse models with vessel normalization or T lymphocyte deficiencies. Although disruption of vessel normalization reduced T lymphocyte infiltration as expected, reciprocal depletion or inactivation of CD4+ T lymphocytes decreased vessel normalization, indicating a mutually regulatory loop. In addition, activation of CD4+ T lymphocytes by immune checkpoint blockade increased vessel normalization. TH1 cells that secrete interferon-γ are a major population of cells associated with vessel normalization. Patient-derived xenograft tumours growing in immunodeficient mice exhibited enhanced hypoxia compared to the original tumours in immunocompetent humans, and hypoxia was reduced by adoptive TH1 transfer. Our findings elucidate an unexpected role of TH1 cells in vasculature and immune reprogramming. TH1 cells may be a marker and a determinant of both immune checkpoint blockade and anti-angiogenesis efficacy.

  8. Seismic and aseismic deformations and impact on reservoir permeability: The case of EGS stimulation at The Geysers, California, USA

    DOE PAGES

    Jeanne, Pierre; Rutqvist, Jonny; Rinaldi, Antonio Pio; ...

    2015-10-27

    In this paper, we use the Seismicity-Based Reservoir Characterization approach to study the spatiotemporal dynamics of an injection-induced microseismic cloud, monitored during the stimulation of an enhanced geothermal system associated with the Northwest Geysers Enhanced Geothermal System (EGS) Demonstration project (California). We identified the development of a seismically quiet domain around the injection well, surrounded by a seismically active domain. We then compare these observations with the results of 3-D thermo-hydro-mechanical simulations of the EGS, which account for changes in permeability as a function of the effective normal stress and the plastic strain. The results of our modeling show that the aseismic domain is caused both by the presence of the injected cold water and by thermal processes. These thermal processes cause a cooling-induced stress reduction, which prevents shear reactivation and favors fracture opening by reducing the effective normal stress and locally increasing the permeability. This process is accompanied by aseismic plastic shear strain. In the seismic domain, microseismicity is caused by the reactivation of preexisting fractures, resulting from an increase in injection-induced pore pressure. Our modeling indicates that in this domain, permeability evolves according to the effective normal stress acting on the shear zones, whereas shearing of preexisting fractures may have little impact on permeability. We attribute this lack of permeability gain to the fact that the initial permeabilities of these preexisting fractures are already high (up to 2 orders of magnitude higher than the host rock) and may already be fully dilated by past tectonic straining.

  9. Seismic and aseismic deformations and impact on reservoir permeability: The case of EGS stimulation at The Geysers, California, USA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeanne, Pierre; Rutqvist, Jonny; Rinaldi, Antonio Pio

    In this paper, we use the Seismicity-Based Reservoir Characterization approach to study the spatiotemporal dynamics of an injection-induced microseismic cloud, monitored during the stimulation of an enhanced geothermal system associated with the Northwest Geysers Enhanced Geothermal System (EGS) Demonstration project (California). We identified the development of a seismically quiet domain around the injection well, surrounded by a seismically active domain. We then compare these observations with the results of 3-D thermo-hydro-mechanical simulations of the EGS, which account for changes in permeability as a function of the effective normal stress and the plastic strain. The results of our modeling show that the aseismic domain is caused both by the presence of the injected cold water and by thermal processes. These thermal processes cause a cooling-induced stress reduction, which prevents shear reactivation and favors fracture opening by reducing the effective normal stress and locally increasing the permeability. This process is accompanied by aseismic plastic shear strain. In the seismic domain, microseismicity is caused by the reactivation of preexisting fractures, resulting from an increase in injection-induced pore pressure. Our modeling indicates that in this domain, permeability evolves according to the effective normal stress acting on the shear zones, whereas shearing of preexisting fractures may have little impact on permeability. We attribute this lack of permeability gain to the fact that the initial permeabilities of these preexisting fractures are already high (up to 2 orders of magnitude higher than the host rock) and may already be fully dilated by past tectonic straining.

  10. Modeling Error Distributions of Growth Curve Models through Bayesian Methods

    ERIC Educational Resources Information Center

    Zhang, Zhiyong

    2016-01-01

    Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems in blindly assuming normality, a general Bayesian framework is…

  11. [Effects of electroacupuncture on hippocampal nNOS expression in rats of post-traumatic stress disorder model].

    PubMed

    Hou, Liang-Qin; Liu, Song; Xiong, Ke-Ren

    2013-07-01

    To explore the mechanism of electroacupuncture (EA) in the treatment of post-traumatic stress disorder (PTSD). Thirty male Sprague-Dawley rats were randomly divided into a normal group, a model group and an electroacupuncture group. The single prolonged stress (SPS) method was used to set up the PTSD model in the latter two groups. After SPS stimulation, the EA group was treated with 2 Hz electroacupuncture at Baihui (GV 20) and Zusanli (ST 36) for 30 min, once a day for a week. Reverse transcription polymerase chain reaction (RT-PCR) and immunohistochemistry were used to detect the mRNA and protein expression of nNOS in the hippocampus of rats in each group. (1) The nNOS mRNA expression in the hippocampus in the model group was higher than that in the normal group (P < 0.05), but the expression in the EA group was significantly lower than that in the model group (P < 0.05). (2) The nNOS protein expression in hippocampal CA1 and CA3 in the model group was higher than that in the normal group (P < 0.05), but after electroacupuncture treatment its expression in the EA group was significantly lower than that in the model group (P < 0.05). The nNOS protein expression in hippocampal CA2 did not differ among the three groups. The elevated nNOS expression in the hippocampus may be involved in the pathological process of PTSD. Electroacupuncture down-regulates hippocampal nNOS expression, which may be one mechanism of electroacupuncture treatment of PTSD.

  12. Estimation of value at risk and conditional value at risk using normal mixture distributions model

    NASA Astrophysics Data System (ADS)

    Kamaruzzaman, Zetty Ain; Isa, Zaidi

    2013-04-01

    The normal mixture distributions model has been successfully applied in financial time series analysis. In this paper, we estimate the return distribution, value at risk (VaR) and conditional value at risk (CVaR) for monthly and weekly rates of return of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI) from July 1990 until July 2010 using a two-component univariate normal mixture distributions model. First, we present the application of the normal mixture distributions model in empirical finance, where we fit it to the real data. Second, we present its application in risk analysis, where we apply the model to evaluate VaR and CVaR, with model validation for both risk measures. The empirical results provide evidence that the two-component normal mixture distributions model fits the data well and performs better in estimating VaR and CVaR, as it captures the stylized facts of non-normality and leptokurtosis in the returns distribution.
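    The estimation step above can be sketched as follows: fit a two-component Gaussian mixture to the returns and read VaR and CVaR off a large Monte Carlo sample from the fitted mixture. The returns below are synthetic placeholders for the FBMKLCI series and the confidence level is chosen arbitrarily.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(7)
        returns = np.concatenate([rng.normal(0.01, 0.03, 200),     # calm regime
                                  rng.normal(-0.02, 0.08, 40)])    # turbulent regime
        gm = GaussianMixture(n_components=2, random_state=0).fit(returns.reshape(-1, 1))

        alpha = 0.05
        sim = gm.sample(200_000)[0].ravel()
        var = -np.quantile(sim, alpha)                        # loss exceeded with prob. alpha
        cvar = -sim[sim <= np.quantile(sim, alpha)].mean()    # expected loss beyond VaR
        print(f"VaR({alpha:.0%}) = {var:.4f}, CVaR({alpha:.0%}) = {cvar:.4f}")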

  13. Word Recognition and Basic Cognitive Processes among Reading-Disabled and Normal Readers in Arabic.

    ERIC Educational Resources Information Center

    Abu-Rabia, Salim; Share, David; Mansour, Maysaloon Said

    2003-01-01

    Investigates word identification in Arabic and basic cognitive processes in reading-disabled (RD) and normal level readers of the same chronological age, and in younger normal readers at the same reading level. Indicates significant deficiencies in morphology, working memory, and syntactic and visual processing, with the most severe deficiencies…

  14. Wavelet-based multiscale performance analysis: An approach to assess and improve hydrological models

    NASA Astrophysics Data System (ADS)

    Rathinasamy, Maheswaran; Khosa, Rakesh; Adamowski, Jan; ch, Sudheer; Partheepan, G.; Anand, Jatin; Narsimlu, Boini

    2014-12-01

    The temporal dynamics of hydrological processes are spread across different time scales and, as such, the performance of hydrological models cannot be estimated reliably from global performance measures that assign a single number to the fit of a simulated time series to an observed reference series. Accordingly, it is important to analyze model performance at different time scales. Wavelets have been used extensively in the area of hydrological modeling for multiscale analysis, and have been shown to be very reliable and useful in understanding dynamics across time scales and as these evolve in time. In this paper, a wavelet-based multiscale performance measure for hydrological models is proposed and tested (i.e., Multiscale Nash-Sutcliffe Criteria and Multiscale Normalized Root Mean Square Error). The main advantage of this method is that it provides a quantitative measure of model performance across different time scales. In the proposed approach, model and observed time series are decomposed using the Discrete Wavelet Transform (known as the à trous wavelet transform), and performance measures of the model are obtained at each time scale. The applicability of the proposed method was explored using various case studies-both real as well as synthetic. The synthetic case studies included various kinds of errors (e.g., timing error, under and over prediction of high and low flows) in outputs from a hydrologic model. The real time case studies investigated in this study included simulation results of both the process-based Soil Water Assessment Tool (SWAT) model, as well as statistical models, namely the Coupled Wavelet-Volterra (WVC), Artificial Neural Network (ANN), and Auto Regressive Moving Average (ARMA) methods. For the SWAT model, data from Wainganga and Sind Basin (India) were used, while for the Wavelet Volterra, ANN and ARMA models, data from the Cauvery River Basin (India) and Fraser River (Canada) were used. The study also explored the effect of the choice of the wavelets in multiscale model evaluation. It was found that the proposed wavelet-based performance measures, namely the MNSC (Multiscale Nash-Sutcliffe Criteria) and MNRMSE (Multiscale Normalized Root Mean Square Error), are a more reliable measure than traditional performance measures such as the Nash-Sutcliffe Criteria (NSC), Root Mean Square Error (RMSE), and Normalized Root Mean Square Error (NRMSE). Further, the proposed methodology can be used to: i) compare different hydrological models (both physical and statistical models), and ii) help in model calibration.
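    The core of the proposed measure, computing a goodness-of-fit statistic scale by scale after an undecimated (à trous) wavelet decomposition of both series, can be sketched as below. This is a simplified illustration of the idea using the detail coefficients only, not the authors' exact MNSC/MNRMSE formulation, and the wavelet, level and test series are arbitrary choices.

        import numpy as np
        import pywt

        def multiscale_nse(obs, sim, wavelet="haar", level=3):
            """Nash-Sutcliffe efficiency of the detail coefficients at each scale
            of a stationary wavelet transform of the observed and simulated series."""
            n = (len(obs) // 2 ** level) * 2 ** level      # swt needs a multiple of 2^level
            obs, sim = np.asarray(obs[:n], float), np.asarray(sim[:n], float)
            co = pywt.swt(obs, wavelet, level=level)
            cs = pywt.swt(sim, wavelet, level=level)
            nse = {}
            for i, ((_, d_obs), (_, d_sim)) in enumerate(zip(co, cs)):
                # i indexes the (approx, detail) pairs in the order returned by pywt.swt
                nse[i] = 1.0 - np.sum((d_obs - d_sim) ** 2) / np.sum((d_obs - d_obs.mean()) ** 2)
            return nse

        t = np.arange(512)
        obs = np.sin(2 * np.pi * t / 64) + 0.1 * np.random.default_rng(0).normal(size=512)
        sim = np.sin(2 * np.pi * (t - 2) / 64)             # simulation with a timing error
        print(multiscale_nse(obs, sim))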

  15. Rupture Process During the Mw 8.1 2017 Chiapas Mexico Earthquake: Shallow Intraplate Normal Faulting by Slab Bending

    NASA Astrophysics Data System (ADS)

    Okuwaki, R.; Yagi, Y.

    2017-12-01

    A seismic source model for the Mw 8.1 2017 Chiapas, Mexico, earthquake was constructed by kinematic waveform inversion using globally observed teleseismic waveforms, suggesting that the earthquake was a normal-faulting event on a steeply dipping plane, with the major slip concentrated around a relatively shallow depth of 28 km. The modeled rupture evolution showed unilateral, downdip propagation northwestward from the hypocenter, and the downdip width of the main rupture was restricted to less than 30 km below the slab interface, suggesting that the downdip extensional stresses due to the slab bending were the primary cause of the earthquake. The rupture front abruptly decelerated at the northwestern end of the main rupture where it intersected the subducting Tehuantepec Fracture Zone, suggesting that the fracture zone may have inhibited further rupture propagation.

  16. Stochastic Modeling Approach to the Incubation Time of Prionic Diseases

    NASA Astrophysics Data System (ADS)

    Ferreira, A. S.; da Silva, M. A.; Cressoni, J. C.

    2003-05-01

    Transmissible spongiform encephalopathies are neurodegenerative diseases for which prions are the attributed pathogenic agents. A widely accepted theory assumes that prion replication is due to a direct interaction between the pathologic (PrPSc) form and the host-encoded (PrPC) conformation, in a kind of autocatalytic process. Here we show that the overall features of the incubation time of prion diseases are readily obtained if the prion reaction is described by a simple mean-field model. An analytical expression for the incubation time distribution then follows by associating the rate constant with a log-normally distributed stochastic variable. The incubation time distribution is then also shown to be log-normal and fits the observed BSE (bovine spongiform encephalopathy) data very well. Computer simulation results also yield the correct BSE incubation time distribution at low PrPC densities.
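
    A toy illustration of the final step, not the paper's mean-field equations, is easy to write down: in an autocatalytic growth model the incubation time scales as the reciprocal of the rate constant, so a log-normal rate constant yields a log-normal incubation time. The thresholds and parameters below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy autocatalytic growth: dP/dt = k * P, so the time to reach a detection
# threshold from an initial dose is t_inc = ln(P_thresh / P0) / k.
P0, P_thresh = 1e-6, 1.0
k = rng.lognormal(mean=np.log(0.5), sigma=0.4, size=100_000)  # log-normal rate constants
t_inc = np.log(P_thresh / P0) / k

# Because 1/k is itself log-normal, the incubation times are log-normal too.
log_t = np.log(t_inc)
print(f"mean incubation = {t_inc.mean():.1f}, "
      f"log-space mean/std = {log_t.mean():.2f} / {log_t.std():.2f}")
```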

  17. Fractional calculus and morphogen gradient formation

    NASA Astrophysics Data System (ADS)

    Yuste, Santos Bravo; Abad, Enrique; Lindenberg, Katja

    2012-12-01

    Some microscopic models for reactive systems where the reaction kinetics is limited by subdiffusion are described by means of reaction-subdiffusion equations where fractional derivatives play a key role. In particular, we consider subdiffusive particles described by means of a Continuous Time Random Walk (CTRW) model subject to a linear (first-order) death process. The resulting fractional equation is employed to study the developmental biology key problem of morphogen gradient formation for the case in which the morphogens are subdiffusive. If the morphogen degradation rate (reactivity) is constant, we find exponentially decreasing stationary concentration profiles, which are similar to the profiles found when the morphogens diffuse normally. However, for the case in which the degradation rate decays exponentially with the distance to the morphogen source, we find that the morphogen profiles are qualitatively different from the profiles obtained when the morphogens diffuse normally.

  18. Simulating SLI: General Cognitive Processing Stressors Can Produce a Specific Linguistic Profile.

    ERIC Educational Resources Information Center

    Hayiou-Thomas, Marianna E.; Bishop, Dorothy V.M.; Plunkett, Kim

    2004-01-01

    This study attempted to model specific language impairment (SLI) in a group of 6-year-old children with typically developing language by introducing cognitive stress factors into a grammaticality judgment task. At normal speech rate, all children had near-perfect performance. When the speech signal was compressed to 50% of its original rate, to…

  19. The Efficacy of a Low-Level Program Visualization Tool for Teaching Programming Concepts to Novice C Programmers.

    ERIC Educational Resources Information Center

    Smith, Philip A.; Webb, Geoffrey I.

    2000-01-01

    Describes "Glass-box Interpreter" a low-level program visualization tool called Bradman designed to provide a conceptual model of C program execution for novice programmers and makes visible aspects of the programming process normally hidden from the user. Presents an experiment that tests the efficacy of Bradman, and provides…

  20. Does viotin activate violin more than viocin? On the use of visual cues during visual-word recognition.

    PubMed

    Perea, Manuel; Panadero, Victoria

    2014-01-01

    The vast majority of neural and computational models of visual-word recognition assume that lexical access is achieved via the activation of abstract letter identities. Thus, a word's overall shape should play no role in this process. In the present lexical decision experiment, we compared word-like pseudowords like viotín (same shape as its base word: violín) vs. viocín (different shape) in mature (college-aged skilled readers), immature (normally reading children), and immature/impaired (young readers with developmental dyslexia) word-recognition systems. Results revealed similar response times (and error rates) to consistent-shape and inconsistent-shape pseudowords for both adult skilled readers and normally reading children - this is consistent with current models of visual-word recognition. In contrast, young readers with developmental dyslexia made significantly more errors to viotín-like pseudowords than to viocín-like pseudowords. Thus, unlike normally reading children, young readers with developmental dyslexia are sensitive to a word's visual cues, presumably because of poor letter representations.

  1. Circularly-symmetric complex normal ratio distribution for scalar transmissibility functions. Part I: Fundamentals

    NASA Astrophysics Data System (ADS)

    Yan, Wang-Ji; Ren, Wei-Xin

    2016-12-01

    Recent advances in signal processing and structural dynamics have spurred the adoption of transmissibility functions in academia and industry alike. Due to the inherent randomness of measurement and variability of environmental conditions, uncertainty impacts its applications. This study is focused on statistical inference for raw scalar transmissibility functions modeled as complex ratio random variables. The goal is achieved through companion papers. This paper (Part I) is dedicated to dealing with a formal mathematical proof. New theorems on multivariate circularly-symmetric complex normal ratio distribution are proved on the basis of principle of probabilistic transformation of continuous random vectors. The closed-form distributional formulas for multivariate ratios of correlated circularly-symmetric complex normal random variables are analytically derived. Afterwards, several properties are deduced as corollaries and lemmas to the new theorems. Monte Carlo simulation (MCS) is utilized to verify the accuracy of some representative cases. This work lays the mathematical groundwork to find probabilistic models for raw scalar transmissibility functions, which are to be expounded in detail in Part II of this study.
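
    A quick Monte Carlo sketch conveys the object under study, the ratio of two correlated circularly-symmetric complex normal random variables. The construction below is illustrative only and does not reproduce the closed-form distributions derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000

# Correlated circularly-symmetric complex normals X and Y (zero mean),
# built from a shared complex component so they are correlated.
z_common = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
z1 = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
z2 = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
X = 0.8 * z_common + 0.6 * z1
Y = 0.5 * z_common + 0.9 * z2

# Scalar "transmissibility-like" complex ratio random variable.
T = X / Y

# Empirical summaries; the ratio density is heavy-tailed, so medians of the
# real and imaginary parts are more robust than means.
print("median(Re T) =", np.median(T.real), " median(Im T) =", np.median(T.imag))
print("P(|T| > 3) =", np.mean(np.abs(T) > 3))
```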

  2. Adaptive Value Normalization in the Prefrontal Cortex Is Reduced by Memory Load.

    PubMed

    Holper, L; Van Brussel, L D; Schmidt, L; Schulthess, S; Burke, C J; Louie, K; Seifritz, E; Tobler, P N

    2017-01-01

    Adaptation facilitates neural representation of a wide range of diverse inputs, including reward values. Adaptive value coding typically relies on contextual information either obtained from the environment or retrieved from and maintained in memory. However, it is unknown whether having to retrieve and maintain context information modulates the brain's capacity for value adaptation. To address this issue, we measured hemodynamic responses of the prefrontal cortex (PFC) in two studies on risky decision-making. In each trial, healthy human subjects chose between a risky and a safe alternative; half of the participants had to remember the risky alternatives, whereas for the other half they were presented visually. The value of safe alternatives varied across trials. PFC responses adapted to contextual risk information, with steeper coding of safe alternative value in lower-risk contexts. Importantly, this adaptation depended on working memory load, such that response functions relating PFC activity to safe values were steeper with presented versus remembered risk. An independent second study replicated the findings of the first study and showed that similar slope reductions also arose when memory maintenance demands were increased with a secondary working memory task. Formal model comparison showed that a divisive normalization model fitted effects of both risk context and working memory demands on PFC activity better than alternative models of value adaptation, and revealed that reduced suppression of background activity was the critical parameter impairing normalization with increased memory maintenance demand. Our findings suggest that mnemonic processes can constrain normalization of neural value representations.
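
    For context, a generic textbook-style divisive normalization of a value signal can be sketched as follows; the functional form and parameter names are illustrative assumptions, not the specific model fitted to the PFC data in the study.

```python
import numpy as np

def normalized_value(v_safe, context_values, sigma=1.0, w_bg=1.0, background=1.0):
    """Generic divisive normalization: the represented value of the safe option is
    divided by a semi-saturation constant, a weighted background term, and the
    summed values of the contextual (risky) alternatives."""
    return v_safe / (sigma + w_bg * background + np.sum(context_values))

# Steeper coding of safe-option value in a low-risk context than in a high-risk one.
safe_values = np.linspace(1, 10, 5)
low_risk_ctx, high_risk_ctx = [2.0], [8.0]
for v in safe_values:
    print(f"v={v:4.1f}  low-risk resp={normalized_value(v, low_risk_ctx):.3f}  "
          f"high-risk resp={normalized_value(v, high_risk_ctx):.3f}")
```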

  3. A method for named entity normalization in biomedical articles: application to diseases and plants.

    PubMed

    Cho, Hyejin; Choi, Wonjun; Lee, Hyunju

    2017-10-13

    In biomedical articles, a named entity recognition (NER) technique that identifies entity names from texts is an important element for extracting biological knowledge from articles. After NER is applied to articles, the next step is to normalize the identified names into standard concepts (i.e., disease names are mapped to the National Library of Medicine's Medical Subject Headings disease terms). In biomedical articles, many entity normalization methods rely on domain-specific dictionaries for resolving synonyms and abbreviations. However, the dictionaries are not comprehensive except for some entities such as genes. In recent years, biomedical articles have accumulated rapidly, and neural network-based algorithms that incorporate a large amount of unlabeled data have shown considerable success in several natural language processing problems. In this study, we propose an approach for normalizing biological entities, such as disease names and plant names, by using word embeddings to represent semantic spaces. For diseases, training data from the National Center for Biotechnology Information (NCBI) disease corpus and unlabeled data from PubMed abstracts were used to construct word representations. For plants, a training corpus that we manually constructed and unlabeled PubMed abstracts were used to represent word vectors. We showed that the proposed approach performed better than the use of only the training corpus or only the unlabeled data and showed that the normalization accuracy was improved by using our model even when the dictionaries were not comprehensive. We obtained F-scores of 0.808 and 0.690 for normalizing the NCBI disease corpus and manually constructed plant corpus, respectively. We further evaluated our approach using a data set in the disease normalization task of the BioCreative V challenge. When only the disease corpus was used as a dictionary, our approach significantly outperformed the best system of the task. The proposed approach shows robust performance for normalizing biological entities. The manually constructed plant corpus and the proposed model are available at http://gcancer.org/plant and http://gcancer.org/normalization , respectively.
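
    The core idea, mapping a recognized mention to the dictionary concept whose embedding lies closest, can be sketched as below. The word vectors, dictionary entries, and averaging scheme are placeholders; the actual system trains word representations on the NCBI disease corpus and unlabeled PubMed abstracts.

```python
import numpy as np

def embed(phrase, word_vectors, dim=50):
    """Average the word vectors of a phrase (zero vector for unknown words)."""
    vecs = [word_vectors.get(w, np.zeros(dim)) for w in phrase.lower().split()]
    return np.mean(vecs, axis=0)

def normalize_mention(mention, concept_dict, word_vectors):
    """Return the dictionary concept whose embedded name is most similar to the mention."""
    m = embed(mention, word_vectors)
    best_id, best_sim = None, -1.0
    for concept_id, name in concept_dict.items():
        c = embed(name, word_vectors)
        denom = np.linalg.norm(m) * np.linalg.norm(c)
        sim = float(m @ c / denom) if denom > 0 else 0.0
        if sim > best_sim:
            best_id, best_sim = concept_id, sim
    return best_id, best_sim

# Toy inputs (hypothetical vectors and MeSH-style IDs).
rng = np.random.default_rng(0)
word_vectors = {w: rng.normal(size=50) for w in
                ["breast", "cancer", "carcinoma", "neoplasms", "mammary"]}
concept_dict = {"D001943": "breast neoplasms", "D002277": "carcinoma"}
print(normalize_mention("breast cancer", concept_dict, word_vectors))
```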

  4. Integration of gene normalization stages and co-reference resolution using a Markov logic network.

    PubMed

    Dai, Hong-Jie; Chang, Yen-Ching; Tsai, Richard Tzong-Han; Hsu, Wen-Lian

    2011-09-15

    Gene normalization (GN) is the task of normalizing a textual gene mention to a unique gene database ID. Traditional top-performing GN systems usually need to consider several constraints when making decisions in the normalization process, such as filtering out false positives or disambiguating an ambiguous gene mention, to improve system performance. However, these constraints are usually executed in several separate stages that cannot use each other's input/output interactively. In this article, we propose a novel approach that employs a Markov logic network (MLN) to model the constraints used in the GN task. Firstly, we show how various constraints can be formulated and combined in an MLN. Secondly, we are the first to apply two main concepts of co-reference resolution (discourse salience in centering theory, and transitivity) to GN models. Furthermore, to make our results more relevant to developers of information extraction applications, we adopt the instance-based precision/recall/F-measure (PRF) in addition to the article-wide PRF to assess system performance. Experimental results show that our system outperforms baseline and state-of-the-art systems under two evaluation schemes. Through further analysis, we have found several unexplored challenges in the GN task. Contact: hongjie@iis.sinica.edu.tw. Supplementary data are available at Bioinformatics online.
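
    For reference, an instance-based precision/recall/F-measure scores each gene-mention instance rather than pooling IDs per article. A minimal sketch, with hypothetical (doc, span, ID) triples and not necessarily the exact definition used in the article, is shown below.

```python
def instance_prf(gold, predicted):
    """Instance-level precision/recall/F1 over (doc_id, mention_span, gene_id) triples."""
    gold, predicted = set(gold), set(predicted)
    tp = len(gold & predicted)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

gold = [("PMID1", (10, 15), "GeneID:7157"), ("PMID1", (40, 45), "GeneID:672")]
pred = [("PMID1", (10, 15), "GeneID:7157"), ("PMID1", (60, 64), "GeneID:1956")]
print(instance_prf(gold, pred))  # -> (0.5, 0.5, 0.5)
```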

  5. Three-dimensional finite analysis of acetabular contact pressure and contact area during normal walking.

    PubMed

    Wang, Guangye; Huang, Wenjun; Song, Qi; Liang, Jinfeng

    2017-11-01

    This study aims to analyze the contact areas and pressure distributions between the femoral head and the acetabular mortar during normal walking using a three-dimensional finite element model (3D-FEM). Computed tomography (CT) scanning and a computer image processing system were used to establish the 3D-FEM. The acetabular mortar model was used to simulate the pressures during 32 consecutive normal walking phases, and the contact areas at the different phases were calculated. The distribution of the pressure peak values during the 32 consecutive walking phases was bimodal, reaching its maximum (4.2 MPa) at the initial phase, where the contact area was significantly higher than at the stepping phase. The sites that always kept contact were concentrated on the acetabular top and leaned inwards, while the anterior and posterior acetabular horns showed no pressure concentration. The pressure distributions of the acetabular cartilage at different phases were significantly different: the zone of increased pressure at the support phase was distributed over the acetabular top area, while that at the stepping phase was distributed over the inside of the acetabular cartilage. The zones of increased contact pressure and the distributions of acetabular contact areas are clinically significant and could indicate inductive factors of acetabular osteoarthritis. Copyright © 2016. Published by Elsevier Taiwan.

  6. Models and molecular approaches to assessing the effects of the microgravity environment on vertebrate development

    NASA Technical Reports Server (NTRS)

    Wolgemuth, D. J.; Murashov, A. K.

    1995-01-01

    The extent to which gravity, and especially the lack thereof, can affect normal development in higher organisms is poorly understood. Underlying this question is the assumption that normal development depends on the embryo's ability to maintain a programmed temporal and spatial coordination of morphogenetic events. There are several reports documenting the apparently normal development of several vertebrate species, including mammals, under conditions of exposure to space flight during various periods of the development process. Evidence to the contrary also exists, and it is therefore likely that some alterations in morphology do occur in a microgravity environment; although subsequent development may appear overtly normal, more subtle abnormalities may result. In all studies, the evaluation is restricted by the small number of specimens that can be examined and by the relatively insensitive techniques available for assessing potentially subtle effects. In the present discussion, we summarize some observations of mammalian development made in microgravity and consider which stages might be expected to be differentially sensitive to altered gravity conditions. While we emphasize mammalian development, we discuss the suitability of another model system for examining such effects in a cross-species context. Furthermore, we consider recent advances in our understanding of the molecular genetic program regulating embryogenesis, elements of which could serve as markers for assessing perturbations of development.

  7. A Method to Measure and Estimate Normalized Contrast in Infrared Flash Thermography

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M.

    2016-01-01

    The paper presents further development of normalized contrast processing used in the flash infrared thermography method. Methods of computing normalized image (pixel intensity) contrast and normalized temperature contrast are provided, along with methods of converting image contrast to temperature contrast and vice versa. Normalized contrast processing in flash thermography is useful in the quantitative analysis of flash thermography data, including flaw characterization and comparison of experimental results with simulation. Computation of normalized temperature contrast involves a flash thermography data acquisition set-up with a high-reflectivity foil and a high-emissivity tape, such that the foil, tape, and test object are imaged simultaneously. Methods of assessing other quantitative parameters, such as object emissivity, afterglow heat flux, reflection temperature change, and surface temperature during flash thermography, are also provided. Temperature imaging and normalized temperature contrast processing offer advantages over normalized image contrast processing by reducing the effect of reflected energy in images and measurements, therefore providing better quantitative data. Examples of incorporating afterglow heat flux and reflection temperature evolution in flash thermography simulations are also discussed.
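
    One common way to form a normalized contrast from flash-thermography data is to reference each pixel's post-flash rise to that of a known defect-free (sound) region. The sketch below uses that generic definition with made-up array names; it is not necessarily the exact contrast definition developed in the paper.

```python
import numpy as np

def normalized_contrast(frames, pre_flash, sound_mask):
    """Normalized contrast per pixel and frame.

    frames     : (n_frames, H, W) surface temperature (or intensity) after the flash
    pre_flash  : (H, W) image acquired before the flash
    sound_mask : (H, W) boolean mask of a defect-free reference region
    """
    rise = frames - pre_flash                                # temperature rise per pixel
    sound_rise = rise[:, sound_mask].mean(axis=1)            # mean rise of the sound region
    return rise / sound_rise[:, None, None] - 1.0            # 0 where a pixel behaves like the sound area

# Tiny synthetic example: a flaw pixel retains extra heat relative to its surroundings.
H, W, n = 8, 8, 5
pre = np.full((H, W), 20.0)
frames = 20.0 + np.linspace(5, 1, n)[:, None, None] * np.ones((n, H, W))
frames[:, 3, 3] += 0.8                                       # simulated flaw
mask = np.ones((H, W), bool)
mask[3, 3] = False
print(normalized_contrast(frames, pre, mask)[:, 3, 3])       # positive contrast over the flaw
```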

  8. Animal models to study microRNA function

    PubMed Central

    Pal, Arpita S.; Kasinski, Andrea L.

    2018-01-01

    The discovery of the microRNAs lin-4 and let-7 as critical mediators of normal development in Caenorhabditis elegans, and their conservation throughout evolution, has spearheaded research towards identifying novel roles of microRNAs in other cellular processes. To accurately elucidate these fundamental functions, especially in the context of an intact organism, various microRNA transgenic models have been generated and evaluated. Transgenic C. elegans (worms), Drosophila melanogaster (flies), Danio rerio (zebrafish), and Mus musculus (mouse) have contributed immensely towards uncovering the roles of multiple microRNAs in cellular processes such as proliferation, differentiation, and apoptosis, pathways that are severely altered in human diseases such as cancer. The simple model organisms, C. elegans, D. melanogaster and D. rerio, do not develop cancers, but have proved to be convenient systems in microRNA research, especially in characterizing the microRNA biogenesis machinery, which is often dysregulated during human tumorigenesis. The microRNA-dependent events delineated via these simple in vivo systems have been further verified in vitro, and in more complex models of cancers, such as M. musculus. The focus of this review is to provide an overview of the important contributions made in the microRNA field using model organisms. The simple model systems provided the basis for the importance of microRNAs in normal cellular physiology, while the more complex animal systems provided evidence for the role of microRNA dysregulation in cancers. Highlights include an overview of the various strategies used to generate transgenic organisms and a review of the use of transgenic mice for evaluating pre-clinical efficacy of microRNA-based cancer therapeutics. PMID:28882225

  9. Concepts, Control, and Context: A Connectionist Account of Normal and Disordered Semantic Cognition

    PubMed Central

    2018-01-01

    Semantic cognition requires conceptual representations shaped by verbal and nonverbal experience and executive control processes that regulate activation of knowledge to meet current situational demands. A complete model must also account for the representation of concrete and abstract words, of taxonomic and associative relationships, and for the role of context in shaping meaning. We present the first major attempt to assimilate all of these elements within a unified, implemented computational framework. Our model combines a hub-and-spoke architecture with a buffer that allows its state to be influenced by prior context. This hybrid structure integrates the view, from cognitive neuroscience, that concepts are grounded in sensory-motor representation with the view, from computational linguistics, that knowledge is shaped by patterns of lexical co-occurrence. The model successfully codes knowledge for abstract and concrete words, associative and taxonomic relationships, and the multiple meanings of homonyms, within a single representational space. Knowledge of abstract words is acquired through (a) their patterns of co-occurrence with other words and (b) acquired embodiment, whereby they become indirectly associated with the perceptual features of co-occurring concrete words. The model accounts for executive influences on semantics by including a controlled retrieval mechanism that provides top-down input to amplify weak semantic relationships. The representational and control elements of the model can be damaged independently, and the consequences of such damage closely replicate effects seen in neuropsychological patients with loss of semantic representation versus control processes. Thus, the model provides a wide-ranging and neurally plausible account of normal and impaired semantic cognition. PMID:29733663

  10. Cognitive components of a mathematical processing network in 9-year-old children.

    PubMed

    Szűcs, Dénes; Devine, Amy; Soltesz, Fruzsina; Nobes, Alison; Gabriel, Florence

    2014-07-01

    We determined how various cognitive abilities, including several measures of a proposed domain-specific number sense, relate to mathematical competence in nearly 100 9-year-old children with normal reading skill. Results are consistent with an extended number processing network and suggest that important processing nodes of this network are phonological processing, verbal knowledge, visuo-spatial short-term and working memory, spatial ability and general executive functioning. The model was highly specific to predicting arithmetic performance. There were no strong relations between mathematical achievement and verbal short-term and working memory, sustained attention, response inhibition, finger knowledge and symbolic number comparison performance. Non-verbal intelligence measures were also non-significant predictors when added to our model. Number sense variables were non-significant predictors in the model and they were also non-significant predictors when entered into regression analysis with only a single visuo-spatial WM measure. Number sense variables were predicted by sustained attention. Results support a network theory of mathematical competence in primary school children and falsify the importance of a proposed modular 'number sense'. We suggest an 'executive memory function centric' model of mathematical processing. Mapping a complex processing network requires that studies consider the complex predictor space of mathematics rather than just focusing on a single or a few explanatory factors.

  11. Cognitive components of a mathematical processing network in 9-year-old children

    PubMed Central

    Szűcs, Dénes; Devine, Amy; Soltesz, Fruzsina; Nobes, Alison; Gabriel, Florence

    2014-01-01

    We determined how various cognitive abilities, including several measures of a proposed domain-specific number sense, relate to mathematical competence in nearly 100 9-year-old children with normal reading skill. Results are consistent with an extended number processing network and suggest that important processing nodes of this network are phonological processing, verbal knowledge, visuo-spatial short-term and working memory, spatial ability and general executive functioning. The model was highly specific to predicting arithmetic performance. There were no strong relations between mathematical achievement and verbal short-term and working memory, sustained attention, response inhibition, finger knowledge and symbolic number comparison performance. Non-verbal intelligence measures were also non-significant predictors when added to our model. Number sense variables were non-significant predictors in the model and they were also non-significant predictors when entered into regression analysis with only a single visuo-spatial WM measure. Number sense variables were predicted by sustained attention. Results support a network theory of mathematical competence in primary school children and falsify the importance of a proposed modular ‘number sense’. We suggest an ‘executive memory function centric’ model of mathematical processing. Mapping a complex processing network requires that studies consider the complex predictor space of mathematics rather than just focusing on a single or a few explanatory factors. PMID:25089322

  12. The Effect of Normalization in Violence Video Classification Performance

    NASA Astrophysics Data System (ADS)

    Ali, Ashikin; Senan, Norhalina

    2017-08-01

    Data pre-processing is an important part of data mining, and normalization is a pre-processing stage for almost any problem, including video classification. Video classification is challenging because of heterogeneous content, large variations in video quality, and the complex semantic meanings of the concepts involved. A thorough pre-processing stage that includes normalization therefore helps the robustness of classification performance. Normalization scales all numeric variables into a fixed range so that they are more meaningful for the later phases of the data mining process. This paper examines the effect of two normalization techniques, min-max normalization and Z-score, on the classification rate of violence video classification using a multi-layer perceptron (MLP) classifier. With min-max normalization to the range [0,1] the accuracy is almost 98%, with min-max normalization to the range [-1,1] the accuracy is 59%, and with Z-score the accuracy is 50%.
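
    The two scalings compared in that study are standard; a minimal sketch with an illustrative feature matrix follows (the MLP classification step itself is omitted).

```python
import numpy as np

def min_max(X, lo=0.0, hi=1.0):
    """Rescale each column of X linearly into the interval [lo, hi]."""
    x_min, x_max = X.min(axis=0), X.max(axis=0)
    return lo + (X - x_min) * (hi - lo) / (x_max - x_min)

def z_score(X):
    """Standardize each column of X to zero mean and unit variance."""
    return (X - X.mean(axis=0)) / X.std(axis=0)

# Illustrative feature matrix (rows = video clips, columns = extracted features).
X = np.array([[120.0, 0.30, 4500.0],
              [ 80.0, 0.55, 3000.0],
              [200.0, 0.10, 9000.0]])
print(min_max(X, 0, 1))    # features in [0, 1]
print(min_max(X, -1, 1))   # features in [-1, 1]
print(z_score(X))          # standardized features
```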

  13. Regional magnetic resonance imaging measures for multivariate analysis in Alzheimer's disease and mild cognitive impairment.

    PubMed

    Westman, Eric; Aguilar, Carlos; Muehlboeck, J-Sebastian; Simmons, Andrew

    2013-01-01

    Automated structural magnetic resonance imaging (MRI) processing pipelines are gaining popularity for Alzheimer's disease (AD) research. They generate regional volumes, cortical thickness measures, and other measures, which can be used as input for multivariate analysis. It is not clear which combination of measures and which normalization approach are most useful for AD classification and for predicting mild cognitive impairment (MCI) conversion. The current study includes MRI scans from 699 subjects [AD, MCI and controls (CTL)] from the Alzheimer's Disease Neuroimaging Initiative (ADNI). The FreeSurfer pipeline was used to generate regional volume, cortical thickness, gray matter volume, surface area, mean curvature, Gaussian curvature, folding index, and curvature index measures. A total of 259 variables were used for orthogonal partial least squares to latent structures (OPLS) multivariate analysis. Normalisation approaches were explored and the optimal combination of measures determined. Results indicate that cortical thickness measures should not be normalized, while volumes should probably be normalized by intracranial volume (ICV). Combining regional cortical thickness measures (not normalized) with cortical and subcortical volumes (normalized by ICV) using OPLS gave a prediction accuracy of 91.5% when distinguishing AD from CTL. This model prospectively predicted future decline from MCI to AD, with 75.9% of converters correctly classified. Normalization strategy did not have a significant effect on the accuracies of multivariate models containing multiple MRI measures for this large dataset. The appropriate choice of input for multivariate analysis in AD and MCI is of great importance. The results support the use of un-normalised cortical thickness measures and volumes normalised by ICV.
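
    The recommended input combination, raw cortical thickness plus ICV-normalized volumes, is straightforward to assemble. The sketch below uses hypothetical FreeSurfer-style column names and is only meant to illustrate the normalization choice.

```python
import pandas as pd

def build_features(df):
    """Combine un-normalized cortical thickness with volumes normalized by ICV.

    df is assumed to hold FreeSurfer-style outputs: columns ending in '_thickness'
    (mm), columns ending in '_volume' (mm^3), and an 'ICV' column (mm^3).
    """
    thickness = df.filter(like="_thickness")                       # left as-is
    volumes = df.filter(like="_volume").div(df["ICV"], axis=0)      # normalized by ICV
    return pd.concat([thickness, volumes.add_suffix("_icv")], axis=1)

subjects = pd.DataFrame({
    "hippocampus_volume": [3900.0, 3100.0],
    "entorhinal_thickness": [3.4, 2.8],
    "ICV": [1.45e6, 1.60e6],
})
print(build_features(subjects))
```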

  14. Intestinal absorption differences of major bioactive compounds of Gegenqinlian Decoction between normal and bacterial diarrheal mini-pigs in vitro and in situ.

    PubMed

    Ling, Xiao; Xiang, Yuqiang; Chen, Feilong; Tang, Qingfa; Zhang, Wei; Tan, Xiaomei

    2018-04-15

    Intestinal condition plays an important role in drug absorption and metabolism; thus, the effects of gastrointestinal diseases such as infectious diarrhea on intestinal function are crucial for drug absorption. However, due to the lack of suitable models, differences in drug absorption and metabolism between diarrheal and normal intestines are rarely reported. In this study, an Escherichia coli diarrhea model was induced in mini-pigs, and single-pass intestinal perfusion and intestinal mucosal enzyme metabolism experiments were conducted. A simple and rapid ultrahigh performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) method was developed to determine the concentrations of 9 major components of Gegen Qinlian decoction (GQD). Samples were pretreated by protein precipitation with methanol, and naringin and prednisolone were used as internal standards. The validated method demonstrated adequate sensitivity, selectivity, and process efficiency for the bioanalysis of the 9 compounds. Intestinal perfusion showed that puerarin, daidzein, daidzin, baicalin, and berberine were absorbed faster in the diarrheal jejunum than in normal intestines (p < 0.05). However, puerarin, daidzin, and liquiritin were metabolized more slowly in the diarrheal intestine after incubation compared with the normal group (p < 0.05). The concentrations of daidzein in both perfusion and metabolism, and of wogonin in metabolism, were significantly increased (p < 0.05). In conclusion, the absorption and metabolism of GQD were significantly different between diarrheal and normal intestines, which suggests that the bacterial diarrhea mini-pig model can be used in intestinal absorption studies and is worth applying to intestinal absorption studies of other anti-diarrheal drugs. Copyright © 2018 Elsevier B.V. All rights reserved.

  15. Shocks and metallicity gradients in normal star-forming galaxies

    NASA Astrophysics Data System (ADS)

    Ho, I.-Ting

    Gas flow is one of the most fundamental processes driving galaxy evolution. This thesis explores gas flows in local galaxies by studying metallicity gradients and galactic-scale outflows in normal star-forming galaxies. This is made possible by new integral field spectroscopy data that provide simultaneously spatial and spectral information of galaxies. First, I measure metallicity gradients in isolated disk galaxies and show that their metallicity gradients are remarkably simple and universal. When the metallicity gradients are normalized to galaxy sizes, all the 49 galaxies studied have virtually the same metallicity gradient. I model the common metallicity gradient using a simple chemical evolution model to understand its origin. The common metallicity gradient is a direct result of the coevolution of gas and stellar disk while galactic disks build up their masses from inside-out. Tight constraints on the mass outflow rates and inflow rates can be placed by the chemical evolution model. Second, I investigate galactic winds in normal star-forming galaxies using data from an integral field spectroscopy survey. I demonstrate how to search for galactic winds by probing emission line ratios, shocks, and gas kinematics. Galactic winds are found to be common even in normal star-forming galaxies that were not expected to host winds. By comparing galaxies with and without hosting winds, I show that galaxies with high star formation rate surface densities and bursty star formation histories are more likely to drive large-scale galactic winds. Finally, lzifu, a toolkit for fitting multiple emission lines simultaneously in integral field spectroscopy data, is developed in this thesis. I describe in detail the structure of the toolkit and demonstrate the capabilities of lzifu.

  16. Graves' disease: a host defense mechanism gone awry.

    PubMed

    Kohn, L D; Napolitano, G; Singer, D S; Molteni, M; Scorza, R; Shimojo, N; Kohno, Y; Mozes, E; Nakazato, M; Ulianich, L; Chung, H K; Matoba, H; Saunier, B; Suzuki, K; Schuppert, F; Saji, M

    2000-01-01

    In this report we summarize evidence to support a model for the development of Graves' disease. The model suggests that Graves' disease is initiated by an insult to the thyrocyte in an individual with a normal immune system. The insult, infectious or otherwise, causes double strand DNA or RNA to enter the cytoplasm of the cell. This causes abnormal expression of major histocompatibility (MHC) class I as a dominant feature, but also aberrant expression of MHC class II, as well as changes in genes or gene products needed for the thyrocyte to become an antigen presenting cell (APC). These include increased expression of proteasome processing proteins (LMP2), transporters of antigen peptides (TAP), invariant chain (Ii), HLA-DM, and the co-stimulatory molecule, B7, as well as STAT and NF-kappaB activation. A critical factor in these changes is the loss of normal negative regulation of MHC class I, class II, and thyrotropin receptor (TSHR) gene expression, which is necessary to maintain self-tolerance during the normal changes in gene expression involved in hormonally-increased growth and function of the cell. Self-tolerance to the TSHR is maintained in normals because there is a population of CD8- cells which normally suppresses a population of CD4+ cells that can interact with the TSHR if thyrocytes become APCs. This is a host self-defense mechanism that we hypothesize leads to autoimmune disease in persons, for example, with a specific viral infection, a genetic predisposition, or even, possibly, a TSHR polymorphism. The model is suggested to be important to explain the development of other autoimmune diseases including systemic lupus or diabetes.

  17. A class of exact solutions for biomacromolecule diffusion-reaction in live cells.

    PubMed

    Sadegh Zadeh, Kouroush; Montas, Hubert J

    2010-06-07

    A class of novel explicit analytic solutions for a system of n+1 coupled partial differential equations governing biomolecular mass transfer and reaction in living organisms is proposed, evaluated, and analyzed. The solution process uses Laplace and Hankel transforms and results in a recursive convolution of an exponentially scaled Gaussian with modified Bessel functions. The solution is developed for a wide range of biomolecular binding kinetics, from pure diffusion to multiple binding reactions. The proposed approach provides solutions for both Dirac and Gaussian laser beam (or fluorescence-labeled biomacromolecule) profiles during the course of a Fluorescence Recovery After Photobleaching (FRAP) experiment. We demonstrate that previous models are simplified forms of our theory for special cases. Model analysis indicates that at the early stages of the transport process, biomolecular dynamics is governed by pure diffusion. At large times, the dominant mass transfer process is effective diffusion. Analysis of the sensitivity equations, derived analytically and verified by finite difference differentiation, indicates that experimental biologists should use the full space-time profile (instead of the averaged time series) obtained at the early stages of fluorescence microscopy experiments to extract meaningful physiological information from the protocol. Such a small time frame requires improved bioinstrumentation relative to that in use today. Our mathematical analysis highlights several limitations of the FRAP protocol and provides strategies to improve it. The proposed model can be used to study biomolecular dynamics in molecular biology, targeted drug delivery in normal and cancerous tissues, motor-driven axonal transport in normal and abnormal nervous systems, and the kinetics of diffusion-controlled reactions between enzyme and substrate, and to validate numerical simulators of biological mass transport processes in vivo. Copyright (c) 2010 Elsevier Ltd. All rights reserved.

  18. Computer-aided analysis of cutting processes for brittle materials

    NASA Astrophysics Data System (ADS)

    Ogorodnikov, A. I.; Tikhonov, I. N.

    2017-12-01

    This paper is focused on 3D computer simulation of cutting processes for brittle materials and silicon wafers. Computer-aided analysis of wafer scribing and dicing is carried out with the ANSYS CAE (computer-aided engineering) software, and a parametric model of the processes is created by means of the internal ANSYS APDL programming language. Different tool tip geometries, such as a four-sided pyramid with an included angle of 120° and a tool inclination of 15° to the normal axis, are analyzed to obtain the internal stresses. The quality of the workpieces after cutting is studied by optical microscopy to verify the FE (finite-element) model. The disruption of the material structure during scribing occurs near the scratch and propagates into the wafer or over its surface at short range. The deformation area along the scratch looks like a ragged band, but the stressed width is rather small. The theory of cutting brittle semiconductor and optical materials is developed on the basis of the advanced theory of metal turning. The decrease in stress intensity along the normal direction from the tip contact point to the scribe line can be predicted using the developed theory together with the verified FE model. The crystal quality and dimensions of defects are determined by the mechanics of scratching, which depends on the shape of the diamond tip, the scratching direction, the velocity of the cutting tool, and the applied force loads. The disruption is a rate-sensitive process that depends on the cutting thickness. The application of numerical techniques, such as FE analysis, to cutting problems enhances understanding and promotes the further development of existing machining technologies.

  19. Equivalent Sensor Radiance Generation and Remote Sensing from Model Parameters. Part 1; Equivalent Sensor Radiance Formulation

    NASA Technical Reports Server (NTRS)

    Wind, Galina; DaSilva, Arlindo M.; Norris, Peter M.; Platnick, Steven E.

    2013-01-01

    In this paper we describe a general procedure for calculating equivalent sensor radiances from variables output from a global atmospheric forecast model. In order to take proper account of the discrepancies between model resolution and sensor footprint, the algorithm takes explicit account of the model subgrid variability, in particular its description of the probability density function of total water (vapor and cloud condensate). The equivalent sensor radiances are then substituted into an operational remote sensing algorithm processing chain to produce a variety of remote sensing products that would normally be produced from actual sensor output. This output can then be used for a wide variety of purposes such as model parameter verification, remote sensing algorithm validation, testing of new retrieval methods, and future sensor studies. We show a specific implementation using the GEOS-5 model, the MODIS instrument, and the MODIS Adaptive Processing System (MODAPS) Data Collection 5.1 operational remote sensing cloud algorithm processing chain (including the cloud mask, cloud top properties, and cloud optical and microphysical properties products). We focus on clouds and cloud/aerosol interactions, because they are very important to model development and improvement.

  20. Multi-sensor Cloud Retrieval Simulator and Remote Sensing from Model Parameters. Pt. 1; Synthetic Sensor Radiance Formulation

    NASA Technical Reports Server (NTRS)

    Wind, G.; DaSilva, A. M.; Norris, P. M.; Platnick, S.

    2013-01-01

    In this paper we describe a general procedure for calculating synthetic sensor radiances from variables output from a global atmospheric forecast model. In order to take proper account of the discrepancies between model resolution and sensor footprint, the algorithm takes explicit account of the model subgrid variability, in particular its description of the probability density function of total water (vapor and cloud condensate). The simulated sensor radiances are then substituted into an operational remote sensing algorithm processing chain to produce a variety of remote sensing products that would normally be produced from actual sensor output. This output can then be used for a wide variety of purposes such as model parameter verification, remote sensing algorithm validation, testing of new retrieval methods, and future sensor studies. We show a specific implementation using the GEOS-5 model, the MODIS instrument, and the MODIS Adaptive Processing System (MODAPS) Data Collection 5.1 operational remote sensing cloud algorithm processing chain (including the cloud mask, cloud top properties, and cloud optical and microphysical properties products). We focus on clouds because they are very important to model development and improvement.

  1. Porcine Tissue-Specific Regulatory Networks Derived from Meta-Analysis of the Transcriptome

    PubMed Central

    Pérez-Montarelo, Dafne; Hudson, Nicholas J.; Fernández, Ana I.; Ramayo-Caldas, Yuliaxis; Dalrymple, Brian P.; Reverter, Antonio

    2012-01-01

    The processes that drive tissue identity and differentiation remain unclear for most tissue types, as do the gene networks and transcription factors (TFs) responsible for the differential structure and function of each particular tissue; this is particularly true for non-model species with incomplete genomic resources. To better understand the regulation of genes responsible for tissue identity in pigs, we have inferred regulatory networks from a meta-analysis of 20 gene expression studies spanning 480 porcine Affymetrix chips for 134 experimental conditions on 27 distinct tissues. We developed a mixed-model normalization approach with a covariance structure that accommodated the disparity in the origin of the individual studies, and obtained the normalized expression of 12,320 genes across the 27 tissues. Using this resource, we constructed a network based on the co-expression patterns of 1,072 TFs and 1,232 tissue-specific genes. The resulting network is consistent with the known biology of tissue development. Within the network, genes clustered by tissue and tissues clustered by site of embryonic origin. These clusters were significantly enriched for genes annotated in key relevant biological processes and confirm gene functions and interactions from the literature. We implemented a Regulatory Impact Factor (RIF) metric to identify the key regulators in skeletal muscle and in tissues from the central nervous system. The normalization of the meta-analysis, the inference of the gene co-expression network, and the RIF metric operated synergistically towards a successful search for tissue-specific regulators. Novel among these findings is evidence suggesting a key role of ERCC3 as a muscle regulator. Together, our results recapitulate the known biology behind tissue specificity and provide valuable new insights into a less studied but valuable model species. PMID:23049964

  2. Epidemiological modeling in a branching population. Particular case of a general SIS model with two age classes.

    PubMed

    Jacob, C; Viet, A F

    2003-03-01

    This paper covers the elaboration of a general class of multitype branching processes for modeling, in a branching population, the evolution of a disease with horizontal and vertical transmission. When the size of the population may tend to infinity, normalization must be carried out. As the initial size tends to infinity, the normalized model converges a.s. to a dynamical system whose solution is the probability law of the state of health along an individual's ancestral line. The focal point of this study concerns the transient and asymptotic behaviors of an SIS model with two age classes in a branching population. We compare the asymptotic probability of extinction on the scale of a finite population and on the scale of an individual in an infinite population: when the rates of transmission are small compared to the rate at which the population of susceptibles is renewed, the two models lead to a.s. extinction and give consistent results, which no longer holds in the opposite situation of large transmission rates. In that case the size of the population plays a crucial role in the spreading of the disease.

  3. Nearly frictionless faulting by unclamping in long-term interaction models

    USGS Publications Warehouse

    Parsons, T.

    2002-01-01

    In defiance of direct rock-friction observations, some transform faults appear to slide with little resistance. In this paper finite element models are used to show how strain energy is minimized by interacting faults that can cause long-term reduction in fault-normal stresses (unclamping). A model fault contained within a sheared elastic medium concentrates stress at its end points with increasing slip. If accommodating structures free up the ends, then the fault responds by rotating, lengthening, and unclamping. This concept is illustrated by a comparison between simple strike-slip faulting and a mid-ocean-ridge model with the same total transform length; calculations show that the more complex system unclamps the transforms and operates at lower energy. In another example, the overlapping San Andreas fault system in the San Francisco Bay region is modeled; this system is complicated by junctions and stepovers. A finite element model indicates that the normal stress along parts of the faults could be reduced to hydrostatic levels after ~60-100 k.y. of system-wide slip. If this process occurs in the earth, then parts of major transform fault zones could appear nearly frictionless.

  4. Formulating physical processes in a full-range model of soil water retention

    NASA Astrophysics Data System (ADS)

    Nimmo, J. R.

    2016-12-01

    Currently used water retention models vary in how much their formulas correspond to controlling physical processes such as capillarity, adsorption, and air-trapping. In model development, realistic correspondence to physical processes has often been a lower priority than ease of use and compatibility with other models. For example, the wettest range is normally represented simplistically, as by a straight line of zero slope, or by default using the same formulation as for the middle range. The new model presented here recognizes dominant processes within three segments of the range from oven-dryness to saturation. The adsorption-dominated dry range is represented by a logarithmic relation used in earlier models. The middle range of capillary advance/retreat and Haines jumps is represented by a new adaptation of the lognormal distribution function. In the wet range, the expansion of trapped air in response to matric pressure change is important because (1) it displaces water, and (2) it triggers additional volume-adjusting processes such as the collapse of liquid bridges between air pockets. For this range, the model incorporates the Boyle's law inverse proportionality of trapped air volume and pressure, amplified by an empirical factor to account for the additional processes. With their basis in processes, the model's parameters have a strong physical interpretation, and in many cases can be assigned values from knowledge of fundamental relationships or individual measurements. An advantage of the physically plausible treatment of the wet range is that it avoids such problems as the blowing-up of derivatives on approach to saturation, enhancing the model's utility for important but challenging wet-range phenomena such as domain exchange between preferential flow paths and soil matrix. Further development might be able to accommodate hysteresis by a systematic adjustment of the relation between the wet and middle ranges.

  5. Serum Cytokine Levels are related to Nesfatin-1/NUCB2 Expression in the Implantation Sites of Spontaneous Abortion Model of CBA/j × DBA/2 Mice.

    PubMed

    Chung, Yiwa; Kim, Heejeong; Seon, Sojeong; Yang, Hyunwon

    2017-03-01

    The process of spontaneous abortion involves a complex mechanism with various cytokines, growth factors, and hormones during pregnancy. However, the mechanism by which pro- and anti-inflammatory cytokines in the serum contribute to spontaneous abortion during pregnancy is not fully understood. Therefore, the purpose of this study was to examine the relationship between the serum levels of pro- and anti-inflammatory cytokines and spontaneous abortion using the CBA/j × DBA/2 mouse model. Serum levels of pro-inflammatory cytokines such as IFN-γ, IL-1α, and TNF-α were not increased in abortion model mice, but levels of anti-inflammatory cytokines such as IL-4, IL-13, and IL-1ra were decreased compared to normal pregnant mice. In addition, serum levels of chemokines such as SDF-1, G-CSF, M-CSF, IL-16, KC, and MCP-1 were decreased in abortion model mice compared to normal pregnant mice. However, the expression levels of nesfatin-1/NUCB2 mRNA and protein in the uteri at implantation sites were significantly higher in abortion model mice than in normal pregnant mice. These results suggest that uterine nesfatin-1/NUCB2 expression may be regulated by inflammatory cytokines and chemokines in the serum of pregnant mice. Moreover, this study suggests the possibility that nesfatin-1/NUCB2 expressed at the implantation sites may be associated with the maintenance of pregnancy.

  6. A Lidar Point Cloud Based Procedure for Vertical Canopy Structure Analysis And 3D Single Tree Modelling in Forest

    PubMed Central

    Wang, Yunsheng; Weinacker, Holger; Koch, Barbara

    2008-01-01

    A procedure for both vertical canopy structure analysis and 3D single-tree modelling based on a Lidar point cloud is presented in this paper. The whole research area is segmented into small study cells by a raster net. For each cell, a normalized point cloud, whose point heights represent the heights of objects above ground, is generated from the original Lidar raw point cloud. The main tree canopy layers and their height ranges are detected according to a statistical analysis of the height distribution probability of the normalized raw points. For the 3D modelling of individual trees, trees are detected and delineated not only from the top canopy layer but also from the sub-canopy layer. The normalized points are resampled into a local voxel space. A series of horizontal 2D projection images at different height levels is then generated with respect to the voxel space. Tree crown regions are detected from the projection images. Individual trees are then extracted by means of a pre-order forest traversal through all the tree crown regions at the different height levels. Finally, 3D tree crown models of the extracted individual trees are reconstructed. From further analysis of the 3D models of individual tree crowns, important parameters such as crown height range, crown volume, and crown contours at different height levels can be derived. PMID:27879916
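
    The first two steps, normalizing point heights against a ground surface and detecting canopy layers from the height distribution, can be roughed out as follows. The ground rasterization, bin width, and density threshold are simplifications of the cell-wise statistical analysis described in the paper.

```python
import numpy as np

def normalize_heights(points, ground_points, grid=2.0):
    """Subtract a rasterized ground surface (minimum z per grid cell) from each point's z."""
    gx = np.floor(ground_points[:, :2] / grid).astype(int)
    ground = {}
    for (ix, iy), z in zip(map(tuple, gx), ground_points[:, 2]):
        ground[(ix, iy)] = min(z, ground.get((ix, iy), np.inf))
    px = np.floor(points[:, :2] / grid).astype(int)
    z_ground = np.array([ground.get(tuple(c), np.nan) for c in px])
    out = points.copy()
    out[:, 2] = points[:, 2] - z_ground
    return out

def canopy_layers(normalized_z, bin_width=1.0, min_fraction=0.05):
    """Find height ranges whose point density exceeds a threshold (crude layer detection)."""
    z = normalized_z[np.isfinite(normalized_z)]
    bins = np.arange(0.0, z.max() + bin_width, bin_width)
    hist, edges = np.histogram(z, bins=bins)
    dense = hist > min_fraction * z.size
    return [(edges[i], edges[i + 1]) for i in np.flatnonzero(dense)]

# Synthetic cell: ground returns plus a sub-canopy layer (~5 m) and a top canopy (~20 m).
rng = np.random.default_rng(3)
ground_pts = np.column_stack([rng.uniform(0, 10, 300), rng.uniform(0, 10, 300),
                              rng.normal(100, 0.1, 300)])
veg = np.column_stack([rng.uniform(0, 10, 600), rng.uniform(0, 10, 600),
                       100 + np.concatenate([rng.normal(5, 1, 200), rng.normal(20, 2, 400)])])
norm = normalize_heights(veg, ground_pts)
print(canopy_layers(norm[:, 2]))
```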

  7. Comparing the Effects of Particulate Matter on the Ocular Surfaces of Normal Eyes and a Dry Eye Rat Model.

    PubMed

    Han, Ji Yun; Kang, Boram; Eom, Youngsub; Kim, Hyo Myung; Song, Jong Suk

    2017-05-01

    To compare the effect of exposure to particulate matter on the ocular surface of normal and experimental dry eye (EDE) rat models. Titanium dioxide (TiO2) nanoparticles were used as the particulate matter. Rats were divided into 4 groups: normal control group, TiO2 challenge group of the normal model, EDE control group, and TiO2 challenge group of the EDE model. After 24 hours, corneal clarity was compared and tear samples were collected for quantification of lactate dehydrogenase, MUC5AC, and tumor necrosis factor-α concentrations. The periorbital tissues were used to evaluate the inflammatory cell infiltration and detect apoptotic cells. The corneal clarity score was greater in the EDE model than in the normal model. The score increased after TiO2 challenge in each group compared with each control group (normal control vs. TiO2 challenge group, 0.0 ± 0.0 vs. 0.8 ± 0.6, P = 0.024; EDE control vs. TiO2 challenge group, 2.2 ± 0.6 vs. 3.8 ± 0.4, P = 0.026). The tear lactate dehydrogenase level and inflammatory cell infiltration on the ocular surface were higher in the EDE model than in the normal model. These measurements increased significantly in both normal and EDE models after TiO2 challenge. The tumor necrosis factor-α levels and terminal deoxynucleotidyl transferase-mediated dUTP nick end labeling-positive cells were also higher in the EDE model than in the normal model. TiO2 nanoparticle exposure on the ocular surface had a more prominent effect in the EDE model than it did in the normal model. The ocular surface of dry eyes seems to be more vulnerable to fine dust of air pollution than that of normal eyes.

  8. File format for normalizing radiological concentration exposure rate and dose rate data for the effects of radioactive decay and weathering processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kraus, Terrence D.

    2017-04-01

    This report specifies the electronic file format agreed upon for normalized radiological data produced by the software tool developed under this TI project. The investigators of the NA-84 Technology Integration (TI) Program project (SNL17-CM-635, Normalizing Radiological Data for Analysis and Integration into Models) held a teleconference on December 7, 2017 to discuss the tasks to be completed under the TI program project. During this teleconference, the TI project investigators determined that the comma-separated values (CSV) file format is the most suitable format for the normalized radiological data output by the normalizing tool developed under this TI project. The CSV format was selected because it provides the requisite flexibility to manage different types of radiological data (i.e., activity concentration, exposure rate, dose rate) from other sources [e.g., Radiological Assessment and Monitoring System (RAMS), Aerial Measuring System (AMS), Monitoring and Sampling]. The CSV format is also suitable for the normalized radiological data because the normalized data can then be ingested by other software [e.g., RAMS, Visual Sampling Plan (VSP)] used by NA-84's Consequence Management Program.

  9. When novel sentences spoken or heard for the first time in the history of the universe are not enough: toward a dual-process model of language.

    PubMed

    Van Lancker Sidtis, Diana

    2004-01-01

    Although interest in the language sciences was previously focused on newly created sentences, more recently much attention has turned to the importance of formulaic expressions in normal and disordered communication. Also referred to as formulaic expressions and made up of speech formulas, idioms, expletives, serial and memorized speech, slang, sayings, clichés, and conventional expressions, non-propositional language forms a large proportion of every speaker's competence, and may be differentially disturbed in neurological disorders. This review aims to examine non-propositional speech with respect to linguistic descriptions, psycholinguistic experiments, sociolinguistic studies, child language development, clinical language disorders, and neurological studies. Evidence from numerous sources reveals differentiated and specialized roles for novel and formulaic verbal functions, and suggests that generation of novel sentences and management of prefabricated expressions represent two legitimate and separable processes in language behaviour. A preliminary model of language behaviour that encompasses unitary and compositional properties and their integration in everyday language use is proposed. Integration and synchronizing of two disparate processes in language behaviour, formulaic and novel, characterizes normal communicative function and contributes to creativity in language. This dichotomy is supported by studies arising from other disciplines in neurology and psychology. Further studies are necessary to determine in what ways the various categories of formulaic expressions are related, and how these categories are processed by the brain. Better understanding of how non-propositional categories of speech are stored and processed in the brain can lead to better informed treatment strategies in language disorders.

  10. Modeling error distributions of growth curve models through Bayesian methods.

    PubMed

    Zhang, Zhiyong

    2016-06-01

    Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed, although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems in blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows that, when the distribution of the error is correctly specified, the loss in efficiency of standard error estimates can be avoided. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99 is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided.
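
    The following is a minimal sketch of the same idea in PyMC rather than the paper's SAS MCMC procedure: a linear growth curve whose error distribution is specified explicitly (Student-t here) and can be swapped for a normal. The data are synthetic and the model omits subject-level random effects, so it illustrates the error-specification step only.

```python
# A minimal sketch (not the paper's SAS/MCMC code) of a linear growth curve
# with an explicitly specified, heavy-tailed error distribution, using PyMC.
import numpy as np
import pymc as pm

rng = np.random.default_rng(1)
t = np.tile(np.arange(4), 50)                           # 4 occasions x 50 subjects (synthetic)
y = 2.0 + 0.8 * t + rng.standard_t(df=3, size=t.size)   # non-normal (t-distributed) errors

with pm.Model():
    intercept = pm.Normal("intercept", 0.0, 10.0)
    slope = pm.Normal("slope", 0.0, 10.0)
    sigma = pm.HalfNormal("sigma", 5.0)
    nu = pm.Exponential("nu", 1 / 30)                    # degrees of freedom of the error distribution
    mu = intercept + slope * t
    pm.StudentT("y", nu=nu, mu=mu, sigma=sigma, observed=y)  # swap for pm.Normal to assume normality
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=1)

print(float(idata.posterior["slope"].mean()))
```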

  11. Acellular organ scaffolds for tumor tissue engineering

    NASA Astrophysics Data System (ADS)

    Guller, Anna; Trusova, Inna; Petersen, Elena; Shekhter, Anatoly; Kurkov, Alexander; Qian, Yi; Zvyagin, Andrei

    2015-12-01

    Rationale: Tissue engineering (TE) is an emerging alternative approach to create models of human malignant tumors for experimental oncology, personalized medicine and drug discovery studies. Being the bottom-up strategy, TE provides an opportunity to control and explore the role of every component of the model system, including cellular populations, supportive scaffolds and signalling molecules. Objectives: As an initial step to create a new ex vivo TE model of cancer, we optimized protocols to obtain organ-specific acellular matrices and evaluated their potential as TE scaffolds for culture of normal and tumor cells. Methods and results: Effective decellularization of animal kidneys, ureter, lungs, heart, and liver has been achieved by detergent-based processing. The obtained scaffolds demonstrated biocompatibility and growth-supporting potential in combination with normal (Vero, MDCK) and tumor cell lines (C26, B16). Acellular scaffolds and TE constructs have been characterized and compared using morphological methods. Conclusions: The proposed methodology allows creation of sustainable 3D tumor TE constructs to explore the role of organ-specific cell-matrix interaction in tumorigenesis.

  12. Analyzing Axial Stress and Deformation of Tubular for Steam Injection Process in Deviated Wells Based on the Varied (T, P) Fields

    PubMed Central

    Liu, Yunqiang; Xu, Jiuping; Wang, Shize; Qi, Bin

    2013-01-01

    The axial stress and deformation of tubular strings in high temperature, high pressure deviated gas wells are studied. A new model, comprising a system of multiple nonlinear equations, is developed by comprehensively considering the axial load of the tubular string, internal and external fluid pressure, the normal pressure between the tubular and the well wall, friction, and the viscous friction of the flowing fluid. The varying temperature and pressure fields were obtained from coupled differential equations for mass, momentum, and energy rather than by traditional methods. The axial load, the normal pressure, the friction, and four deformation lengths of the tubular string are obtained by means of the dimensionless iterative interpolation algorithm. The basic data of the X Well, 1300 meters deep, are used for case history calculations. The results and some useful conclusions can provide technical reliability in the process of designing well testing in oil or gas wells. PMID:24163623

  13. On the analysis of the double Hopf bifurcation in machining processes via centre manifold reduction

    NASA Astrophysics Data System (ADS)

    Molnar, T. G.; Dombovari, Z.; Insperger, T.; Stepan, G.

    2017-11-01

    The single-degree-of-freedom model of orthogonal cutting is investigated to study machine tool vibrations in the vicinity of a double Hopf bifurcation point. Centre manifold reduction and normal form calculations are performed to investigate the long-term dynamics of the cutting process. The normal form of the four-dimensional centre subsystem is derived analytically, and the possible topologies in the infinite-dimensional phase space of the system are revealed. It is shown that bistable parameter regions exist where unstable periodic and, in certain cases, unstable quasi-periodic motions coexist with the equilibrium. Taking into account the non-smoothness caused by loss of contact between the tool and the workpiece, the boundary of the bistable region is also derived analytically. The results are verified by numerical continuation. The possibility of (transient) chaotic motions in the global non-smooth dynamics is shown.

  14. Sex determination strategies in 2012: towards a common regulatory model?

    PubMed Central

    2012-01-01

    Sex determination is a complicated process involving large-scale modifications in gene expression affecting virtually every tissue in the body. Although the evolutionary origin of sex remains controversial, there is little doubt that it has developed as a process of optimizing metabolic control, as well as developmental and reproductive functions within a given setting of limited resources and environmental pressure. Evidence from various model organisms supports the view that sex determination may occur as a result of direct environmental induction or genetic regulation. The first process has been well documented in reptiles and fish, while the second is the classic case for avian species and mammals. Both of the latter have developed a variety of sex-specific/sex-related genes, which ultimately form a complete chromosome pair (sex chromosomes/gonosomes). Interestingly, combinations of environmental and genetic mechanisms have been described among different classes of animals, thus rendering the possibility of a unidirectional continuous evolutionary process from the one type of mechanism to the other unlikely. On the other hand, common elements appear throughout the animal kingdom, with regard to a) conserved key genes and b) a central role of sex steroid control as a prerequisite for ultimately normal sex differentiation. Studies in invertebrates also indicate a role of epigenetic chromatin modification, particularly with regard to alternative splicing options. This review summarizes current evidence from research in this hot field and signifies the need for further study of both normal hormonal regulators of sexual phenotype and patterns of environmental disruption. PMID:22357269

  15. MODAL TRACKING of A Structural Device: A Subspace Identification Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Candy, J. V.; Franco, S. N.; Ruggiero, E. L.

    Mechanical devices operating in an environment contaminated by noise, uncertainties, and extraneous disturbances lead to low signal-to-noise ratios, creating an extremely challenging processing problem. To detect/classify a device subsystem from noisy data, it is necessary to identify unique signatures or particular features. An obvious feature would be resonant (modal) frequencies emitted during its normal operation. In this report, we discuss a model-based approach to incorporate these physical features into a dynamic structure that can be used for such an identification. The approach we take after pre-processing the raw vibration data and removing any extraneous disturbances is to obtain a representation of the structurally unknown device along with its subsystems that captures these salient features. One approach is to recognize that unique modal frequencies (sinusoidal lines) appear in the estimated power spectrum that are solely characteristic of the device under investigation. Therefore, the objective of this effort is based on constructing a black box model of the device that captures these physical features that can be exploited to “diagnose” whether or not the particular device subsystem (track/detect/classify) is operating normally from noisy vibrational data. Here we discuss the application of a modern system identification approach based on stochastic subspace realization techniques capable of both (1) identifying the underlying black-box structure thereby enabling the extraction of structural modes that can be used for analysis and modal tracking as well as (2) providing indicators of condition and possible changes from normal operation.

  16. Manual choice reaction times in the rate-domain

    PubMed Central

    Harris, Christopher M.; Waddington, Jonathan; Biscione, Valerio; Manzi, Sean

    2014-01-01

    Over the last 150 years, human manual reaction times (RTs) have been recorded countless times. Yet, our understanding of them remains remarkably poor. RTs are highly variable with positively skewed frequency distributions, often modeled as an inverse Gaussian distribution reflecting a stochastic rise to threshold (diffusion process). However, latency distributions of saccades are very close to the reciprocal Normal, suggesting that “rate” (reciprocal RT) may be the more fundamental variable. We explored whether this phenomenon extends to choice manual RTs. We recorded two-alternative choice RTs from 24 subjects, each with 4 blocks of 200 trials with two task difficulties (easy vs. difficult discrimination) and two instruction sets (urgent vs. accurate). We found that rate distributions were, indeed, very close to Normal, shifting to lower rates with increasing difficulty and accuracy, and for some blocks they appeared to become left-truncated, but still close to Normal. Using autoregressive techniques, we found temporal sequential dependencies for lags of at least 3. We identified a transient and steady-state component in each block. Because rates were Normal, we were able to estimate autoregressive weights using the Box-Jenkins technique, and convert to a moving average model using z-transforms to show explicit dependence on stimulus input. We also found a spatial sequential dependence for the previous 3 lags depending on whether the laterality of previous trials was repeated or alternated. This was partially dissociated from temporal dependency as it only occurred in the easy tasks. We conclude that 2-alternative choice manual RT distributions are close to reciprocal Normal and not the inverse Gaussian. This is not consistent with stochastic rise to threshold models, and we propose a simple optimality model in which reward is maximized to yield an optimal rate, and hence an optimal time to respond. We discuss how it might be implemented. PMID:24959134
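
    The central claim (that rates, i.e. reciprocal RTs, are approximately Normal while raw RTs are not) is easy to illustrate. The sketch below is not the authors' analysis pipeline; it uses synthetic data and a simple Shapiro-Wilk test, with the parameter values chosen arbitrarily.

```python
# Illustrative check (not the authors' pipeline): if reaction times are
# reciprocal-Normal, then rates r = 1/RT should pass a normality test that
# the raw RTs fail.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rates = rng.normal(loc=3.5, scale=0.6, size=800)       # promptness in 1/s (synthetic)
rates = rates[rates > 0.5]                             # crude left-truncation, as seen in some blocks
rt = 1.0 / rates                                       # reaction times in seconds

print("normality of RTs:   p =", stats.shapiro(rt).pvalue)        # expected to be small
print("normality of rates: p =", stats.shapiro(1.0 / rt).pvalue)  # expected to be large
```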

  17. A Thermal Runaway Failure Model for Low-Voltage BME Ceramic Capacitors with Defects

    NASA Technical Reports Server (NTRS)

    Teverovsky, Alexander

    2017-01-01

    The reliability of base metal electrode (BME) multilayer ceramic capacitors (MLCCs), which until recently were used mostly in commercial applications, has been improved substantially by using new materials and processes. Currently, the time to inception of intrinsic wear-out failures in high quality capacitors has become much greater than the mission duration in most high-reliability applications. However, in capacitors with defects, degradation processes might accelerate substantially and cause infant mortality failures. In this work, a physical model that relates the presence of defects to reduction of breakdown voltages and decreasing times to failure has been suggested. The effect of the defect size has been analyzed using a thermal runaway model of failures. Adequacy of highly accelerated life testing (HALT) to predict reliability at normal operating conditions and limitations of voltage acceleration are considered. The applicability of the model to BME capacitors with cracks is discussed and validated experimentally.

  18. Detection of drug active ingredients by chemometric processing of solid-state NMR spectrometry data -- the case of acetaminophen.

    PubMed

    Paradowska, Katarzyna; Jamróz, Marta Katarzyna; Kobyłka, Mariola; Gowin, Ewelina; Maczka, Paulina; Skibiński, Robert; Komsta, Łukasz

    2012-01-01

    This paper presents a preliminary study in building discriminant models from solid-state NMR spectrometry data to detect the presence of acetaminophen in over-the-counter pharmaceutical formulations. The dataset, containing 11 spectra of pure substances and 21 spectra of various formulations, was processed by partial least squares discriminant analysis (PLS-DA). The resulting model coped well with the discrimination, and its quality parameters were acceptable. It was found that standard normal variate preprocessing had almost no influence on unsupervised investigation of the dataset. The influence of variable selection with the uninformative variable elimination by PLS method was studied; it reduced the dataset from 7601 variables to around 300 informative variables but did not improve the model performance. The results showed that well-working PLS-DA models can be constructed from such small datasets without a full experimental design.
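
    A common way to implement PLS-DA is to run PLS regression on a dummy-coded class label and threshold the predicted score, as sketched below with scikit-learn. This is a generic sketch, not the authors' workflow: the data are synthetic stand-ins for the 7601-point spectra, and the dimensions, threshold, and number of components are assumptions.

```python
# A minimal PLS-DA sketch using scikit-learn: PLS regression on a dummy-coded
# class label, with the decision taken on the predicted score.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, p = 32, 500                               # small sample, many spectral variables (synthetic)
X = rng.normal(size=(n, p))
y = np.repeat([0, 1], n // 2)                # 1 = formulation contains the analyte
X[y == 1, :20] += 1.0                        # a few "informative" variables

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
pls = PLSRegression(n_components=2).fit(X_tr, y_tr)
pred = (pls.predict(X_te).ravel() > 0.5).astype(int)   # threshold the PLS score
print("test accuracy:", (pred == y_te).mean())
```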

  19. Sieve estimation in semiparametric modeling of longitudinal data with informative observation times.

    PubMed

    Zhao, Xingqiu; Deng, Shirong; Liu, Li; Liu, Lei

    2014-01-01

    Analyzing irregularly spaced longitudinal data often involves modeling possibly correlated response and observation processes. In this article, we propose a new class of semiparametric mean models that allows for the interaction between the observation history and covariates, leaving patterns of the observation process to be arbitrary. For inference on the regression parameters and the baseline mean function, a spline-based least squares estimation approach is proposed. The consistency, rate of convergence, and asymptotic normality of the proposed estimators are established. Our new approach is different from the usual approaches relying on the model specification of the observation scheme, and it can be easily used for predicting the longitudinal response. Simulation studies demonstrate that the proposed inference procedure performs well and is more robust. The analyses of bladder tumor data and medical cost data are presented to illustrate the proposed method.

  20. Fractional Snow Cover Mapping by Artificial Neural Networks and Support Vector Machines

    NASA Astrophysics Data System (ADS)

    Çiftçi, B. B.; Kuter, S.; Akyürek, Z.; Weber, G.-W.

    2017-11-01

    Snow is an important land cover whose distribution over space and time plays a significant role in various environmental processes. Hence, snow cover mapping with high accuracy is necessary to have a real understanding for present and future climate, water cycle, and ecological changes. This study aims to investigate and compare the design and use of artificial neural networks (ANNs) and support vector machines (SVMs) algorithms for fractional snow cover (FSC) mapping from satellite data. ANN and SVM models with different model building settings are trained by using Moderate Resolution Imaging Spectroradiometer surface reflectance values of bands 1-7, normalized difference snow index and normalized difference vegetation index as predictor variables. Reference FSC maps are generated from higher spatial resolution Landsat ETM+ binary snow cover maps. Results on the independent test data set indicate that the developed ANN model with hyperbolic tangent transfer function in the output layer and the SVM model with radial basis function kernel produce high FSC mapping accuracies with the corresponding values of R = 0.93 and R = 0.92, respectively.
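
    The modelling setup described above can be approximated with off-the-shelf regressors, as in the sketch below. This is not the authors' implementation: the predictors and FSC values are synthetic, the hyperparameters are arbitrary, and scikit-learn's MLPRegressor only applies tanh in its hidden layer (its output activation is identity, unlike the paper's tanh output layer).

```python
# Rough scikit-learn analogue of regressing fractional snow cover (FSC) on
# MODIS-like predictors with an ANN and an RBF-kernel SVM.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = rng.uniform(0, 1, size=(n, 9))   # bands 1-7 reflectances + NDSI + NDVI (synthetic)
fsc = np.clip(0.6 * X[:, 7] - 0.3 * X[:, 8] + 0.2 * X[:, 3] + rng.normal(0, 0.05, n), 0, 1)

X_tr, X_te, y_tr, y_te = train_test_split(X, fsc, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(20,), activation="tanh", max_iter=2000,
                   random_state=0).fit(X_tr, y_tr)
svm = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X_tr, y_tr)

for name, model in [("ANN", ann), ("SVM", svm)]:
    r = np.corrcoef(model.predict(X_te), y_te)[0, 1]   # correlation against reference FSC
    print(name, "R =", round(r, 3))
```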

  1. Simulation study of overtaking in pedestrian flow using floor field cellular automaton model

    NASA Astrophysics Data System (ADS)

    Fu, Zhijian; Xia, Liang; Yang, Hongtai; Liu, Xiaobo; Ma, Jian; Luo, Lin; Yang, Lizhong; Chen, Junmin

    Properties of pedestrians may change along the moving path, for example as a result of fatigue or injury, which has never been properly investigated in past research. This paper attempts to study tactical overtaking in pedestrian flow. Overtaking is difficult to model with a microscopic discrete model because of the complexity of the detailed overtaking behavior and the crossing/overlap of pedestrian routes. Thus, a multi-velocity floor field cellular automaton model describing the detailed psychological process of the overtaking decision was proposed. A pedestrian can be either in the normal state or in the tactical overtaking state. Without a tactical decision, pedestrians in the normal state are driven by the floor field. Pedestrians make their tactical overtaking decisions by evaluating the walking environment around the overtaking route (the average velocity and density around the route, and the pedestrian's visual field) and the obstructing conditions (the distance and velocity difference between the overtaking pedestrian and the obstructing pedestrian). The effects of the tactical overtaking ratio, free velocity dispersion, and visual range on the fundamental diagram, conflict density, and successful overtaking ratio were explored. In addition, a sensitivity analysis of the relative intensity of the route factor was performed.

  2. A heteroscedastic generalized linear model with a non-normal speed factor for responses and response times.

    PubMed

    Molenaar, Dylan; Bolsinova, Maria

    2017-05-01

    In generalized linear modelling of responses and response times, the observed response time variables are commonly transformed to make their distribution approximately normal. A normal distribution for the transformed response times is desirable as it justifies the linearity and homoscedasticity assumptions in the underlying linear model. Past research has, however, shown that the transformed response times are not always normal. Models have been developed to accommodate this violation. In the present study, we propose a modelling approach for responses and response times to test and model non-normality in the transformed response times. Most importantly, we distinguish between non-normality due to heteroscedastic residual variances, and non-normality due to a skewed speed factor. In a simulation study, we establish parameter recovery and the power to separate both effects. In addition, we apply the model to a real data set. © 2017 The Authors. British Journal of Mathematical and Statistical Psychology published by John Wiley & Sons Ltd on behalf of British Psychological Society.

  3. [Simulation and data analysis of stereological modeling based on virtual slices].

    PubMed

    Wang, Hao; Shen, Hong; Bai, Xiao-yan

    2008-05-01

    To establish a computer-assisted stereological model for simulating the process of slice sectioning and to evaluate the relationship between the section surface and the estimated three-dimensional structure. The model was designed by mathematical methods and implemented as Win32 software based on MFC, with Microsoft Visual Studio as the IDE, to simulate the infinite process of sectioning and to analyze the data derived from the model. The linearity of the fit of the model was evaluated by comparison with the traditional formula. The Win32 software based on this algorithm allowed random sectioning of particles distributed randomly in an ideal virtual cube. The stereological parameters showed very high throughput (>94.5% and 92%) in homogeneity and independence tests. The data on the density, shape and size of the sections were tested and conformed to a normal distribution. The output of the model and that from the image analysis system showed statistical correlation and consistency. The algorithm described here can be used for evaluating the stereological parameters of the structure of tissue slices.

  4. Local Topography Effect on Plant Area Index Profile Calculation from Small Footprint Airborne Laser Scanning

    NASA Astrophysics Data System (ADS)

    Liu, J.; Wang, T.; Skidmore, A. K.; Heurich, M.

    2016-12-01

    The plant area index (PAI) profile is a quantitative description of how plants (including leaves and woody materials) are distributed vertically, as a function of height. PAI profiles can be used for many applications including biomass estimation, radiative transfer modelling, fire fuel modelling and wildlife habitat assessment. With airborne laser scanning (ALS), forest structure underneath the canopy surface can be detected. PAI profiles can be calculated through estimates of the vertically resolved gap fraction from ALS data. In this process, a gridding or aggregation step is often involved. Most current research neglects local topographic change and utilizes a height normalization algorithm to achieve a local or relative height, implying a flat local terrain assumption inside the grid or aggregation area. However, in mountainous forest, this assumption is often not valid. Therefore, in this research, the local topographic effect on the PAI profile calculation was studied. Small-footprint discrete multi-return ALS data were acquired over the Bavarian Forest National Park under leaf-off and leaf-on conditions. Ground truth data, including tree height, canopy cover, DBH as well as digital hemispherical photos, were collected in 30 plots. These plots covered a wide range of forest structure, plant species, local topography conditions and understory coverage. PAI profiles were calculated both with and without height normalization. The differences between height-normalized and non-normalized profiles were evaluated with the coefficient of variation of the root mean squared difference (CV-RMSD). The derived metric PAI values from the PAI profiles were also evaluated against ground truth PAI from the hemispherical photos. Results showed that change in local topography had significant effects on the PAI profile. The CV-RMSD between PAI profiles calculated with or without height normalization ranged from 24.5% to 163.9%. Height normalization (neglecting topography change) can lead to offsets in the height of plant material that could potentially cause large errors and uncertainty in applications utilizing absolute height, such as radiative transfer modelling and fire fuel modelling. This research demonstrates that when calculating the PAI profile from ALS, local topography has to be taken into account.
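
    The comparison metric used above can be computed as in the sketch below. The two profiles are synthetic placeholders, and normalizing the RMSD by the mean of the reference profile is one common convention assumed here; deriving real PAI profiles from the vertically resolved gap fraction is outside this snippet.

```python
# Sketch of the coefficient of variation of the root mean squared difference
# (CV-RMSD) between a height-normalized and a non-normalized PAI profile.
import numpy as np

def cv_rmsd(reference, other):
    """RMSD between two profiles, normalized by the mean of the reference profile."""
    rmsd = np.sqrt(np.mean((reference - other) ** 2))
    return rmsd / np.mean(reference)

z = np.arange(0, 30, 1.0)                                   # height bins (m)
pai_non_normalized = np.exp(-0.5 * ((z - 18) / 5) ** 2)     # placeholder profile
pai_normalized = np.exp(-0.5 * ((z - 15) / 5) ** 2)         # shifted by a terrain-induced offset

print("CV-RMSD = {:.1%}".format(cv_rmsd(pai_non_normalized, pai_normalized)))
```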

  5. Development and Implementation of Mechanistic Terry Turbine Models in RELAP-7 to Simulate RCIC Normal Operation Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Haihua; Zou, Ling; Zhang, Hongbin

    As part of the efforts to understand the unexpected “self-regulating” mode of the RCIC (Reactor Core Isolation Cooling) systems in the Fukushima accidents and extend BWR RCIC and PWR AFW (Auxiliary Feed Water) operational range and flexibility, mechanistic models for the Terry turbine, based on Sandia’s original work [1], have been developed and implemented in the RELAP-7 code to simulate the RCIC system. In 2016, our effort was focused on normal working conditions of the RCIC system. More complex off-design conditions will be pursued in later years when more data are available. In the Sandia model, the turbine stator inlet velocity is provided according to a reduced-order model which was obtained from a large number of CFD (computational fluid dynamics) simulations. In this work, we propose an alternative method, using an under-expanded jet model to obtain the velocity and thermodynamic conditions for the turbine stator inlet. The models include both an adiabatic expansion process inside the nozzle and a free expansion process outside of the nozzle to ambient pressure. The combined models are able to predict the steam mass flow rate and supersonic velocity to the Terry turbine bucket entrance, which are the necessary inputs for the Terry turbine rotor model. The analytical models for the nozzle were validated with experimental data and benchmarked with CFD simulations. The analytical models generally agree well with the experimental data and CFD simulations. The analytical models are suitable for implementation into a reactor system analysis code or severe accident code as part of mechanistic and dynamical models to understand RCIC behavior. The newly developed nozzle models and the turbine rotor model, modified from Sandia’s original work, have been implemented into RELAP-7, along with the original Sandia Terry turbine model. A new pump model has also been developed and implemented to couple with the Terry turbine model. An input model was developed to test the Terry turbine RCIC system, and it generates reasonable results. Both the INL RCIC model and the Sandia RCIC model produce results matching major rated parameters, such as the rotational speed, pump torque, and turbine shaft work, for the normal operation condition. The Sandia model is more sensitive to the turbine outlet pressure than the INL model. The next step will be further refining the Terry turbine models by including two-phase flow cases so that off-design conditions can be simulated. The pump model could also be enhanced with the use of the homologous curves.

  6. How do normal faults grow?

    NASA Astrophysics Data System (ADS)

    Jackson, Christopher; Bell, Rebecca; Rotevatn, Atle; Tvedt, Anette

    2016-04-01

    Normal faulting accommodates stretching of the Earth's crust, and it is arguably the most fundamental tectonic process leading to continent rupture and oceanic crust emplacement. Furthermore, the incremental and finite geometries associated with normal faulting dictate landscape evolution, sediment dispersal and hydrocarbon systems development in rifts. Displacement-length scaling relationships compiled from global datasets suggest normal faults grow via a sympathetic increase in these two parameters (the 'isolated fault model'). This model has dominated the structural geology literature for >20 years and underpins the structural and tectono-stratigraphic models developed for active rifts. However, relatively recent analysis of high-quality 3D seismic reflection data suggests faults may grow by rapid establishment of their near-final length prior to significant displacement accumulation (the 'coherent fault model'). The isolated and coherent fault models make very different predictions regarding the tectono-stratigraphic evolution of rift basin, thus assessing their applicability is important. To-date, however, very few studies have explicitly set out to critically test the coherent fault model thus, it may be argued, it has yet to be widely accepted in the structural geology community. Displacement backstripping is a simple graphical technique typically used to determine how faults lengthen and accumulate displacement; this technique should therefore allow us to test the competing fault models. However, in this talk we use several subsurface case studies to show that the most commonly used backstripping methods (the 'original' and 'modified' methods) are, however, of limited value, because application of one over the other requires an a priori assumption of the model most applicable to any given fault; we argue this is illogical given that the style of growth is exactly what the analysis is attempting to determine. We then revisit our case studies and demonstrate that, in the case of seismic-scale growth faults, growth strata thickness patterns and relay zone kinematics, rather than displacement backstripping, should be assessed to directly constrain fault length and thus tip behaviour through time. We conclude that rapid length establishment prior to displacement accumulation may be more common than is typically assumed, thus challenging the well-established, widely cited and perhaps overused, isolated fault model.

  7. [Research of anti-aging mechanism of ginsenoside Rg1 on brain].

    PubMed

    Li, Cheng-peng; Zhang, Meng-si; Liu, Jun; Geng, Shan; Li, Jing; Zhu, Jia-hong; Zhang, Yan-yan; Jia, Yan-yan; Wang, Lu; Wang, Shun-he; Wang, Ya-ping

    2014-11-01

    Neurodegenerative disease is common and frequently occurs in elderly patients. Previous studies have shown that ginsenoside Rg1 was able to inhibit senescence of the brain, but the mechanism of its action on the brain during treatment remains to be elucidated. To study the mechanism of ginsenoside Rg1 in the process of brain anti-aging, forty male SD rats were randomly divided into a normal group, an Rg1 normal group, a brain aging model group and an Rg1 brain aging model group, each with 10 rats (brain aging model group: subcutaneous injection of D-galactose (120 mg kg(-1)) qd for 42 consecutive days; Rg1 brain aging model group: the same treatment as the brain aging model group, plus intraperitoneal injection of ginsenoside Rg1 (20 mg x kg(-1)) qd for 27 d starting from day 16; Rg1 normal group: subcutaneous injection of the same amount of saline, plus intraperitoneal injection of ginsenoside Rg1 (20 mg x kg(-1)) qd for 27 d starting from day 16; normal group: injected with an equal volume of saline over the same period. The related experiments were performed on the second day after the model was completed or within the first two days after completion of the drug injections). Learning and memory abilities were measured by the Morris water maze. The number of senescent cells was detected by SA-beta-Gal staining, while the levels of the proinflammatory cytokines IL-1 and IL-6 in the hippocampus were detected by ELISA. The activity of SOD and the content of GSH in the hippocampus were quantified by chromatometry. Changes in telomerase activity and telomere length were determined by TRAP-PCR and Southern blotting assays, respectively. It was found that, in the brain aging model group, the spatial learning and memory capacities were weakened, SA-beta-Gal positive granules increased in brain tissue sections, the activity of the antioxidant enzyme SOD and the content of GSH decreased in the hippocampus, the levels of IL-1 and IL-6 increased in the hippocampus, and the length of telomeres and the activity of telomerase decreased in the hippocampus. Rats of the Rg1 brain aging group had their spatial learning and memory capacities enhanced, SA-beta-Gal positive granules in brain tissue sections decreased, the activity of the antioxidant enzyme SOD and the content of GSH increased in the hippocampus, the levels of IL-1 and IL-6 in the hippocampus decreased, telomere shortening was suppressed, and telomerase activity increased in the hippocampus. Compared with the normal group, the spatial learning and memory capacities were enhanced in the Rg1 normal group, SA-beta-Gal positive granules in brain tissue sections decreased in the Rg1 normal group, and the levels of IL-1 and IL-6 in the hippocampus decreased in the Rg1 normal group. The results indicate that improvement of antioxidant capacity, regulation of the levels of proinflammatory cytokines and regulation of the telomerase system may be the underlying anti-aging mechanisms of ginsenoside Rg1.

  8. On Nonequivalence of Several Procedures of Structural Equation Modeling

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Chan, Wai

    2005-01-01

    The normal theory based maximum likelihood procedure is widely used in structural equation modeling. Three alternatives are: the normal theory based generalized least squares, the normal theory based iteratively reweighted least squares, and the asymptotically distribution-free procedure. When data are normally distributed and the model structure…

  9. The impact of neurotechnology on rehabilitation.

    PubMed

    Berger, Theodore W; Gerhardt, Greg; Liker, Mark A; Soussou, Walid

    2008-01-01

    This paper presents results of a multi-disciplinary project that is developing a microchip-based neural prosthesis for the hippocampus, a region of the brain responsible for the formation of long-term memories. Damage to the hippocampus is frequently associated with epilepsy, stroke, and dementia (Alzheimer's disease) and is considered to underlie the memory deficits related to these neurological conditions. The essential goals of the multi-laboratory effort include: (1) experimental study of neuron and neural network function--how does the hippocampus encode information? (2) formulation of biologically realistic models of neural system dynamics--can that encoding process be described mathematically to realize a predictive model of how the hippocampus responds to any event? (3) microchip implementation of neural system models--can the mathematical model be realized as a set of electronic circuits to achieve parallel processing, rapid computational speed, and miniaturization? and (4) creation of hybrid neuron-silicon interfaces--can structural and functional connections between electronic devices and neural tissue be achieved for long-term, bi-directional communication with the brain? By integrating solutions to these component problems, we are realizing a microchip-based model of hippocampal nonlinear dynamics that can perform the same function as part of the hippocampus. Through bi-directional communication with other neural tissue that normally provides the inputs and outputs to/from a damaged hippocampal area, the biomimetic model could serve as a neural prosthesis. A proof-of-concept will be presented in which the CA3 region of the hippocampal slice is surgically removed and is replaced by a microchip model of CA3 nonlinear dynamics--the "hybrid" hippocampal circuit displays normal physiological properties. How the work in brain slices is being extended to behaving animals also will be described.

  10. Modeling and estimating the jump risk of exchange rates: Applications to RMB

    NASA Astrophysics Data System (ADS)

    Wang, Yiming; Tong, Hanfei

    2008-11-01

    In this paper we propose a new type of continuous-time stochastic volatility model, SVDJ, for the spot exchange rate of RMB and other foreign currencies. In the model, we assume that the change of the exchange rate can be decomposed into two components. One is the normally small-scope innovation driven by the diffusion motion; the other is a large drop or rise engendered by the Poisson counting process. Furthermore, we develop an MCMC method to estimate our model. Empirical results indicate the significant existence of jumps in the exchange rate. Jump components explain a large proportion of the exchange rate change.
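
    The decomposition described above (a small diffusive innovation plus occasional Poisson jumps) can be simulated in a few lines, as sketched below. This is an illustrative forward simulation, not the paper's SVDJ specification or its MCMC estimator; all parameter values are arbitrary.

```python
# Illustrative simulation of an exchange-rate path whose increments combine a
# small diffusive innovation with occasional Poisson jumps.
import numpy as np

rng = np.random.default_rng(0)
T, dt = 1.0, 1 / 250                       # one trading year, daily steps
n = int(T / dt)
mu, sigma = 0.0, 0.03                      # drift and diffusion volatility (annualized, arbitrary)
jump_rate, jump_scale = 5.0, 0.02          # ~5 jumps/year, jump-size standard deviation

x = np.empty(n + 1)
x[0] = np.log(6.8)                         # log spot rate (synthetic starting value)
for t in range(n):
    diffusive = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    jump = rng.normal(0.0, jump_scale) * rng.poisson(jump_rate * dt)
    x[t + 1] = x[t] + diffusive + jump

returns = np.diff(x)
excess_kurtosis = np.mean((returns - returns.mean()) ** 4) / returns.var() ** 2 - 3
print("excess kurtosis of simulated returns:", round(excess_kurtosis, 2))
```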

  11. Space Shuttle propulsion performance reconstruction from flight data

    NASA Technical Reports Server (NTRS)

    Rogers, Robert M.

    1989-01-01

    The application of extended Kalman filtering to estimating Space Shuttle Solid Rocket Booster (SRB) performance (specific impulse) from flight data in a post-flight processing computer program is described. The flight data used include inertial platform acceleration, SRB head pressure, and ground-based radar tracking data. The key feature of this application is the model used for the SRBs, which represents a reference quasi-static internal ballistics model normalized to the propellant burn depth. Dynamic states of mass overboard and propellant burn depth are included in the filter model to account for real-time deviations from the reference model used. Aerodynamic, plume, wind and main engine uncertainties are included.

  12. Statistical optimization of process parameters for lipase-catalyzed synthesis of triethanolamine-based esterquats using response surface methodology in 2-liter bioreactor.

    PubMed

    Masoumi, Hamid Reza Fard; Basri, Mahiran; Kassim, Anuar; Abdullah, Dzulkefly Kuang; Abdollahi, Yadollah; Abd Gani, Siti Salwa; Rezaee, Malahat

    2013-01-01

    Lipase-catalyzed production of a triethanolamine-based esterquat by esterification of oleic acid (OA) with triethanolamine (TEA) in n-hexane was performed in a 2 L stirred-tank reactor. A set of experiments was designed by central composite design for process modeling, and the findings were evaluated statistically. Five independent process variables, including enzyme amount, reaction time, reaction temperature, substrate molar ratio of OA to TEA, and agitation speed, were studied under the conditions designed by the Design Expert software. Experimental data were examined for normality before the data processing stage, and skewness and kurtosis indices were determined. The mathematical model developed was found to be adequate and statistically accurate in predicting the optimum conversion of the product. Response surface methodology with central composite design gave the best performance in this study, and the methodology as a whole has been proven to be adequate for the design and optimization of the enzymatic process.
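
    The response-surface step can be sketched generically as fitting a full quadratic model to designed-experiment data and searching it for the predicted optimum, as below. This is not the Design Expert workflow used in the paper: only two of the five factors are shown, and the data, factor ranges, and coefficients are synthetic assumptions.

```python
# Generic response-surface sketch: quadratic fit over coded factor levels and
# a grid search for the predicted optimum conversion.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Coded levels for two illustrative factors (e.g., enzyme amount, temperature).
X = rng.uniform(-1, 1, size=(30, 2))
y = 80 - 5 * X[:, 0] ** 2 - 3 * X[:, 1] ** 2 + 2 * X[:, 0] + rng.normal(0, 1, 30)  # % conversion

rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, y)

grid = np.array([[a, b] for a in np.linspace(-1, 1, 41) for b in np.linspace(-1, 1, 41)])
best = grid[np.argmax(rsm.predict(grid))]
print("predicted optimum (coded units):", best)
```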

  13. Comparison of pre-processing methods for multiplex bead-based immunoassays.

    PubMed

    Rausch, Tanja K; Schillert, Arne; Ziegler, Andreas; Lüking, Angelika; Zucht, Hans-Dieter; Schulz-Knappe, Peter

    2016-08-11

    High throughput protein expression studies can be performed using bead-based protein immunoassays, such as the Luminex® xMAP® technology. Technical variability is inherent to these experiments and may lead to systematic bias and reduced power. To reduce technical variability, data pre-processing is performed. However, no recommendations exist for the pre-processing of Luminex® xMAP® data. We compared 37 different data pre-processing combinations of transformation and normalization methods in 42 samples on 384 analytes obtained from a multiplex immunoassay based on the Luminex® xMAP® technology. We evaluated the performance of each pre-processing approach with 6 different performance criteria. Three of the performance criteria were plots. All plots were evaluated by 15 independent and blinded readers. Four different combinations of transformation and normalization methods performed well as pre-processing procedures for this bead-based protein immunoassay. The following combinations of transformation and normalization were suitable for pre-processing Luminex® xMAP® data in this study: weighted Box-Cox followed by quantile or robust spline normalization (rsn), asinh transformation followed by loess normalization, and Box-Cox followed by rsn.
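
    One of the well-performing combinations named above (Box-Cox transformation followed by quantile normalization) can be sketched as below. The data are synthetic MFI-like intensities, and this plain per-analyte Box-Cox stands in for the weighted variant used in the paper.

```python
# Minimal sketch: per-analyte Box-Cox transformation followed by quantile
# normalization across samples (every sample forced onto the same sorted profile).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
raw = rng.lognormal(mean=5, sigma=1, size=(42, 384))      # samples x analytes, synthetic

# 1) Box-Cox transform each analyte (requires strictly positive values).
transformed = np.column_stack([stats.boxcox(raw[:, j])[0] for j in range(raw.shape[1])])

# 2) Quantile normalization: replace each sample's values by the mean sorted profile.
ranks = np.argsort(np.argsort(transformed, axis=1), axis=1)
mean_quantiles = np.sort(transformed, axis=1).mean(axis=0)
normalized = mean_quantiles[ranks]

print(normalized.shape, np.allclose(np.sort(normalized, axis=1), mean_quantiles))
```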

  14. Random-Walk Type Model with Fat Tails for Financial Markets

    NASA Astrophysics Data System (ADS)

    Matuttis, Hans-Georg

    Starting from the random-walk model, practices of financial markets are incorporated into the random walk so that fat-tail distributions like those in the high-frequency data of the S&P 500 index are reproduced, even though the individual mechanisms are modeled with normally distributed data. The incorporation of local correlation narrows the distribution for "frequent" events, whereas global correlations due to technical analysis lead to fat tails. Delays of market transactions in the trading process shift the fat-tail probabilities downwards. Such an inclusion of reactions to market fluctuations leads to mini-trends which are distributed with unit variance.

  15. Estimation of Microbial Contamination of Food from Prevalence and Concentration Data: Application to Listeria monocytogenes in Fresh Vegetables▿

    PubMed Central

    Crépet, Amélie; Albert, Isabelle; Dervin, Catherine; Carlin, Frédéric

    2007-01-01

    A normal distribution and a mixture model of two normal distributions in a Bayesian approach using prevalence and concentration data were used to establish the distribution of contamination of the food-borne pathogenic bacteria Listeria monocytogenes in unprocessed and minimally processed fresh vegetables. A total of 165 prevalence studies, including 15 studies with concentration data, were taken from the scientific literature and from technical reports and used for statistical analysis. The predicted mean of the normal distribution of the logarithms of viable L. monocytogenes per gram of fresh vegetables was −2.63 log viable L. monocytogenes organisms/g, and its standard deviation was 1.48 log viable L. monocytogenes organisms/g. These values were determined by considering one contaminated sample in prevalence studies in which samples are in fact negative. This deliberate overestimation is necessary to complete calculations. With the mixture model, the predicted mean of the distribution of the logarithm of viable L. monocytogenes per gram of fresh vegetables was −3.38 log viable L. monocytogenes organisms/g and its standard deviation was 1.46 log viable L. monocytogenes organisms/g. The probabilities of fresh unprocessed and minimally processed vegetables being contaminated with concentrations higher than 1, 2, and 3 log viable L. monocytogenes organisms/g were 1.44, 0.63, and 0.17%, respectively. Introducing a sensitivity rate of 80 or 95% in the mixture model had a small effect on the estimation of the contamination. In contrast, introducing a low sensitivity rate (40%) resulted in marked differences, especially for high percentiles. There was a significantly lower estimation of contamination in the papers and reports of 2000 to 2005 than in those of 1988 to 1999 and a lower estimation of contamination of leafy salads than that of sprouts and other vegetables. The interest of the mixture model for the estimation of microbial contamination is discussed. PMID:17098926
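
    For illustration only, the exceedance probabilities can be read off a fitted normal distribution of log concentrations as below, using the single-normal summary values quoted above. The paper's reported percentages come from the full Bayesian mixture analysis, so this simple plug-in calculation is not expected to reproduce them exactly.

```python
# Plug-in illustration: probability that the log10 concentration exceeds a
# threshold, given the reported normal-distribution summary (mean -2.63,
# sd 1.48 log viable L. monocytogenes per gram).
from scipy.stats import norm

mean_log, sd_log = -2.63, 1.48
for threshold in (1, 2, 3):  # log viable L. monocytogenes organisms/g
    p = norm.sf(threshold, loc=mean_log, scale=sd_log)
    print(f"P(> {threshold} log organisms/g) = {p:.2%}")
```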

  16. Pain and the defense response: structural equation modeling reveals a coordinated psychophysiological response to increasing painful stimulation.

    PubMed

    Donaldson, Gary W; Chapman, C Richard; Nakamura, Yoshi; Bradshaw, David H; Jacobson, Robert C; Chapman, Christopher N

    2003-03-01

    The defense response theory implies that individuals should respond to increasing levels of painful stimulation with correlated increases in affectively mediated psychophysiological responses. This paper employs structural equation modeling to infer the latent processes responsible for correlated growth in the pain report, evoked potential amplitudes, pupil dilation, and skin conductance of 92 normal volunteers who experienced 144 trials of three levels of increasingly painful electrical stimulation. The analysis assumed a two-level model of latent growth as a function of stimulus level. The first level of analysis formulated a nonlinear growth model for each response measure, and allowed intercorrelations among the parameters of these models across individuals. The second level of analysis posited latent process factors to account for these intercorrelations. The best-fitting parsimonious model suggests that two latent processes account for the correlations. One of these latent factors, the activation threshold, determines the initial threshold response, while the other, the response gradient, indicates the magnitude of the coherent increase in response with stimulus level. Collectively, these two second-order factors define the defense response, a broad construct comprising both subjective pain evaluation and physiological mechanisms.

  17. Reliability prediction of ontology-based service compositions using Petri net and time series models.

    PubMed

    Li, Jia; Xia, Yunni; Luo, Xin

    2014-01-01

    OWL-S, one of the most important Semantic Web service ontologies proposed to date, provides a core ontological framework and guidelines for describing the properties and capabilities of Web services in an unambiguous, computer-interpretable form. Predicting the reliability of composite service processes specified in OWL-S allows service users to decide whether the process meets the quantitative quality requirement. In this study, we consider the runtime quality of services to be fluctuating and introduce a dynamic framework to predict the runtime reliability of services specified in OWL-S, employing the non-Markovian stochastic Petri net (NMSPN) and a time series model. The framework includes the following steps: obtaining the historical response-time series of individual service components; fitting these series with an autoregressive moving average (ARMA) model and predicting the future firing rates of service components; mapping the OWL-S process into an NMSPN model; and employing the predicted firing rates as the model input of the NMSPN and calculating the normal completion probability as the reliability estimate. In the case study, a comparison between the static model and our approach based on experimental data is presented, and it is shown that our approach achieves higher prediction accuracy.
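
    The time-series step of the framework can be sketched with statsmodels as below. This is an illustration under assumptions, not the authors' code: the response-time series is synthetic, ARIMA with d=0 serves as the ARMA fit, and the reciprocal of the forecast is used only as a simple stand-in for converting predicted response times into firing rates.

```python
# Sketch: fit an ARMA model to a component's historical response times and
# forecast the next values, which would then feed the stochastic Petri net.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
rt = 200 + 0.1 * np.cumsum(rng.normal(0, 1, 300)) + rng.normal(0, 5, 300)  # response times (ms)

model = ARIMA(rt, order=(1, 0, 1))         # ARMA(1,1), i.e. no differencing
result = model.fit()
future_rt = result.forecast(steps=5)       # predicted response times (ms)
firing_rates = 1.0 / future_rt             # illustrative conversion to firing rates
print(future_rt.round(1), firing_rates)
```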

  18. Mechanisms of Intentional Binding and Sensory Attenuation: The Role of Temporal Prediction, Temporal Control, Identity Prediction, and Motor Prediction

    ERIC Educational Resources Information Center

    Hughes, Gethin; Desantis, Andrea; Waszak, Florian

    2013-01-01

    Sensory processing of action effects has been shown to differ from that of externally triggered stimuli, with respect both to the perceived timing of their occurrence (intentional binding) and to their intensity (sensory attenuation). These phenomena are normally attributed to forward action models, such that when action prediction is consistent…

  19. Storage-Retrieval Processes of Normal and Learning-Disabled Children: A Stages-of-Learning Analysis of Picture-Word Effects.

    ERIC Educational Resources Information Center

    Howe, Mark L.; And Others

    1985-01-01

    A stages-of-learning model was used to examine effects of picture-word manipulation on storage and retrieval differences between disabled and nondisabled grade 2 and 6 children. Results showed that disabled students are poorer at memory tasks and in developing the ability to reliably retrieve information than nondisabled children. (Author/RH)

  20. Random-walk diffusion and drying of porous materials

    NASA Astrophysics Data System (ADS)

    Mehrafarin, M.; Faghihi, M.

    2001-12-01

    Based on random-walk diffusion, a microscopic model for drying is proposed to explain the characteristic features of the drying-rate curve of porous materials. The constant drying-rate period is considered as a normal diffusion process. The transition to the falling-rate regime is attributed to the fractal nature of porous materials which results in crossover to anomalous diffusion.
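
    The distinction drawn above can be illustrated with a toy random walk: the mean-squared displacement of unconstrained walkers grows linearly in time (normal diffusion, constant-rate period), whereas subdiffusion in a fractal pore space is commonly summarized as MSD ~ t**alpha with alpha < 1. The sketch below shows only this scaling contrast and is not the paper's model.

```python
# Toy illustration: MSD of a 2-D random walk grows ~ t (normal diffusion);
# an anomalous-diffusion curve with exponent < 1 is shown schematically.
import numpy as np

rng = np.random.default_rng(0)
steps = rng.choice([-1, 1], size=(2000, 500, 2))       # walkers x time x dimensions
positions = np.cumsum(steps, axis=1)
msd_normal = np.mean(np.sum(positions ** 2, axis=2), axis=0)

t = np.arange(1, 501)
msd_anomalous = t ** 0.6                               # schematic subdiffusive scaling

slope = np.polyfit(np.log(t), np.log(msd_normal), 1)[0]
print("log-log MSD slope of the normal walk:", round(slope, 2))   # close to 1.0
```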

  1. Apigenin exhibits protective effects in a mouse model of d-galactose-induced aging via activating the Nrf2 pathway.

    PubMed

    Sang, Ying; Zhang, Fan; Wang, Heng; Yao, Jianqiao; Chen, Ruichuan; Zhou, Zhengdao; Yang, Kun; Xie, Yan; Wan, Tianfeng; Ding, Hong

    2017-06-21

    The aim of the present research was to study the protective effects and underlying mechanisms of apigenin on d-galactose-induced aging mice. Firstly, apigenin exhibited a potent antioxidant activity in vitro. Secondly, d-galactose was administered by subcutaneous injection once daily for 8 weeks to establish an aging mouse model to investigate the protective effect of apigenin. We found that apigenin supplementation significantly ameliorated aging-related changes such as behavioral impairment, decreased organic index, histopathological injury, increased senescence-associated β-galactosidase (SAβ-gal) activity and advanced glycation end product (AGE) level. Further data showed that apigenin facilitated Nrf2 nuclear translocation both in aging mice and normal young mice, and the Nrf2 expression of normal young mice was higher than that of natural senile mice. In addition, the expressions of Nrf2 downstream gene targets, including HO-1 and NQO1, were also promoted by apigenin administration. Moreover, apigenin also decreased the MDA level and elevated SOD and CAT activities. In conclusion, focusing on the Nrf2 pathway is a suitable strategy to delay the aging process, and apigenin may exert an anti-senescent effect via activating the Nrf2 pathway.

  2. Glomerular epithelial foot processes in normal man and rats. Distribution of true width and its intra- and inter-individual variation.

    PubMed

    Gundersen, H J; Seefeldt, T; Osterby, R

    1980-01-01

    The width of individual glomerular epithelial foot processes appears very different on electron micrographs. A method for obtaining distributions of the true width of foot processes from that of their apparent width on electron micrographs has been developed, based on geometric probability theory pertaining to a specific geometric model. Analyses of foot process width in humans and rats show a remarkable inter-individual invariance, implying rigid control and therefore great biological significance of foot process width or a derivative thereof. The very low inter-individual variation of the true width, shown in the present paper, makes it possible to demonstrate slight changes in rather small groups of patients or experimental animals.

  3. Modeling the effect of channel number and interaction on consonant recognition in a cochlear implant peak-picking strategy.

    PubMed

    Verschuur, Carl

    2009-03-01

    Difficulties in speech recognition experienced by cochlear implant users may be attributed both to information loss caused by signal processing and to information loss associated with the interface between the electrode array and auditory nervous system, including cross-channel interaction. The objective of the work reported here was to attempt to partial out the relative contribution of these different factors to consonant recognition. This was achieved by comparing patterns of consonant feature recognition as a function of channel number and presence/absence of background noise in users of the Nucleus 24 device with normal hearing subjects listening to acoustic models that mimicked processing of that device. Additionally, in the acoustic model experiment, a simulation of cross-channel spread of excitation, or "channel interaction," was varied. Results showed that acoustic model experiments were highly correlated with patterns of performance in better-performing cochlear implant users. Deficits to consonant recognition in this subgroup could be attributed to cochlear implant processing, whereas channel interaction played a much smaller role in determining performance errors. The study also showed that large changes to channel number in the Advanced Combination Encoder signal processing strategy led to no substantial changes in performance.

  4. Mimicking aphasic semantic errors in normal speech production: evidence from a novel experimental paradigm.

    PubMed

    Hodgson, Catherine; Lambon Ralph, Matthew A

    2008-01-01

    Semantic errors are commonly found in semantic dementia (SD) and some forms of stroke aphasia and provide insights into semantic processing and speech production. Low error rates are found in standard picture naming tasks in normal controls. In order to increase error rates and thus provide an experimental model of aphasic performance, this study utilised a novel method: tempo picture naming. Experiment 1 showed that, compared to standard deadline naming tasks, participants made more errors on the tempo picture naming tasks. Further, RTs were longer and more errors were produced to living items than non-living items, a pattern seen in both semantic dementia and semantically-impaired stroke aphasic patients. Experiment 2 showed that providing the initial phoneme as a cue enhanced performance whereas providing an incorrect phonemic cue further reduced performance. These results support the contention that the tempo picture naming paradigm reduces the time allowed for controlled semantic processing, causing increased error rates. This experimental procedure would, therefore, appear to mimic the performance of aphasic patients with multi-modal semantic impairment that results from poor semantic control rather than the degradation of semantic representations observed in semantic dementia [Jefferies, E. A., & Lambon Ralph, M. A. (2006). Semantic impairment in stroke aphasia vs. semantic dementia: A case-series comparison. Brain, 129, 2132-2147]. Further implications for theories of semantic cognition and models of speech processing are discussed.

  5. Expression and activity levels of chymase in mast cells of burn wound tissues increase during the healing process in a hamster model.

    PubMed

    Dong, Xianglin; Xu, Tao; Ma, Shaolin; Wen, Hao

    2015-06-01

    The present study aimed to investigate the changes in the expression levels and activity of mast cell chymase in the process of burn wound healing in a hamster model of deep second-degree burn. The hamster model was established by exposing a ~3 cm diameter area of bare skin to hot water (75°C) for 0, 6, 8, 10 or 12 sec. Tissue specimens were collected 24 h after burning and histological analysis revealed that hot water contact for 12 sec was required to produce a deep second-degree burn. Quantitative polymerase chain reaction and a radioimmunoassay were used to determine the changes in chymase mRNA expression levels and activity. The mRNA expression levels and activity of chymase were increased in the burn wound tissues when compared with the normal skin. However, no statistically significant differences were observed in mast cell chymase activity amongst the various post-burn stages. Chymase mRNA expression levels peaked at day 1 post-burn, subsequently decreasing at days 3 and 7 post-burn and finally increasing again at day 14 post-burn. In summary, a hamster model of deep second-degree burn can be created by bringing the skin into contact with water at 75°C for 12 sec. Furthermore, the mRNA expression levels and activity of chymase in the burn wound tissues increased when compared with those in normal skin tissues.

  6. Expression and activity levels of chymase in mast cells of burn wound tissues increase during the healing process in a hamster model

    PubMed Central

    DONG, XIANGLIN; XU, TAO; MA, SHAOLIN; WEN, HAO

    2015-01-01

    The present study aimed to investigate the changes in the expression levels and activity of mast cell chymase in the process of burn wound healing in a hamster model of deep second-degree burn. The hamster model was established by exposing a ~3 cm diameter area of bare skin to hot water (75°C) for 0, 6, 8, 10 or 12 sec. Tissue specimens were collected 24 h after burning and histological analysis revealed that hot water contact for 12 sec was required to produce a deep second-degree burn. Quantitative polymerase chain reaction and a radioimmunoassay were used to determine the changes in chymase mRNA expression levels and activity. The mRNA expression levels and activity of chymase were increased in the burn wound tissues when compared with the normal skin. However, no statistically significant differences were observed in mast cell chymase activity amongst the various post-burn stages. Chymase mRNA expression levels peaked at day 1 post-burn, subsequently decreasing at days 3 and 7 post-burn and finally increasing again at day 14 post-burn. In summary, a hamster model of deep second-degree burn can be created by bringing the skin into contact with water at 75°C for 12 sec. Furthermore, the mRNA expression levels and activity of chymase in the burn wound tissues increased when compared with those in normal skin tissues. PMID:26136958

  7. The Mechanisms of Psychedelic Visionary Experiences: Hypotheses from Evolutionary Psychology

    PubMed Central

    Winkelman, Michael J.

    2017-01-01

    Neuropharmacological effects of psychedelics have profound cognitive, emotional, and social effects that inspired the development of cultures and religions worldwide. Findings that psychedelics objectively and reliably produce mystical experiences press the question of the neuropharmacological mechanisms by which these highly significant experiences are produced by exogenous neurotransmitter analogs. Humans have a long evolutionary relationship with psychedelics, a consequence of psychedelics' selective effects for human cognitive abilities, exemplified in the information rich visionary experiences. Objective evidence that psychedelics produce classic mystical experiences, coupled with the finding that hallucinatory experiences can be induced by many non-drug mechanisms, illustrates the need for a common model of visionary effects. Several models implicate disturbances of normal regulatory processes in the brain as the underlying mechanisms responsible for the similarities of visionary experiences produced by psychedelic and other methods for altering consciousness. Similarities in psychedelic-induced visionary experiences and those produced by practices such as meditation and hypnosis and pathological conditions such as epilepsy indicate the need for a general model explaining visionary experiences. Common mechanisms underlying diverse alterations of consciousness involve the disruption of normal functions of the prefrontal cortex and default mode network (DMN). This interruption of ordinary control mechanisms allows for the release of thalamic and other lower brain discharges that stimulate a visual information representation system and release the effects of innate cognitive functions and operators. Converging forms of evidence support the hypothesis that the source of psychedelic experiences involves the emergence of these innate cognitive processes of lower brain systems, with visionary experiences resulting from the activation of innate processes based in the mirror neuron system (MNS). PMID:29033783

  8. Applying a Lifespan Developmental Perspective to Chronic Pain: Pediatrics to Geriatrics.

    PubMed

    Walco, Gary A; Krane, Elliot J; Schmader, Kenneth E; Weiner, Debra K

    2016-09-01

    An ideal taxonomy of chronic pain would be applicable to people of all ages. Developmental sciences focus on lifespan developmental approaches, and view the trajectory of processes in the life course from birth to death. In this article we provide a review of lifespan developmental models, describe normal developmental processes that affect pain processing, and identify deviations from those processes that lead to stable individual differences of clinical interest, specifically the development of chronic pain syndromes. The goals of this review were 1) to unify what are currently separate purviews of "pediatric pain," "adult pain," and "geriatric pain," and 2) to generate models so that specific elements of the chronic pain taxonomy might include important developmental considerations. A lifespan developmental model is applied to the forthcoming Analgesic, Anesthetic, and Addiction Clinical Trial Translations, Innovations, Opportunities, and Networks-American Pain Society Pain Taxonomy to ascertain the degree to which general "adult" descriptions apply to pediatric and geriatric populations, or if age- or development-related considerations need to be invoked. Copyright © 2016. Published by Elsevier Inc.

  9. Native Silk Feedstock as a Model Biopolymer: A Rheological Perspective.

    PubMed

    Laity, Peter R; Holland, Chris

    2016-08-08

    Variability in silk's rheology is often regarded as an impediment to understanding or successfully copying the natural spinning process. We have previously reported such variability in unspun native silk extracted straight from the gland of the domesticated silkworm Bombyx mori and discounted classical explanations such as differences in molecular weight and concentration. We now report that variability in oscillatory measurements can be reduced onto a simple master-curve through normalizing with respect to the crossover. This remarkable result suggests that differences between silk feedstocks are rheologically simple and not as complex as originally thought. By comparison, solutions of poly(ethylene-oxide) and hydroxypropyl-methyl-cellulose showed similar normalization behavior; however, the resulting curves were broader than for silk, suggesting greater polydispersity in the (semi)synthetic materials. Thus, we conclude Nature may in fact produce polymer feedstocks that are more consistent than typical man-made counterparts as a model for future rheological investigations.

  10. Numerical solutions of Navier-Stokes equations for compressible turbulent two/three dimensional flows in terminal shock region of an inlet/diffuser

    NASA Technical Reports Server (NTRS)

    Liu, N. S.; Shamroth, S. J.; Mcdonald, H.

    1983-01-01

    The multidimensional, ensemble-averaged, compressible, time-dependent Navier-Stokes equations, in conjunction with a mixing length turbulence model and a shock-capturing technique, were used to study terminal-shock-type flows in the various flight regimes occurring in a diffuser/inlet model. The numerical scheme for solving the governing equations is based on a linearized block implicit approach, and the following high-Reynolds-number calculations were carried out: (1) 2-D, steady, subsonic; (2) 2-D, steady, transonic with normal shock; (3) 2-D, steady, supersonic with terminal shock; (4) 2-D, transient process of shock development; and (5) 3-D, steady, transonic with normal shock. The numerical results obtained for the 2-D and 3-D transonic shocked flows were compared with corresponding experimental data; the calculated wall static pressure distributions agree well with the measured data.

  11. Cannabinoid mitigation of neuronal morphological change important to development and learning: insight from a zebra finch model of psychopharmacology.

    PubMed

    Soderstrom, Ken; Gilbert, Marcoita T

    2013-03-19

    Normal CNS development proceeds through late-postnatal stages of adolescent development. The activity-dependence of this development underscores the significance of CNS-active drug exposure prior to completion of brain maturation. Exogenous modulation of signaling important in regulating normal development is of particular concern. This mini-review presents a summary of the accumulated behavioral, physiological and biochemical evidence supporting such a key regulatory role for endocannabinoid signaling during late-postnatal CNS development. Our focus is on the data obtained using a unique zebra finch model of developmental psychopharmacology. This animal has allowed investigation of neuronal morphological effects essential to establishment and maintenance of neural circuitry, including processes related to synaptogenesis and dendritic spine dynamics. Altered neurophysiology that follows exogenous cannabinoid exposure during adolescent development has the potential to persistently alter cognition, learning and memory. Copyright © 2012 Elsevier Inc. All rights reserved.

  12. Effect of acute pancreatitis on the pharmacokinetics of Chinese herbal ointment Liu-He-Dan in anaesthetized rats.

    PubMed

    Zhao, Xian-Lin; Xiang, Jin; Wan, Mei-Hua; Yu, Qin; Chen, Wei-wei; Chen, Guang-Yuan; Tang, Wen-Fu

    2013-01-09

    The Chinese herbal preparation Liu-He-Dan ointment has been used externally for acute pancreatitis for many years in West China. The aim of this study was to investigate the effect of acute pancreatitis on the pharmacokinetics of Liu-He-Dan ointment applied externally to the abdomen in rats. Twelve male Sprague-Dawley rats were randomly divided into an acute pancreatitis model group (n=6) and a normal control group (n=6). Chinese herbal Liu-He-Dan ointment was applied externally to the abdomen. Emodin, rhein, aloe emodin, physcion and chrysophanol in plasma and pancreas (at 48 h) were detected and quantified by liquid chromatography-tandem mass spectrometry. Plasma amylase was determined with the iodide method. Among the five components, only emodin, aloe emodin and physcion from Liu-He-Dan were detected in plasma and pancreas. The absorption of each component tended to decrease in the acute pancreatitis group after topical treatment with Liu-He-Dan ointment on the rats' abdomen. The T(max), C(max) and area under the curve (AUC) of each component were distinctly lower in the AP group than in the normal group (p<0.05). However, the T(1/2α) and mean retention time (MRT) of emodin were longer in the acute pancreatitis group than in the normal group (p<0.05). There was no statistical difference in the MRT of aloe emodin and physcion between the two groups. Emodin could be detected in the pancreas of all rats at 48 h in both groups, while its mean pancreatic concentration was higher in the acute pancreatitis model group than in the normal group (0.91 ± 0.68 vs. 0.41 ± 0.36, respectively). Physcion could be detected in the pancreas of most acute pancreatitis model rats, but not in normal rats. Aloe emodin was found in the pancreas of all acute pancreatitis model rats but in only one rat from the normal group. The level of amylase in the Liu-He-Dan group was obviously lower than that in the AP model group (p=0.0055). We conclude that acute pancreatitis may significantly affect the pharmacokinetics of Liu-He-Dan when it is applied externally to the abdomen, which suggests that dosage modification is needed in AP. However, acute pancreatitis appears to promote the distribution of the detected components into the pancreas, and the ointment could help relieve pancreatitis. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  13. The Fruit Fly Drosophila melanogaster as a Model for Aging Research.

    PubMed

    Brandt, Annely; Vilcinskas, Andreas

    2013-01-01

    Average human life expectancy is increasing and so is the impact on society of aging and age-related diseases. Here we highlight recent advances in the diverse and multidisciplinary field of aging research, focusing on the fruit fly Drosophila melanogaster, an excellent model system in which to dissect the genetic and molecular basis of the aging processes. The conservation of human disease genes in D. melanogaster allows the functional analysis of orthologues implicated in human aging and age-related diseases. D. melanogaster models have been developed for a variety of age-related processes and disorders, including stem cell decline, Alzheimer's disease, and cardiovascular deterioration. Understanding the detailed molecular events involved in normal aging and age-related diseases could facilitate the development of strategies and treatments that reduce their impact, thus improving human health and increasing longevity.

  14. On modeling animal movements using Brownian motion with measurement error.

    PubMed

    Pozdnyakov, Vladimir; Meyer, Thomas; Wang, Yu-Bo; Yan, Jun

    2014-02-01

    Modeling animal movements with Brownian motion (or more generally by a Gaussian process) has a long tradition in ecological studies. The recent Brownian bridge movement model (BBMM), which incorporates measurement errors, has been quickly adopted by ecologists because of its simplicity and tractability. We discuss some nontrivial properties of the discrete-time stochastic process that results from observing a Brownian motion with added normal noise at discrete times. In particular, we demonstrate that the observed sequence of random variables is not Markov. Consequently the expected occupation time between two successively observed locations does not depend on just those two observations; the whole path must be taken into account. Nonetheless, the exact likelihood function of the observed time series remains tractable; it requires only sparse matrix computations. The likelihood-based estimation procedure is described in detail and compared to the BBMM estimation.
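
    As a rough, hedged illustration of why this likelihood stays tractable (a dense-covariance sketch, not the authors' sparse-matrix implementation), the observations Y_i = B(t_i) + eps_i are jointly Gaussian, so the exact log-likelihood only requires the covariance matrix sigma2*min(t_i, t_j) + tau2*I. The Python below assumes the path starts at zero at time zero and uses illustrative parameter names.

        import numpy as np
        from scipy.stats import multivariate_normal

        def bm_noise_loglik(times, obs, sigma2, tau2):
            # Y_i = B(t_i) + eps_i, with B a Brownian motion of variance rate
            # sigma2 started at zero and eps_i i.i.d. N(0, tau2) noise, gives
            # Cov(Y_i, Y_j) = sigma2 * min(t_i, t_j) + tau2 * 1{i == j}.
            t = np.asarray(times, dtype=float)
            cov = sigma2 * np.minimum.outer(t, t) + tau2 * np.eye(len(t))
            rv = multivariate_normal(mean=np.zeros(len(t)), cov=cov)
            return rv.logpdf(np.asarray(obs, dtype=float))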

  15. Corticocortical feedback increases the spatial extent of normalization.

    PubMed

    Nassi, Jonathan J; Gómez-Laberge, Camille; Kreiman, Gabriel; Born, Richard T

    2014-01-01

    Normalization has been proposed as a canonical computation operating across different brain regions, sensory modalities, and species. It provides a good phenomenological description of non-linear response properties in primary visual cortex (V1), including the contrast response function and surround suppression. Despite its widespread application throughout the visual system, the underlying neural mechanisms remain largely unknown. We recently observed that corticocortical feedback contributes to surround suppression in V1, raising the possibility that feedback acts through normalization. To test this idea, we characterized area summation and contrast response properties in V1 with and without feedback from V2 and V3 in alert macaques and applied a standard normalization model to the data. Area summation properties were well explained by a form of divisive normalization, which computes the ratio between a neuron's driving input and the spatially integrated activity of a "normalization pool." Feedback inactivation reduced surround suppression by shrinking the spatial extent of the normalization pool. This effect was independent of the gain modulation thought to mediate the influence of contrast on area summation, which remained intact during feedback inactivation. Contrast sensitivity within the receptive field center was also unaffected by feedback inactivation, providing further evidence that feedback participates in normalization independent of the circuit mechanisms involved in modulating contrast gain and saturation. These results suggest that corticocortical feedback contributes to surround suppression by increasing the visuotopic extent of normalization and, via this mechanism, feedback can play a critical role in contextual information processing.
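
    A minimal sketch of the divisive normalization computation described here may make the model concrete: each unit's driving input, raised to an exponent, is divided by the spatially weighted activity of its "normalization pool" plus a semi-saturation constant. The gain, exponent, sigma and pooling weights below are illustrative assumptions, not the study's fitted values; widening the spatial footprint of pool_weights plays the role of the feedback-dependent increase in the pool's extent.

        import numpy as np

        def divisive_normalization(drive, pool_weights, sigma=1.0, n=2.0, gain=1.0):
            # pool_weights is a (units x units) spatial weighting matrix; each
            # unit's exponentiated drive is divided by its pooled activity
            # plus the semi-saturation constant sigma.
            d = np.asarray(drive, dtype=float) ** n
            pool = np.asarray(pool_weights, dtype=float) @ d
            return gain * d / (sigma ** n + pool)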

  16. Corticocortical feedback increases the spatial extent of normalization

    PubMed Central

    Nassi, Jonathan J.; Gómez-Laberge, Camille; Kreiman, Gabriel; Born, Richard T.

    2014-01-01

    Normalization has been proposed as a canonical computation operating across different brain regions, sensory modalities, and species. It provides a good phenomenological description of non-linear response properties in primary visual cortex (V1), including the contrast response function and surround suppression. Despite its widespread application throughout the visual system, the underlying neural mechanisms remain largely unknown. We recently observed that corticocortical feedback contributes to surround suppression in V1, raising the possibility that feedback acts through normalization. To test this idea, we characterized area summation and contrast response properties in V1 with and without feedback from V2 and V3 in alert macaques and applied a standard normalization model to the data. Area summation properties were well explained by a form of divisive normalization, which computes the ratio between a neuron's driving input and the spatially integrated activity of a “normalization pool.” Feedback inactivation reduced surround suppression by shrinking the spatial extent of the normalization pool. This effect was independent of the gain modulation thought to mediate the influence of contrast on area summation, which remained intact during feedback inactivation. Contrast sensitivity within the receptive field center was also unaffected by feedback inactivation, providing further evidence that feedback participates in normalization independent of the circuit mechanisms involved in modulating contrast gain and saturation. These results suggest that corticocortical feedback contributes to surround suppression by increasing the visuotopic extent of normalization and, via this mechanism, feedback can play a critical role in contextual information processing. PMID:24910596

  17. Serum from calorie-restricted animals delays senescence and extends the lifespan of normal human fibroblasts in vitro.

    PubMed

    de Cabo, Rafael; Liu, Lijuan; Ali, Ahmed; Price, Nathan; Zhang, Jing; Wang, Mingyi; Lakatta, Edward; Irusta, Pablo M

    2015-03-01

    The cumulative effects of cellular senescence and cell loss over time in various tissues and organs are considered major contributing factors to the ageing process. In various organisms, caloric restriction (CR) slows ageing and increases lifespan, at least in part, by activating nicotinamide adenine dinucleotide (NAD+)-dependent protein deacetylases of the sirtuin family. Here, we use an in vitro model of CR to study the effects of this dietary regime on replicative senescence, cellular lifespan and modulation of the SIRT1 signaling pathway in normal human diploid fibroblasts. We found that serum from calorie-restricted animals was able to delay senescence and significantly increase replicative lifespan in these cells, when compared to serum from ad libitum fed animals. These effects correlated with CR-mediated increases in SIRT1 and decreases in p53 expression levels. In addition, we show that manipulation of SIRT1 levels by either over-expression or siRNA-mediated knockdown resulted in delayed and accelerated cellular senescence, respectively. Our results demonstrate that CR can delay senescence and increase replicative lifespan of normal human diploid fibroblasts in vitro and suggest that SIRT1 plays an important role in these processes.

  18. Serum from calorie-restricted animals delays senescence and extends the lifespan of normal human fibroblasts in vitro

    PubMed Central

    Ali, Ahmed; Price, Nathan; Zhang, Jing; Wang, Mingyi; Lakatta, Edward; Irusta, Pablo M.

    2015-01-01

    The cumulative effects of cellular senescence and cell loss over time in various tissues and organs are considered major contributing factors to the ageing process. In various organisms, caloric restriction (CR) slows ageing and increases lifespan, at least in part, by activating nicotinamide adenine dinucleotide (NAD+)-dependent protein deacetylases of the sirtuin family. Here, we use an in vitro model of CR to study the effects of this dietary regime on replicative senescence, cellular lifespan and modulation of the SIRT1 signaling pathway in normal human diploid fibroblasts. We found that serum from calorie-restricted animals was able to delay senescence and significantly increase replicative lifespan in these cells, when compared to serum from ad libitum fed animals. These effects correlated with CR-mediated increases in SIRT1 and decreases in p53 expression levels. In addition, we show that manipulation of SIRT1 levels by either over-expression or siRNA-mediated knockdown resulted in delayed and accelerated cellular senescence, respectively. Our results demonstrate that CR can delay senescence and increase replicative lifespan of normal human diploid fibroblasts in vitro and suggest that SIRT1 plays an important role in these processes. PMID:25855056

  19. Verbal Working Memory in Children With Cochlear Implants

    PubMed Central

    Caldwell-Tarr, Amanda; Low, Keri E.; Lowenstein, Joanna H.

    2017-01-01

    Purpose: Verbal working memory in children with cochlear implants and children with normal hearing was examined. Participants: Ninety-three fourth graders (47 with normal hearing, 46 with cochlear implants) participated, all of whom were in a longitudinal study and had working memory assessed 2 years earlier. Method: A dual-component model of working memory was adopted, and a serial recall task measured storage and processing. Potential predictor variables were phonological awareness, vocabulary knowledge, nonverbal IQ, and several treatment variables. Potential dependent functions were literacy, expressive language, and speech-in-noise recognition. Results: Children with cochlear implants showed deficits in storage and processing, similar in size to those at second grade. Predictors of verbal working memory differed across groups: Phonological awareness explained the most variance in children with normal hearing; vocabulary explained the most variance in children with cochlear implants. Treatment variables explained little of the variance. Where potentially dependent functions were concerned, verbal working memory accounted for little variance once the variance explained by other predictors was removed. Conclusions: The verbal working memory deficits of children with cochlear implants arise due to signal degradation, which limits their abilities to acquire phonological awareness. That hinders their abilities to store items using a phonological code. PMID:29075747

  20. Kalman/Map filtering-aided fast normalized cross correlation-based Wi-Fi fingerprinting location sensing.

    PubMed

    Sun, Yongliang; Xu, Yubin; Li, Cheng; Ma, Lin

    2013-11-13

    A Kalman/map filtering (KMF)-aided fast normalized cross correlation (FNCC)-based Wi-Fi fingerprinting location sensing system is proposed in this paper. Compared with conventional neighbor selection algorithms that calculate localization results with received signal strength (RSS) mean samples, the proposed FNCC algorithm makes use of all the on-line RSS samples and reference point RSS variations to achieve higher fingerprinting accuracy. The FNCC computes efficiently while maintaining the same accuracy as the basic normalized cross correlation. Additionally, a KMF is also proposed to process fingerprinting localization results. It employs a new map matching algorithm to nonlinearize the linear location prediction process of Kalman filtering (KF) that takes advantage of spatial proximities of consecutive localization results. With a calibration model integrated into an indoor map, the map matching algorithm corrects unreasonable prediction locations of the KF according to the building interior structure. Thus, more accurate prediction locations are obtained. Using these locations, the KMF considerably improves fingerprinting algorithm performance. Experimental results demonstrate that the FNCC algorithm with reduced computational complexity outperforms other neighbor selection algorithms and the KMF effectively improves location sensing accuracy by using indoor map information and spatial proximities of consecutive localization results.
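
    The core fingerprint-matching step can be sketched in Python as basic normalized cross correlation between an on-line RSS sample and each stored reference-point fingerprint; the fast FNCC variant and the Kalman/map filtering stage are not reproduced here, and the function and variable names are hypothetical.

        import numpy as np

        def ncc(a, b):
            # Basic normalized cross correlation between two RSS vectors.
            a = (a - a.mean()) / a.std()
            b = (b - b.mean()) / b.std()
            return float(np.mean(a * b))

        def best_reference_point(online_rss, radio_map):
            # radio_map maps reference-point ids to stored RSS fingerprint
            # vectors; return the id whose fingerprint correlates best with
            # the on-line RSS sample.
            x = np.asarray(online_rss, dtype=float)
            scores = {rp: ncc(x, np.asarray(fp, dtype=float))
                      for rp, fp in radio_map.items()}
            return max(scores, key=scores.get)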

  1. Kalman/Map Filtering-Aided Fast Normalized Cross Correlation-Based Wi-Fi Fingerprinting Location Sensing

    PubMed Central

    Sun, Yongliang; Xu, Yubin; Li, Cheng; Ma, Lin

    2013-01-01

    A Kalman/map filtering (KMF)-aided fast normalized cross correlation (FNCC)-based Wi-Fi fingerprinting location sensing system is proposed in this paper. Compared with conventional neighbor selection algorithms that calculate localization results with received signal strength (RSS) mean samples, the proposed FNCC algorithm makes use of all the on-line RSS samples and reference point RSS variations to achieve higher fingerprinting accuracy. The FNCC computes efficiently while maintaining the same accuracy as the basic normalized cross correlation. Additionally, a KMF is also proposed to process fingerprinting localization results. It employs a new map matching algorithm to nonlinearize the linear location prediction process of Kalman filtering (KF) that takes advantage of spatial proximities of consecutive localization results. With a calibration model integrated into an indoor map, the map matching algorithm corrects unreasonable prediction locations of the KF according to the building interior structure. Thus, more accurate prediction locations are obtained. Using these locations, the KMF considerably improves fingerprinting algorithm performance. Experimental results demonstrate that the FNCC algorithm with reduced computational complexity outperforms other neighbor selection algorithms and the KMF effectively improves location sensing accuracy by using indoor map information and spatial proximities of consecutive localization results. PMID:24233027

  2. Pre-Test Assessment of the Use Envelope of the Normal Force of a Wind Tunnel Strain-Gage Balance

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.

    2016-01-01

    The relationship between the aerodynamic lift force generated by a wind tunnel model, the model weight, and the measured normal force of a strain-gage balance is investigated to better understand the expected use envelope of the normal force during a wind tunnel test. First, the fundamental relationship between normal force, model weight, lift curve slope, model reference area, dynamic pressure, and angle of attack is derived. Then, based on this fundamental relationship, the use envelope of a balance is examined for four typical wind tunnel test cases. The first case looks at the use envelope of the normal force during the test of a light wind tunnel model at high subsonic Mach numbers. The second case examines the use envelope of the normal force during the test of a heavy wind tunnel model in an atmospheric low-speed facility. The third case reviews the use envelope of the normal force during the test of a floor-mounted semi-span model. The fourth case discusses the normal force characteristics during the test of a rotated full-span model. The wind tunnel model's lift-to-weight ratio is introduced as a new parameter that may be used for a quick pre-test assessment of the use envelope of the normal force of a balance. The parameter is derived as a function of the lift coefficient, the dimensionless dynamic pressure, and the dimensionless model weight. Lower and upper bounds of the use envelope of a balance are defined using the model's lift-to-weight ratio. Finally, data from a pressurized wind tunnel is used to illustrate both application and interpretation of the model's lift-to-weight ratio.
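
    As a hedged, textbook-level illustration of the quantities involved (not the report's full dimensionless derivation), the lift-to-weight ratio can be sketched from the basic relation L = C_L q S, where q is the dynamic pressure and S the model reference area; the function and argument names below are illustrative.

        def lift_to_weight_ratio(c_lift, dyn_pressure, ref_area, model_weight):
            # L/W with lift L = C_L * q * S; model_weight is the model weight
            # expressed in the same force units as the lift.
            return c_lift * dyn_pressure * ref_area / model_weight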

  3. Modeling of 2D diffusion processes based on microscopy data: parameter estimation and practical identifiability analysis.

    PubMed

    Hock, Sabrina; Hasenauer, Jan; Theis, Fabian J

    2013-01-01

    Diffusion is a key component of many biological processes such as chemotaxis, developmental differentiation and tissue morphogenesis. Recently, it has become possible to assess the spatial gradients caused by diffusion in vitro and in vivo using microscopy-based imaging techniques. The resulting time series of two-dimensional, high-resolution images, in combination with mechanistic models, enable the quantitative analysis of the underlying mechanisms. However, such a model-based analysis is still challenging due to measurement noise and sparse observations, which result in uncertainties of the model parameters. We introduce a likelihood function for image-based measurements with log-normally distributed noise. Based upon this likelihood function we formulate the maximum likelihood estimation problem, which is solved using PDE-constrained optimization methods. To assess the uncertainty and practical identifiability of the parameters we introduce profile likelihoods for diffusion processes. As proof of concept, we model certain aspects of the guidance of dendritic cells towards lymphatic vessels, an example of haptotaxis. Using a realistic set of artificial measurement data, we estimate the five kinetic parameters of this model and compute profile likelihoods. Our novel approach for the estimation of model parameters from image data as well as the proposed identifiability analysis approach is widely applicable to diffusion processes. The profile likelihood based method provides more rigorous uncertainty bounds in contrast to local approximation methods.
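
    A minimal sketch of the log-normal measurement likelihood described above, assuming independent observations and illustrative names, without the PDE-constrained optimization machinery, might look like the following.

        import numpy as np

        def lognormal_negloglik(observed, predicted, sigma):
            # Negative log-likelihood for strictly positive measurements with
            # multiplicative, log-normally distributed noise around the model
            # prediction: log(y_i) ~ Normal(log(predicted_i), sigma^2).
            y = np.asarray(observed, dtype=float)
            mu = np.log(np.asarray(predicted, dtype=float))
            return float(np.sum(np.log(y * sigma * np.sqrt(2.0 * np.pi))
                                + (np.log(y) - mu) ** 2 / (2.0 * sigma ** 2)))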

  4. Pareto genealogies arising from a Poisson branching evolution model with selection.

    PubMed

    Huillet, Thierry E

    2014-02-01

    We study a class of coalescents derived from a sampling procedure out of N i.i.d. Pareto(α) random variables, normalized by their sum, including β-size-biasing on total length effects (β < α). Depending on the range of α we derive the large-N limit coalescent structure, leading either to a discrete-time Poisson-Dirichlet (α, -β) Ξ-coalescent (α ∈ [0, 1)), or to a family of continuous-time Beta (2 - α, α - β) Λ-coalescents (α ∈ [1, 2)), or to the Kingman coalescent (α ≥ 2). We indicate that this class of coalescent processes (and their scaling limits) may be viewed as the genealogical processes of some forward-in-time evolving branching population models including selection effects. In such constant-size population models, the reproduction step, which is based on a fitness-dependent Poisson Point Process with scaling power-law(α) intensity, is coupled to a selection step consisting of sorting out the N fittest individuals issued from the reproduction step.
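
    The sampling procedure underlying these coalescents can be sketched as drawing N i.i.d. Pareto(α) variables and normalizing by their sum; the β-size-biasing and the limiting constructions are not reproduced, and the function name is illustrative.

        import numpy as np

        def pareto_offspring_frequencies(n, alpha, seed=None):
            # numpy's pareto draws the Lomax form, so adding 1 gives a classic
            # Pareto(alpha) variable supported on [1, inf); the normalized
            # values are the random frequencies from which a sample is drawn.
            rng = np.random.default_rng(seed)
            x = rng.pareto(alpha, size=n) + 1.0
            return x / x.sum()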

  5. Microstructure and growth model for rice-hull-derived SiC whiskers

    NASA Technical Reports Server (NTRS)

    Nutt, Steven R.

    1988-01-01

    The microstructure of silicon carbide whiskers grown from rice hulls has been studied using methods of high-resolution analytical electron microscopy. Small, partially crystalline inclusions (about 10 nm) containing calcium, manganese, and oxygen are concentrated in whisker core regions, while peripheral regions are generally inclusion free. The distinct microphase distribution is evidence of a two-stage growth process in which the core region grows first, followed by normal growth toward whisker sides. Partial dislocations extend radially from the core region to the surface and tend to be paired in V-shaped configurations. Whisker surfaces exhibit microroughness due to a tendency to develop small facets on close-packed planes. The microstructural data obtained from TEM observations are used as a basis for discussion of the mechanisms involved in whisker growth, and a model of the growth process is proposed. The model includes a two-dimensional growth mechanism involving vapor, liquid, and solid phases, although it is significantly different from the classical vapor-liquid-solid (VLS) process of whisker growth.

  6. Engineering epithelial-stromal interactions in vitro for toxicology assessment.

    PubMed

    Belair, David G; Abbott, Barbara D

    2017-05-01

    Crosstalk between epithelial and stromal cells drives the morphogenesis of ectodermal organs during development and promotes normal mature adult epithelial tissue homeostasis. Epithelial-stromal interactions (ESIs) have historically been examined using mammalian models and ex vivo tissue recombination. Although these approaches have elucidated signaling mechanisms underlying embryonic morphogenesis processes and adult mammalian epithelial tissue function, they are limited by the availability of tissue, low throughput, and human developmental or physiological relevance. In this review, we describe how bioengineered ESIs, using either human stem cells or co-cultures of human primary epithelial and stromal cells, have enabled the development of human in vitro epithelial tissue models that recapitulate the architecture, phenotype, and function of adult human epithelial tissues. We discuss how the strategies used to engineer mature epithelial tissue models in vitro could be extrapolated to instruct the design of organotypic culture models that can recapitulate the structure of embryonic ectodermal tissues and enable the in vitro assessment of events critical to organ/tissue morphogenesis. Given the importance of ESIs towards normal epithelial tissue development and function, such models present a unique opportunity for toxicological screening assays to incorporate ESIs to assess the impact of chemicals on mature and developing epidermal tissues. Published by Elsevier B.V.

  7. Engineering epithelial-stromal interactions in vitro for toxicology assessment

    PubMed Central

    Belair, David G.; Abbott, Barbara D.

    2018-01-01

    Crosstalk between epithelial and stromal cells drives the morphogenesis of ectodermal organs during development and promotes normal mature adult epithelial tissue homeostasis. Epithelial-stromal interactions (ESIs) have historically been examined using mammalian models and ex vivo tissue recombination. Although these approaches have elucidated signaling mechanisms underlying embryonic morphogenesis processes and adult mammalian epithelial tissue function, they are limited by the availability of tissue, low throughput, and human developmental or physiological relevance. In this review, we describe how bioengineered ESIs, using either human stem cells or co-cultures of human primary epithelial and stromal cells, have enabled the development of human in vitro epithelial tissue models that recapitulate the architecture, phenotype, and function of adult human epithelial tissues. We discuss how the strategies used to engineer mature epithelial tissue models in vitro could be extrapolated to instruct the design of organotypic culture models that can recapitulate the structure of embryonic ectodermal tissues and enable the in vitro assessment of events critical to organ/tissue morphogenesis. Given the importance of ESIs towards normal epithelial tissue development and function, such models present a unique opportunity for toxicological screening assays to incorporate ESIs to assess the impact of chemicals on mature and developing epidermal tissues. PMID:28285100

  8. A PDP model of the simultaneous perception of multiple objects

    NASA Astrophysics Data System (ADS)

    Henderson, Cynthia M.; McClelland, James L.

    2011-06-01

    Illusory conjunctions in normal and simultanagnosic subjects are two instances where the visual features of multiple objects are incorrectly 'bound' together. A connectionist model explores how multiple objects could be perceived correctly in normal subjects given sufficient time, but could give rise to illusory conjunctions with damage or time pressure. In this model, perception of two objects benefits from lateral connections between hidden layers modelling aspects of the ventral and dorsal visual pathways. As with simultanagnosia, simulations of dorsal lesions impair multi-object recognition. In contrast, a large ventral lesion has minimal effect on dorsal functioning, akin to dissociations between simple object manipulation (retained in visual form agnosia and semantic dementia) and object discrimination (impaired in these disorders) [Hodges, J.R., Bozeat, S., Lambon Ralph, M.A., Patterson, K., and Spatt, J. (2000), 'The Role of Conceptual Knowledge: Evidence from Semantic Dementia', Brain, 123, 1913-1925; Milner, A.D., and Goodale, M.A. (2006), The Visual Brain in Action (2nd ed.), New York: Oxford]. It is hoped that the functioning of this model might suggest potential processes underlying dorsal and ventral contributions to the correct perception of multiple objects.

  9. [Effects in the adherence treatment and psychological adjustment after the disclosure of HIV/AIDS diagnosis with the "DIRE" clinical model in Colombian children under 17].

    PubMed

    Trejos, Ana María; Reyes, Lizeth; Bahamon, Marly Johana; Alarcón, Yolima; Gaviria, Gladys

    2015-08-01

    A study conducted in five Colombian cities in 2006 confirmed the findings of other international studies: the majority of HIV-positive children do not know their diagnosis, and caregivers are reluctant to disclose this information because they believe the news will cause the child emotional distress. The primary purpose of the present study was therefore to validate a disclosure model. Using a quasi-experimental design, we implemented a clinical model, referred to as "DIRE", hypothesized to have normalizing effects on the psychological adjustment and antiretroviral treatment adherence of HIV-seropositive children. Tests were administered (a questionnaire assessing patterns of disclosure and non-disclosure of the HIV/AIDS diagnosis to children, answered by health professionals and participating caregivers; the Family APGAR; the EuroQol EQ-5D; the MOS Social Support Survey; a questionnaire on information about HIV/AIDS treatment; and the Child Behavior Checklist CBCL/6-18 adapted for Latinos) before and after implementation of the model with 31 children (n=31), 30 caregivers (n=30) and 41 health professionals. Data processing was performed using the Statistical Package for the Social Sciences (SPSS) version 21, applying nonparametric (Friedman) and parametric (Student's t) tests. No significant differences were found in treatment adherence (p=0.392); in psychological adjustment, significant positive differences were found at the 2-week (p=0.001), 3-month (p=0.000) and 6-month (p=0.000) follow-ups compared with baseline. The clinical model demonstrated effectiveness in normalizing psychological adjustment and maintaining treatment compliance. The process also generated confidence in caregivers and health professionals in this difficult task.

  10. Physically based modeling of bedrock incision by abrasion, plucking, and macroabrasion

    NASA Astrophysics Data System (ADS)

    Chatanantavet, Phairot; Parker, Gary

    2009-11-01

    Many important insights into the dynamic coupling among climate, erosion, and tectonics in mountain areas have derived from several numerical models of the past few decades which include descriptions of bedrock incision. However, many questions regarding incision processes and morphology of bedrock streams still remain unanswered. A more mechanistically based incision model is needed as a component to study landscape evolution. Major bedrock incision processes include (among other mechanisms) abrasion by bed load, plucking, and macroabrasion (a process of fracturing of the bedrock into pluckable sizes mediated by particle impacts). The purpose of this paper is to develop a physically based model of bedrock incision that includes all three processes mentioned above. To build the model, we start by developing a theory of abrasion, plucking, and macroabrasion mechanisms. We then incorporate hydrology, the evaluation of boundary shear stress, capacity transport, an entrainment relation for pluckable particles, a routing model linking in-stream sediment and hillslopes, a formulation for alluvial channel coverage, a channel width relation, Hack's law, and Exner equation into the model so that we can simulate the evolution of bedrock channels. The model successfully simulates various features of bed elevation profiles of natural bedrock rivers under a variety of input or boundary conditions. The results also illustrate that knickpoints found in bedrock rivers may be autogenic in addition to being driven by base level fall and lithologic changes. This supports the concept that bedrock incision by knickpoint migration may be an integral part of normal incision processes. The model is expected to improve the current understanding of the linkage among physically meaningful input parameters, the physics of incision process, and morphological changes in bedrock streams.
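
    A schematic sketch of how the three mechanisms might be combined under a simple alluvial-cover assumption follows; the individual rate terms would come from the mechanistic abrasion, plucking and macroabrasion relations described in the paper, and this is an assumed illustrative closure rather than the model's actual formulation.

        def total_incision_rate(abrasion, plucking, macroabrasion, alluvial_cover):
            # Sum of the three incision mechanisms, scaled by the fraction of
            # the bed not shielded by alluvial cover (alluvial_cover in [0, 1]).
            return (1.0 - alluvial_cover) * (abrasion + plucking + macroabrasion)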

  11. When modularization fails to occur: a developmental perspective.

    PubMed

    D'Souza, Dean; Karmiloff-Smith, Annette

    2011-05-01

    We argue that models of adult cognition defined in terms of independently functioning modules cannot be applied to development, whether typical or atypical. The infant brain starts out highly interconnected, and it is only over developmental time that neural networks become increasingly specialized-that is, relatively modularized. In the case of atypical development, even when behavioural scores fall within the normal range, they are frequently underpinned by different cognitive and neural processes. In other words, in neurodevelopmental disorders the gradual process of relative modularization may fail to occur.

  12. Treating Brain Tumor with Microbeam Radiation Generated by a Compact Carbon-Nanotube-Based Irradiator: Initial Radiation Efficacy Study.

    PubMed

    Yuan, Hong; Zhang, Lei; Frank, Jonathan E; Inscoe, Christina R; Burk, Laurel M; Hadsell, Mike; Lee, Yueh Z; Lu, Jianping; Chang, Sha; Zhou, Otto

    2015-09-01

    Microbeam radiation treatment (MRT) using synchrotron radiation has shown great promise in the treatment of brain tumors, with a demonstrated ability to eradicate the tumor while sparing normal tissue in small animal models. With the goal of expediting the advancement of MRT research beyond the limited number of synchrotron facilities in the world, we recently developed a compact laboratory-scale microbeam irradiator using carbon nanotube (CNT) field emission-based X-ray source array technology. The focus of this study is to evaluate the effects of the microbeam radiation generated by this compact irradiator in terms of tumor control and normal tissue damage in a mouse brain tumor model. Mice with U87MG human glioblastoma were treated with sham irradiation, low-dose MRT, high-dose MRT or 10 Gy broad-beam radiation treatment (BRT). The microbeams were 280 μm wide and spaced at 900 μm center-to-center with peak dose at either 48 Gy (low-dose MRT) or 72 Gy (high-dose MRT). Survival studies showed that the mice treated with both MRT protocols had a significantly extended life span compared to the untreated control group (31.4 and 48.5% life extension for low- and high-dose MRT, respectively) and had similar survival to the BRT group. Immunostaining of MRT-treated mice demonstrated much higher DNA damage and apoptosis levels in tumor tissue than in normal brain tissue. Apoptosis in normal tissue was significantly lower in the low-dose MRT group compared to that in the BRT group at 48 h postirradiation. Interestingly, there was a significantly higher level of cell proliferation in the MRT-treated normal tissue compared to that in the BRT-treated mice, indicating a rapid normal-tissue repair process after MRT. Microbeam radiation exposure of normal brain tissue caused little apoptosis and no macrophage infiltration at 30 days after exposure. This study is the first biological assessment of MRT effects using the compact CNT-based irradiator. It provides an alternative technology that can enable widespread MRT research on mechanistic studies using a preclinical model, as well as further translational research towards clinical applications.

  13. Nonpoint source solute transport normal to aquifer bedding in heterogeneous, Markov chain random fields

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Harter, Thomas; Sivakumar, Bellie

    2006-06-01

    Facies-based geostatistical models have become important tools for analyzing flow and mass transport processes in heterogeneous aquifers. Yet little is known about the relationship between these latter processes and the parameters of facies-based geostatistical models. In this study, we examine the transport of a nonpoint source solute normal (perpendicular) to the major bedding plane of an alluvial aquifer medium that contains multiple geologic facies, including interconnected, high-conductivity (coarse textured) facies. We also evaluate the dependence of the transport behavior on the parameters of the constitutive facies model. A facies-based Markov chain geostatistical model is used to quantify the spatial variability of the aquifer system's hydrostratigraphy. It is integrated with a groundwater flow model and a random walk particle transport model to estimate the solute traveltime probability density function (pdf) for solute flux from the water table to the bottom boundary (the production horizon) of the aquifer. The cases examined include two-, three-, and four-facies models, with mean length anisotropy ratios for horizontal to vertical facies, ek, from 25:1 to 300:1 and with a wide range of facies volume proportions (e.g., from 5 to 95% coarse-textured facies). Predictions of traveltime pdfs are found to be significantly affected by the number of hydrostratigraphic facies identified in the aquifer. Those predictions of traveltime pdfs also are affected by the proportions of coarse-textured sediments, the mean length of the facies (particularly the ratio of length to thickness of coarse materials), and, to a lesser degree, the juxtapositional preference among the hydrostratigraphic facies. In transport normal to the sedimentary bedding plane, traveltime is not lognormally distributed as is often assumed. Also, macrodispersive behavior (variance of the traveltime) is found not to be a unique function of the conductivity variance. For the parameter range examined, the third moment of the traveltime pdf varies from negatively skewed to strongly positively skewed. We also show that the Markov chain approach may give significantly different traveltime distributions when compared to the more commonly used Gaussian random field approach, even when the first- and second-order moments in the geostatistical distribution of the lnK field are identical. The choice of the appropriate geostatistical model is therefore critical in the assessment of nonpoint source transport, and uncertainty about that choice must be considered in evaluating the results.
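
    As a toy, one-dimensional illustration of the constitutive Markov chain idea (not the full transition-probability geostatistical model coupled to groundwater flow and particle tracking), a vertical facies column can be simulated from a row-stochastic facies transition matrix; the names below are illustrative.

        import numpy as np

        def simulate_facies_column(transition_matrix, n_cells, start=0, seed=None):
            # Each cell's facies is drawn from the row of the transition
            # matrix indexed by the facies of the previous cell, so mean
            # facies lengths are set by the diagonal self-transition terms.
            rng = np.random.default_rng(seed)
            P = np.asarray(transition_matrix, dtype=float)
            column = [start]
            for _ in range(n_cells - 1):
                column.append(rng.choice(len(P), p=P[column[-1]]))
            return np.array(column)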

  14. Investigation on the effect of nonlinear processes on similarity law in high-pressure argon discharges

    NASA Astrophysics Data System (ADS)

    Fu, Yangyang; Parsey, Guy M.; Verboncoeur, John P.; Christlieb, Andrew J.

    2017-11-01

    In this paper, the effect of nonlinear processes (such as three-body collisions and stepwise ionizations) on the similarity law in high-pressure argon discharges has been studied by the use of the Kinetic Global Model framework. In the discharge model, the ground-state argon atoms (Ar), electrons (e), atomic ions (Ar+), molecular ions (Ar2+), and fourteen argon excited levels Ar*(4s and 4p) are considered. The steady-state electron and ion densities are obtained with nonlinear processes included and excluded in the designed models, respectively. It is found that in similar gas gaps, keeping the product of gas pressure and linear dimension unchanged, with the nonlinear processes included, the normalized density relations deviate from the similarity relations gradually as the scale-up factor decreases. Without the nonlinear processes, the parameter relations are in good agreement with the similarity law predictions. Furthermore, the pressure and the dimension effects are also investigated separately with and without the nonlinear processes. It is shown that the gas pressure effect on the results is less obvious than the dimension effect. Without the nonlinear processes, the pressure and the dimension effects could be estimated from one to the other based on the similarity relations.

  15. Neurolinguistically constrained simulation of sentence comprehension: integrating artificial intelligence and brain theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gigley, H.M.

    1982-01-01

    An artificial intelligence approach to the simulation of neurolinguistically constrained processes in sentence comprehension is developed using control strategies for simulation of cooperative computation in associative networks. The desirability of this control strategy in contrast to ATN and production system strategies is explained. A first pass implementation of HOPE, an artificial intelligence simulation model of sentence comprehension, constrained by studies of aphasic performance, psycholinguistics, neurolinguistics, and linguistic theory is described. Claims that the model could serve as a basis for sentence production simulation and for a model of language acquisition as associative learning are discussed. HOPE is a model that performs in a normal state and includes a lesion simulation facility. HOPE is also a research tool. Its modifiability and use as a tool to investigate hypothesized causes of degradation in comprehension performance by aphasic patients are described. Issues of using behavioral constraints in modelling and obtaining appropriate data for simulated process modelling are discussed. Finally, problems of validation of the simulation results are raised; and issues of how to interpret clinical results to define the evolution of the model are discussed. Conclusions with respect to the feasibility of artificial intelligence simulation process modelling are discussed based on the current state of research.

  16. [Influence of Spectral Pre-Processing on PLS Quantitative Model of Detecting Cu in Navel Orange by LIBS].

    PubMed

    Li, Wen-bing; Yao, Lin-tao; Liu, Mu-hua; Huang, Lin; Yao, Ming-yin; Chen, Tian-bing; He, Xiu-wen; Yang, Ping; Hu, Hui-qin; Nie, Jiang-hui

    2015-05-01

    Cu in navel orange was detected rapidly by laser-induced breakdown spectroscopy (LIBS) combined with partial least squares (PLS) quantitative analysis, and the effect of different spectral data pretreatment methods on the detection accuracy of the model was explored. Spectral data for the 52 Gannan navel orange samples were pretreated by different combinations of data smoothing, mean centering and standard normal variate transformation. The 319-338 nm wavelength region containing the characteristic spectral lines of Cu was then selected to build PLS models, and the main evaluation indexes of the models, such as the regression coefficient (r), root mean square error of cross validation (RMSECV) and root mean square error of prediction (RMSEP), were compared and analyzed. The three indicators of the PLS model built after 13-point smoothing and mean centering reached 0.9928, 3.43 and 3.4, respectively, and the average relative error of the prediction model was only 5.55%; in short, this model gave the best calibration and prediction quality. The results show that, by selecting an appropriate data pre-processing method, the prediction accuracy of PLS quantitative models for fruits and vegetables detected by LIBS can be improved effectively, providing a new method for fast and accurate detection of fruits and vegetables by LIBS.
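
    A minimal sketch of a comparable pretreatment-plus-PLS calibration chain, using standard normal variate transformation and mean centering with scikit-learn rather than the paper's 13-point smoothing, is given below; the data layout, names and number of latent variables are illustrative assumptions.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        def snv(spectra):
            # Standard normal variate transform: centre and scale each
            # spectrum (row of a samples x wavelengths matrix) individually.
            X = np.asarray(spectra, dtype=float)
            return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

        def fit_pls(spectra, concentrations, n_components=5):
            # Mean-centre the pretreated spectra over samples and fit a PLS
            # model; n_components would normally be chosen by cross
            # validation (e.g. minimizing RMSECV).
            X = snv(spectra)
            X = X - X.mean(axis=0)
            return PLSRegression(n_components=n_components).fit(X, concentrations)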

  17. Understanding the challenges to implementing case management for people with dementia in primary care in England: a qualitative study using Normalization Process Theory.

    PubMed

    Bamford, Claire; Poole, Marie; Brittain, Katie; Chew-Graham, Carolyn; Fox, Chris; Iliffe, Steve; Manthorpe, Jill; Robinson, Louise

    2014-11-08

    Case management has been suggested as a way of improving the quality and cost-effectiveness of support for people with dementia. In this study we adapted and implemented a successful United States' model of case management in primary care in England. The results are reported elsewhere, but a key finding was that little case management took place. This paper reports the findings of the process evaluation which used Normalization Process Theory to understand the barriers to implementation. Ethnographic methods were used to explore the views and experiences of case management. Interviews with 49 stakeholders (patients, carers, case managers, health and social care professionals) were supplemented with observation of case managers during meetings and initial assessments with patients. Transcripts and field notes were analysed initially using the constant comparative approach and emerging themes were then mapped onto the framework of Normalization Process Theory. The primary focus during implementation was on the case managers as isolated individuals, with little attention being paid to the social or organizational context within which they worked. Barriers relating to each of the four main constructs of Normalization Process Theory were identified, with a lack of clarity over the scope and boundaries of the intervention (coherence); variable investment in the intervention (cognitive participation); a lack of resources, skills and training to deliver case management (collective action); and limited reflection and feedback on the case manager role (reflexive monitoring). Despite the intuitive appeal of case management to all stakeholders, there were multiple barriers to implementation in primary care in England including: difficulties in embedding case managers within existing well-established community networks; the challenges of protecting time for case management; and case managers' inability to identify, and act on, emerging patient and carer needs (an essential, but previously unrecognised, training need). In the light of these barriers it is unclear whether primary care is the most appropriate setting for case management in England. The process evaluation highlights key aspects of implementation and training to be addressed in future studies of case management for dementia.

  18. The processing of the Viking Orbiter range data and its contribution to Mars gravity solutions

    NASA Technical Reports Server (NTRS)

    Lemoine, Frank G.; Rosborough, George W.; Smith, David E.

    1992-01-01

    The processing of Doppler data has been the primary method for deriving models of the Mars gravity field. Since the Mariner 9 and Viking spacecraft were placed in orbit about Mars, many models from degree and order 6 to degree and order 50 have been developed. However, during the Viking mission, some 26,000 range measurements to the two Viking Orbiters were also obtained. These data have not previously been used in the derivation of Mars gravity models. A portion of these range data have been processed simultaneously with the Doppler data. Normal equations were generated for both sets of data and were used to create two solutions complete to degree and order 30: a nominal solution including both the range and the Doppler data (MGM-R100), and another solution including only the Doppler data (MGM-R101). Tests with the covariances of these solutions, as well as with orbit overlap tests indicate that the interplanetary range data can be used to improve the modeling of the Mars gravity field.
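
    The combination of normal equations from the two data types can be sketched as accumulating weighted normal matrices and right-hand sides before a single solve; this is a generic least-squares illustration, not the orbit-determination software actually used for the Mars gravity solutions, and the names are illustrative.

        import numpy as np

        def combine_normal_equations(blocks):
            # blocks is a list of (A, W, b) triples: design matrix, weight
            # matrix and observation vector for each data type (e.g. Doppler
            # and range); the weighted normal equations are summed and solved
            # for a single combined parameter set.
            n_params = blocks[0][0].shape[1]
            N = np.zeros((n_params, n_params))
            rhs = np.zeros(n_params)
            for A, W, b in blocks:
                N += A.T @ W @ A
                rhs += A.T @ W @ b
            return np.linalg.solve(N, rhs)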

  19. Exogenous peripheral blood mononuclear cells affect the healing process of deep-degree burns

    PubMed Central

    Yu, Guanying; Li, Yaonan; Ye, Lan; Wang, Xinglei; Zhang, Jixun; Dong, Zhengxue; Jiang, Duyin

    2017-01-01

    The regenerative repair of deep-degree (second degree) burned skin remains a notable challenge in the treatment of burn injury, despite improvements being made with regards to treatment modality and the emergence of novel therapies. Fetal skin constitutes an attractive target for investigating scarless healing of burned skin. To investigate the inflammatory response during scarless healing of burned fetal skin, the present study developed a nude mouse model, which was implanted with normal human fetal skin and burned fetal skin. Subsequently, human peripheral blood mononuclear cells (PBMCs) were used to treat the nude mouse model carrying the burned fetal skin. The expression levels of matrix metalloproteinase (MMP)-9 and tissue inhibitor of metalloproteinases (TIMP)-1 were investigated during this process. In the present study, fetal skin was subcutaneously implanted into the nude mice to establish the murine model. Hematoxylin and eosin staining was used to detect alterations in the skin during the development of fetal skin and during the healing process of deep-degree burned fetal skin. The expression levels of MMP-9 and TIMP-1 were determined using immunochemical staining, and their staining intensity was evaluated by mean optical density. The results demonstrated that fetal skin subcutaneously implanted into the dorsal skin flap of nude mice developed similarly to the normal growth process in the womb. In addition, the scarless healing process was clearly observed in the mice carrying the burned fetal skin. A total of 2 weeks was required to complete scarless healing. Following treatment with PBMCs, the burned fetal skin generated inflammatory factors and enhanced the inflammatory response, which consequently resulted in a reduction in the speed of healing and in the formation of scars. Therefore, exogenous PBMCs may alter the lowered immune response environment, which is required for scarless healing, resulting in scar formation. In conclusion, the present study indicated that the involvement of inflammatory cells is important during the healing process of deep-degree burned skin, and MMP-9 and TIMP-1 may serve important roles in the process of scar formation. PMID:28990101

  20. On the ontological assumptions of the medical model of psychiatry: philosophical considerations and pragmatic tasks

    PubMed Central

    2010-01-01

    A common theme in the contemporary medical model of psychiatry is that pathophysiological processes are centrally involved in the explanation, evaluation, and treatment of mental illnesses. Implied in this perspective is that clinical descriptors of these pathophysiological processes are sufficient to distinguish underlying etiologies. Psychiatric classification requires differentiation between what counts as normality (i.e., order), and what counts as abnormality (i.e., disorder). The distinction(s) between normality and pathology entail assumptions that are often deeply presupposed, manifesting themselves in statements about what mental disorders are. In this paper, we explicate that realism, naturalism, reductionism, and essentialism are core ontological assumptions of the medical model of psychiatry. We argue that while naturalism, realism, and reductionism can be reconciled with advances in contemporary neuroscience, essentialism - as defined to date - may be conceptually problematic, and we pose an eidetic construct of bio-psychosocial order and disorder based upon complex systems' dynamics. However, we also caution against the overuse of any theory, and claim that practical distinctions are important to the establishment of clinical thresholds. We opine that as we move ahead toward both a new edition of the Diagnostic and Statistical Manual, and a proposed Decade of the Mind, the task at hand is to re-visit nosologic and ontologic assumptions pursuant to a re-formulation of diagnostic criteria and practice. PMID:20109176
