Sample records for time-consistent model

  1. An algebraic method for constructing stable and consistent autoregressive filters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harlim, John, E-mail: jharlim@psu.edu; Department of Meteorology, the Pennsylvania State University, University Park, PA 16802; Hong, Hoon, E-mail: hong@ncsu.edu

    2015-02-15

    In this paper, we introduce an algebraic method to construct stable and consistent univariate autoregressive (AR) models of low order for filtering and predicting nonlinear turbulent signals with memory depth. By stable, we refer to the classical stability condition for the AR model. By consistent, we refer to the classical consistency constraints of Adams–Bashforth methods of order two. One attractive feature of this algebraic method is that the model parameters can be obtained without directly knowing any training data set, as opposed to many standard, regression-based parameterization methods; it takes only long-time average statistics as inputs. The proposed method provides a discretization time step interval which guarantees the existence of stable and consistent AR models and simultaneously produces the parameters for those models. In our numerical examples with two chaotic time series with different characteristic decaying time scales, we find that the proposed AR models produce significantly more accurate short-term predictions and comparable filtering skill relative to the linear regression-based AR models. These encouraging results are robust across wide ranges of discretization times, observation times, and observation noise variances. Finally, we also find that the proposed model produces improved short-time predictions relative to the linear regression-based AR models in forecasting a data set that characterizes the variability of the Madden–Julian Oscillation, a dominant tropical atmospheric wave pattern.
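
    A minimal sketch of the two properties named above, for an AR(2) model (illustrative only; the paper's algebraic construction is not reproduced here):

    ```python
    # Illustrative check of the two properties named in the abstract for a
    # univariate AR(2) model x_{t+1} = a1*x_t + a2*x_{t-1} + noise.
    import numpy as np

    def is_stable(a1, a2):
        """Classical AR stability: roots of z^2 - a1*z - a2 inside unit circle."""
        return bool(np.all(np.abs(np.roots([1.0, -a1, -a2])) < 1.0))

    def ab2_residuals(a1, a2, dt, lam):
        """Order-two Adams-Bashforth applied to x' = lam*x gives
        x_{t+1} = (1 + 1.5*lam*dt)*x_t - 0.5*lam*dt*x_{t-1};
        return how far (a1, a2) are from that consistent form."""
        return a1 - (1.0 + 1.5 * lam * dt), a2 + 0.5 * lam * dt

    a1, a2 = 1.2, -0.35           # hypothetical AR(2) parameters
    print(is_stable(a1, a2))      # True: roots 0.7 and 0.5
    print(ab2_residuals(a1, a2, dt=0.1, lam=-1.0))
    ```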

  2. Consistency of internal fluxes in a hydrological model running at multiple time steps

    NASA Astrophysics Data System (ADS)

    Ficchi, Andrea; Perrin, Charles; Andréassian, Vazken

    2016-04-01

    Improving hydrological models remains a difficult task, and many avenues can be explored, including improved spatial representation, more robust parametrizations, better formulations of some processes, or modification of model structures by trial-and-error. Several past works indicate that model parameters and structure can depend on the modelling time step, so there is some rationale in investigating how a model behaves across various modelling time steps in order to find solutions for improvement. Here we analyse the impact of the data time step on the consistency of the internal fluxes of a rainfall-runoff model run at various time steps, using a large data set of 240 catchments. To this end, fine-time-step hydro-climatic information at sub-hourly resolution is used as input to a parsimonious rainfall-runoff model (GR) that is run at eight different model time steps (from 6 minutes to one day). The initial structure of the tested model (i.e. the baseline) corresponds to the daily model GR4J (Perrin et al., 2003), adapted to run at variable sub-daily time steps. The modelled fluxes considered are interception, actual evapotranspiration and intercatchment groundwater flows. Observations of these fluxes are not available, but the comparison of modelled fluxes at multiple time steps gives additional information for model identification. The joint analysis of flow simulation performance and the consistency of internal fluxes at different time steps provides guidance for identifying the model components that should be improved. Our analysis indicates that the baseline model structure must be modified at sub-daily time steps to warrant the consistency and realism of the modelled fluxes. For the baseline model improvement, particular attention is devoted to the interception component, whose output flux showed the strongest sensitivity to the modelling time step. The dependency of the optimal model complexity on the time step is also analysed. References: Perrin, C., Michel, C., Andréassian, V., 2003. Improvement of a parsimonious model for streamflow simulation. Journal of Hydrology, 279(1-4): 275-289. DOI:10.1016/S0022-1694(03)00225-7
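
    To illustrate why internal fluxes can depend on the modelling time step, here is a toy interception store (not the GR4J component; capacity and forcing are made up) run on the same rainfall at 6-minute and hourly steps:

    ```python
    # Toy interception bucket of fixed capacity run on identical forcing at
    # two time steps; the simulated interception flux differs with the step.
    import numpy as np

    def interception_flux(rain, pet, capacity=2.0):
        """Rainfall fills the store, evaporation empties it; returns the
        total evaporation from the store (the interception flux)."""
        store, evaporated = 0.0, 0.0
        for p, e in zip(rain, pet):
            store = min(capacity, store + p)
            loss = min(store, e)
            store -= loss
            evaporated += loss
        return evaporated

    rng = np.random.default_rng(0)
    rain_6min = rng.exponential(0.02, size=240) * (rng.random(240) < 0.1)
    pet_6min = np.full(240, 0.005)

    # Same forcing aggregated to hourly steps (sums of ten 6-minute values).
    rain_hour = rain_6min.reshape(24, 10).sum(axis=1)
    pet_hour = pet_6min.reshape(24, 10).sum(axis=1)

    print(interception_flux(rain_6min, pet_6min))  # fine-step flux
    print(interception_flux(rain_hour, pet_hour))  # coarse-step flux differs
    ```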

  3. A comparison of the conditional inference survival forest model to random survival forests based on a simulation study as well as on two applications with time-to-event data.

    PubMed

    Nasejje, Justine B; Mwambi, Henry; Dheda, Keertan; Lesosky, Maia

    2017-07-28

    Random survival forest (RSF) models have been identified as alternatives to the Cox proportional hazards model for analysing time-to-event data. These methods, however, have been criticised for the bias that results from favouring covariates with many split-points, and conditional inference forests have therefore been suggested for time-to-event data. Conditional inference forests (CIF) are known to correct this bias by separating the procedure for selecting the best covariate to split on from the search for the best split point for the selected covariate. In this study, we compare the random survival forest model to the conditional inference forest (CIF) model using twenty-two simulated time-to-event datasets. We also analysed two real time-to-event datasets. The first dataset concerns the survival of children under five years of age in Uganda and consists of categorical covariates, most of them with more than two levels (many split-points). The second dataset concerns the survival of patients with extensively drug-resistant tuberculosis (XDR TB) and consists mainly of categorical covariates with two levels (few split-points). Based on bootstrap cross-validated estimates of the integrated Brier score, the findings indicate that the conditional inference forest model is superior to the random survival forest model for analysing time-to-event data whose covariates have many split-points. However, conditional inference forests perform comparably to random survival forests on time-to-event data whose covariates have fewer split-points. Although survival forests are promising methods for analysing time-to-event data, it is important to identify the best forest model for the analysis based on the nature of the covariates of the dataset in question.

  4. The effectiveness of snow cube throwing learning model based on exploration

    NASA Astrophysics Data System (ADS)

    Sari, Nenden Mutiara

    2017-08-01

    This study aimed to determine the effectiveness of the Snow Cube Throwing (SCT) model versus a cooperative model in exploration-based mathematics learning, in terms of the time required to complete the teaching materials and student engagement. This quasi-experimental study was conducted at SMPN 5 Cimahi, Indonesia. All 382 grade VIII students of SMPN 5 Cimahi formed the population. The sample consisted of two classes chosen by purposive sampling, each consisting of 38 students. An observation sheet was used to record the time required to complete the teaching materials and the number of students involved in each meeting. The data were analyzed with independent-sample t tests and charts. The results show that the exploration-based SCT learning model is more effective than the exploration-based cooperative learning model in terms of both the time required to complete the teaching materials and student engagement.

  5. The tell-tale look: viewing time, preferences, and prices.

    PubMed

    Gunia, Brian C; Murnighan, J Keith

    2015-01-01

    Even the simplest choices can prompt decision-makers to balance their preferences against other, more pragmatic considerations like price. Thus, discerning people's preferences from their decisions creates theoretical, empirical, and practical challenges. The current paper addresses these challenges by highlighting some specific circumstances in which the amount of time that people spend examining potential purchase items (i.e., viewing time) can in fact reveal their preferences. Our model builds from the gazing literature, in a purchasing context, to propose that the informational value of viewing time depends on prices. Consistent with the model's predictions, four studies show that when prices are absent or moderate, viewing time provides a signal that is consistent with a person's preferences and purchase intentions. When prices are extreme or consistent with a person's preferences, however, viewing time is a less reliable predictor of either. Thus, our model highlights a price-contingent "viewing bias," shedding theoretical, empirical, and practical light on the psychology of preferences and visual attention, and identifying a readily observable signal of preference.

  6. Storm time plasma transport in a unified and inter-coupled global magnetosphere model

    NASA Astrophysics Data System (ADS)

    Ilie, R.; Liemohn, M. W.; Toth, G.

    2014-12-01

    We present results from the two-way self-consistent coupling between the kinetic Hot Electron and Ion Drift Integrator (HEIDI) model and the Space Weather Modeling Framework (SWMF). HEIDI solves the time-dependent, gyration- and bounce-averaged kinetic equation for the phase space density of different ring current species and computes full pitch angle distributions for all local times and radial distances. During geomagnetically active times the dipole approximation becomes unsuitable even in the inner magnetosphere. Therefore, the HEIDI model was generalized to accommodate an arbitrary magnetic field; through the coupling with the SWMF it obtains a magnetic field description throughout the HEIDI domain, along with a plasma distribution at the model's outer boundary, from the Block Adaptive Tree Solar Wind Roe Upwind Scheme (BATS-R-US) magnetohydrodynamics (MHD) model within the SWMF. Electric field self-consistency is ensured by passing convection potentials from the Ridley Ionosphere Model (RIM) within the SWMF. In this study we test the various levels of coupling between the three physics-based models, highlighting the roles that the magnetic field, plasma sheet conditions and the cross polar cap potential play in the formation and evolution of the ring current. We show that the dynamically changing geospace environment itself plays a key role in determining the geoeffectiveness of the driver. The results of the self-consistent coupling between HEIDI, BATS-R-US and RIM during disturbed conditions emphasize the importance of a kinetic self-consistent approach to the description of geospace.

  7. Solvent effects in time-dependent self-consistent field methods. I. Optical response calculations

    DOE PAGES

    Bjorgaard, J. A.; Kuzmenko, V.; Velizhanin, K. A.; ...

    2015-01-22

    In this study, we implement and examine three excited state solvent models in time-dependent self-consistent field methods using a consistent formalism which unambiguously shows their relationship. These are the linear response, state specific, and vertical excitation solvent models. Their effects on energies calculated with the equivalent of COSMO/CIS/AM1 are given for a set of test molecules with varying excited state charge transfer character. The resulting solvent effects are explained qualitatively using a dipole approximation. It is shown that the fundamental differences between these solvent models are reflected by the character of the calculated excitations.

  8. Bayesian Threshold Estimation

    ERIC Educational Resources Information Center

    Gustafson, S. C.; Costello, C. S.; Like, E. C.; Pierce, S. J.; Shenoy, K. N.

    2009-01-01

    Bayesian estimation of a threshold time (hereafter simply threshold) for the receipt of impulse signals is accomplished given the following: 1) data, consisting of the number of impulses received in a time interval from zero to one and the time of the largest time impulse; 2) a model, consisting of a uniform probability density of impulse time…

  9. Nine time steps: ultra-fast statistical consistency testing of the Community Earth System Model (pyCECT v3.0)

    NASA Astrophysics Data System (ADS)

    Milroy, Daniel J.; Baker, Allison H.; Hammerling, Dorit M.; Jessup, Elizabeth R.

    2018-02-01

    The Community Earth System Model Ensemble Consistency Test (CESM-ECT) suite was developed as an alternative to requiring bitwise identical output for quality assurance. This objective test provides a statistical measurement of consistency between an accepted ensemble created by small initial temperature perturbations and a test set of CESM simulations. In this work, we extend the CESM-ECT suite with an inexpensive and robust test for ensemble consistency that is applied to Community Atmospheric Model (CAM) output after only nine model time steps. We demonstrate that adequate ensemble variability is achieved with instantaneous variable values at the ninth step, despite rapid perturbation growth and heterogeneous variable spread. We refer to this new test as the Ultra-Fast CAM Ensemble Consistency Test (UF-CAM-ECT) and demonstrate its effectiveness in practice, including its ability to detect small-scale events and its applicability to the Community Land Model (CLM). The new ultra-fast test facilitates CESM development, porting, and optimization efforts, particularly when used to complement information from the original CESM-ECT suite of tools.
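
    The ensemble-consistency idea (compare a test run's summary statistics against the spread of an accepted ensemble in principal-component space) can be sketched in a few lines; this is an illustrative reduction, not the pyCECT implementation:

    ```python
    # Illustrative ensemble-consistency check, loosely in the spirit of
    # CESM-ECT: project a test run onto the principal components of an
    # accepted ensemble and flag scores far outside the ensemble spread.
    import numpy as np

    rng = np.random.default_rng(1)
    ensemble = rng.normal(size=(150, 40))   # 150 accepted runs x 40 variable means
    test_run = rng.normal(size=40)          # summary of the run under test

    mean, std = ensemble.mean(axis=0), ensemble.std(axis=0, ddof=1)
    Z = (ensemble - mean) / std                       # standardize variables
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)  # PCA via SVD
    scores = Z @ Vt.T                                 # ensemble PC scores
    test_scores = ((test_run - mean) / std) @ Vt.T

    sigma = scores.std(axis=0, ddof=1)
    n_fail = int(np.sum(np.abs(test_scores[:10]) > 3 * sigma[:10]))
    print("failing PCs (first 10):", n_fail)  # a consistent run: typically 0
    ```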

  10. A fragmentation model of earthquake-like behavior in internet access activity

    NASA Astrophysics Data System (ADS)

    Paguirigan, Antonino A.; Angco, Marc Jordan G.; Bantang, Johnrob Y.

    We present a fragmentation model that generates almost any inverse power-law size distribution, including dual-scaled versions, consistent with the underlying dynamics of systems with earthquake-like behavior. We apply the model to explain the dual-scaled power-law statistics observed in an Internet access dataset that covers more than 32 million requests. The non-Poissonian statistics of the requested data sizes m and of the amount of time τ needed for complete processing are consistent with the Gutenberg-Richter law. Inter-event times δt between subsequent requests are also shown to exhibit power-law distributions consistent with the generalized Omori law. The dataset is thus similar to earthquake data except that two power-law regimes are observed. Using the proposed model, we are able to identify the underlying dynamics responsible for generating the observed dual power-law distributions. The model is general enough to apply to any physical or human dynamics that is limited by finite resources such as space, energy, time or opportunity.
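
    For readers who want to reproduce this kind of analysis, a standard continuous maximum-likelihood (Hill-type) fit of a power-law exponent, of the sort applied to request sizes or inter-event times, looks like this (illustrative; not the authors' code):

    ```python
    # Continuous maximum-likelihood estimate of a power-law exponent for
    # tail data x >= xmin (the Clauset-Shalizi-Newman estimator).
    import numpy as np

    def powerlaw_mle(x, xmin):
        """alpha_hat for p(x) ~ x^(-alpha), x >= xmin."""
        x = np.asarray(x, dtype=float)
        tail = x[x >= xmin]
        return 1.0 + tail.size / np.sum(np.log(tail / xmin))

    rng = np.random.default_rng(2)
    alpha_true = 2.5
    u = rng.random(100_000)
    samples = (1.0 - u) ** (-1.0 / (alpha_true - 1.0))  # inverse-CDF sampling
    print(powerlaw_mle(samples, xmin=1.0))              # ~2.5
    ```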

  11. Consistency of forest presence and biomass predictions modeled across overlapping spatial and temporal extents

    Treesearch

    Mark D. Nelson; Sean Healey; W. Keith Moser; J.G. Masek; Warren Cohen

    2011-01-01

    We assessed the consistency across space and time of spatially explicit models of forest presence and biomass in southern Missouri, USA, for adjacent, partially overlapping satellite image Path/Rows, and for coincident satellite images from the same Path/Row acquired in different years. Such consistency in satellite image-based classification and estimation is critical...

  12. Separating predictable and unpredictable work to manage interruptions and promote safe and effective work flow.

    PubMed

    Kowinsky, Amy M; Shovel, Judith; McLaughlin, Maribeth; Vertacnik, Lisa; Greenhouse, Pamela K; Martin, Susan Christie; Minnier, Tamra E

    2012-01-01

    Predictable and unpredictable patient care tasks compete for caregiver time and attention, making it difficult for patient care staff to reliably and consistently meet patient needs. We have piloted a redesigned care model that separates the work of patient care technicians based on task predictability and creates role specificity. This care model shows promise in improving the ability of staff to reliably complete tasks in a more consistent and timely manner.

  13. Model-assisted template extraction SRAF application to contact holes patterns in high-end flash memory device fabrication

    NASA Astrophysics Data System (ADS)

    Seoud, Ahmed; Kim, Juhwan; Ma, Yuansheng; Jayaram, Srividya; Hong, Le; Chae, Gyu-Yeol; Lee, Jeong-Woo; Park, Dae-Jin; Yune, Hyoung-Soon; Oh, Se-Young; Park, Chan-Ha

    2018-03-01

    Sub-resolution assist feature (SRAF) insertion techniques have long been used to increase process latitude in the lithography patterning process. Rule-based SRAF and model-based SRAF are complementary solutions, and each has its own benefits, depending on the objectives of the application and the criticality of the impact on manufacturing yield, efficiency, and productivity. Rule-based SRAF provides superior geometric output consistency and faster runtime performance, but the associated recipe development time can be of concern. Model-based SRAF provides better coverage for more complicated pattern structures in terms of shapes and sizes, with considerably less time required for recipe development, although consistency and performance may be impacted. In this paper, we introduce a new model-assisted template extraction (MATE) SRAF solution, which employs decision tree learning in a model-based solution to provide the benefits of both rule-based and model-based SRAF insertion approaches. The MATE solution is designed to automate the creation of rules/templates for SRAF insertion, based on the SRAF placement predicted by model-based solutions. The MATE SRAF recipe provides optimum lithographic quality in relation to various manufacturing aspects in a very short time compared to traditional methods of rule optimization. Experiments were performed on memory-device pattern layouts to compare the MATE solution to existing model-based SRAF and pixelated SRAF approaches, in terms of lithographic process window quality, runtime performance, and geometric output consistency.
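
    The core MATE step (learning insertion rules from model-based SRAF placements) maps naturally onto decision-tree learning. A toy sketch with hypothetical geometric features and a stand-in labeling rule (none of these names or thresholds come from the paper):

    ```python
    # Toy sketch of template learning for SRAF insertion: train a decision
    # tree on features of layout sites labeled by a model-based tool, then
    # read the tree back as fast rule-like templates. Features are made up.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(3)
    n = 5000
    spacing = rng.uniform(40, 400, n)    # nm, gap to neighboring pattern
    width = rng.uniform(30, 120, n)      # nm, main-feature width
    density = rng.uniform(0.0, 1.0, n)   # local pattern density

    X = np.column_stack([spacing, width, density])
    # Stand-in for model-based SRAF labels: insert when the gap is wide
    # enough and the neighborhood is not too dense.
    y = (spacing > 120) & (density < 0.6)

    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
    print(export_text(tree, feature_names=["spacing", "width", "density"]))
    ```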

  14. Agatha: Disentangling period signals from correlated noise in a periodogram framework

    NASA Astrophysics Data System (ADS)

    Feng, F.; Tuomi, M.; Jones, H. R. A.

    2018-04-01

    Agatha is a framework of periodograms to disentangle periodic signals from correlated noise and to solve the two-dimensional model selection problem: signal dimension and noise model dimension. These periodograms are calculated by applying likelihood maximization and marginalization and combined in a self-consistent way. Agatha can be used to select the optimal noise model and to test the consistency of signals in time and can be applied to time series analyses in other astronomical and scientific disciplines. An interactive web implementation of the software is also available at http://agatha.herts.ac.uk/.
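
    A bare-bones least-squares periodogram, the building block that frameworks like Agatha elaborate with explicit correlated-noise models, can be written directly (illustrative sketch, not Agatha's code):

    ```python
    # Least-squares periodogram: at each trial frequency, fit a sinusoid by
    # linear least squares and record the drop in chi-squared relative to a
    # constant model; the peak marks the candidate period.
    import numpy as np

    rng = np.random.default_rng(4)
    t = np.sort(rng.uniform(0, 100, 300))            # irregular sampling
    y = 0.8 * np.sin(2 * np.pi * t / 7.5) + rng.normal(0, 0.5, 300)

    chi2_const = np.sum((y - y.mean()) ** 2)
    freqs = np.linspace(0.01, 1.0, 2000)
    power = np.empty_like(freqs)
    for i, f in enumerate(freqs):
        A = np.column_stack([np.sin(2 * np.pi * f * t),
                             np.cos(2 * np.pi * f * t),
                             np.ones_like(t)])
        resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
        power[i] = chi2_const - np.sum(resid ** 2)

    print(1.0 / freqs[np.argmax(power)])  # recovered period, ~7.5
    ```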

  15. Seasonal Variability in Global Eddy Diffusion and the Effect on Thermospheric Neutral Density

    NASA Astrophysics Data System (ADS)

    Pilinski, M.; Crowley, G.

    2014-12-01

    We describe a method for making single-satellite estimates of the seasonal variability in global-average eddy diffusion coefficients. Eddy diffusion values as a function of time between January 2004 and January 2008 were estimated from residuals of neutral density measurements made by the CHallenging Minisatellite Payload (CHAMP) and simulations made using the Thermosphere-Ionosphere-Mesosphere Electrodynamics General Circulation Model (TIME-GCM). The eddy diffusion coefficient results are quantitatively consistent with previous estimates based on satellite drag observations and are qualitatively consistent with other measurement methods such as sodium lidar observations and eddy-diffusivity models. The eddy diffusion coefficient values estimated between January 2004 and January 2008 were then used to generate new TIME-GCM results. Based on these results, the RMS difference between the TIME-GCM model and density data from a variety of satellites is reduced by an average of 5%. This result indicates that global thermospheric density modeling can be improved by using data from a single satellite like CHAMP. This approach also demonstrates how eddy diffusion could be estimated in near real-time from satellite observations and used to drive a global circulation model like TIME-GCM. Although the use of global values improves modeled neutral densities, there are some limitations of this method, which are discussed, including that the latitude dependence of the seasonal neutral-density signal is not completely captured by a global variation of eddy diffusion coefficients. This demonstrates the need for a latitude-dependent specification of eddy diffusion consistent with diffusion observations made by other techniques.

  16. Seasonal variability in global eddy diffusion and the effect on neutral density

    NASA Astrophysics Data System (ADS)

    Pilinski, M. D.; Crowley, G.

    2015-04-01

    We describe a method for making single-satellite estimates of the seasonal variability in global-average eddy diffusion coefficients. Eddy diffusion values as a function of time were estimated from residuals of neutral density measurements made by the Challenging Minisatellite Payload (CHAMP) and simulations made using the thermosphere-ionosphere-mesosphere electrodynamics general circulation model (TIME-GCM). The eddy diffusion coefficient results are quantitatively consistent with previous estimates based on satellite drag observations and are qualitatively consistent with other measurement methods such as sodium lidar observations and eddy diffusivity models. Eddy diffusion coefficient values estimated between January 2004 and January 2008 were then used to generate new TIME-GCM results. Based on these results, the root-mean-square difference between the TIME-GCM model and density data from a variety of satellites is reduced by an average of 5%, indicating that the fidelity of global density modeling can be improved by using data from a single satellite like CHAMP. This approach also demonstrates that eddy diffusion could be estimated in near real-time from satellite observations and used to drive a global circulation model like TIME-GCM. Although the use of global values improves modeled neutral densities, there are limitations to this method, which are discussed, including that the latitude dependence of the seasonal neutral-density signal is not completely captured by a global variation of eddy diffusion coefficients. This demonstrates the need for a latitude-dependent specification of eddy diffusion which is also consistent with diffusion observations made by other techniques.

  17. A theoretically consistent stochastic cascade for temporal disaggregation of intermittent rainfall

    NASA Astrophysics Data System (ADS)

    Lombardo, F.; Volpi, E.; Koutsoyiannis, D.; Serinaldi, F.

    2017-06-01

    Generating fine-scale time series of intermittent rainfall that are fully consistent with any given coarse-scale totals is a key and open issue in many hydrological problems. We propose a stationary disaggregation method that simulates rainfall time series with given dependence structure, wet/dry probability, and marginal distribution at a target finer (lower-level) time scale, preserving full consistency with variables at a parent coarser (higher-level) time scale. We account for the intermittent character of rainfall at fine time scales by merging a discrete stochastic representation of intermittency and a continuous one of rainfall depths. This approach yields a unique and parsimonious mathematical framework providing general analytical formulations of mean, variance, and autocorrelation function (ACF) for a mixed-type stochastic process in terms of mean, variance, and ACFs of both continuous and discrete components, respectively. To achieve the full consistency between variables at finer and coarser time scales in terms of marginal distribution and coarse-scale totals, the generated lower-level series are adjusted according to a procedure that does not affect the stochastic structure implied by the original model. To assess model performance, we study rainfall process as intermittent with both independent and dependent occurrences, where dependence is quantified by the probability that two consecutive time intervals are dry. In either case, we provide analytical formulations of main statistics of our mixed-type disaggregation model and show their clear accordance with Monte Carlo simulations. An application to rainfall time series from real world is shown as a proof of concept.
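
    The final adjusting step can be illustrated with the simplest proportional rescaling (one common choice; the paper's exact procedure is not reproduced here): the generated fine-scale values in each coarse interval are rescaled so that their sum matches the given coarse total, leaving dry steps at zero.

    ```python
    # Sketch of an adjusting step for disaggregation: proportionally rescale
    # each block of k fine-scale values to sum to the given coarse total.
    import numpy as np

    def adjust(fine, coarse_totals, k):
        blocks = fine.reshape(-1, k).copy()
        for i, target in enumerate(coarse_totals):
            s = blocks[i].sum()
            if s > 0:
                blocks[i] *= target / s      # proportional adjustment
            elif target > 0:
                blocks[i] = target / k       # degenerate case: spread evenly
        return blocks.ravel()

    rng = np.random.default_rng(5)
    draft = rng.exponential(1.0, 24) * (rng.random(24) < 0.4)  # intermittent draft
    totals = np.array([3.0, 0.0, 5.5])                         # given coarse sums
    out = adjust(draft, totals, k=8)
    print(out.reshape(3, 8).sum(axis=1))                       # [3.0, 0.0, 5.5]
    ```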

  18. Real-time simulation of a Doubly-Fed Induction Generator based wind power system on eMEGASimRTM Real-Time Digital Simulator

    NASA Astrophysics Data System (ADS)

    Boakye-Boateng, Nasir Abdulai

    The growing demand for wind power integration into the generation mix prompts the need to subject these systems to stringent performance requirements. This study sought to identify the tools and procedures needed to perform real-time simulation studies of Doubly-Fed Induction Generator (DFIG) based wind generation systems, as a basis for more practical tests of reliability and performance for both grid-connected and islanded wind generation systems. The author focused on developing a platform for wind generation studies and, in addition, tested the performance of two DFIG models on the platform's real-time simulation model: an average SimpowerSystemsRTM DFIG wind turbine, and a detailed DFIG-based wind turbine using ARTEMiSRTM components. The platform model implemented here consists of a high-voltage transmission system with four integrated wind farm models comprising in total 65 DFIG-based wind turbines; it was developed and tested on OPAL-RT's eMEGASimRTM Real-Time Digital Simulator.

  19. Joint inversion of gravity and arrival time data from Parkfield: New constraints on structure and hypocenter locations near the SAFOD drill site

    USGS Publications Warehouse

    Roecker, S.; Thurber, C.; McPhee, D.

    2004-01-01

    Taking advantage of large datasets of both gravity and elastic wave arrival time observations available for the Parkfield, California region, we generated an image consistent with both types of data. Among a variety of strategies, the best result was obtained from a simultaneous inversion with a stability requirement that encouraged the perturbed model to remain close to a starting model consisting of a best fit to the arrival time data. The preferred model looks essentially the same as the best-fit arrival time model in areas where ray coverage is dense, with differences being greatest at shallow depths and near the edges of the model where ray paths are few. Earthquake locations change by no more than about 100 m, the general effect being migration of the seismic zone to the northeast, closer to the surface trace of the San Andreas Fault. Copyright 2004 by the American Geophysical Union.
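
    The "stability requirement" described above is, in spirit, damped least squares: fit the data while penalizing departures from the starting model. A generic sketch with a made-up linear kernel (not the actual tomography code):

    ```python
    # Damped least squares toward a starting model m0:
    # minimize ||G m - d||^2 + lam * ||m - m0||^2 via a stacked lstsq.
    import numpy as np

    rng = np.random.default_rng(6)
    G = rng.normal(size=(80, 40))          # kernel: data = G @ model
    m_true = rng.normal(size=40)
    d = G @ m_true + rng.normal(0, 0.1, 80)
    m0 = m_true + rng.normal(0, 0.5, 40)   # starting model (e.g., arrival-time fit)

    lam = 5.0                              # damping weight toward m0
    A = np.vstack([G, np.sqrt(lam) * np.eye(40)])
    b = np.concatenate([d, np.sqrt(lam) * m0])
    m_hat = np.linalg.lstsq(A, b, rcond=None)[0]

    # Small departure from m0 where data are uninformative, good data fit.
    print(np.linalg.norm(m_hat - m0), np.linalg.norm(G @ m_hat - d))
    ```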

  20. Copula based flexible modeling of associations between clustered event times.

    PubMed

    Geerdens, Candida; Claeskens, Gerda; Janssen, Paul

    2016-07-01

    Multivariate survival data are characterized by the presence of correlation between event times within the same cluster. First, we build multi-dimensional copulas with flexible and possibly symmetric dependence structures for such data. In particular, clustered right-censored survival data are modeled using mixtures of max-infinitely divisible bivariate copulas. Second, these copulas are fit by a likelihood approach where the vast amount of copula derivatives present in the likelihood is approximated by finite differences. Third, we formulate conditions for clustered right-censored survival data under which an information criterion for model selection is either weakly consistent or consistent. Several of the familiar selection criteria are included. A set of four-dimensional data on time-to-mastitis is used to demonstrate the developed methodology.
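
    The second step (approximating copula derivatives by finite differences) can be demonstrated on a bivariate Clayton copula, where the analytic density is available for comparison (illustrative sketch):

    ```python
    # Approximate a copula density c(u, v) = d2 C / (du dv) by a central
    # finite difference of the CDF, checked against the analytic Clayton pdf.
    import numpy as np

    def clayton_cdf(u, v, theta):
        return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

    def density_fd(C, u, v, h=1e-4, **kw):
        """Mixed second partial via the four-point central difference."""
        return (C(u + h, v + h, **kw) - C(u + h, v - h, **kw)
                - C(u - h, v + h, **kw) + C(u - h, v - h, **kw)) / (4 * h * h)

    def clayton_pdf(u, v, theta):
        return ((1 + theta) * (u * v) ** (-theta - 1.0)
                * (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta - 2.0))

    u, v, theta = 0.3, 0.6, 2.0
    print(density_fd(clayton_cdf, u, v, theta=theta))  # ~ analytic value
    print(clayton_pdf(u, v, theta))
    ```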

  1. Beyond ROC Curvature: Strength Effects and Response Time Data Support Continuous-Evidence Models of Recognition Memory

    ERIC Educational Resources Information Center

    Dube, Chad; Starns, Jeffrey J.; Rotello, Caren M.; Ratcliff, Roger

    2012-01-01

    A classic question in the recognition memory literature is whether retrieval is best described as a continuous-evidence process consistent with signal detection theory (SDT), or a threshold process consistent with many multinomial processing tree (MPT) models. Because receiver operating characteristics (ROCs) based on confidence ratings are…

  2. SGR-like behaviour of the repeating FRB 121102

    NASA Astrophysics Data System (ADS)

    Wang, F. Y.; Yu, H.

    2017-03-01

    Fast radio bursts (FRBs) are millisecond-duration radio signals occurring at cosmological distances. However, the physical origin of FRBs remains a mystery, and many models have been proposed. Here we study the frequency distributions of peak flux, fluence, duration and waiting time for the repeating FRB 121102. The cumulative distributions of peak flux, fluence and duration show power-law forms. The waiting time distribution also shows a power-law form and is consistent with a non-stationary Poisson process. These distributions are similar to those of soft gamma repeaters (SGRs). We also use the statistical results to test the proposed models for FRBs. These distributions are consistent with the predictions of avalanche models of slowly driven nonlinear dissipative systems.

  3. Effects of electric field methods on modeling the midlatitude ionospheric electrodynamics and inner magnetosphere dynamics

    DOE PAGES

    Yu, Yiqun; Jordanova, Vania Koleva; Ridley, Aaron J.; ...

    2017-05-10

    Here, we report a self-consistent electric field coupling between the midlatitude ionospheric electrodynamics and inner magnetosphere dynamics represented in a kinetic ring current model. This implementation in the model features another self-consistency in addition to its already existing self-consistent magnetic field coupling with plasma. The model is therefore named the Ring current-Atmosphere interaction Model with Self-Consistent magnetic (B) and electric (E) fields, or RAM-SCB-E. With this new model, we explore, by comparison with the previously employed empirical Weimer potential, the impact of using self-consistent electric fields on the modeling of the storm time global electric potential distribution, plasma sheet particle injection, and the subauroral polarization streams (SAPS), which rely heavily on the coupled interplay between the inner magnetosphere and the midlatitude ionosphere. We find the following phenomena in the self-consistent model: (1) A spatially localized enhancement of the electric field is produced within 2.5 < L < 4 during geomagnetically active times in the dusk-premidnight sector, with a dynamic penetration similar to that found in statistical observations. (2) The electric potential contours show more substantial skewing toward the postmidnight sector than in the Weimer potential, suggesting resistance to particles directly injecting toward the low-L region. (3) The proton flux indicates that the plasma sheet inner boundary in the dusk-premidnight sector is located farther from the Earth than in the Weimer potential, and a “tongue” of low-energy protons extends eastward toward dawn, leading to the Harang reversal. (4) SAPS are reproduced in the subauroral region, and their magnitude and latitudinal width are in reasonable agreement with data.

  4. Effects of electric field methods on modeling the midlatitude ionospheric electrodynamics and inner magnetosphere dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Yiqun; Jordanova, Vania Koleva; Ridley, Aaron J.

    Here, we report a self-consistent electric field coupling between the midlatitude ionospheric electrodynamics and inner magnetosphere dynamics represented in a kinetic ring current model. This implementation in the model features another self-consistency in addition to its already existing self-consistent magnetic field coupling with plasma. The model is therefore named the Ring current-Atmosphere interaction Model with Self-Consistent magnetic (B) and electric (E) fields, or RAM-SCB-E. With this new model, we explore, by comparison with the previously employed empirical Weimer potential, the impact of using self-consistent electric fields on the modeling of the storm time global electric potential distribution, plasma sheet particle injection, and the subauroral polarization streams (SAPS), which rely heavily on the coupled interplay between the inner magnetosphere and the midlatitude ionosphere. We find the following phenomena in the self-consistent model: (1) A spatially localized enhancement of the electric field is produced within 2.5 < L < 4 during geomagnetically active times in the dusk-premidnight sector, with a dynamic penetration similar to that found in statistical observations. (2) The electric potential contours show more substantial skewing toward the postmidnight sector than in the Weimer potential, suggesting resistance to particles directly injecting toward the low-L region. (3) The proton flux indicates that the plasma sheet inner boundary in the dusk-premidnight sector is located farther from the Earth than in the Weimer potential, and a “tongue” of low-energy protons extends eastward toward dawn, leading to the Harang reversal. (4) SAPS are reproduced in the subauroral region, and their magnitude and latitudinal width are in reasonable agreement with data.

  5. Effects of electric field methods on modeling the midlatitude ionospheric electrodynamics and inner magnetosphere dynamics

    NASA Astrophysics Data System (ADS)

    Yu, Yiqun; Jordanova, Vania K.; Ridley, Aaron J.; Toth, Gabor; Heelis, Roderick

    2017-05-01

    We report a self-consistent electric field coupling between the midlatitude ionospheric electrodynamics and inner magnetosphere dynamics represented in a kinetic ring current model. This implementation in the model features another self-consistency in addition to its already existing self-consistent magnetic field coupling with plasma. The model is therefore named the Ring current-Atmosphere interaction Model with Self-Consistent magnetic (B) and electric (E) fields, or RAM-SCB-E. With this new model, we explore, by comparison with the previously employed empirical Weimer potential, the impact of using self-consistent electric fields on the modeling of the storm time global electric potential distribution, plasma sheet particle injection, and the subauroral polarization streams (SAPS), which rely heavily on the coupled interplay between the inner magnetosphere and the midlatitude ionosphere. We find the following phenomena in the self-consistent model: (1) A spatially localized enhancement of the electric field is produced within 2.5 < L < 4 during geomagnetically active times in the dusk-premidnight sector, with a dynamic penetration similar to that found in statistical observations. (2) The electric potential contours show more substantial skewing toward the postmidnight sector than in the Weimer potential, suggesting resistance to particles directly injecting toward the low-L region. (3) The proton flux indicates that the plasma sheet inner boundary in the dusk-premidnight sector is located farther from the Earth than in the Weimer potential, and a "tongue" of low-energy protons extends eastward toward dawn, leading to the Harang reversal. (4) SAPS are reproduced in the subauroral region, and their magnitude and latitudinal width are in reasonable agreement with data.

  6. Model-Based Systems

    NASA Technical Reports Server (NTRS)

    Frisch, Harold P.

    2007-01-01

    Engineers who design systems using text specification documents focus their work upon the completed system to meet performance, time and budget goals. Consistency and integrity are difficult to maintain within text documents for a single complex system, and more difficult to maintain as several systems are combined into higher-level systems, are maintained over decades, and evolve technically and in performance through updates. This system design approach frequently results in major changes during the system integration and test phase, and in time and budget overruns. Engineers who build system specification documents within a model-based systems environment go a step further and aggregate all of the data, interrelating it to ensure consistency and integrity. After the model is constructed, the various system specification documents are prepared, all from the same database. Because the consistency and integrity of the model are assured, the consistency and integrity of the various specification documents are ensured as well. This article attempts to define model-based systems relative to such an environment. The intent is to expose the complexity of the enabling problem by outlining what is needed, why it is needed and how these needs are being addressed by international standards writing teams.

  7. Evaluation of annual, global seismicity forecasts, including ensemble models

    NASA Astrophysics Data System (ADS)

    Taroni, Matteo; Zechar, Jeremy; Marzocchi, Warner

    2013-04-01

    In 2009, the Collaboratory for the Study of Earthquake Predictability (CSEP) initiated a prototype global earthquake forecast experiment. Three models participated in this experiment for 2009, 2010 and 2011; each model forecast the number of earthquakes above magnitude 6 in the 1x1 degree cells that span the globe. Here we use likelihood-based metrics to evaluate the consistency of the forecasts with the observed seismicity. We compare model performance with statistical tests and a new method based on the peer-to-peer gambling score. The results of the comparisons are used to build ensemble models that are weighted combinations of the individual models. Notably, in these experiments the ensemble model always performs significantly better than the single best-performing model. Our results indicate the following: i) time-varying forecasts, if not updated after each major shock, may not provide significant advantages over time-invariant models in 1-year forecast experiments; ii) the spatial distribution seems to be the most important feature in characterizing the models' different forecasting performances; iii) the interpretation of consistency tests may be misleading, because some good models may be rejected while trivial models pass; iv) proper ensemble modeling seems to be a valuable procedure for obtaining the best-performing model for practical purposes.
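
    A minimal sketch of likelihood-based evaluation and ensemble weighting for gridded forecasts (illustrative; the CSEP tests and the gambling score are more elaborate):

    ```python
    # Score gridded earthquake forecasts with the Poisson log-likelihood of
    # observed cell counts, then combine models as a weighted ensemble.
    import numpy as np
    from scipy.stats import poisson

    rng = np.random.default_rng(7)
    n_cells = 1000
    rates_a = rng.gamma(2.0, 0.01, n_cells)   # model A expected counts per cell
    rates_b = rng.gamma(2.0, 0.01, n_cells)   # model B expected counts per cell
    observed = rng.poisson(rates_a)           # pretend model A is closer to truth

    def loglik(rates, obs):
        return poisson.logpmf(obs, rates).sum()

    print(loglik(rates_a, observed), loglik(rates_b, observed))

    # Ensemble: convex combination of forecast rates, weighted by past skill.
    w = 0.7
    ensemble = w * rates_a + (1 - w) * rates_b
    print(loglik(ensemble, observed))
    ```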

  8. Picosecond time-resolved measurements of dense plasma line shifts

    DOE PAGES

    Stillman, C. R.; Nilson, P. M.; Ivancic, S. T.; ...

    2017-06-13

    Picosecond time-resolved x-ray spectroscopy is used to measure the spectral line shift of the 1s2p–1s² transition in He-like Al ions as a function of the instantaneous plasma conditions. The plasma temperature and density are inferred from the Al Heα complex using a nonlocal-thermodynamic-equilibrium atomic physics model. The experimental spectra show a linearly increasing red shift for electron densities of 1 to 5 × 10²³ cm⁻³. Furthermore, the measured line shifts are broadly consistent with a generalized analytic line-shift model based on calculations of a self-consistent field ion sphere model.

  9. Picosecond time-resolved measurements of dense plasma line shifts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stillman, C. R.; Nilson, P. M.; Ivancic, S. T.

    Picosecond time-resolved x-ray spectroscopy is used to measure the spectral line shift of the 1s2p–1s² transition in He-like Al ions as a function of the instantaneous plasma conditions. The plasma temperature and density are inferred from the Al Heα complex using a nonlocal-thermodynamic-equilibrium atomic physics model. The experimental spectra show a linearly increasing red shift for electron densities of 1 to 5 × 10²³ cm⁻³. Furthermore, the measured line shifts are broadly consistent with a generalized analytic line-shift model based on calculations of a self-consistent field ion sphere model.

  10. The Tell-Tale Look: Viewing Time, Preferences, and Prices

    PubMed Central

    Gunia, Brian C.; Murnighan, J. Keith

    2015-01-01

    Even the simplest choices can prompt decision-makers to balance their preferences against other, more pragmatic considerations like price. Thus, discerning people’s preferences from their decisions creates theoretical, empirical, and practical challenges. The current paper addresses these challenges by highlighting some specific circumstances in which the amount of time that people spend examining potential purchase items (i.e., viewing time) can in fact reveal their preferences. Our model builds from the gazing literature, in a purchasing context, to propose that the informational value of viewing time depends on prices. Consistent with the model’s predictions, four studies show that when prices are absent or moderate, viewing time provides a signal that is consistent with a person’s preferences and purchase intentions. When prices are extreme or consistent with a person’s preferences, however, viewing time is a less reliable predictor of either. Thus, our model highlights a price-contingent “viewing bias,” shedding theoretical, empirical, and practical light on the psychology of preferences and visual attention, and identifying a readily observable signal of preference. PMID:25581382

  11. Effect of operational cycle time length on nitrogen removal in an alternating oxidation ditch system.

    PubMed

    Mantziaras, I D; Stamou, A; Katsiri, A

    2011-06-01

    This paper addresses nitrogen removal optimization of an alternating oxidation ditch system through the use of a mathematical model and pilot testing. The pilot system on which measurements were made has a total volume of 120 m³ and consists of two ditches operating in four phases during one cycle, performing carbon oxidation, nitrification, denitrification and settling. The mathematical model consists of one-dimensional mass balance (convection-dispersion) equations based on the IAWPRC ASM1 model. After calibration and verification of the model, simulations of system performance were carried out. Optimization is achieved by testing operational cycles and phases with different time lengths. The limits of EU directive 91/271 for nitrogen removal were used for comparison. The findings show that operational cycles with shorter time lengths can achieve higher nitrogen removal and that an "equilibrium" between phase time percentages within the whole cycle, for a given inflow, must be achieved.

  12. Thermosphere-Ionosphere-Mesosphere Modeling Using the TIME-GCM

    DTIC Science & Technology

    2014-09-30

    respectively. The CCM3 is the NCAR Community Climate Model, Version 3.6, a GCM of the troposphere and stratosphere. All models include self-consistent... middle atmosphere version of the NCAR Community Climate Model, (2) the NCAR TIME-GCM, and (3) the Model for Ozone and Related Chemical Tracers (MOZART)... troposphere, but the impacts of such events extend well into the mesosphere. The coupled NCAR thermosphere-ionosphere-mesosphere-electrodynamics general...

  13. Model Performance Evaluation and Scenario Analysis ...

    EPA Pesticide Factsheets

    This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit measures that capture magnitude-only, sequence-only, and combined magnitude and sequence errors. The performance measures include error analysis, the coefficient of determination, Nash-Sutcliffe efficiency, and a new weighted rank method. These performance metrics only provide useful information about overall model performance. Note that MPESA is based on the separation of observed and simulated time series into magnitude and sequence components. The separation of time series into magnitude and sequence components, and the reconstruction back to time series, provides diagnostic insights to modelers. For example, traditional approaches lack the capability to identify whether the source of uncertainty in the simulated data is due to the quality of the input data or to the way the analyst adjusted the model parameters. This report presents a suite of model diagnostics that identify whether mismatches between observed and simulated data result from magnitude- or sequence-related errors. MPESA offers graphical and statistical options that allow HSPF users to compare observed and simulated time series and identify the parameter values to adjust or the input data to modify. The scenario analysis part of the tool…
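
    The magnitude/sequence separation can be illustrated simply: score the series as-is for combined error, and score the sorted series for magnitude-only error, attributing the remainder to sequence (timing) errors. A sketch using the Nash-Sutcliffe efficiency (illustrative; not the MPESA code):

    ```python
    # Compare simulated and observed series as-is (combined error) and after
    # sorting both (magnitude-only error); the gap reflects sequence errors.
    import numpy as np

    def nse(obs, sim):
        """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations."""
        obs, sim = np.asarray(obs), np.asarray(sim)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    rng = np.random.default_rng(8)
    obs = np.sin(np.linspace(0, 6 * np.pi, 200)) + 1.5
    sim = np.roll(obs, 5) + rng.normal(0, 0.05, 200)  # timing error + noise

    print("combined NSE:", nse(obs, sim))
    print("magnitude-only NSE:", nse(np.sort(obs), np.sort(sim)))
    # A much higher sorted-series score points to sequence (timing) errors.
    ```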

  14. Modeling and optimum time performance for concurrent processing

    NASA Technical Reports Server (NTRS)

    Mielke, Roland R.; Stoughton, John W.; Som, Sukhamoy

    1988-01-01

    The development of a new graph theoretic model for describing the relation between a decomposed algorithm and its execution in a data flow environment is presented. Called ATAMM, the model consists of a set of Petri net marked graphs useful for representing decision-free algorithms having large-grained, computationally complex primitive operations. Performance time measures which determine computing speed and throughput capacity are defined, and the ATAMM model is used to develop lower bounds for these times. A concurrent processing operating strategy for achieving optimum time performance is presented and illustrated by example.

  15. Linear system identification via backward-time observer models

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Phan, Minh

    1993-01-01

    This paper presents an algorithm to identify a state-space model of a linear system using a backward-time approach. The procedure consists of three basic steps. First, the Markov parameters of a backward-time observer are computed from experimental input-output data. Second, the backward-time observer Markov parameters are decomposed to obtain the backward-time system Markov parameters (backward-time pulse response samples) from which a backward-time state-space model is realized using the Eigensystem Realization Algorithm. Third, the obtained backward-time state space model is converted to the usual forward-time representation. Stochastic properties of this approach will be discussed. Experimental results are given to illustrate when and to what extent this concept works.
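
    The realization step (step two) is the classical Eigensystem Realization Algorithm; a minimal forward-time ERA sketch for scalar pulse-response data is below (the backward-time bookkeeping of the paper is omitted):

    ```python
    # Minimal ERA: realize (A, B, C) from pulse-response Markov parameters
    # via Hankel matrices and an SVD.
    import numpy as np

    def era(h, order):
        """h: Markov parameters h[0]=CB, h[1]=CAB, ...; order: model size."""
        n = (len(h) - 1) // 2
        H0 = np.array([[h[i + j] for j in range(n)] for i in range(n)])
        H1 = np.array([[h[i + j + 1] for j in range(n)] for i in range(n)])
        U, s, Vt = np.linalg.svd(H0)
        Ur, Sr, Vr = U[:, :order], np.diag(np.sqrt(s[:order])), Vt[:order]
        Si = np.linalg.inv(Sr)
        A = Si @ Ur.T @ H1 @ Vr.T @ Si
        B = (Sr @ Vr)[:, 0]
        C = (Ur @ Sr)[0, :]
        return A, B, C

    # True system: x_{k+1} = 0.9 x_k + u_k, y_k = x_k, so h_k = 0.9**k.
    h = [0.9 ** k for k in range(21)]
    A, B, C = era(h, order=1)
    print(A, B * C)   # ~0.9 and ~1.0, matching the true system
    ```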

  16. The Atmospheric Chemistry and Climate Model Intercomparison Project (ACCMIP): Overview and Description of Models, Simulations and Climate Diagnostics

    NASA Technical Reports Server (NTRS)

    Lamarque, J.-F.; Shindell, D. T.; Naik, V.; Plummer, D.; Josse, B.; Righi, M.; Rumbold, S. T.; Schulz, M.; Skeie, R. B.; Strode, S.; et al.

    2013-01-01

    The Atmospheric Chemistry and Climate Model Intercomparison Project (ACCMIP) consists of a series of time slice experiments targeting the long-term changes in atmospheric composition between 1850 and 2100, with the goal of documenting composition changes and the associated radiative forcing. In this overview paper, we introduce the ACCMIP activity, the various simulations performed (with a requested set of 14) and the associated model output. The 16 ACCMIP models have a wide range of horizontal and vertical resolutions, vertical extent, chemistry schemes and interaction with radiation and clouds. While anthropogenic and biomass burning emissions were specified for all time slices in the ACCMIP protocol, it is found that the natural emissions are responsible for a significant range across models, mostly in the case of ozone precursors. The analysis of selected present-day climate diagnostics (precipitation, temperature, specific humidity and zonal wind) reveals biases consistent with state-of-the-art climate models. The model-to-model comparison of changes in temperature, specific humidity and zonal wind between 1850 and 2000 and between 2000 and 2100 indicates mostly consistent results. However, models that are clear outliers are different enough from the other models to significantly affect their simulation of atmospheric chemistry.

  17. Does Age of Entrance Affect Community College Completion Probabilities? Evidence from a Discrete-Time Hazard Model

    ERIC Educational Resources Information Center

    Calcagno, Juan Carlos; Crosta, Peter; Bailey, Thomas; Jenkins, Davis

    2007-01-01

    Research has consistently shown that older students--those who enter college for the first time at age 25 or older--are less likely to complete a degree or certificate. The authors estimate a single-risk discrete-time hazard model using transcript data on a cohort of first-time community college students in Florida to compare the educational…
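
    A discrete-time hazard model of this kind is commonly fit as a logistic regression on person-period data. A toy sketch with fabricated data (the covariate, hazard values and term count are made up, not from the study):

    ```python
    # Person-period expansion for a single-risk discrete-time hazard model:
    # logistic regression of the per-term completion indicator on the term
    # index and an age-at-entry covariate (all numbers fabricated).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(9)
    n_students, n_terms = 2000, 12
    older = rng.random(n_students) < 0.3          # entered at age >= 25
    hazard = 0.06 * np.exp(-0.4 * older)          # lower per-term hazard if older

    X, y = [], []
    for i in range(n_students):
        for t in range(1, n_terms + 1):           # censored after term 12
            completed = rng.random() < hazard[i]
            X.append([t, float(older[i])])
            y.append(int(completed))
            if completed:
                break

    fit = LogisticRegression().fit(np.array(X), np.array(y))
    print(fit.coef_)   # negative weight on 'older': lower completion odds
    ```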

  18. Future time perspective and promotion focus as determinants of intraindividual change in work motivation.

    PubMed

    Kooij, Dorien T A M; Bal, P Matthijs; Kanfer, Ruth

    2014-06-01

    In the near future, workforces will increasingly consist of older workers. At the same time, research has demonstrated that work-related growth motives decrease with age. Although this finding is consistent with life span theories, such as the selection optimization and compensation (SOC) model, we know relatively little about the process variables that bring about this change in work motivation. Therefore, we use a 4-wave study design to examine the mediating role of future time perspective and promotion focus in the negative association between age and work-related growth motives. Consistent with the SOC model, we found that future time perspective was negatively associated with age, which, in turn, was associated with lower promotion focus, lower work-related growth motive strength, and lower motivation to continue working. These findings have important theoretical implications for the literature on aging and work motivation, and practical implications for how to motivate older workers. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  19. Transitioning NWChem to the Next Generation of Manycore Machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bylaska, Eric J.; Apra, E.; Kowalski, Karol

    The NorthWest chemistry (NWChem) modeling software is a popular molecular chemistry simulation software that was designed from the start to work on massively parallel processing supercomputers [1-3]. It contains an umbrella of modules that today includes self-consistent field (SCF), second order Møller-Plesset perturbation theory (MP2), coupled cluster (CC), multiconfiguration self-consistent field (MCSCF), selected configuration interaction (CI), tensor contraction engine (TCE) many body methods, density functional theory (DFT), time-dependent density functional theory (TDDFT), real-time time-dependent density functional theory, pseudopotential plane-wave density functional theory (PSPW), band structure (BAND), ab initio molecular dynamics (AIMD), Car-Parrinello molecular dynamics (MD), classical MD, hybrid quantum mechanics molecular mechanics (QM/MM), hybrid ab initio molecular dynamics molecular mechanics (AIMD/MM), gauge independent atomic orbital nuclear magnetic resonance (GIAO NMR), conductor like screening solvation model (COSMO), conductor-like screening solvation model based on density (COSMO-SMD), and reference interaction site model (RISM) solvation models, free energy simulations, reaction path optimization, parallel in time, among other capabilities [4]. Moreover, new capabilities continue to be added with each new release.

  20. An Idealized Test of the Response of the Community Atmosphere Model to Near-Grid-Scale Forcing Across Hydrostatic Resolutions

    NASA Astrophysics Data System (ADS)

    Herrington, A. R.; Reed, K. A.

    2018-02-01

    A set of idealized experiments is developed using the Community Atmosphere Model (CAM) to understand the vertical velocity response to reductions in forcing scale that are known to occur when the horizontal resolution of the model is increased. The test consists of a set of rising bubble experiments in which the horizontal radius of the bubble and the model grid spacing are simultaneously reduced. The test is performed with moisture, by incorporating moist physics routines of varying complexity, although convection schemes are not considered. Results confirm that the vertical velocity in CAM is, to first order, proportional to the inverse of the horizontal forcing scale, which is consistent with a scale analysis of the dry equations of motion. In contrast, experiments in which the coupling time step between the moist physics routines and the dynamical core (i.e., the "physics" time step) is relaxed back to more conventional values result in severely damped vertical motion at high resolution, degrading the scaling. A set of aqua-planet simulations using different physics time steps is found to be consistent with the results of the idealized experiments.

  1. Two classes of ODE models with switch-like behavior.

    PubMed

    Just, Winfried; Korb, Mason; Elbert, Ben; Young, Todd

    2013-12-01

    In cases where the same real-world system can be modeled both by an ODE system ⅅ and a Boolean system 𝔹, it is of interest to identify conditions under which the two systems will be consistent, that is, will make qualitatively equivalent predictions. In this note we introduce two broad classes of relatively simple models that provide a convenient framework for studying such questions. In contrast to the widely known class of Glass networks, the right-hand sides of our ODEs are Lipschitz-continuous. We prove that if 𝔹 has certain structures, consistency between ⅅ and 𝔹 is implied by sufficient separation of time scales in one class of our models. Namely, if the trajectories of 𝔹 are "one-stepping" then we prove a strong form of consistency, and if 𝔹 has a certain monotonicity property then there is a weaker consistency between ⅅ and 𝔹. These results appear to point to more general structural properties that favor consistency between ODE and Boolean models.

  2. A novel condition for stable nonlinear sampled-data models using higher-order discretized approximations with zero dynamics.

    PubMed

    Zeng, Cheng; Liang, Shan; Xiang, Shuwen

    2017-05-01

    Continuous-time systems are usually modelled by ordinary differential equations arising from physical laws. However, to use these models in practice (for analysis, data transmission, or digital control) they must first be discretized. More importantly, for digital control of a continuous-time nonlinear system, a good sampled-data model is required. This paper investigates a new consistency condition which is weaker than previous similar results. Moreover, given the stability of the higher-order approximate model with stable zero dynamics, the new condition ensures that the exact sampled-data model of the nonlinear system is stabilized for sufficiently small sampling periods. An insightful interpretation of the obtained results can be made in terms of the stable sampling zero dynamics, and the new consistency condition is, surprisingly, associated with the relative degree of the nonlinear continuous-time system. Our controller design, based on the higher-order approximate discretized model, extends existing methods, which mainly deal with the Euler approximation. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
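
    The gap between the Euler approximation and a higher-order approximate model can be seen on a scalar example (illustrative sketch; not the paper's model class or controller):

    ```python
    # Euler vs. second-order Taylor sampled-data models of a nonlinear system
    # x' = f(x) + u, compared against a finely integrated reference.
    import numpy as np

    f = lambda x: -x ** 3          # example nonlinear drift
    df = lambda x: -3 * x ** 2     # its derivative, for the 2nd-order term

    def euler(x, u, T):
        return x + T * (f(x) + u)

    def taylor2(x, u, T):
        # x(t+T) ~ x + T*x' + (T^2/2)*x'', with x'' = f'(x)*(f(x) + u)
        xdot = f(x) + u
        return x + T * xdot + 0.5 * T ** 2 * df(x) * xdot

    def reference(x, u, T, n=1000):
        for _ in range(n):         # fine Euler substeps stand in for "exact"
            x = x + (T / n) * (f(x) + u)
        return x

    x0, u, T = 1.0, 0.2, 0.2
    print(abs(euler(x0, u, T) - reference(x0, u, T)))    # larger error
    print(abs(taylor2(x0, u, T) - reference(x0, u, T)))  # smaller error
    ```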

  3. Circular analysis in complex stochastic systems

    PubMed Central

    Valleriani, Angelo

    2015-01-01

    Ruling out observations can lead to wrong models. This danger arises unwittingly when one selects observations, experiments, simulations or time-series based on their outcome. In stochastic processes, conditioning on the future outcome biases all local transition probabilities and makes them consistent with the selected outcome. This circular self-consistency leads to models that are inconsistent with physical reality. It is also the reason why models built solely on macroscopic observations are prone to this fallacy. PMID:26656656
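
    The fallacy is easy to reproduce: condition unbiased random walks on their final outcome and the apparent per-step transition probabilities shift. A minimal simulation:

    ```python
    # Selecting trajectories by their outcome biases the apparent local
    # transition probabilities: unbiased coin-flip walks, conditioned on
    # finishing above zero, look locally biased upward.
    import numpy as np

    rng = np.random.default_rng(10)
    steps = rng.choice([-1, 1], size=(100_000, 50))   # unbiased +-1 steps
    final = steps.sum(axis=1)

    p_all = (steps == 1).mean()                    # ~0.5, as designed
    p_selected = (steps[final > 0] == 1).mean()    # > 0.5 after selection
    print(p_all, p_selected)
    ```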

  4. Differential Equations Models to Study Quorum Sensing.

    PubMed

    Pérez-Velázquez, Judith; Hense, Burkhard A

    2018-01-01

    Mathematical models to study quorum sensing (QS) have become an important tool to explore all aspects of this type of bacterial communication. A wide spectrum of mathematical tools and methods such as dynamical systems, stochastics, and spatial models can be employed. In this chapter, we focus on giving an overview of models consisting of differential equations (DE), which can be used to describe changing quantities, for example, the dynamics of one or more signaling molecules in time and space, often in conjunction with bacterial growth dynamics. The chapter is divided into two sections: ordinary differential equation (ODE) and partial differential equation (PDE) models of QS. Rates of change are represented mathematically by derivatives, i.e., in terms of DE. ODE models describe changes with respect to one independent variable, for example, time. PDE models can be used to follow changes in more than one independent variable, for example, time and space. Both types of models often consist of systems of equations (i.e., more than one equation), such as equations for bacterial growth and autoinducer concentration dynamics. Almost from the onset, mathematical modeling of QS using differential equations has been an interdisciplinary endeavor, and many of the works we review here will be placed in their biological context.
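
    A minimal ODE sketch of the kind of system described, assuming logistic bacterial growth coupled to autoinducer production with Hill-type positive feedback; all parameter values are illustrative, not taken from the chapter:

        import numpy as np
        from scipy.integrate import solve_ivp

        def qs_model(t, y, r=1.0, K=1.0, a0=0.05, a1=1.0, KA=0.5, n=2, d=0.3):
            N, A = y                             # bacteria, autoinducer
            dN = r * N * (1 - N / K)             # logistic growth
            hill = A**n / (KA**n + A**n)         # positive feedback on production
            dA = (a0 + a1 * hill) * N - d * A    # production minus decay
            return [dN, dA]

        sol = solve_ivp(qs_model, (0, 30), [0.01, 0.0])
        print(sol.y[:, -1])                      # (N, A) near steady state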

  5. Real-time simulation of a spiking neural network model of the basal ganglia circuitry using general purpose computing on graphics processing units.

    PubMed

    Igarashi, Jun; Shouno, Osamu; Fukai, Tomoki; Tsujino, Hiroshi

    2011-11-01

    Real-time simulation of a biologically realistic spiking neural network is necessary for evaluation of its capacity to interact with real environments. However, the real-time simulation of such a neural network is difficult due to its high computational costs that arise from two factors: (1) vast network size and (2) the complicated dynamics of biologically realistic neurons. In order to address these problems, mainly the latter, we chose to use general purpose computing on graphics processing units (GPGPUs) for simulation of such a neural network, taking advantage of the powerful computational capability of a graphics processing unit (GPU). As a target for real-time simulation, we used a model of the basal ganglia that has been developed according to electrophysiological and anatomical knowledge. The model consists of heterogeneous populations of 370 spiking model neurons, including computationally heavy conductance-based models, connected by 11,002 synapses. Simulation of the model has not yet been performed in real-time using a general computing server. By parallelization of the model on the NVIDIA Geforce GTX 280 GPU in data-parallel and task-parallel fashion, faster-than-real-time simulation was robustly realized with only one-third of the GPU's total computational resources. Furthermore, we used the GPU's full computational resources to perform faster-than-real-time simulation of three instances of the basal ganglia model; these instances consisted of 1100 neurons and 33,006 synapses and were synchronized at each calculation step. Finally, we developed software for simultaneous visualization of faster-than-real-time simulation output. These results suggest the potential power of GPGPU techniques in real-time simulation of realistic neural networks. Copyright © 2011 Elsevier Ltd. All rights reserved.

  6. Integrating field plots, lidar, and landsat time series to provide temporally consistent annual estimates of biomass from 1990 to present

    Treesearch

    Warren B. Cohen; Hans-Erik Andersen; Sean P. Healey; Gretchen G. Moisen; Todd A. Schroeder; Christopher W. Woodall; Grant M. Domke; Zhiqiang Yang; Robert E. Kennedy; Stephen V. Stehman; Curtis Woodcock; Jim Vogelmann; Zhe Zhu; Chengquan Huang

    2015-01-01

    We are developing a system that provides temporally consistent biomass estimates for national greenhouse gas inventory reporting to the United Nations Framework Convention on Climate Change. Our model-assisted estimation framework relies on remote sensing to scale from plot measurements to lidar strip samples, to Landsat time series-based maps. As a demonstration, new...

  7. Retrieving hydrological connectivity from empirical causality in karst systems

    NASA Astrophysics Data System (ADS)

    Delforge, Damien; Vanclooster, Marnik; Van Camp, Michel; Poulain, Amaël; Watlet, Arnaud; Hallet, Vincent; Kaufmann, Olivier; Francis, Olivier

    2017-04-01

    Because of their complexity, karst systems exhibit nonlinear dynamics. Moreover, if one attempts to model a karst system, its hidden internal behavior complicates the choice of the most suitable model. Therefore, both intensive investigation methods and nonlinear data analysis are needed to reveal the underlying hydrological connectivity as a prior for a consistent physically based modelling approach. Convergent Cross Mapping (CCM), a recent method, promises to identify causal relationships between time series belonging to the same dynamical system. The method is based on phase space reconstruction and is suitable for nonlinear dynamics. As an empirical causation detection method, it can be used to highlight the hidden complexity of a karst system by revealing its inner hydrological and dynamical connectivity. Hence, if one can link causal relationships to physical processes, the method shows great potential to support physically based model structure selection. We present the results of numerical experiments using karst model blocks combined in different structures to generate time series from actual rainfall series. CCM is applied between the time series to investigate whether the empirical causation detection is consistent with the hydrological connectivity suggested by the karst model.
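
    A bare-bones sketch of the CCM idea under stated assumptions: a fixed time-delay embedding and simplex-style nearest-neighbour cross mapping, demonstrated on synthetic coupled logistic maps that stand in for the karst time series (embedding parameters and coupling strength are illustrative):

        import numpy as np

        def embed(x, E=3, tau=1):
            # Time-delay embedding of a 1-D series into E-dimensional points
            n = len(x) - (E - 1) * tau
            return np.column_stack([x[i * tau : i * tau + n] for i in range(E)])

        def ccm_skill(source, target, E=3, tau=1):
            # Cross map: neighbours on the manifold of `source` estimate `target`
            M = embed(source, E, tau)
            t_aligned = target[(E - 1) * tau :]
            preds = np.empty(len(M))
            for i, p in enumerate(M):
                d = np.linalg.norm(M - p, axis=1)
                d[i] = np.inf                        # exclude the point itself
                nn = np.argsort(d)[: E + 1]          # E+1 nearest neighbours
                w = np.exp(-d[nn] / max(d[nn][0], 1e-12))
                preds[i] = np.sum(w * t_aligned[nn]) / w.sum()
            return np.corrcoef(preds, t_aligned)[0, 1]

        # Synthetic system in which x drives y (coupled logistic maps)
        x, y = np.empty(500), np.empty(500)
        x[0], y[0] = 0.4, 0.2
        for t in range(499):
            x[t + 1] = 3.8 * x[t] * (1 - x[t])
            y[t + 1] = 3.5 * y[t] * (1 - y[t] - 0.1 * x[t])

        # Cross mapping from y recovers x, the hallmark of x causally forcing y
        print("skill (y xmap x):", round(ccm_skill(y, x), 2))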

  8. Self-consistency in the phonon space of the particle-phonon coupling model

    NASA Astrophysics Data System (ADS)

    Tselyaev, V.; Lyutorovich, N.; Speth, J.; Reinhard, P.-G.

    2018-04-01

    In this paper, a nonlinear generalization of the time blocking approximation (TBA) is presented. The TBA is one of the versions of the extended random-phase approximation (RPA) developed within the Green-function method and the particle-phonon coupling model. In the generalized version of the TBA, the self-consistency principle is extended onto the phonon space of the model. Numerical examples show that this nonlinear version of the TBA leads to convergence of the results with respect to enlargement of the phonon space of the model.

  9. Consistent initial conditions for the Saint-Venant equations in river network modeling

    NASA Astrophysics Data System (ADS)

    Yu, Cheng-Wei; Liu, Frank; Hodges, Ben R.

    2017-09-01

    Initial conditions for flows and depths (cross-sectional areas) throughout a river network are required for any time-marching (unsteady) solution of the one-dimensional (1-D) hydrodynamic Saint-Venant equations. For a river network modeled with several Strahler orders of tributaries, comprehensive and consistent synoptic data are typically lacking and synthetic starting conditions are needed. Because of underlying nonlinearity, poorly defined or inconsistent initial conditions can lead to convergence problems and long spin-up times in an unsteady solver. Two new approaches are defined and demonstrated herein for computing flows and cross-sectional areas (or depths). These methods can produce an initial condition data set that is consistent with modeled landscape runoff and river geometry boundary conditions at the initial time. These new methods are (1) the pseudo time-marching method (PTM) that iterates toward a steady-state initial condition using an unsteady Saint-Venant solver and (2) the steady-solution method (SSM) that makes use of graph theory for initial flow rates and solution of a steady-state 1-D momentum equation for the channel cross-sectional areas. The PTM is shown to be adequate for short river reaches but is significantly slower and has occasional non-convergent behavior for large river networks. The SSM approach is shown to provide a rapid solution of consistent initial conditions for both small and large networks, albeit with the requirement that additional code must be written rather than applying an existing unsteady Saint-Venant solver.
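
    A sketch of the graph-theory step of an SSM-like initialization under simple assumptions: the steady initial flow at each node is its local runoff inflow plus the sum of upstream flows, accumulated in topological order (the network and inflow values are invented for illustration; the subsequent steady momentum solve for cross-sectional areas is not shown):

        from graphlib import TopologicalSorter

        # Hypothetical network: each node mapped to its upstream neighbours
        upstream = {"A": [], "B": [], "C": ["A", "B"], "D": ["C"]}  # D = outlet
        lateral = {"A": 2.0, "B": 1.5, "C": 0.5, "D": 0.3}          # runoff [m3/s]

        flow = {}
        for node in TopologicalSorter(upstream).static_order():
            flow[node] = lateral[node] + sum(flow[u] for u in upstream[node])

        print(flow)   # consistent steady flows: A=2.0, B=1.5, C=4.0, D=4.3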

  10. Self-Consistent Ring Current/Electromagnetic Ion Cyclotron Waves Modeling

    NASA Technical Reports Server (NTRS)

    Khazanov, G. V.; Gamayunov, K. V.; Gallagher, D. L.

    2006-01-01

    The self-consistent treatment of the RC ion dynamics and EMIC waves, which are thought to exert important influences on the ion dynamical evolution, is an important missing element in our understanding of the storm- and recovery-time ring current evolution. For example, EMIC waves cause the RC to decay on a time scale of about one hour or less during the main phase of storms. The oblique EMIC waves damp due to Landau resonance with the thermal plasmaspheric electrons, and subsequent transport of the dissipating wave energy into the ionosphere below causes an ionospheric temperature enhancement. Under certain conditions, relativistic electrons, with energies greater than or equal to 1 MeV, can be removed from the outer radiation belt by EMIC wave scattering during a magnetic storm. That is why the modeling of EMIC waves is a critical and timely issue in magnetospheric physics. This study generalizes the self-consistent theoretical description of RC ions and EMIC waves that has been developed by Khazanov et al. [2002, 2003] and includes the heavy ions and propagation effects of EMIC waves in the global dynamics of the self-consistent RC-EMIC wave coupling. Results of our newly developed model will be presented at the meeting, focusing mainly on the dynamics of EMIC waves and a comparison of these results with previous global RC modeling studies devoted to EMIC wave formation. We also discuss RC ion precipitation and wave-induced thermal electron fluxes into the ionosphere.

  11. Event models and the fan effect.

    PubMed

    Radvansky, G A; O'Rear, Andrea E; Fisher, Jerry S

    2017-08-01

    The current study explored the persistence of event model organizations and how this influences the experience of interference during retrieval. People in this study memorized lists of sentences about objects in locations, such as "The potted palm is in the hotel." Previous work has shown that such information can either be stored in separate event models, thereby producing retrieval interference, or integrated into common event models, thereby eliminating retrieval interference. Unlike prior studies, the current work explored the impact of forgetting up to 2 weeks later on this pattern of performance. We explored three possible outcomes across the various retention intervals. First, consistent with research showing that longer delays reduce proactive and retroactive interference, any retrieval interference effects of competing event models could be reduced over time. Second, the binding of information into events models may weaken over time, causing interference effects to emerge when they had previously been absent. Third, and finally, the organization of information into event models could remain stable over long periods of time. The results reported here are most consistent with the last outcome. While there were some minor variations across the various retention intervals, the basic pattern of event model organization remained preserved over the two-week retention period.

  12. A Data Analytical Framework for Improving Real-Time, Decision Support Systems in Healthcare

    ERIC Educational Resources Information Center

    Yahav, Inbal

    2010-01-01

    In this dissertation we develop a framework that combines data mining, statistics and operations research methods for improving real-time decision support systems in healthcare. Our approach consists of three main concepts: data gathering and preprocessing, modeling, and deployment. We introduce the notion of offline and semi-offline modeling to…

  13. Narrative event boundaries, reading times, and expectation.

    PubMed

    Pettijohn, Kyle A; Radvansky, Gabriel A

    2016-10-01

    During text comprehension, readers create mental representations of the described events, called situation models. When new information is encountered, these models must be updated or new ones created. Consistent with the event indexing model, previous studies have shown that when readers encounter an event shift, reading times often increase. However, such increases are not consistently observed. This paper addresses this inconsistency by examining the extent to which reading-time differences observed at event shifts reflect an unexpectedness in the narrative rather than processes involved in model updating. In two reassessments of prior work, event shifts known to increase reading time were rated as less expected, and expectedness ratings significantly predicted reading time. In three new experiments, participants read stories in which an event shift was or was not foreshadowed, thereby influencing expectedness of the shift. Experiment 1 revealed that readers do not expect event shifts, but foreshadowing eliminates this. Experiment 2 showed that foreshadowing does not affect identification of event shifts. Finally, Experiment 3 found that, although reading times increased when an event shift was not foreshadowed, they were not different from controls when it was. Moreover, responses to memory probes were slower following an event shift regardless of foreshadowing, suggesting that situation model updating had taken place. Overall, the results support the idea that previously observed reading time increases at event shifts reflect, at least in part, a reader's unexpected encounter with a shift rather than an increase in processing effort required to update a situation model.

  14. Self-Consistent Model of Magnetospheric Electric Field, Ring Current, Plasmasphere, and Electromagnetic Ion Cyclotron Waves: Initial Results

    NASA Technical Reports Server (NTRS)

    Gamayunov, K. V.; Khazanov, G. V.; Liemohn, M. W.; Fok, M.-C.; Ridley, A. J.

    2009-01-01

    Further development of our self-consistent model of interacting ring current (RC) ions and electromagnetic ion cyclotron (EMIC) waves is presented. This model incorporates large scale magnetosphere-ionosphere coupling and treats self-consistently not only EMIC waves and RC ions, but also the magnetospheric electric field, RC, and plasmasphere. Initial simulations indicate that the region beyond geostationary orbit should be included in the simulation of the magnetosphere-ionosphere coupling. Additionally, a self-consistent description, based on first principles, of the ionospheric conductance is required. These initial simulations further show that in order to model the EMIC wave distribution and wave spectral properties accurately, the plasmasphere should also be simulated self-consistently, since its fine structure requires as much care as that of the RC. Finally, an effect of the finite time needed to reestablish a new potential pattern throughout the ionosphere and to communicate between the ionosphere and the equatorial magnetosphere cannot be ignored.

  15. Modeling ultrafast solvated electronic dynamics using time-dependent density functional theory and polarizable continuum model.

    PubMed

    Liang, Wenkel; Chapman, Craig T; Ding, Feizhi; Li, Xiaosong

    2012-03-01

    A first-principles solvated electronic dynamics method is introduced. Solvent electronic degrees of freedom are coupled to the time-dependent electronic density of a solute molecule by means of the implicit reaction field method, and the entire electronic system is propagated in time. This real-time time-dependent approach, incorporating the polarizable continuum solvation model, is shown to be very effective in describing the dynamical solvation effect in the charge transfer process and yields a consistent absorption spectrum in comparison to the conventional linear response results in solution. © 2012 American Chemical Society

  16. Consistency assessment of rating curve data in various locations using Bidirectional Reach (BReach)

    NASA Astrophysics Data System (ADS)

    Van Eerdenbrugh, Katrien; Van Hoey, Stijn; Coxon, Gemma; Freer, Jim; Verhoest, Niko E. C.

    2017-10-01

    When estimating discharges through rating curves, temporal data consistency is a critical issue. In this research, consistency in stage-discharge data is investigated using a methodology called Bidirectional Reach (BReach), which departs from a definition of consistency commonly used in operational hydrology. A period is considered to be consistent if no consecutive and systematic deviations from a current situation occur that exceed observational uncertainty. Therefore, the capability of a rating curve model to describe a subset of the (chronologically sorted) data is assessed at each observation by indicating the outermost data points for which the rating curve model behaves satisfactorily. These points are called the maximum left or right reach, depending on the direction of the investigation. This temporal reach should not be confused with a spatial reach (indicating a part of a river). Changes in these reaches throughout the data series indicate possible changes in data consistency and, if not resolved, could introduce additional errors and biases. In this research, various measurement stations in the UK, New Zealand and Belgium are selected based on their significant historical ratings information and their specific characteristics related to data consistency. For each country, regional information is maximally used to estimate observational uncertainty. Based on this uncertainty, a BReach analysis is performed and, subsequently, results are validated against available knowledge about the history and behavior of the site. For all investigated cases, the methodology provides results that appear to be consistent with this knowledge of historical changes and thus facilitates a reliable assessment of (in)consistent periods in stage-discharge measurements. This assessment is useful not only for the analysis and determination of discharge time series, but also for enhancing applications based on these data (e.g., by informing hydrological and hydraulic model evaluation design about consistent time periods to analyze).
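
    A deliberately simplified sketch of the reach idea (a toy stand-in, not the full BReach methodology): for each observation, scan outward and record the last index reached before a run of consecutive rating-curve residuals exceeds the assumed observational uncertainty:

        import numpy as np

        def right_reach(residuals, tol, max_run=3):
            # For each index i, the furthest index j >= i reachable before more
            # than `max_run` consecutive residuals exceed the tolerance `tol`
            n = len(residuals)
            reach = np.empty(n, dtype=int)
            for i in range(n):
                run, j_last = 0, i
                for j in range(i, n):
                    run = run + 1 if abs(residuals[j]) > tol else 0
                    if run > max_run:
                        break
                    j_last = j
                reach[i] = j_last
            return reach

        # Toy relative residuals with a consistency break at index 60
        rng = np.random.default_rng(2)
        res = rng.normal(0, 0.05, 100)
        res[60:] += 0.2              # systematic shift, e.g. a channel change
        print(right_reach(res, tol=0.1)[:5])   # reaches stop short of the break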

  17. Alcohol and liver cirrhosis mortality in the United States: comparison of methods for the analyses of time-series panel data models.

    PubMed

    Ye, Yu; Kerr, William C

    2011-01-01

    To explore various model specifications for estimating relationships between liver cirrhosis mortality rates and per capita alcohol consumption in aggregate-level cross-section time-series data. Using a series of liver cirrhosis mortality rates from 1950 to 2002 for 47 U.S. states, the effects of alcohol consumption were estimated from pooled autoregressive integrated moving average (ARIMA) models and 4 types of panel data models: generalized estimating equation, generalized least squares, fixed effect, and multilevel models. Various specifications of the error term structure under each type of model were also examined. Different approaches to controlling for time trends and to using concurrent or accumulated consumption as predictors were also evaluated. When cirrhosis mortality was predicted from total alcohol, highly consistent estimates were found between the ARIMA and panel data analyses, with an average overall effect of 0.07 to 0.09. Less consistent estimates were derived using spirits, beer, and wine consumption as predictors. When multiple geographic time series are combined as panel data, none of the existing models can accommodate all sources of heterogeneity, so any type of panel model must employ some form of generalization. Different types of panel data models should thus be estimated to examine the robustness of findings. We also suggest cautious interpretation when beverage-specific volumes are used as predictors. Copyright © 2010 by the Research Society on Alcoholism.

  18. Solvent effects in time-dependent self-consistent field methods. II. Variational formulations and analytical gradients

    DOE PAGES

    Bjorgaard, J. A.; Velizhanin, K. A.; Tretiak, S.

    2015-08-06

    This study describes variational energy expressions and analytical excited-state energy gradients for time-dependent self-consistent field methods with polarizable solvent effects. Linear response, vertical excitation, and state-specific solvent models are examined. Enforcing a variational ground-state energy expression in the state-specific model is found to reduce it to the vertical excitation model. Variational excited-state energy expressions are then provided for the linear response and vertical excitation models, and analytical gradients are formulated. Using semiempirical model chemistry, the variational expressions are verified by numerical and analytical differentiation with respect to a static external electric field. Lastly, analytical gradients are further tested by performing microcanonical excited-state molecular dynamics with p-nitroaniline.

  19. Introduction of a new laboratory test: an econometric approach with the use of neural network analysis.

    PubMed

    Jabor, A; Vlk, T; Boril, P

    1996-04-15

    We designed a simulation model for assessing the financial risks involved when a new diagnostic test is introduced in the laboratory. The model is based on a neural network consisting of ten neurons and assumes that input entities can be assigned appropriate uncertainties. Simulations are done on a 1-day interval basis. Risk analysis completes the model, and the financial effects are evaluated for a selected time period. The basic output of the simulation consists of total expenses and income during the simulation time, the net present value of the project at the end of the simulation, the total number of control samples during the simulation, the total number of patients evaluated and the total number of kits used.

  20. Analysis of EDZ Development of Columnar Jointed Rock Mass in the Baihetan Diversion Tunnel

    NASA Astrophysics Data System (ADS)

    Hao, Xian-Jie; Feng, Xia-Ting; Yang, Cheng-Xiang; Jiang, Quan; Li, Shao-Jun

    2016-04-01

    Due to the time dependency of crack propagation, columnar jointed rock masses exhibit marked time-dependent behaviour. In this study, in situ measurements, scanning electron microscopy (SEM), a back-analysis method and numerical simulations are used to study the time-dependent development of the excavation damaged zone (EDZ) around underground diversion tunnels in a columnar jointed rock mass. In situ measurements of crack propagation and EDZ development show that their extent increases over time, even after the advancing face has passed. Similar to creep behaviour, the time-dependent EDZ development curve also consists of three stages: a deceleration stage, a stabilization stage, and an acceleration stage. A corresponding constitutive model of columnar jointed rock mass considering time-dependent behaviour is proposed, in which time-dependent degradation coefficients of the roughness coefficient and the residual friction angle in the Barton-Bandis strength criterion are taken into account. An intelligent back-analysis method is adopted to obtain the unknown time-dependent degradation coefficients for the proposed constitutive model. The numerical modelling results are in good agreement with the measured EDZ. Moreover, the failure pattern simulated by this time-dependent constitutive model is consistent with that observed by SEM and in situ observation, indicating that the model can accurately simulate the failure pattern and time-dependent EDZ development of columnar joints. The effects of the support system provided and the in situ stress on the time-dependent coefficients are also studied. Finally, a long-term stability analysis of diversion tunnels excavated in columnar jointed rock masses is performed.

  1. Two classes of ODE models with switch-like behavior

    PubMed Central

    Just, Winfried; Korb, Mason; Elbert, Ben; Young, Todd

    2013-01-01

    In cases where the same real-world system can be modeled both by an ODE system ⅅ and a Boolean system 𝔹, it is of interest to identify conditions under which the two systems will be consistent, that is, will make qualitatively equivalent predictions. In this note we introduce two broad classes of relatively simple models that provide a convenient framework for studying such questions. In contrast to the widely known class of Glass networks, the right-hand sides of our ODEs are Lipschitz-continuous. We prove that if 𝔹 has certain structures, consistency between ⅅ and 𝔹 is implied by sufficient separation of time scales in one class of our models. Namely, if the trajectories of 𝔹 are “one-stepping” then we prove a strong form of consistency and if 𝔹 has a certain monotonicity property then there is a weaker consistency between ⅅ and 𝔹. These results appear to point to more general structure properties that favor consistency between ODE and Boolean models. PMID:24244061

  2. Application of a deconvolution method for identifying burst amplitudes and arrival times in Alcator C-Mod far SOL plasma fluctuations

    NASA Astrophysics Data System (ADS)

    Theodorsen, Audun; Garcia, Odd Erik; Kube, Ralph; Labombard, Brian; Terry, Jim

    2017-10-01

    In the far scrape-off layer (SOL), radial motion of filamentary structures leads to excess transport of particles and heat. Amplitudes and arrival times of these filaments have previously been studied by conditional averaging of single-point measurements from Langmuir probes and gas puff imaging (GPI). Conditional averaging can be problematic: the cutoff for large amplitudes is mostly chosen by convention; the conditional windows used may influence the arrival time distribution; and the amplitudes cannot be separated from a background. Previous work has shown that SOL fluctuations are well described by a stochastic model consisting of a superposition of pulses with fixed shape and randomly distributed amplitudes and arrival times. The model can be formulated as a pulse shape convolved with a train of delta pulses. By choosing a pulse shape consistent with the power spectrum of the fluctuation time series, Richardson-Lucy deconvolution can be used to recover the underlying amplitudes and arrival times of the delta pulses. We apply this technique to both L- and H-mode GPI data from the Alcator C-Mod tokamak. The pulse arrival times are shown to be uncorrelated and uniformly distributed, consistent with a Poisson process, and the amplitude distribution has an exponential tail.
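
    A compact sketch of Richardson-Lucy deconvolution applied to this kind of pulse-train model, assuming a known two-sided exponential pulse shape (shapes, durations and amplitudes below are invented for illustration):

        import numpy as np

        def richardson_lucy(y, h, n_iter=200):
            # Recover a nonnegative delta train x from y = h * x (convolution)
            h = h / h.sum()                      # RL assumes a normalized kernel
            x = np.full_like(y, y.mean())
            h_flip = h[::-1]
            for _ in range(n_iter):
                blur = np.convolve(x, h, mode="same")
                x *= np.convolve(y / np.maximum(blur, 1e-12), h_flip, mode="same")
            return x

        # Two-sided exponential pulse and a sparse train of arrivals/amplitudes
        t = np.arange(-30, 31)
        h = np.exp(-np.abs(t) / 5.0)
        truth = np.zeros(300)
        truth[[50, 120, 123, 210]] = [1.0, 0.7, 1.2, 0.9]
        y = np.convolve(truth, h, mode="same")

        est = richardson_lucy(y, h)
        print(sorted(np.argsort(est)[-4:]))      # recovered arrival-time indices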

  3. A consistent NPMLE of the joint distribution function with competing risks data under the dependent masking and right-censoring model.

    PubMed

    Li, Jiahui; Yu, Qiqing

    2016-01-01

    Dinse (Biometrics, 38:417-431, 1982) provides a special type of right-censored and masked competing risks data and proposes a non-parametric maximum likelihood estimator (NPMLE) and a pseudo MLE of the joint distribution function [Formula: see text] with such data. However, their asymptotic properties have not been studied so far. Under the extension of either the conditional masking probability (CMP) model or the random partition masking (RPM) model (Yu and Li, J Nonparametr Stat 24:753-764, 2012), we show that (1) Dinse's estimators are consistent if [Formula: see text] takes on finitely many values and each point in the support set of [Formula: see text] can be observed; (2) if the failure time is continuous, the NPMLE is not uniquely determined, and the standard approach (which puts weights only on one element in each observed set) leads to an inconsistent NPMLE; (3) in general, Dinse's estimators are not consistent even under the discrete assumption; (4) we construct a consistent NPMLE. The consistency is given under a new model called the dependent masking and right-censoring model. The CMP model and the RPM model are indeed special cases of the new model. We compare our estimator to Dinse's estimators through simulation and real data. The simulation study indicates that the consistent NPMLE is a good approximation to the underlying distribution for moderate sample sizes.

  4. iGLASS: An Improvement to the GLASS Method for Estimating Species Trees from Gene Trees

    PubMed Central

    Rosenberg, Noah A.

    2012-01-01

    Several methods have been designed to infer species trees from gene trees while taking into account gene tree/species tree discordance. Although some of these methods provide consistent species tree topology estimates under a standard model, most either do not estimate branch lengths or are computationally slow. An exception, the GLASS method of Mossel and Roch, is consistent for the species tree topology, estimates branch lengths, and is computationally fast. However, GLASS systematically overestimates divergence times, leading to biased estimates of species tree branch lengths. By assuming a multispecies coalescent model in which multiple lineages are sampled from each of two taxa at L independent loci, we derive the distribution of the waiting time until the first interspecific coalescence occurs between the two taxa, considering all loci and measuring from the divergence time. We then use the mean of this distribution to derive a correction to the GLASS estimator of pairwise divergence times. We show that our improved estimator, which we call iGLASS, consistently estimates the divergence time between a pair of taxa as the number of loci approaches infinity, and that it is an unbiased estimator of divergence times when one lineage is sampled per taxon. We also show that many commonly used clustering methods can be combined with the iGLASS estimator of pairwise divergence times to produce a consistent estimator of the species tree topology. Through simulations, we show that iGLASS can greatly reduce the bias and mean squared error in obtaining estimates of divergence times in a species tree. PMID:22216756

  5. Testing a Nursing-Specific Model of Electronic Patient Record documentation with regard to information completeness, comprehensiveness and consistency.

    PubMed

    von Krogh, Gunn; Nåden, Dagfinn; Aasland, Olaf Gjerløw

    2012-10-01

    To present the results from the test site application of the documentation model KPO (quality assurance, problem solving and caring), designed to improve the quality of nursing information in the electronic patient record (EPR). The KPO model was developed by means of a consensus group and clinical testing. Four documentation arenas and eight content categories, nursing terminologies and a decision-support system were designed to improve the completeness, comprehensiveness and consistency of nursing information. The testing was performed in a pre-test/post-test time series design, three times at one-year intervals. Content analysis of the nursing documentation was accomplished through the identification, interpretation and coding of information units. Data from the pre-test and post-test 2 were subjected to statistical analyses. To estimate the differences, paired t-tests were used. At post-test 2, the information is found to be more complete, comprehensive and consistent than at pre-test. The findings indicate that documentation arenas combining work flow and content categories deduced from theories on nursing practice can influence the quality of nursing information. The KPO model can be used as a guide when shifting from paper-based to electronic-based nursing documentation with the aim of obtaining complete, comprehensive and consistent nursing information. © 2012 Blackwell Publishing Ltd.

  6. Self-Consistent Ring Current Modeling with Propagating Electromagnetic Ion Cyclotron Waves in the Presence of Heavy Ions

    NASA Technical Reports Server (NTRS)

    Khazanov, G. V.; Gamayunov, K. V.; Gallagher, D. L.; Kozyra, J. U.; Liemohn, M. W.

    2006-01-01

    The self-consistent treatment of the RC ion dynamics and EMIC waves, which are thought to exert important influences on the ion dynamical evolution, is an important missing element in our understanding of the storm- and recovery-time ring current evolution. Under certain conditions, relativistic electrons, with energies greater than or equal to 1 MeV, can be removed from the outer radiation belt by EMIC wave scattering during a magnetic storm (Summers and Thorne, 2003; Albert, 2003). That is why the modeling of EMIC waves is a critical and timely issue in magnetospheric physics. This study generalizes the self-consistent theoretical description of RC ions and EMIC waves that has been developed by Khazanov et al. [2002, 2003] and includes the heavy ions and propagation effects of EMIC waves in the global dynamics of the self-consistent RC-EMIC wave coupling. Results of our newly developed model will be presented at the Huntsville 2006 meeting, focusing mainly on the dynamics of EMIC waves and a comparison of these results with previous global RC modeling studies devoted to EMIC wave formation. We also discuss RC ion precipitation and wave-induced thermal electron fluxes into the ionosphere.

  7. Cross-Scale Modelling of Subduction from Minute to Million of Years Time Scale

    NASA Astrophysics Data System (ADS)

    Sobolev, S. V.; Muldashev, I. A.

    2015-12-01

    Subduction is an essentially multi-scale process, with time scales spanning from the geological to the earthquake scale, with the seismic cycle in between. Modelling such a process constitutes one of the largest challenges in geodynamic modelling today. Here we present a cross-scale thermomechanical model capable of simulating the entire subduction process from rupture (1 min) to geological time (millions of years) that employs elasticity, mineral-physics-constrained non-linear transient viscous rheology and rate-and-state friction plasticity. The model generates spontaneous earthquake sequences. The adaptive time-step algorithm recognizes the moment of instability and drops the integration time step to its minimum value of 40 s during the earthquake. The time step is then gradually increased to its maximal value of 5 yr, following decreasing displacement rates during the postseismic relaxation. Efficient implementation of numerical techniques allows long-term simulations with total times of millions of years. This technique allows one to follow in detail the deformation process during the entire seismic cycle and over multiple seismic cycles. We observe various deformation patterns during the modelled seismic cycle that are consistent with surface GPS observations and demonstrate that, contrary to conventional ideas, the postseismic deformation may be controlled by viscoelastic relaxation in the mantle wedge, starting within only a few hours after great (M>9) earthquakes. Interestingly, in our model the average slip velocity at the fault closely follows a hyperbolic decay law. In natural observations, such deformation is interpreted as afterslip, while in our model it is caused by the viscoelastic relaxation of the mantle wedge, whose viscosity varies strongly with time. We demonstrate that our results are consistent with the postseismic surface displacement after the Great Tohoku Earthquake for the day-to-year time range. We will also present results of the modelling of deformation of the upper plate during multiple earthquake cycles at times of hundreds of thousands to millions of years, and discuss the effect of great earthquakes in changing the long-term stress field in the upper plate.
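
    The adaptive time-step logic described above can be caricatured in a few lines; the thresholds and growth factor here are invented, and only the drop-and-recover pattern follows the text (a 40 s minimum during rupture, growing back to a roughly 5 yr maximum):

        MIN_DT = 40.0                     # seconds, during an earthquake
        MAX_DT = 5 * 365.25 * 86400.0     # about 5 years, interseismic
        GROWTH = 2.0                      # invented geometric recovery factor

        def next_dt(dt, slip_rate, unstable_rate=1e-3):
            # Drop to the minimum step when instability (fast slip) is detected,
            # then grow the step geometrically as postseismic slip rates decay
            if slip_rate > unstable_rate:          # rupture in progress
                return MIN_DT
            return min(dt * GROWTH, MAX_DT)

        dt = MAX_DT
        for rate in [1e-9, 5e-1, 2e-1, 1e-4, 1e-6, 1e-9]:  # schematic history
            dt = next_dt(dt, rate)
            print(f"slip rate {rate:.0e} m/s -> dt = {dt:.0f} s")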

  8. Continuous state-space representation of a bucket-type rainfall-runoff model: a case study with the GR4 model using state-space GR4 (version 1.0)

    NASA Astrophysics Data System (ADS)

    Santos, Léonard; Thirel, Guillaume; Perrin, Charles

    2018-04-01

    In many conceptual rainfall-runoff models, the water balance differential equations are not explicitly formulated. These differential equations are solved sequentially by splitting the equations into terms that can be solved analytically with a technique called operator splitting. As a result, only the solutions of the split equations are used to present the different models. This article provides a methodology to make the governing water balance equations of a bucket-type rainfall-runoff model explicit and to solve them continuously. This is done by setting up a comprehensive state-space representation of the model. By representing it in this way, the operator splitting, which makes the structural analysis of the model more complex, could be removed. In this state-space representation, the lag functions (unit hydrographs), which are frequent in rainfall-runoff models and make the resolution of the representation difficult, are first replaced by a so-called Nash cascade and then solved with a robust numerical integration technique. To illustrate this methodology, the GR4J model is taken as an example. The substitution of the unit hydrographs with a Nash cascade, even if it modifies the model behaviour when solved using operator splitting, does not modify it when the state-space representation is solved using an implicit integration technique. Indeed, the flow time series simulated by the new representation of the model are very similar to those simulated by the classic model. The use of a robust numerical technique that approximates a continuous-time model also improves the lag parameter consistency across time steps and provides a more time-consistent model with time-independent parameters.
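
    To illustrate the substitution described, here is a minimal state-space Nash cascade, a chain of n linear reservoirs standing in for a unit hydrograph, integrated with an implicit (BDF) scheme; the reservoir count, residence time and inflow pulse are illustrative, not GR4J values:

        import numpy as np
        from scipy.integrate import solve_ivp

        def nash_cascade(t, S, k, inflow):
            # dS_i/dt = q_{i-1} - S_i/k, with q_0 the external inflow
            # and q_i = S_i/k the outflow of reservoir i
            q_in = np.concatenate(([inflow(t)], S[:-1] / k))
            return q_in - S / k

        n, k = 3, 5.0                              # reservoirs, residence time [h]
        inflow = lambda t: 1.0 if t < 2 else 0.0   # pulse of effective rainfall
        sol = solve_ivp(nash_cascade, (0, 50), np.zeros(n), args=(k, inflow),
                        method="BDF", max_step=0.5)
        outflow = sol.y[-1] / k                    # routed hydrograph at the outlet
        print(round(float(outflow.max()), 3))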

  9. Robust model predictive control for constrained continuous-time nonlinear systems

    NASA Astrophysics Data System (ADS)

    Sun, Tairen; Pan, Yongping; Zhang, Jun; Yu, Haoyong

    2018-02-01

    In this paper, a robust model predictive control (MPC) is designed for a class of constrained continuous-time nonlinear systems with bounded additive disturbances. The robust MPC consists of a nonlinear feedback control and a continuous-time model-based dual-mode MPC. The nonlinear feedback control guarantees that the actual trajectory is contained in a tube centred at the nominal trajectory. The dual-mode MPC is designed to ensure asymptotic convergence of the nominal trajectory to zero. This paper extends current results on discrete-time model-based tube MPC and linear-system model-based tube MPC to continuous-time nonlinear model-based tube MPC. The feasibility and robustness of the proposed robust MPC are demonstrated by theoretical analysis and applications to a cart-damper-spring system and a one-link robot manipulator.

  10. Diagnosis of inconsistencies in multi-year gridded precipitation data over mountainous areas and related impacts on hydrologic simulations

    NASA Astrophysics Data System (ADS)

    Mizukami, N.; Smith, M. B.

    2010-12-01

    It is common for the error characteristics of long-term precipitation data to change over time due to various factors such as gauge relocation and changes in data processing methods. The temporal consistency of precipitation data error characteristics is as important as data accuracy itself for hydrologic model calibration and subsequent use of the calibrated model for streamflow prediction. In mountainous areas, the generation of precipitation grids relies on sparse gauge networks, the makeup of which often varies over time. This causes a change in the error characteristics of the long-term precipitation data record. We will discuss the diagnostic analysis of the consistency of gridded precipitation time series and illustrate the adverse effect of inconsistent precipitation data on a hydrologic model simulation. We used hourly 4 km gridded precipitation time series over a mountainous basin in the Sierra Nevada Mountains of California from October 1988 through September 2006. The basin is part of the broader study area that served as the focus of the second phase of the Distributed Model Intercomparison Project (DMIP-2), organized by the U.S. National Weather Service (NWS) of the National Oceanic and Atmospheric Administration (NOAA). To check the consistency of the gridded precipitation time series, double mass analysis was performed using single-pixel and basin mean areal precipitation (MAP) values derived from gridded DMIP-2 and Parameter-Elevation Regressions on Independent Slopes Model (PRISM) precipitation data. The analysis leads to the conclusion that over the entire study time period, a clear change in error characteristics in the DMIP-2 data occurred at the beginning of 2003. This matches the timing of one of the major gauge network changes. The inconsistency of two MAP time series computed from the gridded precipitation fields over two elevation zones was corrected by adjusting hourly values based on the double mass analysis. We show that model simulations using the adjusted MAP data produce improved streamflow compared to simulations using the inconsistent MAP input data.
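
    Double mass analysis itself is simple: plot the cumulative sum of the suspect series against that of a reference and look for a slope break; a minimal sketch with synthetic monthly data (the breakpoint and the 0.8 factor are invented to echo the 2003 change described):

        import numpy as np

        rng = np.random.default_rng(3)
        ref = rng.gamma(2.0, 1.0, 216)       # reference series, 216 months
        suspect = ref.copy()
        suspect[170:] *= 0.8                 # error characteristics change

        cum_ref, cum_sus = np.cumsum(ref), np.cumsum(suspect)

        # Slopes of the double mass curve before/after a candidate breakpoint
        b = 170
        slope_before = cum_sus[b] / cum_ref[b]
        slope_after = (cum_sus[-1] - cum_sus[b]) / (cum_ref[-1] - cum_ref[b])
        print(round(slope_before, 2), round(slope_after, 2))   # ~1.0 vs ~0.8

        # Adjustment: rescale post-break values so both periods share one slope
        adjusted = suspect.copy()
        adjusted[b:] *= slope_before / slope_after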

  11. Consistent modelling of wind turbine noise propagation from source to receiver.

    PubMed

    Barlas, Emre; Zhu, Wei Jun; Shen, Wen Zhong; Dag, Kaya O; Moriarty, Patrick

    2017-11-01

    The unsteady nature of wind turbine noise is a major reason for annoyance. The variation of far-field sound pressure levels is not only caused by the continuous change in wind turbine noise source levels but also by the unsteady flow field and the ground characteristics between the turbine and receiver. To take these phenomena into account, a consistent numerical technique that models the sound propagation from the source to receiver is developed. Large eddy simulation with an actuator line technique is employed for the flow modelling and the corresponding flow fields are used to simulate sound generation and propagation. The local blade relative velocity, angle of attack, and turbulence characteristics are input to the sound generation model. Time-dependent blade locations and the velocity between the noise source and receiver are considered within a quasi-3D propagation model. Long-range noise propagation of a 5 MW wind turbine is investigated. Sound pressure level time series evaluated at the source time are studied for varying wind speeds, surface roughness, and ground impedances within a 2000 m radius from the turbine.

  12. Consistent modelling of wind turbine noise propagation from source to receiver

    DOE PAGES

    Barlas, Emre; Zhu, Wei Jun; Shen, Wen Zhong; ...

    2017-11-28

    The unsteady nature of wind turbine noise is a major reason for annoyance. The variation of far-field sound pressure levels is not only caused by the continuous change in wind turbine noise source levels but also by the unsteady flow field and the ground characteristics between the turbine and receiver. To take these phenomena into account, a consistent numerical technique that models the sound propagation from the source to receiver is developed. Large eddy simulation with an actuator line technique is employed for the flow modelling and the corresponding flow fields are used to simulate sound generation and propagation. The local blade relative velocity, angle of attack, and turbulence characteristics are input to the sound generation model. Time-dependent blade locations and the velocity between the noise source and receiver are considered within a quasi-3D propagation model. Long-range noise propagation of a 5 MW wind turbine is investigated. Sound pressure level time series evaluated at the source time are studied for varying wind speeds, surface roughness, and ground impedances within a 2000 m radius from the turbine.

  13. Consistent modelling of wind turbine noise propagation from source to receiver

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barlas, Emre; Zhu, Wei Jun; Shen, Wen Zhong

    The unsteady nature of wind turbine noise is a major reason for annoyance. The variation of far-field sound pressure levels is not only caused by the continuous change in wind turbine noise source levels but also by the unsteady flow field and the ground characteristics between the turbine and receiver. To take these phenomena into account, a consistent numerical technique that models the sound propagation from the source to receiver is developed. Large eddy simulation with an actuator line technique is employed for the flow modelling and the corresponding flow fields are used to simulate sound generation and propagation. The local blade relative velocity, angle of attack, and turbulence characteristics are input to the sound generation model. Time-dependent blade locations and the velocity between the noise source and receiver are considered within a quasi-3D propagation model. Long-range noise propagation of a 5 MW wind turbine is investigated. Sound pressure level time series evaluated at the source time are studied for varying wind speeds, surface roughness, and ground impedances within a 2000 m radius from the turbine.

  14. A Second Order Semi-Discrete Cosserat Rod Model Suitable for Dynamic Simulations in Real Time

    NASA Astrophysics Data System (ADS)

    Lang, Holger; Linn, Joachim

    2009-09-01

    We present an alternative approach for a semi-discrete viscoelastic Cosserat rod model that allows both fast dynamic computations within milliseconds and accurate results compared to detailed finite element solutions. The model is able to represent extension, shearing, bending and torsion. For inner dissipation, a consistent damping potential from Antman is chosen. The continuous equations of motion, which constitute a system of nonlinear hyperbolic partial differential-algebraic equations, are derived from a two-dimensional variational principle. The semi-discrete balance equations are obtained by spatial finite difference schemes on a staggered grid and standard index reduction techniques. The right-hand side of the model and its Jacobian can be chosen free of higher algebraic (e.g. root) or transcendental (e.g. trigonometric or exponential) functions and is therefore extremely cheap to evaluate numerically. For the time integration of the system, we use well-established stiff solvers. As our model yields computational times within milliseconds, it is suitable for interactive manipulation. It reflects structural mechanics solutions sufficiently accurately, as comparison with detailed finite element results shows.

  15. Self-Consistent Ring Current Modeling with Propagating Electromagnetic Ion Cyclotron Waves in the Presence of Heavy Ions

    NASA Technical Reports Server (NTRS)

    Khazanov, George V.

    2006-01-01

    The self-consistent treatment of the RC ion dynamics and EMIC waves, which are thought to exert important influences on the ion dynamical evolution, is an important missing element in our understanding of the storm- and recovery-time ring current evolution. Under certain conditions, relativistic electrons, with energies greater than or equal to 1 MeV, can be removed from the outer radiation belt by EMIC wave scattering during a magnetic storm. That is why the modeling of EMIC waves is a critical and timely issue in magnetospheric physics. To describe the RC evolution itself, this study uses the ring current-atmosphere interaction model (RAM). RAM solves the gyration- and bounce-averaged Boltzmann-Landau equation inside geosynchronous orbit. Originally developed at the University of Michigan, several branches of this model are now in use, as described by Liemohn, namely those at NASA Goddard Space Flight Center. This study generalizes the self-consistent theoretical description of RC ions and EMIC waves that has been developed by Khazanov and includes the heavy ions and propagation effects of EMIC waves in the global dynamics of the self-consistent RC-EMIC wave coupling. Results of our newly developed model will be presented at the GEM meeting, focusing mainly on the dynamics of EMIC waves and a comparison of these results with previous global RC modeling studies devoted to EMIC wave formation. We also discuss RC ion precipitation and wave-induced thermal electron fluxes into the ionosphere.

  16. Estimating the timing and location of shallow rainfall-induced landslides using a model for transient, unsaturated infiltration

    USGS Publications Warehouse

    Baum, Rex L.; Godt, Jonathan W.; Savage, William Z.

    2010-01-01

    Shallow rainfall-induced landslides commonly occur under conditions of transient infiltration into initially unsaturated soils. In an effort to predict the timing and location of such landslides, we developed a model of the infiltration process using a two-layer system that consists of an unsaturated zone above a saturated zone and implemented this model in a geographic information system (GIS) framework. The model links analytical solutions for transient, unsaturated, vertical infiltration above the water table to pressure-diffusion solutions for pressure changes below the water table. The solutions are coupled through a transient water table that rises as water accumulates at the base of the unsaturated zone. This scheme, though limited to simplified soil-water characteristics and moist initial conditions, greatly improves computational efficiency over numerical models in spatially distributed modeling applications. Pore pressures computed by these coupled models are subsequently used in one-dimensional slope-stability computations to estimate the timing and locations of slope failures. Applied over a digital landscape near Seattle, Washington, for an hourly rainfall history known to trigger shallow landslides, the model computes a factor of safety for each grid cell at any time during a rainstorm. The unsaturated layer attenuates and delays the rainfall-induced pore-pressure response of the model at depth, consistent with observations at an instrumented hillside near Edmonds, Washington. This attenuation results in realistic estimates of timing for the onset of slope instability (7 h earlier than observed landslides, on average). By considering the spatial distribution of physical properties, the model predicts the primary source areas of landslides.
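
    The one-dimensional slope-stability step corresponds, under standard assumptions, to an infinite-slope factor of safety evaluated with the transient pressure head from the infiltration model; a sketch with invented soil parameters (FS < 1 indicates predicted failure):

        import numpy as np

        def factor_of_safety(psi, Z, delta, phi=np.radians(33), c=4.0e3,
                             gamma_s=20e3, gamma_w=9.81e3):
            # Infinite-slope FS with pressure head psi [m] at depth Z [m];
            # delta: slope angle [rad], c: cohesion [Pa], gamma: unit weights [N/m3]
            return (np.tan(phi) / np.tan(delta)
                    + (c - psi * gamma_w * np.tan(phi))
                    / (gamma_s * Z * np.sin(delta) * np.cos(delta)))

        # A rising pressure head during a storm drives FS toward failure
        for psi in [0.0, 0.5, 1.0, 1.5]:
            print(psi, round(float(factor_of_safety(psi, Z=2.0,
                                                    delta=np.radians(35))), 2))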

  17. Modeling elasticity in crystal growth.

    PubMed

    Elder, K R; Katakowski, Mark; Haataja, Mikko; Grant, Martin

    2002-06-17

    A new model of crystal growth is presented that describes the phenomena on atomic length and diffusive time scales. The former incorporates elastic and plastic deformation in a natural manner, and the latter enables access to time scales much larger than conventional atomic methods. The model is shown to be consistent with the predictions of Read and Shockley for grain boundary energy, and Matthews and Blakeslee for misfit dislocations in epitaxial growth.

  18. Consistency.

    PubMed

    Levin, Roger

    2005-09-01

    Consistency is a reflection of having the right model, the right systems and the right implementation. As Vince Lombardi, the legendary coach of the Green Bay Packers, once said, "You don't do things right once in a while. You do them right all the time." To provide the ultimate level of patient care, reduce stress for the dentist and staff members and ensure high practice profitability, consistency is key.

  19. Detecting a periodic signal in the terrestrial cratering record

    NASA Technical Reports Server (NTRS)

    Grieve, Richard A. F.; Rupert, James D.; Goodacre, Alan K.; Sharpton, Virgil L.

    1988-01-01

    A time-series analysis of model periodic data, where the period and phase are known, has been performed in order to investigate whether a significant period can be detected consistently from a mix of random and periodic impacts. Special attention is given to the effect of age uncertainties and random ages in the detection of a periodic signal. An equivalent analysis is performed with observed data on crater ages and compared with the model data, and the effects of the temporal distribution of crater ages on the results from the time-series analysis are studied. Evidence for a consistent 30-m.y. period is found to be weak.

  20. A PHYSIOLOGICALLY-BASED PHARMACOKINETIC MODEL FOR TRICHLOROETHYLENE WITH SPECIFICITY FOR THE LONG EVANS RAT

    EPA Science Inventory

    A PBPK model for TCE with specificity for the male LE rat that accurately predicts TCE tissue time-course data has not been developed, although other PBPK models for TCE exist. Development of such a model was the present aim. The PBPK model consisted of 5 compartments: fat; slowl...

  1. The Effects of Training on Pre-Service English Teachers' Regulation of Their Study Time

    ERIC Educational Resources Information Center

    Daloglu, Aysegul; Vural, Seniye

    2013-01-01

    Based on Zimmerman, Bonner, and Kovach's (1996) academy model, an intervention consisting of seven weekly training sessions to increase students' awareness of and ability to plan and manage their study time was developed. Participant students reflected on the implementation of each phase of the learning model in their weekly journal entries,…

  2. Temporal validation for landsat-based volume estimation model

    Treesearch

    Renaldo J. Arroyo; Emily B. Schultz; Thomas G. Matney; David L. Evans; Zhaofei Fan

    2015-01-01

    Satellite imagery can potentially reduce the costs and time associated with ground-based forest inventories; however, for satellite imagery to provide reliable forest inventory data, it must produce consistent results from one time period to the next. The objective of this study was to temporally validate a Landsat-based volume estimation model in a four county study...

  3. Processing Speed in Children: Examination of the Structure in Middle Childhood and Its Impact on Reading

    ERIC Educational Resources Information Center

    Gerst, Elyssa H.

    2017-01-01

    The primary aim of this study was to examine the structure of processing speed (PS) in middle childhood by comparing five theoretically driven models of PS. The models consisted of two conceptual models (a unitary model, a complexity model) and three methodological models (a stimulus material model, an output modality model, and a timing modality…

  4. Modeling parameters that characterize pacing of elite female 800-m freestyle swimmers.

    PubMed

    Lipińska, Patrycja; Allen, Sian V; Hopkins, Will G

    2016-01-01

    Pacing offers a potential avenue for enhancement of endurance performance. We report here a novel method for characterizing pacing in 800-m freestyle swimming. Websites provided 50-m lap and race times for 192 swims of 20 elite female swimmers between 2000 and 2013. Pacing for each swim was characterized with five parameters derived from a linear model: linear and quadratic coefficients for effect of lap number, reductions from predicted time for first and last laps, and lap-time variability (standard error of the estimate). Race-to-race consistency of the parameters was expressed as intraclass correlation coefficients (ICCs). The average swim was a shallow negative quadratic with slowest time in the eleventh lap. First and last laps were faster by 6.4% and 3.6%, and lap-time variability was ±0.64%. Consistency between swimmers ranged from low-moderate for the linear and quadratic parameters (ICC = 0.29 and 0.36) to high for the last-lap parameter (ICC = 0.62), while consistency for race time was very high (ICC = 0.80). Only ~15% of swimmers had enough swims (~15 or more) to provide reasonable evidence of optimum parameter values in plots of race time vs. each parameter. The modest consistency of most of the pacing parameters and lack of relationships between parameters and performance suggest that swimmers usually compensated for changes in one parameter with changes in another. In conclusion, pacing in 800-m elite female swimmers can be characterized with five parameters, but identifying an optimal pacing profile is generally impractical.
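
    A sketch of how the five pacing parameters could be extracted, assuming an ordinary least-squares fit of lap time on lap number, its square, and first-lap/last-lap indicators (the synthetic lap times below are invented):

        import numpy as np

        laps = np.arange(1, 17)            # 16 x 50-m laps in an 800-m race
        rng = np.random.default_rng(4)
        t = 30 + 0.02 * (laps - 11) ** 2 + rng.normal(0, 0.15, 16)
        t[0] -= 1.9                        # fast first lap
        t[-1] -= 1.1                       # fast last lap

        X = np.column_stack([np.ones_like(laps), laps, laps**2,
                             laps == 1, laps == 16]).astype(float)
        beta = np.linalg.lstsq(X, t, rcond=None)[0]

        linear, quad = beta[1], beta[2]          # linear and quadratic lap effects
        first_lap, last_lap = beta[3], beta[4]   # first/last-lap deviations
        resid = t - X @ beta
        variability = np.sqrt(resid @ resid / (len(t) - X.shape[1]))
        print(round(quad, 3), round(first_lap, 2), round(last_lap, 2),
              round(variability, 3))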

  5. Theoretical models of the electrical discharge machining process. III. The variable mass, cylindrical plasma model

    NASA Astrophysics Data System (ADS)

    Eubank, Philip T.; Patel, Mukund R.; Barrufet, Maria A.; Bozkurt, Bedri

    1993-06-01

    A variable mass, cylindrical plasma model (VMCPM) is developed for sparks created by electrical discharge in a liquid medium. The model consists of three differential equations (one each from fluid dynamics, an energy balance, and the radiation equation) combined with a plasma equation of state. A thermophysical property subroutine allows realistic estimation of plasma enthalpy, mass density, and particle fractions by inclusion of the heats of dissociation and ionization for a plasma created from deionized water. Problems with the zero-time boundary conditions are overcome by an electron balance procedure. Numerical solution of the model provides plasma radius, temperature, pressure, and mass as functions of pulse time for fixed current, electrode gap, and power fraction remaining in the plasma. Moderately high temperatures (≳5000 K) and pressures (≳4 bar) persist in the sparks even after long pulse times (to ~500 μs). Quantitative proof that superheating is the dominant mechanism for electrical discharge machining (EDM) erosion is thus provided for the first time. Some quantitative inconsistencies that developed between our (1) cathode, (2) anode, and (3) plasma models (this series) are discussed, with an indication of how they will be rectified in a fourth article to follow shortly in this journal. While containing oversimplifications, these three models are believed to capture the respective dominant physics of the EDM process but need to be brought into numerical consistency for each time increment of the numerical solution.

  6. Collaborative Research with Chinese, Indian, Filipino and North European Research Organizations on Infectious Disease Epidemics.

    PubMed

    Sumi, Ayako; Kobayashi, Nobumichi

    2017-01-01

    In this report, we present a short review of applications of time series analysis, which consists of spectral analysis based on the maximum entropy method in the frequency domain and the least squares method in the time domain, to the incidence data of infectious diseases. This report consists of three parts. First, we present our results obtained by collaborative research on infectious disease epidemics with Chinese, Indian, Filipino and North European research organizations. Second, we present the results obtained with the Japanese infectious disease surveillance data and the time series numerically generated from a mathematical model, called the susceptible/exposed/infectious/recovered (SEIR) model. Third, we present an application of the time series analysis to pathologic tissues to examine the usefulness of time series analysis for investigating the spatial pattern of pathologic tissue. It is anticipated that time series analysis will become a useful tool for investigating not only infectious disease surveillance data but also immunological and genetic tests.
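
    A minimal SEIR integration of the kind referenced above can generate a synthetic incidence-like series on which spectral (maximum entropy) and least-squares methods may be exercised; the parameter values below are illustrative, not taken from the surveillance studies.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, sigma, gamma = 0.3, 1 / 5.0, 1 / 7.0  # transmission, incubation, recovery rates

def seir(t, y):
    S, E, I, R = y
    return [-beta * S * I,
            beta * S * I - sigma * E,
            sigma * E - gamma * I,
            gamma * I]

sol = solve_ivp(seir, (0, 365), [0.999, 0.0, 0.001, 0.0],
                t_eval=np.arange(0, 366))
infectious = sol.y[2]  # synthetic time series for subsequent spectral analysis
```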

  7. Authoring Model-Tracing Cognitive Tutors

    ERIC Educational Resources Information Center

    Blessing, Stephen B.; Gilbert, Stephen B.; Ourada, Stephen; Ritter, Steven

    2009-01-01

    Intelligent Tutoring Systems (ITSs) that employ a model-tracing methodology have consistently shown their effectiveness. However, what evidently makes these tutors effective, the cognitive model embedded within them, has traditionally been difficult to create, requiring great expertise and time, both of which come at a cost. Furthermore, an…

  8. Modeling level of urban taxi services using neural network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, J.; Wong, S.C.; Tong, C.O.

    1999-05-01

    This paper is concerned with the modeling of the complex demand-supply relationship in urban taxi services. A neural network model is developed, based on a taxi service situation observed in the urban area of Hong Kong. The input consists of several exogenous variables including number of licensed taxis, incremental charge of taxi fare, average occupied taxi journey time, average disposable income, population, and customer price index; the output consists of a set of endogenous variables including daily taxi passenger demand, passenger waiting time, vacant taxi headway, average percentage of occupied taxis, taxi utilization, and average taxi waiting time. Comparisons of the estimation accuracy are made between the neural network model and the simultaneous equations model. The results show that the neural network-based macro taxi model can obtain much more accurate information about the taxi services than the simultaneous equations model does. Although the data set used for training the neural network is small, the results obtained thus far are very encouraging. The neural network model can be used as a policy tool by regulators to assist with decisions concerning restrictions on the number of taxi licenses and the fixing of the taxi fare structure, as well as a range of service quality controls.
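
    A minimal sketch of such an input-output mapping with a small multilayer perceptron; the data are synthetic stand-ins, and the real model's architecture and training differ.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# 6 exogenous inputs (licensed taxis, fare increment, occupied journey time,
# disposable income, population, CPI) -> 6 endogenous service indicators.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
Y = X @ rng.normal(size=(6, 6)) + 0.1 * rng.normal(size=(200, 6))

Xs = StandardScaler().fit_transform(X)
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                     random_state=0).fit(Xs, Y)
print("R^2 on training data:", model.score(Xs, Y))
```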

  9. General mechanism of two-state protein folding kinetics.

    PubMed

    Rollins, Geoffrey C; Dill, Ken A

    2014-08-13

    We describe here a general model of the kinetic mechanism of protein folding. In the Foldon Funnel Model, proteins fold in units of secondary structures, which form sequentially along the folding pathway, stabilized by tertiary interactions. The model predicts that the free energy landscape has a volcano shape, rather than a simple funnel; that folding is two-state (single-exponential) when secondary structures are intrinsically unstable; and that each structure along the folding path is a transition state for the previous structure. It shows how sequential pathways are consistent with multiple stochastic routes on funnel landscapes, and it gives good agreement with the nine-order-of-magnitude dependence of folding rates on protein size for a set of 93 proteins, while remaining consistent with the near independence of the folding equilibrium constant from size. This model gives estimates of folding rates of proteomes, leading to a median folding time in Escherichia coli of about 5 s.

  10. Some Aspects of Advanced Tokamak Modeling in DIII-D

    NASA Astrophysics Data System (ADS)

    St John, H. E.; Petty, C. C.; Murakami, M.; Kinsey, J. E.

    2000-10-01

    We extend previous work (M. Murakami et al., General Atomics Report GA-A23310, 1999) done on time-dependent DIII-D advanced tokamak simulations by introducing theoretical confinement models rather than relying on power-balance-derived transport coefficients. We explore using NBCD and off-axis ECCD together with a self-consistent aligned bootstrap current, driven by the internal transport barrier dynamics generated with the GLF23 confinement model, to shape the hollow current profile and to maintain MHD-stable conditions. Our theoretical modeling approach uses measured DIII-D initial conditions to start the simulations in a smooth, consistent manner. This mitigates the troublesome, long-lived perturbations in the ohmic current profile that are normally caused by inconsistent initial data. To achieve this goal, our simulation uses a sequence of time-dependent eqdsks generated autonomously by the EFIT MHD equilibrium code when analyzing experimental data to supply the history for the simulation.

  11. Information Flow in an Atmospheric Model and Data Assimilation

    ERIC Educational Resources Information Center

    Yoon, Young-noh

    2011-01-01

    Weather forecasting consists of two processes, model integration and analysis (data assimilation). During the model integration, the state estimate produced by the analysis evolves to the next cycle time according to the atmospheric model to become the background estimate. The analysis then produces a new state estimate by combining the background…

  12. Pseudo-dynamic source characterization accounting for rough-fault effects

    NASA Astrophysics Data System (ADS)

    Galis, Martin; Thingbaijam, Kiran K. S.; Mai, P. Martin

    2016-04-01

    Broadband ground-motion simulations, ideally for frequencies up to ~10 Hz or higher, are important for earthquake engineering, for example in seismic hazard analysis for critical facilities. An issue with such simulations is the realistic generation of the radiated wave field in the desired frequency range. Numerical simulations of dynamic ruptures propagating on rough faults suggest that fault roughness is necessary for realistic high-frequency radiation. However, simulations of dynamic ruptures are too expensive for routine applications. Therefore, simplified synthetic kinematic models are often used. They are usually based on rigorous statistical analysis of rupture models inferred by inversions of seismic and/or geodetic data. However, due to the limited resolution of the inversions, these models are valid only in the low-frequency range. In addition to slip, parameters such as rupture-onset time, rise time, and source time functions are needed for a complete spatiotemporal characterization of the earthquake rupture, but these parameters are poorly resolved in source inversions. To obtain a physically consistent quantification of these parameters, we simulate and analyze spontaneous dynamic ruptures on rough faults. First, by analyzing the impact of fault roughness on the rupture and seismic radiation, we develop equivalent planar-fault kinematic analogues of the dynamic ruptures. Next, we investigate the spatial interdependencies between the source parameters to allow consistent modeling that emulates the observed behavior of dynamic ruptures capturing the rough-fault effects. Based on these analyses, we formulate a framework for a pseudo-dynamic source model that is physically consistent with dynamic ruptures on rough faults.

  13. Computing the sensitivity of drag and lift in flow past a circular cylinder: Time-stepping versus self-consistent analysis

    NASA Astrophysics Data System (ADS)

    Meliga, Philippe

    2017-07-01

    We provide in-depth scrutiny of two methods making use of adjoint-based gradients to compute the sensitivity of drag in the two-dimensional, periodic flow past a circular cylinder (Re ≲ 189): first, the time-stepping analysis used in Meliga et al. [Phys. Fluids 26, 104101 (2014), 10.1063/1.4896941] that relies on classical Navier-Stokes modeling and determines the sensitivity to any generic control force from time-dependent adjoint equations marched backwards in time; and, second, a self-consistent approach building on the model of Mantič-Lugo et al. [Phys. Rev. Lett. 113, 084501 (2014), 10.1103/PhysRevLett.113.084501] to compute semilinear approximations of the sensitivity to the mean and fluctuating components of the force. Both approaches are applied to open-loop control by a small secondary cylinder and allow identification of the sensitive regions without knowledge of the controlled states. The theoretical predictions obtained by time-stepping analysis reproduce well the results obtained by direct numerical simulation of the two-cylinder system. So do the predictions obtained by self-consistent analysis, which corroborates the relevance of the approach as a guideline for efficient and systematic control design in the attempt to reduce drag, even though the Reynolds number is not close to the instability threshold and the oscillation amplitude is not small. This is because, unlike simpler approaches relying on linear stability analysis to predict the main features of the flow unsteadiness, the semilinear framework encompasses rigorously the effect of the control on the mean flow, as well as on the finite-amplitude fluctuation that feeds back nonlinearly onto the mean flow via the formation of Reynolds stresses. Such results are especially promising as the self-consistent approach determines the sensitivity from time-independent equations that can be solved iteratively, which makes it generally less computationally demanding. We ultimately discuss the extent to which relevant information can be gained from a hybrid modeling approach computing self-consistent sensitivities from the postprocessing of DNS data. Application to alternative control objectives such as increasing the lift and alleviating the fluctuating drag and lift is also discussed.

  14. A New Comptonization Model for Weakly Magnetized Accreting NS LMXBs

    NASA Astrophysics Data System (ADS)

    Paizis, A.; Farinelli, R.; Titarchuk, L.; Frontera, F.; Cocchi, M.; Ferrigno, C.

    2009-05-01

    We have developed a new Comptonization model to propose, for the first time, a self-consistent physical interpretation of the complex spectral evolution seen in NS LMXBs. The model and its application to LMXBs are presented and compared with the expected capabilities of Simbol-X.

  15. A recipe for consistent 3D management of velocity data and time-depth conversion using Vel-IO 3D

    NASA Astrophysics Data System (ADS)

    Maesano, Francesco E.; D'Ambrogi, Chiara

    2017-04-01

    3D geological model production and related basin analyses need large, consistent seismic datasets and, ideally, well logs to support correlation and calibration; the workflow and tools used to manage and integrate different types of data control the soundness of the final 3D model. Even though seismic interpretation is a basic early step in such a workflow, the most critical step in obtaining a comprehensive 3D model useful for further analyses is the construction of an effective 3D velocity model and a well-constrained time-depth conversion. We present a complex workflow that includes comprehensive management of large seismic and velocity datasets, the construction of a 3D instantaneous multilayer-cake velocity model, and the time-depth conversion of a highly heterogeneous geological framework, including both depositional and structural complexities. The core of the workflow is the construction of the 3D velocity model using the Vel-IO 3D tool (Maesano and D'Ambrogi, 2017; https://github.com/framae80/Vel-IO3D), which is composed of the following three scripts, written in Python 2.7.11 under the ArcGIS ArcPy environment: i) the 3D instantaneous velocity model builder creates a preliminary 3D instantaneous velocity model using key horizons in the time domain and velocity data obtained from the analysis of well and pseudo-well logs; the script applies spatial interpolation to the velocity parameters and calculates the depth of each point on each horizon bounding the layer-cake velocity model. ii) the velocity model optimizer improves the consistency of the velocity model by adding new velocity data indirectly derived from measured depths, thus reducing the geometrical uncertainties in areas located far from the original velocity data. iii) the time-depth converter runs the time-depth conversion of any object located inside the 3D velocity model. The Vel-IO 3D tool allows one to create 3D geological models consistent with the primary geological constraints (e.g. the depth of markers on wells). The workflow and the Vel-IO 3D tool have been developed and tested for the construction of the 3D geological model of a flat region, 5700 km2 in area, located in the central part of the Po Plain (Northern Italy), in the frame of the European-funded Project GeoMol. The study area was covered by a dense dataset of seismic lines (ca. 12000 km) and exploration wells (130 wells), mainly deriving from oil and gas exploration activities. The interpretation of the seismic dataset led to the construction of a 3D model in the time domain that was depth converted using Vel-IO 3D, with a 4-layer-cake 3D instantaneous velocity model. The resulting final 3D geological model, composed of 15 horizons and 150 faults, has been used for basin analysis at the regional scale, for geothermal assessment, and for updating the seismotectonic knowledge of the Po Plain. Vel-IO 3D has been further used for the depth conversion of the accretionary prism of the Calabrian subduction (Southern Italy) and for a basin-scale analysis of the Plio-Pleistocene evolution of the Po Plain. Maesano, F.E., D'Ambrogi, C., 2017. Computers and Geosciences. doi:10.1016/j.cageo.2016.11.013. Vel-IO 3D is available at: https://github.com/framae80/Vel-IO3D
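
    The depth conversion at the core of such a workflow follows from the instantaneous velocity law of each layer. A minimal, self-contained sketch (not Vel-IO 3D code) for a linear law v(z) = v0 + k*z, with illustrative values:

```python
import numpy as np

def depth_from_twt(twt_s, v0, k):
    """Depth from two-way time for instantaneous velocity v(z) = v0 + k*z.

    One-way time t1 = (1/k)*ln(1 + k*z/v0)  =>  z = (v0/k)*(exp(k*t1) - 1).
    """
    t1 = np.asarray(twt_s) / 2.0
    if abs(k) < 1e-9:            # constant-velocity limit
        return v0 * t1
    return (v0 / k) * np.expm1(k * t1)

# Horizon picked at 1.8 s TWT; v0 = 1800 m/s and k = 0.4 1/s are illustrative.
print(depth_from_twt(1.8, v0=1800.0, k=0.4))  # depth in metres (~1950 m)
```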

  16. Consistent View of Protein Fluctuations from All-Atom Molecular Dynamics and Coarse-Grained Dynamics with Knowledge-Based Force-Field.

    PubMed

    Jamroz, Michal; Orozco, Modesto; Kolinski, Andrzej; Kmiecik, Sebastian

    2013-01-08

    It is widely recognized that atomistic Molecular Dynamics (MD), a classical simulation method, captures the essential physics of protein dynamics. That idea is supported by a theoretical study showing that various MD force-fields provide a consensus picture of protein fluctuations in aqueous solution [Rueda, M. et al. Proc. Natl. Acad. Sci. U.S.A. 2007, 104, 796-801]. However, atomistic MD cannot be applied to most biologically relevant processes due to its limitation to relatively short time scales. Much longer time scales can be accessed by properly designed coarse-grained models. We demonstrate that the aforementioned consensus view of protein dynamics from short (nanosecond) time scale MD simulations is fairly consistent with the dynamics of the coarse-grained CABS protein model. The CABS model employs stochastic dynamics (a Monte Carlo method) and a knowledge-based force-field that is not biased toward the native structure of a simulated protein. Since CABS-based dynamics allows for the simulation of an entire folding trajectory (or multiple folding events) in a single run, integration of the CABS approach with all-atom MD promises a convenient (and computationally feasible) means for long-time multiscale molecular modeling of protein systems with atomistic resolution.

  17. Toroidal Ampere-Faraday Equations Solved Consistently with the CQL3D Fokker-Planck Time-Evolution

    NASA Astrophysics Data System (ADS)

    Harvey, R. W.; Petrov, Yu. V.

    2013-10-01

    A self-consistent, time-dependent toroidal electric field calculation is a key feature of a complete 3D Fokker-Planck kinetic distribution radial transport code for f(v,theta,rho,t). In the present CQL3D finite-difference model, the electric field E(rho,t) is either prescribed, or iteratively adjusted to obtain prescribed toroidal or parallel currents. We discuss first results of an implementation of the Ampere-Faraday equation for the self-consistent toroidal electric field, as applied to the runaway electron production in tokamaks due to rapid reduction of the plasma temperature as occurs in a plasma disruption. Our previous results assuming a constant current density (Lenz' Law) model showed that prompt "hot-tail runaways" dominated "knock-on" and Dreicer "drizzle" runaways; we will examine modifications due to the more complete Ampere-Faraday solution. Work supported by US DOE under DE-FG02-ER54744.

  18. Consistent Chemical Mechanism from Collaborative Data Processing

    DOE PAGES

    Slavinskaya, Nadezda; Starcke, Jan-Hendrik; Abbasi, Mehdi; ...

    2016-04-01

    The numerical tool of the Process Informatics Model (PrIMe) is a mathematically rigorous and numerically efficient approach for the analysis and optimization of chemical systems. It handles heterogeneous data and is scalable to a large number of parameters. The Bound-to-Bound Data Collaboration module of the automated data-centric infrastructure of PrIMe was used for systematic uncertainty and data-consistency analyses of the H2/CO reaction model (73/17) and 94 experimental targets (ignition delay times). An empirical rule for the evaluation of shock-tube experimental data is proposed. The initial results demonstrate clear benefits of the PrIMe methods for evaluating kinetic data quality and data consistency and for developing predictive kinetic models.

  19. A Time-dependent Heliospheric Model Driven by Empirical Boundary Conditions

    NASA Astrophysics Data System (ADS)

    Kim, T. K.; Arge, C. N.; Pogorelov, N. V.

    2017-12-01

    Consisting of charged particles originating from the Sun, the solar wind carries the Sun's energy and magnetic field outward through interplanetary space. The solar wind is the predominant source of space weather events, and modeling the solar wind propagation to Earth is a critical component of space weather research. Solar wind models are typically separated into coronal and heliospheric parts to account for the different physical processes and scales characterizing each region. Coronal models are often coupled with heliospheric models to propagate the solar wind out to Earth's orbit and beyond. The Wang-Sheeley-Arge (WSA) model is a semi-empirical coronal model consisting of a potential field source surface model and a current sheet model that takes synoptic magnetograms as input to estimate the magnetic field and solar wind speed at any distance above the coronal region. The current version of the WSA model takes the Air Force Data Assimilative Photospheric Flux Transport (ADAPT) model as input to provide improved time-varying solutions for the ambient solar wind structure. When heliospheric MHD models are coupled with the WSA model, density and temperature at the inner boundary are treated as free parameters that are tuned to optimal values. For example, the WSA-ENLIL model prescribes density and temperature assuming momentum flux and thermal pressure balance across the inner boundary of the ENLIL heliospheric MHD model. We consider an alternative approach of prescribing density and temperature using empirical correlations derived from Ulysses and OMNI data. We use our own modeling software (Multi-scale Fluid-kinetic Simulation Suite) to drive a heliospheric MHD model with ADAPT-WSA input. The modeling results using the two different approaches of density and temperature prescription suggest that the use of empirical correlations may be a more straightforward, consistent method.

  20. Integrated Modeling of Time Evolving 3D Kinetic MHD Equilibria and NTV Torque

    NASA Astrophysics Data System (ADS)

    Logan, N. C.; Park, J.-K.; Grierson, B. A.; Haskey, S. R.; Nazikian, R.; Cui, L.; Smith, S. P.; Meneghini, O.

    2016-10-01

    New analysis tools and integrated modeling of plasma dynamics developed in the OMFIT framework are used to study kinetic MHD equilibria evolution on the transport time scale. The experimentally observed profile dynamics following the application of 3D error fields are described using a new OMFITprofiles workflow that directly addresses the need for rapid and comprehensive analysis of dynamic equilibria for next-step theory validation. The workflow treats all diagnostic data as fundamentally time dependent, provides physics-based manipulations such as ELM phase data selection, and is consistent across multiple machines, including DIII-D and NSTX-U. The seamless integration of tokamak data and simulation is demonstrated by using the self-consistent kinetic EFIT equilibria and profiles as input into 2D particle, momentum and energy transport calculations using TRANSP as well as 3D kinetic MHD equilibrium stability and neoclassical transport modeling using the General Perturbed Equilibrium Code (GPEC). The result is a smooth kinetic stability and NTV torque evolution over transport time scales. Work supported by DE-AC02-09CH11466.

  1. Mathematical models for prediction of rheological parameters in vinasses derived from sugar cane

    NASA Astrophysics Data System (ADS)

    Chacua, Leidy M.; Ayala, Germán; Rojas, Hernán; Agudelo, Ana C.

    2016-04-01

    The rheological behaviour of vinasses derived from sugar cane was studied as a function of time (0 to 600 s), soluble solids content (44 to 60 °Brix), temperature (10 to 50°C), and shear rate (0.33 to 1.0 s^-1). The results indicated that vinasses were time-independent at 25°C, where shear stress values ranged between 0.01 and 0.08 Pa. Flow curves showed shear-thinning rheological behaviour in vinasses, with a flow behaviour index between 0.69 and 0.89 for temperatures between 10 and 20°C. With increasing temperature, the flow behaviour index was modified, reaching values close to 1.0. The Arrhenius model described well the thermal activation of shear stress and of the consistency coefficient as a function of temperature. Activation energy from the Arrhenius model ranged between 31 and 45 kJ mol-1. Finally, the consistency coefficient as a function of the soluble solids content and temperature was well fitted using an exponential model (R2 = 0.951), showing that the soluble solids content and temperature have opposing effects on the consistency coefficient.
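
    The two fitted relationships above, the power law tau = K * gamma_dot**n for shear thinning and the Arrhenius dependence of the consistency coefficient K on temperature, can be reproduced with a short fit. The sketch below uses synthetic data and illustrative constants, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

R = 8.314  # J mol^-1 K^-1

def arrhenius(T, K0, Ea):
    # Thermal activation of the consistency coefficient: K(T) = K0*exp(Ea/(R*T)).
    return K0 * np.exp(Ea / (R * T))

# Synthetic K(T) data for one solids content (Pa s^n), not the paper's values.
T = np.array([283.0, 288.0, 293.0, 298.0, 303.0])
rng = np.random.default_rng(1)
K = arrhenius(T, 2.0e-4, 38_000.0) * (1 + 0.02 * rng.normal(size=T.size))

(K0_fit, Ea_fit), _ = curve_fit(arrhenius, T, K, p0=(1e-4, 3.5e4))
print("activation energy: %.1f kJ/mol" % (Ea_fit / 1e3))  # cf. 31-45 kJ/mol reported

# The shear-thinning behaviour itself is the power law tau = K * gamma_dot**n,
# with flow behaviour index n < 1.
```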

  2. An Evaluation of Nutrition Education Program for Low-Income Youth

    ERIC Educational Resources Information Center

    Kemirembe, Olive M. K.; Radhakrishna, Rama B.; Gurgevich, Elise; Yoder, Edgar P.; Ingram, Patreese D.

    2011-01-01

    A quasi-experimental design consisting of pretest, posttest, and delayed posttest with a comparison control group was used. Nutrition knowledge and behaviors were measured at pretest (time 1), posttest (time 2), and delayed posttest (time 3). General Linear Model (GLM) repeated-measures ANCOVA results showed that youth who received nutrition education…

  3. Time Series ARIMA Models of Undergraduate Grade Point Average.

    ERIC Educational Resources Information Center

    Rogers, Bruce G.

    Auto-Regressive Integrated Moving Average (ARIMA) models, often referred to as Box-Jenkins models, are regression methods for analyzing sequential, dependent observations with large amounts of data. The Box-Jenkins approach, a three-stage procedure consisting of identification, estimation, and diagnosis, was used to select the most appropriate…
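
    Although the abstract is truncated, the three Box-Jenkins stages it names map directly onto a modern statsmodels workflow; the GPA-like series and the candidate order below are illustrative.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Identification would inspect ACF/PACF plots (statsmodels.graphics.tsaplots);
# here we jump to estimation and diagnosis with an illustrative order.
rng = np.random.default_rng(0)
gpa = 3.0 + np.cumsum(0.01 * rng.normal(size=40))   # synthetic GPA-like series

fit = ARIMA(gpa, order=(1, 1, 0)).fit()             # estimation
print(fit.summary())                                # diagnosis: check residual stats
```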

  4. Method of locating underground mine fires

    DOEpatents

    Laage, Linneas; Pomroy, William

    1992-01-01

    An improved method of locating an underground mine fire by comparing the pattern of measured combustion-product arrival times at detector locations with a real-time, computer-generated array of simulated patterns. A number of electronic fire detection devices are linked through telemetry to a control station on the surface. The mine's ventilation is modeled on a digital computer using network analysis software. The time required to locate a fire consists of the time required to model the mine's ventilation, generate the arrival time array, scan the array, and match measured arrival time patterns to the simulated patterns.
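
    The pattern-matching step reduces to finding the simulated arrival-time pattern closest to the measured one. A minimal sketch with illustrative numbers:

```python
import numpy as np

# Simulated arrival-time array: rows = candidate fire locations (from the
# ventilation network model), columns = detector stations. All values are
# illustrative.
simulated = np.array([[120.0, 300.0, 450.0],
                      [ 60.0, 210.0, 390.0],
                      [240.0,  90.0, 330.0]])
measured = np.array([65.0, 205.0, 400.0])

# Match the measured pattern to the closest simulated pattern (least squares).
errors = np.sum((simulated - measured) ** 2, axis=1)
print("most likely fire location index:", int(np.argmin(errors)))
```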

  5. Negative emotional reactivity moderates the relations between family cohesion and internalizing and externalizing symptoms in adolescence

    PubMed Central

    Rabinowitz, Jill A.; Osigwe, Ijeoma; Drabick, Deborah A.G.; Reynolds, Maureen D.

    2016-01-01

    Lower family cohesion is associated with adolescent internalizing and externalizing problems. However, there are likely individual differences in youth's responses to family processes. For example, adolescents higher in negative emotional reactivity, who often exhibit elevated physiological responsivity to context, may be differentially affected by family cohesion. We explored whether youth's negative emotional reactivity moderated the relation between family cohesion and youth's symptoms and tested whether findings were consistent with the diathesis-stress model or differential susceptibility hypothesis. Participants were 651 adolescents (M = 12.99 ± .95 years old; 72% male) assessed at two time points (Time 1, ages 12–14; Time 2, age 16) in Pittsburgh, PA. At Time 1, mothers reported on family cohesion and youth reported on their negative emotional reactivity. At Time 2, youth reported on their symptoms. Among youth higher in negative emotional reactivity, lower family cohesion predicted higher symptoms than higher family cohesion, consistent with the diathesis-stress model. PMID:27718379

  6. A functional form for injected MRI Gd-chelate contrast agent concentration incorporating recirculation, extravasation and excretion

    NASA Astrophysics Data System (ADS)

    Horsfield, Mark A.; Thornton, John S.; Gill, Andrew; Jager, H. Rolf; Priest, Andrew N.; Morgan, Bruno

    2009-05-01

    A functional form for the vascular concentration of MRI contrast agent after intravenous bolus injection was developed that can be used to model the concentration at any vascular site at which contrast concentration can be measured. The form is based on previous models of blood circulation, and is consistent with previously measured data at long post-injection times, when the contrast agent is fully and evenly dispersed in the blood. It allows the first-pass and recirculation peaks of contrast agent to be modelled, and measurement of the absolute concentration of contrast agent at a single time point allows the whole time course to be rescaled to give absolute contrast agent concentration values. This measure of absolute concentration could be performed at a long post-injection time using either MRI or blood-sampling methods. In order to provide a model that is consistent with measured data, it was necessary to include both rapid and slow extravasation, together with excretion via the kidneys. The model was tested on T1-weighted data from the descending aorta and hepatic portal vein, and on T2*-weighted data from the cerebral arteries. Fitting of the model was successful for all datasets, but there was a considerable variation in fit parameters between subjects, which suggests that the formation of a meaningful population-averaged vascular concentration function is precluded.

  7. Multiplicative point process as a model of trading activity

    NASA Astrophysics Data System (ADS)

    Gontis, V.; Kaulakys, B.

    2004-11-01

    Signals consisting of a sequence of pulses show that the inherent origin of the 1/f noise is a Brownian fluctuation of the average interevent time between subsequent pulses of the pulse sequence. In this paper, we generalize the model of interevent time to reproduce a variety of self-affine time series exhibiting power spectral density S(f) scaling as a power of the frequency f. Furthermore, we analyze the relation between the power-law correlations and the origin of the power-law probability distribution of the signal intensity. We introduce a stochastic multiplicative model for the time intervals between point events and analyze the statistical properties of the signal analytically and numerically. Such a model system exhibits power-law spectral density S(f) ~ 1/f^β for various values of β, including β = 1/2, 1, and 3/2. Explicit expressions for the power spectra in the low-frequency limit and for the distribution density of the interevent time are obtained. The counting statistics of the events are analyzed analytically and numerically as well. The specific interest of our analysis is related to the financial markets, where long-range correlations of price fluctuations largely depend on the number of transactions. We analyze the spectral density and counting statistics of the number of transactions. The model reproduces the spectral properties of real markets and explains the mechanism of the power-law distribution of trading activity. The study provides evidence that the statistical properties of the financial markets are enclosed in the statistics of the time interval between trades. A multiplicative point process serves as a consistent model generating this statistics.
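
    A stochastic multiplicative iteration for the interevent times of the general form studied here can be simulated in a few lines; the parameter values and the simple clipping at the bounds are illustrative choices.

```python
import numpy as np

# Multiplicative interevent-time iteration:
#   tau_{k+1} = tau_k + gamma * tau_k**(2*mu - 1) + sigma * tau_k**mu * eps_k,
# with tau restricted to [tau_min, tau_max].
rng = np.random.default_rng(0)
gamma, sigma, mu = 1e-4, 0.02, 0.5
tau_min, tau_max = 1e-3, 1.0

n = 100_000
tau = np.empty(n)
tau[0] = 0.1
for k in range(n - 1):
    step = tau[k] + gamma * tau[k]**(2*mu - 1) + sigma * tau[k]**mu * rng.normal()
    tau[k + 1] = np.clip(step, tau_min, tau_max)  # simple stand-in for reflecting bounds

# Event times and a trading-activity proxy: event counts per unit time window.
t = np.cumsum(tau)
counts, _ = np.histogram(t, bins=np.arange(0.0, t[-1], 1.0))
```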

  8. Predictors of nursing home residents' time to hospitalization.

    PubMed

    O'Malley, A James; Caudry, Daryl J; Grabowski, David C

    2011-02-01

    To model the predictors of the time to first acute hospitalization for nursing home residents and, accounting for previous hospitalizations, to model the predictors of the time between subsequent hospitalizations. Data consisted of a merged file from New York State for the period 1998-2004, containing nursing home information from the minimum dataset and hospitalization information from the Statewide Planning and Research Cooperative System. Accelerated failure time models were used to estimate the model parameters and predict survival times. The models were fit to observations from 50 percent of the nursing homes and validated on the remaining observations. Pressure ulcers and facility-level deficiencies were associated with a decreased time to first hospitalization, while the presence of advance directives and facility staffing was associated with an increased time. These predictors of the time to first hospitalization had effects of similar magnitude in predicting the time between subsequent hospitalizations. This study provides novel evidence suggesting that modifiable patient and nursing home characteristics are associated with the time to first hospitalization and the time to subsequent hospitalizations for nursing home residents. © Health Research and Educational Trust.
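
    An accelerated failure time fit of this kind can be sketched with the lifelines package; the resident-level data below are synthetic, with covariate names merely echoing predictors from the study.

```python
import pandas as pd
from lifelines import WeibullAFTFitter

# Synthetic resident-level data: time to first hospitalization (days),
# event indicator, and two illustrative covariates.
df = pd.DataFrame({
    "time_to_hosp": [120, 340, 45, 500, 210, 90, 400, 30],
    "hospitalized": [1, 1, 1, 0, 1, 1, 0, 1],
    "pressure_ulcer": [1, 0, 1, 0, 0, 1, 0, 1],
    "advance_directive": [0, 1, 0, 1, 1, 0, 1, 0],
})

aft = WeibullAFTFitter()
aft.fit(df, duration_col="time_to_hosp", event_col="hospitalized")
aft.print_summary()  # positive coefficients imply longer (decelerated) times to event
```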

  9. Conformity and Dissonance in Generalized Voter Models

    NASA Astrophysics Data System (ADS)

    Page, Scott E.; Sander, Leonard M.; Schneider-Mizell, Casey M.

    2007-09-01

    We generalize the voter model to include social forces that produce conformity among voters and avoidance of cognitive dissonance of opinions within a voter. The time for both conformity and consistency (which we call the exit time) is, in general, much longer than for either process alone. We show that our generalized model can be applied quite widely: it is a form of Wright's island model of population genetics, and is related to problems in the physical sciences. We give scaling arguments, numerical simulations, and analytic estimates for the exit time for a range of relative strengths in the tendency to conform and to avoid dissonance.
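
    A toy implementation consistent with the verbal description (conformity copies a neighbour's opinion; dissonance avoidance aligns an agent's own opinions) illustrates the exit time; the specific update rule below is an assumption, not the paper's exact dynamics.

```python
import numpy as np

# N agents each hold K binary opinions. Conformity: copy a random agent's
# opinion on a random issue (mean-field "neighbourhood"). Dissonance
# avoidance: reset one of your own opinions to your internal majority.
rng = np.random.default_rng(0)
N, K, p_conform = 50, 3, 0.5
opinions = rng.integers(0, 2, size=(N, K))

steps = 0
while not (np.all(opinions == opinions[0]) and
           np.all(opinions[0] == opinions[0, 0])):
    i, k = rng.integers(N), rng.integers(K)
    if rng.random() < p_conform:
        j = rng.integers(N)
        opinions[i, k] = opinions[j, k]                  # conformity
    else:
        opinions[i, k] = int(opinions[i].mean() >= 0.5)  # internal consistency
    steps += 1

print("exit time (updates):", steps)
```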

  10. When Mothers' Work Matters for Youths' Daily Time Use: Implications of Evening and Weekend Shifts.

    PubMed

    Lee, Soomi; Davis, Kelly D; McHale, Susan M; Kelly, Erin L; Kossek, Ellen Ernst; Crouter, Ann C

    2017-08-01

    Drawing upon the work-home resources model, this study examined the implications of mothers' evening and weekend shifts for youths' time with mother, alone, and hanging out with peers unsupervised, with attention to both the amount and day-to-day consistency of time use. Data came from 173 mothers who worked in the long-term care industry and their youths who provided daily diaries. Multilevel modeling revealed that youths whose mothers worked more evening shifts on average spent less time with their mothers compared to youths whose mothers worked fewer evening shifts. Youths whose mothers worked more weekend shifts, however, spent more time with their mothers and exhibited less consistency in their time in all three activity domains compared to youths whose mothers worked fewer weekend shifts. Girls, not boys, spent less time alone on days when mothers worked weekend shifts than on days with standard shifts. Older but not younger adolescents spent more time hanging out with friends on evening and weekend shift days, and their unsupervised peer time was less consistent across days when mothers worked more evening shifts. These effects adjusted for sociodemographic and day characteristics, including school day, number of children in the household, mothers' marital status and work hours, and time with fathers. Our results illuminate the importance of the timing and day of mothers' work for youths' daily activities. Future interventions should consider how to increase mothers' resources to deal with constraints on parenting due to their work during nonstandard hours, with attention to child gender and age.

  11. Observations of pockmark flow structure in Belfast Bay, Maine, Part 2: evidence for cavity flow

    USGS Publications Warehouse

    Fandel, Christina L.; Lippmann, Thomas C.; Foster, Diane L.; Brothers, Laura L.

    2017-01-01

    Pockmark flow circulation patterns were investigated through current measurements along the rim and center of two pockmarks in Belfast Bay, Maine. Observed time-varying current profiles have a complex vertical and directional structure that rotates significantly with depth and is strongly dependent on the phase of the tide. Observations of the vertical profiles of horizontal velocities in relation to relative geometric parameters of the pockmark are consistent with circulation patterns described qualitatively by cavity flow models (Ashcroft and Zhang 2005). The time-mean behavior of the shear layer is typically used to characterize cavity flow, and was estimated using vorticity thickness to quantify the growth rate of the shear layer horizontally across the pockmark. Estimated positive vorticity thickness spreading rates are consistent with cavity flow predictions, and occur at largely different rates between the two pockmarks. Previously modeled flow (Brothers et al. 2011) and laboratory measurements (Pau et al. 2014) over pockmarks of similar geometry to those examined herein are also qualitatively consistent with cavity flow circulation, suggesting that cavity flow may be a good first-order flow model for pockmarks in general.
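
    Vorticity thickness, used here to quantify shear-layer growth, can be computed directly from a measured velocity profile. A minimal sketch with an illustrative tanh profile:

```python
import numpy as np

def vorticity_thickness(y, u):
    """Shear-layer vorticity thickness: delta_w = (U_max - U_min) / max|dU/dy|."""
    dudy = np.gradient(u, y)
    return (u.max() - u.min()) / np.abs(dudy).max()

# Illustrative tanh shear-layer profile with momentum scale 0.1.
y = np.linspace(-1.0, 1.0, 401)
u = np.tanh(y / 0.1)
print(vorticity_thickness(y, u))  # ~0.2 for this profile
```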

  12. A finite nonlinear hyper-viscoelastic model for soft biological tissues.

    PubMed

    Panda, Satish Kumar; Buist, Martin Lindsay

    2018-03-01

    Soft tissues exhibit highly nonlinear rate- and time-dependent stress-strain behaviour. Strain and strain rate dependencies are often modelled using a hyperelastic model and a discrete (standard linear solid) or continuous spectrum (quasi-linear) viscoelastic model, respectively. However, these models are unable to properly capture the material characteristics, because hyperelastic models are unsuited for time-dependent events, whereas the common viscoelastic models are insufficient for the nonlinear and finite-strain viscoelastic tissue responses. Convolution-integral-based models can demonstrate a finite viscoelastic response; however, their derivations are not consistent with the laws of thermodynamics. The aim of this work was to develop a three-dimensional finite hyper-viscoelastic model for soft tissues using a thermodynamically consistent approach. In addition, a nonlinear function, dependent on strain and strain rate, was adopted to capture the nonlinear variation of viscosity during a loading process. To demonstrate the efficacy and versatility of this approach, the model was used to recreate experimental results obtained on different types of soft tissues. In all cases, the simulation results matched the experimental data well (R² ≥ 0.99). Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. Analysis of the Relation between Academic Procrastination, Academic Rational/Irrational Beliefs, Time Preferences to Study for Exams, and Academic Achievement: A Structural Model

    ERIC Educational Resources Information Center

    Balkis, Murat; Duru, Erdinc; Bulus, Mustafa

    2013-01-01

    The purpose of this study was to investigate the relations between academic rational/irrational beliefs, academic procrastination, and time preferences to study for exams and academic achievement by using the structural equation model. The sample consisted of 281 undergraduate students who filled in questionnaires at the 7-week-long summer course.…

  14. Modeling the glass transition of amorphous networks for shape-memory behavior

    NASA Astrophysics Data System (ADS)

    Xiao, Rui; Choi, Jinwoo; Lakhera, Nishant; Yakacki, Christopher M.; Frick, Carl P.; Nguyen, Thao D.

    2013-07-01

    In this paper, a thermomechanical constitutive model was developed for the time-dependent behaviors of the glass transition of amorphous networks. The model used multiple discrete relaxation processes to describe the distribution of relaxation times for stress relaxation, structural relaxation, and stress-activated viscous flow. A non-equilibrium thermodynamic framework based on the fictive temperature was introduced to demonstrate the thermodynamic consistency of the constitutive theory. Experimental and theoretical methods were developed to determine the parameters describing the distribution of stress and structural relaxation times and the dependence of the relaxation times on temperature, structure, and driving stress. The model was applied to study the effects of deformation temperatures and physical aging on the shape-memory behavior of amorphous networks. The model was able to reproduce important features of the partially constrained recovery response observed in experiments. Specifically, the model demonstrated a strain-recovery overshoot for cases programmed below Tg and subjected to a constant mechanical load. This phenomenon was not observed for materials programmed above Tg. Physical aging, in which the material was annealed for an extended period of time below Tg, shifted the activation of strain recovery to higher temperatures and increased significantly the initial recovery rate. For fixed-strain recovery, the model showed a larger overshoot in the stress response for cases programmed below Tg, which was consistent with previous experimental observations. Altogether, this work demonstrates how an understanding of the time-dependent behaviors of the glass transition can be used to tailor the temperature and deformation history of the shape-memory programming process to achieve more complex shape recovery pathways, faster recovery responses, and larger activation stresses.

  15. The applicability of turbulence models to aerodynamic and propulsion flowfields at McDonnell-Douglas Aerospace

    NASA Technical Reports Server (NTRS)

    Kral, Linda D.; Ladd, John A.; Mani, Mori

    1995-01-01

    The objective of this viewgraph presentation is to evaluate turbulence models for integrated aircraft components such as the forebody, wing, inlet, diffuser, nozzle, and afterbody. The one-equation models have replaced the algebraic models as the baseline turbulence models. The Spalart-Allmaras one-equation model consistently performs better than the Baldwin-Barth model, particularly in the log layer and free shear layers. Also, the Spalart-Allmaras model is not grid-dependent like the Baldwin-Barth model. No general turbulence model exists for all engineering applications. The Spalart-Allmaras one-equation model and the Chien k-epsilon model are the preferred turbulence models. Although the two-equation models often better predict the flow field, they may take from two to five times the CPU time. Future directions are further benchmarking of the Menter blended k-omega/k-epsilon model and algorithmic improvements to reduce the CPU time of the two-equation models.

  16. Efficient full-chip SRAF placement using machine learning for best accuracy and improved consistency

    NASA Astrophysics Data System (ADS)

    Wang, Shibing; Baron, Stanislas; Kachwala, Nishrin; Kallingal, Chidam; Sun, Dezheng; Shu, Vincent; Fong, Weichun; Li, Zero; Elsaid, Ahmad; Gao, Jin-Wei; Su, Jing; Ser, Jung-Hoon; Zhang, Quan; Chen, Been-Der; Howell, Rafael; Hsu, Stephen; Luo, Larry; Zou, Yi; Zhang, Gary; Lu, Yen-Wen; Cao, Yu

    2018-03-01

    Various computational approaches, from rule-based to model-based methods, exist to place Sub-Resolution Assist Features (SRAF) in order to increase the process window for lithography. Each method has its advantages and drawbacks, and typically requires the user to make a trade-off between time of development, accuracy, consistency and cycle time. Rule-based methods, used since the 90 nm node, require long development time and struggle to achieve good process window performance for complex patterns. Heuristically driven, their development is often iterative and involves significant engineering time from multiple disciplines (Litho, OPC and DTCO). Model-based approaches have been widely adopted since the 20 nm node. While the development of model-driven placement methods is relatively straightforward, they often become computationally expensive when high accuracy is required. Furthermore, these methods tend to yield less consistent SRAFs due to the nature of the approach: they rely on a model which is sensitive to the pattern placement on the native simulation grid, and can be impacted by related grid-dependency effects. Those undesirable effects tend to become stronger when more iterations or complexity are needed in the algorithm to achieve the required accuracy. ASML Brion has developed a new SRAF placement technique on the Tachyon platform that is assisted by machine learning and significantly improves the accuracy of full-chip SRAF placement while keeping consistency and runtime under control. A Deep Convolutional Neural Network (DCNN) is trained using the target wafer layout and corresponding Continuous Transmission Mask (CTM) images. These CTM images have been fully optimized using the Tachyon inverse mask optimization engine. The neural-network-generated SRAF guidance map is then used to place SRAF on full-chip. This is different from our existing full-chip MB-SRAF approach, which utilizes an SRAF guidance map (SGM) of mask sensitivity to improve the contrast of the optical image at the target pattern edges. In this paper, we demonstrate that machine learning assisted SRAF placement can achieve a superior process window compared to the SGM model-based SRAF method, while keeping the full-chip runtime affordable and maintaining consistency of SRAF placement. We describe the current status of this machine learning assisted SRAF technique, demonstrate its application to full-chip mask synthesis, and discuss how it can extend the computational lithography roadmap.

  17. Examining Readability Estimates' Predictions of Students' Oral Reading Rate: Spache, Lexile, and Forcast

    ERIC Educational Resources Information Center

    Ardoin, Scott P.; Williams, Jessica C.; Christ, Theodore J.; Klubnik, Cynthia; Wellborn, Claire

    2010-01-01

    Beyond reliability and validity, measures used to model student growth must consist of multiple probes that are equivalent in level of difficulty to establish consistent measurement conditions across time. Although existing evidence supports the reliability of curriculum-based measurement in reading (CBMR), few studies have empirically evaluated…

  18. A consistent modelling methodology for secondary settling tanks: a reliable numerical method.

    PubMed

    Bürger, Raimund; Diehl, Stefan; Farås, Sebastian; Nopens, Ingmar; Torfs, Elena

    2013-01-01

    The consistent modelling methodology for secondary settling tanks (SSTs) leads to a partial differential equation (PDE) of nonlinear convection-diffusion type as a one-dimensional model for the solids concentration as a function of depth and time. This PDE includes a flux that depends discontinuously on spatial position modelling hindered settling and bulk flows, a singular source term describing the feed mechanism, a degenerating term accounting for sediment compressibility, and a dispersion term for turbulence. In addition, the solution itself is discontinuous. A consistent, reliable and robust numerical method that properly handles these difficulties is presented. Many constitutive relations for hindered settling, compression and dispersion can be used within the model, allowing the user to switch on and off effects of interest depending on the modelling goal as well as investigate the suitability of certain constitutive expressions. Simulations show the effect of the dispersion term on effluent suspended solids and total sludge mass in the SST. The focus is on correct implementation whereas calibration and validation are not pursued.
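
    A minimal numerical sketch in the spirit of the consistent methodology, reduced to the hindered-settling (batch) special case with an exact Godunov flux; the constitutive constants are illustrative, and the compression, dispersion, and feed terms of the full SST model are omitted.

```python
import numpy as np

v0, rh, u_max = 2.8e-3, 5.0, 30.0           # illustrative constitutive constants

def f(u):
    return v0 * u * np.exp(-rh * u / u_max)  # hindered-settling flux (Vesilind type)

def godunov_flux(ul, ur):
    # Exact Godunov flux for a scalar conservation law u_t + f(u)_z = 0.
    u = np.linspace(min(ul, ur), max(ul, ur), 50)
    return f(u).min() if ul <= ur else f(u).max()

nz, dz, dt = 100, 0.01, 1.0                  # 1-m column; dt respects the CFL bound
u = np.full(nz, 3.0)                         # uniform initial concentration (kg/m^3)
for _ in range(600):
    F = np.zeros(nz + 1)                     # zero flux through top and bottom walls
    for i in range(1, nz):
        F[i] = godunov_flux(u[i - 1], u[i])
    u -= dt / dz * (F[1:] - F[:-1])          # explicit finite-volume update
```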

  19. A sequential sampling account of response bias and speed-accuracy tradeoffs in a conflict detection task.

    PubMed

    Vuckovic, Anita; Kwantes, Peter J; Humphreys, Michael; Neal, Andrew

    2014-03-01

    Signal Detection Theory (SDT; Green & Swets, 1966) is a popular tool for understanding decision making. However, it does not account for the time taken to make a decision, nor why response bias might change over time. Sequential sampling models provide a way of accounting for speed-accuracy trade-offs and response bias shifts. In this study, we test the validity of a sequential sampling model of conflict detection in a simulated air traffic control task by assessing whether two of its key parameters respond to experimental manipulations in a theoretically consistent way. Through experimental instructions, we manipulated participants' response bias and the relative speed or accuracy of their responses. The sequential sampling model was able to replicate the trends in the conflict responses as well as response time across all conditions. Consistent with our predictions, manipulating response bias was associated primarily with changes in the model's Criterion parameter, whereas manipulating speed-accuracy instructions was associated with changes in the Threshold parameter. The success of the model in replicating the human data suggests we can use the parameters of the model to gain an insight into the underlying response bias and speed-accuracy preferences common to dynamic decision-making tasks. © 2013 American Psychological Association
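
    A toy random-walk version of such a sequential sampling model shows how the Criterion (starting point) and Threshold parameters enter; the implementation details below are assumptions, not the authors' exact model.

```python
import numpy as np

rng = np.random.default_rng(0)

def trial(drift, threshold, criterion, noise=1.0, dt=1e-3):
    """Accumulate noisy evidence from 'criterion' until +/-'threshold' is hit."""
    x, t = criterion, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return ("conflict" if x > 0 else "no conflict", t)

# Liberal response bias (criterion > 0) with speed emphasis (low threshold):
print(trial(drift=0.5, threshold=0.8, criterion=0.2))
```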

  20. Gravitational lens modelling in a citizen science context

    NASA Astrophysics Data System (ADS)

    Küng, Rafael; Saha, Prasenjit; More, Anupreeta; Baeten, Elisabeth; Coles, Jonathan; Cornen, Claude; Macmillan, Christine; Marshall, Phil; More, Surhud; Odermatt, Jonas; Verma, Aprajita; Wilcox, Julianne K.

    2015-03-01

    We develop a method to enable collaborative modelling of gravitational lenses and lens candidates, that could be used by non-professional lens enthusiasts. It uses an existing free-form modelling program (GLASS), but enables the input to this code to be provided in a novel way, via a user-generated diagram that is essentially a sketch of an arrival-time surface. We report on an implementation of this method, SpaghettiLens, which has been tested in a modelling challenge using 29 simulated lenses drawn from a larger set created for the Space Warps citizen science strong lens search. We find that volunteers from this online community asserted the image parities and time ordering consistently in some lenses, but made errors in other lenses depending on the image morphology. While errors in image parity and time ordering lead to large errors in the mass distribution, the enclosed mass was found to be more robust: the model-derived Einstein radii found by the volunteers were consistent with those produced by one of the professional team, suggesting that given the appropriate tools, gravitational lens modelling is a data analysis activity that can be crowd-sourced to good effect. Ideas for improvement are discussed; these include (a) overcoming the tendency of the models to be shallower than the correct answer in test cases, leading to systematic overestimation of the Einstein radius by 10 per cent at present, and (b) detailed modelling of arcs.

  1. Tool wear modeling using abductive networks

    NASA Astrophysics Data System (ADS)

    Masory, Oren

    1992-09-01

    A tool wear model based on Abductive Networks, which consist of a network of 'polynomial' nodes, is described. The model relates the cutting parameters, components of the cutting force, and machining time to flank wear. Thus, real-time measurements of the cutting force can be used to monitor the machining process. The model is obtained by a training process in which the connectivity between the network's nodes and the polynomial coefficients of each node are determined by optimizing a performance criterion. Actual wear measurements of coated and uncoated carbide inserts were used for training and evaluating the established model.
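
    A single polynomial node of the kind composed in such abductive (GMDH-style) networks can be sketched as a least-squares quadratic fit; the input names and data below are synthetic illustrations, not the paper's measurements.

```python
import numpy as np

def fit_poly_node(x1, x2, y):
    """Quadratic 'polynomial node': y ~ c0 + c1*x1 + c2*x2 + c3*x1*x2 + c4*x1^2 + c5*x2^2."""
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# Illustrative inputs: a cutting force component and machining time -> flank wear.
rng = np.random.default_rng(0)
force = rng.uniform(100.0, 300.0, 100)     # N (synthetic)
duration = rng.uniform(0.0, 30.0, 100)     # min (synthetic)
wear = 1e-4 * force * duration**0.5 + 0.005 * rng.normal(size=100)
print(fit_poly_node(force, duration, wear))
```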

  2. Indiana Emergent Bilingual Student Time to Reclassification: A Survival Analysis

    ERIC Educational Resources Information Center

    Burke, April M.; Morita-Mullaney, Trish; Singh, Malkeet

    2016-01-01

    In this study, we employed a discrete-time survival analysis model to examine Indiana emergent bilingual time to reclassification as fluent English proficient. The data consisted of five years of statewide English language proficiency scores. Indiana has a large and rapidly growing Spanish-speaking emergent bilingual population, and these students…

  3. Uniform California earthquake rupture forecast, version 2 (UCERF 2)

    USGS Publications Warehouse

    Field, E.H.; Dawson, T.E.; Felzer, K.R.; Frankel, A.D.; Gupta, V.; Jordan, T.H.; Parsons, T.; Petersen, M.D.; Stein, R.S.; Weldon, R.J.; Wills, C.J.

    2009-01-01

    The 2007 Working Group on California Earthquake Probabilities (WGCEP, 2007) presents the Uniform California Earthquake Rupture Forecast, Version 2 (UCERF 2). This model comprises a time-independent (Poisson-process) earthquake rate model, developed jointly with the National Seismic Hazard Mapping Program, and a time-dependent earthquake-probability model, based on recent earthquake rates and stress-renewal statistics conditioned on the date of last event. The models were developed from updated statewide earthquake catalogs and fault deformation databases using a uniform methodology across all regions and implemented in the modular, extensible Open Seismic Hazard Analysis framework. The rate model satisfies integrating measures of deformation across the plate-boundary zone and is consistent with historical seismicity data. An overprediction of earthquake rates found at intermediate magnitudes (6.5 ≤ M ≤ 7.0) in previous models has been reduced to within the 95% confidence bounds of the historical earthquake catalog. A logic tree with 480 branches represents the epistemic uncertainties of the full time-dependent model. The mean UCERF 2 time-dependent probability of one or more M ≥ 6.7 earthquakes in the California region during the next 30 yr is 99.7%; this probability decreases to 46% for M ≥ 7.5 and to 4.5% for M ≥ 8.0. These probabilities do not include the Cascadia subduction zone, largely north of California, for which the estimated 30 yr, M ≥ 8.0 time-dependent probability is 10%. The M ≥ 6.7 probabilities on major strike-slip faults are consistent with the WGCEP (2003) study in the San Francisco Bay Area and the WGCEP (1995) study in southern California, except for significantly lower estimates along the San Jacinto and Elsinore faults, owing to provisions for larger multisegment ruptures. Important model limitations are discussed.
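
    For the time-independent part of such a forecast, the probability of one or more events follows directly from the Poisson assumption. A minimal sketch with an illustrative rate (not a UCERF 2 value):

```python
import numpy as np

# Poisson probability of one or more events in an exposure window:
# P = 1 - exp(-rate * T).
rate_per_year = 0.02        # long-term event rate on a source (illustrative)
T = 30.0                    # forecast window (years)
print(1.0 - np.exp(-rate_per_year * T))  # ~0.45
```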

  4. The Drainage of Thin, Vertical, Model Polyurethane Liquid Films

    NASA Astrophysics Data System (ADS)

    Snow, Steven; Pernisz, Udo; Braun, Richard; Naire, Shailesh

    1999-11-01

    We have successfully measured the drainage rate of thin, vertically-aligned, liquid films prepared from model polyurethane foam formulations. The pattern of interference fringes in these films was consistent with a wedge-shaped film profile. The time evolution of this wedge shape (the "collapsing wedge") obeyed a power law relationship between fringe density s and time t of s = k t^m. Experimentally, m ranged from -0.47 to -0.92. The lower bound for m represented a case where the surface viscosity of the film was very high (a "rigid" surface). Theoretical modeling of this case yielded m = -0.5, in excellent agreement with experiment. Instantaneous film drainage rate (dV/dt) could be extracted from the "Collapsing Wedge" model. As expected, dV/dt scaled inversely with bulk viscosity. As surfactant concentration was varied at constant bulk viscosity, dV/dt passed through a maximum value, consistent with a model where the rigidity of the surface was a function of both the intensity of surface tension gradients and the surface viscosity of the film. The influence of surface viscosity on dV/dt was also modeled theoretically.
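
    The power law s = k t^m is conveniently fitted by linear regression in log-log space. A sketch on synthetic fringe-density data generated near the rigid-surface exponent:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(10.0, 300.0, 30)                        # time (s)
s = 5.0 * t**-0.5 * np.exp(0.02 * rng.normal(size=t.size))  # fringe density

m, log_k = np.polyfit(np.log(t), np.log(s), 1)          # fit log s = m*log t + log k
print("exponent m = %.2f (rigid-surface theory: -0.5)" % m)
```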

  5. A simple model for the dependence on local detonation speed of the product entropy

    NASA Astrophysics Data System (ADS)

    Hetherington, David C.; Whitworth, Nicholas J.

    2012-03-01

    The generation of a burn time field as a pre-processing step ahead of a hydrocode calculation has been mostly upgraded in the explosives modelling community from the historical model of single-speed programmed burn to DSD/WBL (Detonation Shock Dynamics / Whitham-Bdzil-Lambourn). The problem with this advance is that the previously conventional approach to the hydrodynamic stage of the model results in the entropy of the detonation products (s) having the wrong correlation with detonation speed (D). Instead of being higher where D is lower, the conventional method leads to s being lower where D is lower, resulting in a completely fictitious enhancement of available energy where the burn is degraded! A technique is described which removes this deficiency of the historical model when used with a DSD-generated burn time field. By treating the conventional JWL equation as a semi-empirical expression for the local expansion isentrope, and constraining the local parameter set for consistency with D, it is possible to obtain the two desirable outcomes that the model of the detonation wave is internally consistent, and s is realistically correlated with D.
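
    The semi-empirical isentrope at the heart of the technique is the standard JWL form. Below is a minimal sketch evaluating it; the parameter values are generic illustrative numbers, not a calibrated set constrained to a local D.

```python
import numpy as np

def jwl_isentrope(V, A, B, C, R1, R2, omega):
    """JWL principal-isentrope pressure vs relative volume V:
    p_s(V) = A*exp(-R1*V) + B*exp(-R2*V) + C*V**-(1 + omega)."""
    return A * np.exp(-R1 * V) + B * np.exp(-R2 * V) + C * V**-(1.0 + omega)

V = np.linspace(0.7, 7.0, 100)
p = jwl_isentrope(V, A=600e9, B=13e9, C=1.0e9, R1=4.5, R2=1.4, omega=0.3)  # Pa
```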

  6. A Simple Model for the Dependence on Local Detonation Speed (D) of the Product Entropy (S)

    NASA Astrophysics Data System (ADS)

    Hetherington, David

    2011-06-01

    The generation of a burn time field as a pre-processing step ahead of a hydrocode calculation has been mostly upgraded in the explosives modelling community from the historical model of single-speed programmed burn to DSD. However, with this advance has come the problem that the previously conventional approach to the hydrodynamic stage of the model results in S having the wrong correlation with D. Instead of being higher where the detonation speed is lower, i.e. where reaction occurs at lower compression, the conventional method leads to S being lower where D is lower, resulting in a completely fictitious enhancement of available energy where the burn is degraded! A technique is described which removes this deficiency of the historical model when used with a DSD-generated burn time field. By treating the conventional JWL equation as a semi-empirical expression for the local expansion isentrope, and constraining the local parameter set for consistency with D, it is possible to obtain the two desirable outcomes that the model of the detonation wave is internally consistent, and S is realistically correlated with D.

  7. Time Variable Gravity modeling for Precise Orbits Across the TOPEX/Poseidon, Jason-1 and Jason-2 Missions

    NASA Technical Reports Server (NTRS)

    Zelensky, Nikita P.; Lemoine, Frank G.; Chinn, Douglas; Beckley, Brian D.; Melachroinos, Stavros; Rowlands, David D.; Luthcke, Scott B.

    2011-01-01

    Modeling of Time Variable Gravity (TVG) is believed to constitute one of the largest remaining sources of orbit error for altimeter satellite POD. The GSFC operational TVG model consists of forward modeling the atmospheric gravity using ECMWF 6-hour pressure data, a GRACE-derived 20x20 annual field to account for changes in hydrology and ocean water mass, and linear rates for C20, C30, C40 based on 17 years of SLR data analysis (IERS 2003) using EIGEN-GL04S1 (a GRACE+Lageos-based geopotential solution). Although the GSFC operational model can be applied from 1987, there may be long-term variations not captured by these linear models, and more importantly the linear models may not be consistent with more recent surface mass trends due to global climate change. We have evaluated the impact of TVG in two different ways: (1) by using the more recent EIGEN-6S gravity model developed by the GFZ/GRGS team, which consists of annual, semi-annual and secular changes in the coefficients to 50x50 determined over 8(?) years of GRACE+Lageos+GOCE data (2003-200?); (2) by applying 4x4 solutions developed from a multi-satellite SLR+DORIS solution based on GGM03S that span the period from 1993 to 2011. We have evaluated the recently released EIGEN-6S static and time-varying gravity field for Jason-2 (J2), Jason-1 (J1), and TOPEX/Poseidon (TP) Precise Orbit Determination (POD) spanning 1993-2011. Although EIGEN-6S shows significant improvement for J2 POD spanning 2008-2011, it also shows significant degradation for TP POD from 1992. The GSFC 4x4 SLR+DORIS-based time series spans 1993 to mid-2011 and shows promise for POD. We evaluate the performance of the different TVG models based on analysis of tracking data residuals, use of independent data such as altimeter crossovers, and analysis of differences between internally generated and externally generated orbits.
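
    The structure of such a time-variable coefficient model (secular plus annual terms per Stokes coefficient) can be sketched in a few lines; the numerical values below are placeholders, not the GSFC or EIGEN-6S values:

        import numpy as np

        def coefficient(t, c0, trend, cos_amp, sin_amp, t0=2000.0):
            """Evaluate one Stokes coefficient with secular plus annual terms
            (t in decimal years). All numerical inputs here are placeholders."""
            dt = t - t0
            return (c0 + trend * dt
                    + cos_amp * np.cos(2.0 * np.pi * dt)
                    + sin_amp * np.sin(2.0 * np.pi * dt))

        t = np.arange(1993.0, 2011.0, 1.0 / 12.0)  # monthly epochs over the study span
        c20 = coefficient(t, -4.84165e-4, 1.2e-11, 3.0e-11, -2.0e-11)
        print(c20[0], c20[-1])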

  8. Small area population forecasting: some experience with British models.

    PubMed

    Openshaw, S; Van Der Knaap, G A

    1983-01-01

    This study is concerned with the evaluation of various models, including time-series forecasts, extrapolation, and projection procedures, that have been developed to prepare population forecasts for planning purposes. These models are evaluated using data for the Netherlands. "As part of a research project at the Erasmus University, space-time population data has been assembled in a geographically consistent way for the period 1950-1979. These population time series are of sufficient length for the first 20 years to be used to build models and then evaluate the performance of the model for the next 10 years. Some 154 different forecasting models for 832 municipalities have been evaluated. It would appear that the best forecasts are likely to be provided by either a Holt-Winters model, or a ratio-correction model, or a low order exponential-smoothing model." excerpt
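
    For reference, a Holt-type exponential-smoothing forecast of the kind evaluated here takes a few lines with statsmodels; the population series below is synthetic, not the Dutch data:

        import numpy as np
        from statsmodels.tsa.holtwinters import ExponentialSmoothing

        rng = np.random.default_rng(0)
        # Hypothetical annual population of one municipality, 1950-1969 (build period).
        population = 10_000 + 150 * np.arange(20) + rng.normal(0, 80, 20)

        # Additive-trend Holt model (a Holt-Winters variant without seasonality,
        # appropriate for annual data); forecast the next 10 years, 1970-1979.
        fit = ExponentialSmoothing(population, trend="add").fit()
        print(fit.forecast(10))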

  9. On the Convenience of Using the Complete Linearization Method in Modelling the BLR of AGN

    NASA Astrophysics Data System (ADS)

    Patriarchi, P.; Perinotto, M.

    The Complete Linearization Method (Mihalas, 1978) consists in determining the radiation field (at a set of frequency points), atomic level populations, temperature, electron density, etc., by solving the system of radiative transfer, thermal equilibrium, and statistical equilibrium equations simultaneously and self-consistently. Since the system is not linear, it must be solved by iteration after linearization, using a perturbative method starting from an initial guess solution. Of course, the Complete Linearization Method is more time-consuming than simpler approaches. But how great can this disadvantage be in the age of supercomputers? It is possible to approximately evaluate the CPU time needed to run a model by computing the number of multiplications necessary to solve the system.
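
    A back-of-envelope version of that operation count, assuming the usual block-tridiagonal Newton system with dense blocks (a scaling argument only; constants of order unity are ignored):

        def cl_multiplications(nd, nf, nl, nextra=2):
            """Rough multiplication count for one complete-linearization iteration:
            a block-tridiagonal solve over nd depth points with dense blocks of
            order n = nf + nl + nextra costs of order nd * n**3 multiplications."""
            n = nf + nl + nextra
            return nd * n ** 3

        # e.g. 50 depth points, 30 frequency points, 10 atomic levels (illustrative)
        ops = cl_multiplications(50, 30, 10)
        print(f"~{ops:.1e} multiplications per iteration")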

  10. Time-dependent density functional theory (TD-DFT) coupled with reference interaction site model self-consistent field explicitly including spatial electron density distribution (RISM-SCF-SEDD)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yokogawa, D., E-mail: d.yokogawa@chem.nagoya-u.ac.jp; Institute of Transformative Bio-Molecules

    2016-09-07

    The theoretical design of bright bio-imaging molecules is a rapidly progressing field. However, because of the demands of system size and computational accuracy, the number of theoretical studies is, to our knowledge, limited. To overcome these difficulties, we developed a new method based on the reference interaction site model self-consistent field explicitly including spatial electron density distribution (RISM-SCF-SEDD) and time-dependent density functional theory (TD-DFT). We applied it to calculations of indole and 5-cyanoindole in the ground and excited states, in gas and solution phases. The changes in the optimized geometries were clearly explained with resonance structures, and the Stokes shift was correctly reproduced.

  11. Object selection costs in visual working memory: A diffusion model analysis of the focus of attention.

    PubMed

    Sewell, David K; Lilburn, Simon D; Smith, Philip L

    2016-11-01

    A central question in working memory research concerns the degree to which information in working memory is accessible to other cognitive processes (e.g., decision-making). Theories assuming that the focus of attention can only store a single object at a time require the focus to orient to a target representation before further processing can occur. The need to orient the focus of attention implies that single-object accounts typically predict response time costs associated with object selection even when working memory is not full (i.e., memory load is less than 4 items). For other theories that assume storage of multiple items in the focus of attention, predictions depend on specific assumptions about the way resources are allocated among items held in the focus, and how this affects the time course of retrieval of items from the focus. These broad theoretical accounts have been difficult to distinguish because conventional analyses fail to separate components of empirical response times related to decision-making from components related to selection and retrieval processes associated with accessing information in working memory. To better distinguish these response time components from one another, we analyze data from a probed visual working memory task using extensions of the diffusion decision model. Analysis of model parameters revealed that increases in memory load resulted in (a) reductions in the quality of the underlying stimulus representations in a manner consistent with a sample size model of visual working memory capacity and (b) systematic increases in the time needed to selectively access a probed representation in memory. The results are consistent with single-object theories of the focus of attention. The results are also consistent with a subset of theories that assume a multiobject focus of attention in which resource allocation diminishes both the quality and accessibility of the underlying representations. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
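
    A minimal simulation of the diffusion decision model used in such analyses (the parameter values are illustrative, not the fitted ones; the non-decision time ter stands in for selection and encoding components):

        import numpy as np

        def diffusion_trial(drift, threshold=1.0, ter=0.3, dt=1e-3, rng=None):
            """Simulate one diffusion-decision trial: evidence drifts toward one of
            two boundaries; response time is decision time plus non-decision time."""
            if rng is None:
                rng = np.random.default_rng()
            x, t = 0.0, 0.0
            while abs(x) < threshold:
                x += drift * dt + rng.normal(0.0, np.sqrt(dt))
                t += dt
            return t + ter, x > 0      # (response time, upper-boundary response?)

        rng = np.random.default_rng(1)
        # Lower drift mimics poorer representation quality at higher memory loads;
        # larger ter mimics slower selective access to the probed item.
        rts = [diffusion_trial(0.8, ter=0.35, rng=rng)[0] for _ in range(200)]
        print(np.mean(rts))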

  12. Assessing Videogrammetry for Static Aeroelastic Testing of a Wind-Tunnel Model

    NASA Technical Reports Server (NTRS)

    Spain, Charles V.; Heeg, Jennifer; Ivanco, Thomas G.; Barrows, Danny A.; Florance, James R.; Burner, Alpheus W.; DeMoss, Joshua; Lively, Peter S.

    2004-01-01

    The Videogrammetric Model Deformation (VMD) technique, developed at NASA Langley Research Center, was recently used to measure displacements and local surface angle changes on a static aeroelastic wind-tunnel model. The results were assessed for consistency, accuracy and usefulness. Vertical displacement measurements and surface angular deflections (derived from vertical displacements) taken at no-wind/no-load conditions were analyzed. For accuracy assessment, angular measurements were compared to those from a highly accurate accelerometer. Shewhart's Variables Control Charts were used in the assessment of consistency and uncertainty. Some bad data points were discovered, and it is shown that the measurement results at certain targets were more consistent than at other targets. Physical explanations for this lack of consistency have not been determined. However, overall the measurements were sufficiently accurate to be very useful in monitoring wind-tunnel model aeroelastic deformation and determining flexible stability and control derivatives. After a structural model component failed during a highly loaded condition, analysis of VMD data clearly indicated progressive structural deterioration as the wind-tunnel condition where failure occurred was approached. As a result, subsequent testing successfully incorporated near-real-time monitoring of VMD data in order to ensure structural integrity. The potential for higher levels of consistency and accuracy through the use of statistical quality control practices is discussed, and such practices are recommended for future applications.
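
    A sketch of the individuals-chart variant of Shewhart limits (the angle data are hypothetical, and the study's exact charting choices are not reproduced here):

        import numpy as np

        def shewhart_limits(samples):
            """Individuals control chart: centre line and 3-sigma limits estimated
            from the average moving range (sigma ~= MRbar / 1.128)."""
            x = np.asarray(samples, dtype=float)
            mrbar = np.mean(np.abs(np.diff(x)))
            sigma = mrbar / 1.128
            centre = x.mean()
            return centre - 3 * sigma, centre, centre + 3 * sigma

        # Limits from in-control no-wind angle readings at one target (deg, invented).
        baseline = [0.02, 0.01, 0.03, 0.02, 0.02, 0.01, 0.02]
        lcl, cl, ucl = shewhart_limits(baseline)
        print(not (lcl <= 0.10 <= ucl))   # a later 0.10 deg reading plots out of control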

  13. Terminator field-aligned current system: A new finding from model-assimilated data set (MADS)

    NASA Astrophysics Data System (ADS)

    Zhu, L.; Schunk, R. W.; Scherliess, L.; Sojka, J. J.; Gardner, L. C.; Eccles, J. V.; Rice, D.

    2013-12-01

    Physics-based data assimilation models have been recognized by the space science community as the most accurate approach to specify and forecast the space weather of the solar-terrestrial environment. The model-assimilated data sets (MADS) produced by these models constitute an internally consistent time series of global three-dimensional fields whose accuracy can be estimated. Because of its internal physical consistency and its complete description of the state of the global system, the MADS has also been a powerful tool to identify systematic errors in measurements, reveal missing physics in physical models, and discover important dynamical physical processes that are inadequately observed or missed by measurements due to observational limitations. In recent years, we developed a data assimilation model for high-latitude ionospheric plasma dynamics and electrodynamics. With a set of physical models, an ensemble Kalman filter, and the ingestion of data from multiple observations, the data assimilation model can produce a self-consistent time series of complete descriptions of the global high-latitude ionosphere, which includes the convection electric field, horizontal and field-aligned currents, conductivity, as well as 3-D plasma densities and temperatures. In this presentation, we will show a new field-aligned current system discovered from the analysis of the MADS produced by our data assimilation model. This new current system appears and develops near the ionospheric terminator. The dynamical features of this current system will be described and its connection to the active role of the ionosphere in the M-I coupling will be discussed.
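
    A toy stochastic ensemble Kalman filter analysis step of the kind underlying such assimilation systems (dimensions, operators, and values are illustrative only, not the model's actual state vector):

        import numpy as np

        def enkf_update(ensemble, obs, obs_op, obs_var, rng):
            """Stochastic EnKF analysis step: each member is nudged toward a
            perturbed observation by the ensemble-estimated Kalman gain.
            ensemble: (n_members, n_state); obs_op: (n_obs, n_state), linear."""
            X = ensemble
            A = X - X.mean(axis=0)                        # state anomalies
            HX = X @ obs_op.T
            HA = HX - HX.mean(axis=0)                     # observed-space anomalies
            n = X.shape[0]
            Pyy = HA.T @ HA / (n - 1) + obs_var * np.eye(obs_op.shape[0])
            Pxy = A.T @ HA / (n - 1)
            K = Pxy @ np.linalg.inv(Pyy)                  # Kalman gain
            y_pert = obs + rng.normal(0, np.sqrt(obs_var), (n, obs_op.shape[0]))
            return X + (y_pert - HX) @ K.T

        rng = np.random.default_rng(2)
        ens = rng.normal(size=(50, 4))                    # toy 4-variable state
        H = np.array([[1.0, 0.0, 0.0, 0.0]])              # observe variable 1 only
        print(enkf_update(ens, np.array([0.5]), H, 0.1, rng).mean(axis=0))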

  14. HyDE Framework for Stochastic and Hybrid Model-Based Diagnosis

    NASA Technical Reports Server (NTRS)

    Narasimhan, Sriram; Brownston, Lee

    2012-01-01

    Hybrid Diagnosis Engine (HyDE) is a general framework for stochastic and hybrid model-based diagnosis that offers flexibility to the diagnosis application designer. The HyDE architecture supports the use of multiple modeling paradigms at the component and system level. Several alternative algorithms are available for the various steps in diagnostic reasoning. This approach is extensible, with support for the addition of new modeling paradigms as well as diagnostic reasoning algorithms for existing or new modeling paradigms. HyDE is a general framework for stochastic hybrid model-based diagnosis of discrete faults; that is, spontaneous changes in operating modes of components. HyDE combines ideas from consistency-based and stochastic approaches to model-based diagnosis using discrete and continuous models to create a flexible and extensible architecture for stochastic and hybrid diagnosis. HyDE supports the use of multiple paradigms and is extensible to support new paradigms. HyDE generates candidate diagnoses and checks them for consistency with the observations. It uses hybrid models built by the users and sensor data from the system to deduce the state of the system over time, including changes in state indicative of faults. At each time step when observations are available, HyDE checks each existing candidate for continued consistency with the new observations. If the candidate is consistent, it continues to remain in the candidate set. If it is not consistent, then the information about the inconsistency is used to generate successor candidates while discarding the candidate that was inconsistent. The models used by HyDE are similar to simulation models. They describe the expected behavior of the system under nominal and fault conditions. The model can be constructed in modular and hierarchical fashion by building component/subsystem models (which may themselves contain component/subsystem models) and linking them through shared variables/parameters. The component model is expressed as operating modes of the component and conditions for transitions between these various modes. Faults are modeled as transitions whose conditions for transitions are unknown (and have to be inferred through the reasoning process). Finally, the behavior of the components is expressed as a set of variables/parameters and relations governing the interaction between the variables. The hybrid nature of the systems being modeled is captured by a combination of the above transitional model and behavioral model. Stochasticity is captured as probabilities associated with transitions (indicating the likelihood of that transition being taken), as well as noise on the sensed variables.
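
    The candidate-tracking loop described above can be sketched schematically (this illustrates the idea only; it is not the HyDE API):

        def track_candidates(candidates, consistent, successors):
            """One diagnosis update step, HyDE-style: keep candidates consistent
            with the new observations; replace inconsistent ones with their
            successors. `consistent` and `successors` are caller-supplied."""
            kept = []
            for cand in candidates:
                if consistent(cand):
                    kept.append(cand)
                else:
                    kept.extend(successors(cand))  # branch on possible fault transitions
            return kept

        # Toy usage: candidates are mode labels; 'nominal' becomes inconsistent
        # with the observations and is replaced by two fault hypotheses.
        cands = ["nominal"]
        cands = track_candidates(
            cands,
            consistent=lambda c: c != "nominal",
            successors=lambda c: ["valve_stuck", "sensor_bias"],
        )
        print(cands)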

  15. Consistency of multi-time Dirac equations with general interaction potentials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deckert, Dirk-André, E-mail: deckert@math.lmu.de; Nickel, Lukas, E-mail: nickel@math.lmu.de

    In 1932, Dirac proposed a formulation in terms of multi-time wave functions as a candidate for relativistic many-particle quantum mechanics. A well-known consistency condition that is necessary for the existence of solutions strongly restricts the possible interaction types between the particles. It was conjectured by Petrat and Tumulka that interactions described by multiplication operators are generally excluded by this condition, and they gave a proof of this claim for potentials without spin-coupling. Under suitable assumptions on the differentiability of possible solutions, we show that there are potentials which are admissible and give an explicit example; however, we show that none of them fulfills the physically desirable Poincaré invariance. We conclude that in this sense, Dirac's multi-time formalism does not allow modeling interaction by multiplication operators, and we briefly point out several promising approaches to interacting models that one can pursue instead.

  16. Contributions of Genes and Environment to Developmental Change in Alcohol Use.

    PubMed

    Long, E C; Verhulst, B; Aggen, S H; Kendler, K S; Gillespie, N A

    2017-09-01

    The precise nature of how genetic and environmental risk factors influence changes in alcohol use (AU) over time has not yet been investigated. Therefore, the aim of the present study is to examine the nature of longitudinal changes in these risk factors to AU from mid-adolescence through young adulthood. Using a large sample of male twins, we compared five developmental models that each makes different predictions regarding the longitudinal changes in genetic and environmental risks for AU. The best-fitting model indicated that genetic influences were consistent with a gradual growth in the liability to AU, whereas unique environmental risk factors were consistent with an accumulation of risks across time. These results imply that two distinct processes influence adolescent AU between the ages of 15 and 25. Genetic effects influence baseline levels of AU and rates of change across time, while unique environmental effects are more cumulative.

  17. Centralized Networks to Generate Human Body Motions

    PubMed Central

    Vakulenko, Sergei; Radulescu, Ovidiu; Morozov, Ivan

    2017-01-01

    We consider continuous-time recurrent neural networks as dynamical models for the simulation of human body motions. These networks consist of a few centers and many satellites connected to them. The centers evolve in time as periodical oscillators with different frequencies. The center states define the satellite neurons’ states by a radial basis function (RBF) network. To simulate different motions, we adjust the parameters of the RBF networks. Our network includes a switching module that allows for turning from one motion to another. Simulations show that this model allows us to simulate complicated motions consisting of many different dynamical primitives. We also use the model for learning human body motion from markers’ trajectories. We find that center frequencies can be learned from a small number of markers and can be transferred to other markers, such that our technique seems to be capable of correcting for missing information resulting from sparse control marker settings. PMID:29240694
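
    A minimal sketch of the center-to-satellite mapping (shapes, frequencies, and weights below are illustrative placeholders, not the trained values):

        import numpy as np

        def rbf_satellites(center_states, satellite_centers, weights, width=1.0):
            """Map the center states to satellite neuron states through Gaussian
            radial basis functions."""
            d2 = ((center_states[None, :] - satellite_centers) ** 2).sum(axis=1)
            return weights @ np.exp(-d2 / (2.0 * width ** 2))

        t = 0.7
        centers = np.array([np.sin(2 * np.pi * 1.0 * t),   # two periodic centers
                            np.sin(2 * np.pi * 0.5 * t)])  # with different frequencies
        rng = np.random.default_rng(3)
        rbf_centers = rng.normal(size=(10, 2))   # 10 basis functions in center space
        W = rng.normal(size=(30, 10))            # weights for 30 satellite neurons
        print(rbf_satellites(centers, rbf_centers, W).shape)  # (30,)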

  18. Activated desorption at heterogeneous interfaces and long-time kinetics of hydrocarbon recovery from nanoporous media.

    PubMed

    Lee, Thomas; Bocquet, Lydéric; Coasne, Benoit

    2016-06-21

    Hydrocarbon recovery from unconventional reservoirs (shale gas) is debated due to its environmental impact and uncertainties in its predictability, but a lack of scientific knowledge impedes the proposal of reliable alternatives. The requirement of hydrofracking, fast recovery decay, and ultra-low permeability (inherent to their nanoporosity) are specificities of these reservoirs which challenge existing frameworks. Here we use molecular simulation and statistical models to show that recovery is hampered by interfacial effects at the wet kerogen surface. Recovery is shown to be thermally activated, with an energy barrier modelled from the interface wetting properties. We build a statistical model of the recovery kinetics with a two-regime decline that is consistent with published data: a short-time decay, consistent with a Darcy description, followed by a fast algebraic decay resulting from increasingly unreachable energy barriers. Replacing water by CO2 or propane eliminates the barriers, raising hopes for clean and efficient recovery.
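
    The activated-desorption picture reduces to an Arrhenius rate; a small sketch with placeholder barrier heights (the paper derives the barriers from wetting properties, which is not reproduced here):

        import numpy as np

        kB = 1.380649e-23   # Boltzmann constant, J/K

        def desorption_rate(barrier_eV, T=300.0, attempt=1e12):
            """Thermally activated desorption: rate = attempt * exp(-E / kT).
            Barrier values are placeholders, not the wetting-derived barriers."""
            return attempt * np.exp(-barrier_eV * 1.602e-19 / (kB * T))

        # Replacing water by CO2/propane lowers the interfacial barrier toward zero,
        # so the rate approaches the bare attempt frequency.
        for E in (0.4, 0.1, 0.0):
            print(E, desorption_rate(E))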

  19. Centralized Networks to Generate Human Body Motions.

    PubMed

    Vakulenko, Sergei; Radulescu, Ovidiu; Morozov, Ivan; Weber, Andres

    2017-12-14

    We consider continuous-time recurrent neural networks as dynamical models for the simulation of human body motions. These networks consist of a few centers and many satellites connected to them. The centers evolve in time as periodical oscillators with different frequencies. The center states define the satellite neurons' states by a radial basis function (RBF) network. To simulate different motions, we adjust the parameters of the RBF networks. Our network includes a switching module that allows for turning from one motion to another. Simulations show that this model allows us to simulate complicated motions consisting of many different dynamical primitives. We also use the model for learning human body motion from markers' trajectories. We find that center frequencies can be learned from a small number of markers and can be transferred to other markers, such that our technique seems to be capable of correcting for missing information resulting from sparse control marker settings.

  20. How Structure Shapes Dynamics: Knowledge Development in Wikipedia - A Network Multilevel Modeling Approach

    PubMed Central

    Halatchliyski, Iassen; Cress, Ulrike

    2014-01-01

    Using a longitudinal network analysis approach, we investigate the structural development of the knowledge base of Wikipedia in order to explain the appearance of new knowledge. The data consists of the articles in two adjacent knowledge domains: psychology and education. We analyze the development of networks of knowledge consisting of interlinked articles at seven snapshots from 2006 to 2012 with an interval of one year between them. Longitudinal data on the topological position of each article in the networks is used to model the appearance of new knowledge over time. Thus, the structural dimension of knowledge is related to its dynamics. Using multilevel modeling as well as eigenvector and betweenness measures, we explain the significance of pivotal articles that are either central within one of the knowledge domains or boundary-crossing between the two domains at a given point in time for the future development of new knowledge in the knowledge base. PMID:25365319
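
    Both centrality measures are standard; a toy computation with networkx (the article network below is invented for illustration, not from the Wikipedia dumps):

        import networkx as nx

        # Toy interlinked-article network for one snapshot.
        G = nx.DiGraph([("attention", "memory"), ("memory", "learning"),
                        ("learning", "instruction"), ("instruction", "attention"),
                        ("memory", "instruction")])

        # Centrality of an article within a snapshot (eigenvector) and its
        # boundary-crossing brokerage between domains (betweenness).
        eig = nx.eigenvector_centrality(G, max_iter=1000)
        btw = nx.betweenness_centrality(G)
        print(max(eig, key=eig.get), max(btw, key=btw.get))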

  1. Establishment of a rotor model basis

    NASA Technical Reports Server (NTRS)

    Mcfarland, R. E.

    1982-01-01

    Radial-dimension computations in the RSRA's blade-element model are modified for both the acquisition of extensive baseline data and for real-time simulation use. The baseline data, which are for the evaluation of model changes, use very small increments and are of high quality. The modifications to the real-time simulation model are for accuracy improvement, especially when a minimal number of blade segments is required for real-time synchronization. An accurate technique for handling tip loss in discrete blade models is developed. The mathematical consistency and convergence properties of summation algorithms for blade forces and moments are examined and generalized integration coefficients are applied to equal-annuli midpoint spacing. Rotor conditions identified as 'constrained' and 'balanced' are used and the propagation of error is analyzed.
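
    A sketch of equal-annuli midpoint spacing for blade-segment summation (the segment loading used below is a stand-in, not the RSRA model's force computation):

        import numpy as np

        def equal_annuli_midpoints(R, n):
            """Midpoint radii of n blade segments of equal annulus area on a rotor
            of radius R: the i-th boundary sits at R*sqrt(i/n), and each midpoint
            is the radius splitting its annulus into two equal areas."""
            bounds = R * np.sqrt(np.arange(n + 1) / n)
            return np.sqrt(0.5 * (bounds[:-1] ** 2 + bounds[1:] ** 2))

        # Summation with equal weights, since every annulus has the same area.
        r = equal_annuli_midpoints(1.0, 5)
        dA = np.pi / 5                           # area of each annulus for R = 1
        thrust = np.sum(dA * r ** 2)             # stand-in segment loading ~ r^2
        print(r, thrust)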

  2. Is ET often oversimplified in hydrologic models? Using long records to elucidate unaccounted for controls on ET

    NASA Astrophysics Data System (ADS)

    Kelleher, Christa A.; Shaw, Stephen B.

    2018-02-01

    Recent research has found that hydrologic modeling over decadal time periods often requires time variant model parameters. Most prior work has focused on assessing time variance in model parameters conceptualizing watershed features and functions. In this paper, we assess whether adding a time variant scalar to potential evapotranspiration (PET) can be used in place of time variant parameters. Using the HBV hydrologic model and four different simple but common PET methods (Hamon, Priestly-Taylor, Oudin, and Hargreaves), we simulated 60+ years of daily discharge on four rivers in New York state. Allowing all ten model parameters to vary in time achieved good model fits in terms of daily NSE and long-term water balance. However, allowing single model parameters to vary in time - including a scalar on PET - achieved nearly equivalent model fits across PET methods. Overall, varying a PET scalar in time is likely more physically consistent with known biophysical controls on PET as compared to varying parameters conceptualizing innate watershed properties related to soil properties such as wilting point and field capacity. This work suggests that the seeming need for time variance in innate watershed parameters may be due to overly simple evapotranspiration formulations that do not account for all factors controlling evapotranspiration over long time periods.
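
    For concreteness, the Oudin PET formula with a time-variant multiplicative scalar looks like the following (scalar values are placeholders for the calibrated ones):

        import numpy as np

        def pet_oudin(ra, temp_c):
            """Oudin potential evapotranspiration (mm/day):
            PE = Ra/(lambda*rho) * (T + 5)/100 for T + 5 > 0, else 0,
            with Ra the extraterrestrial radiation in MJ m-2 day-1."""
            pe = ra / 2.45 * (temp_c + 5.0) / 100.0   # 1/(lambda*rho) folded into 1/2.45
            return np.where(temp_c + 5.0 > 0.0, pe, 0.0)

        # Time-variant scalar on PET: one multiplicative parameter per period
        # (the values are invented; the study calibrates them against discharge).
        ra, temp = 30.0, np.array([12.0, 15.0, 18.0])
        scalar = np.array([0.9, 1.0, 1.1])            # e.g. one value per decade
        print(scalar * pet_oudin(ra, temp))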

  3. Evaluation of Thompson-type trend and monthly weather data models for corn yields in Iowa, Illinois, and Indiana

    NASA Technical Reports Server (NTRS)

    French, V. (Principal Investigator)

    1982-01-01

    An evaluation was made of Thompson-type models which use trend terms (as a surrogate for technology) and meteorological variables based on monthly average temperature and total precipitation to forecast and estimate corn yields in Iowa, Illinois, and Indiana. Pooled and unpooled Thompson-type models were compared. Neither was found to be consistently superior to the other. Yield reliability indicators show that the models are of limited use for large-area yield estimation. The models are objective and consistent with scientific knowledge. Timely yield forecasts and estimates can be made during the growing season by using normals or long-range weather forecasts. The models are not costly to operate and are easy to use and understand. The model standard errors of prediction do not provide a useful current measure of modeled yield reliability.
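
    A Thompson-type model is, at heart, an ordinary least-squares regression of yield on a trend term and monthly weather variables; a synthetic-data sketch (all coefficients and data invented):

        import numpy as np

        rng = np.random.default_rng(4)
        years = np.arange(1950, 1980)
        trend = (years - years[0]).astype(float)        # surrogate for technology
        july_temp = rng.normal(24.0, 1.5, years.size)   # monthly average temperature
        july_prec = rng.normal(90.0, 25.0, years.size)  # monthly total precipitation
        yield_obs = (40 + 1.1 * trend - 0.8 * (july_temp - 24)
                     + 0.05 * july_prec + rng.normal(0, 3, years.size))

        # Thompson-type linear model: yield ~ trend + monthly weather terms.
        X = np.column_stack([np.ones_like(trend), trend, july_temp, july_prec])
        coef, *_ = np.linalg.lstsq(X, yield_obs, rcond=None)
        print(coef)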

  4. Performance evaluation of the croissant production line with reparable machines

    NASA Astrophysics Data System (ADS)

    Tsarouhas, Panagiotis H.

    2015-03-01

    In this study, analytical probability models were developed for a bufferless automated serial production system that consists of n machines in series with a common transfer mechanism and control system. Both the time to failure and the time to repair a failure are assumed to follow exponential distributions. Applying these models, the effect of system parameters on system performance in an actual croissant production line was studied. The production line consists of six workstations with different numbers of reparable machines in series. Mathematical models of the croissant production line have been developed using a Markov process. The strength of this study is in the classification of the whole system into states representing failures of different machines. Failure and repair data from the actual production environment have been used to estimate reliability and maintainability for each machine, each workstation, and the entire line, based on the analytical models. The analysis provides a useful insight into the system's behaviour, helps to find design-inherent faults, and suggests optimal modifications to upgrade the system and improve its performance.
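
    As a back-of-envelope check (not the paper's full Markov state model, which tracks the joint state of all machines), the steady-state availability of a series line with independent exponential failures and repairs is the product of machine availabilities:

        # The line is up only when every machine is up, so under independence
        # A_line = prod(mu_i / (lambda_i + mu_i)). All rates below are illustrative.
        failure = [0.01, 0.02, 0.015, 0.01, 0.02, 0.01]   # per hour, six workstations
        repair = [0.5, 0.4, 0.5, 0.6, 0.4, 0.5]           # per hour

        a_line = 1.0
        for lam, mu in zip(failure, repair):
            a_line *= mu / (lam + mu)
        print(f"line availability = {a_line:.3f}")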

  5. Linking time-series of single-molecule experiments with molecular dynamics simulations by machine learning

    PubMed Central

    Matsunaga, Yasuhiro

    2018-01-01

    Single-molecule experiments and molecular dynamics (MD) simulations are indispensable tools for investigating protein conformational dynamics. The former provide time-series data, such as donor-acceptor distances, whereas the latter give atomistic information, although this information is often biased by model parameters. Here, we devise a machine-learning method to combine the complementary information from the two approaches and construct a consistent model of conformational dynamics. It is applied to the folding dynamics of the formin-binding protein WW domain. MD simulations over 400 μs led to an initial Markov state model (MSM), which was then "refined" using single-molecule Förster resonance energy transfer (FRET) data through hidden Markov modeling. The refined or data-assimilated MSM reproduces the FRET data and features hairpin one in the transition-state ensemble, consistent with mutation experiments. The folding pathway in the data-assimilated MSM suggests interplay between hydrophobic contacts and turn formation. Our method provides a general framework for investigating conformational transitions in other proteins. PMID:29723137
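
    The MSM construction step can be sketched as transition counting on a discretized trajectory (the refinement against FRET data via hidden Markov modeling is not shown):

        import numpy as np

        def msm_transition_matrix(dtraj, n_states, lag=1):
            """Estimate a Markov state model transition matrix by counting
            transitions at the given lag time in a discretized trajectory
            and row-normalizing the count matrix."""
            counts = np.zeros((n_states, n_states))
            for i, j in zip(dtraj[:-lag], dtraj[lag:]):
                counts[i, j] += 1
            return counts / counts.sum(axis=1, keepdims=True)

        # Toy discretized trajectory over 3 states (real input: clustered MD frames).
        dtraj = np.array([0, 0, 1, 1, 2, 2, 1, 0, 0, 1, 2, 2])
        print(msm_transition_matrix(dtraj, 3))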

  6. General Mechanism of Two-State Protein Folding Kinetics

    PubMed Central

    Rollins, Geoffrey C.; Dill, Ken A.

    2016-01-01

    We describe here a general model of the kinetic mechanism of protein folding. In the Foldon Funnel Model, proteins fold in units of secondary structures, which form sequentially along the folding pathway, stabilized by tertiary interactions. The model predicts that the free energy landscape has a volcano shape, rather than a simple funnel, that folding is two-state (single-exponential) when secondary structures are intrinsically unstable, and that each structure along the folding path is a transition state for the previous structure. It shows how sequential pathways are consistent with multiple stochastic routes on funnel landscapes, and it gives good agreement with the nine-order-of-magnitude dependence of folding rates on protein size for a set of 93 proteins, while at the same time being consistent with the near-independence of the folding equilibrium constant from protein size. This model gives estimates of the folding rates of proteomes, leading to a median folding time in Escherichia coli of about 5 s. PMID:25056406

  7. Self-consistent modelling of the polar thermosphere and ionosphere to magnetospheric convection and precipitation (invited review)

    NASA Technical Reports Server (NTRS)

    Rees, D.; Fuller-Rowell, T.; Quegan, S.; Moffett, R.

    1986-01-01

    It has recently been demonstrated that the dramatic effects of plasma precipitation and convection on the composition and dynamics of the polar thermosphere and ionosphere include a number of strong interactive, or feedback, processes. To aid the evaluation of these feedback processes, a joint three dimensional time dependent global model of the Earth's thermosphere and ionosphere was developed in a collaboration between University College London and Sheffield University. This model includes self consistent coupling between the thermosphere and the ionosphere in the polar regions. Some of the major features in the polar ionosphere, which the initial simulations indicate are due to the strong coupling of ions and neutrals in the presence of strong electric fields and energetic electron precipitation are reviewed. The model is also able to simulate seasonal and Universal time variations in the polar thermosphere and ionospheric regions which are due to the variations of solar photoionization in specific geomagnetic regions such as the cusp and polar cap.

  8. Linking time-series of single-molecule experiments with molecular dynamics simulations by machine learning.

    PubMed

    Matsunaga, Yasuhiro; Sugita, Yuji

    2018-05-03

    Single-molecule experiments and molecular dynamics (MD) simulations are indispensable tools for investigating protein conformational dynamics. The former provide time-series data, such as donor-acceptor distances, whereas the latter give atomistic information, although this information is often biased by model parameters. Here, we devise a machine-learning method to combine the complementary information from the two approaches and construct a consistent model of conformational dynamics. It is applied to the folding dynamics of the formin-binding protein WW domain. MD simulations over 400 μs led to an initial Markov state model (MSM), which was then "refined" using single-molecule Förster resonance energy transfer (FRET) data through hidden Markov modeling. The refined or data-assimilated MSM reproduces the FRET data and features hairpin one in the transition-state ensemble, consistent with mutation experiments. The folding pathway in the data-assimilated MSM suggests interplay between hydrophobic contacts and turn formation. Our method provides a general framework for investigating conformational transitions in other proteins. © 2018, Matsunaga et al.

  9. Nonadiabatic Dynamics for Electrons at Second-Order: Real-Time TDDFT and OSCF2.

    PubMed

    Nguyen, Triet S; Parkhill, John

    2015-07-14

    We develop a new model to simulate nonradiative relaxation and dephasing by combining real-time Hartree-Fock and density functional theory (DFT) with our recent open-systems theory of electronic dynamics. The approach has some key advantages: it has been systematically derived and properly relaxes noninteracting electrons to a Fermi-Dirac distribution. This paper combines the new dissipation theory with an atomistic, all-electron quantum chemistry code and an atom-centered model of the thermal environment. The environment is represented nonempirically and is dependent on molecular structure in a nonlocal way. A production quality, O(N^3) closed-shell implementation of our theory applicable to realistic molecular systems is presented, including timing information. This scaling implies that the added cost of our nonadiabatic relaxation model, time-dependent open self-consistent field at second order (OSCF2), is computationally inexpensive, relative to adiabatic propagation of real-time time-dependent Hartree-Fock (TDHF) or time-dependent density functional theory (TDDFT). Details of the implementation and numerical algorithm, including factorization and efficiency, are discussed. We demonstrate that OSCF2 approaches the stationary self-consistent field (SCF) ground state when the gap is large relative to k_B T. The code is used to calculate linear-response spectra including the effects of bath dynamics. Finally, we show how our theory of finite-temperature relaxation can be used to correct ground-state DFT calculations.
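
    The target distribution of that relaxation is the Fermi-Dirac occupation; a small, numerically safe sketch (orbital energies and chemical potential are invented):

        import numpy as np
        from scipy.special import expit

        def fermi_dirac(energies, mu, kT):
            """Occupations toward which the dissipative dynamics relaxes:
            f(E) = 1/(exp((E - mu)/kT) + 1), computed overflow-safely via expit."""
            return expit(-(np.asarray(energies) - mu) / kT)

        # When the gap is large relative to k_B T, occupations are essentially 0/1,
        # i.e. the dynamics approaches the stationary SCF ground state.
        e = [-0.60, -0.45, 0.15, 0.30]               # orbital energies, Hartree
        print(fermi_dirac(e, mu=-0.15, kT=0.00095))  # ~room temperature in Hartree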

  10. Estimating True Short-Term Consistency in Vocational Interests: A Longitudinal SEM Approach

    ERIC Educational Resources Information Center

    Gaudron, Jean-Philippe; Vautier, Stephane

    2007-01-01

    This study aimed at estimating the correlation between true scores (true consistency) of vocational interest over a short time span in a sample of 1089 adults. Participants were administered 54 items assessing vocational, family, and leisure interests twice over a 1-month period. Responses were analyzed with a multitrait (MT) model, which supposes…

  11. Using Multimodal Learning Analytics to Model Student Behaviour: A Systematic Analysis of Behavioural Framing

    ERIC Educational Resources Information Center

    Andrade, Alejandro; Delandshere, Ginette; Danish, Joshua A.

    2016-01-01

    One of the challenges many learning scientists face is the laborious task of coding large amounts of video data and consistently identifying social actions, which is time consuming and difficult to accomplish in a systematic and consistent manner. It is easier to catalog observable behaviours (e.g., body motions or gaze) without explicitly…

  12. Exploring the effect of anisotropy on body-wave tomography models: Rollback and subduction of the Alboran slab

    NASA Astrophysics Data System (ADS)

    Lee, H.; Bezada, M.

    2017-12-01

    Teleseismic P-wave tomography models often show low-velocity anomalies behind subducted slabs (i.e. opposite the direction of subduction). One such anomaly, behind the Alboran slab in the westernmost Mediterranean, requires partial melt in the mantle if taken at face-value. However, mantle anisotropy can cause low-velocity anomalies in tomographic models that assume isotropy. In fact, results from SKS splitting suggest rollback-induced anisotropy within the low-velocity region, and we investigate if this anisotropy can explain the sub-slab anomaly. We include anisotropy as an a priori constraint on the inversion and test different magnitudes, azimuths, and dips within the low-velocity region. We find that a range of anisotropic models can fit the travel time data as well as the isotropic models while significantly reducing or eliminating the low-velocity anomaly behind the slab. We conclude that this alternative interpretation (delays are caused by anisotropic structure) is as consistent with the travel time data as an isotropic low-velocity anomaly, and more consistent with SKS splitting observations and the known history of rollback. In addition, we find that models that include anisotropy with steeply dipping fast axes, meant to simulate the effect of downgoing entrained mantle, provide a poorer fit to the travel times than all the other models. This suggests that the slab may no longer be actively subducting.

  13. A Composite View of Ozone Evolution in the 1995-1996 Northern Winter Polar Vortex Developed from Airborne Lidar and Satellite Observations

    NASA Technical Reports Server (NTRS)

    Douglass, A. R.; Schoeberl, M. R.; Kawa, S. R.; Browell, E. V.

    2000-01-01

    The processes which contribute to the ozone evolution in the high latitude northern lower stratosphere are evaluated using a three dimensional model simulation and ozone observations. The model uses winds and temperatures from the Goddard Earth Observing System Data Assimilation System. The simulation results are compared with ozone observations from three platforms: the differential absorption lidar (DIAL) which was flown on the NASA DC-8 as part of the Vortex Ozone Transport Experiment; the Microwave Limb Sounder (MLS); the Polar Ozone and Aerosol Measurement (POAM II) solar occultation instrument. Time series for the different data sets are consistent with each other, and diverge from model time series during December and January. The model ozone in December and January is shown to be much less sensitive to the model photochemistry than to the model vertical transport, which depends on the model vertical motion as well as the model vertical gradient. We evaluate the dependence of model ozone evolution on the model ozone gradient by comparing simulations with different initial conditions for ozone. The modeled ozone throughout December and January most closely resembles observed ozone when the vertical profiles between 12 and 20 km within the polar vortex closely match December DIAL observations. We make a quantitative estimate of the uncertainty in the vertical advection using diabatic trajectory calculations. The net transport uncertainty is significant, and should be accounted for when comparing observations with model ozone. The observed and modeled ozone time series during December and January are consistent when these transport uncertainties are taken into account.

  14. Combined electrochemical, heat generation, and thermal model for large prismatic lithium-ion batteries in real-time applications

    NASA Astrophysics Data System (ADS)

    Farag, Mohammed; Sweity, Haitham; Fleckenstein, Matthias; Habibi, Saeid

    2017-08-01

    Real-time prediction of the battery's core temperature and terminal voltage is crucial for an accurate battery management system. In this paper, a combined electrochemical, heat generation, and thermal model is developed for large prismatic cells. The proposed model consists of three sub-models (an electrochemical model, a heat generation model, and a thermal model) which are coupled together in an iterative fashion through physicochemical temperature-dependent parameters. The proposed parameterization cycles identify the sub-models' parameters separately by exciting the battery under isothermal and non-isothermal operating conditions. The proposed combined model structure shows accurate terminal voltage and core temperature prediction at various operating conditions while maintaining a simple mathematical structure, making it ideal for real-time BMS applications. Finally, the model is validated against both isothermal and non-isothermal drive cycles, covering a broad range of C-rates and temperatures [-25 °C to 45 °C].
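
    The iterative coupling of the sub-models can be sketched as a fixed-point loop; all functional forms and constants below are placeholders, not the identified cell parameters:

        import math

        def terminal_voltage(current, T):          # electrochemical sub-model (stub)
            r0 = 0.002 * math.exp(500.0 * (1.0 / (T + 273.15) - 1.0 / 298.15))
            return 3.7 - current * r0

        def heat_rate(current, voltage):           # heat-generation sub-model (stub)
            return current * (3.7 - voltage)       # irreversible (ohmic) heat only

        def next_temperature(T, q, dt=5.0, h=0.5, T_amb=25.0, c=120.0):
            return T + dt * (q - h * (T - T_amb)) / c   # lumped thermal sub-model

        T, I = 25.0, 50.0
        for _ in range(500):                       # iterate the coupled loop
            v = terminal_voltage(I, T)
            T = next_temperature(T, heat_rate(I, v))
        print(f"core temperature ~ {T:.1f} deg C, terminal voltage ~ {v:.2f} V")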

  15. Self-consistent one dimension in space and three dimension in velocity kinetic trajectory simulation model of magnetized plasma-wall transition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chalise, Roshan, E-mail: plasma.roshan@gmail.com; Khanal, Raju

    2015-11-15

    We have developed a self-consistent 1d3v (one dimension in space, three dimensions in velocity) Kinetic Trajectory Simulation (KTS) model, which can be used for modeling various situations of interest and yields results of high accuracy. Exact ion trajectories are followed to calculate the ion distribution function along them, assuming an arbitrary injection ion distribution. The electrons, on the other hand, are assumed to have a cut-off Maxwellian velocity distribution at injection, and their density distribution is obtained analytically. Starting from an initial guess, the potential profile is iterated towards the final time-independent self-consistent state. We have used the model to study the plasma sheath region formed in the presence of an oblique magnetic field. Our results agree well with previous work from other models, and hence we expect our 1d3v KTS model to provide a basis for studying all types of magnetized plasmas, yielding more accurate results.

  16. Diversification rates have declined in the Malagasy herpetofauna.

    PubMed

    Scantlebury, Daniel P

    2013-09-07

    The evolutionary origins of Madagascar's biodiversity remain mysterious despite the fact that relative to land area, there is no other place with consistently high levels of species richness and endemism across a range of taxonomic levels. Most efforts to explain diversification on the island have focused on geographical models of speciation, but recent studies have begun to address the island's accumulation of species through time, although with conflicting results. Prevailing hypotheses for diversification on the island involve either constant diversification rates or scenarios where rates decline through time. Using relative-time-calibrated phylogenies for seven endemic vertebrate clades and a model-fitting framework, I find evidence that diversification rates have declined through time on Madagascar. I show that diversification rates have clearly declined throughout the history of each clade, and models invoking diversity-dependent reductions to diversification rates best explain the diversification histories for each clade. These results are consistent with the ecological theory of adaptive radiation, and, coupled with ancillary observations about ecomorphological and life-history evolution, strongly suggest that adaptive radiation was an important formative process for one of the most species-rich regions on the Earth. These results cast the Malagasy biota in a new light and provide macroevolutionary justification for conservation initiatives.

  17. Spatiotemporal Segmentation and Modeling of the Mitral Valve in Real-Time 3D Echocardiographic Images.

    PubMed

    Pouch, Alison M; Aly, Ahmed H; Lai, Eric K; Yushkevich, Natalie; Stoffers, Rutger H; Gorman, Joseph H; Cheung, Albert T; Gorman, Joseph H; Gorman, Robert C; Yushkevich, Paul A

    2017-09-01

    Transesophageal echocardiography is the primary imaging modality for preoperative assessment of mitral valves with ischemic mitral regurgitation (IMR). While there are well known echocardiographic insights into the 3D morphology of mitral valves with IMR, such as annular dilation and leaflet tethering, less is understood about how quantification of valve dynamics can inform surgical treatment of IMR or predict short-term recurrence of the disease. As a step towards filling this knowledge gap, we present a novel framework for 4D segmentation and geometric modeling of the mitral valve in real-time 3D echocardiography (rt-3DE). The framework integrates multi-atlas label fusion and template-based medial modeling to generate quantitatively descriptive models of valve dynamics. The novelty of this work is that temporal consistency in the rt-3DE segmentations is enforced during both the segmentation and modeling stages with the use of groupwise label fusion and Kalman filtering. The algorithm is evaluated on rt-3DE data series from 10 patients: five with normal mitral valve morphology and five with severe IMR. In these 10 data series that total 207 individual 3DE images, each 3DE segmentation is validated against manual tracing and temporal consistency between segmentations is demonstrated. The ultimate goal is to generate accurate and consistent representations of valve dynamics that can both visually and quantitatively provide insight into normal and pathological valve function.
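
    The temporal-consistency idea can be illustrated with a minimal scalar Kalman filter applied to a per-frame measurement (the noise variances and data are invented; the paper's filtering operates on the full model geometry):

        def kalman_smooth(measurements, q=1e-4, r=1e-2):
            """Minimal scalar Kalman filter with a random-walk state model, of the
            kind used to enforce temporal consistency on a per-frame measurement."""
            x, p = measurements[0], 1.0
            out = [x]
            for z in measurements[1:]:
                p += q                    # predict: state unchanged, uncertainty grows
                k = p / (p + r)           # Kalman gain
                x += k * (z - x)          # update toward the new frame's measurement
                p *= (1.0 - k)
                out.append(x)
            return out

        # e.g. annular diameter (cm) measured independently in consecutive 3DE frames
        print(kalman_smooth([3.10, 3.14, 3.60, 3.12, 3.11]))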

  18. Self-consistent large-N analytical solutions of inhomogeneous condensates in the quantum ℂP^{N-1} model

    NASA Astrophysics Data System (ADS)

    Nitta, Muneto; Yoshii, Ryosuke

    2017-12-01

    We give, for the first time, self-consistent large-N analytical solutions of inhomogeneous condensates in the quantum ℂP^{N-1} model in the large-N limit. We find a map from a set of gap equations of the ℂP^{N-1} model to those of the Gross-Neveu (GN) model (or the gap equation and the Bogoliubov-de Gennes equation), which enables us to find the self-consistent solutions. We find that the Higgs field of the ℂP^{N-1} model is given as a zero mode of solutions of the GN model, and consequently only topologically non-trivial solutions of the GN model yield nontrivial solutions of the ℂP^{N-1} model. A stable single soliton is constructed from an anti-kink of the GN model and has a broken (Higgs) phase inside its core, in which ℂP^{N-1} modes are localized, with a symmetric (confining) phase outside. We further find a stable periodic soliton lattice constructed from a real kink crystal in the GN model, while the Ablowitz-Kaup-Newell-Segur hierarchy yields multiple solitons at arbitrary separations.

  19. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Wenyang; Cheung, Yam; Sawant, Amit

    2016-05-15

    Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared-error, by comparing its reconstruction results against that from the variational method. Results: On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions. Conclusions: The authors have developed a fast and robust surface reconstruction method on point clouds captured from a 3D surface photogrammetry system, with demonstrated sub-millimeter reconstruction accuracy and subsecond reconstruction time. It is suitable for real-time motion tracking in radiotherapy, with clear surface structures for better quantifications.

  20. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system.

    PubMed

    Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan

    2016-05-01

    To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared-error, by comparing its reconstruction results against that from the variational method. On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions. The authors have developed a fast and robust surface reconstruction method on point clouds captured from a 3D surface photogrammetry system, with demonstrated sub-millimeter reconstruction accuracy and subsecond reconstruction time. It is suitable for real-time motion tracking in radiotherapy, with clear surface structures for better quantifications.

  1. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system

    PubMed Central

    Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan

    2016-01-01

    Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared-error, by comparing its reconstruction results against that from the variational method. Results: On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions. Conclusions: The authors have developed a fast and robust surface reconstruction method on point clouds captured from a 3D surface photogrammetry system, with demonstrated sub-millimeter reconstruction accuracy and subsecond reconstruction time. It is suitable for real-time motion tracking in radiotherapy, with clear surface structures for better quantifications. PMID:27147347
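
    The SR model's core step, approximating a target cloud as a sparse combination of training clouds, can be sketched with an L1-penalized regression (synthetic data; the MSR variant's Laplacian error term is omitted):

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(5)
        n_points = 500                                # vertices after ICP correspondence
        train = rng.normal(size=(20, n_points * 3))   # 20 training clouds, flattened xyz
        target = 0.6 * train[3] + 0.4 * train[11] + rng.normal(0, 0.01, n_points * 3)

        # SR model: target cloud ~ sparse combination of training clouds (L1 penalty).
        model = Lasso(alpha=1e-3, fit_intercept=False)
        model.fit(train.T, target)                    # columns = training clouds
        recon = train.T @ model.coef_
        print(np.flatnonzero(np.abs(model.coef_) > 1e-3))  # ~ [3, 11]
        print(np.sqrt(np.mean((recon - target) ** 2)))     # reconstruction RMSE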

  2. Quantum clocks and the foundations of relativity

    NASA Astrophysics Data System (ADS)

    Davies, Paul C. W.

    2004-05-01

    The conceptual foundations of the special and general theories of relativity differ greatly from those of quantum mechanics. Yet in all cases investigated so far, quantum mechanics seems to be consistent with the principles of relativity theory, when interpreted carefully. In this paper I report on a new investigation of this consistency using a model of a quantum clock to measure time intervals, a topic central to all metric theories of gravitation and to cosmology. Results are presented for two important scenarios related to the foundations of relativity theory: the speed of light as a limiting velocity and the weak equivalence principle (WEP). These topics are investigated in the light of claims of superluminal propagation in quantum tunnelling and possible violations of WEP. Special attention is given to the role of highly non-classical states. I find that by using a definition of time intervals based on a precise model of a quantum clock, ambiguities are avoided and, at least in the scenarios investigated, there is consistency with the theory of relativity, albeit with some subtleties.

  3. Nature of solidification of nanoconfined organic liquid layers.

    PubMed

    Lang, X Y; Zhu, Y F; Jiang, Q

    2007-01-30

    A simple model is established for solidification of a nanoconfined liquid under nonequilibrium conditions. In terms of this model, the nature of solidification is the conjunct finite size and interface effects, which is directly related to the cooling rate or the relaxation time of the undercooled liquid. The model predictions are consistent with available experimental results.

  4. A prospective earthquake forecast experiment for Japan

    NASA Astrophysics Data System (ADS)

    Yokoi, Sayoko; Nanjo, Kazuyoshi; Tsuruoka, Hiroshi; Hirata, Naoshi

    2013-04-01

    One major focus of the current Japanese earthquake prediction research program (2009-2013) is to move toward creating testable earthquake forecast models. For this purpose we started an experiment of forecasting earthquake activity in Japan under the framework of the Collaboratory for the Study of Earthquake Predictability (CSEP) through an international collaboration. We established the CSEP Testing Centre, an infrastructure to encourage researchers to develop testable models for Japan, and to conduct verifiable prospective tests of their model performance. On 1 November 2009, we started the first earthquake forecast testing experiment for the Japan area. We use the unified JMA catalogue compiled by the Japan Meteorological Agency as the authorized catalogue. The experiment consists of 12 categories, with 4 testing classes with different time spans (1 day, 3 months, 1 year, and 3 years) and 3 testing regions called All Japan, Mainland, and Kanto. A total of 91 models were submitted to CSEP-Japan and are evaluated with the CSEP official suite of tests of forecast performance. In this presentation, we show the results of the experiment for the 3-month testing class over 5 rounds. The HIST-ETAS7pa, MARFS and RI10K models, corresponding to the All Japan, Mainland and Kanto regions respectively, showed the best scores based on the total log-likelihood. It is also clarified that time dependency of model parameters is not an effective factor in passing the CSEP consistency tests for the 3-month testing class in any region. In particular, the spatial distribution in the All Japan region was too difficult to pass the consistency test because of multiple events in a single bin. The number of target events per round in the Mainland region tended to be smaller than the models' expectations during all rounds, which resulted in rejections in the consistency test because of overestimation. In the Kanto region, the pass ratio of the consistency tests for each model exceeded 80%, which was associated with well-balanced forecasting of event numbers and spatial distribution. Through the multiple rounds of the experiment, we are now beginning to understand the stability of models, the robustness of model selection, and earthquake predictability in each region beyond stochastic fluctuations of seismicity. We plan to use the results in the design of a three-dimensional earthquake forecasting model for the Kanto region, which is supported by the special project for reducing vulnerability to urban mega-earthquake disasters of the Ministry of Education, Culture, Sports, Science and Technology of Japan.
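
    One of the CSEP consistency tests mentioned here, the number (N-) test, reduces to Poisson tail probabilities; a minimal sketch mirroring the Mainland-region overprediction (the counts below are invented):

        from scipy.stats import poisson

        def n_test(n_forecast, n_observed):
            """CSEP-style number test: two one-sided Poisson tail probabilities for
            the observed event count given the forecast rate; a small value on
            either side rejects the forecast."""
            delta1 = 1.0 - poisson.cdf(n_observed - 1, n_forecast)  # P(N >= n_obs)
            delta2 = poisson.cdf(n_observed, n_forecast)            # P(N <= n_obs)
            return delta1, delta2

        # Overestimation case: forecast 20 events in a round, observe 9.
        print(n_test(20.0, 9))   # delta2 is small, so the forecast over-predicts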

  5. Common sense reasoning about petroleum flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosenberg, S.

    1981-02-01

    This paper describes an expert system for understanding and reasoning in a petroleum resources domain. A basic model is implemented in FRL (Frame Representation Language). Expertise is encoded as rule frames. The model consists of a set of episodic contexts which are sequentially generated over time. Reasoning occurs in separate reasoning contexts consisting of a buffer frame and packets of rules; these function similarly to small production systems. Reasoning is linked to the model through an interface of Sentinels (instance-driven demons) which notice anomalous conditions. Heuristics and metaknowledge are used through the creation of further reasoning contexts which overlay the simpler ones.

  6. Baryon octet electromagnetic form factors in a confining NJL model

    NASA Astrophysics Data System (ADS)

    Carrillo-Serrano, Manuel E.; Bentz, Wolfgang; Cloët, Ian C.; Thomas, Anthony W.

    2016-08-01

    Electromagnetic form factors of the baryon octet are studied using a Nambu-Jona-Lasinio model which utilizes the proper-time regularization scheme to simulate aspects of colour confinement. In addition, the model also incorporates corrections to the dressed quarks from vector meson correlations in the t-channel and the pion cloud. Comparison with recent chiral extrapolations of lattice QCD results shows a remarkable level of consistency. For the charge radii we find the surprising result that $r_E^p < r_E^{\Sigma^+}$ and $|r_E^n| < |r_E^{\Xi^0}|$, whereas the magnetic radii have a pattern largely consistent with a naive expectation based on the dressed quark masses.

  7. 40 CFR 86.1725-99 - Maintenance.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) through (e) and subsequent model year provisions. (b) Manufacturers of series hybrid electric vehicles and... the first time the minimum performance level is observed for all battery system components. Possible... system consisting of a light that shall illuminate the first time the battery system is unable to achieve...

  8. 40 CFR 86.1725-99 - Maintenance.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) through (e) and subsequent model year provisions. (b) Manufacturers of series hybrid electric vehicles and... the first time the minimum performance level is observed for all battery system components. Possible... system consisting of a light that shall illuminate the first time the battery system is unable to achieve...

  9. 40 CFR 86.1725-99 - Maintenance.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) through (e) and subsequent model year provisions. (b) Manufacturers of series hybrid electric vehicles and... the first time the minimum performance level is observed for all battery system components. Possible... system consisting of a light that shall illuminate the first time the battery system is unable to achieve...

  10. 40 CFR 86.1725-99 - Maintenance.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) through (e) and subsequent model year provisions. (b) Manufacturers of series hybrid electric vehicles and... the first time the minimum performance level is observed for all battery system components. Possible... system consisting of a light that shall illuminate the first time the battery system is unable to achieve...

  11. Significant motions between GPS sites in the New Madrid region: implications for seismic hazard

    USGS Publications Warehouse

    Frankel, Arthur; Smalley, Robert; Paul, J.

    2012-01-01

    Position time series from Global Positioning System (GPS) stations in the New Madrid region were differenced to determine the relative motions between stations. Uncertainties in rates were estimated using a three-component noise model consisting of white, flicker, and random walk noise, following the methodology of Langbein (2004). Significant motions of 0.37±0.07 (one standard error) mm/yr were found between sites PTGV and STLE, for which the baseline crosses the inferred deep portion of the Reelfoot fault. Baselines between STLE and three other sites also show significant motion. Site MCTY (adjacent to STLE) also exhibits significant motion with respect to PTGV. These motions are consistent with a model of interseismic slip of about 4 mm/yr on the Reelfoot fault at depths between 12 and 20 km. If constant over time, this rate of slip produces sufficient slip for an M 7.3 earthquake on the shallow portion of the Reelfoot fault, using the geologically derived recurrence time of 500 years. This model assumes that the shallow portion of the fault has been previously loaded by the intraplate stress. A GPS site near Little Rock, Arkansas, shows significant southward motion of 0.3–0.4 mm/yr (±0.08 mm/yr) relative to three sites to the north, indicating strain consistent with focal mechanisms of earthquake swarms in northern Arkansas.

  12. Time Series Modeling of Nano-Gold Immunochromatographic Assay via Expectation Maximization Algorithm.

    PubMed

    Zeng, Nianyin; Wang, Zidong; Li, Yurong; Du, Min; Cao, Jie; Liu, Xiaohui

    2013-12-01

    In this paper, the expectation maximization (EM) algorithm is applied to the modeling of the nano-gold immunochromatographic assay (nano-GICA) via available time series of the measured signal intensities of the test and control lines. The model for the nano-GICA is developed as a stochastic dynamic model consisting of a first-order autoregressive stochastic dynamic process and a noisy measurement. By using the EM algorithm, the model parameters, the actual signal intensities of the test and control lines, and the noise intensity can be identified simultaneously. Three different time series data sets concerning the target concentrations are employed to demonstrate the effectiveness of the introduced algorithm. Several indices are also proposed to evaluate the inferred models. It is shown that the model fits the data very well.
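
    The model class above -- a latent AR(1) signal observed through noise -- admits a closed-form EM iteration built on Kalman filtering and smoothing. The following NumPy sketch follows the classical Shumway-Stoffer recursions rather than the authors' implementation; the starting values are assumptions:

        import numpy as np

        def em_ar1_noisy(y, n_iter=50, a=0.5, q=1.0, r=1.0, mu0=0.0, p0=1.0):
            # State-space model:  x_t = a*x_{t-1} + w_t,  w_t ~ N(0, q)
            #                     y_t = x_t + v_t,        v_t ~ N(0, r)
            n = len(y)
            for _ in range(n_iter):
                # E-step, forward pass: Kalman filter
                xp, pp = np.zeros(n), np.zeros(n)    # one-step predictions
                xf, pf = np.zeros(n), np.zeros(n)    # filtered estimates
                for t in range(n):
                    xp[t] = mu0 if t == 0 else a * xf[t - 1]
                    pp[t] = p0 if t == 0 else a**2 * pf[t - 1] + q
                    k = pp[t] / (pp[t] + r)          # Kalman gain
                    xf[t] = xp[t] + k * (y[t] - xp[t])
                    pf[t] = (1 - k) * pp[t]
                # E-step, backward pass: RTS smoother
                xs, ps, J = xf.copy(), pf.copy(), np.zeros(n)
                for t in range(n - 2, -1, -1):
                    J[t] = a * pf[t] / pp[t + 1]
                    xs[t] += J[t] * (xs[t + 1] - xp[t + 1])
                    ps[t] += J[t]**2 * (ps[t + 1] - pp[t + 1])
                pcs = J[:-1] * ps[1:]                # lag-one covariances
                # M-step: closed-form updates of all parameters
                s11 = np.sum(ps[1:] + xs[1:]**2)
                s10 = np.sum(pcs + xs[1:] * xs[:-1])
                s00 = np.sum(ps[:-1] + xs[:-1]**2)
                a = s10 / s00
                q = (s11 - a * s10) / (n - 1)
                r = np.mean((y - xs)**2 + ps)
                mu0, p0 = xs[0], ps[0]
            return a, q, r, xs    # parameters plus the smoothed signal

    Given a recorded intensity series y, this recovers the AR coefficient, both noise intensities, and the smoothed signal estimate in one pass, mirroring the simultaneous identification described above.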

  13. Validating the short measure of the Effort-Reward Imbalance Questionnaire in older workers in the context of New Zealand.

    PubMed

    Li, Jian; Herr, Raphael M; Allen, Joanne; Stephens, Christine; Alpass, Fiona

    2017-11-25

    The objective of this study was to validate a short version of the Effort-Reward-Imbalance (ERI) questionnaire in the context of New Zealand among older full-time and part-time employees. Data were collected from 1694 adults aged 48-83 years (mean 60 years, 53% female) who reported being in full- or part-time paid employment in the 2010 wave of the New Zealand Health, Work and Retirement study. Scale reliability was evaluated by item-total correlations and Cronbach's alpha. Factorial validity was assessed using multi-group confirmatory factor analyses assessing nested models of configural, metric, scalar and strict invariance across full- and part-time employment groups. Logistic regressions estimated associations of effort-reward ratio and over-commitment with poor physical/mental health, and depressive symptoms. Internal consistency of the ERI scales was high across employment groups: effort 0.78-0.76; reward 0.81-0.77; and over-commitment 0.83-0.80. The three-factor model displayed acceptable fit in the overall sample (χ²/df = 10.31; CFI = 0.95; TLI = 0.94; RMSEA = 0.075), and decrements in model fit indices provided evidence for strict invariance of the three-factor ERI model across full-time and part-time employment groups. High effort-reward ratio scores were consistently associated with poor mental health and depressive symptoms for both employment groups. High over-commitment was associated with poor mental health and depressive symptoms in both groups and also with poor physical health in the full-time employment group. The short ERI questionnaire appears to be a valid instrument to assess adverse psychosocial work characteristics in older full-time and part-time employees in New Zealand.
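
    The reliability coefficient used above has a one-line definition worth keeping at hand. A minimal sketch (the generic formula, not the authors' analysis scripts; the example scores are invented):

        import numpy as np

        def cronbach_alpha(items):
            # items: (n_respondents, k_items) matrix of item scores.
            # alpha = k/(k-1) * (1 - sum(item variances) / var(total score))
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_var = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1.0 - item_var / total_var)

        # e.g. a 3-item effort scale scored 1-5 by five respondents
        effort = np.array([[4, 5, 4], [2, 2, 3], [5, 4, 4], [1, 2, 1], [3, 3, 4]])
        print(round(cronbach_alpha(effort), 2))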

  14. Validating the short measure of the Effort-Reward Imbalance Questionnaire in older workers in the context of New Zealand

    PubMed Central

    Li, Jian; Herr, Raphael M.; Allen, Joanne; Stephens, Christine; Alpass, Fiona

    2017-01-01

    Objectives: The objective of this study was to validate a short version of the Effort-Reward-Imbalance (ERI) questionnaire in the context of New Zealand among older full-time and part-time employees. Methods: Data were collected from 1694 adults aged 48-83 years (mean 60 years, 53% female) who reported being in full- or part-time paid employment in the 2010 wave of the New Zealand Health, Work and Retirement study. Scale reliability was evaluated by item-total correlations and Cronbach's alpha. Factorial validity was assessed using multi-group confirmatory factor analyses assessing nested models of configural, metric, scalar and strict invariance across full- and part-time employment groups. Logistic regressions estimated associations of effort-reward ratio and over-commitment with poor physical/mental health, and depressive symptoms. Results: Internal consistency of the ERI scales was high across employment groups: effort 0.78-0.76; reward 0.81-0.77; and over-commitment 0.83-0.80. The three-factor model displayed acceptable fit in the overall sample (χ²/df = 10.31; CFI = 0.95; TLI = 0.94; RMSEA = 0.075), and decrements in model fit indices provided evidence for strict invariance of the three-factor ERI model across full-time and part-time employment groups. High effort-reward ratio scores were consistently associated with poor mental health and depressive symptoms for both employment groups. High over-commitment was associated with poor mental health and depressive symptoms in both groups and also with poor physical health in the full-time employment group. Conclusions: The short ERI questionnaire appears to be a valid instrument to assess adverse psychosocial work characteristics in older full-time and part-time employees in New Zealand. PMID:28835574

  15. Process-time Optimization of Vacuum Degassing Using a Genetic Alloy Design Approach

    PubMed Central

    Dilner, David; Lu, Qi; Mao, Huahai; Xu, Wei; van der Zwaag, Sybrand; Selleby, Malin

    2014-01-01

    This paper demonstrates the use of a new model consisting of a genetic algorithm in combination with thermodynamic calculations and analytical process models to minimize the processing time during a vacuum degassing treatment of liquid steel. The model sets multiple simultaneous targets for the final S, N, O, Si and Al levels and uses the total slag mass, the slag composition, the steel composition and the start temperature as optimization variables. The predicted optimal conditions agree well with industrial practice. For those conditions leading to the shortest process time, the target compositions for S, N and O are reached almost simultaneously. PMID:28788286
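
    The optimization loop described above can be pictured with a small genetic-algorithm skeleton. The sketch below is illustrative only: the objective function is a hypothetical stand-in for the coupled thermodynamic and process models, and all GA settings are assumptions:

        import numpy as np

        rng = np.random.default_rng(0)

        def process_time(x):
            # Hypothetical stand-in: in the study this would be the coupled
            # thermodynamic/process model returning the degassing time needed
            # to reach the S, N and O targets for normalized variables x
            # (slag mass, slag/steel composition, start temperature).
            return float(np.sum((x - 0.3) ** 2) + 1.0)

        def ga_minimize(obj, n_var=4, n_pop=40, n_gen=100, sigma=0.1):
            pop = rng.random((n_pop, n_var))              # random initial designs
            for _ in range(n_gen):
                fit = np.array([obj(x) for x in pop])
                parents = pop[np.argsort(fit)[: n_pop // 2]]   # selection
                i, j = rng.integers(len(parents), size=(2, n_pop))
                mask = rng.random((n_pop, n_var)) < 0.5        # uniform crossover
                pop = np.where(mask, parents[i], parents[j])
                pop += rng.normal(0.0, sigma, pop.shape)       # Gaussian mutation
                pop = np.clip(pop, 0.0, 1.0)
                pop[0] = parents[0]                            # elitism
            fit = np.array([obj(x) for x in pop])
            return pop[np.argmin(fit)], fit.min()

        best_x, best_t = ga_minimize(process_time)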

  16. A comparison of the stochastic and machine learning approaches in hydrologic time series forecasting

    NASA Astrophysics Data System (ADS)

    Kim, T.; Joo, K.; Seo, J.; Heo, J. H.

    2016-12-01

    Hydrologic time series forecasting is an essential task in water resources management, and it becomes more difficult due to the complexity of the runoff process. Traditional stochastic models such as the ARIMA family have been used as a standard approach in time series modeling and forecasting of hydrological variables. Due to the nonlinearity in hydrologic time series data, machine learning approaches have been studied for their ability to discover relevant features in nonlinear relations among variables. This study aims to compare the predictability of the traditional stochastic model and the machine learning approach. A seasonal ARIMA model was used as the traditional time series model, and a Random Forest model, an ensemble of decision trees built on multiple predictors, was applied as the machine learning approach. In the application, monthly inflow data from 1986 to 2015 for Chungju dam in South Korea were used for modeling and forecasting. In order to evaluate the performance of the models, one-step-ahead and multi-step-ahead forecasting were applied, and the root mean squared error and mean absolute error of the two models were compared.
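
    A minimal sketch of such a comparison on a monthly series y (a 1-D NumPy array) follows; the SARIMA orders, lag features and test-window length are illustrative assumptions, not the study's settings:

        import numpy as np
        from statsmodels.tsa.statespace.sarimax import SARIMAX
        from sklearn.ensemble import RandomForestRegressor

        def one_step_comparison(y, n_test=24, n_lags=12):
            sarima_pred, rf_pred = [], []
            for t in range(len(y) - n_test, len(y)):
                train = y[:t]
                # Stochastic approach: seasonal ARIMA refit on expanding window
                sarima = SARIMAX(train, order=(1, 0, 1),
                                 seasonal_order=(1, 0, 1, 12)).fit(disp=False)
                sarima_pred.append(sarima.forecast(1)[0])
                # Machine-learning approach: random forest on lagged predictors
                X = np.array([train[i - n_lags:i]
                              for i in range(n_lags, len(train))])
                rf = RandomForestRegressor(n_estimators=200, random_state=0)
                rf.fit(X, train[n_lags:])
                rf_pred.append(rf.predict(train[-n_lags:].reshape(1, -1))[0])
            obs = y[-n_test:]
            rmse = lambda p: float(np.sqrt(np.mean((obs - np.array(p)) ** 2)))
            mae = lambda p: float(np.mean(np.abs(obs - np.array(p))))
            return {"SARIMA": (rmse(sarima_pred), mae(sarima_pred)),
                    "RandomForest": (rmse(rf_pred), mae(rf_pred))}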

  17. Time-Dependent Moment Tensors of the First Four Source Physics Experiments (SPE) Explosions

    NASA Astrophysics Data System (ADS)

    Yang, X.

    2015-12-01

    We use mainly vertical-component geophone data within 2 km of the epicenter to invert for time-dependent moment tensors of the first four SPE explosions: SPE-1, SPE-2, SPE-3 and SPE-4Prime. We employ a one-dimensional (1D) velocity model developed from P- and Rg-wave travel times for Green's function calculations. The attenuation structure of the model is developed from P- and Rg-wave amplitudes. We select data for the inversion based on the criterion that they show travel times and amplitude behavior consistent with those predicted by the 1D model. Due to the limited azimuthal coverage of the sources and the mostly vertical-component-only nature of the dataset, only the long-period, diagonal components of the moment tensors are well constrained. Nevertheless, the moment tensors, particularly their isotropic components, provide reasonable estimates of the long-period source amplitudes as well as estimates of corner frequencies, albeit with larger uncertainties. The estimated corner frequencies, however, are consistent with estimates from ratios of seismogram spectra from different explosions. These long-period source amplitudes and corner frequencies cannot be fit by classical P-wave explosion source models. The results motivate the development of new P-wave source models suitable for these chemical explosions. To that end, we fit the inverted moment-tensor spectra by modifying the classical explosion model using regressions of estimated source parameters. Although the number of data points used in the regression is small, the approach suggests a path toward new-model development as more data are collected.

  18. Detecting changes in dynamic and complex acoustic environments

    PubMed Central

    Boubenec, Yves; Lawlor, Jennifer; Górska, Urszula; Shamma, Shihab; Englitz, Bernhard

    2017-01-01

    Natural sounds, such as wind or rain, are characterized by the statistical occurrence of their constituents. Despite their complexity, listeners readily detect changes in these contexts. Here we address the neural basis of statistical decision-making using a combination of psychophysics, EEG and modelling. In a texture-based change-detection paradigm, human performance and reaction times improved with longer pre-change exposure, consistent with improved estimation of baseline statistics. Change-locked and decision-related EEG responses were found at a centro-parietal scalp location, whose slope depended on change size, consistent with sensory evidence accumulation. The potential's amplitude scaled with the duration of pre-change exposure, suggesting a time-dependent decision threshold. Auditory cortex-related potentials showed no response to the change. A dual-timescale statistical estimation model accounted for subjects' performance. Furthermore, a decision-augmented auditory cortex model accounted for performance and reaction times, suggesting that the primary cortical representation requires little post-processing to enable change-detection in complex acoustic environments. DOI: http://dx.doi.org/10.7554/eLife.24910.001 PMID:28262095

  19. Unifying inflation with ΛCDM epoch in modified f(R) gravity consistent with Solar System tests

    NASA Astrophysics Data System (ADS)

    Nojiri, Shin'ichi; Odintsov, Sergei D.

    2007-12-01

    We suggest two realistic f(R) and one F(G) modified gravities which are consistent with local tests and cosmological bounds. The typical property of such theories is the presence of effective cosmological constant epochs, such that early-time inflation and late-time cosmic acceleration are naturally unified within a single model. It is shown that classical instability does not appear here and that Newton's law is respected. We also discuss the possible appearance of an anti-gravity regime and the related modification of the theory.
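
    For orientation, f(R) gravity generalizes the Einstein-Hilbert action by promoting the Ricci scalar to a function of itself; in standard notation (the generic action, not the specific models proposed in the paper),

        S = \frac{1}{2\kappa^2} \int d^4x \, \sqrt{-g} \, f(R) + S_{\mathrm{matter}},

    and the choice f(R) = R recovers general relativity, the limit in which Solar System tests must be respected.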

  20. Correlation effects in superconducting quantum dot systems

    NASA Astrophysics Data System (ADS)

    Pokorný, Vladislav; Žonda, Martin

    2018-05-01

    We study the effect of electron correlations on a system consisting of a single-level quantum dot with local Coulomb interaction attached to two superconducting leads. We use the single-impurity Anderson model with BCS superconducting baths to study the interplay between the proximity induced electron pairing and the local Coulomb interaction. We show how to solve the model using the continuous-time hybridization-expansion quantum Monte Carlo method. The results obtained for experimentally relevant parameters are compared with results of self-consistent second order perturbation theory as well as with the numerical renormalization group method.
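
    For reference, the superconducting single-impurity Anderson model referred to here is conventionally written as (a standard form; the notation is mine, not necessarily the authors'):

        H = \sum_{\sigma} \varepsilon_d \, d^\dagger_\sigma d_\sigma + U \, n_{d\uparrow} n_{d\downarrow}
            + \sum_{\alpha k \sigma} \varepsilon_{\alpha k} \, c^\dagger_{\alpha k \sigma} c_{\alpha k \sigma}
            - \sum_{\alpha k} \Delta_\alpha \left( e^{i\varphi_\alpha} c^\dagger_{\alpha k \uparrow} c^\dagger_{\alpha, -k \downarrow} + \mathrm{h.c.} \right)
            + \sum_{\alpha k \sigma} V_\alpha \left( c^\dagger_{\alpha k \sigma} d_\sigma + \mathrm{h.c.} \right),

    where α ∈ {L, R} labels the two BCS leads with gaps Δ_α and phases φ_α, U is the local Coulomb interaction on the dot, and V_α is the dot-lead hybridization.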

  1. An Observation Analysis Tool for time-series analysis and sensor management in the FREEWAT GIS environment for water resources management

    NASA Astrophysics Data System (ADS)

    Cannata, Massimiliano; Neumann, Jakob; Cardoso, Mirko; Rossetto, Rudy; Foglia, Laura; Borsi, Iacopo

    2017-04-01

    In situ time-series are an important aspect of environmental modelling, especially with the advancement of numerical simulation techniques and increased model complexity. In order to make use of the increasing amount of data available through the requirements of the EU Water Framework Directive, the FREEWAT GIS environment incorporates the newly developed Observation Analysis Tool (OAT) for time-series analysis. The tool is used to import time-series data into QGIS from local CSV files, online sensors using the istSOS service, or MODFLOW model result files, and it enables visualisation, pre-processing of data for model development, and post-processing of model results. OAT can also serve as a pre-processor for calibration, creating calibration observations directly from sensor time-series. The tool consists of an expandable Python library of processing methods and an interface integrated in the QGIS FREEWAT plug-in, which includes a large number of modelling capabilities, data management tools and calibration capacity.
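
    As a generic illustration of the kind of pre-processing such a tool performs (plain pandas, not OAT's actual API; the file and column names are invented):

        import pandas as pd

        # Load a sensor time-series from a local CSV (columns: timestamp, value),
        # regularize it, fill short gaps, and derive aggregated observations
        # suitable as calibration targets for a groundwater model.
        ts = (pd.read_csv("sensor.csv", parse_dates=["timestamp"])
                .set_index("timestamp")["value"]
                .sort_index())
        hourly = ts.resample("1h").mean().interpolate(limit=3)
        weekly_obs = hourly.resample("7D").mean()
        weekly_obs.to_csv("calibration_observations.csv")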

  2. Efficient Learning of Continuous-Time Hidden Markov Models for Disease Progression

    PubMed Central

    Liu, Yu-Ying; Li, Shuang; Li, Fuxin; Song, Le; Rehg, James M.

    2016-01-01

    The Continuous-Time Hidden Markov Model (CT-HMM) is an attractive approach to modeling disease progression due to its ability to describe noisy observations arriving irregularly in time. However, the lack of an efficient parameter learning algorithm for CT-HMM restricts its use to very small models or requires unrealistic constraints on the state transitions. In this paper, we present the first complete characterization of efficient EM-based learning methods for CT-HMM models. We demonstrate that the learning problem consists of two challenges: the estimation of posterior state probabilities and the computation of end-state conditioned statistics. We solve the first challenge by reformulating the estimation problem in terms of an equivalent discrete time-inhomogeneous hidden Markov model. The second challenge is addressed by adapting three approaches from the continuous time Markov chain literature to the CT-HMM domain. We demonstrate the use of CT-HMMs with more than 100 states to visualize and predict disease progression using a glaucoma dataset and an Alzheimer’s disease dataset. PMID:27019571
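
    The first reformulation step has a compact numerical core: over an interval of length Δt, a continuous-time Markov chain with rate matrix Q behaves like a discrete-time chain with transition matrix exp(QΔt). A minimal sketch (toy rate matrix assumed, not taken from the paper):

        import numpy as np
        from scipy.linalg import expm

        # Toy 3-state progression rate matrix Q (rows sum to zero);
        # the last state is absorbing, as in staged disease models.
        Q = np.array([[-0.10, 0.08, 0.02],
                      [ 0.00, -0.05, 0.05],
                      [ 0.00,  0.00, 0.00]])

        def transition_matrix(Q, dt):
            # Discrete transition matrix over a visit gap of length dt --
            # the building block for treating irregular observation times
            # as an equivalent time-inhomogeneous discrete HMM.
            return expm(Q * dt)

        for dt in (0.5, 1.7, 3.0):        # irregular gaps between visits
            P = transition_matrix(Q, dt)
            assert np.allclose(P.sum(axis=1), 1.0)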

  3. A real-time ionospheric model based on GNSS Precise Point Positioning

    NASA Astrophysics Data System (ADS)

    Tu, Rui; Zhang, Hongping; Ge, Maorong; Huang, Guanwen

    2013-09-01

    This paper proposes a method of real-time monitoring and modeling of the ionospheric Total Electron Content (TEC) by Precise Point Positioning (PPP). Firstly, the ionospheric TEC and the receiver's Differential Code Biases (DCB) are estimated with undifferenced raw observations in real time; then the ionospheric TEC model is established based on the Single Layer Model (SLM) assumption and the recovered ionospheric TEC. In this study, phase observations with high precision are used directly instead of phase-smoothed code observations. In addition, the DCB estimation is separated from the establishment of the ionospheric model, which limits the impact of the SLM assumption. The ionospheric model is established at every epoch for real-time application. The method is validated with three different GNSS networks on a local, regional, and global basis. The results show that the method is feasible and effective; the real-time ionosphere and DCB results are very consistent with the IGS final products, with biases of 1-2 TECU and 0.4 ns, respectively.
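
    For context, the dual-frequency relation underlying all such TEC recovery is a textbook identity (the paper itself works with high-precision undifferenced phase rather than the raw code combination sketched here):

        # Slant TEC from the geometry-free code combination on GPS L1/L2.
        # Ionospheric group delay on frequency f is 40.3 * TEC / f**2, so
        # P2 - P1 = 40.3 * TEC * (1/f2**2 - 1/f1**2).
        F1, F2 = 1575.42e6, 1227.60e6     # L1/L2 carrier frequencies, Hz

        def slant_tec(p1, p2):
            # p1, p2: pseudoranges in metres; returns TEC in TECU.
            tec = (p2 - p1) * (F1**2 * F2**2) / (40.3 * (F1**2 - F2**2))
            return tec / 1e16             # 1 TECU = 1e16 electrons/m^2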

  4. Modeling Geodetic Processes with Levy α-Stable Distribution and FARIMA

    NASA Astrophysics Data System (ADS)

    Montillet, Jean-Philippe; Yu, Kegen

    2015-04-01

    In recent years the scientific community has been using the auto-regressive moving average (ARMA) model in the modeling of the noise in Global Positioning System (GPS) time series (daily solutions). This work starts with an investigation of the limits of the ARMA model, which is widely used in signal processing when the measurement noise is white. Since a typical GPS time series consists of geophysical signals (e.g., seasonal signals) and stochastic processes (e.g., coloured and white noise), the ARMA model may be inappropriate. Therefore, the application of the fractional auto-regressive integrated moving average (FARIMA) model is investigated. The simulation results using simulated time series as well as real GPS time series from a few selected stations around Australia show that the FARIMA model fits the time series better than other models when the coloured noise is larger than the white noise. The second part of this work focuses on fitting the GPS time series with the family of Levy α-stable distributions. Using this distribution, a hypothesis test is developed to effectively eliminate coarse outliers from GPS time series, achieving better performance than the rule of thumb of n standard deviations (with n chosen empirically).
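
    What separates FARIMA from the ARMA/ARIMA family is the fractional differencing operator (1 − B)^d with non-integer d, which produces the long-memory coloured-noise behaviour discussed above. A minimal sketch of that operator (the truncation length is an assumption):

        import numpy as np

        def frac_diff(x, d, n_terms=200):
            # Apply (1 - B)^d via its binomial expansion:
            #   w_0 = 1,  w_k = w_{k-1} * (k - 1 - d) / k
            w = np.zeros(n_terms)
            w[0] = 1.0
            for k in range(1, n_terms):
                w[k] = w[k - 1] * (k - 1 - d) / k
            return np.convolve(x, w)[: len(x)]   # causal filtered series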

  5. Target Acquisition Involving Multiple Unmanned Air Vehicles: Interfaces for Small Unmanned Air Systems (ISUS) Program

    DTIC Science & Technology

    2009-03-01

    model locations, time of day, and video size. The models in the scene consisted of three-dimensional representations of common civilian automobiles in...oats, wheat). Identify automobiles as sedans or station wagons. Identify individual telephone/electric poles in residential neighborhoods. Detect

  6. Phenomenology of stochastic exponential growth

    NASA Astrophysics Data System (ADS)

    Pirjol, Dan; Jafarpour, Farshid; Iyer-Biswas, Srividya

    2017-06-01

    Stochastic exponential growth is observed in a variety of contexts, including molecular autocatalysis, nuclear fission, population growth, inflation of the universe, viral social media posts, and financial markets. Yet the literature on modeling the phenomenology of these stochastic dynamics has predominantly focused on one model, geometric Brownian motion (GBM), which can be described as the solution of a Langevin equation with linear drift and linear multiplicative noise. Using recent experimental results on the stochastic exponential growth of individual bacterial cell sizes, we motivate the need for a more general class of phenomenological models of stochastic exponential growth, consistent with the observation that the mean-rescaled distributions are approximately stationary at long times. We show that this behavior is not consistent with GBM; instead it is consistent with power-law multiplicative noise with positive fractional powers. Therefore, we consider this general class of phenomenological models for stochastic exponential growth, provide analytical solutions, and identify the important dimensionless combination of model parameters which determines the shape of the mean-rescaled distribution. We also provide a prescription for robustly inferring model parameters from experimentally observed stochastic growth trajectories.
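
    The model class in question is the Langevin equation dx = a·x dt + b·x^γ dW, where γ = 1 recovers GBM and fractional exponents 1/2 ≤ γ < 1 give the power-law multiplicative noise favoured above. An Euler-Maruyama simulation sketch under assumed parameter values:

        import numpy as np

        rng = np.random.default_rng(1)

        def simulate(a=1.0, b=0.3, gamma=0.75, x0=1.0, T=5.0, dt=1e-3,
                     n_paths=2000):
            # Euler-Maruyama paths of dx = a*x dt + b*x**gamma dW.
            x = np.full(n_paths, x0)
            for _ in range(int(T / dt)):
                dw = rng.normal(0.0, np.sqrt(dt), n_paths)
                x = np.abs(x + a * x * dt + b * x**gamma * dw)  # keep x > 0
            return x

        # The shape of the mean-rescaled distribution is the model signature:
        # approximately stationary for fractional gamma, unlike GBM.
        x = simulate()
        rescaled = x / x.mean()
        print(rescaled.std())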

  7. Bayesian spatiotemporal crash frequency models with mixture components for space-time interactions.

    PubMed

    Cheng, Wen; Gill, Gurdiljot Singh; Zhang, Yongping; Cao, Zhong

    2018-03-01

    Traffic safety research has developed spatiotemporal models to explore variations in the spatial pattern of crash risk over time. Many studies have observed notable benefits associated with the inclusion of spatial and temporal correlation and their interactions. However, the safety literature lacks sufficient research on the comparison of different temporal treatments and their interaction with the spatial component. This study developed four spatiotemporal models of varying complexity due to different temporal treatments: (I) linear time trend; (II) quadratic time trend; (III) autoregressive-1 (AR-1); and (IV) time adjacency. Moreover, the study introduced a flexible two-component mixture for the space-time interaction, which allows greater flexibility compared to the traditional linear space-time interaction. The mixture component accommodates the global space-time interaction as well as departures from the overall spatial and temporal risk patterns. This study performed a comprehensive assessment of the mixture models based on diverse criteria pertaining to goodness-of-fit, cross-validation, and evaluation based on in-sample data for the predictive accuracy of crash estimates. The assessment of model performance in terms of goodness-of-fit clearly established the superiority of the time-adjacency specification, which was more complex due to the information borrowed from neighboring years, but this addition of parameters gave a significant advantage in posterior deviance, which benefited the overall fit to the crash data. Base models were also developed to compare the proposed mixture and traditional space-time components for each temporal model. The mixture models consistently outperformed the corresponding Base models owing to much lower deviance. For the cross-validation comparison of predictive accuracy, the linear time trend model was judged the best, as it recorded the highest value of log pseudo marginal likelihood (LPML). Four other evaluation criteria were considered for typical validation using the same data as for model development. Under each criterion, observed crash counts were compared with three types of data: Bayesian estimated, normally predicted, and model-replicated. The linear model again performed the best in most scenarios, except in one case using model-replicated data and two cases involving prediction without random effects, indicating the mediocre performance of the linear trend when random effects were excluded from evaluation. This might be due to the flexible mixture space-time interaction, which can efficiently absorb the residual variability escaping from the predictable part of the model. The comparison of Base and mixture models in terms of prediction accuracy further bolstered the superiority of the mixture models, as the mixture models generated more precise estimated crash counts across all four models, suggesting that the advantages associated with the mixture component at model fit were transferable to prediction accuracy. Finally, the residual analysis demonstrated the consistently superior performance of the random effect models, which validates the importance of incorporating correlation structures to account for unobserved heterogeneity. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Connecting white light to in situ observations of 22 coronal mass ejections from the Sun to 1 AU

    NASA Astrophysics Data System (ADS)

    Moestl, C.; Amla, K.; Farrugia, C. J.; Hall, J. R.; Liewer, P. C.; De Jong, E.; Colaninno, R. C.; Vourlidas, A.; Veronig, A. M.; Rollett, T.; Temmer, M.; Peinhart, V.; Davies, J.; Lugaz, N.; Liu, Y. D.; McEnulty, T.; Luhmann, J. G.; Galvin, A. B.

    2013-12-01

    We study the feasibility of using a Heliospheric Imager (HI) instrument, such as STEREO/HI, for unambiguously connecting remote images to in situ observations of coronal mass ejections (CMEs). Our goal is to develop and test methods to predict CME parameters from heliospheric images, but our dataset can also be used to benchmark any ICME propagation model. The results are of interest for future missions such as Solar Orbiter, or a dedicated space weather mission at the Sun-Earth L5 point (e.g. the EASCO mission concept). We compare the predictions for speed and arrival time for 22 CME events (between 2008 and 2012), each observed remotely by one STEREO spacecraft, to the interplanetary coronal mass ejection (ICME) speed and arrival time observed at in situ observatories (STEREO PLASTIC/IMPACT, Wind SWE/MFI). We use forward modeling for STEREO/COR2 and geometrical models for STEREO/HI, assuming different CME front shapes (Fixed-Phi, Harmonic Mean, Self-similar expansion), and fit them to the CME time-elongation functions with the SolarSoft SATPLOT tool, assuming constant CME speed and direction. The arrival times derived from imaging match the in situ ones to within +/- 8 hours, and speeds are consistent within +/- 300 km/s, including CME apex/flank effects. We find no preference in predictive capability for any of the 3 geometries used on the full dataset, consisting of front- and backside, slow and fast CMEs (up to 2700 km/s). We search for new empirical relations between the predicted and observed speeds and arrival times, enhancing the HI predictive capabilities. Additionally, for very fast and backside CMEs, strong differences between the results of the HI models arise, consistent with theoretical expectations by Lugaz and Kintner (2013, Solar Physics). This work has received funding from the European Commission FP7 Project COMESEP (263252).
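
    For reference, the three front-shape assumptions correspond to standard conversions from the measured elongation angle ε(t) to an apex distance r(t), for an observer at heliocentric distance d and an assumed propagation angle φ from the observer (textbook forms quoted for orientation, not taken from the abstract):

        \text{Fixed-Phi:}\quad r = d\,\sin\varepsilon / \sin(\varepsilon + \phi)
        \text{Harmonic Mean:}\quad r = 2d\,\sin\varepsilon / \bigl(1 + \sin(\varepsilon + \phi)\bigr)
        \text{Self-similar expansion:}\quad r = d\,\sin\varepsilon\,(1 + \sin\lambda) / \bigl(\sin(\varepsilon + \phi) + \sin\lambda\bigr)

    where λ is the assumed CME half-width; the first two are the λ = 0° and λ = 90° limits of the third, consistent with the strong model differences reported above for fast, backside events.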

  9. Reducing bias and analyzing variability in the time-left procedure.

    PubMed

    Trujano, R Emmanuel; Orduña, Vladimir

    2015-04-01

    The time-left procedure was designed to evaluate the psychophysical function for time. Although previous results indicated a linear relationship, it is not clear what role the observed bias toward the time-left option plays in this procedure, and there are no reports of how variability changes with predicted indifference. The purposes of this experiment were to reduce bias experimentally and to contrast the difference limen (a measure of variability around indifference) with predictions from scalar expectancy theory (linear timing) and the behavioral economic model (logarithmic timing). A control group of 6 rats performed the original time-left procedure with C=60 s and S=5, 10,…, 50, 55 s, whereas a no-bias group of 6 rats performed the same conditions in a modified time-left procedure in which only a single response per choice trial was allowed. Results showed that bias was reduced for the no-bias group; observed indifference grew linearly with predicted indifference for both groups; and the difference limen and Weber ratios decreased as expected indifference increased for the control group, which is consistent with linear timing, whereas for the no-bias group they remained constant, consistent with logarithmic timing. Therefore, the time-left procedure generates results consistent with logarithmic perceived time once bias is experimentally reduced. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. Assessment of Current Estimates of Global and Regional Mean Sea Level from the TOPEX/Poseidon, Jason-1, and OSTM 17-Year Record

    NASA Technical Reports Server (NTRS)

    Beckley, Brian D.; Ray, Richard D.; Lemoine, Frank G.; Zelensky, N. P.; Holmes, S. A.; Desai, Shailen D.; Brown, Shannon; Mitchum, G. T.; Jacob, Samuel; Luthcke, Scott B.

    2010-01-01

    The science value of satellite altimeter observations has grown dramatically over time as enabling models and technologies have increased the value of data acquired on both past and present missions. With the prospect of an observational time series extending into several decades from TOPEX/Poseidon through Jason-1 and the Ocean Surface Topography Mission (OSTM), and further in time with a future set of operational altimeters, researchers are pushing the bounds of current technology and modeling capability in order to monitor the global sea level rate at an accuracy of a few tenths of a mm/yr. The measurement of mean sea-level change from satellite altimetry requires extreme stability of the altimeter measurement system, since the signal being measured is at the level of a few mm/yr. This means that the orbit and reference frame within which the altimeter measurements are situated, and the associated altimeter corrections, must be stable and accurate enough to permit a robust MSL estimate. Foremost, orbit quality and consistency are critical to satellite altimeter measurement accuracy. The orbit defines the altimeter reference frame, and orbit error directly affects the altimeter measurement. Orbit error remains a major component in the error budget of all past and present altimeter missions. For example, inconsistencies in the International Terrestrial Reference Frame (ITRF) used to produce the precision orbits at different times cause systematic inconsistencies in the multi-mission time frame between TOPEX and Jason-1, and can affect the inter-mission calibration of these data. In an effort to ensure cross-mission consistency, we have generated the full time series of orbits for TOPEX/Poseidon (TP), Jason-1, and OSTM based on recent improvements in the satellite force models, reference systems, and modeling strategies. The recent release of the entire revised Jason-1 Geophysical Data Records and the recalibration of the microwave radiometer correction also require further re-examination of inter-mission consistency issues. Here we present an assessment of these recent improvements to the accuracy of the 17-year sea surface height time series, and evaluate the subsequent impact on global and regional mean sea level estimates.

  11. Bringing consistency to simulation of population models--Poisson simulation as a bridge between micro and macro simulation.

    PubMed

    Gustafsson, Leif; Sternad, Mikael

    2007-10-01

    Population models concern collections of discrete entities such as atoms, cells, humans or animals, where the focus is on the number of entities in a population. Because of the complexity of such models, simulation is usually needed to reproduce their complete dynamic and stochastic behaviour. Two main types of simulation models are used for different purposes, namely micro-simulation models, where each individual is described with its particular attributes and behaviour, and macro-simulation models based on stochastic differential equations, where the population is described in aggregated terms by the number of individuals in different states. Consistency between micro- and macro-models is a crucial but often neglected aspect. This paper demonstrates how the Poisson Simulation technique can be used to produce a population macro-model consistent with the corresponding micro-model. This is accomplished by defining Poisson Simulation in strictly mathematical terms as a series of Poisson processes that generate sequences of Poisson distributions with dynamically varying parameters. The method can be applied to any population model. It provides the unique stochastic and dynamic macro-model consistent with a correct micro-model. The paper also presents a general macro form for stochastic and dynamic population models. In an appendix, Poisson Simulation is compared with Markov Simulation, showing a number of advantages. In particular, aggregation into state variables and aggregation of many events per time step make Poisson Simulation orders of magnitude faster than Markov Simulation. Furthermore, much larger and more complicated models can be built and executed with Poisson Simulation than is possible with the Markov approach.
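
    The core of the technique is compact: over each time step, every flow between states is drawn as a Poisson variate with mean rate × Δt. A minimal sketch for an SIR-type population model (the epidemic rates are invented for illustration; the paper's formulation is general):

        import numpy as np

        rng = np.random.default_rng(2)

        def poisson_sim_sir(beta=0.3, gamma=0.1, s=990, i=10, r=0,
                            dt=0.1, T=200.0):
            # Poisson Simulation: each inter-state flow over a step is
            # Poisson(rate * dt), keeping the macro-model stochastic and
            # consistent with the individual-level (micro) process.
            n = s + i + r
            traj = []
            for _ in range(int(T / dt)):
                inf = min(rng.poisson(beta * s * i / n * dt), s)  # S -> I
                rec = min(rng.poisson(gamma * i * dt), i)         # I -> R
                s, i, r = s - inf, i + inf - rec, r + rec
                traj.append((s, i, r))
            return np.array(traj)

        print(poisson_sim_sir()[-1])   # final (S, I, R) state counts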

  12. Carbon cycle confidence and uncertainty: Exploring variation among soil biogeochemical models

    DOE PAGES

    Wieder, William R.; Hartman, Melannie D.; Sulman, Benjamin N.; ...

    2017-11-09

    Emerging insights into factors responsible for soil organic matter stabilization and decomposition are being applied in a variety of contexts, but new tools are needed to facilitate the understanding, evaluation, and improvement of soil biogeochemical theory and models at regional to global scales. To isolate the effects of model structural uncertainty on the global distribution of soil carbon stocks and turnover times we developed a soil biogeochemical testbed that forces three different soil models with consistent climate and plant productivity inputs. The models tested here include a first-order, microbial implicit approach (CASA-CNP), and two recently developed microbially explicit models that can be run at global scales (MIMICS and CORPSE). When forced with common environmental drivers, the soil models generated similar estimates of initial soil carbon stocks (roughly 1,400 Pg C globally, 0–100 cm), but each model shows a different functional relationship between mean annual temperature and inferred turnover times. Subsequently, the models made divergent projections about the fate of these soil carbon stocks over the 20th century, with models either gaining or losing over 20 Pg C globally between 1901 and 2010. Single-forcing experiments with changed inputs, temperature, and moisture suggest that uncertainty associated with freeze-thaw processes as well as soil textural effects on soil carbon stabilization were larger than direct temperature uncertainties among models. Finally, the models generated distinct projections about the timing and magnitude of seasonal heterotrophic respiration rates, again reflecting structural uncertainties that were related to environmental sensitivities and assumptions about physicochemical stabilization of soil organic matter. Here, by providing a computationally tractable and numerically consistent framework to evaluate models we aim to better understand uncertainties among models and generate insights about factors regulating the turnover of soil organic matter.

  13. Carbon cycle confidence and uncertainty: Exploring variation among soil biogeochemical models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wieder, William R.; Hartman, Melannie D.; Sulman, Benjamin N.

    Emerging insights into factors responsible for soil organic matter stabilization and decomposition are being applied in a variety of contexts, but new tools are needed to facilitate the understanding, evaluation, and improvement of soil biogeochemical theory and models at regional to global scales. To isolate the effects of model structural uncertainty on the global distribution of soil carbon stocks and turnover times we developed a soil biogeochemical testbed that forces three different soil models with consistent climate and plant productivity inputs. The models tested here include a first-order, microbial implicit approach (CASA-CNP), and two recently developed microbially explicit models that can be run at global scales (MIMICS and CORPSE). When forced with common environmental drivers, the soil models generated similar estimates of initial soil carbon stocks (roughly 1,400 Pg C globally, 0–100 cm), but each model shows a different functional relationship between mean annual temperature and inferred turnover times. Subsequently, the models made divergent projections about the fate of these soil carbon stocks over the 20th century, with models either gaining or losing over 20 Pg C globally between 1901 and 2010. Single-forcing experiments with changed inputs, temperature, and moisture suggest that uncertainty associated with freeze-thaw processes as well as soil textural effects on soil carbon stabilization were larger than direct temperature uncertainties among models. Finally, the models generated distinct projections about the timing and magnitude of seasonal heterotrophic respiration rates, again reflecting structural uncertainties that were related to environmental sensitivities and assumptions about physicochemical stabilization of soil organic matter. Here, by providing a computationally tractable and numerically consistent framework to evaluate models we aim to better understand uncertainties among models and generate insights about factors regulating the turnover of soil organic matter.

  14. A dataset of future daily weather data for crop modelling over Europe derived from climate change scenarios

    NASA Astrophysics Data System (ADS)

    Duveiller, G.; Donatelli, M.; Fumagalli, D.; Zucchini, A.; Nelson, R.; Baruth, B.

    2017-02-01

    Coupled atmosphere-ocean general circulation models (GCMs) simulate different realizations of possible future climates at global scale under contrasting scenarios of land use and greenhouse gas emissions. Such data require several additional processing steps before they can be used to drive impact models. Spatial downscaling, typically by regional climate models (RCMs), and bias correction are two such steps that have already been addressed for Europe. Yet the errors in the resulting daily meteorological variables may be too large for specific model applications. Crop simulation models are particularly sensitive to these inconsistencies and thus require further processing of GCM-RCM outputs. Moreover, crop models are often run in a stochastic manner by using various plausible weather time series (often generated using stochastic weather generators) to represent the climate of a period of interest (e.g. 2000 ± 15 years), while GCM simulations typically provide a single time series for a given emission scenario. To inform agricultural policy-making, data on near- and medium-term decadal time scales are most often requested, e.g. 2020 or 2030. Taking a sample of multiple years from these unique time series to represent time horizons in the near future is particularly problematic because selecting overlapping years may lead to spurious trends, creating artefacts in the results of the impact model simulations. This paper presents a database of consolidated and coherent future daily weather data for Europe that addresses these problems. Input data consist of daily temperature and precipitation from three dynamically downscaled and bias-corrected regional climate simulations of the IPCC A1B emission scenario created within the ENSEMBLES project. Solar radiation is estimated from temperature based on an auto-calibration procedure. Wind speed and relative air humidity are taken from historical series. From these variables, reference evapotranspiration and vapour pressure deficit are estimated, ensuring consistency within daily records. The weather generator ClimGen is then used to create 30 synthetic years of all variables to characterize the time horizons of 2000, 2020 and 2030, which can readily be used for crop modelling studies.

  15. State estimation improves prospects for ocean research

    NASA Astrophysics Data System (ADS)

    Stammer, Detlef; Wunsch, C.; Fukumori, I.; Marshall, J.

    Rigorous global ocean state estimation methods can now be used to produce dynamically consistent time-varying model/data syntheses, the results of which are being used to study a variety of important scientific problems. Figure 1 shows a schematic of a complete ocean observing and synthesis system that includes global observations and state-of-the-art ocean general circulation models (OGCM) run on modern computer platforms. A global observing system is described in detail in Smith and Koblinsky [2001], and the present status of ocean modeling and anticipated improvements are addressed by Griffies et al. [2001]. Here, the focus is on the third component of state estimation: the synthesis of the observations and a model into a unified, dynamically consistent estimate.

  16. A leader-return-stroke consistent macroscopic model for calculations of return stroke current and its optical and electromagnetic emissions

    NASA Astrophysics Data System (ADS)

    Cai, Shuyao; Chen, Mingli; Du, Yaping; Qin, Zilong

    2017-08-01

    A downward lightning flash usually starts with a downward leader and an upward connecting leader, followed by an upward return stroke. It is the preceding leader that governs the properties of the subsequent return stroke; moreover, those properties evolve with height and time. Neither aspect, however, is well addressed in most existing return stroke models. In this paper, we present a leader-return stroke consistent model based on the time domain electric field integral equation, which extends and modifies Kumar's macroscopic model. The model is further extended to simulate the optical and electromagnetic emissions of a return stroke by introducing a set of equations relating the return stroke current and conductance to the optical and electromagnetic emissions. With a presumed leader initiation potential, the model can then simulate the temporal and spatial evolution of the current, charge transfer, channel size, and conductance of the return stroke, and furthermore its optical and electromagnetic emissions. The model is tested with different leader initiation potentials ranging from -10 to -140 MV, resulting in return stroke current peaks ranging from 2.6 to 209 kA, return stroke speed peaks ranging from 0.2 to 0.8 times the speed of light, and optical power peaks ranging from 4.76 to 248 MW/m. The larger the leader initiation potential, the larger the return stroke current and speed. Both the return stroke current and speed attenuate exponentially as the stroke propagates upward. All these results are qualitatively consistent with those reported in the literature.

  17. Theory for Transitions Between Exponential and Stationary Phases: Universal Laws for Lag Time

    NASA Astrophysics Data System (ADS)

    Himeoka, Yusuke; Kaneko, Kunihiko

    2017-04-01

    The quantitative characterization of bacterial growth has attracted substantial attention since Monod's pioneering study. Theoretical and experimental works have uncovered several laws for describing the exponential growth phase, in which the number of cells grows exponentially. However, microorganism growth also exhibits lag, stationary, and death phases under starvation conditions, in which cell growth is highly suppressed, and for these phases quantitative laws and theories are markedly underdeveloped. In fact, the models commonly adopted for the exponential phase, which consist of autocatalytic chemical components including ribosomes, can only show exponential growth or decay in a population; thus, phases that halt growth are not realized. Here, we propose a simple, coarse-grained cell model that includes an extra class of macromolecular components in addition to the autocatalytic active components that facilitate cellular growth. These extra components form a complex with the active components to inhibit the catalytic process. Depending on the nutrient condition, the model exhibits the typical transitions among the lag, exponential, stationary, and death phases. Furthermore, the lag time needed for growth recovery after starvation scales as the square root of the starvation time and is inversely related to the maximal growth rate. This is in agreement with experimental observations, in which the length of cell starvation is memorized in the slow accumulation of molecules. Moreover, the lag time distribution among cells is skewed with a long tail. If the starvation time is longer, an exponential tail appears, which is also consistent with experimental data. Our theory further predicts a strong dependence of lag time on the speed of substrate depletion, which can be tested experimentally. The present model and theoretical analysis provide universal growth laws beyond the exponential phase, offering insight into how cells halt growth without entering the death phase.
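
    Stated compactly, the two scalings reported above combine to (my paraphrase of the abstract's laws, not a formula quoted from the paper)

        T_{\mathrm{lag}} \;\propto\; \frac{\sqrt{T_{\mathrm{starve}}}}{\mu_{\max}},

    where T_starve is the starvation time and μ_max the maximal growth rate.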

  18. Extremal inversion of lunar travel time data. [seismic velocity structure

    NASA Technical Reports Server (NTRS)

    Burkhard, N.; Jackson, D. D.

    1975-01-01

    The tau method of travel-time inversion, developed by Bessonova et al. (1974), is applied to lunar P-wave travel time data to find limits on the velocity structure of the moon. Tau is the singular solution to the Clairaut equation. Models with low-velocity zones, with low-velocity zones at differing depths, and without low-velocity zones were all found to be consistent with the data and within the determined limits. Models with and without a discontinuity at about 25-km depth have been found which agree with all travel time data to within two standard deviations. In other words, the existence of the discontinuity and its size and location have not been uniquely resolved. Models with low-velocity channels are also possible.
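
    For context, the delay-time function at the heart of the method is the standard

        \tau(p) \;=\; T(p) - p\,X(p),

    where T is the travel time, X the epicentral distance, and p the ray parameter; bounds on τ(p) translate into the velocity-depth envelopes described above.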

  19. Task allocation model for minimization of completion time in distributed computer systems

    NASA Astrophysics Data System (ADS)

    Wang, Jai-Ping; Steidley, Carl W.

    1993-08-01

    A task in a distributed computing system consists of a set of related modules. Each of the modules executes on one of the processors of the system and communicates with some of the other modules. In addition, precedence relationships may exist among the modules. Task allocation is an essential activity in distributed-software design, and it is of importance to all phases of the development of a distributed system. This paper establishes task completion-time models and task allocation models for minimizing task completion time. Current work in this area is either at the experimental level or does not consider precedence relationships among modules. The development of mathematical models for the computation of task completion time and task allocation will benefit many real-time computer applications such as radar systems, navigation systems, industrial process control systems, image processing systems, and artificial-intelligence-oriented systems.

  20. Ecology of West Nile virus across four European countries: empirical modelling of the Culex pipiens abundance dynamics as a function of weather.

    PubMed

    Groen, Thomas A; L'Ambert, Gregory; Bellini, Romeo; Chaskopoulou, Alexandra; Petric, Dusan; Zgomba, Marija; Marrama, Laurence; Bicout, Dominique J

    2017-10-26

    Culex pipiens is the major vector of West Nile virus in Europe, causing frequent outbreaks throughout the southern part of the continent. Proper empirical modelling of the population dynamics of this species can help in understanding West Nile virus epidemiology and in optimizing vector surveillance and mosquito control efforts, but modelling results may differ from place to place. In this study we examine which types of models and weather variables can be used consistently across different locations. Weekly mosquito trap collections spanning several years from eight functional units located in France, Greece, Italy and Serbia were combined. Additionally, rainfall, relative humidity and temperature were recorded. Correlations between lagged weather conditions and Cx. pipiens dynamics were analysed. Seasonal autoregressive integrated moving-average (SARIMA) models were also fitted to describe the temporal dynamics of Cx. pipiens and to check whether the weather variables could improve these models. Correlations were strongest for mean temperature at short time lags, followed by relative humidity, most likely due to collinearity. Precipitation alone had weak correlations and inconsistent patterns across sites. SARIMA models could also make reasonable predictions, especially when longer time series of Cx. pipiens observations were available. Average temperature was a consistently good predictor across sites. When only short time series (~ < 4 years) of observations are available, average temperature can therefore be used to model Cx. pipiens dynamics. When longer time series (~ > 4 years) are available, SARIMAs can provide better statistical descriptions of Cx. pipiens dynamics without the need for further weather variables. This suggests that density dependence is also an important determinant of Cx. pipiens dynamics.

  1. A Longitudinal Investigation of Syndemic Conditions Among Young Gay, Bisexual, and Other MSM: The P18 Cohort Study.

    PubMed

    Halkitis, Perry N; Kapadia, Farzana; Bub, Kristen L; Barton, Staci; Moreira, Alvaro D; Stults, Christopher B

    2015-06-01

    The persistence of disparities in STI/HIV risk among a new generation of emerging adult gay, bisexual, and other young men who have sex with men (YMSM) warrants holistic frameworks and new methodologies for investigating the behaviors related to STI/HIV in this group. In order to better understand the continued existence of these disparities in STI/HIV risk among YMSM, the present study evaluated the presence and persistence of syndemic conditions among YMSM by examining the co-occurrence of alcohol and drug use, unprotected sexual behavior, and mental health burden over time. Four waves of data, collected over the first 18 months of a 7-wave, 36-month prospective cohort study of YMSM (n=600), were used to examine the extent to which measurement models of drug use, unprotected sexual behavior, and mental health burden remained consistent across time using latent class modeling. Health challenges persisted across time as these YMSM emerged into young adulthood, and the measurement models for the latent constructs of drug use and unprotected sexual behavior were essentially consistent across time, whereas the models for mental health burden varied over time. In addition to confirming the robustness of our measurement models, which capture a more holistic understanding of the health conditions of drug use, unprotected sex, and mental health burden, these findings underscore the ongoing health challenges YMSM face as they mature into young adulthood. These ongoing health challenges, which have been understood as forming a syndemic, persist over time and add further evidence to support ongoing, vigilant, and comprehensive health programming for sexual minority men that moves beyond a sole focus on HIV.

  2. A longitudinal investigation of syndemic conditions among young gay, bisexual, and other MSM: The P18 Cohort Study

    PubMed Central

    Halkitis, Perry N.; Kapadia, Farzana; Bub, Kristen L.; Barton, Staci; Moreira, Alvaro D.; Stults, Christopher B.

    2014-01-01

    The persistence of disparities in STI/HIV risk among a new generation of emerging adult gay, bisexual, and other young men who have sex with men (YMSM) warrants holistic frameworks and new methodologies for investigating the behaviors related to STI/HIV in this group. In order to better understand the continued existence of these disparities in STI/HIV risk among YMSM, the present study evaluated the presence and persistence of syndemic conditions among YMSM by examining the co-occurrence of alcohol and drug use, unprotected sexual behavior, and mental health burden over time. Four waves of data, collected over the first 18 months of a 7-wave, 36-month prospective cohort study of YMSM (n=598), were used to examine the extent to which measurement models of drug use, unprotected sexual behavior, and mental health burden remained consistent across time using latent class modeling. Health challenges persisted across time as these YMSM emerged into young adulthood, and the measurement models for the latent constructs of drug use and unprotected sexual behavior were essentially consistent across time, whereas the models for mental health burden varied over time. In addition to confirming the robustness of our measurement models, which capture a more holistic understanding of the health conditions of drug use, unprotected sex, and mental health burden, these findings underscore the ongoing health challenges YMSM face as they mature into young adulthood. These ongoing health challenges, which have been understood as forming a syndemic, persist over time and add further evidence to support ongoing, vigilant, and comprehensive health programming for sexual minority men that moves beyond a sole focus on HIV. PMID:25192900

  3. Reinforcement Learning Using a Continuous Time Actor-Critic Framework with Spiking Neurons

    PubMed Central

    Frémaux, Nicolas; Sprekeler, Henning; Gerstner, Wulfram

    2013-01-01

    Animals repeat rewarded behaviors, but the physiological basis of reward-based learning has only been partially elucidated. On one hand, experimental evidence shows that the neuromodulator dopamine carries information about rewards and affects synaptic plasticity. On the other hand, the theory of reinforcement learning provides a framework for reward-based learning. Recent models of reward-modulated spike-timing-dependent plasticity have made first steps towards bridging the gap between the two approaches, but faced two problems. First, reinforcement learning is typically formulated in a discrete framework, ill-adapted to the description of natural situations. Second, biologically plausible models of reward-modulated spike-timing-dependent plasticity require precise calculation of the reward prediction error, yet it remains to be shown how this can be computed by neurons. Here we propose a solution to these problems by extending the continuous temporal difference (TD) learning of Doya (2000) to the case of spiking neurons in an actor-critic network operating in continuous time, and with continuous state and action representations. In our model, the critic learns to predict expected future rewards in real time. Its activity, together with actual rewards, conditions the delivery of a neuromodulatory TD signal to itself and to the actor, which is responsible for action choice. In simulations, we show that such an architecture can solve a Morris water-maze-like navigation task, in a number of trials consistent with reported animal performance. We also use our model to solve the acrobot and the cartpole problems, two complex motor control tasks. Our model provides a plausible way of computing reward prediction error in the brain. Moreover, the analytically derived learning rule is consistent with experimental evidence for dopamine-modulated spike-timing-dependent plasticity. PMID:23592970

  4. Reinforcement learning using a continuous time actor-critic framework with spiking neurons.

    PubMed

    Frémaux, Nicolas; Sprekeler, Henning; Gerstner, Wulfram

    2013-04-01

    Animals repeat rewarded behaviors, but the physiological basis of reward-based learning has only been partially elucidated. On one hand, experimental evidence shows that the neuromodulator dopamine carries information about rewards and affects synaptic plasticity. On the other hand, the theory of reinforcement learning provides a framework for reward-based learning. Recent models of reward-modulated spike-timing-dependent plasticity have made first steps towards bridging the gap between the two approaches, but faced two problems. First, reinforcement learning is typically formulated in a discrete framework, ill-adapted to the description of natural situations. Second, biologically plausible models of reward-modulated spike-timing-dependent plasticity require precise calculation of the reward prediction error, yet it remains to be shown how this can be computed by neurons. Here we propose a solution to these problems by extending the continuous temporal difference (TD) learning of Doya (2000) to the case of spiking neurons in an actor-critic network operating in continuous time, and with continuous state and action representations. In our model, the critic learns to predict expected future rewards in real time. Its activity, together with actual rewards, conditions the delivery of a neuromodulatory TD signal to itself and to the actor, which is responsible for action choice. In simulations, we show that such an architecture can solve a Morris water-maze-like navigation task, in a number of trials consistent with reported animal performance. We also use our model to solve the acrobot and the cartpole problems, two complex motor control tasks. Our model provides a plausible way of computing reward prediction error in the brain. Moreover, the analytically derived learning rule is consistent with experimental evidence for dopamine-modulated spike-timing-dependent plasticity.
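    The core quantity in the continuous-time TD scheme referenced above (Doya, 2000) can be stated compactly. Below is a minimal numerical sketch of the continuous TD error, assuming the standard form δ(t) = r(t) − V(t)/τ + dV/dt; the variable names and the toy value trace are illustrative, not taken from the paper:

        # Minimal sketch of a continuous-time TD error (assumed standard form,
        # not the authors' spiking implementation).
        import numpy as np

        def continuous_td_error(r, V, dt, tau=1.0):
            """TD error for sampled reward r(t) and value trace V(t)."""
            dVdt = np.gradient(V, dt)      # numerical time derivative of the value
            return r - V / tau + dVdt      # discounting with time constant tau

        # toy usage: a value trace that correctly anticipates a late reward
        dt = 0.01
        t = np.arange(0.0, 2.0, dt)
        r = (np.abs(t - 1.5) < dt / 2).astype(float) / dt   # impulse reward at t = 1.5 s
        tau = 0.5
        V = np.exp(-(1.5 - t) / tau) * (t <= 1.5)           # ideal exponential value ramp
        delta = continuous_td_error(r, V, dt, tau)          # ~0 except at the reward edge

    For an ideal value trace, dV/dt cancels V/τ exactly and the TD error vanishes away from the reward, which is the self-consistency property the critic is trained toward.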

  5. Entanglement in a model for Hawking radiation: An application of quadratic algebras

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bambah, Bindu A., E-mail: bbsp@uohyd.ernet.in; Mukku, C., E-mail: mukku@iiit.ac.in; Shreecharan, T., E-mail: shreecharan@gmail.com

    2013-03-15

    Quadratic polynomially deformed su(1,1) and su(2) algebras are utilized in model Hamiltonians to show how the gravitational system consisting of a black hole, infalling radiation and outgoing (Hawking) radiation can be solved exactly. The models allow us to study the long-time behaviour of the black hole and its outgoing modes. In particular, we calculate the bipartite entanglement entropies of subsystems consisting of (a) infalling plus outgoing modes and (b) black hole modes plus the infalling modes, using the Janus-faced nature of the model. The long-time behaviour also gives us glimpses of modifications in the character of Hawking radiation. Finally, we study the phenomenon of superradiance in our model in analogy with atomic Dicke superradiance.
    Highlights:
    • We examine a toy model for Hawking radiation with quantized black hole modes.
    • We use quadratic polynomially deformed su(1,1) algebras to study its entanglement properties.
    • We study the 'Dicke Superradiance' in black hole radiation using quadratically deformed su(2) algebras.
    • We study the modification of the thermal character of Hawking radiation due to quantized black hole modes.
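    For readers unfamiliar with the algebraic machinery, a quadratic polynomial deformation of su(1,1) generically takes the following schematic form (assumed generic structure; the coefficients in the paper depend on the specific model Hamiltonian):

        % Generic quadratically deformed su(1,1) relations (schematic):
        [K_{0}, K_{\pm}] = \pm K_{\pm},
        \qquad
        [K_{+}, K_{-}] = a\,K_{0}^{2} + b\,K_{0} + c ,
        % which reduces to ordinary su(1,1) for a = 0, b = -2, c = 0.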

  6. A Two-Stage Process Model of Sensory Discrimination: An Alternative to Drift-Diffusion

    PubMed Central

    Landy, Michael S.

    2016-01-01

    Discrimination of the direction of motion of a noisy stimulus is an example of sensory discrimination under uncertainty. For stimuli that are extended in time, reaction time is quicker for larger signal values (e.g., discrimination of opposite directions of motion compared with neighboring orientations) and larger signal strength (e.g., stimuli with higher contrast or motion coherence, that is, lower noise). The standard model of neural responses (e.g., in lateral intraparietal cortex) and reaction time for discrimination is drift-diffusion. This model makes two clear predictions. (1) The effects of signal strength and value on reaction time should interact multiplicatively because the diffusion process depends on the signal-to-noise ratio. (2) If the diffusion process is interrupted, as in a cued-response task, the time to decision after the cue should be independent of the strength of accumulated sensory evidence. In two experiments with human participants, we show that neither prediction holds. A simple alternative model is developed that is consistent with the results. In this estimate-then-decide model, evidence is accumulated until estimation precision reaches a threshold value. Then, a decision is made with duration that depends on the signal-to-noise ratio achieved by the first stage. SIGNIFICANCE STATEMENT Sensory decision-making under uncertainty is usually modeled as the slow accumulation of noisy sensory evidence until a threshold amount of evidence supporting one of the possible decision outcomes is reached. Furthermore, it has been suggested that this accumulation process is reflected in neural responses, e.g., in lateral intraparietal cortex. We derive two behavioral predictions of this model and show that neither prediction holds. We introduce a simple alternative model in which evidence is accumulated until a sufficiently precise estimate of the stimulus is achieved, and then that estimate is used to guide the discrimination decision. This model is consistent with the behavioral data. PMID:27807167

  7. A Two-Stage Process Model of Sensory Discrimination: An Alternative to Drift-Diffusion.

    PubMed

    Sun, Peng; Landy, Michael S

    2016-11-02

    Discrimination of the direction of motion of a noisy stimulus is an example of sensory discrimination under uncertainty. For stimuli that are extended in time, reaction time is quicker for larger signal values (e.g., discrimination of opposite directions of motion compared with neighboring orientations) and larger signal strength (e.g., stimuli with higher contrast or motion coherence, that is, lower noise). The standard model of neural responses (e.g., in lateral intraparietal cortex) and reaction time for discrimination is drift-diffusion. This model makes two clear predictions. (1) The effects of signal strength and value on reaction time should interact multiplicatively because the diffusion process depends on the signal-to-noise ratio. (2) If the diffusion process is interrupted, as in a cued-response task, the time to decision after the cue should be independent of the strength of accumulated sensory evidence. In two experiments with human participants, we show that neither prediction holds. A simple alternative model is developed that is consistent with the results. In this estimate-then-decide model, evidence is accumulated until estimation precision reaches a threshold value. Then, a decision is made with duration that depends on the signal-to-noise ratio achieved by the first stage. Sensory decision-making under uncertainty is usually modeled as the slow accumulation of noisy sensory evidence until a threshold amount of evidence supporting one of the possible decision outcomes is reached. Furthermore, it has been suggested that this accumulation process is reflected in neural responses, e.g., in lateral intraparietal cortex. We derive two behavioral predictions of this model and show that neither prediction holds. We introduce a simple alternative model in which evidence is accumulated until a sufficiently precise estimate of the stimulus is achieved, and then that estimate is used to guide the discrimination decision. This model is consistent with the behavioral data. Copyright © 2016 the authors.
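    The estimate-then-decide scheme described in these two abstracts can be caricatured in a few lines. The sketch below is an assumed toy implementation, not the authors' model: sampling continues until the precision of the running estimate crosses a criterion, and a second-stage decision time shrinks with the achieved signal-to-noise ratio.

        # Toy "estimate-then-decide" reaction-time model (illustrative only).
        import numpy as np

        rng = np.random.default_rng(0)

        def estimate_then_decide(signal, noise_sd=1.0, precision_crit=25.0,
                                 t_decide0=0.3, dt=0.01):
            n, total = 0, 0.0
            while n / noise_sd**2 < precision_crit:    # precision of the mean = n/sigma^2
                total += rng.normal(signal, noise_sd)  # one more sensory sample
                n += 1
            estimate = total / n
            snr = abs(estimate) * np.sqrt(n) / noise_sd  # achieved signal-to-noise ratio
            rt = n * dt + t_decide0 / max(snr, 1e-6)     # stage-2 time falls with SNR
            return rt, np.sign(estimate)

        rts = [estimate_then_decide(0.5)[0] for _ in range(1000)]

    Unlike drift-diffusion, the stage-1 stopping rule here depends on precision rather than accumulated evidence, so interrupting the accumulation still leaves a decision duration that varies with the quality of the estimate, the behavior the experiments above report.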

  8. One-month validation of the Space Weather Modeling Framework geospace model

    NASA Astrophysics Data System (ADS)

    Haiducek, J. D.; Welling, D. T.; Ganushkina, N. Y.; Morley, S.; Ozturk, D. S.

    2017-12-01

    The Space Weather Modeling Framework (SWMF) geospace model consists of a magnetohydrodynamic (MHD) simulation coupled to an inner magnetosphere model and an ionosphere model. This provides a predictive capability for magnetopsheric dynamics, including ground-based and space-based magnetic fields, geomagnetic indices, currents and densities throughout the magnetosphere, cross-polar cap potential, and magnetopause and bow shock locations. The only inputs are solar wind parameters and F10.7 radio flux. We have conducted a rigorous validation effort consisting of a continuous simulation covering the month of January, 2005 using three different model configurations. This provides a relatively large dataset for assessment of the model's predictive capabilities. We find that the model does an excellent job of predicting the Sym-H index, and performs well at predicting Kp and CPCP during active times. Dayside magnetopause and bow shock positions are also well predicted. The model tends to over-predict Kp and CPCP during quiet times and under-predicts the magnitude of AL during disturbances. The model under-predicts the magnitude of night-side geosynchronous Bz, and over-predicts the radial distance to the flank magnetopause and bow shock. This suggests that the model over-predicts stretching of the magnetotail and the overall size of the magnetotail. With the exception of the AL index and the nightside geosynchronous magnetic field, we find the results to be insensitive to grid resolution.

  9. Hot as You Like It: Models of the Long-term Temperature History of Earth Under Different Geological Assumptions

    NASA Astrophysics Data System (ADS)

    Domagal-Goldman, S.; Sheldon, N. D.

    2012-12-01

    The long-term temperature history of the Earth is a subject of continued, vigorous debate. Past models of the climate of early Earth that utilize paleosol constraints on carbon dioxide struggle to maintain temperatures significantly greater than 0°C. In these models, the incoming stellar radiation is much lower than today, consistent with an expectation that the Sun was significantly fainter at that time. In contrast to these models, many proxies for ancient temperatures suggest much warmer conditions. The surface of the planet seems to have been generally free of glaciers throughout this period, other than a brief glaciation at ~2.9 billion years ago and extensive glaciation at ~2.4 billion years ago. Such glacier-free conditions suggest mean surface temperatures greater than 15°C. Measurements of oxygen isotopes in phosphates are consistent with temperatures in the range of 20-30°C; and similar measurements in cherts suggest temperatures over 50°C. This sets up a paradox. Models constrained by one set of geological proxies cannot reproduce the warm temperatures consistent with another set of geological proxies. In this presentation, we explore several potential resolutions to this paradox. First, we model the early Earth under modern-day conditions, but with the lower solar luminosity expected at the time. The next simulation allows carbon dioxide concentrations to increase up to the limits provided by paleosol constraints. Next, we lower the planet's surface albedo in a manner consistent with greater ocean coverage prior to the complete growth of continents. Finally, we remove all constraints on carbon dioxide and attempt to maximize surface temperatures without any geological constraints on model parameters. This set of experiments will allow us to set up potential resolutions to the paradox, and to drive a conversation on which solutions are capable of incorporating the greatest number of geological and geochemical constraints.
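    The zeroth-order arithmetic behind this faint-young-Sun tension can be made explicit with a zero-dimensional energy balance (a standard textbook form, not the authors' climate model; the 0.8 luminosity factor is the usual Archean assumption):

        % Zero-dimensional energy balance with reduced early solar luminosity:
        \sigma T_e^4 \;=\; \frac{S_0\,(1-\alpha)}{4}\,\frac{L(t)}{L_\odot},
        \qquad
        T_s \;=\; T_e + \Delta T_{\mathrm{greenhouse}} .
        % With L(t)/L_\odot \approx 0.8, \alpha = 0.3, and S_0 = 1361 W m^{-2},
        % T_e \approx 241 K, roughly 14 K below the modern 255 K, so a frozen
        % surface follows unless the albedo is lower or the greenhouse term larger.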

  10. Prospectively Evaluating the Collaboratory for the Study of Earthquake Predictability: An Evaluation of the UCERF2 and Updated Five-Year RELM Forecasts

    NASA Astrophysics Data System (ADS)

    Strader, Anne; Schneider, Max; Schorlemmer, Danijel; Liukis, Maria

    2016-04-01

    The Collaboratory for the Study of Earthquake Predictability (CSEP) was developed to rigorously test earthquake forecasts retrospectively and prospectively through reproducible, completely transparent experiments within a controlled environment (Zechar et al., 2010). During 2006-2011, thirteen five-year time-invariant prospective earthquake mainshock forecasts developed by the Regional Earthquake Likelihood Models (RELM) working group were evaluated through the CSEP testing center (Schorlemmer and Gerstenberger, 2007). The number, spatial, and magnitude components of the forecasts were compared to the respective observed seismicity components using a set of consistency tests (Schorlemmer et al., 2007, Zechar et al., 2010). In the initial experiment, all but three forecast models passed every test at the 95% significance level, with all forecasts displaying consistent log-likelihoods (L-test) and magnitude distributions (M-test) with the observed seismicity. In the ten-year RELM experiment update, we reevaluate these earthquake forecasts over an eight-year period from 2008-2016, to determine the consistency of previous likelihood testing results over longer time intervals. Additionally, we test the Uniform California Earthquake Rupture Forecast (UCERF2), developed by the U.S. Geological Survey (USGS), and the earthquake rate model developed by the California Geological Survey (CGS) and the USGS for the National Seismic Hazard Mapping Program (NSHMP) against the RELM forecasts. Both the UCERF2 and NSHMP forecasts pass all consistency tests, though the Helmstetter et al. (2007) and Shen et al. (2007) models exhibit greater information gain per earthquake according to the T- and W- tests (Rhoades et al., 2011). Though all but three RELM forecasts pass the spatial likelihood test (S-test), multiple forecasts fail the M-test due to overprediction of the number of earthquakes during the target period. Though there is no significant difference between the UCERF2 and NSHMP models, residual scores show that the NSHMP model is preferred in locations with earthquake occurrence, due to the lower seismicity rates forecasted by the UCERF2 model.
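    As an illustration of the consistency-test machinery cited above (Schorlemmer et al., 2007; Zechar et al., 2010), the number (N-) test reduces to two Poisson tail probabilities. A minimal sketch with illustrative numbers, assuming the standard Poisson form of the test:

        # CSEP-style N-test sketch: the forecast's total expected rate is
        # treated as a Poisson mean and compared with the observed count.
        from scipy.stats import poisson

        def n_test(n_observed, rate_forecast):
            """Return the two one-sided Poisson quantile scores (delta1, delta2)."""
            delta1 = 1.0 - poisson.cdf(n_observed - 1, rate_forecast)  # P(N >= n_obs)
            delta2 = poisson.cdf(n_observed, rate_forecast)            # P(N <= n_obs)
            return delta1, delta2

        # usage: a forecast expecting 40.0 events when only 24 occurred
        d1, d2 = n_test(24, 40.0)   # small d2 -> the forecast over-predicted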

  11. Supernova Driving. IV. The Star-formation Rate of Molecular Clouds

    NASA Astrophysics Data System (ADS)

    Padoan, Paolo; Haugbølle, Troels; Nordlund, Åke; Frimann, Søren

    2017-05-01

    We compute the star-formation rate (SFR) in molecular clouds (MCs) that originate ab initio in a new, higher-resolution simulation of supernova-driven turbulence. Because of the large number of well-resolved clouds with self-consistent boundary and initial conditions, we obtain a large range of cloud physical parameters with realistic statistical distributions, which is an unprecedented sample of star-forming regions to test SFR models and to interpret observational surveys. We confirm the dependence of the SFR per free-fall time, SFR_ff, on the virial parameter, α_vir, found in previous simulations, and compare a revised version of our turbulent fragmentation model with the numerical results. The dependences on Mach number, M, gas to magnetic pressure ratio, β, and compressive to solenoidal power ratio, χ, at fixed α_vir are not well constrained, because of random scatter due to time and cloud-to-cloud variations in SFR_ff. We find that SFR_ff in MCs can take any value in the range 0 ≤ SFR_ff ≲ 0.2, and its probability distribution peaks at a value of SFR_ff ≈ 0.025, consistent with observations. The values of SFR_ff and the scatter in the SFR_ff-α_vir relation are consistent with recent measurements in nearby MCs and in clouds near the Galactic center. Although not explicitly modeled by the theory, the scatter is consistent with the physical assumptions of our revised model and may also result in part from a lack of statistical equilibrium of the turbulence, due to the transient nature of MCs.
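    For reference, the quantities compared in this abstract are conventionally defined as follows (standard definitions; the revised fragmentation model itself is not reproduced here):

        % SFR per free-fall time, free-fall time, and virial parameter:
        \mathrm{SFR}_{\mathrm{ff}} \;=\; \frac{\dot{M}_\star}{M_{\mathrm{cloud}}}\, t_{\mathrm{ff}},
        \qquad
        t_{\mathrm{ff}} \;=\; \sqrt{\frac{3\pi}{32\,G\,\bar{\rho}}},
        \qquad
        \alpha_{\mathrm{vir}} \;=\; \frac{5\,\sigma_v^{2}\,R}{G\,M_{\mathrm{cloud}}} .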

  12. Viscosity, relaxation time, and dynamics within a model asphalt of larger molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Derek D.; Greenfield, Michael L., E-mail: greenfield@egr.uri.edu

    2014-01-21

    The dynamic properties of a new “next generation” model asphalt system that represents SHRP AAA-1 asphalt using larger molecules than past models are studied using molecular simulation. The system contains 72 molecules distributed over 12 molecule types that range from nonpolar branched alkanes to polar resins and asphaltenes. Molecular weights range from 290 to 890 g/mol. All-atom molecular dynamics simulations conducted at six temperatures from 298.15 to 533.15 K provide a wealth of correlation data. The modified Kohlrausch-Williams-Watts equation was regressed to reorientation time correlation functions and extrapolated to calculate average rotational relaxation times for individual molecules. The rotational relaxation rate of molecules decreased significantly with increasing size and decreasing temperature. Translational self-diffusion coefficients followed an Arrhenius dependence. Similar activation energies of ∼42 kJ/mol were found for all 12 molecules in the model system, while diffusion prefactors spanned an order of magnitude. Viscosities calculated directly at 533.15 K and estimated at lower temperatures using the Debye-Stokes-Einstein relationship were consistent with experimental data for asphalts. The product of diffusion coefficient and rotational relaxation time showed only small changes with temperature above 358.15 K, indicating rotation and translation that couple self-consistently with viscosity. At lower temperatures, rotation slowed more than diffusion.
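    The regression step described above can be sketched with a generic stretched-exponential (KWW) fit; the "modified" variant used in the paper may differ in detail, and the synthetic data here are purely illustrative:

        # Generic KWW fit: C(t) = A * exp(-(t/tau)**beta), with the average
        # relaxation time <tau> = (tau/beta) * Gamma(1/beta).
        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.special import gamma

        def kww(t, A, tau, beta):
            return A * np.exp(-(t / tau) ** beta)

        # synthetic data standing in for a reorientation time correlation function
        t = np.linspace(0.01, 50.0, 200)
        data = kww(t, 1.0, 5.0, 0.6) + 0.01 * np.random.default_rng(1).normal(size=t.size)

        (A, tau, beta), _ = curve_fit(kww, t, data, p0=(1.0, 1.0, 0.8))
        tau_avg = (tau / beta) * gamma(1.0 / beta)   # average rotational relaxation time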

  13. The effects of noise on binocular rivalry waves: a stochastic neural field model

    NASA Astrophysics Data System (ADS)

    Webber, Matthew A.; Bressloff, Paul C.

    2013-03-01

    We analyze the effects of extrinsic noise on traveling waves of visual perception in a competitive neural field model of binocular rivalry. The model consists of two one-dimensional excitatory neural fields, whose activity variables represent the responses to left-eye and right-eye stimuli, respectively. The two networks mutually inhibit each other, and slow adaptation is incorporated into the model by taking the network connections to exhibit synaptic depression. We first show how, in the absence of any noise, the system supports a propagating composite wave consisting of an invading activity front in one network co-moving with a retreating front in the other network. Using a separation of time scales and perturbation methods previously developed for stochastic reaction-diffusion equations, we then show how extrinsic noise in the activity variables leads to a diffusive-like displacement (wandering) of the composite wave from its uniformly translating position at long time scales, and fluctuations in the wave profile around its instantaneous position at short time scales. We use our analysis to calculate the first-passage-time distribution for a stochastic rivalry wave to travel a fixed distance, which we find to be given by an inverse Gaussian. Finally, we investigate the effects of noise in the depression variables, which under an adiabatic approximation lead to quenched disorder in the neural fields during propagation of a wave.
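    The inverse Gaussian first-passage-time result quoted above has the following standard form for a wave position with drift c and effective diffusivity D (assumed parametrization):

        % First-passage-time density for travelling a fixed distance X:
        f(T) \;=\; \frac{X}{\sqrt{4\pi D T^{3}}}\;
        \exp\!\left[-\,\frac{(X - cT)^{2}}{4 D T}\right],
        % i.e. the wandering of the wave is diffusive about uniform translation.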

  14. Infrasound Predictions Using the Weather Research and Forecasting Model: Atmospheric Green's Functions for the Source Physics Experiments 1-6.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poppeliers, Christian; Aur, Katherine Anderson; Preston, Leiph

    This report shows the results of constructing predictive atmospheric models for the Source Physics Experiments 1-6. Historic atmospheric data are combined with topography to construct an atmospheric model that corresponds to the predicted (or actual) time of a given SPE event. The models are ultimately used to construct atmospheric Green's functions to be used for subsequent analysis. We present three atmospheric models for each SPE event: an average model based on ten one-hour snapshots of the atmosphere and two extrema models corresponding to the warmest, coolest, windiest, etc. atmospheric snapshots. The atmospheric snapshots consist of wind, temperature, and pressure profiles of the atmosphere for a one-hour time window centered at the time of the predicted SPE event, as well as nine additional snapshots for each of the nine preceding years, centered at the time and day of the SPE event.

  15. CTPPL: A Continuous Time Probabilistic Programming Language

    DTIC Science & Technology

    2009-07-01

    In recent years there has been a flurry of interest in continuous time models, mostly focused on continuous time Bayesian networks (CTBNs) [Nodelman, 2007... CTBNs are built on homogeneous Markov processes. A homogeneous Markov process is a finite state, continuous time process, consisting of an initial... q1 : xn()] ... Some state transitions can produce emissions. In a CTBN, each variable has a conditional intensity matrix Qu for every combination of...

  16. Discretized Streams: A Fault-Tolerant Model for Scalable Stream Processing

    DTIC Science & Technology

    2012-12-14

    Discretized Streams: A Fault-Tolerant Model for Scalable Stream Processing. Matei Zaharia, Tathagata Das, Haoyuan Li, Timothy Hunter, Scott Shenker, Ion... time. However, current programming models for distributed stream processing are relatively low-level, often leaving the user to worry about consistency of...

  17. Starobinsky-like inflation and neutrino masses in a no-scale SO(10) model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ellis, John; Theoretical Physics Department, CERN, CH-1211 Geneva 23; Garcia, Marcos A.G.

    2016-11-08

    Using a no-scale supergravity framework, we construct an SO(10) model that makes predictions for cosmic microwave background observables similar to those of the Starobinsky model of inflation, and incorporates a double-seesaw model for neutrino masses consistent with oscillation experiments and late-time cosmology. We pay particular attention to the behaviour of the scalar fields during inflation and the subsequent reheating.

  18. Regression analysis of sparse asynchronous longitudinal data.

    PubMed

    Cao, Hongyuan; Zeng, Donglin; Fine, Jason P

    2015-09-01

    We consider estimation of regression models for sparse asynchronous longitudinal observations, where time-dependent responses and covariates are observed intermittently within subjects. Unlike with synchronous data, where the response and covariates are observed at the same time point, with asynchronous data, the observation times are mismatched. Simple kernel-weighted estimating equations are proposed for generalized linear models with either time invariant or time-dependent coefficients under smoothness assumptions for the covariate processes which are similar to those for synchronous data. For models with either time invariant or time-dependent coefficients, the estimators are consistent and asymptotically normal but converge at slower rates than those achieved with synchronous data. Simulation studies show that the methods perform well with realistic sample sizes and may be superior to a naive application of methods for synchronous data based on an ad hoc last value carried forward approach. The practical utility of the methods is illustrated on data from a study on human immunodeficiency virus.
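    A toy version of a kernel-weighted estimating equation for asynchronous data, specialized to a scalar time-invariant coefficient and identity link, is sketched below; it is a simplification of the estimator class described above, and all names and data are illustrative:

        # Kernel-weighted estimating equation, scalar coefficient, identity link.
        import numpy as np

        def kernel_wls(t_resp, y, t_cov, x, h):
            """Solve sum_{j,k} K_h(t_j - s_k) x_k (y_j - x_k*beta) = 0 for beta."""
            K = np.exp(-0.5 * ((t_resp[:, None] - t_cov[None, :]) / h) ** 2)  # Gaussian kernel
            num = np.sum(K * (y[:, None] * x[None, :]))
            den = np.sum(K * (x[None, :] ** 2))
            return num / den

        # one subject's mismatched observation times (illustrative data)
        rng = np.random.default_rng(2)
        s = np.sort(rng.uniform(0, 1, 15)); x = rng.normal(size=15)     # covariate times
        t = np.sort(rng.uniform(0, 1, 10))                              # response times
        y = 2.0 * np.interp(t, s, x) + 0.1 * rng.normal(size=10)        # true beta = 2
        beta_hat = kernel_wls(t, y, s, x, h=0.1)

    The kernel downweights response-covariate pairs that are far apart in time, which is how the mismatch between observation grids is handled without interpolation or last-value-carried-forward imputation.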

  19. New directions: Time for a new approach to modeling surface-atmosphere exchanges in air quality models?

    NASA Astrophysics Data System (ADS)

    Saylor, Rick D.; Hicks, Bruce B.

    2016-03-01

    Just as the exchange of heat, moisture and momentum between the Earth's surface and the atmosphere are critical components of meteorological and climate models, the surface-atmosphere exchange of many trace gases and aerosol particles is a vitally important process in air quality (AQ) models. Current state-of-the-art AQ models treat the emission and deposition of most gases and particles as separate model parameterizations, even though evidence has accumulated over time that the emission and deposition processes of many constituents are often two sides of the same coin, with the upward (emission) or downward (deposition) flux over a landscape depending on a range of environmental, seasonal and biological variables. In this note we argue that the time has come to integrate the treatment of these processes in AQ models to provide biological, physical and chemical consistency and improved predictions of trace gases and particles.

  20. Assessment of hemoglobin responsiveness to epoetin alfa in patients on hemodialysis using a population pharmacokinetic pharmacodynamic model.

    PubMed

    Wu, Liviawati; Mould, Diane R; Perez Ruixo, Juan Jose; Doshi, Sameer

    2015-10-01

    A population pharmacokinetic pharmacodynamic (PK/PD) model describing the effect of epoetin alfa on hemoglobin (Hb) response in hemodialysis patients was developed. Epoetin alfa pharmacokinetics was described using a linear 2-compartment model. PK parameter estimates were similar to previously reported values. A maturation-structured cytokinetic model consisting of 5 compartments linked in a catenary fashion by first-order cell transfer rates following a zero-order input process described the Hb time course. The PD model described 2 subpopulations, one whose Hb response reflected epoetin alfa dosing and a second whose response was unrelated to epoetin alfa dosing. Parameter estimates from the PK/PD model were physiologically reasonable and consistent with published reports. Numerical and visual predictive checks using data from 2 studies were performed. The PK and PD of epoetin alfa were well described by the model. © 2015, The American College of Clinical Pharmacology.
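    The PK side of such an analysis rests on a standard linear two-compartment model, which can be written and solved in a few lines; the parameter values below are illustrative placeholders, not the paper's estimates:

        # Generic linear two-compartment PK model (IV bolus), standard form.
        import numpy as np
        from scipy.integrate import odeint

        def two_cmt(y, t, CL, V1, Q, V2):
            c1, c2 = y[0] / V1, y[1] / V2        # concentrations in each compartment
            dA1 = -CL * c1 - Q * c1 + Q * c2     # central: elimination + exchange
            dA2 = Q * c1 - Q * c2                # peripheral: exchange only
            return [dA1, dA2]

        t = np.linspace(0, 48, 200)              # hours after dosing
        A = odeint(two_cmt, [4000.0, 0.0], t, args=(0.5, 5.0, 1.0, 10.0))
        conc_central = A[:, 0] / 5.0             # central concentration profile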

  1. Immortal time bias in observational studies of time-to-event outcomes.

    PubMed

    Jones, Mark; Fowler, Robert

    2016-12-01

    The purpose of the study is to show, through simulation and example, the magnitude and direction of immortal time bias when an inappropriate analysis is used. We compare 4 methods of analysis for observational studies of time-to-event outcomes: logistic regression, standard Cox model, landmark analysis, and time-dependent Cox model, using an example data set of patients critically ill with influenza and a simulation study. For the example data set, logistic regression, the standard Cox model, and landmark analysis all showed some evidence that treatment with oseltamivir provides protection from mortality in patients critically ill with influenza. However, when the time-dependent nature of treatment exposure is taken account of using a time-dependent Cox model, there is no longer evidence of a protective effect of treatment. The simulation study showed that, under various scenarios, the time-dependent Cox model consistently provides unbiased treatment effect estimates, whereas the standard Cox model leads to bias in favor of treatment. Logistic regression and landmark analysis may also lead to bias. To minimize the risk of immortal time bias in observational studies of survival outcomes, we strongly suggest time-dependent exposures be included as time-dependent variables in hazard-based analyses. Copyright © 2016 Elsevier Inc. All rights reserved.
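    The recommended analysis, exposure entered as a time-dependent covariate in a hazard model, can be sketched with the lifelines library (assuming its CoxTimeVaryingFitter interface; the synthetic data below encode a null treatment effect, so a correct analysis should return a coefficient near zero):

        # Time-dependent Cox model on long-format (start, stop] intervals.
        # Splitting follow-up at the treatment start time keeps pre-treatment
        # person-time classified as unexposed, avoiding immortal time bias.
        import numpy as np
        import pandas as pd
        from lifelines import CoxTimeVaryingFitter

        rng = np.random.default_rng(0)
        rows = []
        for i in range(200):
            t_treat = rng.uniform(0, 5)       # treatment starts some time after entry
            t_event = rng.exponential(10)     # outcome time, unrelated to treatment here
            if t_event <= t_treat:            # event occurs before treatment could start
                rows.append((i, 0.0, t_event, 0, 1))
            else:                             # split follow-up at treatment start
                rows.append((i, 0.0, t_treat, 0, 0))
                rows.append((i, t_treat, t_event, 1, 1))
        df = pd.DataFrame(rows, columns=["id", "start", "stop", "treated", "event"])

        ctv = CoxTimeVaryingFitter()
        ctv.fit(df, id_col="id", event_col="event", start_col="start", stop_col="stop")
        print(ctv.summary)   # 'treated' coefficient ~ 0 when analysed correctly

    A naive "ever treated" analysis of the same data would misattribute the pre-treatment survival time to the treated group and show a spurious protective effect, which is exactly the bias the abstract describes.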

  2. Detecting and modelling delayed density-dependence in abundance time series of a small mammal (Didelphis aurita)

    NASA Astrophysics Data System (ADS)

    Brigatti, E.; Vieira, M. V.; Kajin, M.; Almeida, P. J. A. L.; de Menezes, M. A.; Cerqueira, R.

    2016-02-01

    We study the population size time series of a Neotropical small mammal with the intent of detecting and modelling population regulation processes generated by density-dependent factors and their possible delayed effects. The application of analysis tools based on principles of statistical generality is nowadays common practice for describing these phenomena, but, in general, such tools are more capable of generating a clear diagnosis than of providing valuable models. For this reason, in our approach, we detect the principal temporal structures on the basis of different correlation measures, and from these results we build an ad-hoc minimalist autoregressive model that incorporates the main drivers of the dynamics. Surprisingly, our model is capable of reproducing very well the time patterns of the empirical series and, for the first time, clearly outlines the importance of the time of attaining sexual maturity as a central temporal scale for the dynamics of this species. In fact, an important advantage of this analysis scheme is that all the model parameters are directly biologically interpretable and potentially measurable, allowing a consistency check between model outputs and independent measurements.
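    A minimal autoregressive model of the kind described, log-abundance with direct and delayed density dependence, might look as follows; the coefficients are illustrative, with the delay d playing the role of the time to sexual maturity:

        # Toy AR model with delayed density dependence (illustrative only).
        import numpy as np

        def simulate_delayed_dd(a0, a1, a2, d, sigma, n, rng):
            """x_t = a0 + a1*x_{t-1} + a2*x_{t-d} + noise, x = log population size."""
            x = np.zeros(n)
            for t in range(max(1, d), n):
                x[t] = a0 + a1 * x[t - 1] + a2 * x[t - d] + rng.normal(0.0, sigma)
            return x

        rng = np.random.default_rng(3)
        x = simulate_delayed_dd(a0=0.5, a1=0.7, a2=-0.2, d=4, sigma=0.1, n=200, rng=rng)
        # a2 < 0 is the delayed (negative) density-dependent feedback; its lag d
        # is directly interpretable as a biological time scale, as argued above.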

  3. Nonlinear optimal control policies for buoyancy-driven flows in the built environment

    NASA Astrophysics Data System (ADS)

    Nabi, Saleh; Grover, Piyush; Caulfield, Colm

    2017-11-01

    We consider optimal control of turbulent buoyancy-driven flows in the built environment, focusing on a model test case of displacement ventilation with a time-varying heat source. The flow is modeled using the unsteady Reynolds-averaged equations (URANS). To understand the stratification dynamics better, we derive a low-order partial-mixing ODE model extending the buoyancy-driven emptying filling box problem to the case where both the heat source and the (controlled) inlet flow are time-varying. In the limit of a single step-change in the heat source strength, our model is consistent with that of Bower et al. Our model considers the dynamics of both `filling' and `intruding' added layers due to a time-varying source and inlet flow. A nonlinear direct-adjoint-looping optimal control formulation yields time-varying values of temperature and velocity of the inlet flow that lead to `optimal' time-averaged temperature relative to appropriate objective functionals in a region of interest.

  4. Baryon octet electromagnetic form factors in a confining NJL model

    DOE PAGES

    Carrillo-Serrano, Manuel E.; Bentz, Wolfgang; Cloet, Ian C.; ...

    2016-05-25

    Electromagnetic form factors of the baryon octet are studied using a Nambu–Jona-Lasinio model which utilizes the proper-time regularization scheme to simulate aspects of colour confinement. In addition, the model also incorporates corrections to the dressed quarks from vector meson correlations in the t-channel and the pion cloud. Here, comparison with recent chiral extrapolations of lattice QCD results shows a remarkable level of consistency. For the charge radii we find the surprising result that r_E^p < r_E^{Σ+} and |r_E^n| < |r_E^{Ξ0}|, whereas the magnetic radii have a pattern largely consistent with a naive expectation based on the dressed quark masses.

  5. Unsteady Aerodynamic Interaction in a Closely Coupled Turbine Consistent with Contra-Rotation

    DTIC Science & Technology

    2014-08-01

    Data on the blade required three instrumentation patches due to slip ring channel limitations. TRF blowdowns designated as experiments 280100... measurements from sensors on the rotating hardware due to slip ring limitations. The experimental data were compared to time-accurate simulations modeling...

  6. GPU-based real-time soft tissue deformation with cutting and haptic feedback.

    PubMed

    Courtecuisse, Hadrien; Jung, Hoeryong; Allard, Jérémie; Duriez, Christian; Lee, Doo Yong; Cotin, Stéphane

    2010-12-01

    This article describes a series of contributions in the field of real-time simulation of soft tissue biomechanics. These contributions address various requirements for interactive simulation of complex surgical procedures. In particular, this article presents results in the areas of soft tissue deformation, contact modelling, simulation of cutting, and haptic rendering, which are all relevant to a variety of medical interventions. The contributions described in this article share a common underlying model of deformation and rely on GPU implementations to significantly improve computation times. This consistency in the modelling technique and computational approach ensures coherent results as well as efficient, robust and flexible solutions. Copyright © 2010 Elsevier Ltd. All rights reserved.

  7. Self-consistent molecular dynamics calculation of diffusion in higher n-alkanes.

    PubMed

    Kondratyuk, Nikolay D; Norman, Genri E; Stegailov, Vladimir V

    2016-11-28

    Diffusion is one of the key subjects of molecular modeling and simulation studies. However, there is an unresolved lack of consistency between Einstein-Smoluchowski (E-S) and Green-Kubo (G-K) methods for diffusion coefficient calculations in systems of complex molecules. In this paper, we analyze this problem for the case of liquid n-triacontane. The non-conventional long-time tails of the velocity autocorrelation function (VACF) are found for this system. The temperature dependence of the VACF tail decay exponent is determined. The proper inclusion of the long-time tail contributions to the diffusion coefficient calculation results in the consistency between G-K and E-S methods. Having considered the major factors influencing the precision of the diffusion rate calculations in comparison with experimental data (system size effects and force field parameters), we point to hydrogen nuclear quantum effects as, presumably, the last obstacle to a fully consistent n-alkane description.
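    The two routes to the diffusion coefficient whose consistency is at issue are, in their standard three-dimensional forms:

        % Einstein-Smoluchowski (mean-squared displacement) and Green-Kubo (VACF):
        D_{\mathrm{E\text{-}S}} \;=\; \lim_{t\to\infty}\frac{\langle |\mathbf{r}(t)-\mathbf{r}(0)|^{2}\rangle}{6t},
        \qquad
        D_{\mathrm{G\text{-}K}} \;=\; \frac{1}{3}\int_{0}^{\infty}\!\langle \mathbf{v}(0)\cdot\mathbf{v}(t)\rangle\,dt .
        % A slowly decaying VACF tail still contributes to the integral; truncating
        % it biases the G-K value low relative to E-S, which is the inconsistency
        % resolved in the paper above.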

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glascoe, Lee; Gowardhan, Akshay; Lennox, Kristin

    In the interest of promoting the international exchange of technical expertise, the US Department of Energy’s Office of Emergency Operations (NA-40) and the French Commissariat à l'Energie Atomique et aux énergies alternatives (CEA) requested that the National Atmospheric Release Advisory Center (NARAC) of Lawrence Livermore National Laboratory (LLNL) in Livermore, California host a joint tabletop exercise with experts in emergency management and atmospheric transport modeling. In this tabletop exercise, LLNL and CEA compared each other’s flow and dispersion models. The goal of the comparison is to facilitate the exchange of knowledge, capabilities, and practices, and to demonstrate the utility of modeling dispersal at different levels of computational fidelity. Two modeling approaches were examined, a regional scale modeling approach, appropriate for simple terrain and/or very large releases, and an urban scale modeling approach, appropriate for small releases in a city environment. This report is a summary of LLNL and CEA modeling efforts from this exercise. Two different types of LLNL and CEA models were employed in the analysis: urban-scale models (Aeolus CFD at LLNL/NARAC and Parallel-Micro-SWIFT-SPRAY (PMSS) at CEA) for analysis of a 5,000 Ci radiological release and Lagrangian Particle Dispersion Models (LODI at LLNL/NARAC and PSPRAY at CEA) for analysis of a much larger (500,000 Ci) regional radiological release. Two densely populated urban locations were chosen: Chicago with its high-rise skyline and gridded street network and Paris with its more consistent, lower building height and complex unaligned street network. Each location was considered under early summer daytime and nighttime conditions. Different levels of fidelity were chosen for each scale: (1) lower-fidelity mass-consistent diagnostic, intermediate-fidelity Navier-Stokes RANS models, and higher-fidelity Navier-Stokes LES for urban-scale analysis, and (2) lower-fidelity single-profile meteorology versus higher-fidelity three-dimensional gridded weather forecast for regional-scale analysis. Tradeoffs between computation time and the fidelity of the results are discussed for both scales. LES, for example, requires nearly 100 times more processor time than the mass-consistent diagnostic model or the RANS model, and seems better able to capture flow entrainment behind tall buildings. As anticipated, results obtained by LLNL and CEA at regional scale around Chicago and Paris look very similar in terms of both atmospheric dispersion of the radiological release and total effective dose. Both LLNL and CEA used the same meteorological data, Lagrangian particle dispersion models, and the same dose coefficients. LLNL and CEA urban-scale modeling results show consistent phenomenological behavior and predict similar impacted areas even though the detailed 3D flow patterns differ, particularly for the Chicago cases where differences in vertical entrainment behind tall buildings are particularly notable. Although RANS and LES (LLNL) models incorporate more detailed physics than do mass-consistent diagnostic flow models (CEA), it is not possible to reach definite conclusions about the prediction fidelity of the various models as experimental measurements were not available for comparison. Stronger conclusions about the relative performances of the models involved and evaluation of the tradeoffs involved in model simplification could be made with a systematic benchmarking of urban-scale modeling. This could be the purpose of a future US/French collaborative exercise.

  9. Wide-band profile domain pulsar timing analysis

    NASA Astrophysics Data System (ADS)

    Lentati, L.; Kerr, M.; Dai, S.; Hobson, M. P.; Shannon, R. M.; Hobbs, G.; Bailes, M.; Bhat, N. D. Ramesh; Burke-Spolaor, S.; Coles, W.; Dempsey, J.; Lasky, P. D.; Levin, Y.; Manchester, R. N.; Osłowski, S.; Ravi, V.; Reardon, D. J.; Rosado, P. A.; Spiewak, R.; van Straten, W.; Toomey, L.; Wang, J.; Wen, L.; You, X.; Zhu, X.

    2017-04-01

    We extend profile domain pulsar timing to incorporate wide-band effects such as frequency-dependent profile evolution and broad-band shape variation in the pulse profile. We also incorporate models for temporal variations in both pulse width and in the separation in phase of the main pulse and interpulse. We perform the analysis with both nested sampling and Hamiltonian Monte Carlo methods. In the latter case, we introduce a new parametrization of the posterior that is extremely efficient in the low signal-to-noise regime and can be readily applied to a wide range of scientific problems. We apply this methodology to a series of simulations, and to between seven and nine years of observations for PSRs J1713+0747, J1744-1134 and J1909-3744 with frequency coverage that spans 700-3600 MHz. We use a smooth model for profile evolution across the full frequency range, and compare smooth and piecewise models for the temporal variations in dispersion measure (DM). We find that the profile domain framework consistently results in improved timing precision compared to the standard analysis paradigm by as much as 40 per cent for timing parameters. Incorporating smoothness in the DM variations into the model further improves timing precision by as much as 30 per cent. For PSR J1713+0747, we also detect pulse shape variation uncorrelated between epochs, which we attribute to variation intrinsic to the pulsar at a level consistent with previously published analyses. Not accounting for this shape variation biases the measured arrival times at the level of ~30 ns, the same order of magnitude as the expected shift due to gravitational waves in the pulsar timing band.

  10. Research Vitality as Sustained Excellence: What Keeps the Plates Spinning?

    ERIC Educational Resources Information Center

    Gilstrap, J. Bruce; Harvey, Jaron; Novicevic, Milorad M.; Buckley, M. Ronald

    2011-01-01

    Purpose: Research vitality addresses the perseverance that faculty members in the organization sciences experience in maintaining their research quantity and quality over an extended period of time. The purpose of this paper is to offer a theoretical model of research vitality. Design/methodology/approach: The authors propose a model consisting of…

  11. Towards a fully self-consistent inversion combining historical and paleomagnetic data for geomagnetic field reconstructions

    NASA Astrophysics Data System (ADS)

    Arneitz, P.; Leonhardt, R.; Fabian, K.; Egli, R.

    2017-12-01

    Historical and paleomagnetic data are the two main sources of information about the long-term geomagnetic field evolution. Historical observations extend to the late Middle Ages, and prior to the 19th century, they consisted mainly of pure declination measurements from navigation and orientation logs. Field reconstructions going back further in time rely solely on magnetization acquired by rocks, sediments, and archaeological artefacts. The combined dataset is characterized by a strongly inhomogeneous spatio-temporal distribution and highly variable data reliability and quality. Therefore, an adequate weighting of the data that correctly accounts for data density, type, and realistic error estimates represents the major challenge for an inversion approach. Until now, there has not been a fully self-consistent geomagnetic model that correctly recovers the variation of the geomagnetic dipole together with the higher-order spherical harmonics. Here we present a new geomagnetic field model for the last 4 kyrs based on historical, archeomagnetic and volcanic records. The iterative Bayesian inversion approach targets the implementation of reliable error treatment, which allows different record types to be combined in a fully self-consistent way. Modelling results will be presented along with a thorough analysis of model limitations, validity and sensitivity.

  12. Dynamics in a one-dimensional ferrogel model: relaxation, pairing, shock-wave propagation.

    PubMed

    Goh, Segun; Menzel, Andreas M; Löwen, Hartmut

    2018-05-23

    Ferrogels are smart soft materials, consisting of a polymeric network and embedded magnetic particles. Novel phenomena, such as the variation of the overall mechanical properties by external magnetic fields, emerge as a consequence. However, the dynamic behavior of ferrogels remains largely unexplored. In this paper, we consider a one-dimensional chain consisting of magnetic dipoles and elastic springs between them as a simple model for ferrogels. The model is evaluated by corresponding simulations. To probe the dynamics theoretically, we investigate a continuum limit of the energy governing the system and the corresponding equation of motion. We provide general classification scenarios for the dynamics, elucidating the touching/detachment dynamics of the magnetic particles along the chain. In particular, it is verified in certain cases that the long-time relaxation corresponds to solutions of shock-wave propagation, while the formation of particle pairs underlies the initial stage of the dynamics. We expect that these results will provide insight into the understanding of the dynamics of more realistic models with randomness in parameters and time-dependent magnetic fields.
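    A toy energy function for the dipole-spring chain described above, in reduced units with illustrative prefactors (not the paper's parametrization), shows where the pairing tendency comes from:

        # Dipole-spring chain energy: harmonic springs between neighbours plus
        # attractive head-to-tail dipole-dipole coupling -2 m^2 / r^3 (reduced units).
        import numpy as np

        def chain_energy(x, k=1.0, ell=1.0, m=0.5):
            x = np.sort(np.asarray(x, dtype=float))
            springs = 0.5 * k * np.sum((np.diff(x) - ell) ** 2)
            r = np.abs(x[:, None] - x[None, :])          # all pair distances
            iu = np.triu_indices(len(x), k=1)            # unique pairs only
            dipoles = np.sum(-2.0 * m**2 / r[iu] ** 3)   # aligned dipoles on a line
            return springs + dipoles

        x0 = np.arange(10) * 1.0      # ten particles at unit spacing
        E = chain_energy(x0)
        # Increasing m strengthens the attraction and can make touching (particle
        # pairing) energetically favourable, the regime analysed in the paper.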

  13. Impact of the time scale of model sensitivity response on coupled model parameter estimation

    NASA Astrophysics Data System (ADS)

    Liu, Chang; Zhang, Shaoqing; Li, Shan; Liu, Zhengyu

    2017-11-01

    That a model has sensitivity responses to parameter uncertainties is a key concept in implementing model parameter estimation using filtering theory and methodology. Depending on the nature of associated physics and characteristic variability of the fluid in a coupled system, the response time scales of a model to parameters can be different, from hourly to decadal. Unlike state estimation, where the update frequency is usually linked with observational frequency, the update frequency for parameter estimation must be associated with the time scale of the model sensitivity response to the parameter being estimated. Here, with a simple coupled model, the impact of model sensitivity response time scales on coupled model parameter estimation is studied. The model includes characteristic synoptic to decadal scales by coupling a long-term varying deep ocean with a slow-varying upper ocean forced by a chaotic atmosphere. Results show that, using the update frequency determined by the model sensitivity response time scale, both the reliability and quality of parameter estimation can be improved significantly, and thus the estimated parameters make the model more consistent with the observation. These simple model results provide a guideline for when real observations are used to optimize the parameters in a coupled general circulation model for improving climate analysis and prediction initialization.

  14. Matching time and spatial scales of rapid solidification: dynamic TEM experiments coupled to CALPHAD-informed phase-field simulations

    NASA Astrophysics Data System (ADS)

    Perron, Aurelien; Roehling, John D.; Turchi, Patrice E. A.; Fattebert, Jean-Luc; McKeown, Joseph T.

    2018-01-01

    A combination of dynamic transmission electron microscopy (DTEM) experiments and CALPHAD-informed phase-field simulations was used to study rapid solidification in Cu-Ni thin-film alloys. Experiments—conducted in the DTEM—consisted of in situ laser melting and determination of the solidification kinetics by monitoring the solid-liquid interface and the overall microstructure evolution (time-resolved measurements) during the solidification process. Modelling of the Cu-Ni alloy microstructure evolution was based on a phase-field model that included realistic Gibbs energies and diffusion coefficients from the CALPHAD framework (thermodynamic and mobility databases). DTEM and post mortem experiments highlighted the formation of microsegregation-free columnar grains with interface velocities varying from ˜0.1 to ˜0.6 m s-1. After an ‘incubation’ time, the velocity of the planar solid-liquid interface accelerated until solidification was complete. In addition, a decrease of the temperature gradient induced a decrease in the interface velocity. The modelling strategy permitted the simulation (in 1D and 2D) of the solidification process from the initially diffusion-controlled to the nearly partitionless regimes. Finally, results of DTEM experiments and phase-field simulations (grain morphology, solute distribution, and solid-liquid interface velocity) were consistent at similar time (μs) and spatial scales (μm).

  15. Matching time and spatial scales of rapid solidification: Dynamic TEM experiments coupled to CALPHAD-informed phase-field simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perron, Aurelien; Roehling, John D.; Turchi, Patrice E. A.

    A combination of dynamic transmission electron microscopy (DTEM) experiments and CALPHAD-informed phase-field simulations was used to study rapid solidification in Cu–Ni thin-film alloys. Experiments—conducted in the DTEM—consisted of in situ laser melting and determination of the solidification kinetics by monitoring the solid–liquid interface and the overall microstructure evolution (time-resolved measurements) during the solidification process. Modelling of the Cu–Ni alloy microstructure evolution was based on a phase-field model that included realistic Gibbs energies and diffusion coefficients from the CALPHAD framework (thermodynamic and mobility databases). DTEM and post mortem experiments highlighted the formation of microsegregation-free columnar grains with interface velocities varying frommore » ~0.1 to ~0.6 m s –1. After an 'incubation' time, the velocity of the planar solid–liquid interface accelerated until solidification was complete. In addition, a decrease of the temperature gradient induced a decrease in the interface velocity. The modelling strategy permitted the simulation (in 1D and 2D) of the solidification process from the initially diffusion-controlled to the nearly partitionless regimes. Lastly, results of DTEM experiments and phase-field simulations (grain morphology, solute distribution, and solid–liquid interface velocity) were consistent at similar time (μs) and spatial scales (μm).« less

  16. Diversification rates have declined in the Malagasy herpetofauna

    PubMed Central

    Scantlebury, Daniel P.

    2013-01-01

    The evolutionary origins of Madagascar's biodiversity remain mysterious despite the fact that relative to land area, there is no other place with consistently high levels of species richness and endemism across a range of taxonomic levels. Most efforts to explain diversification on the island have focused on geographical models of speciation, but recent studies have begun to address the island's accumulation of species through time, although with conflicting results. Prevailing hypotheses for diversification on the island involve either constant diversification rates or scenarios where rates decline through time. Using relative-time-calibrated phylogenies for seven endemic vertebrate clades and a model-fitting framework, I find evidence that diversification rates have declined through time on Madagascar. I show that diversification rates have clearly declined throughout the history of each clade, and models invoking diversity-dependent reductions to diversification rates best explain the diversification histories for each clade. These results are consistent with the ecological theory of adaptive radiation, and, coupled with ancillary observations about ecomorphological and life-history evolution, strongly suggest that adaptive radiation was an important formative process for one of the most species-rich regions on the Earth. These results cast the Malagasy biota in a new light and provide macroevolutionary justification for conservation initiatives. PMID:23843388
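    The diversity-dependent models favoured by these fits are commonly parametrized with a linearly declining speciation rate (a common form; the paper's exact parametrization may differ):

        % Diversity-dependent speciation with constant extinction:
        \lambda(N) \;=\; \max\!\left\{0,\;\lambda_{0}\left(1-\frac{N}{K}\right)\right\},
        \qquad
        \mu(N) \;=\; \mu_{0},
        % so the net diversification rate \lambda(N) - \mu_0 declines as standing
        % species richness N approaches the ecological limit K.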

  17. Matching time and spatial scales of rapid solidification: Dynamic TEM experiments coupled to CALPHAD-informed phase-field simulations

    DOE PAGES

    Perron, Aurelien; Roehling, John D.; Turchi, Patrice E. A.; ...

    2017-12-05

    A combination of dynamic transmission electron microscopy (DTEM) experiments and CALPHAD-informed phase-field simulations was used to study rapid solidification in Cu–Ni thin-film alloys. Experiments—conducted in the DTEM—consisted of in situ laser melting and determination of the solidification kinetics by monitoring the solid–liquid interface and the overall microstructure evolution (time-resolved measurements) during the solidification process. Modelling of the Cu–Ni alloy microstructure evolution was based on a phase-field model that included realistic Gibbs energies and diffusion coefficients from the CALPHAD framework (thermodynamic and mobility databases). DTEM and post mortem experiments highlighted the formation of microsegregation-free columnar grains with interface velocities varying frommore » ~0.1 to ~0.6 m s –1. After an 'incubation' time, the velocity of the planar solid–liquid interface accelerated until solidification was complete. In addition, a decrease of the temperature gradient induced a decrease in the interface velocity. The modelling strategy permitted the simulation (in 1D and 2D) of the solidification process from the initially diffusion-controlled to the nearly partitionless regimes. Lastly, results of DTEM experiments and phase-field simulations (grain morphology, solute distribution, and solid–liquid interface velocity) were consistent at similar time (μs) and spatial scales (μm).« less

  18. Animal Study on Primary Dysmenorrhoea Treatment at Different Administration Times

    PubMed Central

    Pu, Bao-Chan; Fang, Ling; Gao, Li-Na; Liu, Rui; Li, Ai-zhu

    2015-01-01

    The new methods of different administration times for the treatment of primary dysmenorrhea are more widely used clinically; however, no obvious mechanism has been reported. Therefore, an animal model which is closer to clinical evaluation is indispensable. A novel animal experiment with different administration times, based on the mice oestrous cycle, for primary dysmenorrhoea treatment was explored in this study. Mice were randomly divided into two parts (one-cycle and three-cycle part) and each part includes five groups (12 mice per group), namely, Jingqian Zhitong Fang (JQF) 6-day group, JQF last 3-day group, Yuanhu Zhitong tablet group, model control group, and normal control group. According to the one-way ANOVAs, results (writhing reaction, and PGF2α, PGE2, NO, and calcium ions analysis by ELISA) of the JQF cycle group were in accordance with those of the JQF last 3-day group. Similarly, results of three-cycle continuous administration were consistent with those of one-cycle treatment. In conclusion, the consistency of the experimental results illustrated that the novel animal model based on the mice oestrous cycle with different administration times is more reasonable and feasible and can be used to explore the in-depth mechanism of drugs for the treatment of primary dysmenorrhoea in the future. PMID:25705236

  19. A Simulation Model Of A Picture Archival And Communication System

    NASA Astrophysics Data System (ADS)

    D'Silva, Vijay; Perros, Harry; Stockbridge, Chris

    1988-06-01

    A PACS architecture was simulated to quantify its performance. The model consisted of reading stations, acquisition nodes, communication links, a database management system, and a storage system consisting of magnetic and optical disks. Two levels of storage were simulated, a high-speed magnetic disk system for short term storage, and optical disk jukeboxes for long term storage. The communications link was a single bus via which image data were requested and delivered. Real input data to the simulation model were obtained from surveys of radiology procedures (Bowman Gray School of Medicine). From these the following inputs were calculated: the size of short term storage necessary, the amount of long term storage required, the frequency of access of each store, and the distribution of the number of films requested per diagnosis. The performance measures obtained were the mean retrieval time for an image, mean queue lengths, and the utilization of each device. Parametric analysis was done for the bus speed, the packet size for the communications link, the record size on the magnetic disk, compression ratio, influx of new images, DBMS time, and diagnosis think times. Plots give the values of input speed and device performance sufficient to achieve subsecond image retrieval times.
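    A back-of-envelope check in the spirit of this simulation treats the shared communications link as a single queue; the M/M/1 formula and the numbers below are illustrative assumptions, not the paper's results:

        # Mean image retrieval time over a shared bus, assumed M/M/1 behaviour.
        def mm1_response_time(service_time_s, utilization):
            """Mean time in an M/M/1 queue: W = S / (1 - rho)."""
            if not 0 <= utilization < 1:
                raise ValueError("utilization must be in [0, 1)")
            return service_time_s / (1.0 - utilization)

        # a 2 MB compressed image on a 40 Mbit/s bus takes ~0.4 s to transfer;
        # at 60% bus utilization the mean retrieval time is ~1 s.
        print(mm1_response_time(0.4, 0.6))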

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skillman, Evan D.; Hidalgo, Sebastian L.; Monelli, Matteo

    We present an analysis of the star formation history (SFH) of a field near the half-light radius in the Local Group dwarf irregular galaxy IC 1613 based on deep Hubble Space Telescope Advanced Camera for Surveys imaging. Our observations reach the oldest main sequence turn-off, allowing a time resolution at the oldest ages of ∼1 Gyr. Our analysis shows that the SFH of the observed field in IC 1613 is consistent with being constant over the entire lifetime of the galaxy. These observations rule out an early dominant episode of star formation in IC 1613. We compare the SFH of IC 1613 with expectations from cosmological models. Since most of the mass is in place at early times for low-mass halos, a naive expectation is that most of the star formation should have taken place at early times. Models in which star formation follows mass accretion result in too many stars formed early and gas mass fractions that are too low today (the 'over-cooling problem'). The depth of the present photometry of IC 1613 shows that, at a resolution of ∼1 Gyr, the star formation rate is consistent with being constant, at even the earliest times, which is difficult to achieve in models where star formation follows mass assembly.

  1. Rise time and response measurements on a LiSOCl2 cell

    NASA Technical Reports Server (NTRS)

    Bastien, Caroline; Lecomte, Eric J.

    1992-01-01

    Dynamic impedance tests were performed on a 180 Ah LiSOCl2 cell within the framework of a short-term work contract awarded by Aerospatiale as part of the Hermes Space Plane development work. These tests consisted of rise time and response measurements. The rise time test was performed to show the ability to deliver 4 kW, in the nominal voltage range (75-115 V), within less than 100 microseconds, and after a period at rest of 13 days. The response measurements consisted of step response and frequency response tests. The frequency response test was performed to characterize the response of the LiSOCl2 cell to a positive or negative load step of 10 A starting from various currents. The test was performed for various depths of discharge and various temperatures. The test results were used to build a mathematical, electrical model of the LiSOCl2 cell, which is also presented. The test description, test results, electrical model description, and conclusions are presented.

  2. Timing performance of the silicon PET insert probe

    PubMed Central

    Studen, A.; Burdette, D.; Chesi, E.; Cindro, V.; Clinthorne, N. H.; Cochran, E.; Grošičar, B.; Kagan, H.; Lacasta, C.; Linhart, V.; Mikuž, M.; Stankova, V.; Weilhammer, P.; Žontar, D.

    2010-01-01

    Simulation indicates that PET images could be improved by upgrading a conventional ring with a probe placed close to the imaged object. In this paper, timing issues related to a PET probe using high-resistivity silicon as a detector material are addressed. The final probe will consist of several (four to eight) 1-mm thick layers of silicon detectors, segmented into 1 × 1 mm2 pads, each pad equivalent to an independent p+nn+ diode. A proper matching of events in silicon with events of the external ring can be achieved with a good timing resolution. To estimate the timing performance, measurements were performed on a simplified model probe, consisting of a single 1-mm thick detector with 256 square pads (1.4 mm side), coupled with two VATAGP7 application-specific integrated circuits. The detector material and electronics are the same that will be used for the final probe. The model was exposed to 511 keV annihilation photons from a 22Na source, and a scintillator (LYSO)–PMT assembly was used as a timing reference. Results were compared with the simulation, consisting of four parts: (i) GEANT4-implemented realistic tracking of electrons excited by annihilation photon interactions in silicon, (ii) calculation of the propagation of secondary ionisation (electron–hole pairs) in the sensor, (iii) estimation of the shape of the current pulse induced on the surface electrodes and (iv) simulation of the first electronics stage. A very good agreement between the simulation and the measurements was found. Both indicate reliable performance of the final probe at timing windows down to 20 ns. PMID:20215445

  4. Analysis of longitudinal marginal structural models.

    PubMed

    Bryan, Jenny; Yu, Zhuo; Van Der Laan, Mark J

    2004-07-01

    In this article we construct and study estimators of the causal effect of a time-dependent treatment on survival in longitudinal studies. We employ a particular marginal structural model (MSM), proposed by Robins (2000), and follow a general methodology for constructing estimating functions in censored data models. The inverse probability of treatment weighted (IPTW) estimator of Robins et al. (2000) is used as an initial estimator and forms the basis for an improved, one-step estimator that is consistent and asymptotically linear when the treatment mechanism is consistently estimated. We extend these methods to handle informative censoring. The proposed methodology is employed to estimate the causal effect of exercise on mortality in a longitudinal study of seniors in Sonoma County. A simulation study demonstrates the bias of naive estimators in the presence of time-dependent confounders and also shows the efficiency gain of the IPTW estimator, even in the absence of such confounding. The efficiency gain of the improved, one-step estimator is demonstrated through simulation.
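
    To make the weighting idea concrete, below is a minimal sketch of stabilized inverse-probability-of-treatment weights for a binary time-varying treatment. All variable names and the pooled logistic treatment model are illustrative assumptions; the authors' estimator also handles informative censoring, which is omitted here.

        # Sketch: stabilized IPTW weights (assumed setup, not the paper's exact code).
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n, T = 500, 4                                        # subjects, time points
        L = rng.normal(size=(n, T))                          # time-varying confounder
        A = (L + rng.normal(size=(n, T)) > 0).astype(int)    # binary treatment

        weights = np.ones(n)
        for t in range(T):
            # denominator: P(A_t | L_t), a stand-in for the full confounder history
            fit = LogisticRegression().fit(L[:, [t]], A[:, t])
            p1 = fit.predict_proba(L[:, [t]])[:, 1]
            den = np.where(A[:, t] == 1, p1, 1 - p1)
            # numerator: marginal P(A_t), which stabilizes the weights
            p_marg = A[:, t].mean()
            num = np.where(A[:, t] == 1, p_marg, 1 - p_marg)
            weights *= num / den

        print(weights.mean())   # stabilized weights should average close to 1

    The reweighted sample then feeds the MSM outcome stage, where regression on the weighted data removes the measured time-dependent confounding.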

  5. Tests of neutrino interaction models with the MicroBooNE detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rafique, Aleena

    2018-01-01

    I measure a large set of observables in inclusive charged current muon neutrino scattering on argon with the MicroBooNE liquid argon time projection chamber operating at Fermilab. I evaluate three neutrino interaction models based on the widely used GENIE event generator using these observables. The measurement uses a data set consisting of neutrino interactions with a final state muon candidate fully contained within the MicroBooNE detector. These data were collected in 2016 with the Fermilab Booster Neutrino Beam, which has an average neutrino energy of 800 MeV, using an exposure corresponding to 5.0×10^19 protons-on-target. The analysis employs fully automatic event selection and charged particle track reconstruction and uses a data-driven technique to separate neutrino interactions from cosmic ray background events. I find that GENIE models consistently describe the shapes of a large number of kinematic distributions for fixed observed multiplicity, but I show an indication that the observed multiplicity fractions deviate from GENIE expectations.

  6. Activated desorption at heterogeneous interfaces and long-time kinetics of hydrocarbon recovery from nanoporous media

    PubMed Central

    Lee, Thomas; Bocquet, Lydéric; Coasne, Benoit

    2016-01-01

    Hydrocarbon recovery from unconventional reservoirs (shale gas) is debated due to its environmental impact and uncertainties in its predictability, yet a lack of scientific knowledge impedes the proposal of reliable alternatives. The requirement of hydrofracking, fast recovery decay and ultra-low permeability—inherent to their nanoporosity—are specificities of these reservoirs, which challenge existing frameworks. Here we use molecular simulation and statistical models to show that recovery is hampered by interfacial effects at the wet kerogen surface. Recovery is shown to be thermally activated, with an energy barrier modelled from the interface wetting properties. We build a statistical model of the recovery kinetics with a two-regime decline that is consistent with published data: a short-time decay consistent with the Darcy description, followed by a fast algebraic decay resulting from increasingly unreachable energy barriers. Replacing water by CO2 or propane eliminates the barriers, therefore raising hopes for clean and efficient recovery. PMID:27327254

  7. A stochastically forced time delay solar dynamo model: Self-consistent recovery from a maunder-like grand minimum necessitates a mean-field alpha effect

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hazra, Soumitra; Nandy, Dibyendu; Passos, Dário, E-mail: s.hazra@iiserkol.ac.in, E-mail: dariopassos@ist.utl.pt, E-mail: dnandi@iiserkol.ac.in

    Fluctuations in the Sun's magnetic activity, including episodes of grand minima such as the Maunder minimum, have important consequences for space and planetary environments. However, the underlying dynamics of such extreme fluctuations remain ill-understood. Here, we use a novel mathematical model based on stochastically forced, non-linear delay differential equations to study solar cycle fluctuations in which time delays capture the physics of magnetic flux transport between spatially segregated dynamo source regions in the solar interior. Using this model, we explicitly demonstrate that the Babcock-Leighton poloidal field source based on dispersal of tilted bipolar sunspot flux, alone, cannot recover the sunspot cycle from a grand minimum. We find that an additional poloidal field source effective on weak fields—e.g., the mean-field α effect driven by helical turbulence—is necessary for self-consistent recovery of the sunspot cycle from grand minima episodes.

  8. Self-Consistent and Time-Dependent Solar Wind Models

    NASA Technical Reports Server (NTRS)

    Ong, K. K.; Musielak, Z. E.; Rosner, R.; Suess, S. T.; Sulkanen, M. E.

    1997-01-01

    We describe the first results from a self-consistent study of Alfven waves for the time-dependent, single-fluid magnetohydrodynamic (MHD) solar wind equations, using a modified version of the ZEUS MHD code. The wind models we examine are radially symmetric and magnetized; the initial outflow is described by the standard Parker wind solution. Our study focuses on the effects of Alfven waves on the outflow and is based on solving the full set of the ideal nonlinear MHD equations. In contrast to previous studies, no assumptions regarding wave linearity, wave damping, and wave-flow interaction are made; thus, the models naturally account for the back-reaction of the wind on the waves, as well as for the nonlinear interaction between different types of MHD waves. Our results clearly demonstrate when momentum deposition by Alfven waves in the solar wind can be sufficient to explain the origin of fast streams in solar coronal holes; we discuss the range of wave amplitudes required to obtain such fast-stream solutions.

  9. Semi-automatic motion compensation of contrast-enhanced ultrasound images from abdominal organs for perfusion analysis.

    PubMed

    Schäfer, Sebastian; Nylund, Kim; Sævik, Fredrik; Engjom, Trond; Mézl, Martin; Jiřík, Radovan; Dimcevski, Georg; Gilja, Odd Helge; Tönnies, Klaus

    2015-08-01

    This paper presents a system for correcting motion influences in time-dependent 2D contrast-enhanced ultrasound (CEUS) images to assess tissue perfusion characteristics. The system consists of a semi-automatic frame selection method to find images with out-of-plane motion as well as a method for automatic motion compensation. Translational and non-rigid motion compensation is applied by introducing a temporal continuity assumption. A study of 40 clinical datasets was conducted to compare the measured perfusion with perfusion simulated using pharmacokinetic modeling. Overall, the proposed approach decreased the mean average difference between the measured perfusion and the pharmacokinetic model estimation. It was non-inferior to a manual approach for three out of four patient cohorts and reduced the analysis time by 41% compared to manual processing.

  10. Cortical Spiking Network Interfaced with Virtual Musculoskeletal Arm and Robotic Arm.

    PubMed

    Dura-Bernal, Salvador; Zhou, Xianlian; Neymotin, Samuel A; Przekwas, Andrzej; Francis, Joseph T; Lytton, William W

    2015-01-01

    Embedding computational models in the physical world is a critical step towards constraining their behavior and building practical applications. Here we aim to drive a realistic musculoskeletal arm model using a biomimetic cortical spiking model, and make a robot arm reproduce the same trajectories in real time. Our cortical model consisted of a 3-layered cortex, composed of several hundred spiking model-neurons, which display physiologically realistic dynamics. We interconnected the cortical model to a two-joint musculoskeletal model of a human arm, with realistic anatomical and biomechanical properties. The virtual arm received muscle excitations from the neuronal model, and fed back proprioceptive information, forming a closed-loop system. The cortical model was trained using spike-timing-dependent reinforcement learning to drive the virtual arm in a 2D reaching task. Limb position was used to simultaneously control a robot arm using an improved network interface. Virtual arm muscle activations responded to motoneuron firing rates, with virtual arm muscle lengths encoded via population coding in the proprioceptive population. After training, the virtual arm performed reaching movements which were smoother and more realistic than those obtained using a simplistic arm model. This system provided access to both spiking network properties and to arm biophysical properties, including muscle forces. The use of a musculoskeletal virtual arm and the improved control system allowed the robot arm to perform movements which were smoother than those reported in our previous paper using a simplistic arm. This work provides a novel approach consisting of bidirectionally connecting a cortical model to a realistic virtual arm, and using the system output to drive a robotic arm in real time. Our techniques are applicable to the future development of brain neuroprosthetic control systems, and may enable enhanced brain-machine interfaces with the possibility for finer control of limb prosthetics.

  11. Exploring the optimal economic timing for crop tree release treatments in hardwoods: results from simulation

    Treesearch

    Chris B. LeDoux; Gary W. Miller

    2008-01-01

    In this study we used data from 16 Appalachian hardwood stands, a growth and yield computer simulation model, and stump-to-mill logging cost-estimating software to evaluate the optimal economic timing of crop tree release (CTR) treatments. The simulated CTR treatments consisted of one-time logging operations at stand age 11, 23, 31, or 36 years, with the residual...

  12. A computer-based time study system for timber harvesting operations

    Treesearch

    Jingxin Wang; Joe McNeel; John Baumgras

    2003-01-01

    A computer-based time study system was developed for timber harvesting operations. Object-oriented techniques were used to model and design the system. The front-end of the time study system resides on the MS Windows CE and the back-end is supported by MS Access. The system consists of three major components: a handheld system, data transfer interface, and data storage...

  13. Self-consistent approach for neutral community models with speciation

    NASA Astrophysics Data System (ADS)

    Haegeman, Bart; Etienne, Rampal S.

    2010-03-01

    Hubbell's neutral model provides a rich theoretical framework to study ecological communities. By incorporating both ecological and evolutionary time scales, it allows us to investigate how communities are shaped by speciation processes. The speciation model in the basic neutral model is particularly simple, describing speciation as a point-mutation event at the birth of a single individual. The stationary species abundance distribution of the basic model, which can be solved exactly, fits empirical distributions of species abundances surprisingly well. More realistic speciation models have been proposed, such as the random-fission model, in which new species appear by the splitting up of existing species. However, no analytical solution is available for these models, impeding quantitative comparison with data. Here, we present a self-consistent approximation method for neutral community models with various speciation modes, including random fission. We derive explicit formulas for the stationary species abundance distribution, which agree very well with simulations. We expect that our approximation method will be useful for studying other speciation processes in neutral community models as well.

  14. Putting it all together: Exhumation histories from a formal combination of heat flow and a suite of thermochronometers

    USGS Publications Warehouse

    d'Alessio, M. A.; Williams, C.F.

    2007-01-01

    A suite of new techniques in thermochronometry allow analysis of the thermal history of a sample over a broad range of temperature sensitivities. New analysis tools must be developed that fully and formally integrate these techniques, allowing a single geologic interpretation of the rate and timing of exhumation and burial events consistent with all data. We integrate a thermal model of burial and exhumation, (U-Th)/He age modeling, and fission track age and length modeling. We then use a genetic algorithm to efficiently explore possible time-exhumation histories of a vertical sample profile (such as a borehole), simultaneously solving for exhumation and burial rates as well as changes in background heat flow. We formally combine all data in a rigorous statistical fashion. By parameterizing the model in terms of exhumation rather than time-temperature paths (as traditionally done in fission track modeling), we can ensure that exhumation histories result in a sedimentary basin whose thickness is consistent with the observed basin, a physically based constraint that eliminates otherwise acceptable thermal histories. We apply the technique to heat flow and thermochronometry data from the 2.1-km-deep San Andreas Fault Observatory at Depth pilot hole near the San Andreas fault, California. We find that the site experienced <1 km of exhumation or burial since the onset of San Andreas fault activity ∼30 Ma.

  15. Robust inference in discrete hazard models for randomized clinical trials.

    PubMed

    Nguyen, Vinh Q; Gillen, Daniel L

    2012-10-01

    Time-to-event data in which failures are only assessed at discrete time points are common in many clinical trials. Examples include oncology studies where events are observed through periodic screenings such as radiographic scans. When the survival endpoint is acknowledged to be discrete, common methods for the analysis of observed failure times include the discrete hazard models (e.g., the discrete-time proportional hazards and the continuation ratio model) and the proportional odds model. In this manuscript, we consider estimation of a marginal treatment effect in discrete hazard models where the constant treatment effect assumption is violated. We demonstrate that the estimator resulting from these discrete hazard models is consistent for a parameter that depends on the underlying censoring distribution. An estimator that removes the dependence on the censoring mechanism is proposed and its asymptotic distribution is derived. Basing inference on the proposed estimator allows for statistical inference that is scientifically meaningful and reproducible. Simulation is used to assess the performance of the presented methodology in finite samples.

  16. Reduced-Volume Fracture Toughness Characterization for Transparent Polymers

    DTIC Science & Technology

    2015-03-21

    Caruthers et al. (2004) developed a thermodynamically consistent, nonlinear viscoelastic bulk constitutive model based on a potential energy clock (PEC) ... except that relaxation times change. Because of its formulation, the PEC model predicts mechanical yield as a natural consequence of relaxation ... softening type of behavior, but hysteresis effects are not naturally accounted for. Adolf et al. (2009) developed a method of simplifying the PEC model.

  17. Time series modeling by a regression approach based on a latent process.

    PubMed

    Chamroukhi, Faicel; Samé, Allou; Govaert, Gérard; Aknin, Patrice

    2009-01-01

    Time series are used in many domains, including finance, engineering, economics and bioinformatics, generally to represent the change of a measurement over time. Modeling techniques may then be used to give a synthetic representation of such data. A new approach for time series modeling is proposed in this paper. It consists of a regression model incorporating a discrete hidden logistic process that allows different polynomial regression models to be activated smoothly or abruptly. The model parameters are estimated by the maximum likelihood method performed by a dedicated Expectation Maximization (EM) algorithm. The M step of the EM algorithm uses a multi-class Iterative Reweighted Least-Squares (IRLS) algorithm to estimate the hidden process parameters. To evaluate the proposed approach, an experimental study on simulated data and real-world data was performed using two alternative approaches: a heteroskedastic piecewise regression model using a global optimization algorithm based on dynamic programming, and a Hidden Markov Regression Model whose parameters are estimated by the Baum-Welch algorithm. Finally, in the context of the remote monitoring of components of the French railway infrastructure, and more particularly the switch mechanism, the proposed approach has been applied to modeling and classifying time series representing the condition measurements acquired during switch operations.
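
    To illustrate the latent-process idea, the sketch below builds the regression function of a two-regime version of such a model: a logistic weight over time blends two polynomial regimes, switching smoothly or abruptly depending on its slope parameter. All parameter values are invented for illustration; in the paper they are estimated by the EM/IRLS procedure.

        # Two polynomial regimes blended by a time-dependent logistic weight
        # (illustrative parameter values, not fitted ones).
        import numpy as np

        t = np.linspace(0.0, 1.0, 300)
        lam = 30.0                                    # large slope -> abrupt switch
        pi1 = 1.0 / (1.0 + np.exp(lam * (t - 0.5)))  # regime 1 active for t < 0.5
        regime1 = np.polyval([2.0, 1.0], t)           # regime 1: linear trend
        regime2 = np.polyval([-3.0, 0.5, 3.0], t)     # regime 2: quadratic trend
        mean = pi1 * regime1 + (1.0 - pi1) * regime2  # model's regression function
        y = mean + 0.1 * np.random.default_rng(1).normal(size=t.size)  # noisy series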

  18. Real-Time System Verification by k-Induction

    NASA Technical Reports Server (NTRS)

    Pike, Lee S.

    2005-01-01

    We report the first formal verification of a reintegration protocol for a safety-critical, fault-tolerant, real-time distributed embedded system. A reintegration protocol increases system survivability by allowing a node that has suffered a fault to regain state consistent with the operational nodes. The protocol is verified in the Symbolic Analysis Laboratory (SAL), where bounded model checking and decision procedures are used to verify infinite-state systems by k-induction. The protocol and its environment are modeled as synchronizing timeout automata. Because k-induction is exponential with respect to k, we optimize the formal model to reduce the size of k. Also, the reintegrator's event-triggered behavior is conservatively modeled as time-triggered behavior to further reduce the size of k and to make it invariant to the number of nodes modeled. A corollary is that a clique avoidance property is satisfied.
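
    The core of the verification loop can be sketched on a toy finite transition system, shown below. Real tools such as SAL discharge the base case and the inductive step with SAT/SMT solvers rather than the brute-force path enumeration used here; the system, property, and k value are invented for illustration.

        # Toy k-induction: prove a safety property of a small transition system.
        from itertools import product

        STATES = range(8)
        def init(s):      return s == 0
        def trans(s, t):  return t == (s + 2) % 8   # assumed toy step relation
        def prop(s):      return s % 2 == 0         # safety: the state stays even

        def k_induction(k):
            def is_path(p):
                return all(trans(a, b) for a, b in zip(p, p[1:]))
            # base case: prop holds along every k-step path from an initial state
            for p in filter(is_path, product(STATES, repeat=k + 1)):
                if init(p[0]) and not all(prop(s) for s in p):
                    return False
            # inductive step: k consecutive prop-states force prop in the successor
            for p in filter(is_path, product(STATES, repeat=k + 1)):
                if all(prop(s) for s in p[:-1]) and not prop(p[-1]):
                    return False    # induction fails at this k; try a larger k
            return True

        print(k_induction(1))   # True: the property holds in all reachable states

    Because the cost of the search grows exponentially with k, shrinking k through model optimization, as the paper does, pays off directly.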

  19. Improved treatment of optics in the Lindquist-Wheeler models

    NASA Astrophysics Data System (ADS)

    Clifton, Timothy; Ferreira, Pedro G.; O'Donnell, Kane

    2012-01-01

    We consider the optical properties of Lindquist-Wheeler (LW) models of the Universe. These models consist of lattices constructed from regularly arranged discrete masses. They are akin to the Wigner-Seitz construction of solid state physics, and result in a dynamical description of the large-scale Universe in which the global expansion is given by a Friedmann-like equation. We show that if these models are constructed in a particular way then the redshifts of distant objects, as well as the dynamics of the global space-time, can be made to be in good agreement with the homogeneous and isotropic Friedmann-Lemaître-Robertson-Walker (FLRW) solutions of Einstein’s equations, at the level of ≲3% out to z≃2. Angular diameter and luminosity distances, on the other hand, differ from those found in the corresponding FLRW models, while being consistent with the “empty beam” approximation, together with the shearing effects due to the nearest masses. This can be compared with the large deviations found from the corresponding FLRW values obtained in a previous study that considered LW models constructed in a different way. We therefore advocate the improved LW models we consider here as useful constructions that appear to faithfully reproduce both the dynamical and observational properties of space-times containing discrete masses.

  20. Model for macroevolutionary dynamics.

    PubMed

    Maruvka, Yosef E; Shnerb, Nadav M; Kessler, David A; Ricklefs, Robert E

    2013-07-02

    The highly skewed distribution of species among genera, although challenging to macroevolutionists, provides an opportunity to understand the dynamics of diversification, including species formation, extinction, and morphological evolution. Early models were based on either the work by Yule [Yule GU (1925) Philos Trans R Soc Lond B Biol Sci 213:21-87], which neglects extinction, or a simple birth-death (speciation-extinction) process. Here, we extend the more recent development of a generic, neutral speciation-extinction (of species)-origination (of genera; SEO) model for macroevolutionary dynamics of taxon diversification. Simulations show that deviations from the homogeneity assumptions in the model can be detected in species-per-genus distributions. The SEO model fits observed species-per-genus distributions well for class-to-kingdom-sized taxonomic groups. The model's predictions for the appearance times (the time of the first existing species) of the taxonomic groups also approximately match estimates based on molecular inference and fossil records. Unlike estimates based on analyses of phylogenetic reconstruction, fitted extinction rates for large clades are close to speciation rates, consistent with high rates of species turnover and the relatively slow change in diversity observed in the fossil record. Finally, the SEO model generally supports the consistency of generic boundaries based on morphological differences between species and provides a comparator for rates of lineage splitting and morphological evolution.

  1. TH-AB-202-08: A Robust Real-Time Surface Reconstruction Method On Point Clouds Captured From a 3D Surface Photogrammetry System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, W; Sawant, A; Ruan, D

    2016-06-15

    Purpose: Surface photogrammetry (e.g. VisionRT, C-Rad) provides a noninvasive way to obtain high-frequency measurements for patient motion monitoring in radiotherapy. This work aims to develop a real-time surface reconstruction method on the acquired point clouds, whose acquisitions are subject to noise and missing measurements. In contrast to existing surface reconstruction methods, which are usually computationally expensive, the proposed method reconstructs continuous surfaces with comparable accuracy in real time. Methods: The key idea in our method is to solve and propagate a sparse linear relationship from the point cloud (measurement) manifold to the surface (reconstruction) manifold, taking advantage of the similarity in local geometric topology in both manifolds. With consistent point cloud acquisition, we propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, building the point correspondences by the iterative closest point (ICP) method. To accommodate changing noise levels and/or the presence of inconsistent occlusions, we further propose a modified sparse regression (MSR) model to account for the large and sparse error built by ICP, with a Laplacian prior. We evaluated our method on both clinically acquired point clouds under consistent conditions and simulated point clouds with inconsistent occlusions. The reconstruction accuracy was evaluated w.r.t. root-mean-squared error, by comparing the reconstructed surfaces against those from the variational reconstruction method. Results: On clinical point clouds, both the SR and MSR models achieved sub-millimeter accuracy, with mean reconstruction time reduced from 82.23 seconds to 0.52 seconds and 0.94 seconds, respectively. On simulated point clouds with inconsistent occlusions, the MSR model demonstrated its advantage in achieving consistent performance despite the introduced occlusions. Conclusion: We have developed a real-time and robust surface reconstruction method for point clouds acquired by photogrammetry systems. It serves as an important enabling step for real-time motion tracking in radiotherapy. This work is supported in part by NIH grant R01 CA169102-02.
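
    A minimal sketch of the SR step follows: a new, already-corresponded point cloud is expressed as a sparse linear combination of training clouds. The ICP correspondence step is assumed done, the data are synthetic, and a standard Lasso penalty stands in for the paper's sparse solver.

        # Sparse-regression reconstruction sketch (synthetic data, assumed setup).
        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(0)
        n_pts, n_train = 400, 20
        train = rng.normal(size=(n_train, n_pts * 3))    # flattened x,y,z clouds
        target = 0.7 * train[3] + 0.3 * train[11] \
                 + 0.01 * rng.normal(size=n_pts * 3)     # noisy new acquisition

        model = Lasso(alpha=1e-3, max_iter=50000).fit(train.T, target)
        recon = train.T @ model.coef_ + model.intercept_  # reconstructed cloud
        rmse = np.sqrt(np.mean((recon - target) ** 2))
        print(np.flatnonzero(np.abs(model.coef_) > 1e-3), rmse)  # few active clouds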

  2. An explicit mixed numerical method for mesoscale model

    NASA Technical Reports Server (NTRS)

    Hsu, H.-M.

    1981-01-01

    A mixed numerical method has been developed for mesoscale models. The technique consists of a forward difference scheme for time tendency terms, an upstream scheme for advective terms, and a central scheme for the other terms in a physical system. It is shown that the mixed method is conditionally stable and highly accurate for approximating the system of either shallow-water equations in one dimension or primitive equations in three dimensions. Since the technique is explicit and uses two time levels, it conserves computer and programming resources.
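
    A one-dimensional advection-diffusion sketch of the same recipe is given below: forward difference in time, upstream (upwind) differencing for the advective term, and central differencing for the remaining (diffusive) term. The grid spacing, time step, and coefficients are invented, chosen only to respect the conditional-stability limits.

        # Mixed explicit scheme: forward in time, upwind advection, central diffusion.
        import numpy as np

        nx, dx, dt = 200, 1.0, 0.25
        u, kappa = 2.0, 0.5                 # advection speed, diffusivity (assumed)
        assert u * dt / dx <= 1.0           # CFL condition: conditional stability
        assert kappa * dt / dx**2 <= 0.5    # diffusive stability limit

        x = np.arange(nx) * dx
        c = np.exp(-0.5 * ((x - 50.0) / 5.0) ** 2)   # initial Gaussian pulse
        for _ in range(300):
            adv = -u * (c - np.roll(c, 1)) / dx                   # upstream, u > 0
            dif = kappa * (np.roll(c, -1) - 2*c + np.roll(c, 1)) / dx**2  # central
            c = c + dt * (adv + dif)                              # forward in time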

  3. A simple analytical model for dynamics of time-varying target leverage ratios

    NASA Astrophysics Data System (ADS)

    Lo, C. F.; Hui, C. H.

    2012-03-01

    In this paper we have formulated a simple theoretical model for the dynamics of the time-varying target leverage ratio of a firm under some assumptions based upon empirical observations. In our theoretical model the time evolution of the target leverage ratio of a firm can be derived self-consistently from a set of coupled Ito's stochastic differential equations governing the leverage ratios of an ensemble of firms by the nonlinear Fokker-Planck equation approach. The theoretically derived time paths of the target leverage ratio bear great resemblance to those used in the time-dependent stationary-leverage (TDSL) model [Hui et al., Int. Rev. Financ. Analy. 15, 220 (2006)]. Thus, our simple model is able to provide a theoretical foundation for the selected time paths of the target leverage ratio in the TDSL model. We also examine how the pace of the adjustment of a firm's target ratio, the volatility of the leverage ratio and the current leverage ratio affect the dynamics of the time-varying target leverage ratio. Hence, with the proposed dynamics of the time-dependent target leverage ratio, the TDSL model can be readily applied to generate the default probabilities of individual firms and to assess the default risk of the firms.
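
    The evolution the paper describes can be mimicked with a one-line Euler-Maruyama update: the leverage ratio relaxes toward a target path at an adjustment pace kappa under multiplicative noise. The pace, volatility, and target path below are invented; in the paper the target path emerges self-consistently from the coupled ensemble via the nonlinear Fokker-Planck approach rather than being prescribed.

        # Euler-Maruyama sketch: dL = kappa*(target - L)*dt + sigma*L*dW (assumed values).
        import numpy as np

        rng = np.random.default_rng(2)
        T, dt = 10.0, 0.01
        n = int(T / dt)
        kappa, sigma = 1.5, 0.1                              # pace, volatility
        target = 0.4 + 0.1 * np.tanh(np.linspace(-3, 3, n))  # assumed target path
        L = np.empty(n)
        L[0] = 0.3
        for i in range(1, n):
            dW = rng.normal(scale=np.sqrt(dt))               # Brownian increment
            L[i] = L[i-1] + kappa * (target[i-1] - L[i-1]) * dt + sigma * L[i-1] * dW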

  4. Berezinskii-Kosterlitz-Thouless transition in the time-reversal-symmetric Hofstadter-Hubbard model

    NASA Astrophysics Data System (ADS)

    Iskin, M.

    2018-01-01

    Assuming that two-component Fermi gases with opposite artificial magnetic fields on a square optical lattice are well described by the so-called time-reversal-symmetric Hofstadter-Hubbard model, we explore the thermal superfluid properties along with the critical Berezinskii-Kosterlitz-Thouless (BKT) transition temperature in this model over a wide range of its parameters. In particular, since our self-consistent BCS-BKT approach takes the multiband butterfly spectrum explicitly into account, it unveils how dramatically the interband contribution to the phase stiffness dominates the intraband one with an increasing interaction strength for any given magnetic flux.

  5. Continuous distribution of emission states from single CdSe/ZnS quantum dots.

    PubMed

    Zhang, Kai; Chang, Hauyee; Fu, Aihua; Alivisatos, A Paul; Yang, Haw

    2006-04-01

    The photoluminescence dynamics of colloidal CdSe/ZnS/streptavidin quantum dots were studied using time-resolved single-molecule spectroscopy. Statistical tests of the photon-counting data suggested that the simple "on/off" discrete state model is inconsistent with experimental results. Instead, a continuous emission state distribution model was found to be more appropriate. Autocorrelation analysis of lifetime and intensity fluctuations showed a nonlinear correlation between them. These results were consistent with the model that charged quantum dots were also emissive, and that time-dependent charge migration gave rise to the observed photoluminescence dynamics.

  6. A feedback control model for network flow with multiple pure time delays

    NASA Technical Reports Server (NTRS)

    Press, J.

    1972-01-01

    A control model describing a network flow hindered by multiple pure time (or transport) delays is formulated. Feedbacks connect each desired output with a single control sector situated at the origin. The dynamic formulation invokes the use of differential difference equations. This causes the characteristic equation of the model to consist of transcendental functions instead of a common algebraic polynomial. A general graphical criterion is developed to evaluate the stability of such a problem. A digital computer simulation confirms the validity of such criterion. An optimal decision making process with multiple delays is presented.
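
    The stability issue that the graphical criterion addresses can be seen in a single-loop sketch: with one pure delay tau and feedback gain k, the flow obeys dx/dt = -k x(t - tau), which is stable only while k*tau < pi/2. The gain, delay, and step size below are assumptions for illustration.

        # Single feedback loop with one pure time delay (illustrative values).
        import numpy as np

        dt, tau, k = 0.001, 1.0, 1.2       # k*tau = 1.2 < pi/2 -> damped response
        n, lag = 20000, int(tau / dt)
        x = np.empty(n)
        x[0] = 1.0
        for i in range(1, n):
            delayed = x[i - 1 - lag] if i - 1 - lag >= 0 else 1.0  # constant history
            x[i] = x[i-1] - dt * k * delayed
        # raising k or tau so that k*tau > pi/2 makes the oscillations grow instead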

  7. Stability analysis for a delay differential equations model of a hydraulic turbine speed governor

    NASA Astrophysics Data System (ADS)

    Halanay, Andrei; Safta, Carmen A.; Dragoi, Constantin; Piraianu, Vlad F.

    2017-01-01

    The paper aims to study the dynamic behavior of a speed governor for a hydraulic turbine using a mathematical model. The proposed nonlinear mathematical model consists of a system of delay differential equations (DDE), to be compared with already established mathematical models of ordinary differential equations (ODE). A new kind of nonlinearity is introduced as a time delay. The delays can characterize different running conditions of the speed governor. For example, the spool displacement of the hydraulic amplifier might be blocked by oil impurities in the oil supply system, so that the hydraulic amplifier responds with a time delay relative to the control signal. Numerical simulations are presented in a comparative manner, and a stability analysis of the hydraulic control system is performed as well. Conclusions on the dynamic behavior drawn from the DDE model of a hydraulic turbine speed governor are useful for modeling and controlling hydropower plants.

  8. Consistent response of vegetation dynamics to recent climate change in tropical mountain regions.

    PubMed

    Krishnaswamy, Jagdish; John, Robert; Joseph, Shijo

    2014-01-01

    Global climate change has emerged as a major driver of ecosystem change. Here, we present evidence for globally consistent responses in vegetation dynamics to recent climate change in the world's mountain ecosystems located in the pan-tropical belt (30°N-30°S). We analyzed decadal-scale trends and seasonal cycles of vegetation greenness using monthly time series of satellite greenness (Normalized Difference Vegetation Index) and climate data for the period 1982-2006 for 47 mountain protected areas in five biodiversity hotspots. The time series of annual maximum NDVI for each of five continental regions shows mild greening trends followed by reversal to stronger browning trends around the mid-1990s. During the same period we found increasing trends in temperature but only marginal change in precipitation. The amplitude of the annual greenness cycle increased with time, and was strongly associated with the observed increase in temperature amplitude. We applied dynamic models with time-dependent regression parameters to study the time evolution of NDVI-climate relationships. We found that the relationship between vegetation greenness and temperature weakened over time or was negative. Such loss of positive temperature sensitivity has been documented in other regions as a response to temperature-induced moisture stress. We also used dynamic models to extract the trends in vegetation greenness that remain after accounting for the effects of temperature and precipitation. We found residual browning and greening trends in all regions, which indicate that factors other than temperature and precipitation also influence vegetation dynamics. Browning rates became progressively weaker with increase in elevation as indicated by quantile regression models. Tropical mountain vegetation is considered sensitive to climatic changes, so these consistent vegetation responses across widespread regions indicate persistent global-scale effects of climate warming and associated moisture stresses.

  9. Glass transition temperature of polymer nano-composites with polymer and filler interactions

    NASA Astrophysics Data System (ADS)

    Hagita, Katsumi; Takano, Hiroshi; Doi, Masao; Morita, Hiroshi

    2012-02-01

    We systematically studied a versatile coarse-grained (bead-spring) model to describe filled polymer nano-composites for coarse-grained (Kremer-Grest) molecular dynamics simulations. This model consists of long polymers, crosslinks, and fillers. We used a hollow structure as the filler to describe rigid spherical fillers at small computing cost. Our filler model consists of surface particles in the icosahedral fullerene structure C320, and a repulsive force from the center of the filler is applied to the surface particles in order to keep the filler spherical and rigid. The filler diameter is 12 times the bead diameter of the polymers. As a first test of our model, we study the temperature dependence of the volume under periodic boundary conditions at constant pressure using the constant-NPT Andersen algorithm. It is found that the glass transition temperature (Tg) decreases with increasing filler volume fraction for repulsive polymer-filler interactions, and that Tg weakly increases for attractive interactions.

  10. Generalized multiplicative error models: Asymptotic inference and empirical analysis

    NASA Astrophysics Data System (ADS)

    Li, Qian

    This dissertation consists of two parts. The first part focuses on extended Multiplicative Error Models (MEM) that include two extreme cases for nonnegative series. These extreme cases are common phenomena in high-frequency financial time series. The Location MEM(p,q) model incorporates a location parameter so that the series are required to have positive lower bounds. The estimator for the location parameter turns out to be the minimum of all the observations and is shown to be consistent. The second case captures the nontrivial fraction of zero outcomes in a series and combines a so-called Zero-Augmented general F distribution with the linear MEM(p,q). Under certain strict stationarity and moment conditions, we establish consistency and asymptotic normality of the semiparametric estimation for these two new models. The second part of this dissertation examines the differences and similarities between trades in the home market and trades in the foreign market of cross-listed stocks. We exploit the multiplicative framework to model trading duration, volume per trade and price volatility for Canadian shares that are cross-listed on the New York Stock Exchange (NYSE) and the Toronto Stock Exchange (TSX). We explore the clustering effect, the interaction between trading variables, and the time needed for price equilibrium after a perturbation in each market. The clustering effect is studied through the use of a univariate MEM(1,1) on each variable, while the interactions among duration, volume and price volatility are captured by a multivariate system of MEM(p,q). After estimating these models by a standard QMLE procedure, we exploit the impulse response function to compute the calendar time for a perturbation in these variables to be absorbed into price variance, and use common statistical tests to identify the differences between the two markets in each aspect. These differences are of considerable interest to traders, stock exchanges and policy makers.
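
    For readers unfamiliar with the MEM family, a small simulation of the baseline MEM(1,1) recursion is sketched below: the observed nonnegative series is a conditional mean times a unit-mean positive innovation. The parameter values are illustrative, not estimates from the dissertation.

        # MEM(1,1): x_t = mu_t * eps_t, with mu_t = omega + alpha*x_{t-1} + beta*mu_{t-1}.
        import numpy as np

        rng = np.random.default_rng(3)
        omega, alpha, beta = 0.1, 0.15, 0.8     # alpha + beta < 1 for stationarity
        n = 1000
        x, mu = np.empty(n), np.empty(n)
        mu[0] = omega / (1.0 - alpha - beta)    # unconditional mean
        x[0] = mu[0] * rng.exponential()        # unit-mean positive innovation
        for t in range(1, n):
            mu[t] = omega + alpha * x[t-1] + beta * mu[t-1]
            x[t] = mu[t] * rng.exponential()
        # clustering: large durations/volumes tend to be followed by large ones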

  11. Continuous-time system identification of a smoking cessation intervention

    NASA Astrophysics Data System (ADS)

    Timms, Kevin P.; Rivera, Daniel E.; Collins, Linda M.; Piper, Megan E.

    2014-07-01

    Cigarette smoking is a major global public health issue and the leading cause of preventable death in the United States. Toward a goal of designing better smoking cessation treatments, system identification techniques are applied to intervention data to describe smoking cessation as a process of behaviour change. System identification problems that draw from two modelling paradigms in quantitative psychology (statistical mediation and self-regulation) are considered, consisting of a series of continuous-time estimation problems. A continuous-time dynamic modelling approach is employed to describe the response of craving and smoking rates during a quit attempt, as captured in data from a smoking cessation clinical trial. The use of continuous-time models provide benefits of parsimony, ease of interpretation, and the opportunity to work with uneven or missing data.

  12. Neoclassical Simulation of Tokamak Plasmas using Continuum Gyrokinetic Code TEMPEST

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, X Q

    We present gyrokinetic neoclassical simulations of tokamak plasmas with a self-consistent electric field for the first time using a fully nonlinear (full-f) continuum code TEMPEST in a circular geometry. A set of gyrokinetic equations are discretized on a five-dimensional computational grid in phase space. The present implementation is a Method of Lines approach where the phase-space derivatives are discretized with finite differences and implicit backwards differencing formulas are used to advance the system in time. The fully nonlinear Boltzmann model is used for electrons. The neoclassical electric field is obtained by solving the gyrokinetic Poisson equation with self-consistent poloidal variation. With our 4D ({psi}, {theta}, {epsilon}, {mu}) version of the TEMPEST code we compute radial particle and heat flux, the Geodesic-Acoustic Mode (GAM), and the development of the neoclassical electric field, which we compare with neoclassical theory with a Lorentz collision model. The present work provides a numerical scheme and a new capability for self-consistently studying important aspects of neoclassical transport and rotations in toroidal magnetic fusion devices.

  13. An MR-based Model for Cardio-Respiratory Motion Compensation of Overlays in X-Ray Fluoroscopy

    PubMed Central

    Fischer, Peter; Faranesh, Anthony; Pohl, Thomas; Maier, Andreas; Rogers, Toby; Ratnayaka, Kanishka; Lederman, Robert; Hornegger, Joachim

    2017-01-01

    In X-ray fluoroscopy, static overlays are used to visualize soft tissue. We propose a system for cardiac and respiratory motion compensation of these overlays. It consists of a 3-D motion model created from real-time MR imaging. Multiple sagittal slices are acquired and retrospectively stacked to consistent 3-D volumes. Slice stacking considers cardiac information derived from the ECG and respiratory information extracted from the images. Additionally, temporal smoothness of the stacking is enhanced. Motion is estimated from the MR volumes using deformable 3-D/3-D registration. The motion model itself is a linear direct correspondence model using the same surrogate signals as slice stacking. In X-ray fluoroscopy, only the surrogate signals need to be extracted to apply the motion model and animate the overlay in real time. For evaluation, points are manually annotated in oblique MR slices and in contrast-enhanced X-ray images. The 2-D Euclidean distance of these points is reduced from 3.85 mm to 2.75 mm in MR and from 3.0 mm to 1.8 mm in X-ray compared to the static baseline. Furthermore, the motion-compensated overlays are shown qualitatively as images and videos. PMID:28692969

  14. Development and modelling of a hydro-power conversion system based on vortex-induced vibration

    NASA Astrophysics Data System (ADS)

    Lefebure, David; Dellinger, Nicolas; François, Pierre; Mosé, Robert

    2016-11-01

    The vortex-induced vibration (VIV) phenomenon leads to mechanical issues for bluff bodies immersed in fluid flows and has therefore been studied by numerous authors. Moreover, an increasing demand for energy calls for the development of alternative, complementary and renewable energy solutions. The main idea of the EauVIV project is to exploit VIV rather than suppress it. When rounded objects are immersed in a fluid flow, vortices are formed and shed on their downstream side, creating a pressure imbalance that results in an oscillatory lift. A converter module consists of a rigid cylinder, elastically mounted on end-springs, undergoing flow-induced motion when exposed to a transverse fluid flow. These vortices induce cyclic lift forces in opposite directions on the circular bar and cause the cylinder to vibrate up and down. An experimental prototype was developed and tested in a free-surface water channel and is already able to recover energy from free-stream velocities between 0.5 and 1 m s-1. However, the large number of parameters (stiffness, damping coefficient, flow velocity, etc.) governing its performance requires optimization, and we chose to develop a complete three-dimensional numerical model. The 3D numerical model has been developed in order to represent the real system behavior and improve it through, for example, the addition of parallel cylinders. The numerical model build-up was carried out in three phases. The first phase consisted of establishing a 2D model to choose the turbulence model and quantify the dependence of the oscillation amplitudes on the mesh size. The second corresponds to a 3D simulation, first with the cylinder at rest and then with vertical oscillation. The third and final phase consists of a comparison between the dynamic behavior of the experimental system and its numerical model.

  15. The Timing of the Cognitive Cycle

    PubMed Central

    Madl, Tamas; Baars, Bernard J.; Franklin, Stan

    2011-01-01

    We propose that human cognition consists of cascading cycles of recurring brain events. Each cognitive cycle senses the current situation, interprets it with reference to ongoing goals, and then selects an internal or external action in response. While most aspects of the cognitive cycle are unconscious, each cycle also yields a momentary “ignition” of conscious broadcasting. Neuroscientists have independently proposed ideas similar to the cognitive cycle, the fundamental hypothesis of the LIDA model of cognition. High-level cognition, such as deliberation, planning, etc., is typically enabled by multiple cognitive cycles. In this paper we describe a timing model of LIDA's cognitive cycle. Based on empirical and simulation data we propose that an initial phase of perception (stimulus recognition) occurs 80–100 ms from stimulus onset under optimal conditions. It is followed by a conscious episode (broadcast) 200–280 ms after stimulus onset, and an action selection phase 60–110 ms from the start of the conscious phase. One cognitive cycle would therefore take 260–390 ms. The LIDA timing model is consistent with brain evidence indicating a fundamental role for a theta-gamma wave, spreading forward from sensory cortices to rostral corticothalamic regions. This posteriofrontal theta-gamma wave may be experienced as a conscious perceptual event starting at 200–280 ms post stimulus. The action selection component of the cycle is proposed to involve frontal, striatal and cerebellar regions. Thus the cycle is inherently recurrent, as the anatomy of the thalamocortical system suggests. The LIDA model fits a large body of cognitive and neuroscientific evidence. Finally, we describe two LIDA-based software agents: the LIDA Reaction Time agent, which simulates human performance in a simple reaction time task, and the LIDA Allport agent, which models phenomenal simultaneity within timeframes comparable to human subjects. While there are many models of reaction time performance, these results fall naturally out of a biologically and computationally plausible cognitive architecture. PMID:21541015

  16. Delayed reverberation through time windows as a key to cerebellar function.

    PubMed

    Kistler, W M; Leo van Hemmen, J

    1999-11-01

    We present a functional model of the cerebellum comprising cerebellar cortex, inferior olive, deep cerebellar nuclei, and brain stem nuclei. The discerning feature of the model being time coding, we consistently describe the system in terms of postsynaptic potentials, synchronous action potentials, and propagation delays. We show by means of detailed single-neuron modeling that (i) Golgi cells can fulfill a gating task in that they form short and well-defined time windows within which granule cells can reach firing threshold, thus organizing neuronal activity in discrete 'time slices', and that (ii) rebound firing in cerebellar nuclei cells is a robust mechanism leading to a delayed reverberation of Purkinje cell activity through cerebellar-reticular projections back to the cerebellar cortex. Computer simulations of the whole cerebellar network consisting of several thousand neurons reveal that reverberation in conjunction with long-term plasticity at the parallel fiber-Purkinje cell synapses enables the system to learn, store, and recall spatio-temporal patterns of neuronal activity. Climbing fiber spikes act both as a synchronization and as a teacher signal, not as an error signal. They are due to intrinsic oscillatory properties of inferior olivary neurons and to delayed reverberation within the network. In addition to clear experimental predictions the present theory sheds new light on a number of experimental observation such as the synchronicity of climbing fiber spikes and provides a novel explanation of how the cerebellum solves timing tasks on a time scale of several hundreds of milliseconds.

  17. M ≥ 7.0 earthquake recurrence on the San Andreas fault from a stress renewal model

    USGS Publications Warehouse

    Parsons, Thomas E.

    2006-01-01

     Forecasting M ≥ 7.0 San Andreas fault earthquakes requires an assessment of their expected frequency. I used a three-dimensional finite element model of California to calculate volumetric static stress drops from scenario M ≥ 7.0 earthquakes on three San Andreas fault sections. The ratio of stress drop to tectonic stressing rate derived from geodetic displacements yielded recovery times at points throughout the model volume. Under a renewal model, stress recovery times on ruptured fault planes can be a proxy for earthquake recurrence. I show curves of magnitude versus stress recovery time for three San Andreas fault sections. When stress recovery times were converted to expected M ≥ 7.0 earthquake frequencies, they fit Gutenberg-Richter relationships well matched to observed regional rates of M ≤ 6.0 earthquakes. Thus a stress-balanced model permits large earthquake Gutenberg-Richter behavior on an individual fault segment, though it does not require it. Modeled slip magnitudes and their expected frequencies were consistent with those observed at the Wrightwood paleoseismic site if strict time predictability does not apply to the San Andreas fault.
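
    The renewal arithmetic at the heart of the method is simple: an expected recurrence interval is the stress dropped in the earthquake divided by the tectonic stressing rate. The round numbers below are illustrative, not the paper's finite-element results.

        # Back-of-envelope stress-renewal recurrence (illustrative values only).
        stress_drop_mpa = 3.0      # assumed static stress drop for an M >= 7 event
        stressing_rate = 0.02      # assumed tectonic loading rate, MPa per year
        recurrence_yr = stress_drop_mpa / stressing_rate
        print(recurrence_yr)       # 150 years to recover the dropped stress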

  18. Time-resolved optical spectroscopy of oriented muscle fibers specifically and covalently labeled with extrinsic optical probes

    NASA Astrophysics Data System (ADS)

    Hayden, David Ward

    1997-11-01

    The protein myosin transforms chemical energy, in the form of ATP, into mechanical force in muscle. The rotational motions of myosin play a central role in all models of muscle contraction. I investigated the rotations of myosin in contracting muscle using time-resolved phosphorescence anisotropy (TPA), a technique sensitive to rotations on the microsecond time scale. I developed the hardware, software and theory for four-polarization TPA, which returns four time-resolved anisotropies in contrast to a single anisotropy for standard TPA. The additional anisotropies constrain the possible dye orientations and myosin head motions. Four-polarization TPA on oriented scallop muscle fibers with an extrinsic probe on the light chain shows that the rigor (no ATP, no calcium) anisotropies are consistent with a static distribution of rigid, but partially disordered, molecules. Addition of ATP, in the presence or absence of calcium, induces microsecond rotational motion in a fraction of the myosin molecules, while the rest retain rigor-like orientation. This result is consistent with recently published electron paramagnetic resonance (EPR) results and provides details of the microsecond motion that EPR is unable to detect. A method for simulation of time-resolved TPA spectra and determination of initial and final anisotropies allows testing of models of myosin rotations. The TPA spectra of several models, including restricted rotational diffusion and the Lymn-Taylor models, are shown. To show the generality of the derived equations, I apply them to a comparison of EPR and fluorescence polarization spectroscopy on similar samples to investigate whether there is one model that could explain the results reported by the two techniques.

  19. Rainfall disaggregation for urban hydrology: Effects of spatial consistence

    NASA Astrophysics Data System (ADS)

    Müller, Hannes; Haberlandt, Uwe

    2015-04-01

    For urban hydrology, rainfall time series with a high temporal resolution are crucial. Observed time series of this kind are very short in most cases, so they cannot be used. By contrast, time series with lower temporal resolution (daily measurements) exist for much longer periods. The objective is to derive time series with a long duration and a high resolution by disaggregating the time series of the non-recording stations with information from the time series of the recording stations. The multiplicative random cascade model is a well-known disaggregation model for daily time series. For urban hydrology it is often assumed that a day consists of only 1280 minutes in total as the starting point for the disaggregation process. We introduce a new variant of the cascade model which works without this assumption and also outperforms the existing approach regarding time series characteristics such as wet and dry spell duration, average intensity, fraction of dry intervals and extreme value representation. However, in both approaches rainfall time series of different stations are disaggregated without consideration of the surrounding stations, which yields unrealistic spatial patterns of rainfall. We apply a simulated annealing algorithm that has been used successfully for hourly values before: relative diurnal cycles of the disaggregated time series are resampled to reproduce the spatial dependence of rainfall. To describe spatial dependence we use bivariate characteristics such as probability of occurrence, continuity ratio and coefficient of correlation. The investigation area is a sewage system in Northern Germany. We show that the algorithm has the capability to improve spatial dependence. The influence of the chosen disaggregation routine and of the spatial dependence on overflow occurrences and volumes of the sewage system will be analyzed.
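
    The cascade mechanism itself is compact enough to sketch: each interval's total is split into two children with random weights that sum to one, so the daily mass is conserved at every level. The uniform weight distribution below is an assumption for illustration; operational cascades fit the branching weights to observed fine-scale rainfall.

        # Multiplicative random cascade: split a daily total down to sub-daily bins.
        import numpy as np

        rng = np.random.default_rng(4)

        def disaggregate(totals, levels):
            for _ in range(levels):
                w = rng.uniform(0.2, 0.8, size=totals.size)   # left-child weight
                totals = np.column_stack([w * totals, (1 - w) * totals]).ravel()
            return totals

        daily = np.array([12.0])              # one day with 12 mm of rain
        fine = disaggregate(daily, 5)         # 2**5 = 32 sub-daily intervals
        assert np.isclose(fine.sum(), daily.sum())   # mass conserved at every level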

  20. Consistent Simulation Framework for Efficient Mass Discharge and Source Depletion Time Predictions of DNAPL Contaminants in Heterogeneous Aquifers Under Uncertainty

    NASA Astrophysics Data System (ADS)

    Nowak, W.; Koch, J.

    2014-12-01

    Predicting DNAPL fate and transport in heterogeneous aquifers is challenging and subject to an uncertainty that needs to be quantified. Models for this task need to be equipped with an accurate source zone description, i.e., the distribution of mass of all partitioning phases (DNAPL, water, and soil) in all possible states ((im)mobile, dissolved, and sorbed), mass-transfer algorithms, and the simulation of transport processes in the groundwater. Such detailed models tend to be computationally cumbersome when used for uncertainty quantification. A selective choice of the relevant model states, processes, and scales is therefore both sensible and indispensable. We investigate two questions: what is a meaningful level of model complexity, and how can an efficient model framework be obtained that is still physically and statistically consistent? In our proposed model, aquifer parameters and the contaminant source architecture are conceptualized jointly as random space functions. The governing processes are simulated in a three-dimensional, highly resolved, stochastic, and coupled model that can predict probability density functions of mass discharge and source depletion times. We apply a stochastic percolation approach as an emulator to simulate the contaminant source formation, a random walk particle tracking method to simulate DNAPL dissolution and solute transport within the aqueous phase, and a quasi-steady-state approach to solve for DNAPL depletion times. Using this novel model framework, we test whether and to what degree the desired model predictions are sensitive to simplifications often found in the literature. We thereby identify aquifer heterogeneity, groundwater flow irregularity, uncertain and physically based contaminant source zones, and their mutual interlinkages as indispensable components of a sound model framework.
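
    For readers unfamiliar with the particle-tracking component, here is a minimal one-dimensional sketch of random walk particle tracking for advection-dispersion. The velocity, dispersion coefficient and control-plane location are arbitrary placeholders; the authors' actual model is three-dimensional and coupled to DNAPL dissolution.

```python
import numpy as np

rng = np.random.default_rng(1)

# 1-D random walk particle tracking for advection-dispersion: each particle
# moves with the mean velocity plus a Gaussian jump whose variance is set by
# the dispersion coefficient, x_new = x + v*dt + N(0, 2*D*dt).
v, D, dt = 1.0e-5, 1.0e-9, 3600.0            # m/s, m^2/s, s (placeholders)
n_steps = 24 * 365                            # hourly steps over one year
x = np.zeros(10_000)                          # all particles start at the source
for _ in range(n_steps):
    x += v * dt + rng.normal(0.0, np.sqrt(2.0 * D * dt), x.size)

# The particle flux across a control plane approximates mass discharge.
print("fraction past x = 300 m:", np.mean(x > 300.0))
```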

  1. Maximum likelihood estimation for semiparametric transformation models with interval-censored data

    PubMed Central

    Mao, Lu; Lin, D. Y.

    2016-01-01

    Interval censoring arises frequently in clinical, epidemiological, financial and sociological studies, where the event or failure of interest is known only to occur within an interval induced by periodic monitoring. We formulate the effects of potentially time-dependent covariates on the interval-censored failure time through a broad class of semiparametric transformation models that encompasses proportional hazards and proportional odds models. We consider nonparametric maximum likelihood estimation for this class of models with an arbitrary number of monitoring times for each subject. We devise an EM-type algorithm that converges stably, even in the presence of time-dependent covariates, and show that the estimators for the regression parameters are consistent, asymptotically normal, and asymptotically efficient with an easily estimated covariance matrix. Finally, we demonstrate the performance of our procedures through simulation studies and application to an HIV/AIDS study conducted in Thailand. PMID:27279656

  2. [Human resources requirements for diabetic patients healthcare in primary care clinics of the Mexican Institute of Social Security].

    PubMed

    Doubova, Svetlana V; Ramírez-Sánchez, Claudine; Figueroa-Lara, Alejandro; Pérez-Cuevas, Ricardo

    2013-12-01

    To estimate the human resources (HR) requirements of two models of care for diabetes patients provided in primary care clinics of the Mexican Institute of Social Security (IMSS): the conventional model and a specific one, also called DiabetIMSS. An evaluative study was conducted. An expert group identified the HR activities and time required to provide healthcare consistent with the best clinical practices for diabetic patients. HR requirements were estimated by using the evidence-based adjusted service target approach for health workforce planning; comparisons between existing and estimated HR were then made. To provide healthcare in accordance with the patients' metabolic control, the conventional model required increasing the number of family doctors (1.2 times), nutritionists (4.2 times) and social workers (4.1 times). The DiabetIMSS model requires a greater increase than the conventional model. Increasing HR is required to provide evidence-based healthcare to diabetes patients.

  3. Use it or lose it: engaged lifestyle as a buffer of cognitive decline in aging?

    PubMed

    Hultsch, D F; Hertzog, C; Small, B J; Dixon, R A

    1999-06-01

    Data from the Victoria Longitudinal Study were used to examine the hypothesis that maintaining intellectual engagement through participation in everyday activities buffers individuals against cognitive decline in later life. The sample consisted of 250 middle-aged and older adults tested 3 times over 6 years. Structural equation modeling techniques were used to examine the relationships among changes in lifestyle variables and an array of cognitive variables. There was a relationship between changes in intellectually related activities and changes in cognitive functioning. These results are consistent with the hypothesis that intellectually engaging activities serve to buffer individuals against decline. However, an alternative model suggested the findings were also consistent with the hypothesis that high-ability individuals lead intellectually active lives until cognitive decline in old age limits their activities.

  4. Discrete time rescaling theorem: determining goodness of fit for discrete time statistical models of neural spiking.

    PubMed

    Haslinger, Robert; Pipa, Gordon; Brown, Emery

    2010-10-01

    One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time-rescaling theorem provides a goodness-of-fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model's spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov-Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies on assumptions of continuously defined time and instantaneous events. However, spikes have finite width, and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time-rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time-rescaling theorem that analytically corrects for the effects of finite resolution. This allows us to define a rescaled time that is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting generalized linear models to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false-positive rate of the KS test and greatly increasing the reliability of model evaluation based on the time-rescaling theorem.
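
    The first adaptation is straightforward to prototype. The sketch below applies the classical (continuous-time) rescaling to a binned Bernoulli spike train whose true per-bin spike probabilities are known, illustrating the setting in which the KS test can reject even an exactly correct discrete-time model; the rate parameters are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Toy Bernoulli spike train with a known per-bin spike probability p(t).
T, dt = 200_000, 0.001
p = 0.02 * (1.0 + 0.5 * np.sin(2 * np.pi * 5.0 * np.arange(T) * dt))
spikes = rng.random(T) < p

# Continuous-time rescaling applied to binned data: sum the per-bin
# integrated intensities q_t = -log(1 - p_t) between successive spikes.
q = -np.log1p(-p)
cumq = np.concatenate(([0.0], np.cumsum(q)))
spike_bins = np.flatnonzero(spikes)
rescaled_isis = np.diff(cumq[spike_bins + 1])

# If time were truly continuous these would be Exp(1); with finite bins the
# KS test can reject even this exactly correct model, which is the failure
# mode the paper's two adaptations address.
ks = stats.kstest(rescaled_isis, "expon")
print(f"KS statistic {ks.statistic:.4f}, p-value {ks.pvalue:.3g}")
```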

  5. A role for subchondral bone changes in the process of osteoarthritis; a micro-CT study of two canine models.

    PubMed

    Sniekers, Yvonne H; Intema, Femke; Lafeber, Floris P J G; van Osch, Gerjo J V M; van Leeuwen, Johannes P T M; Weinans, Harrie; Mastbergen, Simon C

    2008-02-12

    This study evaluates changes in peri-articular bone in two canine models of osteoarthritis: the groove model and the anterior cruciate ligament transection (ACLT) model. Evaluation was performed at 10 and 20 weeks post-surgery, and in addition a 3-week time point was studied for the groove model. Cartilage was analysed, and the architecture of the subchondral plate and trabecular bone of the epiphyses was quantified using micro-CT. At 10 and 20 weeks, cartilage histology and biochemistry demonstrated characteristic features of osteoarthritis in both models (very mild changes at 3 weeks). The groove model presented osteophytes only at 20 weeks, whereas the ACLT model showed osteophytes already at 10 weeks. Trabecular bone changes in the groove model were small and not consistent. This contrasts with the ACLT model, in which bone volume fraction was clearly reduced at 10 and 20 weeks (15-20%). However, changes in metaphyseal bone indicate unloading in the ACLT model, not in the groove model. For both models the subchondral plate thickness was strongly reduced (25-40%) and plate porosity was strongly increased (25-85%) at all time points studied. These findings show differential regulation of subchondral trabecular bone in the groove and ACLT models, with mild changes in the groove model and more severe changes in the ACLT model. In the ACLT model, part of these changes may be explained by unloading of the treated leg. In contrast, subchondral plate thinning and increased porosity were very consistent in both models, independent of loading conditions, indicating that this thinning is an early response in the osteoarthritis process.

  6. Coping, stress, and the psychological symptoms of children of divorce: a cross-sectional and longitudinal study.

    PubMed

    Sandler, I N; Tein, J Y; West, S G

    1994-12-01

    The authors conducted a cross-sectional and prospective longitudinal study of stress, coping, and psychological symptoms in children of divorce. The sample consisted of 258 children (mean age = 10.1; SD = 1.2), of whom 196 were successfully followed 5.5 months later. A 4-dimensional model of coping was found using confirmatory factor analysis, the factors being active coping, avoidance, distraction, and support. In the cross-sectional model, avoidance coping partially mediated the relations between negative events and symptoms, while active coping moderated the relations between negative events and conduct problems. In the longitudinal model, significant negative paths were found from active coping and distraction at Time 1 to internalizing symptoms at Time 2, while support coping at Time 1 had a positive path coefficient to depression at Time 2. Positive paths were found between negative events at Time 1 and anxiety at Time 2, and between all symptoms at Time 1 and negative events at Time 2.

  7. Matching asteroid population characteristics with a model constructed from the YORP-induced rotational fission hypothesis

    NASA Astrophysics Data System (ADS)

    Jacobson, Seth A.; Marzari, Francesco; Rossi, Alessandro; Scheeres, Daniel J.

    2016-10-01

    From the results of a comprehensive asteroid population evolution model, we conclude that the YORP-induced rotational fission hypothesis is consistent with the observed population statistics of small asteroids in the main belt, including binaries and contact binaries. These conclusions rest on the asteroid rotation model of Marzari et al. ([2011] Icarus, 214, 622-631), which incorporates both the YORP effect and collisional evolution. This work adds to that model the rotational fission hypothesis, described in detail within, and the binary evolution model of Jacobson et al. ([2011a] Icarus, 214, 161-178) and Jacobson et al. ([2011b] The Astrophysical Journal Letters, 736, L19). Our complete asteroid population evolution model is highly constrained by these and other previous works, and therefore it has only two significant free parameters: the ratio of low to high mass ratio binaries formed after rotational fission events and the mean strength of the binary YORP (BYORP) effect. We successfully reproduce characteristic statistics of the small asteroid population: the binary fraction, the fast binary fraction, the steady-state mass ratio fraction and the contact binary fraction. We find that in order for the model to best match observations, rotational fission produces high mass ratio (>0.2) binary components four to eight times as frequently as low mass ratio (<0.2) components, where the mass ratio is the mass of the secondary component divided by the mass of the primary component. This is consistent with the post-rotational-fission binary system mass ratio being drawn from either a flat or a positive and shallow distribution, since the high mass ratio bin is four times the size of the low mass ratio bin; this is in contrast to the observed steady-state binary mass ratio, which has a negative and steep distribution. This can be understood in the context of the BYORP-tidal equilibrium hypothesis, which predicts that low mass ratio binaries survive for a significantly longer period of time than high mass ratio systems. We also find that the mean of the log-normal BYORP coefficient distribution is μB ≳ 10^-2, which is consistent with estimates from shape modeling (McMahon and Scheeres, 2012a).

  8. Self-consistent asset pricing models

    NASA Astrophysics Data System (ADS)

    Malevergne, Y.; Sornette, D.

    2007-08-01

    We discuss the foundations of factor or regression models in the light of the self-consistency condition that the market portfolio (and more generally the risk factors) is (are) constituted of the assets whose returns it is (they are) supposed to explain. As already reported in several articles, self-consistency implies correlations between the return disturbances. As a consequence, the alphas and betas of the factor model are unobservable. Self-consistency leads to renormalized betas with zero effective alphas, which are observable with standard OLS regressions. When the conditions derived from internal consistency are not met, the model is necessarily incomplete, which means that some sources of risk cannot be replicated (or hedged) by a portfolio of stocks traded on the market, even for infinite economies. Analytical derivations and numerical simulations show that, for arbitrary choices of the proxy which are different from the true market portfolio, a modified linear regression holds with a non-zero value αi at the origin between an asset i's return and the proxy's return. Self-consistency also introduces “orthogonality” and “normality” conditions linking the betas, alphas (as well as the residuals) and the weights of the proxy portfolio. Two diagnostics based on these orthogonality and normality conditions are implemented on a basket of 323 assets which have been components of the S&P500 in the period from January 1990 to February 2005. These two diagnostics show interesting departures from dynamical self-consistency starting about 2 years before the end of the Internet bubble. Assuming that the CAPM holds with the self-consistency condition, the OLS method automatically obeys the resulting orthogonality and normality conditions and therefore provides a simple way to self-consistently assess the parameters of the model by using proxy portfolios made only of the assets which are used in the CAPM regressions. Finally, the factor decomposition with the self-consistency condition derives a risk-factor decomposition in the multi-factor case which is identical to the principal component analysis (PCA), thus providing a direct link between model-driven and data-driven constructions of risk factors. This correspondence shows that PCA will therefore suffer from the same limitations as the CAPM and its multi-factor generalization, namely lack of out-of-sample explanatory power and predictability. In the multi-period context, the self-consistency conditions force the betas to be time-dependent with specific constraints.
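
    The orthogonality and normality conditions are easy to verify numerically. In the sketch below, returns are simulated, the "market" is an equal-weighted portfolio of the very assets being regressed, and the weighted alphas and betas come out as exactly 0 and 1 up to floating point; the return-generating process itself is an arbitrary placeholder.

```python
import numpy as np

rng = np.random.default_rng(3)

# N assets over T periods; the "market" is built from the assets it explains.
N, T = 50, 2_000
r = 0.01 * rng.standard_normal((N, T)) + 0.005 * rng.standard_normal(T)
w = np.full(N, 1.0 / N)                 # proxy weights (equal-weighted here)
rm = w @ r                              # self-consistent market portfolio

# Standard OLS regression of each asset on the market proxy.
betas = np.array([np.polyfit(rm, ri, 1)[0] for ri in r])
alphas = r.mean(axis=1) - betas * rm.mean()

# Self-consistency constraints: weighted alphas vanish, weighted betas sum to 1,
# by construction, whatever the return-generating process.
print("sum_i w_i * alpha_i =", w @ alphas)   # ~0 up to floating point
print("sum_i w_i * beta_i  =", w @ betas)    # ~1
```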

  9. Modeling Polio Data Using the First Order Non-Negative Integer-Valued Autoregressive, INAR(1), Model

    NASA Astrophysics Data System (ADS)

    Vazifedan, Turaj; Shitan, Mahendran

    Time series data may consist of counts, such as the number of road accidents, the number of patients in a certain hospital, the number of customers waiting for service at a certain time, and so on. When the values of the observations are large, it is usual to use a Gaussian Autoregressive Moving Average (ARMA) process to model the time series. However, if the observed counts are small, it is not appropriate to use an ARMA process to model the observed phenomenon. In such cases we need to model the time series data by using a Non-Negative Integer-valued Autoregressive (INAR) process. The modeling of count data is based on the binomial thinning operator. In this paper we illustrate the modeling of count data using the monthly number of poliomyelitis cases in the United States between January 1970 and December 1983. We applied the AR(1), Poisson regression and INAR(1) models, and the suitability of these models was assessed by using the Index of Agreement (I.A.). We found that the INAR(1) model is more appropriate in the sense that it had a better I.A., and it is natural since the data are counts.
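
    A minimal sketch of an INAR(1) simulator built on the binomial thinning operator may help make the thinning idea concrete; the parameter values (alpha = 0.5, Poisson innovations with mean 2) are arbitrary, not fitted to the polio data.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_inar1(n, alpha=0.5, lam=2.0, x0=0):
    """INAR(1) via binomial thinning:  X_t = alpha ∘ X_{t-1} + eps_t,
    where alpha ∘ X counts the survivors of X independent Bernoulli(alpha)
    trials and eps_t ~ Poisson(lam) are the innovations."""
    x = np.empty(n, dtype=int)
    x[0] = x0
    for t in range(1, n):
        survivors = rng.binomial(x[t - 1], alpha)   # the thinning operator
        x[t] = survivors + rng.poisson(lam)
    return x

counts = simulate_inar1(500)
# The lag-1 autocorrelation of an INAR(1) process estimates alpha.
print("lag-1 ACF ≈", np.corrcoef(counts[:-1], counts[1:])[0, 1])
```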

  10. Healing and/or breaking? The mental health implications of repeated economic insecurity.

    PubMed

    Watson, Barry; Osberg, Lars

    2017-09-01

    Current literature confirms the negative consequences of contemporaneous economic insecurity for mental health, but ignores possible implications of repeated insecurity. This paper asks how much a person's history of economic insecurity matters for psychological distress by contrasting the implications of two models. Consistent with the health capital literature, the Healing model suggests psychological distress is a stock variable affected by shocks from life events, with past events having less impact than more recent shocks. Alternatively, the Breaking Point model considers that high levels of distress represent a distinct shift in life state, which occurs if the accumulation of past life stresses exceeds some critical value. Using five cycles of Canadian National Population Health Survey data (2000-2009), we model the impact of past economic insecurity shocks on current psychological distress in a way that can distinguish between these hypotheses. In our sample of 1775 males and 1883 females aged 25 to 64, we find a robust healing effect for one-time economic insecurity shocks. For males, only a recent one-time occurrence of economic insecurity is predictive of higher current psychological distress (0.19 standard deviations). Moreover, working age adults tend to recover from past accumulated experiences of economic insecurity if they were recently economically secure. However, consistent with the Breaking Point hypothesis, males experiencing three or four cycles of recent insecurity are estimated to have a level of current psychological distress that is 0.26-0.29 standard deviations higher than those who were employed and job secure throughout the same time period. We also find, consistent with other literature, distinct gender differences - for working age females, all economic insecurity variables are statistically insignificant at conventional levels. Our results suggest that although Canadians are resilient to one-time insecurity shocks, males most vulnerable to repeated bouts suffer from elevated levels of psychological distress. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Estimating the Full Cost of Family-Financed Time Inputs to Education.

    ERIC Educational Resources Information Center

    Levine, Victor

    This paper presents a methodology for estimating the full cost of parental time allocated to child-care activities at home. Building upon the human capital hypothesis, a model is developed in which the cost of an hour diverted from labor market activity is seen as consisting of three components: 1) direct wages foregone; 2) investments in…

  12. New Perspectives for the Evaluation of Training Sessions in Self-Regulated Learning: Time-Series Analyses of Diary Data

    ERIC Educational Resources Information Center

    Schmitz, Bernhard; Wiese, Bettina S.

    2006-01-01

    The present study combines a standardized diary approach with time-series analysis methods to investigate the process of self-regulated learning. Based on a process-focused adaptation of Zimmerman's (2000) learning model, an intervention (consisting of four weekly training sessions) to increase self-regulated learning was developed. The diaries…

  13. Gender Differences in College Leisure Time Physical Activity: Application of the Theory of Planned Behavior and Integrated Behavioral Model

    ERIC Educational Resources Information Center

    Beville, Jill M.; Umstattd Meyer, M. Renée; Usdan, Stuart L.; Turner, Lori W.; Jackson, John C.; Lian, Brad E.

    2014-01-01

    Objective: National data consistently report that males participate in leisure time physical activity (LTPA) at higher rates than females. This study expanded previous research to examine gender differences in LTPA of college students using the theory of planned behavior (TPB) by including 2 additional constructs, descriptive norm and…

  14. Using Structural Equation Modeling to Assess Functional Connectivity in the Brain: Power and Sample Size Considerations

    ERIC Educational Resources Information Center

    Sideridis, Georgios; Simos, Panagiotis; Papanicolaou, Andrew; Fletcher, Jack

    2014-01-01

    The present study assessed the impact of sample size on the power and fit of structural equation modeling applied to functional brain connectivity hypotheses. The data consisted of time-constrained minimum norm estimates of regional brain activity during performance of a reading task obtained with magnetoencephalography. Power analysis was first…

  15. A Unified Theory of Non-Ideal Gas Lattice Boltzmann Models

    NASA Technical Reports Server (NTRS)

    Luo, Li-Shi

    1998-01-01

    A non-ideal gas lattice Boltzmann model is directly derived, in an a priori fashion, from the Enskog equation for dense gases. The model is rigorously obtained by a systematic procedure to discretize the Enskog equation (in the presence of an external force) in both phase space and time. The lattice Boltzmann model derived here is thermodynamically consistent and is free of the defects which exist in previous lattice Boltzmann models for non-ideal gases. The existing lattice Boltzmann models for non-ideal gases are analyzed and compared with the model derived here.

  16. Separation of cognitive impairments in attention-deficit/hyperactivity disorder into 2 familial factors.

    PubMed

    Kuntsi, Jonna; Wood, Alexis C; Rijsdijk, Frühling; Johnson, Katherine A; Andreou, Penelope; Albrecht, Björn; Arias-Vasquez, Alejandro; Buitelaar, Jan K; McLoughlin, Gráinne; Rommelse, Nanda N J; Sergeant, Joseph A; Sonuga-Barke, Edmund J; Uebel, Henrik; van der Meere, Jaap J; Banaschewski, Tobias; Gill, Michael; Manor, Iris; Miranda, Ana; Mulas, Fernando; Oades, Robert D; Roeyers, Herbert; Rothenberger, Aribert; Steinhausen, Hans-Christoph; Faraone, Stephen V; Asherson, Philip

    2010-11-01

    Attention-deficit/hyperactivity disorder (ADHD) is associated with widespread cognitive impairments, but it is not known whether the apparent multiple impairments share etiological roots or separate etiological pathways exist. A better understanding of the etiological pathways is important for the development of targeted interventions and for identification of suitable intermediate phenotypes for molecular genetic investigations. To determine, by using a multivariate familial factor analysis approach, whether 1 or more familial factors underlie the slow and variable reaction times, impaired response inhibition, and choice impulsivity associated with ADHD. An ADHD and control sibling-pair design. Belgium, Germany, Ireland, Israel, Spain, Switzerland, and the United Kingdom. A total of 1265 participants, aged 6 to 18 years: 464 probands with ADHD and 456 of their siblings (524 with combined-subtype ADHD), and 345 control participants. Performance on a 4-choice reaction time task, a go/no-go inhibition task, and a choice-delay task. The final model consisted of 2 familial factors. The larger factor, reflecting 85% of the familial variance of ADHD, captured 98% to 100% of the familial influences on mean reaction time and reaction time variability. The second, smaller factor, reflecting 13% of the familial variance of ADHD, captured 62% to 82% of the familial influences on commission and omission errors on the go/no-go task. Choice impulsivity was excluded in the final model because of poor fit. The findings suggest the existence of 2 familial pathways to cognitive impairments in ADHD and indicate promising cognitive targets for future molecular genetic investigations. The familial distinction between the 2 cognitive impairments is consistent with recent theoretical models--a developmental model and an arousal-attention model--of 2 separable underlying processes in ADHD. Future research that tests the familial model within a developmental framework may inform developmentally sensitive interventions.

  17. A Comparison of Pseudo-Maximum Likelihood and Asymptotically Distribution-Free Dynamic Factor Analysis Parameter Estimation in Fitting Covariance Structure Models to Block-Toeplitz Matrices Representing Single-Subject Multivariate Time-Series.

    ERIC Educational Resources Information Center

    Molenaar, Peter C. M.; Nesselroade, John R.

    1998-01-01

    Pseudo-Maximum Likelihood (p-ML) and Asymptotically Distribution Free (ADF) estimation methods for estimating dynamic factor model parameters within a covariance structure framework were compared through a Monte Carlo simulation. Both methods appear to give consistent model parameter estimates, but only ADF gives standard errors and chi-square…

  18. A methodology to estimate greenhouse gases emissions in Life Cycle Inventories of wastewater treatment plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodriguez-Garcia, G., E-mail: gonzalo.rodriguez.garcia@usc.es; Hospido, A., E-mail: almudena.hospido@usc.es; Bagley, D.M., E-mail: bagley@uwyo.edu

    2012-11-15

    The main objective of this paper is to present the Direct Emissions Estimation Model (DEEM), a model for the estimation of CO₂ and N₂O emissions from a wastewater treatment plant (WWTP). This model is consistent with non-specific but widely used models such as AS/AD and ASM no. 1 and presents the benefits of simplicity and application over a common WWTP simulation platform, BioWin®, making it suitable for Life Cycle Assessment and Carbon Footprint studies. Its application in a Spanish WWTP indicates direct N₂O emissions to be 8 times larger than those associated with electricity use and thus relevant for LCA. CO₂ emissions can be of similar importance to electricity-associated ones provided that 20% of them are of non-biogenic origin. Highlights: a model has been developed for the estimation of GHG emissions in WWTPs; the model is consistent with both ASM no. 1 and AS/AD; N₂O emissions are 8 times more relevant than those associated with electricity; CO₂ emissions are as important as electricity-associated ones if 20% of them are of non-biogenic origin.

  19. Winter-to-Summer Precipitation Phasing in Southwestern North America: A Multi-Century Perspective from Paleoclimatic Model-Data Comparisons

    NASA Technical Reports Server (NTRS)

    Coats, Sloan; Smerdon, Jason E.; Seager, Richard; Griffin, Daniel; Cook, Benjamin I.

    2015-01-01

    The phasing of winter-to-summer precipitation anomalies in region 2 of the North American monsoon (NAM2; 113.25°W-107.75°W, 30°N-35.25°N) in southwestern North America is analyzed in fully coupled simulations of the Last Millennium and compared to tree-ring reconstructed winter and summer precipitation variability. The models simulate periods with in-phase seasonal precipitation anomalies, but the strength of this relationship is variable on multidecadal time scales, behavior that is also exhibited by the reconstructions. The models, however, are unable to simulate periods with consistently out-of-phase winter-to-summer precipitation anomalies as observed in the latter part of the instrumental interval. The periods with predominantly in-phase winter-to-summer precipitation anomalies in the models are significant against randomness, and while this result is suggestive of a potential for dual-season drought on interannual and longer time scales, the models do not consistently exhibit the persistent dual-season drought seen in the dendroclimatic reconstructions. These collective findings indicate that model-derived drought risk assessments may underestimate the potential for dual-season drought in 21st-century projections of hydroclimate in the American Southwest and parts of Mexico.

  20. Persistent hemifacial spasm after microvascular decompression: a risk assessment model.

    PubMed

    Shah, Aalap; Horowitz, Michael

    2017-06-01

    Microvascular decompression (MVD) for hemifacial spasm (HFS) provides resolution of disabling symptoms such as eyelid twitching and muscle contractions of the entire hemiface. The primary aim of this study was to evaluate the predictive value of patient demographics and spasm characteristics on long-term outcomes, with or without intraoperative lateral spread response (LSR) as an additional variable in a risk assessment model. A retrospective study was undertaken to evaluate the associations of pre-operative patient characteristics, as well as intraoperative LSR and the need for a staged procedure, with the presence of persistent or recurrent HFS at the time of hospital discharge and at follow-up. A risk assessment model was constructed with the inclusion of six clinically or statistically significant variables from the univariate analyses. A receiver operating characteristic curve was generated, and the area under the curve was calculated to determine the strength of the predictive model. A risk assessment model was first created consisting of significant pre-operative variables (Model 1) (age >50, female gender, history of botulinum toxin use, platysma muscle involvement). This model demonstrated borderline predictive value for persistent spasm at discharge (AUC .60; p=.045) and fair predictive value at follow-up (AUC .75; p=.001). Intraoperative variables (e.g. LSR persistence) demonstrated little additive value (Model 2) (AUC .67). Patients with a higher risk score (three or greater) demonstrated greater odds of persistent HFS at the time of discharge (OR 1.5 [95%CI 1.16-1.97]; p=.035), as well as greater odds of persistent or recurrent spasm at the time of follow-up (OR 3.0 [95%CI 1.52-5.95]; p=.002). Conclusions: A risk assessment model consisting of pre-operative clinical characteristics is useful in prognosticating HFS persistence at follow-up.

  1. A three-site gauge model for flavor hierarchies and flavor anomalies

    NASA Astrophysics Data System (ADS)

    Bordone, Marzia; Cornella, Claudia; Fuentes-Martín, Javier; Isidori, Gino

    2018-04-01

    We present a three-site Pati-Salam gauge model able to explain the Standard Model flavor hierarchies while, at the same time, accommodating the recent experimental hints of lepton-flavor non-universality in B decays. The model is consistent with low- and high-energy bounds, and predicts a rich spectrum of new states at the TeV scale that could be probed in the near future by the high-pT experiments at the LHC.

  2. Patient-specific geometrical modeling of orthopedic structures with high efficiency and accuracy for finite element modeling and 3D printing.

    PubMed

    Huang, Huajun; Xiang, Chunling; Zeng, Canjun; Ouyang, Hanbin; Wong, Kelvin Kian Loong; Huang, Wenhua

    2015-12-01

    We improved the geometrical modeling procedure for fast and accurate reconstruction of orthopedic structures. This procedure consists of medical image segmentation, three-dimensional geometrical reconstruction, and assignment of material properties. The patient-specific orthopedic structures reconstructed by this improved procedure can be used in virtual surgical planning, 3D printing of real orthopedic structures, and finite element analysis. A conventional modeling procedure consists of image segmentation, geometrical reconstruction, mesh generation, and assignment of material properties. The present study modified the conventional method to enhance the software operating procedures. Patients' CT images of different bones were acquired and subsequently reconstructed to give models. The reconstruction procedures were three-dimensional image segmentation, modification of the edge length and quantity of meshes, and assignment of material properties according to the gray-value intensity. We compared the performance of our procedure to the conventional modeling procedure in terms of software operating time, success rate and mesh quality. Our proposed framework has the following improvements in the geometrical modeling: (1) processing time (femur: 87.16 ± 5.90 %; pelvis: 80.16 ± 7.67 %; thoracic vertebra: 17.81 ± 4.36 %; P < 0.05); (2) least volume reduction (femur: 0.26 ± 0.06 %; pelvis: 0.70 ± 0.47 %; thoracic vertebra: 3.70 ± 1.75 %; P < 0.01); and (3) mesh quality in terms of aspect ratio (femur: 8.00 ± 7.38 %; pelvis: 17.70 ± 9.82 %; thoracic vertebra: 13.93 ± 9.79 %; P < 0.05) and maximum angle (femur: 4.90 ± 5.28 %; pelvis: 17.20 ± 19.29 %; thoracic vertebra: 3.86 ± 3.82 %; P < 0.05). Our proposed patient-specific geometrical modeling requires less operating time and workload, yet the orthopedic structures were generated at a higher rate of success compared with the conventional method. It is expected to benefit the surgical planning of orthopedic structures with less operating time and high modeling accuracy.

  3. Eurodelta-Trends, a Multi-Model Experiment of Air Quality Hindcast in Europe over 1990-2010. Experiment Design and Key Findings

    NASA Astrophysics Data System (ADS)

    Colette, A.; Ciarelli, G.; Otero, N.; Theobald, M.; Solberg, S.; Andersson, C.; Couvidat, F.; Manders-Groot, A.; Mar, K. A.; Mircea, M.; Pay, M. T.; Raffort, V.; Tsyro, S.; Cuvelier, K.; Adani, M.; Bessagnet, B.; Bergstrom, R.; Briganti, G.; Cappelletti, A.; D'isidoro, M.; Fagerli, H.; Ojha, N.; Roustan, Y.; Vivanco, M. G.

    2017-12-01

    The Eurodelta-Trends multi-model chemistry-transport experiment has been designed to better understand the evolution of air pollution and its drivers over the period 1990-2010 in Europe. The main objective of the experiment is to assess the efficiency of air pollutant emissions mitigation measures in improving regional-scale air quality. The experiment is designed in three tiers with an increasing degree of computational demand, in order to facilitate the participation of as many modelling teams as possible. The basic experiment consists of simulations for the years 1990, 2000 and 2010. It is complemented by sensitivity analyses for the same three years using various combinations of (i) anthropogenic emissions, (ii) chemical boundary conditions and (iii) meteorology. The most demanding tier consists of two complete time series from 1990 to 2010, simulated using either time-varying emissions for the corresponding years or constant emissions. Eight chemistry-transport models have contributed calculation results to at least one experiment tier, and six models have completed the 21-year trend simulations. The modelling results are publicly available for further use by the scientific community. We assess the skill of the models in capturing observed air pollution trends for the 1990-2010 time period. The average relative trends in particulate matter are well captured by the models, even if they display the usual low bias in reproducing absolute levels. Ozone trends are also well reproduced, yet slightly overestimated in the 1990s. The attribution study emphasizes the efficiency of mitigation measures in reducing air pollution over Europe, although a strong impact of long-range transport on ozone trends is pointed out. Meteorological variability is also an important factor in some regions of Europe. The results of the first health and ecosystem impact studies building upon a regional-scale multi-model ensemble over a 20-year time period will also be presented.

  4. Time-Domain Filtering for Spatial Large-Eddy Simulation

    NASA Technical Reports Server (NTRS)

    Pruett, C. David

    1997-01-01

    An approach to large-eddy simulation (LES) is developed whose subgrid-scale model incorporates filtering in the time domain, in contrast to conventional approaches, which exploit spatial filtering. The method is demonstrated in the simulation of a heated, compressible, axisymmetric jet, and results are compared with those obtained from fully resolved direct numerical simulation. The present approach was, in fact, motivated by the jet-flow problem and the desire to manipulate the flow by localized (point) sources for the purposes of noise suppression. Time-domain filtering appears to be more consistent with the modeling of point sources; moreover, time-domain filtering may resolve some fundamental inconsistencies associated with conventional space-filtered LES approaches.
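
    As an illustration of the idea (not of the paper's full subgrid-scale model), a causal exponential time filter can be discretized as a one-pole recursive filter. In the sketch below the filter width, coefficient choice and test signal are all arbitrary.

```python
import numpy as np

# Causal exponential time-domain filter of the kind used for temporal LES:
#   ubar(t) = (1/Delta) * ∫_{-inf}^{t} exp(-(t - s)/Delta) u(s) ds,
# discretized here as a one-pole recursive (exponential moving average) filter.
def time_filter(u, dt, delta):
    a = dt / (delta + dt)          # illustrative discrete filter coefficient
    ubar = np.empty_like(u)
    ubar[0] = u[0]
    for n in range(1, len(u)):
        ubar[n] = ubar[n - 1] + a * (u[n] - ubar[n - 1])
    return ubar

t = np.linspace(0.0, 1.0, 2_001)
u = np.sin(2 * np.pi * 5 * t) + 0.2 * np.sin(2 * np.pi * 200 * t)  # slow + fast scales
ubar = time_filter(u, dt=t[1] - t[0], delta=0.01)   # damps the fast 200 Hz component
```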

  5. Nonequilibrium thermodynamics of the shear-transformation-zone model

    NASA Astrophysics Data System (ADS)

    Luo, Alan M.; Öttinger, Hans Christian

    2014-02-01

    The shear-transformation-zone (STZ) model has been applied numerous times to describe the plastic deformation of different types of amorphous systems. We formulate this model within the general equation for nonequilibrium reversible-irreversible coupling (GENERIC) framework, thereby clarifying the thermodynamic structure of the constitutive equations and guaranteeing thermodynamic consistency. We propose natural, physically motivated forms for the building blocks of the GENERIC, which combine to produce a closed set of time evolution equations for the state variables, valid for any choice of free energy. We demonstrate an application of the new GENERIC-based model by choosing a simple form of the free energy. In addition, we present some numerical results and contrast those with the original STZ equations.

  6. A Fully Coupled Multi-Rigid-Body Fuel Slosh Dynamics Model Applied to the Triana Stack

    NASA Technical Reports Server (NTRS)

    London, K. W.

    2001-01-01

    A somewhat general multibody model is presented that accounts for energy dissipation associated with fuel slosh and which unifies some of the existing, more specialized representations. This model is used to predict the nutation growth time constant for the Triana spacecraft, or Stack, consisting of the Triana Observatory mated with the Gyroscopic Upper Stage, or GUS (which includes the solid rocket motor, SRM, booster). At the nominal spin rate of 60 rpm and with 145 kg of hydrazine propellant on board, a time constant of 116 s is predicted for worst-case sloshing of a spherical slug model, compared to 1,681 s (nominal) and 1,043 s (worst case) for sloshing of a three-degree-of-freedom pendulum model.

  7. Characterization and Computational Modeling of Minor Phases in Alloy LSHR

    NASA Technical Reports Server (NTRS)

    Jou, Herng-Jeng; Olson, Gregory; Gabb, Timothy; Garg, Anita; Miller, Derek

    2012-01-01

    The minor phases of the powder metallurgy disk superalloy LSHR were studied. Samples were consistently heat treated at three different temperatures for long times to approach equilibrium. Additional heat treatments were also performed for shorter times, to assess minor phase kinetics in non-equilibrium conditions. Minor phases including MC carbides, M23C6 carbides, M3B2 borides, and sigma were identified. Their average sizes and total area fractions were determined. CALPHAD thermodynamics databases and PrecipiCalc™, a computational precipitation modeling tool, were employed with Ni-base thermodynamics and diffusion databases to model and simulate the phase microstructural evolution observed in the experiments, with the objective of identifying the model limitations and the directions of model enhancement.

  8. Review of Real-Time Simulator and the Steps Involved for Implementation of a Model from MATLAB/SIMULINK to Real-Time

    NASA Astrophysics Data System (ADS)

    Mikkili, Suresh; Panda, Anup Kumar; Prattipati, Jayanthi

    2015-06-01

    Nowadays researchers want to develop their models in a real-time environment. Simulation tools have been widely used for the design and improvement of electrical systems since the mid twentieth century. The evolution of simulation tools has progressed in step with the evolution of computing technologies. In recent years, computing technologies have improved dramatically in performance and become widely available at a steadily decreasing cost. Consequently, simulation tools have also seen dramatic performance gains and steady cost decreases. Researchers and engineers now have access to affordable, high-performance simulation tools that were previously cost-prohibitive for all but the largest manufacturers. This work introduces a specific class of digital simulator known as a real-time simulator by answering the questions "what is real-time simulation", "why is it needed" and "how does it work". The latest trend in real-time simulation consists of exporting simulation models to FPGAs. In this article, the steps involved in implementing a model from MATLAB/SIMULINK in real time are described in detail.

  9. Evaluating a scalable model for implementing electronic health records in resource-limited settings.

    PubMed

    Were, Martin C; Emenyonu, Nneka; Achieng, Marion; Shen, Changyu; Ssali, John; Masaba, John P M; Tierney, William M

    2010-01-01

    Current models for implementing electronic health records (EHRs) in resource-limited settings may not be scalable because they fail to address human-resource and cost constraints. This paper describes an implementation model which relies on shared responsibility between local sites and an external three-pronged support infrastructure consisting of: (1) a national technical expertise center, (2) an implementers' community, and (3) a developers' community. This model was used to implement an open-source EHR in three Ugandan HIV clinics. A pre-post time-motion study at one site revealed that primary care providers spent a third less time in direct and indirect care of patients (p<0.001) and 40% more time on personal activities (p=0.09) after EHR implementation. Time spent by previously enrolled patients with non-clinician staff fell by half (p=0.004) and with pharmacy by 63% (p<0.001). Surveyed providers were highly satisfied with the EHR and its support infrastructure. This model offers a viable approach for broadly implementing EHRs in resource-limited settings.

  10. Modeling long correlation times using additive binary Markov chains: Applications to wind generation time series.

    PubMed

    Weber, Juliane; Zachow, Christopher; Witthaut, Dirk

    2018-03-01

    Wind power generation exhibits a strong temporal variability, which is crucial for system integration in highly renewable power systems. Different methods exist to simulate wind power generation but they often cannot represent the crucial temporal fluctuations properly. We apply the concept of additive binary Markov chains to model a wind generation time series consisting of two states: periods of high and low wind generation. The only input parameter for this model is the empirical autocorrelation function. The two-state model is readily extended to stochastically reproduce the actual generation per period. To evaluate the additive binary Markov chain method, we introduce a coarse model of the electric power system to derive backup and storage needs. We find that the temporal correlations of wind power generation, the backup need as a function of the storage capacity, and the resting time distribution of high and low wind events for different shares of wind generation can be reconstructed.
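
    A toy version of the two-state generator can be written in a few lines. In the sketch below the memory depth and the memory function F(k) are arbitrary stand-ins; in the method described above the weights would be derived from the empirical autocorrelation function.

```python
import numpy as np

rng = np.random.default_rng(5)

# Additive binary Markov chain of memory N: the probability of the
# "high wind" state is a baseline plus additive contributions from the
# last N states.  The exponentially decaying weights below are a
# placeholder for weights fitted to the empirical ACF.
N, T, p_high = 48, 50_000, 0.3
F = np.exp(-np.arange(1, N + 1) / 12.0)
F *= 0.4 / F.sum()                          # total memory weight 0.4 (< 1)

out = list((rng.random(N) < p_high).astype(int))   # initial history
for _ in range(T):
    hist = np.array(out[-N:][::-1])                # most recent state first
    p = p_high + F @ (hist - p_high)               # additive conditional probability
    out.append(int(rng.random() < np.clip(p, 0.0, 1.0)))

series = np.array(out[N:])
print("mean occupancy of the high-wind state:", series.mean())   # ~0.3
```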

  12. Prevalence of consistent condom use with various types of sex partners and associated factors among money boys in Changsha, China.

    PubMed

    Wang, Lian-Hong; Yan, Jin; Yang, Guo-Li; Long, Shuo; Yu, Yong; Wu, Xi-Lin

    2015-04-01

    Money boys with inconsistent condom use (less than 100% of the time) are at high risk of infection by human immunodeficiency virus (HIV) or sexually transmitted infection (STI), but relatively little research has examined their risk behaviors. We investigated the prevalence of consistent condom use (100% of the time) and associated factors among money boys. A cross-sectional study using a structured questionnaire was conducted among money boys in Changsha, China, between July 2012 and January 2013. Independent variables included socio-demographic data, substance abuse history, work characteristics, and self-reported HIV and STI history. Dependent variables included consistent condom use with different types of sex partners. Among the participants, 82.4% used condoms consistently with male clients, 80.2% with male sex partners, and 77.1% with female sex partners in the past 3 months. A multiple stepwise logistic regression model identified four statistically significant factors associated with lower likelihoods of consistent condom use with male clients: age group, substance abuse, lack of an "employment" arrangement, and having no HIV test within the prior 6 months. In a similar model, only one factor significantly associated with lower likelihoods of consistent condom use with male sex partners was identified: having no HIV test within the prior 6 months. As for female sex partners, two variables were statistically significant in the multiple stepwise logistic regression analysis: having no HIV test within the prior 6 months and having an STI history. Interventions linked with more realistic and acceptable HIV prevention methods are greatly warranted and should increase risk awareness and consistent condom use in both commercial and personal relationships. © 2015 International Society for Sexual Medicine.

  13. Discrete Time Rescaling Theorem: Determining Goodness of Fit for Discrete Time Statistical Models of Neural Spiking

    PubMed Central

    Haslinger, Robert; Pipa, Gordon; Brown, Emery

    2010-01-01

    One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time rescaling theorem provides a goodness-of-fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model's spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov-Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies upon assumptions of continuously defined time and instantaneous events. However, spikes have finite width, and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time rescaling theorem which analytically corrects for the effects of finite resolution. This allows us to define a rescaled time which is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting Generalized Linear Models (GLMs) to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false positive rate of the KS test and greatly increasing the reliability of model evaluation based upon the time rescaling theorem. PMID:20608868

  14. Solar pv fed stand-alone excitation system of a synchronous machine for reactive power generation

    NASA Astrophysics Data System (ADS)

    Sudhakar, N.; Jain, Siddhartha; Jyotheeswara Reddy, K.

    2017-11-01

    This paper presents a model of a stand-alone solar energy conversion system based on a synchronous machine working as a synchronous condenser in an overexcited state. The proposed model consists of a synchronous condenser and a DC/DC boost converter whose output is fed to the field of the synchronous condenser. The boost converter is supplied by the modelled solar panel, and a daytime-variable irradiance is fed to the panel during the simulation time. The model also includes rechargeable batteries as an alternate source for times when the irradiance falls below a threshold value. When there is ample irradiance, the excess power produced is divided into two parts: one is fed to the boost converter while the other is used to recharge the batteries. A simulation is carried out in MATLAB-SIMULINK, and the obtained results show that such a model for supplying reactive power is feasible.

  15. MESOSCOPIC MODELING OF STOCHASTIC REACTION-DIFFUSION KINETICS IN THE SUBDIFFUSIVE REGIME

    PubMed Central

    BLANC, EMILIE; ENGBLOM, STEFAN; HELLANDER, ANDREAS; LÖTSTEDT, PER

    2017-01-01

    Subdiffusion has been proposed as an explanation of various kinetic phenomena inside living cells. In order to facilitate large-scale computational studies of subdiffusive chemical processes, we extend a recently suggested mesoscopic model of subdiffusion into an accurate and consistent reaction-subdiffusion computational framework. Two different possible models of chemical reaction are revealed and some basic dynamic properties are derived. In certain cases those mesoscopic models have a direct interpretation at the macroscopic level as fractional partial differential equations in a bounded time interval. Through analysis and numerical experiments we estimate the macroscopic effects of reactions under subdiffusive mixing. The models display properties observed also in experiments: for a short time interval the behavior of the diffusion and the reaction is ordinary, in an intermediate interval the behavior is anomalous, and at long times the behavior is ordinary again. PMID:29046618
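
    The subdiffusive regime itself is easy to reproduce with a continuous-time random walk: heavy-tailed waiting times with tail exponent 0 < alpha < 1 give a mean-square displacement growing like t^alpha rather than t. The following sketch (with arbitrary parameters) illustrates that regime only; it is not the paper's mesoscopic reaction-subdiffusion framework.

```python
import numpy as np

rng = np.random.default_rng(6)

# Continuous-time random walk: unit jumps separated by heavy-tailed
# (Pareto-type) waiting times with tail exponent 0 < alpha < 1, which
# produces subdiffusion, MSD(t) ~ t**alpha, instead of ordinary MSD ~ t.
alpha, n_walkers, n_jumps = 0.6, 2_000, 1_500
waits = 1.0 + rng.pareto(alpha, (n_walkers, n_jumps))     # waiting times >= 1
t = np.cumsum(waits, axis=1)                              # jump times
x = np.cumsum(rng.choice([-1.0, 1.0], (n_walkers, n_jumps)), axis=1)

for t_obs in (1e2, 1e3, 1e4):
    idx = (t <= t_obs).sum(axis=1) - 1      # last jump before t_obs (-1: none yet)
    # Walkers that exhaust their simulated jumps early keep their final
    # position, a small truncation approximation.
    pos = np.where(idx >= 0, x[np.arange(n_walkers), np.clip(idx, 0, None)], 0.0)
    print(f"t = {t_obs:8.0f}   MSD = {np.mean(pos**2):8.2f}")   # grows ~ t**0.6
```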

  16. A Note on the Problem of Proper Time in Weyl Space-Time

    NASA Astrophysics Data System (ADS)

    Avalos, R.; Dahia, F.; Romero, C.

    2018-02-01

    We discuss the question of whether or not a general Weyl structure is a suitable mathematical model of space-time. This is an issue that has been debated since Weyl formulated his unified field theory for the first time. We do not present the discussion from the point of view of a particular unification theory, but instead from a more general standpoint in which the viability of such a structure as a model of space-time is investigated. Our starting point is the well-known axiomatic approach to space-time given by Ehlers, Pirani and Schild (EPS). In this framework, we carry out an exhaustive analysis of what is required for a consistent definition of proper time and show that such a definition leads to the prediction of the so-called "second clock effect". We take the view that if, based on experience, we were to reject space-time models predicting this effect, this could be incorporated as the last axiom in the EPS approach. Finally, we provide a proof that, in this case, we are led to a Weyl integrable space-time as the most general structure that would be suitable to model space-time.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Callahan, M.A.

    Three major issues to be dealt with over the next ten years in the exposure assessment field are: consistency in terminology, the impact of computer technology on the choice of data and modeling, and conceptual issues such as the use of time-weighted averages.

  18. Improving Gastric Cancer Outcome Prediction Using Single Time-Point Artificial Neural Network Models

    PubMed Central

    Nilsaz-Dezfouli, Hamid; Abu-Bakar, Mohd Rizam; Arasan, Jayanthi; Adam, Mohd Bakri; Pourhoseingholi, Mohamad Amin

    2017-01-01

    In cancer studies, the prediction of cancer outcome based on a set of prognostic variables has been a long-standing topic of interest. Current statistical methods for survival analysis offer the possibility of modelling cancer survivability but require unrealistic assumptions about the survival time distribution or proportionality of hazard. Therefore, attention must be paid in developing nonlinear models with less restrictive assumptions. Artificial neural network (ANN) models are primarily useful in prediction when nonlinear approaches are required to sift through the plethora of available information. The applications of ANN models for prognostic and diagnostic classification in medicine have attracted a lot of interest. The applications of ANN models in modelling the survival of patients with gastric cancer have been discussed in some studies without completely considering the censored data. This study proposes an ANN model for predicting gastric cancer survivability, considering the censored data. Five separate single time-point ANN models were developed to predict the outcome of patients after 1, 2, 3, 4, and 5 years. The performance of ANN model in predicting the probabilities of death is consistently high for all time points according to the accuracy and the area under the receiver operating characteristic curve. PMID:28469384

  19. Oceanic Fluxes of Mass, Heat and Freshwater: A Global Estimate and Perspective

    NASA Technical Reports Server (NTRS)

    MacDonald, Alison Marguerite

    1995-01-01

    Data from fifteen globally distributed, modern, high-resolution hydrographic oceanic transects are combined in an inverse calculation using large-scale box models. The models provide estimates of the global meridional heat and freshwater budgets and are used to examine the sensitivity of the global circulation, both inter- and intra-basin exchange rates, to a variety of external constraints provided by estimates of Ekman, boundary current and throughflow transports. A solution is found which is consistent with both the model physics and the global data set, despite a twenty-five-year time span and a lack of seasonal consistency among the data. The overall pattern of the global circulation suggested by the models is similar to that proposed in previously published local studies and regional reviews. However, significant qualitative and quantitative differences exist. These differences are due both to the model definition and to the global nature of the data set.

  20. Nonlinear ARMA models for the D(st) index and their physical interpretation

    NASA Technical Reports Server (NTRS)

    Vassiliadis, D.; Klimas, A. J.; Baker, D. N.

    1996-01-01

    Time series models successfully reproduce or predict geomagnetic activity indices from solar wind parameters. A method is presented that converts a type of nonlinear filter, the nonlinear Autoregressive Moving Average (ARMA) model, into a physical model, the nonlinear damped oscillator. The oscillator parameters, the growth and decay rates, the oscillation frequencies and the coupling strength to the input, are derived from the filter coefficients. Mathematical methods are derived to obtain unique and consistent filter coefficients while keeping the prediction error low. These methods are applied to an oscillator model for the Dst geomagnetic index driven by the solar wind input. A data set is examined in two ways: first, the model parameters are calculated as averages over short time intervals; second, a nonlinear ARMA model is fitted and its parameters are derived as functions of position in phase space.
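
    The filter-to-oscillator mapping can be illustrated on synthetic data: fit an AR(2) filter by least squares, then read the decay rate and oscillation frequency off the complex roots of the characteristic polynomial. The toy series below merely stands in for the Dst index; the "true" parameters are assumptions chosen for the demonstration.

    ```python
    # Sketch of recovering damped-oscillator parameters from AR(2)
    # filter coefficients (synthetic data, invented parameters).
    import numpy as np

    dt, gamma, omega = 1.0, 0.05, 0.5        # assumed decay rate and frequency
    n = 5000
    rng = np.random.default_rng(1)
    x = np.zeros(n)
    v = np.zeros(n)
    for t in range(n - 1):                   # noise-driven damped oscillator
        a = -2 * gamma * v[t] - omega**2 * x[t] + rng.normal(0, 0.5)
        v[t + 1] = v[t] + dt * a
        x[t + 1] = x[t] + dt * v[t + 1]

    # Least-squares AR(2) fit: x_t = a1 x_{t-1} + a2 x_{t-2} + e_t
    X = np.column_stack([x[1:-1], x[:-2]])
    a1, a2 = np.linalg.lstsq(X, x[2:], rcond=None)[0]

    # Characteristic roots z of z^2 - a1 z - a2 = 0 give decay and frequency
    z = np.roots([1.0, -a1, -a2])[0]
    print("decay rate :", -np.log(abs(z)) / dt)   # compare with gamma
    print("frequency  :", abs(np.angle(z)) / dt)  # compare with omega
    ```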

  1. Modeling Rabbit Responses to Single and Multiple Aerosol ...

    EPA Pesticide Factsheets

    Survival models are developed here to predict response and time-to-response for mortality in rabbits following exposures to single or multiple aerosol doses of Bacillus anthracis spores. Hazard function models were developed for a multiple-dose dataset to predict the probability of death by specifying dose-response functions and the distribution of the time between exposure and death (time-to-death, TTD). Among the models developed, the best-fitting survival model (baseline model) has an exponential dose-response model with a Weibull TTD distribution. Alternative models assessed employ different underlying dose-response functions and use the assumption that, in a multiple dose scenario, earlier doses affect the hazard functions of each subsequent dose. In addition, published mechanistic models are analyzed and compared with the models developed in this paper. None of the alternative models that were assessed provided a statistically significant improvement in fit over the baseline model. The general approach utilizes simple empirical data analysis to develop parsimonious models with limited reliance on mechanistic assumptions. The baseline model predicts TTDs consistent with reported results from three independent high-dose rabbit datasets. More accurate survival models depend upon future development of dose-response datasets specifically designed to assess potential multiple dose effects on response and time-to-response. The process used in this paper to dev
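
    A hedged sketch of the baseline model family named above, with invented parameter values: an exponential dose-response for the probability of death combined with a Weibull time-to-death distribution.

    ```python
    # Toy simulator: exponential dose-response + Weibull TTD.
    # All parameter values are hypothetical, not fitted to the paper's data.
    import numpy as np

    rng = np.random.default_rng(2)
    k = 2e-6                   # per-spore risk parameter (hypothetical)
    shape, scale = 1.8, 4.0    # Weibull TTD parameters in days (hypothetical)

    def simulate(dose, n_animals=1000):
        p_death = 1.0 - np.exp(-k * dose)            # exponential dose-response
        dies = rng.random(n_animals) < p_death
        ttd = scale * rng.weibull(shape, n_animals)  # time-to-death (days)
        return dies, np.where(dies, ttd, np.inf)

    dies, ttd = simulate(dose=1e6)
    print(f"mortality: {dies.mean():.2%}, "
          f"median TTD of deaths: {np.median(ttd[dies]):.1f} d")
    ```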

  2. Stochastic Stability of Sampled Data Systems with a Jump Linear Controller

    NASA Technical Reports Server (NTRS)

    Gonzalez, Oscar R.; Herencia-Zapana, Heber; Gray, W. Steven

    2004-01-01

    In this paper an equivalence between the stochastic stability of a sampled-data system and its associated discrete-time representation is established. The sampled-data system consists of a deterministic, linear, time-invariant, continuous-time plant and a stochastic, linear, time-invariant, discrete-time, jump linear controller. The jump linear controller models computer systems and communication networks that are subject to stochastic upsets or disruptions. This sampled-data model has been used in the analysis and design of fault-tolerant systems and computer-control systems with random communication delays without taking into account the inter-sample response. This paper shows that the known equivalence between the stability of a deterministic sampled-data system and the associated discrete-time representation holds even in a stochastic framework.
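
    The discrete-time representation and a second-moment stability check can be sketched for the simplest case of i.i.d. controller jumps (the paper's setting is more general). The plant, gains, and jump probabilities below are invented; the test used is the standard condition that the spectral radius of the probability-weighted sum of Kronecker products be less than one.

    ```python
    # Sketch: zero-order-hold discretization of a continuous plant, then a
    # mean-square stability test when the controller gain jumps i.i.d.
    # between two modes (nominal gain vs. an upset that drops the gain).
    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, 1.0], [-1.0, -0.2]])
    B = np.array([[0.0], [1.0]])
    h = 0.1                                   # sampling period

    # ZOH discretization via the augmented-matrix exponential
    M = expm(np.block([[A, B], [np.zeros((1, 2)), np.zeros((1, 1))]]) * h)
    Ad, Bd = M[:2, :2], M[:2, 2:]

    K_nominal = np.array([[3.0, 2.0]])        # invented gains
    K_upset = np.zeros((1, 2))
    p = [0.9, 0.1]                            # mode probabilities

    # MSS for i.i.d. jumps: rho( sum_i p_i * Acl_i (x) Acl_i ) < 1
    S = sum(pi * np.kron(Ad - Bd @ K, Ad - Bd @ K)
            for pi, K in zip(p, [K_nominal, K_upset]))
    rho = max(abs(np.linalg.eigvals(S)))
    print("second-moment spectral radius:", round(rho, 4), "(MSS iff < 1)")
    ```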

  3. Specifying the role of exposure to violence and violent behavior on initiation of gun carrying: a longitudinal test of three models of youth gun carrying.

    PubMed

    Spano, Richard; Pridemore, William Alex; Bolland, John

    2012-01-01

    Two waves of longitudinal data from 1,049 African American youth living in extreme poverty are used to examine the impact of exposure to violence (Time 1) and violent behavior (Time 1) on first time gun carrying (Time 2). Multivariate logistic regression results indicate that (a) violent behavior (Time 1) increased the likelihood of initiation of gun carrying (Time 2) by 76% after controlling for exposure to violence at Time 1, which is consistent with the stepping stone model of youth gun carrying, and (b) youth who were both exposed to violence at Time 1 and engaged in violent behavior at Time 1 were more than 2.5 times more likely to initiate gun carrying at Time 2 compared to youth who had neither of these characteristics, which supports the cumulative risk model of youth gun carrying. The authors discuss the implications of these findings in clarifying the role of violence in the community on youth gun carrying and the primary prevention of youth gun violence.

  4. Modeling stream fish distributions using interval-censored detection times.

    PubMed

    Ferreira, Mário; Filipe, Ana Filipa; Bardos, David C; Magalhães, Maria Filomena; Beja, Pedro

    2016-08-01

    Controlling for imperfect detection is important for developing species distribution models (SDMs). Occupancy-detection models based on the time needed to detect a species can be used to address this problem, but this is hindered when times to detection are not known precisely. Here, we extend the time-to-detection model to deal with detections recorded in time intervals and illustrate the method using a case study on stream fish distribution modeling. We collected electrofishing samples of six fish species across a Mediterranean watershed in Northeast Portugal. Based on a Bayesian hierarchical framework, we modeled the probability of water presence in stream channels, and the probability of species occupancy conditional on water presence, in relation to environmental and spatial variables. We also modeled time-to-first detection conditional on occupancy in relation to local factors, using modified interval-censored exponential survival models. Posterior distributions of occupancy probabilities derived from the models were used to produce species distribution maps. Simulations indicated that the modified time-to-detection model provided unbiased parameter estimates despite interval-censoring. There was a tendency for spatial variation in detection rates to be primarily influenced by depth and, to a lesser extent, stream width. Species occupancies were consistently affected by stream order, elevation, and annual precipitation. Bayesian P-values and AUCs indicated that all models had adequate fit and high discrimination ability, respectively. Mapping of predicted occupancy probabilities showed widespread distribution by most species, but uncertainty was generally higher in tributaries and upper reaches. The interval-censored time-to-detection model provides a practical solution to model occupancy-detection when detections are recorded in time intervals. This modeling framework is useful for developing SDMs while controlling for variation in detection rates, as it uses simple data that can be readily collected by field ecologists.
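
    The interval-censored likelihood has a simple closed form in the exponential case, which the following toy sketch (simulated data, invented parameters) fits by maximum likelihood: a detection recorded in an interval contributes the probability mass of that interval, and a survey with no detection contributes an occupancy mixture term.

    ```python
    # Toy interval-censored exponential occupancy-detection model: a
    # detection in (t1, t2] contributes psi*(exp(-lam*t1) - exp(-lam*t2));
    # a survey of length T with no detection contributes
    # (1 - psi) + psi*exp(-lam*T). Data are simulated.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(3)
    psi_true, lam_true, T = 0.6, 0.8, 3.0
    edges = np.arange(0.0, T + 0.5, 0.5)          # 0.5-unit recording intervals

    occupied = rng.random(400) < psi_true
    t_det = np.where(occupied, rng.exponential(1 / lam_true, 400), np.inf)
    detected = t_det <= T
    lo = edges[np.searchsorted(edges, t_det[detected], side="right") - 1]
    hi = lo + 0.5                                  # (lo, hi] brackets detection

    def nll(theta):
        psi = 1 / (1 + np.exp(-theta[0]))          # logit-constrained psi
        lam = np.exp(theta[1])                     # log-constrained rate
        ll_det = np.log(psi * (np.exp(-lam * lo) - np.exp(-lam * hi)))
        ll_non = np.log((1 - psi) + psi * np.exp(-lam * T))
        return -(ll_det.sum() + (~detected).sum() * ll_non)

    fit = minimize(nll, x0=[0.0, 0.0])
    print("psi_hat:", 1 / (1 + np.exp(-fit.x[0])), "lam_hat:", np.exp(fit.x[1]))
    ```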

  5. Scaling and efficiency determine the irreversible evolution of a market

    PubMed Central

    Baldovin, F.; Stella, A. L.

    2007-01-01

    In setting up a stochastic description of the time evolution of a financial index, the challenge consists in devising a model compatible with all stylized facts emerging from the analysis of financial time series and providing a reliable basis for simulating such series. Based on constraints imposed by market efficiency and on an inhomogeneous-time generalization of standard simple scaling, we propose an analytical model which accounts simultaneously for empirical results like the linear decorrelation of successive returns, the power law dependence on time of the volatility autocorrelation function, and the multiscaling associated with this dependence. In addition, our approach gives a justification and a quantitative assessment of the irreversible character of the index dynamics. This irreversibility enters as a key ingredient in a novel simulation strategy of index evolution which demonstrates the predictive potential of the model.

  6. Regression analysis of sparse asynchronous longitudinal data

    PubMed Central

    Cao, Hongyuan; Zeng, Donglin; Fine, Jason P.

    2015-01-01

    We consider estimation of regression models for sparse asynchronous longitudinal observations, where time-dependent responses and covariates are observed intermittently within subjects. Unlike with synchronous data, where the response and covariates are observed at the same time point, with asynchronous data the observation times are mismatched. Simple kernel-weighted estimating equations are proposed for generalized linear models with either time-invariant or time-dependent coefficients under smoothness assumptions for the covariate processes which are similar to those for synchronous data. For models with either time-invariant or time-dependent coefficients, the estimators are consistent and asymptotically normal but converge at slower rates than those achieved with synchronous data. Simulation studies show that the methods perform well with realistic sample sizes and may be superior to a naive application of methods for synchronous data based on an ad hoc last-value-carried-forward approach. The practical utility of the methods is illustrated on data from a study on human immunodeficiency virus. PMID:26568699
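
    The kernel-weighting idea can be sketched in a few lines: every (response, covariate) pair within a subject enters the estimating equation, down-weighted by a Gaussian kernel in the time mismatch. The data-generating process and bandwidth below are invented.

    ```python
    # Toy kernel-weighted estimating equation for asynchronous data:
    # responses Y at times s, covariates X at times t, never simultaneous.
    import numpy as np

    rng = np.random.default_rng(4)
    beta_true, h = 1.5, 0.1                        # h: kernel bandwidth
    x_true = lambda u: np.sin(2 * np.pi * u)       # smooth latent covariate

    rows = []
    for _ in range(200):                           # 200 subjects
        s = rng.uniform(0, 1, 3)                   # response times
        t = rng.uniform(0, 1, 3)                   # covariate times (mismatched)
        y = beta_true * x_true(s) + rng.normal(0, 0.2, 3)
        x = x_true(t) + rng.normal(0, 0.1, 3)
        for si, yi in zip(s, y):
            for ti, xi in zip(t, x):
                w = np.exp(-0.5 * ((si - ti) / h) ** 2)  # Gaussian kernel
                rows.append((w, xi, yi))

    w, xv, yv = map(np.array, zip(*rows))
    beta_hat = np.sum(w * xv * yv) / np.sum(w * xv * xv)  # weighted EE root
    print("beta_hat:", round(beta_hat, 3))
    ```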

  7. Conduction Delay Learning Model for Unsupervised and Supervised Classification of Spatio-Temporal Spike Patterns

    PubMed Central

    Matsubara, Takashi

    2017-01-01

    Precise spike timing is considered to play a fundamental role in communications and signal processing in biological neural networks. Understanding the mechanism of spike timing adjustment would deepen our understanding of biological systems and enable advanced engineering applications such as efficient computational architectures. However, the biological mechanisms that adjust and maintain spike timing remain unclear. Existing algorithms adopt a supervised approach, which adjusts the axonal conduction delay and synaptic efficacy until the spike timings approximate the desired timings. This study proposes a spike timing-dependent learning model that adjusts the axonal conduction delay and synaptic efficacy in both unsupervised and supervised manners. The proposed learning algorithm approximates the Expectation-Maximization algorithm, and classifies the input data encoded into spatio-temporal spike patterns. Even in the supervised classification, the algorithm requires no external spikes indicating the desired spike timings unlike existing algorithms. Furthermore, because the algorithm is consistent with biological models and hypotheses found in existing biological studies, it could capture the mechanism underlying biological delay learning. PMID:29209191

  9. Chaotic itinerancy and power-law residence time distribution in stochastic dynamical systems.

    PubMed

    Namikawa, Jun

    2005-08-01

    Chaotic itinerant motion among varieties of ordered states is described by a stochastic model based on the mechanism of chaotic itinerancy. The model consists of a random walk on a half-line and a Markov chain with a transition probability matrix. The stability of attractor ruins in the model is investigated by analyzing the residence-time distribution of orbits at the ruins. It is shown that the residence-time distribution averaged over all attractor ruins can be described by the superposition of (truncated) power-law distributions if the basin of attraction for each attractor ruin has zero measure. This result is confirmed by simulation of models exhibiting chaotic itinerancy. Chaotic itinerancy is also shown to be absent in coupled Milnor attractor systems if the transition probability among attractor ruins can be represented as a Markov chain.
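
    A toy rendition of the two ingredients named above (simulation only, with an invented transition matrix): within a ruin, the residence time is modeled as the first-return time of an unbiased random walk on the half-line, which has a power-law tail, and a Markov chain then selects the next ruin.

    ```python
    # Toy chaotic-itinerancy model: heavy-tailed residence times from
    # first-return times of an unbiased walk, plus Markov itinerancy.
    import numpy as np

    rng = np.random.default_rng(5)
    P = np.array([[0.0, 0.7, 0.3],      # invented transitions among 3 ruins
                  [0.5, 0.0, 0.5],
                  [0.4, 0.6, 0.0]])

    def residence_time(max_steps=10**6):
        pos, t = 1, 1                   # walk starts one step into the half-line
        while pos > 0 and t < max_steps:
            pos += rng.choice((-1, 1))
            t += 1
        return t

    state, times = 0, []
    for _ in range(3000):
        times.append(residence_time())
        state = rng.choice(3, p=P[state])     # itinerate to the next ruin

    times = np.array(times)
    for tau in (10, 100, 1000):
        print(f"P(T > {tau}) = {np.mean(times > tau):.3f}")  # ~ tau**-0.5 tail
    ```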

  10. Minimal universal quantum heat machine.

    PubMed

    Gelbwaser-Klimovsky, D; Alicki, R; Kurizki, G

    2013-01-01

    In traditional thermodynamics the Carnot cycle yields the ideal performance bound of heat engines and refrigerators. We propose and analyze a minimal model of a heat machine that can play a similar role in quantum regimes. The minimal model consists of a single two-level system with periodically modulated energy splitting that is permanently, weakly, coupled to two spectrally separated heat baths at different temperatures. The equation of motion allows us to compute the stationary power and heat currents in the machine consistent with the second law of thermodynamics. This dual-purpose machine can act as either an engine or a refrigerator (heat pump) depending on the modulation rate. In both modes of operation, the maximal Carnot efficiency is reached at zero power. We study the conditions for finite-time optimal performance for several variants of the model. Possible realizations of the model are discussed.

  11. A systems analysis of the erythropoietic responses to weightlessness. Volume 2: Description of the model of erythropoiesis regulation. Part A: Model for regulation of erythropoiesis. Part B: Detailed description of the model for regulation of erythropoiesis

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1985-01-01

    A mathematical model of erythropoiesis and its regulation of total red blood cell mass is presented. The loss of red cell mass has been a consistent finding during space flight. Computer simulation of this phenomenon required a model that could account for oxygen transport, red cell production, and red cell destruction. The elements incorporated into the feedback regulation loop of the model are based on the accepted concept that erythrocyte production is governed by the balance between oxygen supply and demand in the body. The mechanisms and pathways of the control circuit include oxygenation of hemoglobin and oxygenation of tissues by blood transport and diffusional processes. Other features of the model include a variable oxygen-hemoglobin affinity, and time delays which represent the time for erythropoietin (erythrocyte-stimulating hormone) distribution in plasma and the time for maturation of the erythrocytes in bone marrow.
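
    The feedback loop described above can be caricatured in a few lines: production is driven by the gap between oxygen demand and supply (supply taken proportional to red cell mass), the maturation delay is approximated by a short chain of compartments, and destruction removes cells over a mean lifespan. All parameters are invented for illustration.

    ```python
    # Toy erythropoiesis feedback loop with an Erlang-chain approximation
    # of the bone-marrow maturation delay. Parameters are invented.
    import numpy as np
    from scipy.integrate import solve_ivp

    demand, tau_life, tau_mat, gain = 1.0, 120.0, 5.0, 2.0  # days, arb. units

    def rhs(t, y):
        m1, m2, m3, rbc = y                 # 3 maturation stages + circulating
        deficit = max(demand - rbc, 0.0)    # oxygen supply ~ red cell mass
        production = gain * deficit         # erythropoietin-mediated drive
        k = 3.0 / tau_mat                   # Erlang(3) delay approximation
        return [production - k * m1,
                k * m1 - k * m2,
                k * m2 - k * m3,
                k * m3 - rbc / tau_life]    # destruction over mean lifespan

    sol = solve_ivp(rhs, (0, 300), [0, 0, 0, 0.8], max_step=0.5)
    print("steady circulating mass:", round(sol.y[3, -1], 3))
    ```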

  12. Automated Reconstruction of Historic Roof Structures from Point Clouds - Development and Examples

    NASA Astrophysics Data System (ADS)

    Pöchtrager, M.; Styhler-Aydın, G.; Döring-Williams, M.; Pfeifer, N.

    2017-08-01

    The analysis of historic roof constructions is an important task for planning the adaptive reuse of buildings or for maintenance and restoration issues. Current approaches to modeling roof constructions consist of several consecutive operations that need to be done manually or using semi-automatic routines. To increase efficiency and allow the focus to be on analysis rather than on data processing, a set of methods was developed for the fully automated analysis of roof constructions, including integration of architectural and structural modeling. Terrestrial laser scanning permits high-detail surveying of large-scale structures within a short time. Whereas 3-D laser scan data consist of millions of single points on the object surface, we need a geometric description of structural elements in order to obtain a structural model consisting of beam axes and connections. Preliminary results showed that the developed methods work well for beams in flawless condition with a quadratic cross section and no bending. Deformations or damages such as cracks and cuts on the wooden beams can lead to incomplete representations in the model. Overall, a high degree of automation was achieved.

  13. Modeling and Simulation of High Resolution Optical Remote Sensing Satellite Geometric Chain

    NASA Astrophysics Data System (ADS)

    Xia, Z.; Cheng, S.; Huang, Q.; Tian, G.

    2018-04-01

    High-resolution satellites with longer focal lengths and larger apertures have been widely used for georeferencing observed scenes in recent years. A consistent end-to-end model of the high-resolution remote sensing satellite geometric chain is presented, which consists of the scene, the three-line-array camera, the platform (including attitude and position information), the time system and the processing algorithm. The integrated design of the camera and the star tracker is considered, and a simulation method for geolocation accuracy is put forward by introducing a new index, the angle between the camera and the star tracker. The model is validated by rigorously simulating geolocation accuracy according to the test method for ZY-3 satellite imagery. The simulation results show that the geolocation accuracy is within 25 m, which is highly consistent with the test results. The geolocation accuracy can be improved by about 7 m by the integrated design. The model, combined with the simulation method, is applicable to estimating geolocation accuracy before satellite launch.

  14. Radar Unix: a complete package for GPR data processing

    NASA Astrophysics Data System (ADS)

    Grandjean, Gilles; Durand, Herve

    1999-03-01

    A complete package for ground-penetrating radar data interpretation, including data processing, forward modeling and consultation of a case-history database, is presented. Running on a Unix operating system, its architecture consists of a graphical user interface generating batch files that are transmitted to a library of processing routines. This design allows better software maintenance and gives users the possibility of running processing or modeling batch files themselves, deferred in time. A case-history database is available; it consists of a hypertext document which can be consulted using a standard HTML browser. All the software specifications are presented through a realistic example.

  15. Theta oscillations promote temporal sequence learning.

    PubMed

    Crivelli-Decker, Jordan; Hsieh, Liang-Tien; Clarke, Alex; Ranganath, Charan

    2018-05-17

    Many theoretical models suggest that neural oscillations play a role in learning or retrieval of temporal sequences, but the extent to which oscillations support sequence representation remains unclear. To address this question, we used scalp electroencephalography (EEG) to examine oscillatory activity over learning of different object sequences. Participants made semantic decisions on each object as they were presented in a continuous stream. For three "Consistent" sequences, the order of the objects was always fixed. Activity during Consistent sequences was compared to "Random" sequences that consisted of the same objects presented in a different order on each repetition. Over the course of learning, participants made faster semantic decisions to objects in Consistent, as compared to objects in Random sequences. Thus, participants were able to use sequence knowledge to predict upcoming items in Consistent sequences. EEG analyses revealed decreased oscillatory power in the theta (4-7 Hz) band at frontal sites following decisions about objects in Consistent sequences, as compared with objects in Random sequences. The theta power difference between Consistent and Random only emerged in the second half of the task, as participants were more effectively able to predict items in Consistent sequences. Moreover, we found increases in parieto-occipital alpha (10-13 Hz) and beta (14-28 Hz) power during the pre-response period for objects in Consistent sequences, relative to objects in Random sequences. Linear mixed effects modeling revealed that single trial theta oscillations were related to reaction time for future objects in a sequence, whereas beta and alpha oscillations were only predictive of reaction time on the current trial. These results indicate that theta and alpha/beta activity preferentially relate to future and current events, respectively. More generally our findings highlight the importance of band-specific neural oscillations in the learning of temporal order information. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  16. Constructing a Time-Invariant Measure of the Socio-economic Status of U.S. Census Tracts.

    PubMed

    Miles, Jeremy N; Weden, Margaret M; Lavery, Diana; Escarce, José J; Cagney, Kathleen A; Shih, Regina A

    2016-02-01

    Contextual research on time and place requires a consistent measurement instrument for neighborhood conditions in order to make unbiased inferences about neighborhood change. We develop such a time-invariant measure of neighborhood socio-economic status (NSES) using exploratory and confirmatory factor analyses fit to census data at the tract level from the 1990 and 2000 U.S. Censuses and the 2008-2012 American Community Survey. A single factor model fit the data well at all three time periods, and factor loadings--but not indicator intercepts--could be constrained to equality over time without decrement to fit. After addressing remaining longitudinal measurement bias, we found that NSES increased from 1990 to 2000, and then--consistent with the timing of the "Great Recession"--declined in 2008-2012 to a level approaching that of 1990. Our approach for evaluating and adjusting for time-invariance is not only instructive for studies of NSES but also more generally for longitudinal studies in which the variable of interest is a latent construct.

  17. Steady state magnetic field configurations for the earth's magnetotail

    NASA Technical Reports Server (NTRS)

    Hau, L.-N.; Wolf, R. A.; Voigt, G.-H.; Wu, C. C.

    1989-01-01

    A two-dimensional, force-balance magnetic field model is presented. The theoretical existence of a steady state magnetic field configuration that is force-balanced and consistent with slow, lossless, adiabatic, earthward convection within the limit of ideal MHD is demonstrated. A numerical solution is obtained for a two-dimensional magnetosphere with a rectangular magnetopause and nonflaring tail. The results are consistent with the convection time sequences reported by Erickson (1985).

  18. The space of ultrametric phylogenetic trees.

    PubMed

    Gavryushkin, Alex; Drummond, Alexei J

    2016-08-21

    The reliability of a phylogenetic inference method from genomic sequence data is ensured by its statistical consistency. Bayesian inference methods produce a sample of phylogenetic trees from the posterior distribution given sequence data. Hence the question of statistical consistency of such methods is equivalent to the consistency of the summary of the sample. More generally, statistical consistency is ensured by the tree space used to analyse the sample. In this paper, we consider two standard parameterisations of phylogenetic time-trees used in evolutionary models: inter-coalescent interval lengths and absolute times of divergence events. For each of these parameterisations we introduce a natural metric space on ultrametric phylogenetic trees. We compare the introduced spaces with existing models of tree space and formulate several formal requirements that a metric space on phylogenetic trees must possess in order to be a satisfactory space for statistical analysis, and justify them. We show that only a few known constructions of the space of phylogenetic trees satisfy these requirements. However, our results suggest that these basic requirements are not enough to distinguish between the two metric spaces we introduce and that the choice between metric spaces requires additional properties to be considered. In particular, the summary tree minimising the squared distance to the trees in the sample might differ between parameterisations. This suggests that further fundamental insight is needed into the problem of statistical consistency of phylogenetic inference methods. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
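
    The difference between the two parameterisations is easy to see numerically. In the sketch below (invented divergence times, fixed ranked topology), the L2 distance between two trees differs depending on whether trees are coded by absolute divergence times or by inter-coalescent interval lengths, so summaries minimising squared distance can differ too.

    ```python
    # Demo: the same pair of ultrametric trees, two coordinate systems.
    import numpy as np

    # Divergence times of two trees (sorted, most recent first; invented)
    t_a = np.array([1.0, 2.0, 5.0])
    t_b = np.array([1.5, 3.5, 4.0])

    def intervals(t):
        # inter-coalescent interval lengths: successive time differences
        return np.diff(np.concatenate([[0.0], t]))

    d_times = np.linalg.norm(t_a - t_b)
    d_intervals = np.linalg.norm(intervals(t_a) - intervals(t_b))
    print("distance in absolute-times coordinates :", round(d_times, 3))
    print("distance in interval-length coordinates:", round(d_intervals, 3))
    ```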

  19. Model selection and constraints from holographic dark energy scenarios

    NASA Astrophysics Data System (ADS)

    Akhlaghi, I. A.; Malekjani, M.; Basilakos, S.; Haghi, H.

    2018-07-01

    In this study, we combine the expansion and the growth data in order to investigate the ability of the three most popular holographic dark energy models, namely event future horizon, Ricci scale, and Granda-Oliveros IR cutoffs, to fit the data. Using a standard χ2 minimization method, we place tight constraints on the free parameters of the models. Based on the values of the Akaike and Bayesian information criteria, we find that two out of three holographic dark energy models are disfavoured by the data, because they predict a non-negligible amount of fractional dark energy density at early enough times. Although the growth rate data are relatively consistent with the holographic dark energy models which are based on Ricci scale and Granda-Oliveros IR cutoffs, the combined analysis provides strong indications against these models. Finally, we find that the model for which the holographic dark energy is related with the future horizon is consistent with the combined observational data.
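
    The model-selection machinery used here can be sketched on a toy problem: minimize χ² for an assumed expansion history H(z) against synthetic data, then score the fit with AIC = χ²_min + 2k and BIC = χ²_min + k ln N. The model form, data, and uncertainties below are illustrative only.

    ```python
    # Hedged sketch: chi^2 fit of a toy expansion history plus AIC/BIC.
    import numpy as np
    from scipy.optimize import minimize

    z = np.linspace(0.05, 1.5, 25)
    H0_true, Om_true = 70.0, 0.3                 # invented fiducial values
    H_obs = H0_true * np.sqrt(Om_true * (1 + z) ** 3 + 1 - Om_true)
    H_obs = H_obs + np.random.default_rng(6).normal(0, 2.0, z.size)
    sigma = np.full(z.size, 2.0)

    def chi2(theta):
        H0, Om = theta
        H_model = H0 * np.sqrt(Om * (1 + z) ** 3 + 1 - Om)
        return np.sum(((H_obs - H_model) / sigma) ** 2)

    fit = minimize(chi2, x0=[65.0, 0.25], method="Nelder-Mead")
    k, N = 2, z.size
    print("chi2_min:", round(fit.fun, 2))
    print("AIC:", round(fit.fun + 2 * k, 2))          # chi2_min + 2k
    print("BIC:", round(fit.fun + k * np.log(N), 2))  # chi2_min + k ln N
    ```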

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rasmussen, Martin; Hastings, Alan; Smith, Matthew J.

    We develop a theory for residence times and mean ages for nonautonomous compartmental systems. Using the McKendrick–von Foerster equation, we show that the mean ages of mass in a compartmental system satisfy a linear nonautonomous ordinary differential equation that is exponentially stable. We then define a nonautonomous version of residence time as the mean age of mass leaving the compartmental system at a particular time and show that our nonautonomous theory is consistent with the autonomous case. We apply these results to study a nine-dimensional nonautonomous compartmental system modeling the carbon cycle, which is a simplified version of the Carnegie–Ames–Stanford approach (CASA) model.

  1. Loss Aversion and Time-Differentiated Electricity Pricing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spurlock, C. Anna

    2015-06-01

    I develop a model of loss aversion over electricity expenditure, from which I derive testable predictions for household electricity consumption while on combination time-of-use (TOU) and critical peak pricing (CPP) plans. Testing these predictions results in evidence consistent with loss aversion: (1) spillover effects - positive expenditure shocks resulted in significantly more peak consumption reduction for several weeks thereafter; and (2) clustering - disproportionate probability of consuming such that expenditure would be equal between the TOU-CPP and standard flat-rate pricing structures. This behavior is inconsistent with a purely neoclassical utility model, and has important implications for application of time-differentiated electricity pricing.
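
    The clustering prediction has a simple back-of-envelope form: for a fixed total consumption, the bill under time-differentiated prices equals the flat-rate bill at one particular peak share. The sketch below computes that break-even share for hypothetical prices (none of these numbers come from the paper).

    ```python
    # Break-even peak share s* solving p_peak*s + p_off*(1-s) = r_flat,
    # the point around which loss-averse households should cluster.
    p_peak, p_off, r_flat = 0.30, 0.08, 0.12   # $/kWh (hypothetical prices)

    s_star = (r_flat - p_off) / (p_peak - p_off)
    print(f"break-even peak share: {s_star:.1%}")

    # Bills for a 30 kWh/day household on either side of the threshold
    for s in (s_star - 0.05, s_star, s_star + 0.05):
        tou = 30 * (p_peak * s + p_off * (1 - s))
        print(f"peak share {s:.1%}: TOU bill ${tou:.2f} "
              f"vs flat ${30 * r_flat:.2f}")
    ```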

  2. Nonlinear dynamic macromodeling techniques for audio systems

    NASA Astrophysics Data System (ADS)

    Ogrodzki, Jan; Bieńkowski, Piotr

    2015-09-01

    This paper develops a modelling method and a model identification technique for nonlinear dynamic audio systems. Identification is performed by means of a behavioral approach based on a polynomial approximation. The approach makes use of the Discrete Fourier Transform and the Harmonic Balance Method. A model of an audio system is first created and identified, and then it is simulated in real time using an algorithm of low computational complexity. The algorithm consists of real-time emulation of the system response rather than simulation of the system itself. The proposed software is written in Python using object-oriented programming techniques. The code is optimized for a multithreaded environment.

  3. Effect of Pt Doping on Nucleation and Crystallization in Li2O.2SiO2 Glass: Experimental Measurements and Computer Modeling

    NASA Technical Reports Server (NTRS)

    Narayan, K. Lakshmi; Kelton, K. F.; Ray, C. S.

    1996-01-01

    Heterogeneous nucleation and its effects on the crystallization of lithium disilicate glass containing small amounts of Pt are investigated. Measurements of the nucleation frequencies and induction times with and without Pt are shown to be consistent with predictions based on the classical nucleation theory. A realistic computer model for the transformation is presented. Computed differential thermal analysis data (such as crystallization rates as a function of time and temperature) are shown to be in good agreement with experimental results. This modeling provides a new, more quantitative method for analyzing calorimetric data.

  4. Gaussian solitary waves and compactons in Fermi–Pasta–Ulam lattices with Hertzian potentials

    PubMed Central

    James, Guillaume; Pelinovsky, Dmitry

    2014-01-01

    We consider a class of fully nonlinear Fermi–Pasta–Ulam (FPU) lattices, consisting of a chain of particles coupled by fractional power nonlinearities of order α>1. This class of systems incorporates a classical Hertzian model describing acoustic wave propagation in chains of touching beads in the absence of precompression. We analyse the propagation of localized waves when α is close to unity. Solutions varying slowly in space and time are sought with an appropriate scaling, and two asymptotic models of the chain of particles are derived consistently. The first one is a logarithmic Korteweg–de Vries (KdV) equation and possesses linearly orbitally stable Gaussian solitary wave solutions. The second model consists of a generalized KdV equation with Hölder-continuous fractional power nonlinearity and admits compacton solutions, i.e. solitary waves with compact support. When α is close to unity, we numerically establish the asymptotically Gaussian shape of exact FPU solitary waves with near-sonic speed and analytically check the pointwise convergence of compactons towards the limiting Gaussian profile. PMID:24808748

  5. A Bayesian method for characterizing distributed micro-releases: II. inference under model uncertainty with short time-series data.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marzouk, Youssef; Fast, P.; Kraus, M.

    2006-01-01

    Terrorist attacks using an aerosolized pathogen preparation have gained credibility as a national security concern after the anthrax attacks of 2001. The ability to characterize such attacks, i.e., to estimate the number of people infected, the time of infection, and the average dose received, is important when planning a medical response. We address this question of characterization by formulating a Bayesian inverse problem predicated on a short time series of diagnosed patients exhibiting symptoms. To be of relevance to response planning, we limit ourselves to 3-5 days of data. In tests performed with anthrax as the pathogen, we find that these data are usually sufficient, especially if the model of the outbreak used in the inverse problem is an accurate one. In some cases the scarcity of data may initially support outbreak characterizations at odds with the true one, but with sufficient data the correct inferences are recovered; in other words, the inverse problem posed and its solution methodology are consistent. We also explore the effect of model error: situations for which the model used in the inverse problem is only a partially accurate representation of the outbreak; here, the model predictions and the observations differ by more than random noise. We find that while there is a consistent discrepancy between the inferred and the true characterizations, they are also close enough to be of relevance when planning a response.

  6. Efficiency at Maximum Power Output of a Quantum-Mechanical Brayton Cycle

    NASA Astrophysics Data System (ADS)

    Yuan, Yuan; He, Ji-Zhou; Gao, Yong; Wang, Jian-Hui

    2014-03-01

    The performance in finite time of a quantum-mechanical Brayton engine cycle is discussed, without introduction of temperature. The engine model consists of two quantum isoenergetic and two quantum isobaric processes, and works with a single particle in a harmonic trap. Directly employing the finite-time thermodynamics, the efficiency at maximum power output is determined. Extending the harmonic trap to a power-law trap, we find that the efficiency at maximum power is independent of any parameter involved in the model, but depends on the confinement of the trapping potential.

  7. Time course of changes in sperm morphometry and semen variables during testosterone-induced suppression of human spermatogenesis.

    PubMed

    Garrett, C; Liu, D Y; McLachlan, R I; Baker, H W G

    2005-11-01

    Quantification of changes in semen may give insight into the testosterone (T)-induced disruption of spermatogenesis in man. A model analogous to flushing of sperm from the genital tract after vasectomy was used to quantify the time course of semen changes in subjects participating in male contraceptive trials using 800 mg T-implant (n = 25) or 200 mg weekly intramuscular injection (IM-T; n = 33). A modified exponential decay model allowed for delayed onset and incomplete disruption to spermatogenesis. Semen variables measured weekly during a 91-day period after initial treatment were fitted to the model. Sperm concentration, total count, motility and morphometry exhibited similar average decay rates (5 day half-life). The mean delay to onset of decline in concentration was 15 (IM-T) and 18 (T-implant) days. The significantly longer (P < 0.005) delays deduced for the commencement of fall in normal morphology (41 days), normal morphometry (40 days) and sperm viability (43 and 55 days), and the change of morphometry to smaller more compact sperm heads are consistent with sperm being progressively cleared from the genital tract rather than continued shedding of immature or abnormal sperm by the seminiferous epithelium. A significant negative relationship was found between lag time and baseline sperm concentration, consistent with longer sperm-epididymal transit times associated with lower daily production rates.

  8. Real-Time 3D Reconstruction from Images Taken from an UAV

    NASA Astrophysics Data System (ADS)

    Zingoni, A.; Diani, M.; Corsini, G.; Masini, A.

    2015-08-01

    We designed a method for creating 3D models of objects and areas from two aerial images acquired from an UAV. The models are generated automatically and in real time, and consist of dense and true-colour reconstructions of the considered areas, which give the operator the impression of being physically present within the scene. The proposed method only needs a cheap compact camera mounted on a small UAV. No additional instrumentation is necessary, so the costs are very limited. The method consists of two main parts: the design of the acquisition system and the 3D reconstruction algorithm. In the first part, the choices for the acquisition geometry and for the camera parameters are optimized in order to yield the best performance. In the second part, a reconstruction algorithm extracts the 3D model from the two acquired images, maximizing the accuracy under the real-time constraint. A test was performed in monitoring a construction yard, obtaining very promising results. Highly realistic and easy-to-interpret 3D models of objects and areas of interest were produced in less than one second, with an accuracy of about 0.5 m. For its characteristics, the designed method is suitable for video surveillance, remote sensing and monitoring, especially in those applications that require intuitive and reliable information quickly, such as disaster monitoring, search and rescue, and area surveillance.

  9. A stochastic estimation procedure for intermittently-observed semi-Markov multistate models with back transitions.

    PubMed

    Aralis, Hilary; Brookmeyer, Ron

    2017-01-01

    Multistate models provide an important method for analyzing a wide range of life history processes including disease progression and patient recovery following medical intervention. Panel data consisting of the states occupied by an individual at a series of discrete time points are often used to estimate transition intensities of the underlying continuous-time process. When transition intensities depend on the time elapsed in the current state and back transitions between states are possible, this intermittent observation process presents difficulties in estimation due to intractability of the likelihood function. In this manuscript, we present an iterative stochastic expectation-maximization algorithm that relies on a simulation-based approximation to the likelihood function and implement this algorithm using rejection sampling. In a simulation study, we demonstrate the feasibility and performance of the proposed procedure. We then demonstrate application of the algorithm to a study of dementia, the Nun Study, consisting of intermittently-observed elderly subjects in one of four possible states corresponding to intact cognition, impaired cognition, dementia, and death. We show that the proposed stochastic expectation-maximization algorithm substantially reduces bias in model parameter estimates compared to an alternative approach used in the literature, minimal path estimation. We conclude that in estimating intermittently observed semi-Markov models, the proposed approach is a computationally feasible and accurate estimation procedure that leads to substantial improvements in back transition estimates.
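
    A compressed, hypothetical sketch of the simulation-based idea: simulate complete paths from the current parameters, keep those consistent with the panel observations (rejection sampling), and re-estimate parameters from the accepted complete data. For brevity the example uses a two-state process with gamma sojourn times of fixed shape; the paper's algorithm is substantially more general.

    ```python
    # Toy stochastic EM with rejection sampling for panel-observed data.
    # Two states with back transitions; gamma sojourns, shape fixed at 2.
    import numpy as np

    rng = np.random.default_rng(7)
    SHAPE = 2.0

    def simulate_path(rates, horizon):
        t, state, jumps = 0.0, 0, [(0.0, 0)]
        while t < horizon:
            t += rng.gamma(SHAPE, 1 / rates[state])   # semi-Markov sojourn
            state = 1 - state                         # back transitions allowed
            jumps.append((t, state))
        return jumps

    def state_at(jumps, times):
        starts = np.array([t for t, _ in jumps])
        states = np.array([s for _, s in jumps])
        return states[np.searchsorted(starts, times, side="right") - 1]

    panel_t = np.array([1.0, 2.0, 3.0, 4.0])
    panel_obs = np.array([1, 1, 0, 1])            # intermittent observations

    rates = np.array([1.0, 1.0])                  # initial guess
    for it in range(20):                          # stochastic EM iterations
        sojourns = {0: [], 1: []}
        accepted = 0
        while accepted < 200:                     # E-step by rejection
            path = simulate_path(rates, panel_t[-1])
            if np.array_equal(state_at(path, panel_t), panel_obs):
                accepted += 1
                for (t0, s), (t1, _) in zip(path[:-1], path[1:]):
                    sojourns[s].append(t1 - t0)   # last sojourn kept for brevity
        # M-step: gamma MLE with known shape: rate = SHAPE / mean sojourn
        rates = np.array([SHAPE / np.mean(sojourns[s]) for s in (0, 1)])
    print("estimated sojourn rates:", np.round(rates, 2))
    ```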

  10. A rainfall disaggregation scheme for sub-hourly time scales: Coupling a Bartlett-Lewis based model with adjusting procedures

    NASA Astrophysics Data System (ADS)

    Kossieris, Panagiotis; Makropoulos, Christos; Onof, Christian; Koutsoyiannis, Demetris

    2018-01-01

    Many hydrological applications, such as flood studies, require the use of long rainfall data at fine time scales varying from daily down to 1-min time steps. However, in the real world there is limited availability of data at sub-hourly scales. To cope with this issue, stochastic disaggregation techniques are typically employed to produce possible, statistically consistent rainfall events that aggregate up to the field data collected at coarser scales. A methodology for the stochastic disaggregation of rainfall at fine time scales was recently introduced, combining the Bartlett-Lewis process to generate rainfall events along with adjusting procedures to modify the lower-level variables (i.e., hourly) so as to be consistent with the higher-level one (i.e., daily). In the present paper, we extend the aforementioned scheme, initially designed and tested for the disaggregation of daily rainfall into hourly depths, to any sub-hourly time scale. In addition, we take advantage of recent developments in Poisson-cluster processes, incorporating in the methodology a Bartlett-Lewis model variant that introduces dependence between cell intensity and duration in order to capture the variability of rainfall at sub-hourly time scales. The disaggregation scheme is implemented in an R package, named HyetosMinute, to support disaggregation from daily down to the 1-min time scale. The applicability of the methodology was assessed on 5-min rainfall records collected in Bochum, Germany, comparing the performance of the above-mentioned model variant against the original Bartlett-Lewis process (non-random with 5 parameters). The analysis shows that the disaggregation process adequately reproduces the most important statistical characteristics of rainfall at a wide range of time scales, while the introduction of the model with dependent intensity and duration results in better performance in terms of skewness, rainfall extremes and dry proportions.
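
    A minimal sketch of a proportional adjusting procedure of the kind referred to above: fine-scale synthetic depths are rescaled so they sum exactly to the observed coarse-scale total. The numbers are invented.

    ```python
    # Proportional adjustment: make synthetic 5-min depths consistent
    # with an observed daily total (invented data).
    import numpy as np

    rng = np.random.default_rng(8)
    daily_total = 14.2                        # observed daily rainfall (mm)
    synthetic = rng.gamma(0.2, 2.0, 288)      # simulated 5-min depths (mm)

    adjusted = synthetic * daily_total / synthetic.sum()
    assert np.isclose(adjusted.sum(), daily_total)
    print("max 5-min depth after adjustment:", round(adjusted.max(), 2), "mm")
    ```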

  11. Dayside Magnetosphere-Ionosphere Coupling and Prompt Response of Low-Latitude/Equatorial Ionosphere

    NASA Astrophysics Data System (ADS)

    Tu, J.; Song, P.

    2017-12-01

    We use a newly developed numerical simulation model of the ionosphere/thermosphere to investigate magnetosphere-ionosphere coupling and the response of the low-latitude/equatorial ionosphere. The simulation model adopts an inductive-dynamic approach (including self-consistent solutions of Faraday's law and retaining inertia terms in the ion momentum equations), that is, one based on magnetic field B and plasma velocity v (the B-v paradigm), in contrast to conventional modeling based on electric field E and current j (the E-j paradigm). The most distinctive feature of this model is that the magnetic field in the ionosphere is not constant but varies self-consistently in time, e.g., with the currents. The model solves self-consistently the time-dependent continuity, momentum, and energy equations for multiple species of ions and neutrals, including photochemistry, together with Maxwell's equations. The governing equations solved in the model are a set of multifluid-collisional-Hall MHD equations, which is one of the unique features of our ionosphere/thermosphere model. With such an inductive-dynamic approach, all possible MHD wave modes, each of which may refract and reflect depending on the local conditions, are retained in the solutions, so that the dynamic coupling between the magnetosphere and ionosphere and among different regions of the ionosphere can be self-consistently investigated. In this presentation, we show that the disturbances propagate at the Alfven speed from the magnetosphere along the magnetic field lines down to the ionosphere/thermosphere and that they experience a mode conversion to compressional-mode MHD waves (particularly the fast mode) in the ionosphere. Because the fast modes can propagate perpendicular to the field, they propagate from the dayside high latitudes to the nightside as compressional waves and to the dayside low-latitude/equatorial ionosphere as rarefaction waves. The apparent prompt response of the low-latitude/equatorial ionosphere, manifesting as the sudden increase of the upward flow around the equator and global antisunward convection, is the result of such coupling of the high-latitude and the low-latitude/equatorial ionosphere and the requirement of flow continuity, rather than mechanisms such as the penetration electric field.

  12. Earthquake likelihood model testing

    USGS Publications Warehouse

    Schorlemmer, D.; Gerstenberger, M.C.; Wiemer, S.; Jackson, D.D.; Rhoades, D.A.

    2007-01-01

    The Regional Earthquake Likelihood Models (RELM) project aims to produce and evaluate alternate models of earthquake potential (probability per unit volume, magnitude, and time) for California. Based on differing assumptions, these models are produced to test the validity of their assumptions and to explore which models should be incorporated in seismic hazard and risk evaluation. Tests based on physical and geological criteria are useful, but we focus on statistical methods using future earthquake catalog data only. We envision two evaluations: a test of consistency with observed data and a comparison of all pairs of models for relative consistency. Both tests are based on the likelihood method, and both are fully prospective (i.e., the models are not adjusted to fit the test data). To be tested, each model must assign a probability to any possible event within a specified region of space, time, and magnitude. For our tests the models must use a common format: earthquake rates in specified "bins" with location, magnitude, time, and focal mechanism limits.

    Seismology cannot yet deterministically predict individual earthquakes; however, it should seek the best possible models for forecasting earthquake occurrence. This paper describes the statistical rules of an experiment to examine and test earthquake forecasts. The primary purposes of the tests described below are to evaluate physical models for earthquakes, assure that source models used in seismic hazard and risk studies are consistent with earthquake data, and provide quantitative measures by which models can be assigned weights in a consensus model or be judged as suitable for particular regions.

    In this paper we develop a statistical method for testing earthquake likelihood models. A companion paper (Schorlemmer and Gerstenberger 2007, this issue) discusses the actual implementation of these tests in the framework of the RELM initiative. Statistical testing of hypotheses is a common task, and a wide range of possible testing procedures exist. Jolliffe and Stephenson (2003) present different forecast verifications from atmospheric science, among them likelihood testing of probability forecasts and testing the occurrence of binary events. Testing binary events requires that for each forecasted event the spatial, temporal and magnitude limits be given. Although major earthquakes can be considered binary events, the models within the RELM project express their forecasts on a spatial grid and in 0.1 magnitude units; thus the results are a distribution of rates over space and magnitude. These forecasts can be tested with likelihood tests.

    In general, likelihood tests assume a valid null hypothesis against which a given hypothesis is tested. The outcome is either a rejection of the null hypothesis in favor of the test hypothesis or a nonrejection, meaning the test hypothesis cannot outperform the null hypothesis at a given significance level. Within RELM, there is no accepted null hypothesis, and thus the likelihood test needs to be expanded to allow comparable testing of equipollent hypotheses.

    To test models against one another, we require that forecasts are expressed in a standard format: the average rate of earthquake occurrence within pre-specified limits of hypocentral latitude, longitude, depth, magnitude, time period, and focal mechanism. Focal mechanisms should either be described as the inclination of the P-axis, the declination of the P-axis, and the inclination of the T-axis, or as strike, dip, and rake angles. Schorlemmer and Gerstenberger (2007, this issue) designed classes of these parameters such that similar models will be tested against each other. These classes make the forecasts comparable between models. Additionally, we are limited to testing only what is precisely defined and consistently reported in earthquake catalogs. Therefore it is currently not possible to test such information as fault rupture length or area, asperity location, etc. Also, to account for data quality issues, we allow for location and magnitude uncertainties as well as the probability that an event is dependent on another event.

    As mentioned above, only models with comparable forecasts can be tested against each other. Our current tests are designed to examine grid-based models. This requires that any fault-based model be adapted to a grid before testing is possible. While this is a limitation of the testing, it is an inherent difficulty in any such comparative testing. Please refer to appendix B for a statistical evaluation of the application of the Poisson hypothesis to fault-based models. The testing suite we present consists of three different tests: L-Test, N-Test, and R-Test. These tests are defined similarly to Kagan and Jackson (1995). The first two tests examine the consistency of the hypotheses with the observations, while the last test compares the spatial performances of the models.
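
    As an illustration of the consistency-test idea, the N-Test essentially reduces to comparing the observed earthquake count with the forecast's total expected rate under a Poisson assumption; the sketch below uses invented numbers.

    ```python
    # N-Test sketch: Poisson consistency of the total forecast rate with
    # the observed number of events (invented numbers).
    from scipy.stats import poisson

    expected = 18.4     # sum of forecast rates over all bins (hypothetical)
    observed = 11       # earthquakes recorded in the test period (hypothetical)

    # Two one-sided tail probabilities; small values reject the forecast
    delta1 = 1.0 - poisson.cdf(observed - 1, expected)   # P(N >= observed)
    delta2 = poisson.cdf(observed, expected)             # P(N <= observed)
    print(f"P(N >= {observed}) = {delta1:.3f}, "
          f"P(N <= {observed}) = {delta2:.3f}")
    ```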

  13. Trivariate Modeling of Interparental Conflict and Adolescent Emotional Security: An Examination of Mother-Father-Child Dynamics.

    PubMed

    Cheung, Rebecca Y M; Cummings, E Mark; Zhang, Zhiyong; Davies, Patrick T

    2016-11-01

    Recognizing the significance of interacting family subsystems, the present study addresses how interparental conflict is linked to adolescent emotional security as a function of parental gender. A total of 272 families with a child at 12.60 years of age (133 boys, 139 girls) were invited to participate each year for three consecutive years. A multi-informant method was used, along with trivariate models to test the associations among mothers, fathers, and their adolescent children's behaviors. The findings from separate models of destructive and constructive interparental conflict revealed intricate linkages among family members. In the model of destructive interparental conflict, mothers and fathers predicted each other's conflict behaviors over time. Moreover, adolescents' exposure to negativity expressed by either parent dampened their emotional security. Consistent with child effects models, adolescent emotional insecurity predicted fathers' destructive conflict behaviors. As for the model of constructive interparental conflict, fathers predicted mothers' conflict behaviors over time. Adolescents' exposure to fathers' constructive conflict behaviors also enhanced their sense of emotional security. Consistent with child effects models, adolescent emotional security predicted mothers' and fathers' constructive conflict behaviors. These findings extend the family and adolescent literatures by indicating that family processes are multidirectional, involving multiple dyads in the study of parents' and adolescents' functioning. Contributions of these findings to the understanding of interparental conflict and emotional security in adolescence are discussed.

  14. Models for Individualized Instruction.

    ERIC Educational Resources Information Center

    Georgiades, William, Ed.; Clark, Donald C., Ed.

    This book, consisting of five parts, provides a collection of source materials that will assist in implementing individualized instruction; provides examples of interrelated systems for individualizing instruction; and describes the components of individualized instructional systems, including flexible use of time, differentiated staffing, new…

  15. Dynamic Identification for Control of Large Space Structures

    NASA Technical Reports Server (NTRS)

    Ibrahim, S. R.

    1985-01-01

    This is a compilation of reports by one author on one subject. It consists of the following five journal articles: (1) A Parametric Study of the Ibrahim Time Domain Modal Identification Algorithm; (2) Large Modal Survey Testing Using the Ibrahim Time Domain Identification Technique; (3) Computation of Normal Modes from Identified Complex Modes; (4) Dynamic Modeling of Structures from Measured Complex Modes; and (5) Time Domain Quasi-Linear Identification of Nonlinear Dynamic Systems.

  16. Commercial Digital/ADP Equipment in the Ocean Environment. Volume 2. User Appendices

    DTIC Science & Technology

    1978-12-15

    is that the LINDA system uses a minicomputer with time-sharing system software which allows several terminals to be operated at the same time...Acquisition System (ODAS) consists of sensors, computer hardware and computer software. Certain sensors are interfaced to the computers for real time...on USNS KANE, USNS BENT, and USNS WILKES. Commercial automatic data processing equipment used in ODAS includes: Item Model Computer PDP-9 Tape

  17. Full-wave Moment Tensor and Tomographic Inversions Based on 3D Strain Green Tensor

    DTIC Science & Technology

    2010-01-31

    propagation in a three-dimensional (3D) earth, linearizes the inverse problem by iteratively updating the earth model, and provides an accurate way to...self-consistent FD-SGT databases constructed from finite-difference simulations of wave propagation in full-wave tomographic models can be used to...determine the moment tensors within minutes after a seismic event, making it possible for real-time monitoring using 3D models.

  18. 40 CFR Table 2 to Subpart Dddd of... - Model Rule-Emission Limitations

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... part) Hydrogen chloride: 62 parts per million by dry volume; 3-run average (1 hour minimum sample time...). Sulfur dioxide: 20 parts per million by dry volume; 3-run average (1 hour minimum sample time per run...-8) or ASTM D6784-02 (Reapproved 2008).c Opacity: 10 percent; three 1-hour blocks consisting of ten 6-...

  19. The Relationship between the Amount of Learning and Time (The Example of Equations)

    ERIC Educational Resources Information Center

    Kesan, Cenk; Kaya, Deniz; Ok, Gokce; Erkus, Yusuf

    2016-01-01

    The main purpose of this study is to determine the amount of time-dependent learning of "solving problems that require establishing of single variable equations of the first order" of the seventh grade students. The study, adopting the screening model, consisted of a total of 84 students, including 42 female and 42 male students at the…

  20. The Effect of Time on Difficulty of Learning (The Case of Problem Solving with Natural Numbers)

    ERIC Educational Resources Information Center

    Kaya, Deniz; Kesan, Cenk

    2017-01-01

    The main purpose of this study is to determine the time-dependent learning difficulty of "solving problems that require making four operations with natural numbers" of the sixth grade students. The study, adopting the scanning model, consisted of a total of 140 students, including 69 female and 71 male students at the sixth grade. Data…

  1. Thermosphere-Ionosphere-Mesosphere Modeling Using the TIE-GCM, TIME-GCM, and WACCM That Will Lead to the Development of a Seamless Model of the Whole Atmosphere

    DTIC Science & Technology

    2006-09-30

    disturbances from the lower atmosphere and ocean affect the upper atmosphere and how this variability interacts with the variability generated by solar and...represents “general circulation model.” Both models include self-consistent ionospheric electrodynamics, that is, a calculation of the electric fields and currents generated by the ionospheric dynamo, and consideration of their effects on the neutral dynamics. The TIE-GCM is used for studies that

  2. A work-family conflict/subjective well-being process model: a test of competing theories of longitudinal effects.

    PubMed

    Matthews, Russell A; Wayne, Julie Holliday; Ford, Michael T

    2014-11-01

    In the present study, we examine competing predictions of stress reaction models and adaptation theories regarding the longitudinal relationship between work-family conflict and subjective well-being. Based on data from 432 participants over 3 time points with 2 lags of varying lengths (i.e., 1 month, 6 months), our findings suggest that in the short term, consistent with prior theory and research, work-family conflict is associated with poorer subjective well-being. Counter to traditional work-family predictions but consistent with adaptation theories, after accounting for concurrent levels of work-family conflict as well as past levels of subjective well-being, past exposure to work-family conflict was associated with higher levels of subjective well-being over time. Moreover, evidence was found for reverse causation in that greater subjective well-being at 1 point in time was associated with reduced work-family conflict at a subsequent point in time. Finally, the pattern of results did not vary as a function of using different temporal lags. We discuss the theoretical, research, and practical implications of our findings. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  3. A computational method for sharp interface advection.

    PubMed

    Roenby, Johan; Bredmose, Henrik; Jasak, Hrvoje

    2016-11-01

    We devise a numerical method for passive advection of a surface, such as the interface between two incompressible fluids, across a computational mesh. The method is called isoAdvector, and is developed for general meshes consisting of arbitrary polyhedral cells. The algorithm is based on the volume of fluid (VOF) idea of calculating the volume of one of the fluids transported across the mesh faces during a time step. The novelty of the isoAdvector concept consists of two parts. First, we exploit an isosurface concept for modelling the interface inside cells in a geometric surface reconstruction step. Second, from the reconstructed surface, we model the motion of the face-interface intersection line for a general polygonal face to obtain the time evolution within a time step of the submerged face area. Integrating this submerged area over the time step leads to an accurate estimate for the total volume of fluid transported across the face. The method was tested on simple two-dimensional and three-dimensional interface advection problems on both structured and unstructured meshes. The results are very satisfactory in terms of volume conservation, boundedness, surface sharpness and efficiency. The isoAdvector method was implemented as an OpenFOAM ® extension and is published as open source.
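
    The volume-integration step described above lends itself to a compact illustration. Below is a minimal sketch, assuming the submerged face area varies linearly across the time step and the face-normal velocity is constant; the function name is hypothetical and this is not the OpenFOAM implementation.

      def face_volume_flux(A0, A1, u_n, dt):
          """Volume of fluid 1 transported across one mesh face during dt.

          A0, A1 : submerged face area (m^2) at the start and end of the step,
                   obtained in isoAdvector from the reconstructed isosurface
          u_n    : face-normal fluid velocity (m/s), held constant over the step
          """
          # trapezoidal rule for the time integral of the submerged face area
          return u_n * 0.5 * (A0 + A1) * dt

      # toy usage: the interface sweeps across a 1e-4 m^2 face within one step
      dV = face_volume_flux(A0=0.0, A1=1e-4, u_n=0.2, dt=1e-3)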

  4. Predicting future forestland area: a comparison of econometric approaches.

    Treesearch

    SoEun Ahn; Andrew J. Plantinga; Ralph J. Alig

    2000-01-01

    Predictions of future forestland area are an important component of forest policy analyses. In this article, we test the ability of econometric land use models to accurately forecast forest area. We construct a panel data set for Alabama consisting of county and time-series observations for the period 1964 to 1992. We estimate models using restricted data sets, namely,...

  5. Short Pulse UV-Visible Waveguide Laser.

    DTIC Science & Technology

    1980-07-01

    [OCR fragments of the report's front matter and figures; recoverable items: contents entries "B. Relaxation Processes", "C. Equivalent Circuit", "V. Kinetic Modeling"; Fig. 6, "Temporal evolution of the current, various N2+ densities, and the electron density"; Table 1, "Relaxation reaction rates used in the He-N2 model", including helium metastable reactions.]

  6. Design and Training of Limited-Interconnect Architectures

    DTIC Science & Technology

    1991-07-16

    and signal processing. Neuromorphic (brain-like) models allow an alternative for achieving real-time operation for such tasks, while having a... compact and robust architecture. Neuromorphic models consist of interconnections of simple computational nodes. In this approach, each node computes a... operational performance. II. Research Objectives. The research objectives were: 1. Development of on-chip local training rules specifically designed for...

  7. Fully Bayesian Estimation of Data from Single Case Designs

    ERIC Educational Resources Information Center

    Rindskopf, David

    2013-01-01

    Single case designs (SCDs) generally consist of a small number of short time series in two or more phases. The analysis of SCDs statistically fits in the framework of a multilevel model, or hierarchical model. The usual analysis does not take into account the uncertainty in the estimation of the random effects. This not only has an effect on the…

  8. Electromigration model for the prediction of lifetime based on the failure unit statistics in aluminum metallization

    NASA Astrophysics Data System (ADS)

    Park, Jong Ho; Ahn, Byung Tae

    2003-01-01

    A failure model for electromigration based on the "failure unit model" was presented for the prediction of lifetime in metal lines. The failure unit model, which consists of failure units in parallel and series, can predict both the median time to failure (MTTF) and the deviation in the time to failure (DTTF) in Al metal lines, but the original model could describe them only qualitatively. In our model, the probability functions of the failure unit in both single-grain segments and polygrain segments are considered, instead of in polygrain segments alone. Based on our model, we calculated MTTF, DTTF, and activation energy for different median grain sizes, grain size distributions, linewidths, line lengths, current densities, and temperatures. Comparisons between our results and published experimental data showed good agreement, and our model could explain previously unexplained phenomena. Our advanced failure unit model might be further applied to other electromigration characteristics of metal lines.
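
    The series half of the failure-unit picture can be illustrated with a short Monte Carlo sketch: a line is modelled as a chain of independent units with lognormal times to failure, and the line fails at its weakest unit. All distribution parameters below are illustrative assumptions, not values from the paper.

      import numpy as np

      rng = np.random.default_rng(0)

      def line_ttf_samples(n_units, mu=5.0, sigma=0.5, n_trials=20000):
          """Sample times to failure for lines made of n_units in series."""
          unit_ttf = rng.lognormal(mean=mu, sigma=sigma, size=(n_trials, n_units))
          return unit_ttf.min(axis=1)   # series system: first unit failure fails the line

      for n in (10, 100, 1000):
          t = line_ttf_samples(n)
          # longer lines fail earlier and with a tighter log-spread (MTTF and DTTF)
          print(n, np.median(t), np.std(np.log(t)))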

  9. EURODELTA-Trends, a multi-model experiment of air quality hindcast in Europe over 1990-2010

    NASA Astrophysics Data System (ADS)

    Colette, Augustin; Andersson, Camilla; Manders, Astrid; Mar, Kathleen; Mircea, Mihaela; Pay, Maria-Teresa; Raffort, Valentin; Tsyro, Svetlana; Cuvelier, Cornelius; Adani, Mario; Bessagnet, Bertrand; Bergström, Robert; Briganti, Gino; Butler, Tim; Cappelletti, Andrea; Couvidat, Florian; D'Isidoro, Massimo; Doumbia, Thierno; Fagerli, Hilde; Granier, Claire; Heyes, Chris; Klimont, Zig; Ojha, Narendra; Otero, Noelia; Schaap, Martijn; Sindelarova, Katarina; Stegehuis, Annemiek I.; Roustan, Yelva; Vautard, Robert; van Meijgaard, Erik; Garcia Vivanco, Marta; Wind, Peter

    2017-09-01

    The EURODELTA-Trends multi-model chemistry-transport experiment has been designed to facilitate a better understanding of the evolution of air pollution and its drivers for the period 1990-2010 in Europe. The main objective of the experiment is to assess the efficiency of air pollutant emissions mitigation measures in improving regional-scale air quality. The present paper formulates the main scientific questions and policy issues being addressed by the EURODELTA-Trends modelling experiment with an emphasis on how the design and technical features of the modelling experiment answer these questions. The experiment is designed in three tiers, with increasing degrees of computational demand in order to facilitate the participation of as many modelling teams as possible. The basic experiment consists of simulations for the years 1990, 2000, and 2010. Sensitivity analysis for the same three years using various combinations of (i) anthropogenic emissions, (ii) chemical boundary conditions, and (iii) meteorology complements it. The most demanding tier consists of two complete time series from 1990 to 2010, simulated using either time-varying emissions for corresponding years or constant emissions. Eight chemistry-transport models have contributed with calculation results to at least one experiment tier, and five models have - to date - completed the full set of simulations (and 21-year trend calculations have been performed by four models). The modelling results are publicly available for further use by the scientific community. The main expected outcomes are (i) an evaluation of the models' performances for the three reference years, (ii) an evaluation of the skill of the models in capturing observed air pollution trends for the 1990-2010 time period, (iii) attribution analyses of the respective role of driving factors (e.g. emissions, boundary conditions, meteorology), (iv) a dataset based on a multi-model approach, to provide more robust model results for use in impact studies related to human health, ecosystem, and radiative forcing.

  10. Response time in economic games reflects different types of decision conflict for prosocial and proself individuals.

    PubMed

    Yamagishi, Toshio; Matsumoto, Yoshie; Kiyonari, Toko; Takagishi, Haruto; Li, Yang; Kanai, Ryota; Sakagami, Masamichi

    2017-06-13

    Behavioral and neuroscientific studies explore two pathways through which internalized social norms promote prosocial behavior. One pathway involves internal control of impulsive selfishness, and the other involves emotion-based prosocial preferences that are translated into behavior when they evade cognitive control for pursuing self-interest. We measured 443 participants' overall prosocial behavior in four economic games. Participants' predispositions [social value orientation (SVO)] were more strongly reflected in their overall game behavior when they made decisions quickly than when they spent a longer time. Prosocially (or selfishly) predisposed participants behaved less prosocially (or less selfishly) when they spent more time in decision making, such that their SVO prosociality yielded limited effects in actual behavior in their slow decisions. The increase (or decrease) in slower decision makers was prominent among consistent prosocials (or proselfs) whose strong preference for prosocial (or proself) goals would make it less likely to experience conflict between prosocial and proself goals. The strong effect of RT on behavior in consistent prosocials (or proselfs) suggests that conflict between prosocial and selfish goals alone is not responsible for slow decisions. Specifically, we found that contemplation of the risk of being exploited by others (social risk aversion) was partly responsible for making consistent prosocials (but not consistent proselfs) spend longer time in decision making and behave less prosocially. Conflict between means rather than between goals (immediate versus strategic pursuit of self-interest) was suggested to be responsible for the time-related increase in consistent proselfs' prosocial behavior. The findings of this study are generally in favor of the intuitive cooperation model of prosocial behavior.

  11. Unconditionally stable finite-difference time-domain methods for modeling the Sagnac effect

    NASA Astrophysics Data System (ADS)

    Novitski, Roman; Scheuer, Jacob; Steinberg, Ben Z.

    2013-02-01

    We present two unconditionally stable finite-difference time-domain (FDTD) methods for modeling the Sagnac effect in rotating optical microsensors. The methods are based on the implicit Crank-Nicolson scheme, adapted to hold in the rotating system reference frame—the rotating Crank-Nicolson (RCN) methods. The first method (RCN-2) is second order accurate in space whereas the second method (RCN-4) is fourth order accurate. Both methods are second order accurate in time. We show that the RCN-4 scheme is more accurate and has better dispersion isotropy. The numerical results show good correspondence with the expression for the classical Sagnac resonant frequency splitting when using group refractive indices of the resonant modes of a microresonator. Also we show that the numerical results are consistent with the perturbation theory for the rotating degenerate microcavities. We apply our method to simulate the effect of rotation on an entire Coupled Resonator Optical Waveguide (CROW) consisting of a set of coupled microresonators. Preliminary results validate the formation of a rotation-induced gap at the center of a transfer function of a CROW.
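
    To illustrate why the Crank-Nicolson family is unconditionally stable, here is a generic sketch of the implicit update (I - dt/2 A) u^{n+1} = (I + dt/2 A) u^n applied to a simple 1D diffusion operator; the paper's RCN schemes apply the same idea to Maxwell's equations in a rotating frame, which is not reproduced here.

      import numpy as np

      N, dx, dt, nu = 100, 0.01, 1e-3, 1.0
      # 1D Laplacian with homogeneous Dirichlet boundaries (stand-in operator A)
      A = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
           + np.diag(np.ones(N - 1), -1)) * nu / dx**2
      I = np.eye(N)
      lhs = I - 0.5 * dt * A
      rhs = I + 0.5 * dt * A

      x = np.arange(N) * dx
      u = np.exp(-((x - 0.5) ** 2) / 0.01)   # initial Gaussian profile
      for _ in range(100):
          # implicit solve keeps the scheme stable for any dt (second order in time);
          # note dt here far exceeds the explicit stability limit dx^2 / (2 nu)
          u = np.linalg.solve(lhs, rhs @ u)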

  12. Timing of head movements is consistent with energy minimization in walking ungulates

    PubMed Central

    Loscher, David M.; Meyer, Fiete; Kracht, Kerstin

    2016-01-01

    Many ungulates show a conspicuous nodding motion of the head when walking. Until now, the functional significance of this behaviour remained unclear. Combining in vivo kinematics of quadrupedal mammals with a computer model, we show that the timing of vertical displacements of the head and neck is consistent with minimizing energy expenditure for carrying these body parts in an inverted pendulum walking gait. Varying the timing of head movements in the model resulted in increased metabolic cost estimate for carrying the head and neck of up to 63%. Oscillations of the head–neck unit result in weight force oscillations transmitted to the forelimbs. Advantageous timing increases the load in single support phases, in which redirecting the trajectory of the centre of mass (COM) is thought to be energetically inexpensive. During double support, in which—according to collision mechanics—directional changes of the impulse of the COM are expensive, the observed timing decreases the load. Because the head and neck comprise approximately 10% of body mass, the effect shown here should also affect the animals' overall energy expenditure. This mechanism, working analogously in high-tech backpacks for energy-saving load carriage, is widespread in ungulates, and provides insight into how animals economize locomotion. PMID:27903873

  13. A model of partial differential equations for HIV propagation in lymph nodes

    NASA Astrophysics Data System (ADS)

    Marinho, E. B. S.; Bacelar, F. S.; Andrade, R. F. S.

    2012-01-01

    A system of partial differential equations is used to model the dissemination of the Human Immunodeficiency Virus (HIV) in CD4+T cells within lymph nodes. Besides diffusion terms, the model also includes a time-delay dependence to describe the time lag required by the immunologic system to provide defenses to new virus strains. The resulting dynamics strongly depends on the properties of the invariant sets of the model, consisting of three fixed points related to the time-independent and spatially homogeneous tissue configurations in healthy and infected states. A region in the parameter space is considered for which the time dependence of the space-averaged model variables follows the clinical pattern reported for infected patients: a short-scale primary infection, followed by a long latency period of almost complete recovery, and a third phase characterized by damped oscillations around a value with a large HIV count. Depending on the value of the diffusion coefficient, the latency time increases with respect to that obtained for the space-homogeneous version of the model. It is found that the same initial conditions lead to quite different spatial patterns, which depend strongly on the latency interval.
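
    The two structural ingredients named above, diffusion plus a discrete time delay, can be sketched for a single field on a periodic 1D domain; the paper's three-field HIV system and its parameters are not reproduced, and every number below is an illustrative assumption. The delayed state is kept in a history buffer.

      import numpy as np

      D, tau, a, b = 1e-3, 5.0, 0.5, 0.6      # diffusion, delay, growth, delayed suppression (assumed)
      dx, dt, L, T = 0.05, 0.01, 1.0, 20.0
      nx = int(L / dx)
      nlag = int(tau / dt)

      rng = np.random.default_rng(0)
      u = 0.1 + 0.01 * rng.random(nx)               # perturbed initial state
      history = [u.copy() for _ in range(nlag)]     # stores u(t - tau)

      for _ in range(int(T / dt)):
          u_lag = history.pop(0)
          lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2    # periodic Laplacian
          u = u + dt * (D * lap + a * u * (1 - u) - b * u_lag * u)  # delayed suppression term
          history.append(u.copy())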

  14. Superelement Analysis of Tile-Reinforced Composite Armor

    NASA Technical Reports Server (NTRS)

    Davila, Carlos G.

    1998-01-01

    Super-elements can greatly improve the computational efficiency of analyses of tile-reinforced structures such as the hull of the Composite Armored Vehicle. By taking advantage of the periodicity in this type of construction, super-elements can be used to simplify the task of modeling, to virtually eliminate the time required to assemble the stiffness matrices, and to reduce significantly the analysis solution time. Furthermore, super-elements are fully transferable between analyses and analysts, so that they provide a consistent method to share information and reduce duplication. This paper describes a methodology that was developed to model and analyze large upper hull components of the Composite Armored Vehicle. The analyses are based on two types of superelement models. The first type is based on element-layering, which consists of modeling a laminate by using several layers of shell elements constrained together with compatibility equations. Element layering is used to ensure the proper transverse shear deformation in the laminate rubber layer. The second type of model uses three-dimensional elements. Since no graphical pre-processor currently supports super-elements, a special technique based on master-elements was developed. Master-elements are representations of super-elements that are used in conjunction with a custom translator to write the superelement connectivities as input decks for ABAQUS.

  15. An approximation method for improving dynamic network model fitting.

    PubMed

    Carnegie, Nicole Bohme; Krivitsky, Pavel N; Hunter, David R; Goodreau, Steven M

    There has been a great deal of interest recently in the modeling and simulation of dynamic networks, i.e., networks that change over time. One promising model is the separable temporal exponential-family random graph model (ERGM) of Krivitsky and Handcock, which treats the formation and dissolution of ties in parallel at each time step as independent ERGMs. However, the computational cost of fitting these models can be substantial, particularly for large, sparse networks. Fitting cross-sectional models for observations of a network at a single point in time, while still a non-negligible computational burden, is much easier. This paper examines model fitting when the available data consist of independent measures of cross-sectional network structure and the duration of relationships under the assumption of stationarity. We introduce a simple approximation to the dynamic parameters for sparse networks with relationships of moderate or long duration and show that the approximation method works best in precisely those cases where parameter estimation is most likely to fail: networks with very little change at each time step. We consider a variety of cases: Bernoulli formation and dissolution of ties, independent-tie formation and Bernoulli dissolution, independent-tie formation and dissolution, and dependent-tie formation models.
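
    The core of this kind of approximation is compact. For a separable model with Bernoulli dissolution, a tie with mean duration D dissolves with per-step probability 1/D, giving a dissolution log-odds of log(D - 1); the formation parameters are then approximated by subtracting this from the cross-sectional ERGM estimate. The sketch below shows only the edges-term case and is a simplification of the paper's method.

      import math

      def approximate_stergm_edges(theta_cross, mean_duration):
          """Approximate dynamic (formation, dissolution) edge parameters from a
          cross-sectional ERGM estimate and an observed mean tie duration."""
          theta_diss = math.log(mean_duration - 1.0)   # Bernoulli dissolution log-odds
          theta_form = theta_cross - theta_diss        # formation absorbs the remainder
          return theta_form, theta_diss

      # toy usage: a sparse cross-sectional network with long-lived ties
      print(approximate_stergm_edges(theta_cross=-4.0, mean_duration=25.0))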

  16. Cortical region-specific sleep homeostasis in mice: effects of time of day and waking experience.

    PubMed

    Guillaumin, Mathilde C C; McKillop, Laura E; Cui, Nanyi; Fisher, Simon P; Foster, Russell G; de Vos, Maarten; Peirson, Stuart N; Achermann, Peter; Vyazovskiy, Vladyslav V

    2018-04-25

    Sleep-wake history, wake behaviours, lighting conditions and circadian time influence sleep, but neither their relative contribution, nor the underlying mechanisms are fully understood. The dynamics of EEG slow-wave activity (SWA) during sleep can be described using the two-process model, whereby the parameters of homeostatic Process S are estimated using empirical EEG SWA (0.5-4 Hz) in non-rapid eye movement sleep (NREM), and the 24-h distribution of vigilance states. We hypothesised that the influence of extrinsic factors on sleep homeostasis, such as the time of day or wake behaviour, would manifest in systematic deviations between empirical SWA and model predictions. To test this hypothesis, we performed parameter estimation and tested model predictions using NREM SWA derived from continuous EEG recordings from the frontal and occipital cortex in mice. The animals showed prolonged wake periods, followed by consolidated sleep, both during the dark and light phases, and wakefulness primarily consisted of voluntary wheel running, learning a new motor skill or novel object exploration. Simulated SWA matched empirical levels well across conditions, and neither waking experience nor time of day had a significant influence on the fit between data and simulation. However, we consistently observed that Process S declined during sleep significantly faster in the frontal than in the occipital area of the neocortex. The striking resilience of the model to specific wake behaviours, lighting conditions and time of day suggests that intrinsic factors underpinning the dynamics of Process S are robust to extrinsic influences, despite their major role in shaping the overall amount and distribution of vigilance states across 24 h.
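
    For reference, the homeostatic Process S used in such analyses is typically simulated with exponential saturation during wake and exponential decay during sleep; the sketch below uses illustrative time constants, not the values estimated in this study.

      import numpy as np

      tau_r, tau_d = 18.2, 4.2     # rise/decay time constants in hours (assumed)
      UA, LA = 1.0, 0.0            # upper/lower asymptotes
      dt = 1 / 60.0                # 1-minute resolution

      def update_S(S, awake):
          if awake:   # exponential rise toward the upper asymptote during wake
              return UA - (UA - S) * np.exp(-dt / tau_r)
          # exponential decay toward the lower asymptote during sleep
          return LA + (S - LA) * np.exp(-dt / tau_d)

      S, trace = 0.4, []
      schedule = [True] * (16 * 60) + [False] * (8 * 60)   # 16 h wake, 8 h sleep
      for awake in schedule:
          S = update_S(S, awake)
          trace.append(S)   # simulated Process S to compare against empirical SWA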

  17. A risk-model for hospital mortality among patients with severe sepsis or septic shock based on German national administrative claims data

    PubMed Central

    Schwarzkopf, Daniel; Fleischmann-Struzek, Carolin; Rüddel, Hendrik; Reinhart, Konrad; Thomas-Rüddel, Daniel O.

    2018-01-01

    Background: Sepsis is a major cause of preventable deaths in hospitals. Feasible and valid methods for comparing quality of sepsis care between hospitals are needed. The aim of this study was to develop a risk-adjustment model suitable for comparing sepsis-related mortality between German hospitals. Methods: We developed a risk-model using national German claims data. Since these data are available with a time-lag of 1.5 years only, the stability of the model across time was investigated. The model was derived from inpatient cases with severe sepsis or septic shock treated in 2013 using logistic regression with backward selection and generalized estimating equations to correct for clustering. It was validated among cases treated in 2015. Finally, the model development was repeated in 2015. To investigate secular changes, the risk-adjusted trajectory of mortality across the years 2010–2015 was analyzed. Results: The 2013 derivation sample consisted of 113,750 cases; the 2015 validation sample consisted of 134,851 cases. The model developed in 2013 showed good validity regarding discrimination (AUC = 0.74), calibration (observed mortality in 1st and 10th risk-decile: 11%-78%), and fit (R2 = 0.16). Validity remained stable when the model was applied to 2015 (AUC = 0.74, 1st and 10th risk-decile: 10%-77%, R2 = 0.17). There was no indication of overfitting of the model. The final model developed in year 2015 contained 40 risk-factors. Between 2010 and 2015 hospital mortality in sepsis decreased from 48% to 42%. Adjusted for risk-factors the trajectory of decrease was still significant. Conclusions: The risk-model shows good predictive validity and stability across time. The model is suitable to be used as an external algorithm for comparing risk-adjusted sepsis mortality among German hospitals or regions based on administrative claims data, but secular changes need to be taken into account when interpreting risk-adjusted mortality. PMID:29558486
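
    A minimal sketch of the estimation step described above: a logistic model fitted with generalized estimating equations to account for clustering of cases within hospitals. The covariates and data are hypothetical placeholders; the paper's model uses 40 risk factors and backward selection, which are not reproduced.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 2000
      df = pd.DataFrame({
          "died":     rng.integers(0, 2, n),        # hospital mortality indicator
          "age":      rng.normal(70, 10, n),        # hypothetical risk factor
          "vent":     rng.integers(0, 2, n),        # hypothetical risk factor
          "hospital": rng.integers(0, 50, n),       # cluster identifier
      })
      X = sm.add_constant(df[["age", "vent"]])
      model = sm.GEE(df["died"], X, groups=df["hospital"],
                     family=sm.families.Binomial(),
                     cov_struct=sm.cov_struct.Exchangeable())
      result = model.fit()
      print(result.summary())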

  18. The AgMIP GRIDded Crop Modeling Initiative (AgGRID) and the Global Gridded Crop Model Intercomparison (GGCMI)

    NASA Technical Reports Server (NTRS)

    Elliott, Joshua; Muller, Christoff

    2015-01-01

    Climate change is a significant risk for agricultural production. Even under optimistic scenarios for climate mitigation action, present-day agricultural areas are likely to face significant increases in temperatures in the coming decades, in addition to changes in precipitation, cloud cover, and the frequency and duration of extreme heat, drought, and flood events (IPCC, 2013). These factors will affect the agricultural system at the global scale by impacting cultivation regimes, prices, trade, and food security (Nelson et al., 2014a). Global-scale evaluation of crop productivity is a major challenge for climate impact and adaptation assessment. Rigorous global assessments that are able to inform planning and policy will benefit from consistent use of models, input data, and assumptions across regions and time that use mutually agreed protocols designed by the modeling community. To ensure this consistency, large-scale assessments are typically performed on uniform spatial grids, with spatial resolution of typically 10 to 50 km, over specified time-periods. Many distinct crop models and model types have been applied on the global scale to assess productivity and climate impacts, often with very different results (Rosenzweig et al., 2014). These models are based to a large extent on field-scale crop process or ecosystems models and they typically require resolved data on weather, environmental, and farm management conditions that are lacking in many regions (Bondeau et al., 2007; Drewniak et al., 2013; Elliott et al., 2014b; Gueneau et al., 2012; Jones et al., 2003; Liu et al., 2007; Müller and Robertson, 2014; Van den Hoof et al., 2011; Waha et al., 2012; Xiong et al., 2014). Due to data limitations, the requirements of consistency, and the computational and practical limitations of running models on a large scale, a variety of simplifying assumptions must generally be made regarding prevailing management strategies on the grid scale in both the baseline and future periods. Implementation differences in these and other modeling choices contribute to significant variation among global-scale crop model assessments in addition to differences in crop model implementations that also cause large differences in site-specific crop modeling (Asseng et al., 2013; Bassu et al., 2014).

  19. Search for Production of Resonant States in the Photon-Jet Mass Distribution Using pp Collisions at √s = 7 TeV Collected by the ATLAS Detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aad, G.; Abbott, B.; Abdallah, J.

    2012-05-22

    This Letter describes a model-independent search for the production of new resonant states in photon + jet events in 2.11 fb⁻¹ of proton-proton collisions at √s = 7 TeV. We compare the photon + jet mass distribution to a background model derived from data and find consistency with the background-only hypothesis. Given the lack of evidence for a signal, we set 95% credibility level limits on generic Gaussian-shaped signals and on a benchmark excited-quark (q*) model, excluding 2 TeV Gaussian resonances with cross section times branching fraction times acceptance times efficiency near 5 fb and excluding q* masses below 2.46 TeV, respectively.

  1. A Dynamical Systems Model for Understanding Behavioral Interventions for Weight Loss

    NASA Astrophysics Data System (ADS)

    Navarro-Barrientos, J.-Emeterio; Rivera, Daniel E.; Collins, Linda M.

    We propose a dynamical systems model that captures the daily fluctuations of human weight change, incorporating both physiological and psychological factors. The model consists of an energy balance integrated with a mechanistic behavioral model inspired by the Theory of Planned Behavior (TPB); the latter describes how important variables in a behavioral intervention can influence healthy eating habits and increased physical activity over time. The model can be used to inform behavioral scientists in the design of optimized interventions for weight loss and body composition change.
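
    The physiological half of such a model can be sketched as a simple daily energy-balance update; the behavioural (TPB) component that modulates intake and activity over time is omitted here, and the energy density of tissue change is a commonly used approximation rather than a value from the paper.

      RHO = 7700.0   # approximate kcal per kg of body-mass change (assumption)

      def simulate_weight(w0, intake_kcal, expenditure_per_kg, days):
          """Daily Euler update of the energy balance: dW = (intake - expenditure) / rho."""
          w = w0
          for _ in range(days):
              expenditure = expenditure_per_kg * w   # crude weight-proportional expenditure
              w += (intake_kcal - expenditure) / RHO
          return w

      # toy usage: an 80 kg person eating 2600 kcal/day drifts toward ~86.7 kg
      print(simulate_weight(w0=80.0, intake_kcal=2600.0, expenditure_per_kg=30.0, days=365))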

  2. Ion confinement and transport in a toroidal plasma with externally imposed radial electric fields

    NASA Technical Reports Server (NTRS)

    Roth, J. R.; Krawczonek, W. M.; Powers, E. J.; Kim, Y. C.; Hong, H. Y.

    1979-01-01

    Strong electric fields were imposed along the minor radius of the toroidal plasma by biasing it with electrodes maintained at kilovolt potentials. Coherent, low-frequency disturbances characteristic of various magnetohydrodynamic instabilities were absent in the high-density, well-confined regime. High, direct-current radial electric fields with magnitudes up to 135 volts per centimeter penetrated inward to at least one-half the plasma radius. When the electric field pointed radially toward, the ion transport was inward against a strong local density gradient; and the plasma density and confinement time were significantly enhanced. The radial transport along the electric field appeared to be consistent with fluctuation-induced transport. With negative electrode polarity the particle confinement was consistent with a balance of two processes: a radial infusion of ions, in those sectors of the plasma not containing electrodes, that resulted from the radially inward fields; and ion losses to the electrodes, each of the which acted as a sink and drew ions out of the plasma. A simple model of particle confinement was proposed in which the particle confinement time is proportional to the plasma volume. The scaling predicted by this model was consistent with experimental measurements.

  3. Post-eruptive inflation of Okmok Volcano, Alaska, from InSAR, 2008–2014

    USGS Publications Warehouse

    Qu, Feifei; Lu, Zhong; Poland, Michael; Freymueller, Jeffrey T.; Zhang, Qin; Jung, Hyung-Sup

    2016-01-01

    Okmok, a ~10-km wide caldera that occupies most of the northeastern end of Umnak Island, is one of the most active volcanoes in the Aleutian arc. The most recent eruption at Okmok, during July-August 2008, was by far its largest and most explosive since at least the early 19th century. We investigate post-eruptive magma supply and storage at the volcano during 2008–2014 by analyzing all available synthetic aperture radar (SAR) images of Okmok acquired during that time period using the multi-temporal InSAR technique. Data from the C-band Envisat and X-band TerraSAR-X satellites indicate that Okmok started inflating very soon after the end of the 2008 eruption, at a time-variable rate of 48-130 mm/y, consistent with GPS measurements. The "model-assisted" phase unwrapping method is applied to improve the phase unwrapping operation for long temporal baseline pairs. The InSAR time-series is used as input for deformation source modeling, which suggests magma accumulating at variable rates in a shallow storage zone at ~3.9 km below sea level beneath the summit caldera, consistent with previous studies. The modeled volume accumulation in the 6 years following the 2008 eruption is ~75% of the 1997 eruption volume and ~25% of the 2008 eruption volume.
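
    Deformation-source modelling of this kind often starts from the classic Mogi point source, which maps a volume change at depth to surface vertical displacement. The sketch below is a generic forward model with placeholder numbers; the paper's inversion machinery is not reproduced.

      import numpy as np

      def mogi_uz(r, depth, dV, nu=0.25):
          """Surface vertical displacement (m) at radial distance r (m) from a
          point pressure source at `depth` (m) with volume change dV (m^3)."""
          return (1.0 - nu) * dV * depth / (np.pi * (r**2 + depth**2) ** 1.5)

      # toy usage: a source near the ~3.9 km depth reported above (dV is assumed)
      r = np.linspace(0.0, 10e3, 200)
      uz = mogi_uz(r, depth=3.9e3, dV=1e7)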

  4. Multi-Epoch Multiwavelength Spectra and Models for Blazar 3C 279

    NASA Technical Reports Server (NTRS)

    Hartman, R. C.; Boettcher, M.; Aldering, G.; Aller, H.; Aller, M.; Backman, D. E.; Balonek, T. J.; Bertsch, D. L.; Bloom, S. D.; Bock, H.

    2001-01-01

    Of the blazars detected by EGRET in GeV gamma-rays, 3C 279 is not only the best-observed by EGRET, but also one of the best-monitored at lower frequencies. We have assembled eleven spectra, from GHz radio through GeV gamma-rays, from the time intervals of EGRET observations. Although some of the data have appeared in previous publications, most are new, including data taken during the high states in early 1999 and early 2000. All of the spectra show substantial gamma-ray contribution to the total luminosity of the object; in a high state, the gamma-ray luminosity dominates over that at all other frequencies by a factor of more than 10. There is no clear pattern of time correlation; different bands do not always rise and fall together, even in the optical, X-ray, and gamma-ray bands. The spectra are modeled using a leptonic jet, with combined synchrotron self-Compton + external Compton gamma-ray production. Spectral variability of 3C 279 is consistent with variations of the bulk Lorentz factor of the jet, accompanied by changes in the spectral shape of the electron distribution. Our modeling results are consistent with the UV spectrum of 3C 279 being dominated by accretion disk radiation during times of low gamma-ray intensity.

  5. Seemingly unrelated intervention time series models for effectiveness evaluation of large scale environmental remediation.

    PubMed

    Ip, Ryan H L; Li, W K; Leung, Kenneth M Y

    2013-09-15

    Large scale environmental remediation projects applied to sea water always involve large amounts of capital investment. Rigorous effectiveness evaluations of such projects are, therefore, necessary and essential for policy review and future planning. This study aims at investigating the effectiveness of environmental remediation using three different Seemingly Unrelated Regression (SUR) time series models with intervention effects, including Model (1), assuming no correlation within and across variables; Model (2), assuming no correlation across variables but allowing correlations within a variable across different sites; and Model (3), allowing all possible correlations among variables (i.e., an unrestricted model). The results suggested that the unrestricted SUR model is the most reliable one, consistently having the smallest variations of the estimated model parameters. We discuss our results with reference to marine water quality management in Hong Kong while bringing managerial issues into consideration.
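
    A compact sketch of the unrestricted SUR idea (Model 3): estimate each equation by OLS, use the residuals to estimate the cross-equation error covariance, then re-estimate the stacked system by feasible GLS. The data here are synthetic stand-ins for the water-quality series.

      import numpy as np

      rng = np.random.default_rng(1)
      T, k = 200, 3
      Xs = [np.column_stack([np.ones(T), rng.normal(size=(T, k - 1))]) for _ in range(2)]
      ys = [X @ rng.normal(size=k) + rng.normal(size=T) for X in Xs]

      # Step 1: equation-by-equation OLS to obtain residuals
      betas = [np.linalg.lstsq(X, y, rcond=None)[0] for X, y in zip(Xs, ys)]
      resid = np.column_stack([y - X @ b for X, y, b in zip(Xs, ys, betas)])
      Sigma = resid.T @ resid / T                    # cross-equation covariance estimate

      # Step 2: feasible GLS on the stacked system with Omega = Sigma kron I_T
      X_big = np.block([[Xs[0], np.zeros_like(Xs[1])],
                        [np.zeros_like(Xs[0]), Xs[1]]])
      y_big = np.concatenate(ys)
      Om_inv = np.kron(np.linalg.inv(Sigma), np.eye(T))
      beta_sur = np.linalg.solve(X_big.T @ Om_inv @ X_big, X_big.T @ Om_inv @ y_big)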

  6. Architecture for reactive planning of robot actions

    NASA Astrophysics Data System (ADS)

    Riekki, Jukka P.; Roening, Juha

    1995-01-01

    In this article, a reactive system for planning robot actions is described. The described hierarchical control system architecture consists of planning-executing-monitoring-modelling elements (PEMM elements). A PEMM element is a goal-oriented, combined processing and data element. It includes a planner, an executor, a monitor, a modeler, and a local model. The elements form a tree-like structure. An element receives tasks from its ancestor and sends subtasks to its descendants. The model knowledge is distributed into the local models, which are connected to each other. The elements can be synchronized. The PEMM architecture is strictly hierarchical. It integrates planning, sensing, and modelling into a single framework. A PEMM-based control system is reactive, as it can cope with asynchronous events and operate under time constraints. The control system is intended to be used primarily to control mobile robots and robot manipulators in dynamic and partially unknown environments. It is especially suitable for applications consisting of physically separated devices and computing resources.

  7. Lunar PMAD technology assessment

    NASA Technical Reports Server (NTRS)

    Metcalf, Kenneth J.

    1992-01-01

    This report documents an initial set of power conditioning models created to generate 'ballpark' power management and distribution (PMAD) component mass and size estimates. It contains converter, rectifier, inverter, transformer, remote bus isolator (RBI), and remote power controller (RPC) models. These models allow certain studies to be performed; however, additional models are required to assess a full range of PMAD alternatives. The intent is to eventually form a library of PMAD models that will allow system designers to evaluate various power system architectures and distribution techniques quickly and consistently. The models in this report are designed primarily for space exploration initiative (SEI) missions requiring continuous power and supporting manned operations. The mass estimates were developed by identifying the stages in a component and obtaining mass breakdowns for these stages from near term electronic hardware elements. Technology advances were then incorporated to generate hardware masses consistent with the 2000 to 2010 time period. The mass of a complete component is computed by algorithms that calculate the masses of the component stages, control and monitoring, enclosure, and thermal management subsystem.

  8. Detecting consistent patterns of directional adaptation using differential selection codon models.

    PubMed

    Parto, Sahar; Lartillot, Nicolas

    2017-06-23

    Phylogenetic codon models are often used to characterize the selective regimes acting on protein-coding sequences. Recent methodological developments have led to models explicitly accounting for the interplay between mutation and selection, by modeling the amino acid fitness landscape along the sequence. However, thus far, most of these models have assumed that the fitness landscape is constant over time. Fluctuations of the fitness landscape may often be random or depend on complex and unknown factors. However, some organisms may be subject to systematic changes in selective pressure, resulting in reproducible molecular adaptations across independent lineages subject to similar conditions. Here, we introduce a codon-based differential selection model, which aims to detect and quantify the fine-grained consistent patterns of adaptation at the protein-coding level, as a function of external conditions experienced by the organism under investigation. The model parameterizes the global mutational pressure, as well as the site- and condition-specific amino acid selective preferences. This phylogenetic model is implemented in a Bayesian MCMC framework. After validation with simulations, we applied our method to a dataset of HIV sequences from patients with known HLA genetic background. Our differential selection model detects and characterizes differentially selected coding positions specifically associated with two different HLA alleles. Our differential selection model is able to identify consistent molecular adaptations as a function of repeated changes in the environment of the organism. These models can be applied to many other problems, ranging from viral adaptation to evolution of life-history strategies in plants or animals.

  9. State-of-charge estimation in lithium-ion batteries: A particle filter approach

    NASA Astrophysics Data System (ADS)

    Tulsyan, Aditya; Tsai, Yiting; Gopaluni, R. Bhushan; Braatz, Richard D.

    2016-11-01

    The dynamics of lithium-ion batteries are complex and are often approximated by models consisting of partial differential equations (PDEs) relating the internal ionic concentrations and potentials. The Pseudo two-dimensional model (P2D) is one model that performs sufficiently accurately under various operating conditions and battery chemistries. Despite its widespread use for prediction, this model is too complex for standard estimation and control applications. This article presents an original algorithm for state-of-charge estimation using the P2D model. Partial differential equations are discretized using implicit stable algorithms and reformulated into a nonlinear state-space model. This discrete, high-dimensional model (consisting of tens to hundreds of states) contains implicit, nonlinear algebraic equations. The uncertainty in the model is characterized by additive Gaussian noise. By exploiting the special structure of the pseudo two-dimensional model, a novel particle filter algorithm that sweeps in time and spatial coordinates independently is developed. This algorithm circumvents the degeneracy problems associated with high-dimensional state estimation and avoids the repetitive solution of implicit equations by defining a 'tether' particle. The approach is illustrated through extensive simulations.
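
    A generic bootstrap particle filter captures the estimation loop described above; the paper's filter additionally sweeps time and spatial coordinates independently and uses a 'tether' particle, none of which is reproduced here. The transition and measurement models below are hypothetical placeholders.

      import numpy as np

      rng = np.random.default_rng(0)

      def step(x):            # assumed state transition (e.g., slow SOC depletion)
          return 0.999 * x + rng.normal(0.0, 0.001, size=x.shape)

      def likelihood(y, x):   # assumed Gaussian measurement model y = x + noise
          return np.exp(-0.5 * ((y - x) / 0.01) ** 2)

      N = 500
      particles = rng.uniform(0.9, 1.0, N)       # initial SOC belief
      weights = np.full(N, 1.0 / N)
      for y in [0.98, 0.97, 0.96]:               # incoming observations
          particles = step(particles)
          weights *= likelihood(y, particles)
          weights /= weights.sum()
          idx = rng.choice(N, N, p=weights)      # multinomial resampling
          particles, weights = particles[idx], np.full(N, 1.0 / N)
          print(particles.mean())                # SOC estimate after each update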

  10. Pharmacokinetic Modeling of Perfluoroalkyl Acids in Rodents

    EPA Science Inventory

    Perfluorooctanoic acid (PFOA) has pharmacokinetic properties that appear consistent with a number of processes that are currently not well understood. Studies in mice exposed orally at lower doses (1 and 10 mg/kg) demonstrated blood, liver, and kidney concentration time courses ...

  11. Toxicogenomic Effects Common to Triazole Antifungals and Conserved Between Rats and Humans

    EPA Science Inventory

    The triazole antifungals myclobutanil, propiconazole and triadimefon cause varying degrees of hepatic toxicity and disrupt steroid hormone homeostasis in rodent in vivo models. To identify biological pathways consistently modulated across multiple time-points and various study d...

  12. SGR-like behaviour of the repeating FRB 121102

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, F.Y.; Yu, H., E-mail: fayinwang@nju.edu.cn, E-mail: yuhai@smail.nju.edu.cn

    2017-03-01

    Fast radio bursts (FRBs) are millisecond-duration radio signals occurring at cosmological distances. However, the physical origin of FRBs remains a mystery, and many models have been proposed. Here we study the frequency distributions of peak flux, fluence, duration and waiting time for the repeating FRB 121102. The cumulative distributions of peak flux, fluence and duration show power-law forms. The waiting time distribution also shows a power-law form and is consistent with a non-stationary Poisson process. These distributions are similar to those of soft gamma repeaters (SGRs). We also use the statistical results to test the proposed models for FRBs. These distributions are consistent with the predictions from avalanche models of slowly driven nonlinear dissipative systems.
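
    Distribution fits of this kind can be illustrated with a simple least-squares slope of the empirical cumulative distribution in log-log space; the data below are synthetic stand-ins for the burst fluences, and a maximum-likelihood estimator would be preferred in practice.

      import numpy as np

      rng = np.random.default_rng(0)
      fluence = rng.pareto(1.8, 500) + 1.0      # synthetic power-law-like sample

      x = np.sort(fluence)
      ccdf = 1.0 - np.arange(1, x.size + 1) / x.size   # empirical P(F > x)
      mask = ccdf > 0                                  # drop the final zero point
      slope, intercept = np.polyfit(np.log10(x[mask]), np.log10(ccdf[mask]), 1)
      print("power-law index of N(>F):", slope)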

  13. [Establishment of a canine slow transit constipation model and evaluation of defecation, gastrointestinal transit and pathological sections].

    PubMed

    Zhu, D; Chen, S; Yao, S K; Li, Y M; Chen, S X

    2018-06-12

    Objective: To establish a canine model of slow transit constipation (STC) and to test the changes in defecation, gastrointestinal transit time and pathology sections. Methods: Baseline information was measured in 8 beagle dogs, which were randomly divided into a control group and a model group. The dogs in the model group were given a diet of canned meat, as well as a combination of compound diphenoxylate and alosetron hydrochloride, for 5 weeks. Dogs in the control group were given a normal diet with no special intervention. Stool frequency and consistency were observed and recorded daily, and the gastrointestinal transit time (GITT) was measured every week. All animals underwent midline laparotomy, and colonic tissues were taken from the rectosigmoid colon, then investigated by light microscopy, electron microscopy, and immunohistochemistry to evaluate changes in protein gene product 9.5 (PGP9.5), synaptophysin and c-kit between the two groups. Results: All 8 beagle dogs completed the experiment. Both stool frequency and stool consistency scores decreased in the model group (F = 6.568, P = 0.043; F = 25.954, P = 0.002), and GITT was delayed in the model group (F = 42.573, P = 0.001). After 5 weeks of intervention, the myenteric neurons and interstitial cells of Cajal in the model group showed damage, such as swelling of mitochondria, under electron microscopy, and both the PGP9.5 and synaptophysin integrated optical densities of the rectosigmoid colon were decreased (t = 3.471, P = 0.013; t = 2.506, P = 0.046) on immunohistochemistry. The c-kit integrated optical density showed no statistically significant difference between the two groups (t = 1.709, P = 0.138). Conclusions: A canine STC model consistent with clinical symptoms and pathological changes was successfully established; it can be used to observe and evaluate the therapeutic effect of interventions such as electrical stimulation and surgery.

  14. Internal models of target motion: expected dynamics overrides measured kinematics in timing manual interceptions.

    PubMed

    Zago, Myrka; Bosco, Gianfranco; Maffei, Vincenzo; Iosa, Marco; Ivanenko, Yuri P; Lacquaniti, Francesco

    2004-04-01

    Prevailing views on how we time the interception of a moving object assume that the visual inputs are informationally sufficient to estimate the time-to-contact from the object's kinematics. Here we present evidence in favor of a different view: the brain makes the best estimate about target motion based on measured kinematics and an a priori guess about the causes of motion. According to this theory, a predictive model is used to extrapolate time-to-contact from expected dynamics (kinetics). We projected a virtual target moving vertically downward on a wide screen with different randomized laws of motion. In the first series of experiments, subjects were asked to intercept this target by punching a real ball that fell hidden behind the screen and arrived in synchrony with the visual target. Subjects systematically timed their motor responses consistent with the assumption of gravity effects on an object's mass, even when the visual target did not accelerate. With training, the gravity model was not switched off but adapted to nonaccelerating targets by shifting the time of motor activation. In the second series of experiments, there was no real ball falling behind the screen. Instead the subjects were required to intercept the visual target by clicking a mouse button. In this case, subjects timed their responses consistent with the assumption of uniform motion in the absence of forces, even when the target actually accelerated. Overall, the results are in accord with the theory that motor responses evoked by visual kinematics are modulated by a prior of the target dynamics. The prior appears surprisingly resistant to modifications based on performance errors.
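
    A worked example of the two timing rules contrasted above, for a target starting 1.5 m above the interception point with an initial downward speed of 2 m/s (numbers are illustrative, not taken from the experiments).

      import math

      h, v0, g = 1.5, 2.0, 9.81

      ttc_kinematic = h / v0                                # constant-velocity estimate
      ttc_gravity = (-v0 + math.sqrt(v0**2 + 2*g*h)) / g    # solves h = v0*t + g*t^2/2

      # ~0.75 s vs ~0.39 s: the gravity prior predicts a much earlier arrival
      print(ttc_kinematic, ttc_gravity)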

  15. Parareal in time 3D numerical solver for the LWR Benchmark neutron diffusion transient model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baudron, Anne-Marie, E-mail: anne-marie.baudron@cea.fr; CEA-DRN/DMT/SERMA, CEN-Saclay, 91191 Gif sur Yvette Cedex; Lautard, Jean-Jacques, E-mail: jean-jacques.lautard@cea.fr

    2014-12-15

    In this paper we present a time-parallel algorithm for the 3D neutron calculation of a transient model in a nuclear reactor core. The neutron calculation consists in numerically solving the time-dependent diffusion approximation equation, which is a simplified transport equation. The numerical resolution is done with the finite element method based on a tetrahedral meshing of the computational domain, representing the reactor core, and time discretization is achieved using a θ-scheme. The transient model presents moving control rods during the time of the reaction. Therefore, cross-sections (piecewise constants) are taken into account by interpolations with respect to the velocity of the control rods. The parallelism across time is achieved by an adequate use of the parareal in time algorithm for the problem at hand. This parallel method is a predictor-corrector scheme that iteratively combines the use of two kinds of numerical propagators, one coarse and one fine. Our method is made efficient by means of a coarse solver defined with a large time step and a fixed-position control rod model, while the fine propagator is assumed to be a high order numerical approximation of the full model. The parallel implementation of our method provides good scalability of the algorithm. Numerical results show the efficiency of the parareal method on a large light water reactor transient model corresponding to the Langenbuch–Maurer–Werner benchmark.
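
    The predictor-corrector core of parareal is compact enough to sketch on a scalar test equation du/dt = a*u, a stand-in for the full neutron diffusion system: a cheap coarse propagator G and an expensive fine propagator F are combined as U_{k+1}^{n+1} = G(U_{k+1}^n) + F(U_k^n) - G(U_k^n).

      import math

      a, T, N = -1.0, 1.0, 10
      dT = T / N

      def G(u):                      # coarse: one backward-Euler step over dT
          return u / (1.0 - a * dT)

      def F(u, m=100):               # fine: m small backward-Euler steps over dT
          for _ in range(m):
              u = u / (1.0 - a * dT / m)
          return u

      U = [1.0] * (N + 1)
      for n in range(N):             # initial coarse sweep (predictor)
          U[n + 1] = G(U[n])
      for k in range(5):             # parareal iterations
          Fk = [F(U[n]) for n in range(N)]   # fine sweeps, parallel across n
          Gk = [G(U[n]) for n in range(N)]
          for n in range(N):         # sequential corrector sweep
              U[n + 1] = G(U[n]) + Fk[n] - Gk[n]
      print(U[-1], math.exp(a * T))  # converges toward the fine/exact solution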

  16. Neural network modelling of the influence of channelopathies on reflex visual attention.

    PubMed

    Gravier, Alexandre; Quek, Chai; Duch, Włodzisław; Wahab, Abdul; Gravier-Rymaszewska, Joanna

    2016-02-01

    This paper introduces a model of Emergent Visual Attention in the presence of calcium channelopathy (EVAC). By modelling channelopathy, EVAC constitutes an effort towards identifying the possible causes of autism. The network structure embodies the dual pathways model of cortical processing of visual input, with reflex attention as an emergent property of neural interactions. EVAC extends existing work by introducing attention shift in a larger-scale network and applying a phenomenological model of channelopathy. In the presence of a distractor, the channelopathic network's rate of failure to shift attention is lower than the control network's, but overall, the control network exhibits a lower classification error rate. The simulation results also show differences in task-relative reaction times between the control and channelopathic networks. The attention shift timings inferred from the model are consistent with studies of attention shift in autistic children.

  17. The dynamic radiation environment assimilation model (DREAM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reeves, Geoffrey D; Koller, Josef; Tokar, Robert L

    2010-01-01

    The Dynamic Radiation Environment Assimilation Model (DREAM) is a 3-year effort sponsored by the US Department of Energy to provide global, retrospective, or real-time specification of the natural and potential nuclear radiation environments. The DREAM model uses Kalman filtering techniques that combine the strengths of new physical models of the radiation belts with electron observations from long-term satellite systems such as GPS and geosynchronous systems. DREAM includes a physics model for the production and long-term evolution of artificial radiation belts from high altitude nuclear explosions. DREAM has been validated against satellites in arbitrary orbits and consistently produces more accurate results than existing models. Tools for user-specific applications and graphical displays are in beta testing and a real-time version of DREAM has been in continuous operation since November 2009.
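
    A scalar Kalman-filter sketch of the assimilation idea: forecast with an assumed decay model, then correct with a noisy observation. All parameters and the synthetic data are placeholders, not DREAM's actual radiation-belt physics or satellite measurements.

      import numpy as np

      rng = np.random.default_rng(2)
      phi, q, r = 0.97, 0.05, 0.2      # model decay, process noise, obs noise (assumed)
      x_est, P = 0.0, 1.0              # state (e.g., log electron flux) and its variance
      for _ in range(50):
          x_pred = phi * x_est                       # forecast step
          P_pred = phi**2 * P + q
          y = x_pred + rng.normal(0.0, np.sqrt(r))   # synthetic observation
          K = P_pred / (P_pred + r)                  # Kalman gain
          x_est = x_pred + K * (y - x_pred)          # analysis step
          P = (1.0 - K) * P_pred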

  18. Evaluation of the CEAS trend and monthly weather data models for soybean yields in Iowa, Illinois, and Indiana

    NASA Technical Reports Server (NTRS)

    French, V. (Principal Investigator)

    1982-01-01

    The CEAS models evaluated use historic trend and meteorological and agroclimatic variables to forecast soybean yields in Iowa, Illinois, and Indiana. Indicators of yield reliability and current measures of modeled yield reliability were obtained from bootstrap tests on the end of season models. Indicators of yield reliability show that the state models are consistently better than the crop reporting district (CRD) models. One CRD model is especially poor. At the state level, the bias of each model is less than one half quintal/hectare. The standard deviation is between one and two quintals/hectare. The models are adequate in terms of coverage and are to a certain extent consistent with scientific knowledge. Timely yield estimates can be made during the growing season using truncated models. The models are easy to understand and use and are not costly to operate. Other than the specification of values used to determine evapotranspiration, the models are objective. Because the method of variable selection used in the model development is adequately documented, no evaluation can be made of the objectivity and cost of redevelopment of the model.

  19. X-ray time lags in PG 1211+143

    NASA Astrophysics Data System (ADS)

    Lobban, A. P.; Vaughan, S.; Pounds, K.; Reeves, J. N.

    2018-05-01

    We investigate the X-ray time lags of a recent ˜630 ks XMM-Newton observation of PG 1211+143. We find well-correlated variations across the XMM-Newton EPIC bandpass, with the first detection of a hard lag in this source with a mean time delay of up to ˜3 ks at the lowest frequencies. We find that the energy-dependence of the low-frequency hard lag scales approximately linearly with log(E) when averaged over all orbits, consistent with the propagating fluctuations model. However, we find that the low-frequency lag behaviour becomes more complex on time-scales longer than a single orbit, suggestive of additional modes of variability. We also detect a high-frequency soft lag at ˜10-4 Hz with the magnitude of the delay peaking at ≲ 0.8 ks, consistent with previous observations, which we discuss in terms of small-scale reverberation.
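
    Frequency-dependent lags of this kind are commonly estimated from the phase of the cross spectrum between two light curves; the sketch below uses synthetic series with a built-in delay, and the paper's segment averaging and uncertainty estimation are omitted. With this convention a positive lag means the hard band lags the soft band.

      import numpy as np

      def time_lags(soft, hard, dt):
          """Cross-spectral time lags; positive = hard band lags soft band."""
          S = np.fft.rfft(soft - soft.mean())
          H = np.fft.rfft(hard - hard.mean())
          f = np.fft.rfftfreq(soft.size, dt)[1:]
          cross = (S * np.conj(H))[1:]
          return f, np.angle(cross) / (2 * np.pi * f)   # phase lag -> time lag (s)

      rng = np.random.default_rng(1)
      soft = rng.normal(size=4096)
      hard = np.roll(soft, 30) + 0.1 * rng.normal(size=4096)  # delayed, noisier copy
      f, lags = time_lags(soft, hard, dt=100.0)               # lags ~ 3000 s at low f,
                                                              # before phase wrapping sets in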

  20. Semiparametric modeling: Correcting low-dimensional model error in parametric models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berry, Tyrus, E-mail: thb11@psu.edu; Harlim, John, E-mail: jharlim@psu.edu; Department of Meteorology, the Pennsylvania State University, 503 Walker Building, University Park, PA 16802-5013

    2016-03-01

    In this paper, a semiparametric modeling approach is introduced as a paradigm for addressing model error arising from unresolved physical phenomena. Our approach compensates for model error by learning an auxiliary dynamical model for the unknown parameters. Practically, the proposed approach consists of the following steps. Given a physics-based model and a noisy data set of historical observations, a Bayesian filtering algorithm is used to extract a time-series of the parameter values. Subsequently, the diffusion forecast algorithm is applied to the retrieved time-series in order to construct the auxiliary model for the time evolving parameters. The semiparametric forecasting algorithm consists of integrating the existing physics-based model with an ensemble of parameters sampled from the probability density function of the diffusion forecast. To specify initial conditions for the diffusion forecast, a Bayesian semiparametric filtering method that extends the Kalman-based filtering framework is introduced. In difficult test examples, which introduce chaotically and stochastically evolving hidden parameters into the Lorenz-96 model, we show that our approach can effectively compensate for model error, with forecasting skill comparable to that of the perfect model.

  1. The timing and sources of information for the adoption and implementation of production innovations

    NASA Technical Reports Server (NTRS)

    Ettlie, J. E.

    1976-01-01

    Two dimensions (personal-impersonal and internal-external) are used to characterize information sources as they become important during the interorganizational transfer of production innovations. The results of three studies are reviewed for the purpose of deriving a model of the timing and importance of different information sources and the utilization of new technology. Based on the findings of two retrospective studies, it was concluded that the pattern of information seeking behavior in user organizations during the awareness stage of adoption is not a reliable predictor of the eventual utilization rate. Using the additional findings of a real-time study, an empirical model of the relative importance of information sources for successful user organizations is presented. These results are extended and integrated into a theoretical model consisting of a time-profile of successful implementations and the relative importance of four types of information sources during seven stages of the adoption-implementation process.

  2. Variational Assimilation of GOME Total-Column Ozone Satellite Data in a 2D Latitude-Longitude Tracer-Transport Model.

    NASA Astrophysics Data System (ADS)

    Eskes, H. J.; Piters, A. J. M.; Levelt, P. F.; Allaart, M. A. F.; Kelder, H. M.

    1999-10-01

    A four-dimensional data-assimilation method is described to derive synoptic ozone fields from total-column ozone satellite measurements. The ozone columns are advected by a 2D tracer-transport model, using ECMWF wind fields at a single pressure level. Special attention is paid to the modeling of the forecast error covariance and quality control. The temporal and spatial dependence of the forecast error is taken into account, resulting in a global error field at any instant in time that provides a local estimate of the accuracy of the assimilated field. The authors discuss the advantages of the 4D-variational (4D-Var) approach over sequential assimilation schemes. One of the attractive features of the 4D-Var technique is its ability to incorporate measurements at later times t > t0 in the analysis at time t0, in a way consistent with the time evolution as described by the model. This significantly improves the offline analyzed ozone fields.
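
    The essence of the 4D-Var cost function can be sketched with a toy linear advection model: minimize the background misfit at t0 plus the observation misfits accumulated while the model propagates forward, so observations at later times constrain the analysis at t0. Everything below (the model, error weights, and data) is an illustrative assumption, not the paper's ozone system.

      import numpy as np
      from scipy.optimize import minimize

      n, nsteps = 40, 10

      def advect(x):                     # toy model M: periodic shift by one cell
          return np.roll(x, 1)

      rng = np.random.default_rng(0)
      truth = np.sin(2 * np.pi * np.arange(n) / n)
      obs, xt = [], truth.copy()
      for _ in range(nsteps):
          xt = advect(xt)
          obs.append(xt + 0.05 * rng.normal(size=n))   # noisy obs at each step

      xb = truth + 0.3 * rng.normal(size=n)            # background guess at t0
      B_inv, R_inv = 1.0, 1.0 / 0.05**2                # scalar error weights (assumed)

      def cost(x0):
          J = 0.5 * B_inv * np.sum((x0 - xb) ** 2)     # background term
          x = x0.copy()
          for y in obs:                                # propagate, accumulate misfit
              x = advect(x)
              J += 0.5 * R_inv * np.sum((x - y) ** 2)
          return J

      xa = minimize(cost, xb, method="L-BFGS-B").x     # analysis at t0 uses later obs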

  3. Optimal control of epidemic information dissemination over networks.

    PubMed

    Chen, Pin-Yu; Cheng, Shin-Ming; Chen, Kwang-Cheng

    2014-12-01

    Information dissemination control is of crucial importance to facilitate reliable and efficient data delivery, especially in networks consisting of time-varying links or heterogeneous links. Since the abstraction of information dissemination much resembles the spread of epidemics, epidemic models are utilized to characterize the collective dynamics of information dissemination over networks. From a systematic point of view, we aim to explore the optimal control policy for information dissemination given that the control capability is a function of its distribution time, which is a more realistic model in many applications. The main contributions of this paper are to provide an analytically tractable model for information dissemination over networks, to solve the optimal control signal distribution time for minimizing the accumulated network cost via dynamic programming, and to establish a parametric plug-in model for information dissemination control. In particular, we evaluate its performance in mobile and generalized social networks as typical examples.
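
    The trade-off described above can be made concrete with a crude sketch: an SI-type dissemination model in which a longer control-signal distribution time buys more control capability but costs more, and the best distribution time is found by scanning candidates (a grid-search stand-in for the paper's dynamic-programming solution; every functional form and constant below is an assumption).

      import numpy as np

      beta, N, dt, T = 0.4, 1000.0, 0.01, 20.0   # spread rate, nodes, step, horizon (assumed)

      def accumulated_cost(t_dist, c_epi=1.0, c_ctrl=30.0):
          """Total cost when the control signal takes t_dist to distribute."""
          I, cost = 1.0, 0.0
          for k in range(int(T / dt)):
              t = k * dt
              # control capability grows with distribution time but only acts
              # once distribution has finished (both modelling assumptions)
              eff = min(t_dist / 5.0, 1.0) if t > t_dist else 0.0
              I += beta * (1.0 - eff) * I * (N - I) / N * dt   # SI-type spread
              cost += c_epi * I * dt                           # running epidemic cost
          return cost + c_ctrl * t_dist                        # plus distribution cost

      candidates = np.arange(0.5, 10.5, 0.5)
      best = min(candidates, key=accumulated_cost)             # optimal distribution time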

  4. Membrane potential dynamics of grid cells

    PubMed Central

    Domnisoru, Cristina; Kinkhabwala, Amina A.; Tank, David W.

    2014-01-01

    During navigation, grid cells increase their spike rates in firing fields arranged on a strikingly regular triangular lattice, while their spike timing is often modulated by theta oscillations. Oscillatory interference models of grid cells predict theta amplitude modulations of membrane potential during firing field traversals, while competing attractor network models predict slow depolarizing ramps. Here, using in-vivo whole-cell recordings, we tested these models by directly measuring grid cell intracellular potentials in mice running along linear tracks in virtual reality. Grid cells had large and reproducible ramps of membrane potential depolarization that were the characteristic signature tightly correlated with firing fields. Grid cells also exhibited intracellular theta oscillations that influenced their spike timing. However, the properties of theta amplitude modulations were not consistent with the view that they determine firing field locations. Our results support cellular and network mechanisms in which grid fields are produced by slow ramps, as in attractor models, while theta oscillations control spike timing. PMID:23395984

  5. Regression analysis of informative current status data with the additive hazards model.

    PubMed

    Zhao, Shishun; Hu, Tao; Ma, Ling; Wang, Peijie; Sun, Jianguo

    2015-04-01

    This paper discusses regression analysis of current status failure time data arising from the additive hazards model in the presence of informative censoring. Many methods have been developed for regression analysis of current status data under various regression models if the censoring is noninformative, and also there exists a large literature on parametric analysis of informative current status data in the context of tumorgenicity experiments. In this paper, a semiparametric maximum likelihood estimation procedure is presented and in the method, the copula model is employed to describe the relationship between the failure time of interest and the censoring time. Furthermore, I-splines are used to approximate the nonparametric functions involved and the asymptotic consistency and normality of the proposed estimators are established. A simulation study is conducted and indicates that the proposed approach works well for practical situations. An illustrative example is also provided.

  6. A new physics-based modeling approach for tsunami-ionosphere coupling

    NASA Astrophysics Data System (ADS)

    Meng, X.; Komjathy, A.; Verkhoglyadova, O. P.; Yang, Y.-M.; Deng, Y.; Mannucci, A. J.

    2015-06-01

Tsunamis can generate gravity waves propagating upward through the atmosphere, inducing total electron content (TEC) disturbances in the ionosphere. To capture this process, we have implemented tsunami-generated gravity waves into the Global Ionosphere-Thermosphere Model (GITM) to construct a three-dimensional physics-based model WP (Wave Perturbation)-GITM. WP-GITM takes tsunami wave properties, including the wave height, wave period, wavelength, and propagation direction, as inputs and time-dependently characterizes the responses of the upper atmosphere between 100 km and 600 km altitudes. We apply WP-GITM to simulate the ionosphere above the West Coast of the United States around the time when the tsunami associated with the March 2011 Tohoku-Oki earthquake arrived. The simulated TEC perturbations agree with Global Positioning System observations reasonably well. For the first time, a fully self-consistent and physics-based model has reproduced the GPS-observed traveling ionospheric signatures of an actual tsunami event.

  7. From Geometry Optimization to Time Dependent Molecular Structure Modeling: Method Developments, ab initio Theories and Applications

    NASA Astrophysics Data System (ADS)

    Liang, Wenkel

This dissertation consists of two general parts: (I) developments of optimization algorithms (for both nuclear and electronic degrees of freedom) for time-independent molecules and (II) novel methods, first-principles theories, and applications in time-dependent molecular structure modeling. In the first part, we discuss two new algorithms for static geometry optimization: the eigenspace update (ESU) method in nonredundant internal coordinates, which exhibits enhanced performance with up to a factor of 3 savings in computational cost for large molecular systems; and the Car-Parrinello density matrix search (CP-DMS) method, which enables direct minimization of the SCF energy as an effective alternative to the conventional diagonalization approach. In the second part, we consider time dependence and first present two nonadiabatic dynamics studies that model laser-controlled molecular photo-dissociation for a qualitative understanding of intense laser-molecule interaction, using the ab initio direct Ehrenfest dynamics scheme implemented with the real-time time-dependent density functional theory (RT-TDDFT) approach developed in our group. Furthermore, we place special interest on nonadiabatic electronic dynamics on the ultrafast time scale, and present (1) a novel technique that can obtain not only the energies but also the electron densities of doubly excited states within a single-determinant framework, by combining CP-DMS with RT-TDDFT; (2) a solvated first-principles electronic dynamics method that incorporates the polarizable continuum solvation model (PCM) into RT-TDDFT, which is found to be very effective in describing the dynamical solvation effect in the charge-transfer process and yields an absorption spectrum consistent with conventional linear-response results in solution; and (3) applications of the PCM-RT-TDDFT method to study intramolecular charge-transfer (CT) dynamics in a C60 derivative. This work provides insights into the characteristics of ultrafast dynamics in photoexcited fullerene derivatives and aids the rational design of pre-dissociative excitons in the intramolecular CT process in organic solar cells.

  8. Error modeling for surrogates of dynamical systems using machine learning: Machine-learning-based error model for surrogates of dynamical systems

    DOE PAGES

    Trehan, Sumeet; Carlberg, Kevin T.; Durlofsky, Louis J.

    2017-07-14

A machine learning–based framework for modeling the error introduced by surrogate models of parameterized dynamical systems is proposed. The framework entails the use of high-dimensional regression techniques (e.g., random forests and LASSO) to map a large set of inexpensively computed “error indicators” (i.e., features) produced by the surrogate model at a given time instance to a prediction of the surrogate-model error in a quantity of interest (QoI). This eliminates the need for the user to hand-select a small number of informative features. The methodology requires a training set of parameter instances at which the time-dependent surrogate-model error is computed by simulating both the high-fidelity and surrogate models. Using these training data, the method first determines regression-model locality (via classification or clustering) and subsequently constructs a “local” regression model to predict the time-instantaneous error within each identified region of feature space. We consider 2 uses for the resulting error model: (1) as a correction to the surrogate-model QoI prediction at each time instance and (2) as a way to statistically model arbitrary functions of the time-dependent surrogate-model error (e.g., time-integrated errors). We then apply the proposed framework to model errors in reduced-order models of nonlinear oil-water subsurface flow simulations, with time-varying well-control (bottom-hole pressure) parameters. The reduced-order models used in this work entail application of trajectory piecewise linearization in conjunction with proper orthogonal decomposition. Moreover, when the first use of the method is considered, numerical experiments demonstrate consistent improvement in accuracy in the time-instantaneous QoI prediction relative to the original surrogate model, across a large number of test cases. When the second use is considered, results show that the proposed method provides accurate statistical predictions of the time- and well-averaged errors.
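
    The two-stage construction described above (locality via clustering, then a local regression per region) maps directly onto off-the-shelf tools. The sketch below uses synthetic data in place of paired high-fidelity/surrogate runs; the cluster count, forest size, and the example QoI value used in the correction are illustrative assumptions.

```python
# Sketch of the machine-learning error model: cluster feature space to get
# regression locality, then fit one random forest per cluster to map cheap
# surrogate "error indicators" to the surrogate-model QoI error.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X = rng.standard_normal((2000, 8))            # error indicators (features)
y = np.sin(X[:, 0]) + 0.1 * X[:, 1] ** 2 + 0.05 * rng.standard_normal(2000)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
models = {c: RandomForestRegressor(n_estimators=200, random_state=0)
              .fit(X[km.labels_ == c], y[km.labels_ == c])
          for c in range(3)}

def predict_error(x_new):
    c = km.predict(x_new.reshape(1, -1))[0]   # pick the local regression model
    return models[c].predict(x_new.reshape(1, -1))[0]

# Use (1): correct a surrogate QoI prediction (2.37 is an arbitrary stand-in)
corrected_qoi = 2.37 + predict_error(X[0])
print(corrected_qoi)
```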

  9. Error modeling for surrogates of dynamical systems using machine learning: Machine-learning-based error model for surrogates of dynamical systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trehan, Sumeet; Carlberg, Kevin T.; Durlofsky, Louis J.

A machine learning–based framework for modeling the error introduced by surrogate models of parameterized dynamical systems is proposed. The framework entails the use of high-dimensional regression techniques (e.g., random forests and LASSO) to map a large set of inexpensively computed “error indicators” (i.e., features) produced by the surrogate model at a given time instance to a prediction of the surrogate-model error in a quantity of interest (QoI). This eliminates the need for the user to hand-select a small number of informative features. The methodology requires a training set of parameter instances at which the time-dependent surrogate-model error is computed by simulating both the high-fidelity and surrogate models. Using these training data, the method first determines regression-model locality (via classification or clustering) and subsequently constructs a “local” regression model to predict the time-instantaneous error within each identified region of feature space. We consider 2 uses for the resulting error model: (1) as a correction to the surrogate-model QoI prediction at each time instance and (2) as a way to statistically model arbitrary functions of the time-dependent surrogate-model error (e.g., time-integrated errors). We then apply the proposed framework to model errors in reduced-order models of nonlinear oil-water subsurface flow simulations, with time-varying well-control (bottom-hole pressure) parameters. The reduced-order models used in this work entail application of trajectory piecewise linearization in conjunction with proper orthogonal decomposition. Moreover, when the first use of the method is considered, numerical experiments demonstrate consistent improvement in accuracy in the time-instantaneous QoI prediction relative to the original surrogate model, across a large number of test cases. When the second use is considered, results show that the proposed method provides accurate statistical predictions of the time- and well-averaged errors.

  10. Kinetic modeling of Nernst effect in magnetized hohlraums.

    PubMed

    Joglekar, A S; Ridgers, C P; Kingham, R J; Thomas, A G R

    2016-04-01

    We present nanosecond time-scale Vlasov-Fokker-Planck-Maxwell modeling of magnetized plasma transport and dynamics in a hohlraum with an applied external magnetic field, under conditions similar to recent experiments. Self-consistent modeling of the kinetic electron momentum equation allows for a complete treatment of the heat flow equation and Ohm's law, including Nernst advection of magnetic fields. In addition to showing the prevalence of nonlocal behavior, we demonstrate that effects such as anomalous heat flow are induced by inverse bremsstrahlung heating. We show magnetic field amplification up to a factor of 3 from Nernst compression into the hohlraum wall. The magnetic field is also expelled towards the hohlraum axis due to Nernst advection faster than frozen-in flux would suggest. Nonlocality contributes to the heat flow towards the hohlraum axis and results in an augmented Nernst advection mechanism that is included self-consistently through kinetic modeling.

  11. Flexible Modeling of Survival Data with Covariates Subject to Detection Limits via Multiple Imputation.

    PubMed

    Bernhardt, Paul W; Wang, Huixia Judy; Zhang, Daowen

    2014-01-01

    Models for survival data generally assume that covariates are fully observed. However, in medical studies it is not uncommon for biomarkers to be censored at known detection limits. A computationally-efficient multiple imputation procedure for modeling survival data with covariates subject to detection limits is proposed. This procedure is developed in the context of an accelerated failure time model with a flexible seminonparametric error distribution. The consistency and asymptotic normality of the multiple imputation estimator are established and a consistent variance estimator is provided. An iterative version of the proposed multiple imputation algorithm that approximates the EM algorithm for maximum likelihood is also suggested. Simulation studies demonstrate that the proposed multiple imputation methods work well while alternative methods lead to estimates that are either biased or more variable. The proposed methods are applied to analyze the dataset from a recently-conducted GenIMS study.

  12. Emergency response to an anthrax attack

    PubMed Central

    Wein, Lawrence M.; Craft, David L.; Kaplan, Edward H.

    2003-01-01

    We developed a mathematical model to compare various emergency responses in the event of an airborne anthrax attack. The system consists of an atmospheric dispersion model, an age-dependent dose–response model, a disease progression model, and a set of spatially distributed two-stage queueing systems consisting of antibiotic distribution and hospital care. Our results underscore the need for the extremely aggressive and timely use of oral antibiotics by all asymptomatics in the exposure region, distributed either preattack or by nonprofessionals postattack, and the creation of surge capacity for supportive hospital care via expanded training of nonemergency care workers at the local level and the use of federal and military resources and nationwide medical volunteers. The use of prioritization (based on disease stage and/or age) at both queues, and the development and deployment of modestly rapid and sensitive biosensors, while helpful, produce only second-order improvements. PMID:12651951

  13. Heat-pipe Earth.

    PubMed

    Moore, William B; Webb, A Alexander G

    2013-09-26

    The heat transport and lithospheric dynamics of early Earth are currently explained by plate tectonic and vertical tectonic models, but these do not offer a global synthesis consistent with the geologic record. Here we use numerical simulations and comparison with the geologic record to explore a heat-pipe model in which volcanism dominates surface heat transport. These simulations indicate that a cold and thick lithosphere developed as a result of frequent volcanic eruptions that advected surface materials downwards. Declining heat sources over time led to an abrupt transition to plate tectonics. Consistent with model predictions, the geologic record shows rapid volcanic resurfacing, contractional deformation, a low geothermal gradient across the bulk of the lithosphere and a rapid decrease in heat-pipe volcanism after initiation of plate tectonics. The heat-pipe Earth model therefore offers a coherent geodynamic framework in which to explore the evolution of our planet before the onset of plate tectonics.

  14. A Self-Consistent Model of the Interacting Ring Current Ions with Electromagnetic ICWs

    NASA Technical Reports Server (NTRS)

    Khazanov, G. V.; Gamayunov, K. V.; Jordanova, V. K.; Krivorutsky, E. N.; Whitaker, Ann F. (Technical Monitor)

    2001-01-01

Initial results from a newly developed model of the interacting ring current ions and ion cyclotron waves are presented. The model is based on a system of two coupled kinetic equations: one equation describes the ring current ion dynamics, and the other describes wave evolution. The system gives a self-consistent description of ring current ions and ion cyclotron waves in a quasilinear approach. These two equations were solved on a global scale under non-steady-state conditions during the May 2-5, 1998 storm. The structure and dynamics of the ring current proton precipitating flux regions and the wave active zones at three time cuts around the initial, main, and late recovery phases of the May 4, 1998 storm are presented and discussed in detail. Comparisons of the model wave-ion data with results from the Polar/HYDRA and Polar/MFE instruments are presented.

  15. Time-dependent jet flow and noise computations

    NASA Technical Reports Server (NTRS)

    Berman, C. H.; Ramos, J. I.; Karniadakis, G. E.; Orszag, S. A.

    1990-01-01

    Methods for computing jet turbulence noise based on the time-dependent solution of Lighthill's (1952) differential equation are demonstrated. A key element in this approach is a flow code for solving the time-dependent Navier-Stokes equations at relatively high Reynolds numbers. Jet flow results at Re = 10,000 are presented here. This code combines a computationally efficient spectral element technique and a new self-consistent turbulence subgrid model to supply values for Lighthill's turbulence noise source tensor.

  16. Elucidating light-induced charge accumulation in an artificial analogue of methane monooxygenase enzymes using time-resolved X-ray absorption spectroscopy

    DOE PAGES

    Moonshiram, Dooshaye; Picon, Antonio; Vazquez-Mayagoitia, Alvaro; ...

    2017-02-08

Here, we report the use of time-resolved X-ray absorption spectroscopy on the ns–μs time scale to track the light-induced two-electron transfer processes in a multi-component photocatalytic system consisting of [Ru(bpy)3]2+ / a diiron(III,III) model / triethylamine. EXAFS analysis with DFT calculations confirms the structural configurations of the diiron(III,III) and reduced diiron(II,II) states.

  17. Performance Analysis of Live-Virtual-Constructive and Distributed Virtual Simulations: Defining Requirements in Terms of Temporal Consistency

    DTIC Science & Technology

    2009-12-01

events. Work associated with aperiodic tasks has the same statistical behavior and the same timing requirements. The timing deadlines are soft. • Sporadic...answers, but it is possible to calculate how precise the estimates are. Simulation-based performance analysis of a model includes a statistical ...to evaluate all possible states in a timely manner. This is the principal reason for resorting to simulation and statistical analysis to evaluate

  18. Elucidating light-induced charge accumulation in an artificial analogue of methane monooxygenase enzymes using time-resolved X-ray absorption spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moonshiram, Dooshaye; Picon, Antonio; Vazquez-Mayagoitia, Alvaro

Here, we report the use of time-resolved X-ray absorption spectroscopy on the ns–μs time scale to track the light-induced two-electron transfer processes in a multi-component photocatalytic system consisting of [Ru(bpy)3]2+ / a diiron(III,III) model / triethylamine. EXAFS analysis with DFT calculations confirms the structural configurations of the diiron(III,III) and reduced diiron(II,II) states.

  19. Cortical Spiking Network Interfaced with Virtual Musculoskeletal Arm and Robotic Arm

    PubMed Central

    Dura-Bernal, Salvador; Zhou, Xianlian; Neymotin, Samuel A.; Przekwas, Andrzej; Francis, Joseph T.; Lytton, William W.

    2015-01-01

Embedding computational models in the physical world is a critical step towards constraining their behavior and building practical applications. Here we aim to drive a realistic musculoskeletal arm model using a biomimetic cortical spiking model, and make a robot arm reproduce the same trajectories in real time. Our cortical model consisted of a 3-layered cortex, composed of several hundred spiking model-neurons, which display physiologically realistic dynamics. We interconnected the cortical model to a two-joint musculoskeletal model of a human arm, with realistic anatomical and biomechanical properties. The virtual arm received muscle excitations from the neuronal model, and fed back proprioceptive information, forming a closed-loop system. The cortical model was trained using spike timing-dependent reinforcement learning to drive the virtual arm in a 2D reaching task. Limb position was used to simultaneously control a robot arm using an improved network interface. Virtual arm muscle activations responded to motoneuron firing rates, with virtual arm muscle lengths encoded via population coding in the proprioceptive population. After training, the virtual arm performed reaching movements which were smoother and more realistic than those obtained using a simplistic arm model. This system provided access to both spiking network properties and to arm biophysical properties, including muscle forces. The use of a musculoskeletal virtual arm and the improved control system allowed the robot arm to perform movements which were smoother than those reported in our previous paper using a simplistic arm. This work provides a novel approach consisting of bidirectionally connecting a cortical model to a realistic virtual arm, and using the system output to drive a robotic arm in real time. Our techniques are applicable to the future development of brain neuroprosthetic control systems, and may enable enhanced brain-machine interfaces with the possibility for finer control of limb prosthetics. PMID:26635598

  20. Mass Balance of Multiyear Sea Ice in the Southern Beaufort Sea

    DTIC Science & Technology

    2013-09-30

model of MY ice circulation, which is shown in Figure 1. In this model, we consider the Beaufort Sea to consist of four zones defined by mean drift... Arctic Regional Climate Model Simulation Project; International Arctic Buoy Program; Sea ice Experiment - Dynamic Nature of the Arctic; Cold... Table 2 (datasets compiled to date, fragment): geophysical data type, source, and time period acquired; buoy tracks, IABP 12-hrly position data, 1978-2012; ice...

  1. Biomechanical Modeling and Measurement of Blast Injury and Hearing Protection Mechanisms

    DTIC Science & Technology

    2015-10-01

12 software into Workbench V.15 in CFX/ANSYS; 2) building the geometry of the ear model with ossicular chain and cochlear load in CFX; 3) ...the ear canal to middle ear. The model consists of the ear canal, TM, middle ear ossicles and suspensory ligaments, middle ear cavity, and cochlear ...the TM, ossicles, and ligaments/muscle tendons with the cochlear load applied on the stapes footplate. Fig. 21: time-history plots of ...

  2. A nudging-based data assimilation method: the Back and Forth Nudging (BFN) algorithm

    NASA Astrophysics Data System (ADS)

    Auroux, D.; Blum, J.

    2008-03-01

This paper deals with a new data assimilation algorithm, called Back and Forth Nudging. The standard nudging technique consists in adding to the equations of the model a relaxation term that forces the model toward the observations. The BFN algorithm consists in repeatedly performing forward and backward integrations of the model with relaxation (or nudging) terms, using opposite signs in the direct and inverse integrations, so as to make the backward evolution numerically stable. This algorithm was first tested on the standard Lorenz model with discrete observations (perfect or noisy) and compared with the variational assimilation method. The same type of study was then performed on the viscous Burgers equation, comparing again with the variational method and focusing on the time evolution of the reconstruction error, i.e. the difference between the reference trajectory and the identified one over a time period composed of an assimilation period followed by a prediction period. The possible use of the BFN algorithm as an initialization for the variational method has also been investigated. Finally the algorithm was tested on a layered quasi-geostrophic model with sea-surface height observations. The behaviours of the two algorithms were compared in the presence of perfect or noisy observations, and also for imperfect models. This has allowed us to reach a conclusion concerning the relative performances of the two algorithms.
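
    A minimal sketch of the back-and-forth iteration on the Lorenz model mentioned above: forward integration with a relaxation term pulling the state toward the observations, then backward integration with the opposite nudging sign so that the reverse-time evolution stays stable. The gain K, the fully observed state, and the explicit Euler stepping are illustrative simplifications, not the paper's exact setup.

```python
# Illustrative BFN sketch on Lorenz-63: nudge forward toward the observations,
# then integrate backward with the opposite nudging sign so the reverse-time
# evolution is damped.
import numpy as np

sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
dt, n_steps, K = 0.01, 500, 5.0

def f(x):
    return np.array([sigma * (x[1] - x[0]),
                     x[0] * (rho - x[2]) - x[1],
                     x[0] * x[1] - beta * x[2]])

rng = np.random.default_rng(2)
truth = np.array([1.0, 1.0, 1.0])
obs = []
for _ in range(n_steps):                     # noisy reference trajectory
    truth = truth + dt * f(truth)
    obs.append(truth + 0.1 * rng.standard_normal(3))

x0 = np.array([5.0, -5.0, 20.0])             # deliberately wrong initial state
for _ in range(10):                          # back-and-forth sweeps
    x = x0.copy()
    for k in range(n_steps):                 # forward pass: relax toward obs
        x = x + dt * (f(x) - K * (x - obs[k]))
    for k in reversed(range(n_steps)):       # backward pass: opposite sign
        x = x - dt * (f(x) + K * (x - obs[k]))
    x0 = x                                   # updated initial-condition estimate
print("estimated initial state:", x0)
```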

  3. Non-equilibrium synergistic effects in atmospheric pressure plasmas.

    PubMed

    Guo, Heng; Zhang, Xiao-Ning; Chen, Jian; Li, He-Ping; Ostrikov, Kostya Ken

    2018-03-19

Non-equilibrium is one of the important features of an atmospheric gas discharge plasma. It involves complicated physical-chemical processes and plays a key role in many practical plasma processing applications. In this report, a novel complete non-equilibrium model is developed to reveal the non-equilibrium synergistic effects in atmospheric-pressure low-temperature plasmas (AP-LTPs). It combines a thermal-chemical non-equilibrium fluid model for the quasi-neutral plasma region and a simplified sheath model for the electrode sheath region. The free-burning argon arc is selected as a model system because both the electrical-thermal-chemical equilibrium and non-equilibrium regions are involved simultaneously in this arc plasma system. The modeling results indicate for the first time that it is the strong and synergistic interactions among the mass, momentum and energy transfer processes that determine the self-consistent non-equilibrium characteristics of the AP-LTPs. An energy transfer process related to the non-uniform spatial distribution of the electron-to-heavy-particle temperature ratio has also been discovered for the first time. It has a significant influence on self-consistently predicting the transition region between the "hot" and "cold" equilibrium regions of an AP-LTP system. The modeling results provide instructive guidance for predicting and possibly controlling the non-equilibrium particle and energy transport processes in various AP-LTPs in the future.

  4. Comment on "Detection of emerging sunspot regions in the solar interior".

    PubMed

    Braun, Douglas C

    2012-04-20

    Ilonidis et al. (Reports, 19 August 2011, p. 993) report acoustic travel-time decreases associated with emerging sunspot regions before their appearance on the solar surface. An independent analysis using helioseismic holography does not confirm these travel-time anomalies for the four regions illustrated by Ilonidis et al. This negative finding is consistent with expectations based on current emerging flux models.

  5. An Econometric Model for Estimating IQ Scores and Environmental Influences on the Pattern of IQ Scores Over Time.

    ERIC Educational Resources Information Center

    Kadane, Joseph B.; And Others

    This paper offers a preliminary analysis of the effects of a semi-segregated school system on the IQ's of its students. The basic data consist of IQ scores for fourth, sixth, and eighth grades and associated environmental data obtained from their school records. A statistical model is developed to analyze longitudinal data when both process error…

  6. Reinforcement learning of targeted movement in a spiking neuronal model of motor cortex.

    PubMed

    Chadderdon, George L; Neymotin, Samuel A; Kerr, Cliff C; Lytton, William W

    2012-01-01

    Sensorimotor control has traditionally been considered from a control theory perspective, without relation to neurobiology. In contrast, here we utilized a spiking-neuron model of motor cortex and trained it to perform a simple movement task, which consisted of rotating a single-joint "forearm" to a target. Learning was based on a reinforcement mechanism analogous to that of the dopamine system. This provided a global reward or punishment signal in response to decreasing or increasing distance from hand to target, respectively. Output was partially driven by Poisson motor babbling, creating stochastic movements that could then be shaped by learning. The virtual forearm consisted of a single segment rotated around an elbow joint, controlled by flexor and extensor muscles. The model consisted of 144 excitatory and 64 inhibitory event-based neurons, each with AMPA, NMDA, and GABA synapses. Proprioceptive cell input to this model encoded the 2 muscle lengths. Plasticity was only enabled in feedforward connections between input and output excitatory units, using spike-timing-dependent eligibility traces for synaptic credit or blame assignment. Learning resulted from a global 3-valued signal: reward (+1), no learning (0), or punishment (-1), corresponding to phasic increases, lack of change, or phasic decreases of dopaminergic cell firing, respectively. Successful learning only occurred when both reward and punishment were enabled. In this case, 5 target angles were learned successfully within 180 s of simulation time, with a median error of 8 degrees. Motor babbling allowed exploratory learning, but decreased the stability of the learned behavior, since the hand continued moving after reaching the target. Our model demonstrated that a global reinforcement signal, coupled with eligibility traces for synaptic plasticity, can train a spiking sensorimotor network to perform goal-directed motor behavior.
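
    The learning rule, an eligibility trace converted into a weight change by a global three-valued signal, can be sketched independently of the full spiking network. In the toy below, random spike trains stand in for the model's proprioceptive-input and motor-output units, and the trace decay, learning rate, and reward schedule are illustrative assumptions.

```python
# Sketch of reward-modulated learning with eligibility traces: coincident
# pre/post spiking tags feedforward synapses; a global reward (+1), neutral
# (0), or punishment (-1) signal then converts the trace into a weight change.
import numpy as np

rng = np.random.default_rng(3)
n_in, n_out, T = 20, 8, 1000                  # units and simulation steps
w = 0.5 * np.ones((n_in, n_out))              # feedforward weights
elig = np.zeros_like(w)
tau, lr = 50.0, 0.01                          # trace decay (steps), learning rate

for t in range(T):
    pre = rng.random(n_in) < 0.05             # presynaptic spikes this step (stand-in)
    post = rng.random(n_out) < 0.05           # postsynaptic spikes this step (stand-in)
    elig *= np.exp(-1.0 / tau)                # traces decay over time
    elig += np.outer(pre, post)               # tag synapses with coincident activity
    reward = rng.choice([-1, 0, 1])           # global dopamine-like signal (stand-in)
    w = np.clip(w + lr * reward * elig, 0.0, 1.0)
```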

  7. Evaluation of different time domain peak models using extreme learning machine-based peak detection for EEG signal.

    PubMed

    Adam, Asrul; Ibrahim, Zuwairie; Mokhtar, Norrima; Shapiai, Mohd Ibrahim; Cumming, Paul; Mubin, Marizan

    2016-01-01

    Various peak models have been introduced to detect and analyze peaks in the time domain analysis of electroencephalogram (EEG) signals. In general, peak model in the time domain analysis consists of a set of signal parameters, such as amplitude, width, and slope. Models including those proposed by Dumpala, Acir, Liu, and Dingle are routinely used to detect peaks in EEG signals acquired in clinical studies of epilepsy or eye blink. The optimal peak model is the most reliable peak detection performance in a particular application. A fair measure of performance of different models requires a common and unbiased platform. In this study, we evaluate the performance of the four different peak models using the extreme learning machine (ELM)-based peak detection algorithm. We found that the Dingle model gave the best performance, with 72 % accuracy in the analysis of real EEG data. Statistical analysis conferred that the Dingle model afforded significantly better mean testing accuracy than did the Acir and Liu models, which were in the range 37-52 %. Meanwhile, the Dingle model has no significant difference compared to Dumpala model.

  8. Aging of clean foams

    NASA Astrophysics Data System (ADS)

    Weon, Byung Mook; Stewart, Peter S.

    2014-11-01

Aging is an inevitable process in living systems. Here we show how clean foams age with time through sequential coalescence events: in particular, foam aging resembles biological aging. We measure population dynamics of bubbles in clean foams through numerical simulations with a bubble network model. We demonstrate that death rates of individual bubbles increase exponentially with time, independent of initial conditions, which is consistent with the Gompertz mortality law usually found in biological aging. This consistency suggests that clean foams, as far-from-equilibrium dissipative systems, are useful for exploring biological aging. This work (NRF-2013R1A22A04008115) was supported by the Mid-career Researcher Program through an NRF grant funded by the MEST.
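
    The Gompertz-law claim is easy to test numerically: if the death rate grows as mu(t) = A*exp(B*t), the log of the empirical hazard should be linear in time. The sketch below draws synthetic lifetimes from a Gompertz distribution in place of the bubble-network simulation output; the values of A and B and the binning are arbitrary assumptions.

```python
# Illustrative check of the Gompertz mortality law: fit log-hazard vs time.
import numpy as np

rng = np.random.default_rng(4)
A, B = 0.01, 0.08
u = rng.random(20000)
t_death = np.log(1.0 - (B / A) * np.log(u)) / B   # inverse Gompertz survival

bins = np.linspace(0.0, t_death.max(), 40)
alive = np.array([(t_death >= lo).sum() for lo in bins[:-1]])
died = np.array([((t_death >= lo) & (t_death < hi)).sum()
                 for lo, hi in zip(bins[:-1], bins[1:])])
mask = alive > 100                                # keep well-populated bins
mu = died[mask] / (alive[mask] * np.diff(bins)[mask])   # empirical hazard
slope, intercept = np.polyfit(bins[:-1][mask], np.log(mu), 1)
print(f"fitted B ~ {slope:.3f} (true {B}), A ~ {np.exp(intercept):.4f}")
```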

  9. Determining transport coefficients for a microscopic simulation of a hadron gas

    NASA Astrophysics Data System (ADS)

    Pratt, Scott; Baez, Alexander; Kim, Jane

    2017-02-01

Quark-gluon plasmas produced in relativistic heavy-ion collisions quickly expand and cool, entering a phase consisting of multiple interacting hadronic resonances just below the QCD deconfinement temperature, T ≈ 155 MeV. Numerical microscopic simulations have emerged as the principal method for modeling the behavior of the hadronic stage of heavy-ion collisions, but the transport properties that characterize these simulations are not well understood. Methods are presented here for extracting the shear viscosity and two transport parameters that emerge in Israel-Stewart hydrodynamics. The analysis is based on studying how the stress-energy tensor responds to velocity gradients. Results are consistent with Kubo relations if viscous relaxation times are twice the collision time.

  10. Multi-time-scale heat transfer modeling of turbid tissues exposed to short-pulsed irradiations.

    PubMed

    Kim, Kyunghan; Guo, Zhixiong

    2007-05-01

A combined hyperbolic radiation and conduction heat transfer model is developed to simulate multi-time-scale heat transfer in turbid tissues exposed to short-pulsed irradiations. The initial temperature response of a tissue to an ultrashort pulse irradiation is analyzed by the volume-average method in combination with the transient discrete ordinates method for modeling the ultrafast radiation heat transfer. This response is found to reach pseudo steady state within 1 ns for the considered tissues. The single-pulse result is then utilized to obtain the temperature response to pulse-train irradiation at the microsecond/millisecond time scales. After that, the temperature field is predicted by the hyperbolic heat conduction model, which is solved by MacCormack's scheme with error-term correction. Finally, the hyperbolic conduction is compared with the traditional parabolic heat diffusion model. It is found that the maximum local temperatures are larger in the hyperbolic prediction than in the parabolic prediction. In the modeled dermis tissue, a 7% non-dimensional temperature increase is found. After about 10 thermal relaxation times, thermal waves fade away and the predictions of the hyperbolic and parabolic models become consistent.
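
    A minimal sketch of the hyperbolic-conduction step, assuming the Maxwell-Cattaneo form written as a first-order system and advanced with the MacCormack predictor-corrector named above; the material parameters and the crude boundary handling are illustrative stand-ins, not the paper's tissue values.

```python
# Illustrative MacCormack predictor-corrector for hyperbolic (Maxwell-Cattaneo)
# conduction as a first-order system:
#   rho_c * dT/dt = -dq/dx,   tau * dq/dt + q = -k * dT/dx.
import numpy as np

nx, dx, dt, nt = 200, 1e-5, 5e-9, 20000
rho_c, k, tau = 4e6, 0.5, 1e-8               # heat capacity, conductivity, relaxation

T = np.full(nx, 37.0); T[:10] = 60.0         # heated surface layer (illustrative)
q = np.zeros(nx)

fwd = lambda a: np.append(a[1:] - a[:-1], 0.0)     # forward difference
bwd = lambda a: np.insert(a[1:] - a[:-1], 0, 0.0)  # backward difference

def rhs(T, q, diff):
    return -diff(q) / (dx * rho_c), -(q + k * diff(T) / dx) / tau

for _ in range(nt):
    dT1, dq1 = rhs(T, q, fwd)                # predictor step (forward differences)
    Tp, qp = T + dt * dT1, q + dt * dq1
    dT2, dq2 = rhs(Tp, qp, bwd)              # corrector step (backward differences)
    T = 0.5 * (T + Tp + dt * dT2)
    q = 0.5 * (q + qp + dt * dq2)
print("peak temperature after the run:", T.max())
```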

  11. A Spiking Neural Network Model of the Lateral Geniculate Nucleus on the SpiNNaker Machine

    PubMed Central

    Sen-Bhattacharya, Basabdatta; Serrano-Gotarredona, Teresa; Balassa, Lorinc; Bhattacharya, Akash; Stokes, Alan B.; Rowley, Andrew; Sugiarto, Indar; Furber, Steve

    2017-01-01

We present a spiking neural network model of the thalamic Lateral Geniculate Nucleus (LGN) developed on SpiNNaker, which is a state-of-the-art digital neuromorphic hardware built with very-low-power ARM processors. The parallel, event-based data processing in SpiNNaker makes it viable for building massively parallel neuro-computational frameworks. The LGN model has 140 neurons representing a “basic building block” for larger modular architectures. The motivation of this work is to simulate biologically plausible LGN dynamics on SpiNNaker. Synaptic layout of the model is consistent with biology. The model response is validated with existing literature reporting entrainment in steady state visually evoked potentials (SSVEP)—brain oscillations corresponding to periodic visual stimuli recorded via electroencephalography (EEG). Periodic stimulus to the model is provided by: a synthetic spike-train with inter-spike-intervals in the range 10–50 Hz at a resolution of 1 Hz; and spike-train output from a state-of-the-art electronic retina subjected to a light emitting diode flashing at 10, 20, and 40 Hz, simulating real-world visual stimulus to the model. The resolution of simulation is 0.1 ms to ensure solution accuracy for the underlying differential equations defining Izhikevich's neuron model. Under this constraint, 1 s of model simulation time is executed in 10 s real time on SpiNNaker; this is because simulations on SpiNNaker work in real time for time-steps dt ⩾ 1 ms. The model output shows entrainment with both sets of input and contains harmonic components of the fundamental frequency. However, suppressing the feed-forward inhibition in the circuit produces subharmonics within the gamma band (>30 Hz) implying a reduced information transmission fidelity. These model predictions agree with recent lumped-parameter computational model-based predictions, using conventional computers. Scalability of the framework is demonstrated by a multi-node architecture consisting of three “nodes,” where each node is the “basic building block” LGN model. This 420 neuron model is tested with synthetic periodic stimulus at 10 Hz to all the nodes. The model output is the average of the outputs from all nodes, and conforms to the above-mentioned predictions of each node. Power consumption for model simulation on SpiNNaker is ≪1 W. PMID:28848380
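
    The per-step neuron update behind the 0.1 ms accuracy constraint is compact enough to show directly. The sketch below implements the standard Izhikevich equations with textbook regular-spiking parameters and a constant drive; the LGN model's actual tuned parameters and synaptic input are not reproduced here.

```python
# Minimal Izhikevich point-neuron update at 0.1 ms resolution (illustrative
# regular-spiking parameters a, b, c, d and constant drive I).
import numpy as np

a, b, c, d = 0.02, 0.2, -65.0, 8.0
dt, steps, I = 0.1, 10000, 10.0               # 0.1 ms steps, 1 s of model time
v, u = -65.0, b * -65.0                       # membrane potential, recovery variable
spike_times = []
for k in range(steps):
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                             # spike: reset membrane and recovery
        spike_times.append(k * dt)
        v, u = c, u + d
print(f"{len(spike_times)} spikes in 1 s of simulated time")
```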

  12. A Spiking Neural Network Model of the Lateral Geniculate Nucleus on the SpiNNaker Machine.

    PubMed

    Sen-Bhattacharya, Basabdatta; Serrano-Gotarredona, Teresa; Balassa, Lorinc; Bhattacharya, Akash; Stokes, Alan B; Rowley, Andrew; Sugiarto, Indar; Furber, Steve

    2017-01-01

We present a spiking neural network model of the thalamic Lateral Geniculate Nucleus (LGN) developed on SpiNNaker, which is a state-of-the-art digital neuromorphic hardware built with very-low-power ARM processors. The parallel, event-based data processing in SpiNNaker makes it viable for building massively parallel neuro-computational frameworks. The LGN model has 140 neurons representing a "basic building block" for larger modular architectures. The motivation of this work is to simulate biologically plausible LGN dynamics on SpiNNaker. Synaptic layout of the model is consistent with biology. The model response is validated with existing literature reporting entrainment in steady state visually evoked potentials (SSVEP)-brain oscillations corresponding to periodic visual stimuli recorded via electroencephalography (EEG). Periodic stimulus to the model is provided by: a synthetic spike-train with inter-spike-intervals in the range 10-50 Hz at a resolution of 1 Hz; and spike-train output from a state-of-the-art electronic retina subjected to a light emitting diode flashing at 10, 20, and 40 Hz, simulating real-world visual stimulus to the model. The resolution of simulation is 0.1 ms to ensure solution accuracy for the underlying differential equations defining Izhikevich's neuron model. Under this constraint, 1 s of model simulation time is executed in 10 s real time on SpiNNaker; this is because simulations on SpiNNaker work in real time for time-steps dt ⩾ 1 ms. The model output shows entrainment with both sets of input and contains harmonic components of the fundamental frequency. However, suppressing the feed-forward inhibition in the circuit produces subharmonics within the gamma band (>30 Hz) implying a reduced information transmission fidelity. These model predictions agree with recent lumped-parameter computational model-based predictions, using conventional computers. Scalability of the framework is demonstrated by a multi-node architecture consisting of three "nodes," where each node is the "basic building block" LGN model. This 420 neuron model is tested with synthetic periodic stimulus at 10 Hz to all the nodes. The model output is the average of the outputs from all nodes, and conforms to the above-mentioned predictions of each node. Power consumption for model simulation on SpiNNaker is ≪1 W.

  13. Mira variables: An informal review

    NASA Technical Reports Server (NTRS)

    Wing, R. F.

    1980-01-01

The structure of the Mira variables is discussed with particular emphasis on the extent of their observable atmospheres, the various methods for measuring the sizes of these atmospheres, and the manner in which the size changes through the cycle. The results obtained by direct, photometric and spectroscopic methods are compared, and the problems of interpretation are addressed. Also, a simple model for the atmospheric structure and motions of Miras based on recent observations of the doubling of infrared molecular lines is described. This model, consisting of two atmospheric layers plus a circumstellar shell, provides a physically plausible picture of the atmosphere which is consistent with the photometrically measured magnitude and temperature variations as well as the spectroscopic data.

  14. Sub-optimal control of unsteady boundary layer separation and optimal control of Saltzman-Lorenz model

    NASA Astrophysics Data System (ADS)

    Sardesai, Chetan R.

The primary objective of this research is to explore the application of optimal control theory in nonlinear, unsteady, fluid dynamical settings. Two problems are considered: (1) control of unsteady boundary-layer separation, and (2) control of the Saltzman-Lorenz model. The unsteady boundary-layer equations are nonlinear partial differential equations that govern the eruptive events that arise when an adverse pressure gradient acts on a boundary layer at high Reynolds numbers. The Saltzman-Lorenz model consists of a coupled set of three nonlinear ordinary differential equations that govern the time-dependent coefficients in truncated Fourier expansions of Rayleigh-Bénard convection and exhibit deterministic chaos. Variational methods are used to derive the nonlinear optimal control formulations based on cost functionals that define the control objective through a performance measure and a penalty function that penalizes the cost of control. The resulting formulation consists of the nonlinear state equations, which must be integrated forward in time, and the nonlinear control (adjoint) equations, which are integrated backward in time. Such coupled forward-backward time integrations are computationally demanding; therefore, the full optimal control problem for the Saltzman-Lorenz model is carried out, while the more complex unsteady boundary-layer case is solved using a sub-optimal approach. The latter is a quasi-steady technique in which the unsteady boundary-layer equations are integrated forward in time, and the steady control equation is solved at each time step. Both sub-optimal control of the unsteady boundary-layer equations and optimal control of the Saltzman-Lorenz model are found to be successful in meeting the control objectives for each problem. In the case of boundary-layer separation, the control results indicate that it is necessary to eliminate the recirculation region that is a precursor to the unsteady boundary-layer eruptions. In the case of the Saltzman-Lorenz model, it is possible to control the system about either of the two unstable equilibrium points representing clockwise and counterclockwise rotation of the convection rolls in a parameter regime for which the uncontrolled solution would exhibit deterministic chaos.
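
    The forward-backward structure described above can be summarized in the generic variational optimal-control form; the notation below is illustrative rather than the dissertation's exact symbols.

```latex
% Generic variational optimal-control formulation (illustrative notation):
% performance measure P, control penalty weight ell, state x, control u.
\begin{align*}
  \min_{u}\ J &= \int_0^{T} \Big[ P\big(x(t)\big)
      + \tfrac{\ell}{2}\,\|u(t)\|^2 \Big]\, dt
      && \text{(cost functional)} \\
  \dot{x} &= f(x,u), \qquad x(0) = x_0
      && \text{(state equations, forward in time)} \\
  \dot{\lambda} &= -\Big(\tfrac{\partial f}{\partial x}\Big)^{\top}\lambda
      - \tfrac{\partial P}{\partial x}, \qquad \lambda(T) = 0
      && \text{(adjoint equations, backward in time)} \\
  \frac{\delta J}{\delta u} &= \ell\, u
      + \Big(\tfrac{\partial f}{\partial u}\Big)^{\top}\lambda
      && \text{(gradient used to update } u\text{)}
\end{align*}
```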

  15. WRF model for precipitation simulation and its application in real-time flood forecasting in the Jinshajiang River Basin, China

    NASA Astrophysics Data System (ADS)

    Zhou, Jianzhong; Zhang, Hairong; Zhang, Jianyun; Zeng, Xiaofan; Ye, Lei; Liu, Yi; Tayyab, Muhammad; Chen, Yufan

    2017-07-01

An accurate flood forecast with a long lead time can be of great value for flood prevention and utilization. This paper develops a one-way coupled hydro-meteorological modeling system, consisting of the mesoscale numerical Weather Research and Forecasting (WRF) model and the Chinese Xinanjiang hydrological model, to extend the flood forecasting lead time in the Jinshajiang River Basin, which is the largest hydropower base in China. Focusing on four typical precipitation events, the combinations and mode structures of WRF parameterization schemes suitable for simulating precipitation in the Jinshajiang River Basin were first investigated. Then, the Xinanjiang model was established after calibration and validation to complete the hydro-meteorological system. It was found that the selection of the cloud microphysics scheme and the boundary layer scheme has a great impact on precipitation simulation, that only a proper combination of the two schemes yields accurate simulations in the Jinshajiang River Basin, and that the hydro-meteorological system can provide instructive flood forecasts with long lead times. On the whole, the one-way coupled hydro-meteorological model can be used for precipitation simulation and flood prediction in the Jinshajiang River Basin because of its relatively high precision and long lead time.

  16. Aeroelastic Modeling of X-56A Stiff-Wing Configuration Flight Test Data

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Boucher, Matthew J.

    2017-01-01

    Aeroelastic stability and control derivatives for the X-56A Multi-Utility Technology Testbed (MUTT), in the stiff-wing configuration, were estimated from flight test data using the output-error method. Practical aspects of the analysis are discussed. The orthogonal phase-optimized multisine inputs provided excellent data information for aeroelastic modeling. Consistent parameter estimates were determined using output error in both the frequency and time domains. The frequency domain analysis converged faster and was less sensitive to starting values for the model parameters, which was useful for determining the aeroelastic model structure and obtaining starting values for the time domain analysis. Including a modal description of the structure from a finite element model reduced the complexity of the estimation problem and improved the modeling results. Effects of reducing the model order on the short period stability and control derivatives were investigated.

  17. Decoupled ARX and RBF Neural Network Modeling Using PCA and GA Optimization for Nonlinear Distributed Parameter Systems.

    PubMed

    Zhang, Ridong; Tao, Jili; Lu, Renquan; Jin, Qibing

    2018-02-01

Modeling of distributed parameter systems is difficult because of their nonlinearity and infinite-dimensional characteristics. Based on principal component analysis (PCA), a hybrid modeling strategy that consists of a decoupled linear autoregressive exogenous (ARX) model and a nonlinear radial basis function (RBF) neural network model is proposed. The spatial-temporal output is first divided into a few dominant spatial basis functions and finite-dimensional temporal series by PCA. Then, a decoupled ARX model is designed to model the linear dynamics of the dominant modes of the time series. The nonlinear residual part is subsequently parameterized by RBFs, where a genetic algorithm is utilized to optimize their hidden layer structure and the parameters. Finally, the nonlinear spatial-temporal dynamic system is obtained after the time/space reconstruction. Simulation results of a catalytic rod and a heat conduction equation demonstrate the effectiveness of the proposed strategy compared to several other methods.
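
    The three-stage pipeline (PCA reduction, linear ARX on the temporal coefficients, nonlinear fit of the residual) can be sketched with standard tools. Below, synthetic snapshots replace the catalytic-rod data, and kernel ridge regression with an RBF kernel stands in for the paper's GA-optimized RBF network; the lags, mode counts, and hyperparameters are illustrative assumptions.

```python
# Sketch of the hybrid strategy: PCA reduces the spatio-temporal field to a
# few temporal coefficients, a linear ARX model captures their dominant
# dynamics, and an RBF regressor fits the nonlinear residual.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(5)
t = np.arange(600)
Y = (np.outer(np.sin(0.05 * t), rng.standard_normal(50))   # fake space-time snapshots
     + 0.1 * rng.standard_normal((600, 50)))
U = np.sin(0.01 * t).reshape(-1, 1)                        # exogenous input

pca = PCA(n_components=3).fit(Y)
Z = pca.transform(Y)                                       # dominant temporal series

lag = 2                                                    # ARX regressors: past outputs + past input
X = np.hstack([Z[lag - 1:-1], Z[lag - 2:-2], U[lag - 1:-1]])
target = Z[lag:]
arx = LinearRegression().fit(X, target)                    # linear part
resid = target - arx.predict(X)
rbf = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.1).fit(X, resid)  # nonlinear part

Z_hat = arx.predict(X) + rbf.predict(X)                    # linear + nonlinear prediction
Y_hat = pca.inverse_transform(Z_hat)                       # back to physical space
print("reconstruction RMSE:", np.sqrt(np.mean((Y_hat - Y[lag:]) ** 2)))
```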

  18. Consistency restrictions on maximal electric-field strength in quantum field theory.

    PubMed

    Gavrilov, S P; Gitman, D M

    2008-09-26

Quantum field theory with an external background can be considered a consistent model only if the backreaction is relatively small with respect to the background. To find the corresponding consistency restrictions on an external electric field and its duration in QED and QCD, we analyze the mean energy density of quantized fields for an arbitrary constant electric field E acting during a large but finite time T. Using the corresponding asymptotics with respect to the dimensionless parameter eET^2, one can see that the leading contributions to the energy are due to the creation of particles by the electric field. Assuming that these contributions are small in comparison with the energy density of the electric background, we establish the above-mentioned restrictions, which in effect bound from above the time scales over which an electric field is depleted by backreaction.

  19. A Probabilistic, Dynamic, and Attribute-wise Model of Intertemporal Choice

    PubMed Central

    Dai, Junyi; Busemeyer, Jerome R.

    2014-01-01

    Most theoretical and empirical research on intertemporal choice assumes a deterministic and static perspective, leading to the widely adopted delay discounting models. As a form of preferential choice, however, intertemporal choice may be generated by a stochastic process that requires some deliberation time to reach a decision. We conducted three experiments to investigate how choice and decision time varied as a function of manipulations designed to examine the delay duration effect, the common difference effect, and the magnitude effect in intertemporal choice. The results, especially those associated with the delay duration effect, challenged the traditional deterministic and static view and called for alternative approaches. Consequently, various static or dynamic stochastic choice models were explored and fit to the choice data, including alternative-wise models derived from the traditional exponential or hyperbolic discount function and attribute-wise models built upon comparisons of direct or relative differences in money and delay. Furthermore, for the first time, dynamic diffusion models, such as those based on decision field theory, were also fit to the choice and response time data simultaneously. The results revealed that the attribute-wise diffusion model with direct differences, power transformations of objective value and time, and varied diffusion parameter performed the best and could account for all three intertemporal effects. In addition, the empirical relationship between choice proportions and response times was consistent with the prediction of diffusion models and thus favored a stochastic choice process for intertemporal choice that requires some deliberation time to make a decision. PMID:24635188
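
    The winning model class, an attribute-wise diffusion process, is straightforward to simulate: the drift is a weighted difference of power-transformed money and delay attributes, and noisy evidence accumulates to a boundary, producing both a choice and a response time. All parameter values in the sketch below are illustrative, not the fitted ones from the paper.

```python
# Illustrative attribute-wise diffusion model of intertemporal choice:
# drift from direct differences of power-transformed money and delay.
import numpy as np

rng = np.random.default_rng(6)

def choose(m_ss, d_ss, m_ll, d_ll, w_m=1.0, w_d=0.5,
           alpha=0.8, beta=0.9, theta=2.0, dt=0.01, sigma=1.0):
    """Return (choice, response_time); choice 1 = larger-later option."""
    drift = w_m * (m_ll ** alpha - m_ss ** alpha) - w_d * (d_ll ** beta - d_ss ** beta)
    x, t = 0.0, 0.0
    while abs(x) < theta:                     # accumulate noisy evidence to a boundary
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x > 0 else 0), t

trials = [choose(m_ss=20, d_ss=0, m_ll=30, d_ll=5) for _ in range(2000)]
p_ll = np.mean([c for c, _ in trials])
rt = np.mean([t for _, t in trials])
print(f"P(larger-later) = {p_ll:.2f}, mean RT = {rt:.2f} s")
```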

  20. The Rational Adolescent: Strategic Information Processing during Decision Making Revealed by Eye Tracking.

    PubMed

    Kwak, Youngbin; Payne, John W; Cohen, Andrew L; Huettel, Scott A

    2015-01-01

    Adolescence is often viewed as a time of irrational, risky decision-making - despite adolescents' competence in other cognitive domains. In this study, we examined the strategies used by adolescents (N=30) and young adults (N=47) to resolve complex, multi-outcome economic gambles. Compared to adults, adolescents were more likely to make conservative, loss-minimizing choices consistent with economic models. Eye-tracking data showed that prior to decisions, adolescents acquired more information in a more thorough manner; that is, they engaged in a more analytic processing strategy indicative of trade-offs between decision variables. In contrast, young adults' decisions were more consistent with heuristics that simplified the decision problem, at the expense of analytic precision. Collectively, these results demonstrate a counter-intuitive developmental transition in economic decision making: adolescents' decisions are more consistent with rational-choice models, while young adults more readily engage task-appropriate heuristics.

  1. The Rational Adolescent: Strategic Information Processing during Decision Making Revealed by Eye Tracking

    PubMed Central

    Kwak, Youngbin; Payne, John W.; Cohen, Andrew L.; Huettel, Scott A.

    2015-01-01

    Adolescence is often viewed as a time of irrational, risky decision-making – despite adolescents' competence in other cognitive domains. In this study, we examined the strategies used by adolescents (N=30) and young adults (N=47) to resolve complex, multi-outcome economic gambles. Compared to adults, adolescents were more likely to make conservative, loss-minimizing choices consistent with economic models. Eye-tracking data showed that prior to decisions, adolescents acquired more information in a more thorough manner; that is, they engaged in a more analytic processing strategy indicative of trade-offs between decision variables. In contrast, young adults' decisions were more consistent with heuristics that simplified the decision problem, at the expense of analytic precision. Collectively, these results demonstrate a counter-intuitive developmental transition in economic decision making: adolescents' decisions are more consistent with rational-choice models, while young adults more readily engage task-appropriate heuristics. PMID:26388664

  2. Dynamics of transit times and StorAge Selection functions in four forested catchments from stable isotope data

    NASA Astrophysics Data System (ADS)

    Rodriguez, Nicolas B.; McGuire, Kevin J.; Klaus, Julian

    2017-04-01

Transit time distributions, residence time distributions, and StorAge Selection functions are fundamental integrated descriptors of water storage, mixing, and release in catchments. In this contribution, we determined these time-variant functions in four neighboring forested catchments in the H.J. Andrews Experimental Forest, Oregon, USA by employing a two-year time series of 18O in precipitation and discharge. Previous studies in these catchments assumed stationary, exponentially distributed transit times and complete mixing/random sampling to explore the influence of various catchment properties on the mean transit time. Here we relaxed such assumptions to relate transit time dynamics and the variability of StorAge Selection functions to catchment characteristics, catchment storage, and meteorological forcing seasonality. Conceptual models of the catchments, consisting of two reservoirs combined in series-parallel, were calibrated to discharge and stable isotope tracer data. We assumed randomly sampled/fully mixed conditions for each reservoir, which resulted in an incompletely mixed system overall. Based on the results we solved the Master Equation, which describes the dynamics of water ages in storage and in catchment outflows. Consistent between all catchments, we found that transit times were generally shorter during wet periods, indicating the contribution of shallow storage (soil, saprolite) to discharge. During extended dry periods, transit times increased significantly, indicating the contribution of deeper storage (bedrock) to discharge. Our work indicated that the strong seasonality of precipitation impacted transit times by leading to a dynamic selection of stored water ages, whereas catchment size was not a control on transit times. In general, this work showed the usefulness of using time-variant transit times with conceptual models and confirmed the existence of the catchment age mixing behaviors emerging from other similar studies.

  3. Quantitative Adverse Outcome Pathways and Their Application to Predictive Toxicology

    EPA Science Inventory

    A quantitative adverse outcome pathway (qAOP) consists of one or more biologically based, computational models describing key event relationships linking a molecular initiating event (MIE) to an adverse outcome. A qAOP provides quantitative, dose–response, and time-course p...

  4. Understanding Recent Magnetar Observations from the Magnetospheric Point of View

    NASA Astrophysics Data System (ADS)

    Tong, H.

The wind braking model and its applications to magnetars are discussed. The decreasing torque of magnetars during outbursts, the anti-glitch, and anti-correlations between radiation and timing are understandable in the wind braking model. Recent timing observations of magnetars are also consistent with the previous modeling. A magnetism-powered wind nebula and a braking index smaller than three are the two predictions. Besides isolated magnetars, there may also be accreting magnetars in binary systems and magnetars accreting from fallback disks. Observationally, ultra-luminous X-ray pulsars may be accreting magnetars, while super-slow magnetars may be magnetars with fallback disks in the past. Much work is still needed on both isolated and accreting magnetars.

  5. Subcritical crack growth in fibrous materials

    NASA Astrophysics Data System (ADS)

    Santucci, S.; Cortet, P.-P.; Deschanel, S.; Vanel, L.; Ciliberto, S.

    2006-05-01

We present experiments on the slow growth of a single crack in a fax paper sheet subjected to a constant force F. We find that statistically averaged crack growth curves can be described by only two parameters: the mean rupture time τ and a characteristic growth length ζ. We propose a model based on a thermally activated rupture process that takes into account the microstructure of cellulose fibers. The model is able to reproduce the shape of the growth curve, the dependence of ζ on F, as well as the effect of temperature on the rupture time τ. We find that the length scale at which rupture occurs in this model is consistently close to the diameter of cellulose microfibrils.
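
    The thermally activated rupture picture invoked here is conventionally summarized by an Arrhenius-type growth law; the generic form below uses illustrative symbols rather than the paper's exact model.

```latex
% Arrhenius-type thermally activated crack-growth law (illustrative symbols):
% rupture proceeds by thermal fluctuations over a stress-dependent barrier,
\[
  v(F, T) \;\propto\; \exp\!\left( -\,\frac{U_0 - \lambda\,\sigma(F)}{k_B T} \right),
\]
% so the mean rupture time tau falls exponentially as the applied force F
% (through the stress sigma) or the temperature T increases.
```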

  6. Intelligent system of coordination and control for manufacturing

    NASA Astrophysics Data System (ADS)

    Ciortea, E. M.

    2016-08-01

This paper aims to shape an intelligent monitoring and control system that optimizes the material and information flows of a company. The paper presents a model for intelligent real-time tracking and control. The production system proposed for simulation analysis provides the ability to track and control the process in real time. Using simulation models, the following can be understood: the influence of changes in system structure, the influence of commands on the general condition of the manufacturing process, and the influence of process conditions on the behavior of some system parameters. The practical contribution consists of tracking and real-time control of the technological process. It is based on modular systems analyzed using mathematical models, graphic-analytical sizing, configuration, optimization, and simulation.

  7. Real-Time Agent-Based Modeling Simulation with in-situ Visualization of Complex Biological Systems: A Case Study on Vocal Fold Inflammation and Healing.

    PubMed

    Seekhao, Nuttiiya; Shung, Caroline; JaJa, Joseph; Mongeau, Luc; Li-Jessen, Nicole Y K

    2016-05-01

    We present an efficient and scalable scheme for implementing agent-based modeling (ABM) simulation with in situ visualization of large complex systems on heterogeneous computing platforms. The scheme is designed to make optimal use of the resources available on a heterogeneous platform consisting of a multicore CPU and a GPU, resulting in minimal to no resource idle time. Furthermore, the scheme was implemented under a client-server paradigm that enables remote users to visualize and analyze simulation data as it is being generated at each time step of the model. Performance of a simulation case study of vocal fold inflammation and wound healing with 3.8 million agents shows 35× and 7× speedups in execution time over single-core and multi-core CPU implementations, respectively. Each iteration of the model took less than 200 ms to simulate, visualize and send the results to the client. This enables users to monitor the simulation in real time and modify its course as needed.
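
    The per-time-step streaming pattern described above can be illustrated with a minimal single-machine sketch, with plain Python threads and a bounded queue standing in for the GPU pipeline and the remote client (all names and numbers here are hypothetical, not the authors' code):

      import threading, queue, random

      FRAMES = queue.Queue(maxsize=4)   # bounded buffer: simulation never runs far ahead of the viewer

      def simulate(num_agents=1000, steps=20):
          # toy ABM: each agent carries a scalar state nudged toward the population mean
          state = [random.random() for _ in range(num_agents)]
          for step in range(steps):
              mean = sum(state) / len(state)
              state = [s + 0.1 * (mean - s) + random.uniform(-0.01, 0.01) for s in state]
              FRAMES.put((step, mean))  # hand the frame over as soon as it exists
          FRAMES.put(None)              # sentinel: simulation finished

      def serve():
          # stands in for the server pushing each completed frame to remote visualization clients
          while (frame := FRAMES.get()) is not None:
              step, mean = frame
              print(f"step {step}: mean agent state = {mean:.4f}")

      producer = threading.Thread(target=simulate)
      consumer = threading.Thread(target=serve)
      producer.start(); consumer.start()
      producer.join(); consumer.join()

    The bounded queue is the essential design choice: it decouples simulation from visualization while applying back-pressure, which is how per-iteration latencies such as the 200 ms figure above stay meaningful.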

  8. Proper Generalized Decomposition (PGD) for the numerical simulation of polycrystalline aggregates under cyclic loading

    NASA Astrophysics Data System (ADS)

    Nasri, Mohamed Aziz; Robert, Camille; Ammar, Amine; El Arem, Saber; Morel, Franck

    2018-02-01

    The numerical modelling of the behaviour of materials at the microstructural scale has been greatly developed over the last two decades. Unfortunately, conventional solution methods cannot simulate polycrystalline aggregates beyond a few tens of loading cycles, and they do not remain quantitative because of the plastic behaviour of the material. This work presents the development of a numerical solver for finite element modelling of polycrystalline aggregates subjected to cyclic mechanical loading. The method is based on two concepts. The first consists in maintaining a constant stiffness matrix. The second uses a time/space model reduction method. In order to analyse the applicability and the performance of a space-time separated representation, the simulations are carried out on a three-dimensional polycrystalline aggregate under cyclic loading. Different numbers of elements per grain and two time increments per cycle are investigated. The results show a significant CPU time saving while maintaining good precision. Moreover, as the number of elements and the number of time increments per cycle increase, the model reduction method becomes increasingly faster than the standard solver.
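
    The space-time separated representation at the heart of such a PGD solver can be sketched as follows (generic form; the number of modes m grows greedily until a residual tolerance is met):

      u(x,t) \;\approx\; \sum_{i=1}^{m} X_i(x)\, T_i(t)

    A transient three-dimensional problem is thereby replaced by a sequence of small problems in space (for the X_i) and in time (for the T_i), which is what makes simulating large numbers of loading cycles tractable.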

  9. Power-law expansion of the Universe from the bosonic Lorentzian type IIB matrix model

    NASA Astrophysics Data System (ADS)

    Ito, Yuta; Nishimura, Jun; Tsuchiya, Asato

    2015-11-01

    Recent studies on the Lorentzian version of the type IIB matrix model show that a (3+1)-dimensional expanding universe emerges dynamically from the (9+1)-dimensional space-time predicted by superstring theory. Here we study a bosonic matrix model obtained by omitting the fermionic matrices. With the adopted simplification and the usage of a large-scale parallel computer, we are able to perform Monte Carlo calculations with matrix size up to N = 512, which is twenty times larger than that used previously for the studies of the original model. When the matrix size is larger than some critical value N_c ≃ 110, we find that a (3+1)-dimensional expanding universe emerges dynamically with a clear large-N scaling property. Furthermore, the observed increase of the spatial extent with time t at sufficiently late times is consistent with a power-law behavior t^{1/2}, which is reminiscent of the expanding behavior of the Friedmann-Robertson-Walker universe in the radiation-dominated era. We discuss possible implications of this result for the original supersymmetric model including fermionic matrices.

  10. Non-linear structure formation in the `Running FLRW' cosmological model

    NASA Astrophysics Data System (ADS)

    Bibiano, Antonio; Croton, Darren J.

    2016-07-01

    We present a suite of cosmological N-body simulations describing the `Running Friedmann-Lemaître-Robertson-Walker' (R-FLRW) cosmological model. This model is based on quantum field theory in a curved space-time and extends Lambda cold dark matter (ΛCDM) with a time-evolving vacuum density, Λ(z), and a time-evolving gravitational coupling, G(z). In this paper, we review the model and introduce the analytical treatment needed to adapt a reference N-body code. Our resulting simulations represent the first realization of the full growth history of structure in the R-FLRW cosmology into the non-linear regime, and our normalization choice makes them fully consistent with the latest cosmic microwave background data. The post-processing data products also allow, for the first time, an analysis of the properties of the halo and sub-halo populations. We explore the degeneracies of many statistical observables and discuss the steps needed to break them. Furthermore, we provide a quantitative description of the deviations of R-FLRW from ΛCDM, which could be readily exploited by future cosmological observations to test and further constrain the model.
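
    For orientation, one parametrization used in the running-vacuum literature lets the vacuum density and gravitational coupling run with the Hubble rate H (a sketch only; the precise functions adopted by the authors may differ):

      \Lambda(H) = \Lambda_0 + 3\nu\left(H^2 - H_0^2\right), \qquad
      G(H) = \frac{G_0}{1 + \nu \ln\!\left(H^2/H_0^2\right)}

    with a single small dimensionless parameter \nu measuring the departure from \Lambda CDM (\nu = 0 recovers it exactly).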

  11. Dynamics of human T-cell lymphotropic virus I (HTLV-I) infection of CD4+ T-cells.

    PubMed

    Katri, Patricia; Ruan, Shigui

    2004-11-01

    Stilianakis and Seydel (Bull. Math. Biol., 1999) proposed an ODE model that describes the T-cell dynamics of human T-cell lymphotropic virus I (HTLV-I) infection and the development of adult T-cell leukemia (ATL). Their model consists of four components: uninfected healthy CD4+ T-cells, latently infected CD4+ T-cells, actively infected CD4+ T-cells, and ATL cells. A mathematical analysis that completely determines the global dynamics of this model was carried out by Wang et al. (Math. Biosci., 2002). In this note, we first modify the parameters of the model to distinguish between contact and infectivity rates. Then we introduce a discrete time delay to the model to describe the time between the emission of contagious particles by actively infected CD4+ T-cells and the infection of uninfected cells. Using the results of Culshaw and Ruan (Math. Biosci., 2000) on time delays in the cell-free viral spread of HIV, we study the effect of the time delay on the stability of the endemically infected equilibrium. Numerical simulations are presented to illustrate the results.
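
    Schematically, this class of delay models couples healthy (T), latently infected (L), actively infected (A) and leukemic (M) cell pools, with the delay \tau entering the infection term (a generic sketch of the structure, not the authors' exact system or parameter names):

      \begin{aligned}
      \dot T &= \lambda - \mu_T T - \kappa\, T(t)\, A(t),\\
      \dot L &= \kappa_1\, T(t-\tau)\, A(t-\tau) - (\mu_L + \alpha)\, L,\\
      \dot A &= \alpha L - (\mu_A + \rho)\, A,\\
      \dot M &= \rho A - \mu_M M,
      \end{aligned}

    so that new latent infections at time t arise from contacts that occurred at time t - \tau, and the stability of the endemic equilibrium is studied as a function of \tau.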

  12. Modeling of mid-infrared quantum cascade lasers: The role of temperature and operating field strength on the laser performance

    NASA Astrophysics Data System (ADS)

    Yousefvand, Hossein Reza

    2017-07-01

    In this paper a self-consistent numerical approach to study the temperature- and bias-dependent characteristics of mid-infrared (mid-IR) quantum cascade lasers (QCLs) is presented which integrates a number of quantum mechanical models. The field-dependent laser parameters, including the nonradiative scattering times, the detuning and energy levels, the escape activation energy, the backfilling excitation energy and the dipole moment of the optical transition, are calculated for a wide range of applied electric fields by a self-consistent solution of the Schrödinger-Poisson equations. A detailed analysis of the performance of the obtained structure is carried out within a self-consistent solution of the subband population rate equations coupled with coherent carrier transport equations through sequential resonant tunneling, taking into account the temperature and bias dependency of the relevant parameters. Furthermore, the heat transfer equation is included in order to calculate the carrier temperature inside the active region levels. This leads to a compact predictive model for analyzing the temperature- and electric-field-dependent characteristics of mid-IR QCLs, such as the light-current (L-I), electric field-current (F-I) and core temperature-electric field (T-F) curves. For a typical mid-IR QCL, good agreement was found between the simulated temperature-dependent L-I characteristic and experimental data, which confirms the validity of the model. It is found that the main characteristics of the device, such as output power and turn-on delay time, are degraded by the interplay between temperature and Stark effects.
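
    The subband population rate equations referred to above reduce, in a minimal three-level sketch per cascade period (upper lasing level 3, lower lasing level 2, depopulation level 1; not the full model of this paper), to the generic form:

      \begin{aligned}
      \frac{dn_3}{dt} &= \frac{\eta J}{q} - \frac{n_3}{\tau_3} - g\,S\,(n_3 - n_2),\\
      \frac{dn_2}{dt} &= \frac{n_3}{\tau_{32}} + g\,S\,(n_3 - n_2) - \frac{n_2}{\tau_{21}},\\
      \frac{dS}{dt} &= \Gamma\, g\,S\,(n_3 - n_2) + \beta\,\frac{n_3}{\tau_{sp}} - \frac{S}{\tau_p},
      \end{aligned}

    where J is the current density, S the photon density, g the gain coefficient, and the scattering times \tau are field- and temperature-dependent through the Schrödinger-Poisson solution, which is what couples the optical output to the operating field strength.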

  13. Application of troposphere model from NWP and GNSS data into real-time precise positioning

    NASA Astrophysics Data System (ADS)

    Wilgan, Karina; Hadas, Tomasz; Kazmierski, Kamil; Rohm, Witold; Bosy, Jaroslaw

    2016-04-01

    Empirical models of the tropospheric delay are usually functions of meteorological parameters (temperature, pressure and humidity). The application of standard atmosphere parameters or global models, such as the GPT (global pressure/temperature) model or the UNB3 (University of New Brunswick, version 3) model, may not be sufficient, especially for positioning in non-standard weather conditions. A possible solution is to use regional troposphere models based on real-time or near-real-time measurements. We implement a regional troposphere model in the PPP (Precise Point Positioning) software GNSS-WARP (Wroclaw Algorithms for Real-Time Positioning) developed at Wroclaw University of Environmental and Life Sciences. The software is capable of processing static and kinematic multi-GNSS data in real-time and post-processing modes and takes advantage of final IGS (International GNSS Service) products as well as IGS RTS (Real-Time Service) products. A shortcoming of the PPP technique is the time required for the solution to converge. One of the reasons is the high correlation among the estimated parameters: troposphere delay, receiver clock offset and receiver height. To efficiently decorrelate these parameters, a significant change in satellite geometry is required. An alternative solution is to introduce an external high-quality regional troposphere delay model to constrain the troposphere estimates. The proposed model consists of zenith total delays (ZTD) and mapping functions calculated from meteorological parameters from the Numerical Weather Prediction model WRF (Weather Research and Forecasting) and from ZTDs from ground-based GNSS stations, using the least-squares collocation software COMEDIE (Collocation of Meteorological Data for Interpretation and Estimation of Tropospheric Pathdelays) developed at ETH Zurich.
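
    The delay quantities involved relate to positioning in the standard way (a sketch of the usual convention, with \varepsilon the satellite elevation angle):

      ZTD(t) = ZHD(t) + ZWD(t), \qquad
      STD(\varepsilon, t) = m_h(\varepsilon)\, ZHD(t) + m_w(\varepsilon)\, ZWD(t)

    so the externally supplied ZTDs and mapping functions m_h, m_w can either replace the tropospheric parameter in the PPP filter or tightly constrain it, shortening convergence times.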

  14. Performance Prediction Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chennupati, Gopinath; Santhi, Nanadakishore; Eidenbenz, Stephen

    The Performance Prediction Toolkit (PPT) is a scalable co-design tool that contains hardware and middleware models, which accept proxy applications as input for runtime prediction. PPT relies on Simian, a parallel discrete event simulation engine in Python or Lua, that uses the process concept, where each computing unit (host, node, core) is a Simian entity. Processes perform their task through message exchanges to remain active, sleep, wake up, begin and end. The PPT hardware model of a compute core (such as a Haswell core) consists of a set of parameters, such as clock speed, memory hierarchy levels, their respective sizes, cache lines, access times for different cache levels, average cycle counts of ALU operations, etc. These parameters are ideally read off a spec sheet or are learned using regression models trained on hardware counter (PAPI) data. The compute core model offers an API to the software model, a function called time_compute(), which takes a tasklist as input. A tasklist is an unordered set of ALU and other CPU-type operations (in particular virtual memory loads and stores). The PPT application model mimics the loop structure of the application and replaces the computational kernels with a call to the hardware model's time_compute() function, giving tasklists as input that model the compute kernel. A PPT application model thus consists of tasklists representing kernels and the higher-level loop structure that we like to think of as pseudo code. The key challenge for the hardware model's time_compute() function is to translate virtual memory accesses into actual cache hierarchy level hits and misses. PPT also contains another CPU core level hardware model, the Analytical Memory Model (AMM). The AMM solves this challenge soundly, whereas our previous alternatives explicitly included the L1, L2, L3 hit rates as inputs to the tasklists. Explicit hit rates inevitably only reflect the application modeler's best guess, perhaps informed by a few small test problems using hardware counters; also, hard-coded hit rates make the hardware model insensitive to changes in cache sizes. Alternatively, we use reuse distance distributions in the tasklists. In general, reuse profiles require the application modeler to run a very expensive trace analysis on the real code that realistically can be done at best for small examples.
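
    The tasklist/time_compute() interface described above can be mimicked with a toy hardware model (a sketch grounded only in the description in this record; the parameter values, the task vocabulary and the fixed hit rate standing in for the reuse-distance machinery are all made up):

      from dataclasses import dataclass

      @dataclass
      class CoreModel:
          clock_hz: float = 2.6e9          # e.g. a Haswell-class core
          cycles_per_alu: float = 1.0      # average cycle count of an ALU operation
          cycles_per_hit: float = 4.0      # L1 hit latency in cycles
          cycles_per_miss: float = 200.0   # miss all the way to DRAM
          hit_rate: float = 0.95           # stand-in for the AMM's reuse-distance analysis

          def time_compute(self, tasklist):
              """Translate an unordered set of (op, count) pairs into predicted seconds."""
              cycles = 0.0
              for op, count in tasklist:
                  if op == "alu":
                      cycles += count * self.cycles_per_alu
                  elif op == "load":
                      cycles += count * (self.hit_rate * self.cycles_per_hit
                                         + (1 - self.hit_rate) * self.cycles_per_miss)
              return cycles / self.clock_hz

      # application model: the loop structure is kept, the kernel becomes a tasklist
      core = CoreModel()
      kernel = [("alu", 5_000_000), ("load", 2_000_000)]
      runtime = sum(core.time_compute(kernel) for _ in range(100))  # 100 outer iterations
      print(f"predicted runtime: {runtime:.3f} s")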

  15. Dynamical coupling between magnetic equilibrium and transport in tokamak scenario modelling, with application to current ramps

    NASA Astrophysics Data System (ADS)

    Fable, E.; Angioni, C.; Ivanov, A. A.; Lackner, K.; Maj, O.; Medvedev, S. Yu; Pautasso, G.; Pereverzev, G. V.; Treutterer, W.; the ASDEX Upgrade Team

    2013-07-01

    The modelling of tokamak scenarios requires the simultaneous solution of both the time evolution of the plasma kinetic profiles and of the magnetic equilibrium. Their dynamical coupling involves additional complications, which are not present when the two physical problems are solved separately. Difficulties arise in maintaining consistency in the time evolution among quantities which appear in both the transport and the Grad-Shafranov equations, specifically the poloidal and toroidal magnetic fluxes as a function of each other and of the geometry. The required consistency can be obtained by means of iteration cycles, which are performed outside the equilibrium code and which can have different convergence properties depending on the chosen numerical scheme. When these external iterations are performed, the stability of the coupled system becomes a concern. In contrast, if these iterations are not performed, the coupled system is numerically stable, but can become physically inconsistent. By employing a novel scheme (Fable E et al 2012 Nucl. Fusion submitted), which ensures stability and physical consistency among the same quantities that appear in both the transport and magnetic equilibrium equations, a newly developed version of the ASTRA transport code (Pereverzev G V et al 1991 IPP Report 5/42), which is coupled to the SPIDER equilibrium code (Ivanov A A et al 2005 32nd EPS Conf. on Plasma Physics (Tarragona, 27 June-1 July) vol 29C (ECA) P-5.063), in both prescribed- and free-boundary modes is presented here for the first time. The ASTRA-SPIDER coupled system is then applied to the specific study of the modelling of controlled current ramp-up in ASDEX Upgrade discharges.

  16. A self-consistent model of ionic wind generation by negative corona discharges in air with experimental validation

    NASA Astrophysics Data System (ADS)

    Chen, She; Nobelen, J. C. P. Y.; Nijdam, S.

    2017-09-01

    Ionic wind is produced by a corona discharge when gaseous ions are accelerated in the electric field and transfer their momentum to neutral molecules by collisions. This technique is promising because a gas flow can be generated without the need for moving parts and can be easily miniaturized. The basic theory of ionic wind sounds simple but the details are far from clear. In our experiment, a negative DC voltage is applied to a needle-cylinder electrode geometry. Hot wire anemometry is used to measure the flow velocity at the downstream exit of the cylinder. The flow velocity fluctuates but the average velocity increases with the voltage. The current consists of a regular train of pulses with short rise time, the well-known Trichel pulses. To reveal the ionic wind mechanism in the Trichel pulse stage, a three-species corona model coupled with gas dynamics is built. The drift-diffusion equations of the plasma together with the Navier-Stokes equations of the flow are solved in COMSOL Multiphysics. The electric field, net number density of charged species, electrohydrodynamic (EHD) body force and flow velocity are calculated in detail by a self-consistent model. Multiple time scales are employed: hundreds of microseconds for the plasma characteristics and longer time scales (˜1 s) for the flow behavior. We found that the flow velocity as well as the EHD body force have opposite directions in the ionization region close to the tip and the ion drift region further away from the tip. The calculated mean current, Trichel pulse frequency and flow velocity are very close to our experimental results. Furthermore, in our simulations we were able to reproduce the mushroom-like minijets observed in experiments.
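
    The coupled system solved in COMSOL has the generic drift-diffusion/Poisson/Navier-Stokes structure (a sketch; n_i are the electron and positive/negative ion densities, S_i their source terms, \rho_c the net space-charge density):

      \frac{\partial n_i}{\partial t} + \nabla\cdot\left(\pm\,\mu_i n_i \mathbf{E} - D_i \nabla n_i\right) = S_i, \qquad
      \nabla\cdot(\varepsilon \nabla \phi) = -\rho_c, \qquad \mathbf{E} = -\nabla\phi,

      \rho_g\!\left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}\right)
        = -\nabla p + \mu \nabla^2 \mathbf{u} + \rho_c \mathbf{E}

    The EHD body force \rho_c\mathbf{E} is the coupling term; its sign change between the ionization region near the tip and the ion drift region further away is what produces the opposite flow directions reported above.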

  17. Sampling design for groundwater solute transport: Tests of methods and analysis of Cape Cod tracer test data

    USGS Publications Warehouse

    Knopman, Debra S.; Voss, Clifford I.; Garabedian, Stephen P.

    1991-01-01

    Tests of a one-dimensional sampling design methodology on measurements of bromide concentration collected during the natural gradient tracer test conducted by the U.S. Geological Survey on Cape Cod, Massachusetts, demonstrate its efficacy for field studies of solute transport in groundwater and the utility of one-dimensional analysis. The methodology was applied to the design of sparse two-dimensional networks of fully screened wells typical of those often used in engineering practice. In one-dimensional analysis, designs consist of the downstream distances to rows of wells oriented perpendicular to the groundwater flow direction and the timing of sampling to be carried out on each row. The power of a sampling design is measured by its effectiveness in simultaneously meeting objectives of model discrimination, parameter estimation, and cost minimization. One-dimensional models of solute transport, differing in processes affecting the solute and assumptions about the structure of the flow field, were considered for description of tracer cloud migration. When fitting each model using nonlinear regression, additive and multiplicative error forms were allowed for the residuals, which consist of both random and model errors. The one-dimensional single-layer model of a nonreactive solute with multiplicative error was judged to be the best of those tested. Results show the efficacy of the methodology in designing sparse but powerful sampling networks. Designs that sample five rows of wells at five or fewer times in any given row performed as well for model discrimination as the full set of samples taken up to eight times in a given row from as many as 89 rows. Also, designs for parameter estimation judged to be good by the methodology were as effective in reducing the variance of parameter estimates as arbitrary designs with many more samples. Results further showed that estimates of velocity and longitudinal dispersivity in one-dimensional models based on data from only five rows of fully screened wells each sampled five or fewer times were practically equivalent to values determined from moments analysis of the complete three-dimensional set of 29,285 samples taken during 16 sampling times.

  18. Reconstruction of total and spectral solar irradiance from 1974 to 2013 based on KPVT, SoHO/MDI, and SDO/HMI observations

    NASA Astrophysics Data System (ADS)

    Yeo, K. L.; Krivova, N. A.; Solanki, S. K.; Glassmeier, K. H.

    2014-10-01

    Context. Total and spectral solar irradiance are key parameters in the assessment of solar influence on changes in the Earth's climate. Aims: We present a reconstruction of daily solar irradiance obtained using the SATIRE-S model spanning 1974 to 2013 based on full-disc observations from the KPVT, SoHO/MDI, and SDO/HMI. Methods: SATIRE-S ascribes variation in solar irradiance on timescales greater than a day to photospheric magnetism. The solar spectrum is reconstructed from the apparent surface coverage of bright magnetic features and sunspots in the daily data using the modelled intensity spectra of these magnetic structures. We cross-calibrated the various data sets, harmonizing the model input so as to yield a single consistent time series as the output. Results: The model replicates 92% (R2 = 0.916) of the variability in the PMOD TSI composite including the secular decline between the 1996 and 2008 solar cycle minima. The model also reproduces most of the variability in observed Lyman-α irradiance and the Mg II index. The ultraviolet solar irradiance measurements from the UARS and SORCE missions are mutually consistent up to about 180 nm before they start to exhibit discrepant rotational and cyclical variability, indicative of unresolved instrumental effects. As a result, the agreement between model and measurement, while relatively good below 180 nm, starts to deteriorate above this wavelength. As with earlier similar investigations, the reconstruction cannot reproduce the overall trends in SORCE/SIM SSI. We argue, from the lack of clear solar cycle modulation in the SIM record and the inconsistency between the total flux recorded by the instrument and TSI, that unaccounted instrumental trends are present. Conclusions: The daily solar irradiance time series is consistent with observations from multiple sources, demonstrating its validity and utility for climate models. It also provides further evidence that photospheric magnetism is the prime driver of variation in solar irradiance on timescales greater than a day.
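
    In SATIRE-type models the reconstruction is, schematically, a filling-factor-weighted sum over photospheric components (quiet Sun, faculae, sunspot umbrae and penumbrae):

      F(\lambda, t) = \sum_{k} \alpha_k(t)\, F_k(\lambda), \qquad \sum_{k} \alpha_k(t) = 1

    where the \alpha_k(t) are the apparent disc coverages of each component derived from the daily magnetograms and continuum images, and the F_k(\lambda) are time-independent modelled intensity spectra; the cross-calibration discussed above harmonizes the \alpha_k(t) across the KPVT, MDI and HMI data sets.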

  19. Decay of the 3D viscous liquid-gas two-phase flow model with damping

    NASA Astrophysics Data System (ADS)

    Zhang, Yinghui

    2016-08-01

    We establish the optimal L^p-L^2 (1 ≤ p < 6/5) time decay rates of the solution to the Cauchy problem for the 3D viscous liquid-gas two-phase flow model with damping and analyse the influence of the damping on the qualitative behavior of the solution. It is observed that the frictional effect of the damping affects the dispersion of the fluids and enhances the time decay rate of the solution. Our method of proof consists of the Hodge decomposition technique, L^p-L^2 estimates for the linearized equations, and delicate energy estimates.
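
    Decay estimates of this L^p-L^2 type typically take the form (a generic statement of the rate, under smallness assumptions on the initial data):

      \| u(t) \|_{L^2(\mathbb{R}^3)} \;\le\; C\,(1+t)^{-\frac{3}{2}\left(\frac{1}{p} - \frac{1}{2}\right)}

    which over the admissible range 1 \le p < 6/5 gives decay exponents between 1/2 and 3/4, the fastest rate being attained for p = 1.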

  20. The effect of data structures on INGRES performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Creighton, J.R.

    1987-01-01

    Computer experiments were conducted to determine the effect of using Heap, ISAM, Hash and B-tree data structures for INGRES relations. Average times for retrieve, append and update were determined for searches by unique key and non-key data. The experiments were conducted on relations of approximately 1000 tuples of 332 byte width. Multiple operations were performed, where appropriate, to obtain average times. Simple models of the data structures are presented and shown to be consistent with experimental results. The models can be used to predict performance, and to select the appropriate data structure for various applications.
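
    The simple data-structure models mentioned can be sketched as expected probe counts for a unique-key retrieve (illustrative only; the constants and fan-out below are made up and are not INGRES internals):

      import math

      N = 1000  # tuples in the relation

      def heap_probes(n):              # unordered heap: scan half the relation on average
          return n / 2

      def isam_probes(n, fanout=50):   # static index: one descent through the index levels
          return math.ceil(math.log(n, fanout)) + 1

      def hash_probes(n, load=0.8):    # hashing: one bucket visit plus expected overflow
          return 1 + load / 2

      def btree_probes(n, fanout=50):  # B-tree: logarithmic descent, stays balanced under appends
          return math.ceil(math.log(n, fanout)) + 1

      for name, model in [("heap", heap_probes), ("ISAM", isam_probes),
                          ("hash", hash_probes), ("B-tree", btree_probes)]:
          print(f"{name:6s}: ~{model(N):.1f} probes per unique-key retrieve")

    A model of this kind makes the trade-off visible: hashing is constant-time on unique keys, tree and ISAM structures are logarithmic, and a heap forces a scan, so the best choice depends on the mix of retrieves, appends and updates.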

  1. Microwave and hard X-ray emissions during the impulsive phase of solar flares: Nonthermal electron spectrum and time delay

    NASA Technical Reports Server (NTRS)

    Gu, Ye-Ming; Li, Chung-Sheng

    1986-01-01

    On the basis of a summary and analysis of the observations and theories concerning impulsive microwave and hard X-ray bursts, the correlations between these two kinds of emission were investigated. It is shown that the optically thin microwave spectrum and its relation to the hard X-ray spectrum can only be explained by means of a nonthermal source model. A simple nonthermal trap model in the mildly relativistic case can consistently explain the main characteristics of the spectrum and the relative time delays.

  2. Prospective memory: A comparative perspective

    PubMed Central

    Crystal, Jonathon D.; Wilson, A. George

    2014-01-01

    Prospective memory consists of forming a representation of a future action, temporarily storing that representation in memory, and retrieving it at a future time point. Here we review the recent development of animal models of prospective memory. We review experiments using rats that focus on the development of time-based and event-based prospective memory. Next, we review a number of prospective-memory approaches that have been used with a variety of non-human primates. Finally, we review selected approaches from the human literature on prospective memory to identify targets for development of animal models of prospective memory. PMID:25101562

  3. Toward automatic time-series forecasting using neural networks.

    PubMed

    Yan, Weizhong

    2012-07-01

    Over the past few decades, the application of artificial neural networks (ANN) to time-series forecasting (TSF) has been growing rapidly due to several unique features of ANN models. However, to date, consistent ANN performance over different studies has not been achieved. Many factors contribute to the inconsistency in the performance of neural network models. One such factor is that ANN modeling involves determining a large number of design parameters, and the current design practice is essentially heuristic and ad hoc, which does not exploit the full potential of neural networks. Systematic ANN modeling processes and strategies for TSF are, therefore, greatly needed. Motivated by this need, this paper attempts to develop an automatic ANN modeling scheme. It is based on the generalized regression neural network (GRNN), a special type of neural network. By taking advantage of several GRNN properties (i.e., a single design parameter and fast learning) and by incorporating several design strategies (e.g., fusing multiple GRNNs), we have made the proposed modeling scheme effective for modeling large-scale business time series. The initial model was entered into the NN3 time-series competition and was awarded the best prediction on the reduced dataset among approximately 60 different models submitted by scholars worldwide.
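
    A GRNN is essentially a Nadaraya-Watson kernel regressor with a single smoothing parameter sigma, which is what makes its design space so small; a minimal sketch follows (the multi-GRNN fusion and the competition-specific input encoding are omitted):

      import numpy as np

      def grnn_predict(X_train, y_train, x, sigma=0.5):
          """Generalized regression neural network: kernel-weighted average of training targets."""
          d2 = np.sum((X_train - x) ** 2, axis=1)       # squared distances to all training patterns
          w = np.exp(-d2 / (2.0 * sigma ** 2))          # Gaussian kernel weights (pattern layer)
          return float(np.dot(w, y_train) / np.sum(w))  # normalized weighted sum (summation layer)

      # one-step-ahead forecasting with lagged values as inputs
      series = np.sin(np.linspace(0, 20, 200)) + 0.05 * np.random.randn(200)
      lags = 4
      X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
      y = series[lags:]
      print("next-step forecast:", grnn_predict(X[:-1], y[:-1], X[-1]))

    With sigma as the only design parameter and no iterative training, model selection reduces to a one-dimensional search, which is what makes the scheme automatable.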

  4. Modeling time-to-event (survival) data using classification tree analysis.

    PubMed

    Linden, Ariel; Yarnold, Paul R

    2017-12-01

    Time to the occurrence of an event is often studied in health research. Survival analysis differs from other designs in that follow-up times for individuals who do not experience the event by the end of the study (called censored) are accounted for in the analysis. Cox regression is the standard method for analysing censored data, but the assumptions required of these models are easily violated. In this paper, we introduce classification tree analysis (CTA) as a flexible alternative for modelling censored data. Classification tree analysis is a "decision-tree"-like classification model that provides parsimonious, transparent (i.e., easy to visually display and interpret) decision rules that maximize predictive accuracy, derives exact P values via permutation tests, and evaluates model cross-generalizability. Using empirical data, we identify all statistically valid, reproducible, longitudinally consistent, and cross-generalizable CTA survival models and then compare their predictive accuracy to estimates derived via Cox regression and an unadjusted naïve model. Model performance is assessed using integrated Brier scores and a comparison between estimated survival curves. The Cox regression model best predicts average incidence of the outcome over time, whereas CTA survival models best predict either relatively high or low incidence of the outcome over time. Classification tree analysis survival models offer many advantages over Cox regression, such as explicit maximization of predictive accuracy, parsimony, statistical robustness, and transparency. Therefore, researchers interested in accurate prognoses and clear decision rules should consider developing models using the CTA-survival framework. © 2017 John Wiley & Sons, Ltd.
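
    For reference, the Cox model that CTA is benchmarked against assumes proportional hazards, the assumption the authors note is easily violated:

      h(t \mid \mathbf{x}) = h_0(t)\, \exp\!\left(\boldsymbol{\beta}^{\top}\mathbf{x}\right)

    Every covariate multiplies the baseline hazard h_0(t) by a time-constant factor; CTA imposes no such functional form and instead partitions observations with explicit decision rules.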

  5. German EstSmoke: estimating adult smoking-related costs and consequences of smoking cessation for Germany.

    PubMed

    Sonntag, Diana; Gilbody, Simon; Winkler, Volker; Ali, Shehzad

    2018-01-01

    We compared predicted life-time health-care costs for current, never and ex-smokers in Germany under the current set of tobacco control policies. We compared these economic consequences of the current situation with an alternative in which Germany were to implement more comprehensive tobacco control policies consistent with the World Health Organization (WHO) Framework Convention on Tobacco Control (FCTC) guidelines. German EstSmoke, an adapted version of the UK EstSmoke simulation model, applies the Markov modelling approach. Transition probabilities for the (re-)occurrence of smoking-related diseases were calculated from large German disease-specific registries and the German Health Update (GEDA 2010). Estimates of both health-care costs and effect sizes of smoking cessation policies were taken from recent German studies and discounted at 3.5%/year. The setting was Germany; the modelled population comprised prevalent current, never and ex-smokers in 2009, and the measurements were life-time costs and outcomes in current, never and ex-smokers. If tobacco control policies are not strengthened, the German smoking population will incur €41.56 billion life-time excess costs compared with never smokers. Implementing tobacco control policies consistent with WHO FCTC guidelines would reduce the difference in life-time costs between current smokers and ex-smokers by at least €1.7 billion. Modelling suggests that the life-time health-care costs of people in Germany who smoke are substantially greater than those of people who have never smoked. However, more comprehensive tobacco control policies could reduce health-care expenditures for current smokers by at least 4%. © 2017 Society for the Study of Addiction.
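
    The Markov cost machinery behind such estimates can be illustrated in a few lines (a toy sketch with invented states, transition probabilities and costs, using the stated 3.5%/year discount rate; this is not the EstSmoke model):

      import numpy as np

      # annual transition probabilities between invented health states
      P = np.array([[0.94, 0.05, 0.01],   # healthy -> healthy/sick/dead
                    [0.00, 0.90, 0.10],   # sick    -> sick/dead
                    [0.00, 0.00, 1.00]])  # dead is absorbing
      annual_cost = np.array([200.0, 5000.0, 0.0])  # health-care cost per state-year (invented)
      discount = 0.035

      dist = np.array([1.0, 0.0, 0.0])    # cohort starts healthy
      total = 0.0
      for year in range(60):              # life-time horizon
          total += np.dot(dist, annual_cost) / (1 + discount) ** year
          dist = dist @ P                 # advance the cohort one year
      print(f"discounted expected life-time cost per person: {total:,.0f}")

    Comparing such totals across cohorts whose transition probabilities reflect current, never and ex-smokers is what yields life-time excess-cost figures of the kind reported above.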

  6. Resource allocation decisions in low-income rural households.

    PubMed

    Franklin, D L; Harrell, M W

    1985-05-01

    This paper is based on the theory that a society's nutritional well-being is both a cause and a consequence of the developmental process within that society. An approach to the choices made by poor rural households regarding food acquisition and nurturing behavior is emerging from recent research based on the new economic theory of household production. The central thesis of this approach is that household decisions related to the fulfillment of basic needs are strongly determined by decisions on the allocation of time to household production activities. We summarize the results of estimating a model of household production and consumption behavior with data from a cross-sectional survey of 30 rural communities in Veraguas Province, Panama. The structure of the model consists of the allocation of resources to nurturing activities and to production activities. The resources to be allocated are time and market goods, and in theory, these are allocated according to relative prices. The empirical results of this study are generally consistent with the predictions of the neoclassical economic model of household resource allocation. The major conclusions are that time allocations and market price conditions matter in determining well-being in low-income rural households and, importantly, that nurturing decisions significantly affect the product and factor market behavior of these households; these conclusions form the basis for a discussion of implications for agricultural and rural development. Programs and policies that seek nutritional improvement should be designed with explicit recognition of the value of time and the importance of timing in the decisions of the poor.

  7. The Primary Care Computer Simulation: Optimal Primary Care Manager Empanelment.

    DTIC Science & Technology

    1997-05-01

    explored in which a team consisted of two providers, two nurses, and a nurse aide. Each team had a specific exam room assigned to them. Additionally, a...team consisting of one provider, one nurse, and one nurse aide was simulated. The model also examined the effects of adding two exam rooms. The study...minutes. The optimal solution, which reduced patient time to below 90 minutes, was the mix of one provider, a nurse, and a nurse aide in which each

  8. Learning complex temporal patterns with resource-dependent spike timing-dependent plasticity.

    PubMed

    Hunzinger, Jason F; Chan, Victor H; Froemke, Robert C

    2012-07-01

    Studies of spike timing-dependent plasticity (STDP) have revealed that long-term changes in the strength of a synapse may be modulated substantially by temporal relationships between multiple presynaptic and postsynaptic spikes. Whereas long-term potentiation (LTP) and long-term depression (LTD) of synaptic strength have been modeled as distinct or separate functional mechanisms, here, we propose a new shared resource model. A functional consequence of our model is fast, stable, and diverse unsupervised learning of temporal multispike patterns with a biologically consistent spiking neural network. Due to interdependencies between LTP and LTD, dendritic delays, and proactive homeostatic aspects of the model, neurons are equipped to learn to decode temporally coded information within spike bursts. Moreover, neurons learn spike timing with few exposures in substantial noise and jitter. Surprisingly, despite having only one parameter, the model also accurately predicts in vitro observations of STDP in more complex multispike trains, as well as rate-dependent effects. We discuss candidate commonalities in natural long-term plasticity mechanisms.
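
    For reference, the classical pair-based STDP window that the shared-resource model generalizes is (with \Delta t = t_{post} - t_{pre}):

      \Delta w =
      \begin{cases}
      A_+\, e^{-\Delta t/\tau_+}, & \Delta t > 0 \quad \text{(pre before post: LTP)},\\
      -A_-\, e^{\,\Delta t/\tau_-}, & \Delta t < 0 \quad \text{(post before pre: LTD)}.
      \end{cases}

    The model above departs from this picture by making potentiation and depression draw on a common, history-dependent resource rather than acting as independent mechanisms, which is how it captures multispike and rate-dependent effects with a single parameter.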

  9. Discrete dynamical system modelling for gene regulatory networks of 5-hydroxymethylfurfural tolerance for ethanologenic yeast.

    PubMed

    Song, M; Ouyang, Z; Liu, Z L

    2009-05-01

    Composed of linear difference equations, a discrete dynamical system (DDS) model was designed to reconstruct transcriptional regulations in gene regulatory networks (GRNs) for ethanologenic yeast Saccharomyces cerevisiae in response to 5-hydroxymethylfurfural (HMF), a bioethanol conversion inhibitor. The modelling aims at identification of a system of linear difference equations to represent temporal interactions among significantly expressed genes. Power stability is imposed on a system model under the normal condition in the absence of the inhibitor. Non-uniform sampling, typical in a time-course experimental design, is addressed by a log-time domain interpolation. A statistically significant DDS model of the yeast GRN derived from time-course gene expression measurements by exposure to HMF, revealed several verified transcriptional regulation events. These events implicate Yap1 and Pdr3, transcription factors consistently known for their regulatory roles by other studies or postulated by independent sequence motif analysis, suggesting their involvement in yeast tolerance and detoxification of the inhibitor.
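
    The model class referred to is linear in the vector of expression levels x(t), and power stability has a standard characterization (a sketch of the generic form, not the paper's exact notation):

      \mathbf{x}(t+1) = A\,\mathbf{x}(t), \qquad
      \rho(A) = \max_i |\lambda_i(A)| < 1 \;\Rightarrow\; A^{k} \to 0 \text{ as } k \to \infty

    Imposing power stability under the normal condition thus amounts to constraining the spectral radius of the estimated interaction matrix A, whose entry A_{ij} captures the regulatory influence of gene j on gene i.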

  10. Maternal and Paternal Resources across Childhood and Adolescence as Predictors of Young Adult Achievement.

    PubMed

    Sun, Xiaoran; McHale, Susan M; Updegraff, Kimberly A

    2017-06-01

    Family experiences have been linked to youth's achievements in childhood and adolescence, but we know less about their long term implications for educational and occupational achievements in young adulthood. Grounded in social capital theory and ecological frameworks, this study tested whether mothers' and fathers' education and occupation attainments, as well as the mean level and cross-time consistency of parental warmth during childhood and adolescence, predicted educational and occupational achievements in young adulthood. We also tested interactions between parental achievement and warmth in predicting these young adult outcomes. Data were collected from mothers, fathers, and firstborn and secondborn siblings in 164 families at up to 11 time points. Predictors came from the first nine annual points (youth age M = 10.52 at Time 1) and outcomes from when young adults averaged 26 years old (firstborns at Time 10, secondborns at Time 11). Results from multilevel models revealed that both mothers' and fathers' educational attainment and warmth consistency from childhood through adolescence predicted young adults' educational attainment. Fathers' occupational prestige predicted sons', but not daughters', prestige. An interaction between mothers' warmth consistency, occupational prestige, and youth gender revealed that, for sons whose mothers' prestige was low, warmth consistency positively predicted their prestige, but this association was nonsignificant when mothers' prestige was high. Conversely, for daughters with mothers high in prestige, warmth consistency was a trend level, positive predictor of daughters' prestige, but was nonsignificant when mothers' prestige was low. Thus, maternal resources appeared to have a cumulative impact on daughters, but the process for sons was compensatory. Discussion focuses on the role of family resources in the gender gap in young adult achievement.

  11. Response time in economic games reflects different types of decision conflict for prosocial and proself individuals

    PubMed Central

    Matsumoto, Yoshie; Kiyonari, Toko; Takagishi, Haruto; Li, Yang; Kanai, Ryota; Sakagami, Masamichi

    2017-01-01

    Behavioral and neuroscientific studies explore two pathways through which internalized social norms promote prosocial behavior. One pathway involves internal control of impulsive selfishness, and the other involves emotion-based prosocial preferences that are translated into behavior when they evade cognitive control for pursuing self-interest. We measured 443 participants' overall prosocial behavior in four economic games. Participants' predispositions [social value orientation (SVO)] were more strongly reflected in their overall game behavior when they made decisions quickly than when they spent a longer time. Prosocially (or selfishly) predisposed participants behaved less prosocially (or less selfishly) when they spent more time in decision making, such that their SVO prosociality had limited effects on actual behavior in their slow decisions. This time-related decrease (or increase) in prosociality was prominent among consistent prosocials (or proselfs), whose strong preference for prosocial (or proself) goals would make them less likely to experience conflict between prosocial and proself goals. The strong effect of response time (RT) on behavior in consistent prosocials (or proselfs) suggests that conflict between prosocial and selfish goals alone is not responsible for slow decisions. Specifically, we found that contemplation of the risk of being exploited by others (social risk aversion) was partly responsible for making consistent prosocials (but not consistent proselfs) spend a longer time in decision making and behave less prosocially. Conflict between means rather than between goals (immediate versus strategic pursuit of self-interest) was suggested to be responsible for the time-related increase in consistent proselfs' prosocial behavior. The findings of this study are generally in favor of the intuitive cooperation model of prosocial behavior. PMID:28559334

  13. Time-Dependent Testing Evaluation and Modeling for Rubber Stopper Seal Performance.

    PubMed

    Zeng, Qingyu; Zhao, Xia

    2018-01-01

    Sufficient rubber stopper sealing performance throughout the entire sealed product life cycle is essential for maintaining container closure integrity in the parenteral packaging industry. However, prior publications have lacked systematic consideration of the time-dependent influence on sealing performance that results from the viscoelastic characteristics of the rubber stoppers. In this paper, we report the results of an effort to study these effects by applying both compression stress relaxation testing and residual seal force testing for time-dependent experimental data collection. These experiments were followed by modeling fit calculations based on the Maxwell-Wiechert theory modified with the Kohlrausch-Williams-Watts stretched exponential function, resulting in a nonlinear, time-dependent sealing force model. By employing both testing evaluations and modeling calculations, an in-depth understanding of the time-dependent effects on rubber stopper sealing force was developed. Both testing and modeling data show good consistency, demonstrating that the sealing force decays exponentially over time and eventually levels off because of the viscoelastic nature of the rubber stoppers. The nonlinearity of stress relaxation derives from the viscoelastic characteristics of the rubber stoppers coupled with the large stopper compression deformation into restrained geometry conditions. The modeling fit, with the capability to handle actual testing data, can be employed as a tool to calculate the compression stress relaxation and residual seal force throughout the entire sealed product life cycle. In addition to being time-dependent, stress relaxation is also experimentally shown to be temperature-dependent. The present work provides a new, integrated methodology framework and some fresh insights to the parenteral packaging industry for practically and proactively considering, designing, setting up, controlling, and managing stopper sealing performance throughout the entire sealed product life cycle. LAY ABSTRACT: Historical publications in the parenteral packaging industry have lacked systematic consideration of the time-dependent influence on sealing performance that results from the viscoelastic characteristics of the rubber stoppers. This study applied compression stress relaxation testing and residual seal force testing for time-dependent experimental data collection. These experiments were followed by modeling fit calculations based on the Maxwell-Wiechert theory modified with the Kohlrausch-Williams-Watts stretched exponential function, resulting in a nonlinear, time-dependent sealing force model. Experimental and modeling data show good consistency, demonstrating that sealing force decays exponentially over time and eventually levels off. The nonlinearity of stress relaxation derives from the viscoelastic characteristics of the rubber stoppers coupled with the large stopper compression deformation into restrained geometry conditions. In addition to being time-dependent, stress relaxation is also experimentally shown to be temperature-dependent. The present work provides a new, integrated methodology framework and some fresh insights to the industry for practically and proactively considering, designing, setting up, controlling, and managing stopper sealing performance throughout the entire sealed product life cycle. © PDA, Inc. 2018.
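
    In its compact single-branch version, the fitted Maxwell-Wiechert/KWW form lets the sealing force decay as a stretched exponential toward a plateau (a sketch of the functional form, with parameters fitted to the CSR/RSF data):

      F(t) = F_\infty + (F_0 - F_\infty)\, e^{-(t/\tau)^\beta}, \qquad 0 < \beta \le 1

    Here F_0 is the initial sealing force, F_\infty the long-time plateau, \tau the characteristic relaxation time and \beta the stretching exponent capturing the breadth of the underlying relaxation-time spectrum (\beta = 1 recovers a single Maxwell element).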

  14. Many-body Green’s function theory for electron-phonon interactions: The Kadanoff-Baym approach to spectral properties of the Holstein dimer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Säkkinen, Niko; Peng, Yang; Fritz-Haber-Institut der Max-Planck-Gesellschaft, Faradayweg 4-6, 14195 Berlin-Dahlem

    2015-12-21

    We present a Kadanoff-Baym formalism to study time-dependent phenomena for systems of interacting electrons and phonons in the framework of many-body perturbation theory. The formalism correctly takes into account effects of the initial preparation of an equilibrium state and allows for an explicit time-dependence of both the electronic and phononic degrees of freedom. The method is applied to investigate the charge-neutral and non-neutral excitation spectra of a homogeneous, two-site, two-electron Holstein model. This is an extension of a previous study of the ground state properties in the Hartree (H), partially self-consistent Born (Gd) and fully self-consistent Born (GD) approximations published in Säkkinen et al. [J. Chem. Phys. 143, 234101 (2015)]. Here, the homogeneous ground state solution is shown to become unstable for a sufficiently strong interaction, while a symmetry-broken ground state solution is shown to be stable in the Hartree approximation. Signatures of this instability are observed for the partially self-consistent Born approximation but are not found for the fully self-consistent Born approximation. By understanding the stability properties, we are able to study the linear response regime by calculating the density-density response function by time-propagation. This amounts to a solution of the Bethe-Salpeter equation with a sophisticated kernel. The results indicate that none of the approximations is able to describe the response function during or beyond the bipolaronic crossover for the parameters investigated. Overall, we provide an extensive discussion of when the approximations are valid and how they fail to describe the studied exact properties of the chosen model system.
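
    The model system studied is the two-site Holstein dimer, whose Hamiltonian has the standard form (conventions for the couplings vary between papers):

      H = -t \sum_{\sigma}\left( c^{\dagger}_{1\sigma} c_{2\sigma} + c^{\dagger}_{2\sigma} c_{1\sigma} \right)
          + \omega_0 \sum_{i=1,2} b^{\dagger}_i b_i
          + g \sum_{i=1,2} \hat{n}_i \left( b^{\dagger}_i + b_i \right)

    Electrons with hopping amplitude t couple locally, with strength g, to an Einstein phonon of frequency \omega_0 on each site; the bipolaronic crossover mentioned above sets in as the effective coupling g^2/\omega_0 grows against t.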

  15. First Measurements of the HCFC-142b Trend from Atmospheric Chemistry Experiment (ACE) Solar Occultation Spectra

    NASA Technical Reports Server (NTRS)

    Rinsland, Curtis P.; Chiou, Linda; Boone,Chris; Bernath, Peter; Mahieu, Emmanuel

    2009-01-01

    The first measurement of the HCFC-142b (CH3CClF2) trend near the tropopause has been derived from volume mixing ratio (VMR) measurements at northern and southern hemisphere mid-latitudes for the 2004-2008 time period from spaceborne solar occultation observations recorded at 0.02 cm⁻¹ resolution with the ACE (Atmospheric Chemistry Experiment) Fourier transform spectrometer. The HCFC-142b molecule is currently the third most abundant HCFC (hydrochlorofluorocarbon) in the atmosphere, and ACE measurements over this time span show a continuous rise in its volume mixing ratio. Monthly average measurements at northern and southern hemisphere mid-latitudes have similar increase rates that are consistent with surface trend measurements for a similar time span. A mean northern hemisphere profile for the time span shows a nearly constant VMR over the 8-20 km altitude range, consistent on average for the same time span with in situ results. The nearly constant vertical VMR profile also agrees with model predictions of a long lifetime in the lower atmosphere.

  16. Characterization of damaged skin by impedance spectroscopy: chemical damage by dimethyl sulfoxide.

    PubMed

    White, Erick A; Orazem, Mark E; Bunge, Annette L

    2013-10-01

    To relate changes in the electrochemical impedance spectra to the progression and mechanism of skin damage arising from exposure to dimethyl sulfoxide (DMSO). Electrochemical impedance spectra measured before and after human cadaver skin was treated with neat DMSO or phosphate buffered saline (control) for 1 h or less were compared with electrical circuit models representing two contrasting theories describing the progression of DMSO damage. Flux of a model lipophilic compound (p-chloronitrobenzene) was also measured. The impedance spectra collected before and after 1 h treatment with DMSO were consistent with a single circuit model; whereas, the spectra collected after DMSO exposure for 0.25 h were consistent with the model circuits observed before and after DMSO treatment for 1 h combined in series. DMSO treatments did not significantly change the flux of p-chloronitrobenzene compared to control. Impedance measurements of human skin exposed to DMSO for less than about 0.5 h were consistent with the presence of two layers: one damaged irreversibly and one unchanged. The thickness of the damaged layer increased proportional to the square-root of treatment time until about 0.5 h, when DMSO affected the entire stratum corneum. Irreversible DMSO damage altered the lipophilic permeation pathway minimally.

  17. A model for cytoplasmic rheology consistent with magnetic twisting cytometry.

    PubMed

    Butler, J P; Kelly, S M

    1998-01-01

    Magnetic twisting cytometry is gaining wide applicability as a tool for the investigation of the rheological properties of cells and the mechanical properties of receptor-cytoskeletal interactions. Current technology involves the application and release of magnetically induced torques on small magnetic particles bound to or inside cells, with measurements of the resulting angular rotation of the particles. The properties of purely elastic or purely viscous materials can be determined by the angular strain and strain rate, respectively. However, the cytoskeleton and its linkage to cell surface receptors display elastic, viscous, and even plastic deformation, and the simultaneous characterization of these properties using only elastic or viscous models is internally inconsistent. Data interpretation is complicated by the fact that in current technology, the applied torques are not constant in time, but decrease as the particles rotate. This paper describes an internally consistent model consisting of a parallel viscoelastic element in series with a parallel viscoelastic element, and one approach to quantitative parameter evaluation. The unified model reproduces all essential features seen in data obtained from a wide variety of cell populations, and contains the pure elastic, viscoelastic, and viscous cases as subsets.

  18. Robust analysis of semiparametric renewal process models

    PubMed Central

    Lin, Feng-Chang; Truong, Young K.; Fine, Jason P.

    2013-01-01

    Summary: A rate model is proposed for a modulated renewal process comprising a single long sequence, where the covariate process may not capture the dependencies in the sequence as in standard intensity models. We consider partial likelihood-based inferences under a semiparametric multiplicative rate model, which has been widely studied in the context of independent and identical data. Under an intensity model, gap times in a single long sequence may be used naively in the partial likelihood, with variance estimation utilizing the observed information matrix. Under a rate model, the gap times cannot be treated as independent and studying the partial likelihood is much more challenging. We employ a mixing condition in the application of limit theory for stationary sequences to obtain consistency and asymptotic normality. The estimator's variance is quite complicated owing to the unknown dependence structure of the gap times. We adapt block bootstrapping and cluster variance estimators to the partial likelihood. Simulation studies and an analysis of a semiparametric extension of a popular model for neural spike train data demonstrate the practical utility of the rate approach in comparison with the intensity approach. PMID:24550568

  19. Modelling the control of interceptive actions.

    PubMed Central

    Beek, P J; Dessing, J C; Peper, C E; Bullock, D

    2003-01-01

    In recent years, several phenomenological dynamical models have been formulated that describe how perceptual variables are incorporated in the control of motor variables. We call these short-route models as they do not address how perception-action patterns might be constrained by the dynamical properties of the sensory, neural and musculoskeletal subsystems of the human action system. As an alternative, we advocate a long-route modelling approach in which the dynamics of these subsystems are explicitly addressed and integrated to reproduce interceptive actions. The approach is exemplified through a discussion of a recently developed model for interceptive actions consisting of a neural network architecture for the online generation of motor outflow commands, based on time-to-contact information and information about the relative positions and velocities of hand and ball. This network is shown to be consistent with both behavioural and neurophysiological data. Finally, some problems are discussed with regard to the question of how the motor outflow commands (i.e. the intended movement) might be modulated in view of the musculoskeletal dynamics. PMID:14561342

  20. A multi-objective genetic algorithm for a mixed-model assembly U-line balancing type-I problem considering human-related issues, training, and learning

    NASA Astrophysics Data System (ADS)

    Rabbani, Masoud; Montazeri, Mona; Farrokhi-Asl, Hamed; Rafiei, Hamed

    2016-12-01

    Mixed-model assembly lines are increasingly accepted in many industrial environments to meet the growing trend of greater product variability, diversification of customer demands, and shorter life cycles. In this research, a new mathematical model is presented that simultaneously considers the balancing of a mixed-model U-line and human-related issues. The objective function consists of two separate components. The first part of the objective function is related to the balance problem; its objectives are minimizing the cycle time, minimizing the number of workstations, and maximizing the line efficiency. The second part is related to human issues and consists of hiring cost, firing cost, training cost, and salary. To solve the presented model, two well-known multi-objective evolutionary algorithms, namely the non-dominated sorting genetic algorithm and multi-objective particle swarm optimization, are used. A simple solution representation is provided in this paper to encode the solutions. Finally, the computational results are compared and analyzed.

  1. Failure of self-consistency in the discrete resource model of visual working memory.

    PubMed

    Bays, Paul M

    2018-06-03

    The discrete resource model of working memory proposes that each individual has a fixed upper limit on the number of items they can store at one time, due to division of memory into a few independent "slots". According to this model, responses on short-term memory tasks consist of a mixture of noisy recall (when the tested item is in memory) and random guessing (when the item is not in memory). This provides two opportunities to estimate capacity for each observer: first, based on their frequency of random guesses, and second, based on the set size at which the variability of stored items reaches a plateau. The discrete resource model makes the simple prediction that these two estimates will coincide. Data from eight published visual working memory experiments provide strong evidence against such a correspondence. These results present a challenge for discrete models of working memory that impose a fixed capacity limit. Copyright © 2018 The Author. Published by Elsevier Inc. All rights reserved.
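
    In the discrete resource (slots) model, responses to a probed circular feature (e.g. colour or orientation) with report error \theta follow the mixture referred to above; a standard way of writing it is:

      p(\theta) = \alpha\, \varphi_\kappa(\theta) + (1-\alpha)\,\frac{1}{2\pi}, \qquad
      \alpha = \min\!\left(1, \frac{K}{N}\right)

    where \varphi_\kappa is a von Mises density describing noisy recall, N the set size and K the capacity. The guessing rate 1 - \alpha yields one estimate of K, and the set size at which the recall variability 1/\kappa stops growing yields the second; the failure of self-consistency reported here is that the two estimates disagree.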

  2. Structure and mechanism of diet specialisation: testing models of individual variation in resource use with sea otters

    USGS Publications Warehouse

    Tinker, M. Tim; Guimarães, Paulo R.; Novak, Mark; Marquitti, Flavia Maria Darcie; Bodkin, James L.; Staedler, Michelle; Bentall, Gena B.; Estes, James A.

    2012-01-01

    Studies of consumer-resource interactions suggest that individual diet specialisation is empirically widespread and theoretically important to the organisation and dynamics of populations and communities. We used weighted networks to analyse the resource use by sea otters, testing three alternative models for how individual diet specialisation may arise. As expected, individual specialisation was absent when otter density was low, but increased at high otter density. A high-density emergence of nested resource-use networks was consistent with the model assuming individuals share preference ranks. However, a density-dependent emergence of a non-nested modular network for ‘core’ resources was more consistent with the ‘competitive refuge’ model. Individuals from different diet modules showed predictable variation in rank-order prey preferences and handling times of core resources, further supporting the competitive refuge model. Our findings support a hierarchical organisation of diet specialisation and suggest individual use of core and marginal resources may be driven by different selective pressures.
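
    For readers wanting a quantitative handle on individual specialisation, a common index is the proportional similarity between an individual's diet and the population diet, PS_i = 1 - 0.5 * sum_j |p_ij - q_j| (Bolnick et al. 2002). The sketch below applies it to a toy diet matrix; it is a generic measure, not the weighted-network analysis of the paper.

    ```python
    import numpy as np

    def proportional_similarity(diet_counts):
        """PS_i = 1 - 0.5 * sum_j |p_ij - q_j|, where p_ij is individual i's
        proportional use of resource j and q_j the population's. PS_i = 1 means
        the individual mirrors the population diet; lower values indicate
        specialisation."""
        p = diet_counts / diet_counts.sum(axis=1, keepdims=True)
        q = diet_counts.sum(axis=0) / diet_counts.sum()
        return 1.0 - 0.5 * np.abs(p - q).sum(axis=1)

    # Toy weighted data: rows = individuals, columns = prey classes.
    diets = np.array([[30,  2,  1],    # urchin specialist
                      [ 3, 25,  4],    # snail specialist
                      [10, 11, 12]])   # generalist
    print(proportional_similarity(diets).round(2))
    ```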

  3. Radioactive models of type 1 supernovae

    NASA Astrophysics Data System (ADS)

    Schurmann, S. R.

    1983-04-01

    In recent years, considerable progress has been made toward understanding Type I supernovae within the context of radioactive energy input. Much effort has gone into determining the peak magnitude of the supernovae, particularly in the B-band, and its relation to the Hubble constant. If the distances inferred for Type I events are at all accurate, and/or the Hubble constant has a value near 50 km per s per Mpc, it is clear that models must reach a peak magnitude of approximately -20 in order to be consistent. The present investigation is concerned with models which achieve peak magnitudes near this value and contain 0.8 solar mass of Ni-56. The B-band light curve declines much more rapidly after peak than the bolometric light curve. The mass and velocity of Ni-56 (at least for the A models) are within the region defined by Axelrod (1980) for configurations which produce acceptable spectra at late times. The models are consistent with the absence of a neutron star after the explosion. There remain, however, many difficult problems.
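
    The radioactive input behind such light curves has a standard closed form from the Ni-56 -> Co-56 -> Fe-56 chain. Below is a minimal sketch evaluating the two-exponential deposition rate for 0.8 solar mass of Ni-56, assuming full gamma-ray trapping; the mean lives and specific heating rates are commonly quoted approximate values, not figures from this record.

    ```python
    import numpy as np

    MSUN_G = 1.989e33
    DAY_S = 86400.0
    TAU_NI = 8.8 * DAY_S     # Ni-56 mean life (half-life 6.1 d)
    TAU_CO = 111.3 * DAY_S   # Co-56 mean life (half-life 77.3 d)
    EPS_NI = 3.9e10          # erg / g / s, Ni-56 heating rate at t = 0
    EPS_CO = 6.78e9          # erg / g / s, Co-56 normalisation

    def radioactive_luminosity(t_days, m_ni_msun=0.8):
        """Instantaneous radioactive energy deposition (erg/s) from the
        Ni-56 -> Co-56 -> Fe-56 chain, assuming full gamma-ray trapping."""
        t = np.asarray(t_days, dtype=float) * DAY_S
        m = m_ni_msun * MSUN_G
        return m * (EPS_NI * np.exp(-t / TAU_NI)
                    + EPS_CO * (np.exp(-t / TAU_CO) - np.exp(-t / TAU_NI)))

    for d in (0, 17, 60):    # explosion, near peak, on the tail
        print(f"t = {d:3d} d  L = {radioactive_luminosity(d):.2e} erg/s")
    ```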

  4. Radioactive models of type 1 supernovae

    NASA Technical Reports Server (NTRS)

    Schurmann, S. R.

    1983-01-01

    In recent years, considerable progress has been made toward understanding Type I supernovae within the context of radioactive energy input. Much effort has gone into determining the peak magnitude of the supernovae, particularly in the B-band, and its relation to the Hubble constant. If the distances inferred for Type I events are at all accurate, and/or the Hubble constant has a value near 50 km per s per Mpc, it is clear that models must reach a peak magnitude of approximately -20 in order to be consistent. The present investigation is concerned with models which achieve peak magnitudes near this value and contain 0.8 solar mass of Ni-56. The B-band light curve declines much more rapidly after peak than the bolometric light curve. The mass and velocity of Ni-56 (at least for the A models) are within the region defined by Axelrod (1980) for configurations which produce acceptable spectra at late times. The models are consistent with the absence of a neutron star after the explosion. There remain, however, many difficult problems.

  5. Application of Poisson random effect models for highway network screening.

    PubMed

    Jiang, Ximiao; Abdel-Aty, Mohamed; Alamili, Samer

    2014-02-01

    In recent years, Bayesian random effect models that account for the temporal and spatial correlations of crash data have become popular in traffic safety research. This study employs random effect Poisson log-normal models for crash risk hotspot identification. Both the temporal and spatial correlations of crash data were considered. The Potential for Safety Improvement (PSI) was adopted as a measure of crash risk. Using the fatal and injury crashes that occurred on urban 4-lane divided arterials from 2006 to 2009 in the Central Florida area, the random effect approaches were compared to the traditional empirical Bayes (EB) method and the conventional Bayesian Poisson log-normal model. A series of method examination tests were conducted to evaluate the performance of the different approaches. These tests include the previously developed site consistency test, method consistency test, total rank difference test, and modified total score test, as well as the newly proposed total safety performance measure difference test. Results show that the Bayesian Poisson model accounting for both temporal and spatial random effects (PTSRE) outperforms the model with only a temporal random effect, and both are superior to the conventional Poisson log-normal model (PLN) and the EB model in fitting the crash data. Additionally, the method evaluation tests indicate that the PTSRE model is significantly superior to the PLN model and the EB model in consistently identifying hotspots during successive time periods. The results suggest that the PTSRE model is a superior alternative for road site crash risk hotspot identification. Copyright © 2013 Elsevier Ltd. All rights reserved.
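
    The structure of such random-effect models is easiest to see as a generative sketch: expected counts combine a covariate effect with site-level (spatial) and year-level (temporal) random effects on the log scale. The covariate, parameter values, random-walk year effects, and the naive PSI-style score below are illustrative stand-ins, not the paper's exact specification or its Bayesian inference.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n_sites, n_years = 50, 4

    log_aadt = rng.normal(9.0, 0.4, n_sites)              # log traffic exposure
    spatial = rng.normal(0.0, 0.5, n_sites)               # site-level random effect
    temporal = np.cumsum(rng.normal(0.0, 0.2, n_years))   # random-walk year effects

    beta0, beta1 = -7.0, 0.8
    log_mu = (beta0 + beta1 * log_aadt[:, None]
              + spatial[:, None] + temporal[None, :])
    crashes = rng.poisson(np.exp(log_mu))                 # observed annual counts

    # A naive PSI-style screening score: observed crashes in excess of the
    # covariate-only expectation, summed over years (the real analysis would
    # estimate this within the Bayesian model; this is only the skeleton).
    expected = np.exp(beta0 + beta1 * log_aadt)[:, None]
    psi = (crashes - expected).sum(axis=1)
    print("top-5 hotspot sites:", np.argsort(psi)[::-1][:5])
    ```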

  6. Integrating EarthScope Data to Constrain the Long-Term Effects of Tectonism on Continental Lithosphere

    NASA Astrophysics Data System (ADS)

    Porter, R. C.; van der Lee, S.

    2017-12-01

    One of the most significant products of the EarthScope experiment has been the development of new seismic tomography models that take advantage of the consistent station design, regular 70-km station spacing, and wide aperture of the EarthScope Transportable Array (TA) network. These models have led to the discovery and interpretation of additional compositional, thermal, and density anomalies throughout the continental US, especially within tectonically stable regions. The goal of this work is to use data from the EarthScope experiment to better elucidate the temporal relationship between tectonic activity and seismic velocities. To accomplish this, we compile several upper-mantle seismic velocity models from the Incorporated Research Institutions for Seismology (IRIS) Earth Model Collaboration (EMC) and compare these to a tectonic age model we compiled using geochemical ages from the Interdisciplinary Earth Data Alliance: EarthChem Database. Results from this work confirm quantitatively that the time elapsed since the most recent tectonic event is a dominant influence on seismic velocities within the upper mantle across North America. To further understand this relationship, we apply mineral-physics models for peridotite to estimate upper-mantle temperatures for the continental US from tomographically imaged shear velocities. This work shows that the relationship between the estimated temperatures and the time elapsed since the most recent tectonic event is broadly consistent with plate cooling models, yet shows intriguing scatter. Ultimately, this work constrains the long-term thermal evolution of continental mantle lithosphere.
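
    The cooling comparison invoked here has a compact closed form in its half-space limit, T(z, t) = T_s + (T_m - T_s) erf(z / (2 sqrt(kappa t))). A minimal sketch with standard order-of-magnitude parameter choices (surface and mantle temperatures, thermal diffusivity), not values from this study:

    ```python
    import numpy as np
    from scipy.special import erf

    def halfspace_geotherm(z_km, age_myr, t_surf=0.0, t_mantle=1350.0,
                           kappa=1e-6):
        """Half-space cooling temperature (deg C) at depth z after a given time:
        T(z, t) = T_s + (T_m - T_s) * erf(z / (2 * sqrt(kappa * t)))."""
        z = np.asarray(z_km) * 1e3            # depth in m
        t = age_myr * 1e6 * 3.156e7           # age in s
        return t_surf + (t_mantle - t_surf) * erf(z / (2.0 * np.sqrt(kappa * t)))

    # Temperature at 80 km depth for lithosphere last reworked 25, 250, 1000 Myr ago:
    for age in (25, 250, 1000):
        print(f"{age:5d} Myr: T(80 km) = {halfspace_geotherm(80, age):6.0f} C")
    ```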

  7. Optical EVPA rotations in blazars: testing a stochastic variability model with RoboPol data

    NASA Astrophysics Data System (ADS)

    Kiehlmann, S.; Blinov, D.; Pearson, T. J.; Liodakis, I.

    2017-12-01

    We identify rotations of the polarization angle in a sample of blazars observed for three seasons with the RoboPol instrument. A simplistic stochastic variability model is tested against this sample of rotation events. The model is capable of producing samples of rotations with parameters similar to the observed ones, but fails to reproduce the polarization fraction at the same time. Even though we can neither accept nor conclusively reject the model, we point out various aspects of the observations that are fully consistent with a random walk process.
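
    The random-walk process referred to here can be sketched directly: sum the Stokes parameters of N emission cells whose polarization angles step randomly between epochs, then read off the EVPA and polarization fraction. Cell number, step size, and per-cell polarization below are illustrative, not the RoboPol model's fitted values.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def evpa_random_walk(n_cells=10, n_epochs=300, step_deg=10.0, p_cell=0.7):
        """Each cell carries equal flux with intrinsic fractional polarization
        p_cell; its angle performs an independent random walk. The observed
        EVPA and polarization fraction follow from the summed Stokes Q, U."""
        chi = rng.uniform(0.0, np.pi, n_cells)            # initial cell angles
        evpa, pfrac = [], []
        for _ in range(n_epochs):
            chi += np.deg2rad(rng.normal(0.0, step_deg, n_cells))
            q = np.mean(p_cell * np.cos(2 * chi))
            u = np.mean(p_cell * np.sin(2 * chi))
            evpa.append(0.5 * np.arctan2(u, q))
            pfrac.append(np.hypot(q, u))
        return np.unwrap(np.array(evpa), period=np.pi), np.array(pfrac)

    evpa, pfrac = evpa_random_walk()
    print(f"largest apparent rotation: {np.ptp(np.rad2deg(evpa)):.0f} deg, "
          f"median p: {np.median(pfrac):.2f}")
    ```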

  8. Dynamics of morphological evolution in experimental Escherichia coli populations.

    PubMed

    Cui, F; Yuan, B

    2016-08-30

    Here, we applied a two-stage clonal expansion model of morphological (cell-size) evolution to a long-term evolution experiment with Escherichia coli. Using this model, we derived the incidence function of the appearance of cell-size stability, the waiting time until this morphological stability, and the conditional and unconditional probabilities of morphological stability. After assessing the parameter values, we verified that the calculated waiting time was consistent with the experimental results, demonstrating the effectiveness of the two-stage model. According to the relative contributions of parameters to the incidence function and the waiting time, cell-size evolution is largely determined by the promotion rate, i.e., the clonal expansion rate of selectively advantageous organisms. This rate plays a prominent role in the evolution of cell size in experimental populations, whereas all other evolutionary forces were found to be less influential.
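
    A minimal Monte Carlo sketch of the two-stage structure: initiation events arrive as a Poisson process, each founding a clone that survives drift with some probability and then expands at the promotion rate, with "stability" reached when the first surviving clone attains a fixation size. All rates, the survival probability, and the threshold are illustrative, not the paper's estimates.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def waiting_time(mu=0.02, p_survive=0.1, promotion_rate=0.05, n_fix=1e4,
                     t_max=1e6):
        """Initiation events arrive as a Poisson process (rate mu per generation);
        each founds a clone that escapes early drift with probability p_survive
        and then expands as exp(promotion_rate * age). Returns the time at which
        the first surviving clone reaches the 'stable morphology' size n_fix."""
        grow_time = np.log(n_fix) / promotion_rate   # age needed to reach n_fix
        t = 0.0
        while t < t_max:
            t += rng.exponential(1.0 / mu)           # next initiating mutation
            if rng.random() < p_survive:             # clone survives drift
                return t + grow_time
        return np.inf

    times = [waiting_time() for _ in range(2000)]
    print(f"mean waiting time ~ {np.mean(times):,.0f} generations "
          f"(growth phase alone: {np.log(1e4) / 0.05:,.0f})")
    ```

    Note how the promotion rate enters the dominant term: halving it doubles the growth phase, consistent with the paper's finding that this rate largely controls the waiting time.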

  9. A Self-Consistent Model of the Interacting Ring Current Ions and Electromagnetic ICWs. Initial Results: Waves and Precipitation Fluxes

    NASA Technical Reports Server (NTRS)

    Khazanov, G. V.; Gamayunov, K. V.; Jordanova, V. K.; Krivorutsky, E. N.; Whitaker, Ann F. (Technical Monitor)

    2001-01-01

    Initial results from a newly developed model of the interacting ring current ions and ion cyclotron waves are presented. The model is described by a system of two coupled kinetic equations: one equation describes the ring current ion dynamics, and the other governs the wave evolution. This system gives a self-consistent description of the ring current ions and ion cyclotron waves in a quasilinear approach. Calculating the ion-wave relationships on a global scale under non-steady-state conditions during the May 2-5, 1998 storm, we present data at three time cuts around the initial, main, and late recovery phases of the May 4, 1998 storm. The structure and dynamics of the ring current proton precipitating-flux regions and the wave-active regions are discussed in detail.
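
    A toy sketch may help convey the self-consistent coupling: wave energy grows at a rate set by the proton anisotropy, and the waves in turn scatter protons, eroding the anisotropy toward marginal stability. The linear growth-rate closure and all coefficients below are illustrative stand-ins for the full quasilinear kinetics, not the model's equations.

    ```python
    import numpy as np

    def coupled_step(A, W, dt, gamma0=1.0, A_crit=0.5, scatter=0.3, drive=0.01):
        """One explicit Euler step of a toy wave-particle system:
           dW/dt = 2 * gamma(A) * W          (wave growth from anisotropy A)
           dA/dt = drive - scatter * W * A   (injection builds A, waves erode it)
        with gamma(A) = gamma0 * (A - A_crit): growth only above marginal stability."""
        dA = drive - scatter * W * A
        dW = 2.0 * gamma0 * (A - A_crit) * W
        return A + dt * dA, W + dt * dW

    A, W = 0.8, 1e-6          # storm-time anisotropy and a tiny seed wave energy
    peak_W = 0.0
    for _ in range(20_000):   # integrate to t = 200 with dt = 0.01
        A, W = coupled_step(A, W, dt=0.01)
        peak_W = max(peak_W, W)
    print(f"wave burst peak W = {peak_W:.2f}; final A = {A:.2f} (A_crit = 0.5)")
    ```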

  10. General existence principles for Stieltjes differential equations with applications to mathematical biology

    NASA Astrophysics Data System (ADS)

    López Pouso, Rodrigo; Márquez Albés, Ignacio

    2018-04-01

    Stieltjes differential equations, which contain equations with impulses and equations on time scales as particular cases, simply consist in replacing usual derivatives by derivatives with respect to a nondecreasing function. In this paper we prove new existence results for functional and discontinuous Stieltjes differential equations and we show that such general results have real-world applications. Specifically, we show that Stieltjes differential equations are especially suitable for studying populations which exhibit dormant states and/or very short (impulsive) periods of reproduction. In particular, we construct two mathematical models for the evolution of a silkworm population. Our first model can be explicitly solved, as it consists of a linear Stieltjes equation. Our second, more realistic model is nonlinear, discontinuous and functional, and we deduce the existence of solutions by means of a result proven in this paper.
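
    The idea admits a very direct discretization: replace dt by increments of the nondecreasing derivator g, so an explicit Euler step reads x_{n+1} = x_n + f(t_n, x_n) (g(t_{n+1}) - g(t_n)). Flat stretches of g then freeze the dynamics (dormancy) and jumps of g produce impulses (a burst of reproduction). The derivator and growth model below are illustrative, not the paper's silkworm model.

    ```python
    import numpy as np

    def g(t):
        """Nondecreasing derivator: flat on [1, 2] (dormant season), with a
        unit jump at t = 3 (impulsive reproduction)."""
        return np.clip(t, None, 1.0) + np.clip(t - 2.0, 0.0, None) + (t >= 3.0)

    def stieltjes_euler(f, x0, ts):
        """Explicit Euler for x'_g = f(t, x): steps weighted by increments of g."""
        xs = [x0]
        for t0, t1 in zip(ts[:-1], ts[1:]):
            xs.append(xs[-1] + f(t0, xs[-1]) * (g(t1) - g(t0)))
        return np.array(xs)

    ts = np.linspace(0.0, 4.0, 4001)
    pop = stieltjes_euler(lambda t, x: 0.9 * x, 100.0, ts)   # linear growth model
    print(f"x(1.5) = {pop[1500]:.1f}  (frozen during dormancy)")
    print(f"jump at t = 3: {pop[3001] / pop[2999]:.2f}-fold step increase")
    ```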

  11. Using Time Series Analysis to Predict Cardiac Arrest in a PICU.

    PubMed

    Kennedy, Curtis E; Aoki, Noriaki; Mariscalco, Michele; Turley, James P

    2015-11-01

    Objective: To build and test cardiac arrest prediction models in a PICU, using time series analysis as input, and to measure changes in prediction accuracy attributable to different classes of time series data. Design: Retrospective cohort study. Setting: Thirty-one-bed academic PICU that provides care for medical and general surgical (not congenital heart surgery) patients. Patients: Patients experiencing a cardiac arrest in the PICU and requiring external cardiac massage for at least 2 minutes. Interventions: None. Measurements and Main Results: One hundred three cases of cardiac arrest and 109 control cases were used to prepare a baseline dataset that consisted of 1,025 variables in four data classes: multivariate, raw time series, clinical calculations, and time series trend analysis. We trained 20 arrest prediction models using a matrix of five feature sets (combinations of data classes) with four modeling algorithms: linear regression, decision tree, neural network, and support vector machine. The reference model (multivariate data with regression algorithm) had an accuracy of 78% and an area under the receiver operating characteristic curve of 87%. The best model (multivariate + trend analysis data with support vector machine algorithm) had an accuracy of 94% and an area under the receiver operating characteristic curve of 98%. Conclusions: Cardiac arrest predictions based on a traditional model built with multivariate data and a regression algorithm misclassified cases 3.7 times more frequently than predictions that included time series trend analysis and were built with a support vector machine algorithm. Although the final model lacks the specificity necessary for clinical application, we have demonstrated how information from time series data can be used to increase the accuracy of clinical prediction models.
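
    A minimal sketch of the winning combination (snapshot features plus a time-series trend feature, classified with a support vector machine), on synthetic data: the fitted-slope trend feature is a generic stand-in for the study's trend analysis, and the vital-sign generator is invented for illustration.

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(3)

    def make_case(arrest):
        """Synthetic 60-sample vital-sign trace; arrest cases drift pre-event."""
        hr = 120 + np.cumsum(rng.normal(0.4 if arrest else 0.0, 1.0, 60))
        snapshot = hr[-1]                              # multivariate-style value
        slope = np.polyfit(np.arange(60), hr, 1)[0]    # trend-analysis feature
        return [snapshot, slope]

    X = np.array([make_case(a) for a in ([True] * 100 + [False] * 100)])
    y = np.array([1] * 100 + [0] * 100)

    model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean().round(3))
    ```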

  12. Interval Timing Accuracy and Scalar Timing in C57BL/6 Mice

    PubMed Central

    Buhusi, Catalin V.; Aziz, Dyana; Winslow, David; Carter, Rickey E.; Swearingen, Joshua E.; Buhusi, Mona C.

    2010-01-01

    In many species, interval timing behavior is accurate (estimated durations are appropriate) and scalar (errors vary linearly with estimated durations). While accuracy has been examined previously, scalar timing has not yet been clearly demonstrated in house mice (Mus musculus), raising concerns about mouse models of human disease. We estimated timing accuracy and precision in C57BL/6 mice, the most widely used background strain for genetic models of human disease, in a peak-interval procedure with multiple intervals. Whether timing two intervals (Experiment 1) or three intervals (Experiment 2), C57BL/6 mice demonstrated varying degrees of timing accuracy. Importantly, both at the individual and group level, their precision varied linearly with the subjective estimated duration. Further evidence for scalar timing was obtained using an intraclass correlation statistic. This is the first report of consistent, reliable scalar timing in a sizable sample of house mice, thus validating the PI procedure as a valuable technique, the intraclass correlation statistic as a powerful test of the scalar property, and the C57BL/6 strain as a suitable background for behavioral investigations of genetically engineered mice modeling disorders of interval timing. PMID:19824777
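
    The scalar property is a proportionality claim: the spread of peak times grows linearly with the timed interval, so the coefficient of variation stays flat. A minimal sketch with simulated peak times standing in for the mouse data and an assumed Weber fraction:

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    def simulate_peaks(interval_s, n_trials=200, weber=0.15):
        """Scalar-timing peak times: Gaussian around the target interval with
        a standard deviation proportional to it (constant Weber fraction)."""
        return rng.normal(interval_s, weber * interval_s, n_trials)

    for target in (10.0, 30.0, 90.0):
        peaks = simulate_peaks(target)
        cv = peaks.std() / peaks.mean()
        print(f"target {target:4.0f} s: mean {peaks.mean():5.1f}, CV {cv:.3f}")
    # Scalar timing predicts a flat CV across targets; a violation would show
    # the CV rising or falling systematically with the interval.
    ```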

  13. Long residence times of rapidly decomposable soil organic matter: application of a multi-phase, multi-component, and vertically-resolved model (TOUGHREACTv1) to soil carbon dynamics

    NASA Astrophysics Data System (ADS)

    Riley, W. J.; Maggi, F. M.; Kleber, M.; Torn, M. S.; Tang, J. Y.; Dwivedi, D.; Guerry, N.

    2014-01-01

    Accurate representation of soil organic matter (SOM) dynamics in Earth System Models is critical for future climate prediction, yet large uncertainties exist regarding how, and to what extent, the suite of proposed relevant mechanisms should be included. To investigate how various mechanisms interact to influence SOM storage and dynamics, we developed a SOM reaction network integrated in a one-dimensional, multi-phase, and multi-component reactive transport solver. The model includes representations of bacterial and fungal activity, multiple archetypal polymeric and monomeric carbon substrate groups, aqueous chemistry, aqueous advection and diffusion, gaseous diffusion, and adsorption (and protection) and desorption from the soil mineral phase. The model predictions reasonably matched observed depth-resolved SOM and dissolved organic carbon (DOC) stocks in grassland ecosystems as well as lignin content and fungi to aerobic bacteria ratios. We performed a suite of sensitivity analyses under equilibrium and dynamic conditions to examine the role of dynamic sorption, microbial assimilation rates, and carbon inputs. To our knowledge, observations do not exist to fully test such a complicated model structure or to test the hypotheses used to explain observations of substantial storage of very old SOM below the rooting depth. Nevertheless, we demonstrated that a reasonable combination of sorption parameters, microbial biomass and necromass dynamics, and advective transport can match observations without resorting to an arbitrary depth-dependent decline in SOM turnover rates, as is often done. We conclude that, contrary to assertions derived from existing turnover time based model formulations, observed carbon content and δ14C vertical profiles are consistent with a representation of SOM dynamics consisting of (1) carbon compounds without designated intrinsic turnover times, (2) vertical aqueous transport, and (3) dynamic protection on mineral surfaces.
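
    A one-dimensional toy version of the transport-plus-protection argument: dissolved carbon is advected downward and decays at a depth-independent rate, while exchange with a sorbed, protected pool lets carbon accumulate at depth without any imposed depth-dependent turnover. The grid, rates, and boundary handling are illustrative and far simpler than the TOUGHREACT configuration.

    ```python
    import numpy as np

    nz, dz, dt = 100, 0.02, 0.01       # 2 m column (m); time step (yr)
    v, D = 0.05, 1e-3                  # advection (m/yr), diffusion (m2/yr)
    k_dec = 0.5                        # decomposition of dissolved carbon (1/yr)
    k_ads, k_des = 5.0, 0.05           # adsorption / desorption rates (1/yr)

    doc = np.zeros(nz)                 # dissolved organic carbon
    sorbed = np.zeros(nz)              # mineral-associated (protected) pool

    for _ in range(100_000):           # 1000 years
        upstream = np.concatenate(([1.0], doc[:-1]))   # unit DOC input at surface
        adv = v * (upstream - doc) / dz                # upwind advection downward
        diff = D * (np.roll(doc, 1) - 2 * doc + np.roll(doc, -1)) / dz**2
        diff[[0, -1]] = 0.0                            # crude closed boundaries
        exch = k_ads * doc - k_des * sorbed            # sorbed pool: no decay
        doc += dt * (adv + diff - k_dec * doc - exch)
        sorbed += dt * exch
    print(f"protected/dissolved carbon at 1 m depth: {sorbed[50] / doc[50]:.0f}x")
    ```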

  14. Personalized long-term prediction of cognitive function: Using sequential assessments to improve model performance.

    PubMed

    Chi, Chih-Lin; Zeng, Wenjun; Oh, Wonsuk; Borson, Soo; Lenskaia, Tatiana; Shen, Xinpeng; Tonellato, Peter J

    2017-12-01

    Prediction of the onset and progression of cognitive decline and dementia is important both for understanding the underlying disease processes and for planning health care for populations at risk. Predictors identified in research studies are typically assessed at one point in time. In this manuscript, we argue that an accurate model for predicting cognitive status over relatively long periods requires inclusion of time-varying components that are sequentially assessed at multiple time points (e.g., in multiple follow-up visits). We developed a pilot model to test the feasibility of using either estimated or observed risk factors to predict cognitive status. We developed two models. The first uses a sequential estimation of risk factors originally obtained 8 years prior, subsequently improved by optimization; it can predict how cognition will change over relatively long time periods. The second model uses observed rather than estimated time-varying risk factors and, as expected, results in better prediction; it can be applied whenever newly observed data are acquired at a follow-up visit. The performance of both models, evaluated in 10-fold cross-validation and in various patient subgroups, provides supporting evidence for these pilot models. Each model consists of multiple base prediction units (BPUs), which were trained using the same set of data. The difference in usage and function between the two models is the source of input data: either estimated or observed data. In the next step of model refinement, we plan to integrate the two types of data together to flexibly predict dementia status and changes over time, when some time-varying predictors are measured only once and others are measured repeatedly. Computationally, the two data sources provide upper and lower bounds on predictive performance. Copyright © 2017 Elsevier Inc. All rights reserved.
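
    A toy sketch of the two designs: a baseline risk factor is either carried forward by an estimated update (the long-horizon model) or replaced by its observed follow-up value (the new-visit model) before predicting cognition. The linear forms, effect sizes, and synthetic data are invented for illustration, not the study's BPUs.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(5)
    n = 500

    risk0 = rng.normal(0, 1, n)                       # baseline risk factor
    risk8 = 0.7 * risk0 + rng.normal(0, 0.5, n)       # same factor 8 years later
    cognition = -1.2 * risk8 + rng.normal(0, 0.5, n)  # outcome tracks current risk

    # Model 1: estimate the time-varying risk factor, then predict (long horizon).
    est = LinearRegression().fit(risk0[:, None], risk8)      # risk-update unit
    risk8_hat = est.predict(risk0[:, None])
    m1 = LinearRegression().fit(risk8_hat[:, None], cognition)

    # Model 2: use the observed follow-up value (new visit data available).
    m2 = LinearRegression().fit(risk8[:, None], cognition)

    print("R^2, estimated risk factor:", round(m1.score(risk8_hat[:, None], cognition), 2))
    print("R^2, observed  risk factor:", round(m2.score(risk8[:, None], cognition), 2))
    ```

    As in the paper, the observed-input model bounds the estimated-input model from above, and the gap quantifies the value of acquiring the follow-up measurement.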

  15. A Novel Ex Vivo Training Model for Acquiring Supermicrosurgical Skills Using a Chicken Leg.

    PubMed

    Cifuentes, Ignacio J; Rodriguez, José R; Yañez, Ricardo A; Salisbury, María C; Cuadra, Álvaro J; Varas, Julian E; Dagnino, Bruno L

    2016-11-01

    Background: Supermicrosurgery is a technique used for the dissection and anastomosis of submillimeter-diameter vessels. This technique requires precise hand movements and superb eye-hand coordination, making continuous training necessary. Biological in vivo and ex vivo models have been described for this purpose, the latter being more accessible and cost-effective. The aim of this study is to present a new ex vivo training model using a chicken leg. Methods: An anatomical study was performed in 28 chicken legs. An intramuscular perforator vessel was identified and dissected. Arterial diameters of 0.7, 0.5, and 0.3 mm were identified, and the consistency of the perforator was assessed. In an additional 10 chicken legs, 25 submillimeter arteries were anastomosed using this perforator vessel. Five arteries of 0.3 mm and ten of 0.5 mm were anastomosed with nylon 11-0 and 12-0 sutures. The intravascular stent (IVaS) and open guide (OG) techniques were used in the 0.5-mm arteries. A total of 10 arteries of 0.7 mm were anastomosed using 10-0 sutures in a conventional fashion. Dissection and anastomosis times were recorded and patency was tested. Results: We were able to identify 0.7 to 0.3 mm diameter arteries in all the specimens and confirm the consistency of the perforator. The median time for dissection was 13.4 minutes. The median time for anastomosis was 32.3 minutes for 0.3-mm arteries, 24.3 minutes for 0.5-mm arteries using IVaS, 29.5 minutes for the OG technique, and 20.9 minutes for the 0.7-mm arteries. All the anastomoses were patent. Conclusion: Owing to its consistent and adequately sized vessels, this model is suitable for training supermicrosurgical skills.

  16. Modeling and Comparison of Options for the Disposal of Excess Weapons Plutonium in Russia

    DTIC Science & Technology

    2002-04-01

    [Extraction residue: fragments of a model-parameter table for LWR and HTGR disposal cycles survive, listing LWR spent-fuel cooling time, LWR Pu load rate, LWR net destruction fraction, LWR reactor operating life, MOX core fraction, excess separated Pu, the HTGR cycle, and Pu in waste.] For the HTGR, the entire core consists of plutonium fuel; therefore, a core fraction is not specified. HTGR cooling time, the time spent fuel unloaded from the HTGR reactor must cool before being permanently stored, is 3 years. [The definition of the MOX core fraction ("Fraction of...") is truncated in the source.]

  17. Time-resolved explosion of intense-laser-heated clusters.

    PubMed

    Kim, K Y; Alexeev, I; Parra, E; Milchberg, H M

    2003-01-17

    We investigate the femtosecond explosive dynamics of intense laser-heated argon clusters by measuring the cluster complex transient polarizability. The time evolution of the polarizability is characteristic of competition in the optical response between supercritical and subcritical density regions of the expanding cluster. The results are consistent with time-resolved Rayleigh scattering measurements, and bear out the predictions of a recent laser-cluster interaction model [H. M. Milchberg, S. J. McNaught, and E. Parra, Phys. Rev. E 64, 056402 (2001)].

  18. Anchor Modeling

    NASA Astrophysics Data System (ADS)

    Regardt, Olle; Rönnbäck, Lars; Bergholtz, Maria; Johannesson, Paul; Wohed, Petia

    Maintaining and evolving data warehouses is a complex, error-prone, and time-consuming activity. The main reason for this state of affairs is that the environment of a data warehouse is in constant change, while the warehouse itself needs to provide a stable and consistent interface to information spanning extended periods of time. In this paper, we propose a modeling technique for data warehousing, called anchor modeling, that offers non-destructive extensibility mechanisms, thereby enabling robust and flexible management of changes in source systems. A key benefit of anchor modeling is that changes in a data warehouse environment only require extensions, not modifications, to the data warehouse. This ensures that existing data warehouse applications will remain unaffected by the evolution of the data warehouse, i.e., existing views and functions will not have to be modified as a result of changes in the warehouse model.
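
    A minimal sketch of the extension-only idea, with Python dictionaries standing in for tables: anchors hold identities, each attribute lives in its own table keyed by anchor id, and schema evolution means adding a new attribute table rather than altering an existing one. The layout is a simplification of the published technique.

    ```python
    # Anchors hold only identities; every attribute lives in its own table,
    # keyed by the anchor id.
    actor_anchor = {1, 2}
    attributes = {
        "actor_name": {1: "Greta", 2: "Olle"},
    }

    # Later schema evolution: a brand-new attribute table is *added*; nothing
    # existing is altered, so old views and queries keep working unchanged.
    attributes["actor_rating"] = {1: 9}

    def latest_view(anchor_id):
        """Assemble a row by joining all attribute tables on the anchor id."""
        return {name: table[anchor_id]
                for name, table in attributes.items() if anchor_id in table}

    print(latest_view(1))   # {'actor_name': 'Greta', 'actor_rating': 9}
    print(latest_view(2))   # {'actor_name': 'Olle'} -- absence, not a NULL column
    ```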

  19. A new class of finite element variational multiscale turbulence models for incompressible magnetohydrodynamics

    DOE PAGES

    Sondak, D.; Shadid, J. N.; Oberai, A. A.; ...

    2015-04-29

    New large eddy simulation (LES) turbulence models for incompressible magnetohydrodynamics (MHD) derived from the variational multiscale (VMS) formulation for finite element simulations are introduced. The new models include the variational multiscale formulation, a residual-based eddy viscosity model, and a mixed model that combines both of these component models. Each model contains terms that are proportional to the residual of the incompressible MHD equations and is therefore numerically consistent. Moreover, each model is also dynamic, in that its effect vanishes when this residual is small. The new models are tested on the decaying MHD Taylor-Green vortex at low and high Reynolds numbers. The evaluation of the models is based on comparisons with available data from direct numerical simulations (DNS) of the time evolution of energies as well as energy spectra at various discrete times. Thus a numerical study, on a sequence of meshes, is presented that demonstrates that the large eddy simulation approaches the DNS solution for these quantities with spatial mesh refinement.

  20. Thermal Pollution Math Model. Volume 1. Thermal Pollution Model Package Verification and Transfer. [environment impact of thermal discharges from power plants

    NASA Technical Reports Server (NTRS)

    Lee, S. S.; Sengupta, S.

    1980-01-01

    Two three-dimensional, time-dependent models, one with a free surface, the other with a rigid lid, were verified at Anclote Anchorage and Lake Keowee, respectively. The first site is a coastal site in northern Florida; the other is a man-made lake in South Carolina. These models describe the dispersion of heated discharges from power plants under the action of ambient conditions. A one-dimensional, horizontally averaged model was also developed and verified at Lake Keowee. The data base consisted of archival in situ measurements and data collected during field missions. The field missions were conducted during winter and summer conditions at each site. Each mission consisted of four infrared scanner flights with supporting ground truth and in situ measurements. At Anclote, special care was taken to characterize the complete tidal cycle. The three-dimensional model results agreed with IR data for thermal plumes to within 1 C root-mean-square difference on average. The one-dimensional model performed satisfactorily in simulating the 1971-1979 period.
