Sample records for shown based model

  1. Multi-scale Modeling and Analysis of Nano-RFID Systems on HPC Setup

    NASA Astrophysics Data System (ADS)

    Pathak, Rohit; Joshi, Satyadhar

    In this paper we address complex modeling aspects such as multi-scale modeling and MATLAB SUGAR-based modeling, and show the complexities involved in the analysis of Nano-RFID (Radio Frequency Identification) systems. We present modeling and simulation, and demonstrate novel ideas and library development for Nano-RFID. Multi-scale modeling plays a very important role in nanotechnology-enabled devices, whose properties sometimes cannot be explained by abstraction-level theories. Reliability and packaging remain among the major hindrances to the practical implementation of Nano-RFID devices, and modeling and simulation will play a central role in addressing them. CNTs are a promising low-power material to replace CMOS, and their integration with CMOS and MEMS circuitry will be important in realizing the true power of Nano-RFID systems. RFID based on innovations in nanotechnology is presented, along with MEMS modeling of the antenna and sensors and their integration into the circuitry. Incorporating these elements, a Nano-RFID can be designed for areas such as human implantation and complex banking applications. We propose modeling RFID using the concept of multi-scale modeling to accurately predict its properties, and we model recently proposed MEMS devices with possible application in RFID. We also cover the applications and advantages of Nano-RFID in various areas. RF MEMS has matured and its devices are being successfully commercialized, but taking it to the limits of the nano domain and integrating it with single-chip RFID requires the novel approach proposed here. We have modeled a MEMS-based transponder and shown the distribution for multi-scale modeling of Nano-RFID.

  2. Comparison between a model-based and a conventional pyramid sensor reconstructor.

    PubMed

    Korkiakoski, Visa; Vérinaud, Christophe; Le Louarn, Miska; Conan, Rodolphe

    2007-08-20

    A model of a non-modulated pyramid wavefront sensor (P-WFS) based on Fourier optics is presented. Linearizations of the model, represented as Jacobian matrices, are used to improve the P-WFS phase estimates. Simulations show that a linear approximation of the P-WFS is sufficient in closed-loop adaptive optics. A method to compute model-based synthetic P-WFS command matrices is also shown, and its performance is compared to the conventional calibration. Under poor seeing conditions, the new calibration outperforms the conventional one.
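
    A model-based command matrix of the kind described can be illustrated generically: linearize the sensor into a Jacobian (interaction) matrix, then invert it in the least-squares sense to reconstruct wavefront modes from sensor signals. A schematic sketch with a synthetic Jacobian standing in for the P-WFS model (sizes and data are illustrative, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic sensor Jacobian: maps n_modes wavefront modes to n_meas
# sensor measurements (a stand-in for the linearized P-WFS model).
n_meas, n_modes = 120, 20
J = rng.standard_normal((n_meas, n_modes))

# Command (reconstruction) matrix: least-squares pseudo-inverse of J.
R = np.linalg.pinv(J)

# A linear sensor measurement of some true modal coefficients...
a_true = rng.standard_normal(n_modes)
s = J @ a_true

# ...is recovered by the model-based reconstructor.
a_est = R @ s
print(np.allclose(a_est, a_true))  # True
```

    In a real system the command matrix is applied inside the control loop; the comparison in the abstract is between building R from the model versus measuring it by poking actuators during calibration.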

  3. Solving probability reasoning based on DNA strand displacement and probability modules.

    PubMed

    Zhang, Qiang; Wang, Xiaobiao; Wang, Xiaojun; Zhou, Changjun

    2017-12-01

    In computational biology, DNA strand displacement technology is used to simulate computation processes and has shown strong computing ability. Most researchers use it to solve logic problems; it is only rarely used in probabilistic reasoning. To perform probabilistic reasoning, a conditional probability derivation model and a total probability model based on DNA strand displacement were established in this paper. The models were assessed through the game "read your mind" and shown to enable the application of probabilistic reasoning in genetic diagnosis. Copyright © 2017 Elsevier Ltd. All rights reserved.
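
    The two probability modules described above implement, in molecular form, standard identities: the total probability model computes P(A) = sum_i P(A|B_i) P(B_i), from which conditional probabilities follow by Bayes' rule. A minimal numerical sketch (the priors and conditionals are hypothetical, not taken from the paper):

```python
def total_probability(priors, conditionals):
    """Law of total probability: P(A) = sum_i P(A|B_i) * P(B_i)."""
    assert abs(sum(priors) - 1.0) < 1e-9, "priors must sum to 1"
    return sum(p * c for p, c in zip(priors, conditionals))

def posterior(priors, conditionals, i):
    """Bayes' rule: P(B_i|A) = P(A|B_i) * P(B_i) / P(A)."""
    return priors[i] * conditionals[i] / total_probability(priors, conditionals)

# Hypothetical genetic-diagnosis numbers: two genotypes with prior
# probabilities 0.3 and 0.7, and disease probabilities 0.9 and 0.1.
p_disease = total_probability([0.3, 0.7], [0.9, 0.1])
print(round(p_disease, 2))  # 0.34
```

    The DNA circuits realize the same arithmetic with strand concentrations standing in for the probabilities.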

  4. Main principles of developing exploitation models of semiconductor devices

    NASA Astrophysics Data System (ADS)

    Gradoboev, A. V.; Simonova, A. V.

    2018-05-01

    The paper presents the primary tasks whose solutions allow exploitation models of semiconductor devices to be developed, taking into account the complex and combined influence of ionizing irradiation and operating factors. The structure of the exploitation model of a semiconductor device, based on radiation and reliability models, is presented. Furthermore, it is shown that the exploitation model should take into account the complex and combined influence of various types of ionizing irradiation and operating factors. An algorithm for developing exploitation models of semiconductor devices is proposed. The possibility of creating radiation models of the Schottky barrier diode, the Schottky field-effect transistor, and the Gunn diode is shown based on the available experimental data. A basic exploitation model of IR LEDs based on double AlGaAs heterostructures is presented. Practical application of the exploitation models will allow electronic products with guaranteed operational properties to be produced.

  5. Operation Brain Trauma Therapy

    DTIC Science & Technology

    2016-12-01

    either clinical trials in TBI if shown to be highly effective across OBTT, or tested in a precision medicine TBI phenotype (such as contusion) based...clinical trial if shown to be potently effective in one of the models in OBTT (i.e., a model that mimicked a specific clinical TBI phenotype). In... effective drug seen thus far in primary screening albeit with benefit highly model dependent, largely restricted to the CCI model. This suggests

  6. Static and dynamic behaviour of nonlocal elastic bar using integral strain-based and peridynamic models

    NASA Astrophysics Data System (ADS)

    Challamel, Noël

    2018-04-01

    The static and dynamic behaviour of a nonlocal bar of finite length is studied in this paper. The nonlocal integral models considered in this paper are strain-based and relative displacement-based nonlocal models; the latter one is also labelled as a peridynamic model. For infinite media, and for sufficiently smooth displacement fields, both integral nonlocal models can be equivalent, assuming some kernel correspondence rules. For infinite media (or finite media with extended reflection rules), it is also shown that Eringen's differential model can be reformulated into a consistent strain-based integral nonlocal model with exponential kernel, or into a relative displacement-based integral nonlocal model with a modified exponential kernel. A finite bar in uniform tension is considered as a paradigmatic static case. The strain-based nonlocal behaviour of this bar in tension is analyzed for different kernels available in the literature. It is shown that the kernel has to fulfil some normalization and end compatibility conditions in order to preserve the uniform strain field associated with this homogeneous stress state. Such a kernel can be built by combining a local and a nonlocal strain measure with compatible boundary conditions, or by extending the domain outside its finite size while preserving some kinematic compatibility conditions. The same results are shown for the nonlocal peridynamic bar where a homogeneous strain field is also analytically obtained in the elastic bar for consistent compatible kinematic boundary conditions at the vicinity of the end conditions. The results are extended to the vibration of a fixed-fixed finite bar where the natural frequencies are calculated for both the strain-based and the peridynamic models.
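
    The normalization and end-compatibility conditions discussed above can be seen numerically with the exponential (Eringen-type) kernel: on a finite bar the raw kernel loses mass near the ends, so a uniform strain field is not preserved there. A schematic sketch (the grid and nonlocal length scale are illustrative choices):

```python
import numpy as np

def nonlocal_strain(strain, x, l):
    """Strain-based integral nonlocal average with the exponential kernel
    A(x, s) = exp(-|x - s|/l) / (2l), evaluated by a simple rectangle-rule
    quadrature on a finite bar (illustrative discretization)."""
    dx = x[1] - x[0]
    eps_nl = np.empty_like(strain)
    for i, xi in enumerate(x):
        kernel = np.exp(-np.abs(xi - x) / l) / (2.0 * l)
        eps_nl[i] = np.sum(kernel * strain) * dx
    return eps_nl

# Uniform strain on a bar of unit length with nonlocal length l = 0.1.
x = np.linspace(0.0, 1.0, 2001)
eps_nl = nonlocal_strain(np.ones_like(x), x, l=0.1)
print(round(float(eps_nl[1000]), 3))  # ~0.993 at midspan: kernel nearly normalized
print(round(float(eps_nl[0]), 3))     # ~0.5 at the end: half the kernel mass is cut off
```

    This is exactly the boundary deficiency the paper repairs, either by mixing in a local strain measure or by extending the domain with kinematically compatible conditions.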

  7. A Rigorous Temperature-Dependent Stochastic Modelling and Testing for MEMS-Based Inertial Sensor Errors.

    PubMed

    El-Diasty, Mohammed; Pagiatakis, Spiros

    2009-01-01

    In this paper, we examine the effect of changing temperature on MEMS-based inertial sensor random error. We collect static data at different temperature points using a MEMS-based inertial sensor mounted inside a thermal chamber. Rigorous stochastic models, namely Autoregressive-based Gauss-Markov (AR-based GM) models, are developed to describe the random error behaviour. The proposed AR-based GM model is initially applied to short stationary inertial data to derive the stochastic model parameters (correlation times). It is shown that the stochastic model parameters of a MEMS-based inertial unit, namely the ADIS16364, are temperature dependent. In addition, field kinematic test data collected at about 17 °C are used to test the performance of the stochastic models developed at different temperature points in the filtering stage, using an Unscented Kalman Filter (UKF). It is shown that the stochastic model developed at +20 °C provides a more accurate inertial navigation solution than those developed at -40 °C, -20 °C, 0 °C, +40 °C, and +60 °C. The temperature dependence of the stochastic model is significant and should be considered at all times to obtain an optimal navigation solution for MEMS-based INS/GPS integration.
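
    In its simplest first-order form, the AR-based GM parameter extraction described above amounts to estimating the AR(1) coefficient of a stationary noise record and converting it to a correlation time. A minimal sketch under that first-order assumption (the authors' models may be of higher order):

```python
import numpy as np

def ar1_correlation_time(x, dt):
    """Fit a first-order autoregressive model x[k+1] = phi * x[k] + w[k] to a
    zero-mean noise record and return the equivalent first-order
    Gauss-Markov correlation time tau = -dt / ln(phi)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    # Least-squares AR(1) coefficient from consecutive samples.
    phi = np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])
    return -dt / np.log(phi)

# Synthetic check: a simulated Gauss-Markov process with tau = 2 s, dt = 0.01 s.
rng = np.random.default_rng(0)
dt, tau = 0.01, 2.0
phi_true = np.exp(-dt / tau)
x = np.empty(200_000)
x[0] = 0.0
for k in range(x.size - 1):
    x[k + 1] = phi_true * x[k] + rng.standard_normal()
print(ar1_correlation_time(x, dt))  # ≈ 2.0: recovers the correlation time
```

    The temperature dependence reported in the paper means this tau would come out different for records collected at different chamber temperatures.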

  8. Development of Modified Incompressible Ideal Gas Model for Natural Draft Cooling Tower Flow Simulation

    NASA Astrophysics Data System (ADS)

    Hyhlík, Tomáš

    2018-06-01

    The article deals with the development of an incompressible-ideal-gas-like model that can be used as part of a mathematical model describing natural draft wet-cooling tower flow, heat and mass transfer. Based on the results of a complex mathematical model of natural draft wet-cooling tower flow, it is shown that the behaviour of pressure, temperature and density is very similar to the hydrostatics of moist air, where heat and mass transfer in the fill zone must be taken into account. The behaviour inside the cooling tower is documented using density, pressure and temperature distributions. The proposed equation for the density is based on the same idea as the incompressible ideal gas model, but depends only on temperature, specific humidity and, in this case, elevation. It is shown that the normalized difference between the density based on the proposed model and the density based on the non-simplified model is of the order of 10^-4. The classical incompressible ideal gas model, the Boussinesq model and a generalised Boussinesq model are also tested; these models show deviations at the percent level.
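
    The structure of such a model can be sketched directly from the ideal gas law: replace pressure with a fixed hydrostatic reference profile, so density depends only on temperature, specific humidity and elevation. The constants and the reference profile below are illustrative, not those of the article:

```python
import math

R_D = 287.05   # gas constant of dry air, J/(kg*K)
G = 9.81       # gravitational acceleration, m/s^2

def moist_air_density(T, q, z, p0=101325.0, T_ref=288.15):
    """Moist-air density from the ideal gas law, with pressure replaced by a
    fixed hydrostatic reference profile p(z) = p0 * exp(-g*z / (R_d*T_ref)).
    The result depends only on temperature T [K], specific humidity q
    [kg/kg] and elevation z [m], mirroring the structure of the
    incompressible-ideal-gas-like model described above."""
    R_m = R_D * (1.0 + 0.608 * q)                 # gas constant of moist air
    p = p0 * math.exp(-G * z / (R_D * T_ref))     # reference pressure at z
    return p / (R_m * T)

print(round(moist_air_density(293.15, 0.010, 0.0), 2))  # 1.2 kg/m^3
# Warmer, moister air higher in the tower is lighter:
print(moist_air_density(303.15, 0.025, 100.0) < moist_air_density(293.15, 0.010, 0.0))  # True
```

    Decoupling density from the dynamic pressure field in this way is what makes the model "incompressible ideal gas like" rather than fully compressible.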

  9. Coarse-graining of proteins based on elastic network models

    NASA Astrophysics Data System (ADS)

    Sinitskiy, Anton V.; Voth, Gregory A.

    2013-08-01

    To simulate molecular processes on biologically relevant length- and timescales, coarse-grained (CG) models of biomolecular systems with tens to even hundreds of residues per CG site are required. One possible way to build such models is explored in this article: an elastic network model (ENM) is employed to define the CG variables. Free energy surfaces are approximated by Taylor series, with the coefficients found by force-matching. CG potentials are shown to undergo renormalization due to roughness of the energy landscape and smoothing of it under coarse-graining. In the case study of hen egg-white lysozyme, the entropy factor is shown to be of critical importance for maintaining the native structure, and a relationship between the proposed ENM-mode-based CG models and traditional CG-bead-based models is discussed. The proposed approach uncovers the renormalizable character of CG models and offers new opportunities for automated and computationally efficient studies of complex free energy surfaces.

  10. An experimentally-informed coarse-grained 3-site-per-nucleotide model of DNA: Structure, thermodynamics, and dynamics of hybridization

    PubMed Central

    Hinckley, Daniel M.; Freeman, Gordon S.; Whitmer, Jonathan K.; de Pablo, Juan J.

    2013-01-01

    A new 3-Site-Per-Nucleotide coarse-grained model for DNA is presented. The model includes anisotropic potentials between bases involved in base stacking and base pair interactions that enable the description of relevant structural properties, including the major and minor grooves. In an improvement over available coarse-grained models, the correct persistence length is recovered for both ssDNA and dsDNA, allowing for simulation of non-canonical structures such as hairpins. DNA melting temperatures, measured for duplexes and hairpins by integrating over free energy surfaces generated using metadynamics simulations, are shown to be in quantitative agreement with experiment for a variety of sequences and conditions. Hybridization rate constants, calculated using forward-flux sampling, are also shown to be in good agreement with experiment. The coarse-grained model presented here is suitable for use in biological and engineering applications, including nucleosome positioning and DNA-templated engineering. PMID:24116642

  11. Pilot modeling and closed-loop analysis of flexible aircraft in the pitch tracking task

    NASA Technical Reports Server (NTRS)

    Schmidt, D. K.

    1983-01-01

    The issue addressed is the appropriate modeling technique for pilot/vehicle analysis of large flexible aircraft when the frequency separation between the rigid-body mode and the dynamic aeroelastic modes is reduced. This situation was shown to have significant effects on pitch-tracking performance and subjective rating of the task, obtained via fixed-base simulation. Further, the dynamics in these cases are not well modeled with a rigid-body-like model obtained by including only 'static elastic' effects, for example. It is shown that pilot/vehicle analysis of these data supports the hypothesis that an appropriate pilot-model structure is an optimal-control pilot model of full order. This is in contrast to the contention that a representative model is of reduced order when the subject is controlling high-order dynamics, as in a flexible vehicle. The key appears to be the correct assessment of the pilot's objective of attempting to control the 'rigid-body' vehicle response, a response that must be estimated by the pilot from observations contaminated by aeroelastic dynamics. Finally, a model-based metric is shown to correlate well with the pilot's subjective ratings.

  12. Soil moisture modeling review

    NASA Technical Reports Server (NTRS)

    Hildreth, W. W.

    1978-01-01

    A determination of the state of the art in soil moisture transport modeling based on physical or physiological principles was made. Soil moisture models based on physical principles have been under development for more than 10 years, and were shown to represent infiltration and redistribution of soil moisture quite well. Evapotranspiration, however, has not been as adequately incorporated into the models.

  13. Areas with Surface Thermal Anomalies as Detected by ASTER and LANDSAT Data around Pinkerton Hot Springs, Colorado

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This map shows areas of anomalous surface temperature in northern Saguache County identified from ASTER and LANDSAT thermal data and a spatially based insolation model. The temperature for the ASTER data was calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then calculated using emissivity derived from the ASTER data. The residual temperature, i.e., the ASTER temperature minus the temperature due to solar radiation, was used to identify thermally anomalous areas. Areas with anomalous temperature in the ASTER data are shown in blue diagonal hatch, while areas with anomalous temperature in the LANDSAT data are shown in magenta. Thermal springs and areas with favorable geochemistry are also shown; springs or wells with non-favorable geochemistry are shown as blue dots.
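
    The residual-temperature procedure used for these maps reduces to a per-pixel subtraction and threshold. A schematic NumPy sketch (the array values and the sigma-based threshold rule are illustrative, not taken from the dataset):

```python
import numpy as np

def thermal_anomalies(t_satellite, t_solar, n_sigma=1.5):
    """Flag pixels whose residual temperature (satellite-derived minus
    solar-radiation-induced) stands out from the scene.  The threshold,
    mean + n_sigma * std of the residual, is an illustrative choice."""
    residual = t_satellite - t_solar
    threshold = residual.mean() + n_sigma * residual.std()
    return residual, residual > threshold

# Toy 1-D "scene": one geothermally warm pixel against a uniform background.
t_aster = np.array([290.0, 291.0, 290.5, 301.0, 290.8])  # satellite-derived, K
t_solar = np.array([289.5, 290.4, 290.0, 290.6, 290.2])  # insolation model, K
residual, mask = thermal_anomalies(t_aster, t_solar)
print(mask)  # only the fourth pixel is flagged as anomalous
```

    In the actual maps the same residual is computed over full raster grids, and the flagged pixels become the hatched and magenta anomaly areas.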

  14. Areas with Surface Thermal Anomalies as Detected by ASTER and LANDSAT Data in Northwest Delta, Colorado

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This map shows areas of anomalous surface temperature in northern Saguache County identified from ASTER and LANDSAT thermal data and a spatially based insolation model. The temperature for the ASTER data was calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then calculated using emissivity derived from the ASTER data. The residual temperature, i.e., the ASTER temperature minus the temperature due to solar radiation, was used to identify thermally anomalous areas. Areas with anomalous temperature in the ASTER data are shown in blue diagonal hatch, while areas with anomalous temperature in the LANDSAT data are shown in magenta. Thermal springs and areas with favorable geochemistry are also shown; springs or wells with non-favorable geochemistry are shown as blue dots.

  15. Areas with Surface Thermal Anomalies as Detected by ASTER and LANDSAT Data in Ouray, Colorado

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This map shows areas of anomalous surface temperature in Ouray identified from ASTER and LANDSAT thermal data and a spatially based insolation model. The temperature for the ASTER data was calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then calculated using emissivity derived from the ASTER data. The residual temperature, i.e., the ASTER temperature minus the temperature due to solar radiation, was used to identify thermally anomalous areas. Areas with anomalous temperature in the ASTER data are shown in blue diagonal hatch, while areas with anomalous temperature in the LANDSAT data are shown in magenta. Thermal springs and areas with favorable geochemistry are also shown; springs or wells with non-favorable geochemistry are shown as blue dots.

  16. Areas with Surface Thermal Anomalies as Detected by ASTER and LANDSAT Data in Southwest Steamboat Springs, Garfield County, Colorado

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This map shows areas of anomalous surface temperature around south Steamboat Springs identified from ASTER and LANDSAT thermal data and a spatially based insolation model. The temperature for the ASTER data was calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then calculated using emissivity derived from the ASTER data. The residual temperature, i.e., the ASTER temperature minus the temperature due to solar radiation, was used to identify thermally anomalous areas. Areas with anomalous temperature in the ASTER data are shown in blue diagonal hatch, while areas with anomalous temperature in the LANDSAT data are shown in magenta. Thermal springs and areas with favorable geochemistry are also shown; springs or wells with non-favorable geochemistry are shown as blue dots.

  17. Areas with Surface Thermal Anomalies as Detected by ASTER and LANDSAT Data in Northern Saguache County, Colorado

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This map shows areas of anomalous surface temperature in northern Saguache County identified from ASTER and LANDSAT thermal data and a spatially based insolation model. The temperature for the ASTER data was calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then calculated using emissivity derived from the ASTER data. The residual temperature, i.e., the ASTER temperature minus the temperature due to solar radiation, was used to identify thermally anomalous areas. Areas with anomalous temperature in the ASTER data are shown in blue diagonal hatch, while areas with anomalous temperature in the LANDSAT data are shown in magenta. Thermal springs and areas with favorable geochemistry are also shown; springs or wells with non-favorable geochemistry are shown as blue dots.

  18. A Ground-Based Research Vehicle for Base Drag Studies at Subsonic Speeds

    NASA Technical Reports Server (NTRS)

    Diebler, Corey; Smith, Mark

    2002-01-01

    A ground research vehicle (GRV) has been developed to study the base drag on large-scale vehicles at subsonic speeds. Existing models suggest that base drag is dependent upon vehicle forebody drag, and for certain configurations, the total drag of a vehicle can be reduced by increasing its forebody drag. Although these models work well for small projectile shapes, studies have shown that they do not provide accurate predictions when applied to large-scale vehicles. Experiments are underway at the NASA Dryden Flight Research Center to collect data at Reynolds numbers to a maximum of 3 × 10^7, and to formulate a new model for predicting the base drag of trucks, buses, motor homes, reentry vehicles, and other large-scale vehicles. Preliminary tests have shown errors as great as 70 percent compared to Hoerner's two-dimensional base drag prediction. This report describes the GRV and its capabilities, details the studies currently underway at NASA Dryden, and presents preliminary results of both the effort to formulate a new base drag model and the investigation into a method of reducing total drag by manipulating forebody drag.

  19. Treatment of Electronic Energy Level Transition and Ionization Following the Particle-Based Chemistry Model

    NASA Technical Reports Server (NTRS)

    Liechty, Derek S.; Lewis, Mark

    2010-01-01

    A new method of treating electronic energy level transitions, as well as linking ionization to electronic energy levels, is proposed following the particle-based chemistry model of Bird. Although the use of electronic energy levels and ionization reactions in DSMC are not new ideas, the current method of selecting what level to transition to, how to reproduce transition rates, and the linking of the electronic energy levels to ionization are, to the authors' knowledge, novel concepts. The resulting equilibrium temperatures are shown to remain constant, and the electronic energy level distributions are shown to reproduce the Boltzmann distribution. The electronic energy level transition rates and ionization rates due to electron impacts are shown to reproduce theoretical and measured rates. The rates due to heavy particle impacts, while not as favorable as the electron impact rates, compare favorably to values from the literature. Thus, these new extensions to the particle-based chemistry model of Bird provide an accurate method for predicting electronic energy level transition and ionization rates in gases.
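
    The equilibrium check described above, level populations relaxing to a Boltzmann distribution, can be reproduced directly from level energies and degeneracies. A generic sketch with illustrative levels, not data from Bird's model:

```python
import numpy as np

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def boltzmann_populations(energies_eV, degeneracies, T):
    """Fractional equilibrium populations n_i/N = g_i * exp(-E_i / kT) / Z
    for a set of electronic energy levels at temperature T [K]."""
    g = np.asarray(degeneracies, dtype=float)
    E = np.asarray(energies_eV, dtype=float)
    w = g * np.exp(-E / (K_B * T))
    return w / w.sum()   # normalize by the partition function Z

# Illustrative three-level system at 10 000 K.
pops = boltzmann_populations([0.0, 2.0, 5.0], [1, 3, 5], 10_000.0)
print(pops)  # ground level dominates; populations sum to 1
```

    A DSMC implementation like the one in the paper verifies that its stochastic transition selection reproduces exactly these fractions at equilibrium.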

  20. Design Of Computer Based Test Using The Unified Modeling Language

    NASA Astrophysics Data System (ADS)

    Tedyyana, Agus; Danuri; Lidyawati

    2017-12-01

    The admission selection of Politeknik Negeri Bengkalis through the interest and talent search (PMDK), the joint admission test for state polytechnics (SB-UMPN), and the independent test (UM-Polbeng) were conducted using paper-based tests (PBT). The paper-based test model has several weaknesses: it wastes paper, the questions can leak to the public, and the test results can be manipulated. This research aimed to create a computer-based test (CBT) model using the Unified Modeling Language (UML), consisting of use case diagrams, activity diagrams and sequence diagrams. During the design of the application, attention was paid to password-protecting the test questions through encryption and decryption, using the RSA cryptography algorithm. The questions drawn from the question bank were then randomized using the Fisher-Yates shuffle method. The network architecture used in the computer-based test application was a client-server model on a Local Area Network (LAN). The result of the design was a computer-based test application for admission selection at Politeknik Negeri Bengkalis.
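
    The Fisher-Yates shuffle named in the abstract can be sketched in a few lines; it makes every permutation of the question bank equally likely. This is a generic textbook version, not the authors' implementation:

```python
import random

def fisher_yates_shuffle(items, rng=random):
    """In-place Fisher-Yates (Knuth) shuffle: walk from the last index down,
    swapping each element with a uniformly chosen element at or before it."""
    for i in range(len(items) - 1, 0, -1):
        j = rng.randrange(i + 1)   # uniform on 0..i inclusive
        items[i], items[j] = items[j], items[i]
    return items

questions = list(range(1, 11))                 # question IDs 1..10 from the bank
shuffled = fisher_yates_shuffle(questions[:])
print(sorted(shuffled) == list(range(1, 11)))  # True: a permutation of the bank
```

    Python's built-in random.shuffle uses the same algorithm; writing it out makes the uniformity argument explicit.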

  21. Charge carrier transport in polycrystalline organic thin film based field effect transistors

    NASA Astrophysics Data System (ADS)

    Rani, Varsha; Sharma, Akanksha; Ghosh, Subhasis

    2016-05-01

    The charge carrier transport mechanism in polycrystalline-thin-film-based organic field effect transistors (OFETs) has been explained using two competing models: the multiple trapping and release (MTR) model and the percolation model. It has been shown that the MTR model is the more suitable for explaining charge carrier transport in grainy polycrystalline organic thin films. The energetic distribution of traps determined independently using the Meyer-Neldel rule (MNR) is in excellent agreement with the values obtained from the MTR model for copper phthalocyanine and pentacene based OFETs.

  22. Test code for the assessment and improvement of Reynolds stress models

    NASA Technical Reports Server (NTRS)

    Rubesin, M. W.; Viegas, J. R.; Vandromme, D.; Minh, H. HA

    1987-01-01

    An existing two-dimensional, compressible flow, Navier-Stokes computer code, containing a full Reynolds stress turbulence model, was adapted for use as a test bed for assessing and improving turbulence models based on turbulence simulation experiments. To date, the results of using the code in comparisons with simulated channel flow and flow over an oscillating flat plate have shown that the turbulence model used in the code needs improvement for these flows. It is also shown that direct simulations of turbulent flows over a range of Reynolds numbers are needed to guide subsequent improvement of turbulence models.

  23. Selection of Variables in Cluster Analysis: An Empirical Comparison of Eight Procedures

    ERIC Educational Resources Information Center

    Steinley, Douglas; Brusco, Michael J.

    2008-01-01

    Eight different variable selection techniques for model-based and non-model-based clustering are evaluated across a wide range of cluster structures. It is shown that several methods have difficulties when non-informative variables (i.e., random noise) are included in the model. Furthermore, the distribution of the random noise greatly impacts the…

  24. Testing and analysis of internal hardwood log defect prediction models

    Treesearch

    R. Edward Thomas

    2011-01-01

    The severity and location of internal defects determine the quality and value of lumber sawn from hardwood logs. Models have been developed to predict the size and position of internal defects based on external defect indicator measurements. These models were shown to predict approximately 80% of all internal knots based on external knot indicators. However, the size...

  25. The Role of Stratospheric Balloon Magnetic Surveys in the Development of Analytical Global Models of the Geomagnetic Field

    NASA Astrophysics Data System (ADS)

    Brekhov, O. M.; Tsvetkov, Yu. P.; Ivanov, V. V.; Filippov, S. V.; Tsvetkova, N. M.

    2015-09-01

    The results of stratospheric balloon gradient geomagnetic surveys at an altitude of ~30 km, using a long (6 km) measuring base oriented along the vertical, are considered. The purposes of these surveys are the study of the magnetic field formed by deep sources and the estimation of errors in modern analytical models of the geomagnetic field. An independent method for determining errors in global analytical models of the normal magnetic field of the Earth (MFE) is substantiated, and a new technique for identifying magnetic anomalies from surveys on long routes is considered. Analysis of the gradient magnetic surveys on board the balloon revealed previously unknown features of the geomagnetic field. Using the balloon data, the EMM/720 model of the geomagnetic field (http://www.ngdc.noaa.gov/geomag/EMM) is investigated, and it is shown that this model unsatisfactorily represents the anomalous MFE, at least at an altitude of 30 km in the area of our surveys. The unsatisfactory quality of aeromagnetic (ground-based) data is also revealed by wavelet analysis of the ground-based and balloon magnetic profiles. It is shown that the ground-based profiles do not contain inhomogeneities more than 130 km in size, whereas the balloon profiles (1000 km in strike extent) contain inhomogeneities up to 600 km in size, whose location coincides with the location of the satellite magnetic anomaly. On the basis of the balloon data it is shown that low-altitude aeromagnetic surveys, for fundamental reasons, incorrectly reproduce the magnetic field of deep sources. This prevents the reliable upward continuation of ground-based magnetic anomalies from the surface of the Earth. It is shown that an adequate global model of magnetic anomalies in circumterrestrial space, developed up to 720 spherical harmonics, must be constructed only from data obtained at satellite and stratospheric altitudes. Such a model can serve as a basis for the refined study of the structure and magnetic properties of the Earth's crust at its deep horizons, for resource exploration at those depths, and so on.

  26. An experimentally-informed coarse-grained 3-site-per-nucleotide model of DNA: Structure, thermodynamics, and dynamics of hybridization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hinckley, Daniel M.; Freeman, Gordon S.; Whitmer, Jonathan K.

    2013-10-14

    A new 3-Site-Per-Nucleotide coarse-grained model for DNA is presented. The model includes anisotropic potentials between bases involved in base stacking and base pair interactions that enable the description of relevant structural properties, including the major and minor grooves. In an improvement over available coarse-grained models, the correct persistence length is recovered for both ssDNA and dsDNA, allowing for simulation of non-canonical structures such as hairpins. DNA melting temperatures, measured for duplexes and hairpins by integrating over free energy surfaces generated using metadynamics simulations, are shown to be in quantitative agreement with experiment for a variety of sequences and conditions. Hybridization rate constants, calculated using forward-flux sampling, are also shown to be in good agreement with experiment. The coarse-grained model presented here is suitable for use in biological and engineering applications, including nucleosome positioning and DNA-templated engineering.

  27. A physics-based algorithm for real-time simulation of electrosurgery procedures in minimally invasive surgery.

    PubMed

    Lu, Zhonghua; Arikatla, Venkata S; Han, Zhongqing; Allen, Brian F; De, Suvranu

    2014-12-01

    High-frequency electricity is used in the majority of surgical interventions. However, modern computer-based training and simulation systems rely on physically unrealistic models that fail to capture the interplay of the electrical, mechanical and thermal properties of biological tissue. We present a real-time and physically realistic simulation of electrosurgery by modelling the electrical, thermal and mechanical properties as three iteratively solved finite element models. To provide subfinite-element graphical rendering of vaporized tissue, a dual-mesh dynamic triangulation algorithm based on isotherms is proposed. The block compressed row storage (BCRS) structure is shown to be critical in allowing computationally efficient changes in the tissue topology due to vaporization. We have demonstrated our physics-based electrosurgery cutting algorithm through various examples. Our matrix manipulation algorithms designed for topology changes have shown low computational cost. Our simulator offers substantially greater physical fidelity compared to previous simulators that use simple geometry-based heat characterization. Copyright © 2013 John Wiley & Sons, Ltd.

  8. Automatic detection of echolocation clicks based on a Gabor model of their waveform.

    PubMed

    Madhusudhana, Shyam; Gavrilov, Alexander; Erbe, Christine

    2015-06-01

    Prior research has shown that echolocation clicks of several species of terrestrial and marine fauna can be modelled as Gabor-like functions. Here, a system is proposed for the automatic detection of a variety of such signals. By means of mathematical formulation, it is shown that the output of the Teager-Kaiser Energy Operator (TKEO) applied to Gabor-like signals can be approximated by a Gaussian function. Based on the inferences, a detection algorithm involving the post-processing of the TKEO outputs is presented. The ratio of the outputs of two moving-average filters, a Gaussian and a rectangular filter, is shown to be an effective detection parameter. Detector performance is assessed using synthetic and real (taken from MobySound database) recordings. The detection method is shown to work readily with a variety of echolocation clicks and in various recording scenarios. The system exhibits low computational complexity and operates several times faster than real-time. Performance comparisons are made to other publicly available detectors including pamguard.

  9. Areas with Surface Thermal Anomalies as Detected by ASTER and LANDSAT Data around South Canyon Hot Springs, Garfield County, Colorado

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This map shows areas of anomalous surface temperature around South Canyon Hot Springs as identified from ASTER and LANDSAT thermal data and a spatially based insolation model. The temperature for the ASTER data was calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999). The temperature due to solar radiation was then calculated using emissivity derived from ASTER data. The residual temperature, i.e. the ASTER temperature minus the temperature due to solar radiation, was used to identify thermally anomalous areas. Areas having anomalous temperature in the ASTER data are shown in blue diagonal hatch, while areas having anomalous temperature in the LANDSAT data are shown in magenta on the map. Thermal springs and areas with favorable geochemistry are also shown. Springs or wells having non-favorable geochemistry are shown as blue dots.

  10. Modeling spatio-temporal wildfire ignition point patterns

    Treesearch

    Amanda S. Hering; Cynthia L. Bell; Marc G. Genton

    2009-01-01

    We analyze and model the structure of spatio-temporal wildfire ignitions in the St. Johns River Water Management District in northeastern Florida. Previous studies, based on the K-function and an assumption of homogeneity, have shown that wildfire events occur in clusters. We revisit this analysis based on an inhomogeneous K-...

  11. Application of a Tenax Model to Assess Bioavailability of Polychlorinated Biphenyls in Field Sediments

    EPA Science Inventory

    Recent literature has shown that bioavailability-based techniques, such as Tenax extraction, can estimate sediment exposure to benthos. In a previous study by the authors, Tenax extraction was used to create and validate a literature-based Tenax model to predict oligochaete bioac...

  12. Incorporating Video Modeling into a School-Based Intervention for Students with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Wilson, Kaitlyn P.

    2013-01-01

    Purpose: Video modeling is an intervention strategy that has been shown to be effective in improving the social and communication skills of students with autism spectrum disorders, or ASDs. The purpose of this tutorial is to outline empirically supported, step-by-step instructions for the use of video modeling by school-based speech-language…

  13. A Model for Communications Satellite System Architecture Assessment

    DTIC Science & Technology

    2011-09-01

    This is shown in Equation 4. The total system cost includes all development, acquisition, fielding, operations, maintenance and upgrades, and system...protection. A mathematical model was implemented to enable the analysis of communications satellite system architectures based on multiple system attributes. Utilization of the model in

  14. Modeling and experimental study of resistive switching in vertically aligned carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Ageev, O. A.; Blinov, Yu F.; Ilina, M. V.; Ilin, O. I.; Smirnov, V. A.

    2016-08-01

    A model of resistive switching in vertically aligned carbon nanotubes (VA CNTs), taking into account the processes of deformation, polarization and piezoelectric charge accumulation, has been developed. The origin of hysteresis in the VA CNT-based structure is described. Based on the modeling results, a VA CNT-based structure has been created. The ratio of the resistances of the high-resistance and low-resistance states of the VA CNT-based structure amounts to 48. The correlation of the modeling results with experimental studies is shown. The results can be used in the development of nanoelectronics devices based on VA CNTs, including nonvolatile resistive random-access memory.

  15. Model-Based Detection of Radioactive Contraband for Harbor Defense Incorporating Compton Scattering Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Candy, J V; Chambers, D H; Breitfeller, E F

    2010-03-02

    The detection of radioactive contraband is a critical problem in maintaining national security for any country. Photon emissions from threat materials challenge both detection and measurement technologies, especially when concealed by various types of shielding that complicate the transport physics significantly. This problem becomes especially important when ships are intercepted by U.S. Coast Guard harbor patrols searching for contraband. The development of a sequential model-based processor that captures both the underlying transport physics of gamma-ray emissions, including Compton scattering, and the measurement of photon energies offers a physics-based approach to attack this challenging problem. The inclusion of a basic radionuclide representation of absorbed/scattered photons at a given energy, along with interarrival times, is used to extract the physics information available from the noisy measurements of portable radiation detection systems used to interdict contraband. It is shown that this physics representation can incorporate scattering physics, leading to an 'extended' model-based structure that can be used to develop an effective sequential detection technique. The resulting model-based processor is shown to perform quite well based on data obtained from a controlled experiment.

  16. Modelling nonlinear viscoelastic behaviours of loudspeaker suspensions-like structures

    NASA Astrophysics Data System (ADS)

    Maillou, Balbine; Lotton, Pierrick; Novak, Antonin; Simon, Laurent

    2018-03-01

    Mechanical properties of an electrodynamic loudspeaker are mainly determined by its suspensions (surround and spider) that behave nonlinearly and typically exhibit frequency dependent viscoelastic properties such as creep effect. The paper aims at characterizing the mechanical behaviour of electrodynamic loudspeaker suspensions at low frequencies using nonlinear identification techniques developed in recent years. A Generalized Hammerstein based model can take into account both frequency dependency and nonlinear properties. As shown in the paper, the model generalizes existing nonlinear or viscoelastic models commonly used for loudspeaker modelling. It is further experimentally shown that a possible input-dependent law may play a key role in suspension characterization.

  17. Estimating daily climatologies for climate indices derived from climate model data and observations

    PubMed Central

    Mahlstein, Irina; Spirig, Christoph; Liniger, Mark A; Appenzeller, Christof

    2015-01-01

    Climate indices help to describe the past, present, and future climate. They are usually more closely related to possible impacts and are therefore more illustrative to users than simple climate means. Indices are often based on daily data series and thresholds. It is shown that percentile-based thresholds are sensitive to the method of computation, as are the climatological daily mean and the daily standard deviation, which are used for bias corrections of daily climate model data. Sample size issues in either the observed reference period or the model data lead to uncertainties in these estimations. A large number of past ensemble seasonal forecasts, called hindcasts, is used to explore these sampling uncertainties and to compare two different approaches. Based on a perfect model approach, it is shown that a fitting approach can substantially improve the estimates of daily climatologies of percentile-based thresholds over land areas, as well as the mean and the variability. These improvements are relevant for bias removal in long-range forecasts or predictions of climate indices based on percentile thresholds. The method also shows potential for use in climate change studies. Key Points: More robust estimates of daily climate characteristics; Statistical fitting approach; Based on a perfect model approach. PMID:26042192
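The claimed sensitivity of percentile-based thresholds to the method of computation is easy to demonstrate with NumPy's percentile estimator, whose `method` parameter (NumPy ≥ 1.22) selects the estimation rule. The sample values below are made up for illustration, not taken from the study.

```python
import numpy as np

# A small daily sample: with only ten values, the 90th-percentile threshold
# depends visibly on the estimation method.
sample = np.array([20.1, 21.3, 22.0, 22.8, 23.5, 24.9, 25.6, 26.2, 27.0, 30.4])
for method in ("linear", "lower", "higher", "nearest"):
    print(method, np.percentile(sample, 90, method=method))
```

For this sample, "lower" and "nearest" return 27.0 while "higher" returns 30.4, a spread that shrinks only as the sample size grows.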

  18. Modeling of layered anisotropic composite material based on effective medium theory

    NASA Astrophysics Data System (ADS)

    Bao, Yang; Song, Jiming

    2018-04-01

    In this paper, we present an efficient method to simulate multilayered anisotropic composite material with effective medium theory. The effective permittivity, permeability and orientation angle for a layered anisotropic composite medium are extracted with this equivalent model. We also derive analytical expressions for the effective parameters and orientation angle in the low-frequency (LF) limit, which are shown in detail. Numerical results are shown comparing the extracted effective parameters and orientation angle with the analytical results from the low-frequency limit. Good agreement is achieved, demonstrating the accuracy of our efficient model.

  19. Polar frost formation on Ganymede

    NASA Technical Reports Server (NTRS)

    Johnson, R. E.

    1985-01-01

    Voyager photographs have shown the presence of polar frost on Ganymede, a satellite of Jupiter. A number of models have been proposed for the formation of this feature. The models are based on the transport of material from the equatorial to the polar regions. The present paper is concerned with a model regarding the origin and appearance of the Ganymede caps which does not depend on such a transport. The model is based on observations of the surficial changes produced by ion bombardment. It is pointed out that experiments on ion and electron bombardment of water ice at low temperatures have shown that these particles sputter significant quantities of water molecules. In addition, they also change the visual characteristics of the surface significantly. Ion bombardment competing with thermal reprocessing may be sufficient to explain the latitudinal differences observed on Ganymede.

  20. Improving the Efficacy of Appearance-Based Sun Exposure Interventions with the Terror Management Health Model

    PubMed Central

    Morris, Kasey Lynn; Cooper, Douglas P.; Goldenberg, Jamie L.; Arndt, Jamie; Gibbons, Frederick X.

    2014-01-01

    The terror management health model (TMHM) suggests that when thoughts of death are accessible people become increasingly motivated to bolster their self-esteem relative to their health, because doing so offers psychological protection against mortality concerns. Two studies examined sun protection intentions as a function of mortality reminders and an appearance-based intervention. In Study 1, participants given a sun protection message that primed mortality and shown a UV-filtered photo of their face reported greater intentions to use sun protection on their face, and took more sunscreen samples than participants shown a regular photo of their face. In Study 2, reminders of mortality increased participants’ intentions to use facial sun protection when the UV photo was specifically framed as revealing appearance consequences of tanning, compared to when the photo was framed as revealing health consequences, or when no photo was shown. These findings extend the terror management health model, and provide preliminary evidence that appearance-based tanning interventions have a greater influence on sun protection intentions under conditions that prime thoughts of death. We discuss implications of the findings, and highlight the need for additional research examining the applicability to long-term tanning behavior. PMID:24811049

  1. Systems engineering interfaces: A model based approach

    NASA Astrophysics Data System (ADS)

    Fosse, E.; Delp, C. L.

    The engineering of interfaces is a critical function of the discipline of Systems Engineering. Included in interface engineering are instances of interaction. Interfaces provide the specifications of the relevant properties of a system or component that can be connected to other systems or components while instances of interaction are identified in order to specify the actual integration to other systems or components. Current Systems Engineering practices rely on a variety of documents and diagrams to describe interface specifications and instances of interaction. The SysML[1] specification provides a precise model based representation for interfaces and interface instance integration. This paper will describe interface engineering as implemented by the Operations Revitalization Task using SysML, starting with a generic case and culminating with a focus on a Flight System to Ground Interaction. The reusability of the interface engineering approach presented as well as its extensibility to more complex interfaces and interactions will be shown. Model-derived tables will support the case studies shown and are examples of model-based documentation products.

  2. Pollux: Enhancing the Quality of Service of the Global Information Grid (GIG)

    DTIC Science & Technology

    2009-06-01

    and throughput of standards-based and/or COTS-based QoS-enabled pub/sub technologies, including DDS, JMS, Web Services, and CORBA. 2. The DDS QoS...of service pICKER (QUICKER) model-driven engineering (MDE) toolchain shown in Figure 8. QUICKER extends the Platform-Independent Component Modeling

  3. Numerical modelling of distributed vibration sensor based on phase-sensitive OTDR

    NASA Astrophysics Data System (ADS)

    Masoudi, A.; Newson, T. P.

    2017-04-01

    A distributed vibration sensor based on phase-sensitive OTDR is numerically modeled. The advantage of modeling the building blocks of the sensor individually and combining the blocks to analyse the behavior of the sensing system is discussed. It is shown that the numerical model can accurately imitate the response of the experimental setup to dynamic perturbations, using a signal processing procedure similar to that used to extract the phase information from the sensing setup.

  4. Epistemic Gameplay and Discovery in Computational Model-Based Inquiry Activities

    ERIC Educational Resources Information Center

    Wilkerson, Michelle Hoda; Shareff, Rebecca; Laina, Vasiliki; Gravel, Brian

    2018-01-01

    In computational modeling activities, learners are expected to discover the inner workings of scientific and mathematical systems: First elaborating their understandings of a given system through constructing a computer model, then "debugging" that knowledge by testing and refining the model. While such activities have been shown to…

  5. A quantitative model of optimal data selection in Wason's selection task.

    PubMed

    Hattori, Masasi

    2002-10-01

    The optimal data selection model proposed by Oaksford and Chater (1994) successfully formalized Wason's selection task (Wason, 1966). The model, however, involved some questionable assumptions and was also not sufficient as a model of the task because it could not provide quantitative predictions of the card selection frequencies. In this paper, the model was revised to provide quantitative fits to the data. The model can predict the selection frequencies of cards based on a selection tendency function (STF), or conversely, it enables the estimation of subjective probabilities from data. Past experimental data were first re-analysed based on the model. In Experiment 1, the superiority of the revised model was shown. However, when the relationship between antecedent and consequent was forced to deviate from the biconditional form, the model was not supported. In Experiment 2, it was shown that sufficient emphasis on probabilistic information can affect participants' performance. A detailed experimental method to sort participants by probabilistic strategies was introduced. Here, the model was supported by a subgroup of participants who used the probabilistic strategy. Finally, the results were discussed from the viewpoint of adaptive rationality.

  6. On the Connection Between One-and Two-Equation Models of Turbulence

    NASA Technical Reports Server (NTRS)

    Menter, F. R.; Rai, Man Mohan (Technical Monitor)

    1994-01-01

    A formalism will be presented that allows the transformation of two-equation eddy viscosity turbulence models into one-equation models. The transformation is based on an assumption that is widely accepted over a large range of boundary layer flows and that has been shown to actually improve predictions when incorporated into two-equation models of turbulence. Based on that assumption, a new one-equation turbulence model will be derived. The new model will be tested in great detail against a previously introduced one-equation model and against its parent two-equation model.

  7. Practical Application of Model-based Programming and State-based Architecture to Space Missions

    NASA Technical Reports Server (NTRS)

    Horvath, Gregory; Ingham, Michel; Chung, Seung; Martin, Oliver; Williams, Brian

    2006-01-01

    A viewgraph presentation to develop models from systems engineers that accomplish mission objectives and manage the health of the system is shown. The topics include: 1) Overview; 2) Motivation; 3) Objective/Vision; 4) Approach; 5) Background: The Mission Data System; 6) Background: State-based Control Architecture System; 7) Background: State Analysis; 8) Overview of State Analysis; 9) Background: MDS Software Frameworks; 10) Background: Model-based Programming; 10) Background: Titan Model-based Executive; 11) Model-based Execution Architecture; 12) Compatibility Analysis of MDS and Titan Architectures; 13) Integrating Model-based Programming and Execution into the Architecture; 14) State Analysis and Modeling; 15) IMU Subsystem State Effects Diagram; 16) Titan Subsystem Model: IMU Health; 17) Integrating Model-based Programming and Execution into the Software IMU; 18) Testing Program; 19) Computationally Tractable State Estimation & Fault Diagnosis; 20) Diagnostic Algorithm Performance; 21) Integration and Test Issues; 22) Demonstrated Benefits; and 23) Next Steps

  8. Rough surface scattering based on facet model

    NASA Technical Reports Server (NTRS)

    Khamsi, H. R.; Fung, A. K.; Ulaby, F. T.

    1974-01-01

    A model for the radar return from bare ground was developed to calculate the radar cross section of bare ground and the effect of frequency averaging on the reduction of the variance of the return. It is shown that, by assuming the distribution of the slope to be Gaussian and the distribution of the facet length to be the positive side of a Gaussian distribution, the results are in good agreement with experimental data collected by an 8- to 18-GHz radar spectrometer system. It is also shown that information on the exact correlation length of the small structure on the ground is not necessary; an effective correlation length may be calculated based on the facet model and the wavelength of the incident wave.

  9. Dynamic analysis of rotor flex-structure based on nonlinear anisotropic shell models

    NASA Astrophysics Data System (ADS)

    Bauchau, Olivier A.; Chiang, Wuying

    1991-05-01

    In this paper an anisotropic shallow shell model is developed that accommodates transverse shearing deformations and arbitrarily large displacements and rotations, but strains are assumed to remain small. Two kinematic models are developed, the first using two DOF to locate the direction of the normal to the shell's midplane, the second using three. The latter model allows for automatic compatibility of the shell model with beam models. The shell model is validated by comparing its predictions with several benchmark problems. In actual helicopter rotor blade problems, the shell model of the flex structure is shown to give very different results compared to beam models. The lead-lag and torsion modes in particular are strongly affected, whereas flapping modes seem to be less affected.

  10. The fuzzy oil drop model, based on hydrophobicity density distribution, generalizes the influence of water environment on protein structure and function.

    PubMed

    Banach, Mateusz; Konieczny, Leszek; Roterman, Irena

    2014-10-21

    In this paper we show that the fuzzy oil drop model represents a general framework for describing the generation of hydrophobic cores in proteins and thus provides insight into the influence of the water environment upon protein structure and stability. The model has been successfully applied in the study of a wide range of proteins; however, this paper focuses specifically on domains representing immunoglobulin-like folds. Here we provide evidence that immunoglobulin-like domains, despite being structurally similar, differ with respect to their participation in the generation of the hydrophobic core. It is shown that β-structural fragments in β-barrels participate in hydrophobic core formation in a highly differentiated manner. Quantitatively measured participation in core formation helps explain the variable stability of proteins and is shown to be related to their biological properties. This also includes the known tendency of immunoglobulin domains to form amyloids, as shown using transthyretin to reveal the clear relation between amyloidogenic properties and structural characteristics based on the fuzzy oil drop model. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  11. Crystal plasticity simulation of Zirconium tube rolling using multi-grain representative volume element

    NASA Astrophysics Data System (ADS)

    Isaenkova, Margarita; Perlovich, Yuriy; Zhuk, Dmitry; Krymskaya, Olga

    2017-10-01

    The rolling of a Zirconium tube is studied by means of crystal plasticity viscoplastic self-consistent (VPSC) constitutive modeling. This modeling is performed with a dislocation-based constitutive model and a spectral solver using the open-source simulation kit DAMASK. Multi-grain representative volume elements with periodic boundary conditions are used to predict the texture evolution and the distributions of strain and stress. Two models, for randomly textured and partially rolled material, are deformed to 30% reduction in tube wall thickness and 7% reduction in tube diameter. The resulting shapes of the models are shown and the distributions of strain are plotted. The evolution of grain shape during deformation is also shown.

  12. An Ad-Hoc Adaptive Pilot Model for Pitch Axis Gross Acquisition Tasks

    NASA Technical Reports Server (NTRS)

    Hanson, Curtis E.

    2012-01-01

    An ad-hoc algorithm is presented for real-time adaptation of the well-known crossover pilot model and applied to pitch axis gross acquisition tasks in a generic fighter aircraft. Off-line tuning of the crossover model to human pilot data gathered in a fixed-base high fidelity simulation is first accomplished for a series of changes in aircraft dynamics to provide expected values for model parameters. It is shown that in most cases, for this application, the traditional crossover model can be reduced to a gain and a time delay. The ad-hoc adaptive pilot gain algorithm is shown to have desirable convergence properties for most types of changes in aircraft dynamics.
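The reduced form mentioned above, a pure gain plus transport delay acting on the tracking error (u(t) = K·e(t − τ)), can be sketched in discrete time as follows. The gain and delay values are illustrative, not the tuned values from the study.

```python
from collections import deque

def crossover_pilot(gain, delay_steps):
    """Crossover pilot model reduced to a gain and a transport delay:
    u[n] = gain * e[n - delay_steps]. Values are illustrative."""
    buf = deque([0.0] * delay_steps)  # pipeline of pending outputs

    def step(error):
        buf.append(gain * error)      # enqueue the scaled error...
        return buf.popleft()          # ...and emit the sample from delay_steps ago
    return step

pilot = crossover_pilot(gain=2.0, delay_steps=3)
outputs = [pilot(e) for e in [1.0, 0.0, 0.0, 0.0, 0.0]]
print(outputs)  # [0.0, 0.0, 0.0, 2.0, 0.0]
```

An adaptive variant in the spirit of the paper would adjust `gain` online from the observed error, while the delay captures the pilot's reaction time.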

  13. Design and modeling of flower like microring resonator

    NASA Astrophysics Data System (ADS)

    Razaghi, Mohammad; Laleh, Mohammad Sayfi

    2016-05-01

    This paper presents a novel multi-channel optical filter structure. The proposed design is based on using a set of microring resonators (MRRs) in a new formation, named the flower-like arrangement. It is shown that, instead of using 18 MRRs, the same filtering operation can be achieved using only 5 MRRs in the recommended formation. It is shown that with this structure, six filters and four integrated demultiplexers (DEMUXs) are obtained. The simplicity, extensibility and compactness of this structure make it usable in wavelength division multiplexing (WDM) networks. The filter's characteristics, such as shape factor (SF), free spectral range (FSR) and stopband rejection ratio, can be designed by adjusting the microrings' radii and coupling coefficients. To model this structure, the signal flow graph (SFG) method based on Mason's rule is used. The modeling method is discussed in depth. Furthermore, the accuracy and applicability of this method are verified through examples and comparison with other modeling schemes.

  14. R symmetries and a heterotic MSSM

    NASA Astrophysics Data System (ADS)

    Kappl, Rolf; Nilles, Hans Peter; Schmitz, Matthias

    2015-02-01

    We employ powerful techniques based on Hilbert and Gröbner bases to analyze particle physics models derived from string theory. Individual models are shown to have a huge landscape of vacua that differ in their phenomenological properties. We explore the (discrete) symmetries of these vacua, the new R symmetry selection rules and their consequences for moduli stabilization.

  15. The Betting Odds Rating System: Using soccer forecasts to forecast soccer.

    PubMed

    Wunderlich, Fabian; Memmert, Daniel

    2018-01-01

    Betting odds are frequently found to outperform mathematical models in sports-related forecasting tasks; however, the factors contributing to betting odds are not fully traceable, and in contrast to rating-based forecasts no straightforward measure of team-specific quality is deducible from the betting odds. The present study investigates the approach of combining the methods of mathematical models and the information included in betting odds. A soccer forecasting model based on the well-known ELO rating system and taking advantage of betting odds as a source of information is presented. Data from almost 15,000 soccer matches (seasons 2007/2008 until 2016/2017) are used, including both domestic matches (English Premier League, German Bundesliga, Spanish Primera Division and Italian Serie A) and international matches (UEFA Champions League, UEFA Europa League). The novel betting-odds-based ELO model is shown to outperform classic ELO models, thus demonstrating that betting odds prior to a match contain more relevant information than the result of the match itself. It is shown how the novel model can help to gain valuable insights into the quality of soccer teams and its development over time, thus having a practical benefit in performance analysis. Moreover, it is argued that network-based approaches might help in further improving rating and forecasting methods.
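One plausible reading of the approach described above is an ELO update in which the "observed" outcome is the bookmaker-implied success probability rather than the actual match result. The sketch below follows that reading; the home-advantage offset, K-factor, and margin-removal step are illustrative assumptions, not the published model's specification.

```python
def elo_expected(r_home, r_away, home_adv=100.0):
    """Standard ELO win expectancy, with a hypothetical home-advantage offset."""
    return 1.0 / (1.0 + 10 ** (-((r_home + home_adv) - r_away) / 400.0))

def implied_prob(odds_home, odds_draw, odds_away):
    """Normalize decimal betting odds into probabilities (removes the bookmaker margin)."""
    inv = [1.0 / odds_home, 1.0 / odds_draw, 1.0 / odds_away]
    s = sum(inv)
    return [p / s for p in inv]

def update(r_home, r_away, odds, k=20.0):
    """Odds-based ELO step: score against the bookmaker-implied home success
    p_home + 0.5 * p_draw instead of the actual match result."""
    p_home, p_draw, _ = implied_prob(*odds)
    observed = p_home + 0.5 * p_draw
    expected = elo_expected(r_home, r_away)
    delta = k * (observed - expected)
    return r_home + delta, r_away - delta

r_home, r_away = update(1500.0, 1500.0, (1.8, 3.6, 4.5))
print(round(r_home, 1), round(r_away, 1))
```

Because the update is zero-sum, rating points gained by one team are exactly the points lost by the other, as in the classic ELO system.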

  16. The Betting Odds Rating System: Using soccer forecasts to forecast soccer

    PubMed Central

    Wunderlich, Fabian; Memmert, Daniel

    2018-01-01

    Betting odds are frequently found to outperform mathematical models in sports-related forecasting tasks; however, the factors contributing to betting odds are not fully traceable, and in contrast to rating-based forecasts no straightforward measure of team-specific quality is deducible from the betting odds. The present study investigates the approach of combining the methods of mathematical models and the information included in betting odds. A soccer forecasting model based on the well-known ELO rating system and taking advantage of betting odds as a source of information is presented. Data from almost 15,000 soccer matches (seasons 2007/2008 until 2016/2017) are used, including both domestic matches (English Premier League, German Bundesliga, Spanish Primera Division and Italian Serie A) and international matches (UEFA Champions League, UEFA Europa League). The novel betting-odds-based ELO model is shown to outperform classic ELO models, thus demonstrating that betting odds prior to a match contain more relevant information than the result of the match itself. It is shown how the novel model can help to gain valuable insights into the quality of soccer teams and its development over time, thus having a practical benefit in performance analysis. Moreover, it is argued that network-based approaches might help in further improving rating and forecasting methods. PMID:29870554

  17. Flatness-based embedded adaptive fuzzy control of turbocharged diesel engines

    NASA Astrophysics Data System (ADS)

    Rigatos, Gerasimos; Siano, Pierluigi; Arsie, Ivan

    2014-10-01

    In this paper nonlinear embedded control for turbocharged Diesel engines is developed with the use of differential flatness theory and adaptive fuzzy control. It is shown that the dynamic model of the turbocharged Diesel engine is differentially flat and admits dynamic feedback linearization. It is also shown that the dynamic model can be written in the linear Brunovsky canonical form, for which a state feedback controller can be easily designed. To compensate for modeling errors and external disturbances, an adaptive fuzzy control scheme is implemented, making use of the transformed dynamical model of the Diesel engine obtained through the application of differential flatness theory. Since only the system's output is measurable, the complete state vector has to be reconstructed with the use of a state observer. It is shown that a suitable learning law can be defined for the neuro-fuzzy approximators, which are part of the controller, so as to preserve closed-loop system stability. With the use of Lyapunov stability analysis it is proven that the proposed observer-based adaptive fuzzy control scheme results in H∞ tracking performance.

  18. Mathematical models for nonparametric inferences from line transect data

    USGS Publications Warehouse

    Burnham, K.P.; Anderson, D.R.

    1976-01-01

    A general mathematical theory of line transects is developed which supplies a framework for nonparametric density estimation based on either right angle or sighting distances. The probability of observing a point given its right angle distance (y) from the line is generalized to an arbitrary function g(y). Given only that g(0) = 1, it is shown there are nonparametric approaches to density estimation using the observed right angle distances. The model is then generalized to include sighting distances (r). Let f(y|r) be the conditional distribution of right angle distance given sighting distance. It is shown that nonparametric estimation based only on sighting distances requires that we know the transformation of r given by f(0|r).
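The standard line-transect density estimator that this framework underpins is D̂ = n·f̂(0)/(2L), where f̂(0) is a nonparametric estimate of the perpendicular-distance density at zero. A minimal sketch, using a crude narrow-strip histogram for f̂(0) and made-up numbers:

```python
import numpy as np

def line_transect_density(y, transect_length, strip_width):
    """Nonparametric line-transect estimate D = n * fhat(0) / (2L).
    fhat(0) is a simple histogram estimate: the fraction of right-angle
    distances falling in the narrow strip [0, strip_width)."""
    y = np.asarray(y, dtype=float)
    n = y.size
    f0_hat = np.count_nonzero(y < strip_width) / (n * strip_width)
    return n * f0_hat / (2.0 * transect_length)

# Sanity check: if g(y) is constant out to a maximum distance W, then
# f(0) = 1/W and D = n / (2 * L * W).
y = np.linspace(0.0, 10.0, 1000)   # roughly uniform detections out to W = 10
d_hat = line_transect_density(y, transect_length=100.0, strip_width=1.0)
print(d_hat)  # close to 1000 / (2 * 100 * 10) = 0.5
```

In practice f̂(0) would come from a smoother nonparametric estimator than a single strip, but the D̂ = n·f̂(0)/(2L) structure is the same.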

  19. A Person Fit Test for IRT Models for Polytomous Items

    ERIC Educational Resources Information Center

    Glas, C. A. W.; Dagohoy, Anna Villa T.

    2007-01-01

    A person fit test based on the Lagrange multiplier test is presented for three item response theory models for polytomous items: the generalized partial credit model, the sequential model, and the graded response model. The test can also be used in the framework of multidimensional ability parameters. It is shown that the Lagrange multiplier…

  20. Significance Testing in Confirmatory Factor Analytic Models.

    ERIC Educational Resources Information Center

    Khattab, Ali-Maher; Hocevar, Dennis

    Traditionally, confirmatory factor analytic models are tested against a null model of total independence. Using randomly generated factors in a matrix of 46 aptitude tests, this approach is shown to be unlikely to reject even random factors. An alternative null model, based on a single general factor, is suggested. In addition, an index of model…

  1. The Technological Barriers of Using Video Modeling in the Classroom

    ERIC Educational Resources Information Center

    Marino, Desha; Myck-Wayne, Janice

    2015-01-01

    The purpose of this investigation is to identify the technological barriers teachers encounter when attempting to implement video modeling in the classroom. Video modeling is an emerging evidence-based intervention method used with individuals with autism. Research has shown the positive effects video modeling can have on its recipients. Educators…

  2. Modeling, Monitoring and Fault Diagnosis of Spacecraft Air Contaminants

    NASA Technical Reports Server (NTRS)

    Ramirez, W. Fred; Skliar, Mikhail; Narayan, Anand; Morgenthaler, George W.; Smith, Gerald J.

    1996-01-01

    Progress and results in the development of an integrated air quality modeling, monitoring, fault detection, and isolation system are presented. The focus was on development of distributed models of air contaminant transport, the study of air quality monitoring techniques based on the model of the transport process and on-line contaminant concentration measurements, and sensor placement. Different approaches to the modeling of spacecraft air contamination are discussed, and a three-dimensional distributed parameter air contaminant dispersion model applicable to both laminar and turbulent transport is proposed. A two-dimensional approximation of the full-scale transport model is also proposed, based on spatial averaging of the three-dimensional model over the least important space coordinate. A computer implementation of the transport model is considered, and a detailed development of the two- and three-dimensional models, illustrated by contaminant transport simulation results, is presented. The use of the well-established Kalman filtering approach is suggested as a method for generating on-line contaminant concentration estimates based on both real-time measurements and the model of the contaminant transport process. It is shown that the high computational requirements of the traditional Kalman filter can make its real-time implementation difficult for a high-dimensional transport model, and a novel implicit Kalman filtering algorithm is proposed which is shown to lead to an order-of-magnitude faster computer implementation in the case of air quality monitoring.
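
    The monitoring idea, stripped down to a scalar surrogate, can be sketched as below; the paper fuses a full transport model with measurements, whereas here a constant-concentration process and the noise levels are illustrative assumptions.

    ```python
    import numpy as np

    def kalman_1d(z, a=1.0, q=0.01, r=0.25, x0=0.0, p0=1.0):
        """Scalar Kalman filter: state transition x <- a x with process
        noise variance q, measurement noise variance r. Returns the
        filtered estimates and the final error variance."""
        x, p = x0, p0
        out = []
        for zk in z:
            # Predict step
            x = a * x
            p = a * p * a + q
            # Update step with measurement zk
            k = p / (p + r)
            x = x + k * (zk - x)
            p = (1.0 - k) * p
            out.append(x)
        return np.array(out), p

    # Noisy measurements of a steady contaminant concentration of 1.0
    rng = np.random.default_rng(0)
    z = 1.0 + 0.5 * rng.normal(size=300)
    est, p_final = kalman_1d(z)
    ```

    The filtered estimate should fluctuate around the true level far less than the raw measurements do, which is the property the full (and much larger) transport-model filter exploits.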

  3. A Gaussian-based rank approximation for subspace clustering

    NASA Astrophysics Data System (ADS)

    Xu, Fei; Peng, Chong; Hu, Yunhong; He, Guoping

    2018-04-01

    Low-rank representation (LRR) has been shown to be successful in seeking low-rank structures of data relationships in a union of subspaces. Generally, LRR and LRR-based variants need to solve nuclear-norm-based minimization problems. Despite the success of such methods, it has been widely noted that the nuclear norm may not be a good rank approximation because it simply adds all singular values of a matrix together, so that large singular values may dominate the weight. This results in a rank approximation that is far from satisfactory and may degrade the performance of low-rank models based on the nuclear norm. In this paper, we propose a novel nonconvex rank approximation based on the Gaussian distribution function, which has desirable properties that make it a better rank approximation than the nuclear norm. A low-rank model is then proposed based on the new rank approximation, with application to motion segmentation. Experimental results show significant improvements and verify the effectiveness of our method.
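
    The contrast between the nuclear norm and a saturating, Gaussian-type surrogate can be checked numerically; the particular form 1 − exp(−σ²/(2γ²)) used below is our illustrative assumption, not necessarily the paper's exact function.

    ```python
    import numpy as np

    def nuclear_norm(A):
        """Sum of all singular values; large values dominate."""
        return np.linalg.svd(A, compute_uv=False).sum()

    def gaussian_rank(A, gamma=0.5):
        """Saturating rank surrogate: each singular value contributes
        1 - exp(-sigma^2 / (2 gamma^2)), i.e. ~1 when large, ~0 when tiny,
        so the sum approximates the matrix rank."""
        s = np.linalg.svd(A, compute_uv=False)
        return (1.0 - np.exp(-s ** 2 / (2.0 * gamma ** 2))).sum()

    # Rank-2 matrix with one dominant singular value
    A = np.diag([10.0, 5.0, 0.0, 0.0])
    ```

    For this matrix the true rank is 2; the saturating surrogate returns approximately 2, while the nuclear norm returns 15, heavily skewed by the dominant singular value.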

  4. Prospects for immunisation against Marburg and Ebola viruses.

    PubMed

    Geisbert, Thomas W; Bausch, Daniel G; Feldmann, Heinz

    2010-11-01

    For more than 30 years the filoviruses, Marburg virus and Ebola virus, have been associated with periodic outbreaks of hemorrhagic fever that produce severe and often fatal disease. The filoviruses are endemic primarily in resource-poor regions in Central Africa and are also potential agents of bioterrorism. Although no vaccines or antiviral drugs for Marburg or Ebola are currently available, remarkable progress has been made over the last decade in developing candidate preventive vaccines against filoviruses in nonhuman primate models. Due to the generally remote locations of filovirus outbreaks, a single-injection vaccine is desirable. Among the prospective vaccines that have shown efficacy in nonhuman primate models of filoviral hemorrhagic fever, two candidates, one based on a replication-defective adenovirus serotype 5 and the other on a recombinant VSV (rVSV), were shown to provide complete protection to nonhuman primates when administered as a single injection. The rVSV-based vaccine has also shown utility when administered for postexposure prophylaxis against filovirus infections. A VSV-based Ebola vaccine was recently used to manage a potential laboratory exposure. 2010 John Wiley & Sons, Ltd.

  5. Mission Simulation of Space Lidar Measurements for Seasonal and Regional CO2 Variations

    NASA Technical Reports Server (NTRS)

    Kawa, Stephan; Collatz, G. J.; Mao, J.; Abshire, J. B.; Sun, X.; Weaver, C. J.

    2010-01-01

    Results of mission simulation studies are presented for a laser-based atmospheric [82 sounder. The simulations are based on real-time carbon cycle process modeling and data analysis. The mission concept corresponds to the Active Sensing of [82 over Nights, Days, and Seasons (ASCENDS) recommended by the US National Academy of Sciences Decadal Survey of Earth Science and Applications from Space. One prerequisite for meaningful quantitative sensor evaluation is realistic CO2 process modeling across a wide range of scales, i.e., does the model have representative spatial and temporal gradients? Examples of model comparison with data will be shown. Another requirement is a relatively complete description of the atmospheric and surface state, which we have obtained from meteorological data assimilation and satellite measurements from MODIS and [ALIPS0. We use radiative transfer model calculations, an instrument model with representative errors ' and a simple retrieval approach to complete the cycle from "nature" run to "pseudo-data" CO2, Several mission and instrument configuration options are examined/ and the sensitivity to key design variables is shown. We use the simulation framework to demonstrate that within reasonable technological assumptions for the system performance, relatively high measurement precision can be obtained, but errors depend strongly on environmental conditions as well as instrument specifications. Examples are also shown of how the resulting pseudo - measurements might be used to address key carbon cycle science questions.

  6. Assessment of corneal properties based on statistical modeling of OCT speckle.

    PubMed

    Jesus, Danilo A; Iskander, D Robert

    2017-01-01

    A new approach to assess the properties of the corneal micro-structure in vivo, based on statistical modeling of the speckle obtained from Optical Coherence Tomography (OCT), is presented. A number of statistical models were proposed to fit the corneal speckle data obtained from raw OCT images. Short-term changes in corneal properties were studied by inducing corneal swelling, whereas age-related changes were observed by analyzing data from sixty-five subjects aged between twenty-four and seventy-three years. The generalized Gamma distribution was shown to be the best model, in terms of Akaike's Information Criterion, for fitting the OCT corneal speckle. Its parameters showed statistically significant differences (Kruskal-Wallis, p < 0.001) for short-term and age-related corneal changes. In addition, it was observed that age-related changes influence the corneal biomechanical behaviour when corneal swelling is induced. This study shows that the generalized Gamma distribution can be used to model corneal speckle in OCT in vivo, providing complementary quantitative information where the micro-structure of the corneal tissue is of essence.
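
    A minimal version of this model comparison, using SciPy's generalized Gamma distribution and AIC, can be sketched as follows; the simulated speckle parameters are illustrative assumptions, not fitted corneal values.

    ```python
    import numpy as np
    from scipy import stats

    def aic(dist, data, **fit_kwargs):
        """Akaike's Information Criterion for a fitted scipy distribution."""
        params = dist.fit(data, **fit_kwargs)
        loglik = dist.logpdf(data, *params).sum()
        return 2 * len(params) - 2 * loglik

    # Surrogate OCT speckle amplitudes drawn from a generalized Gamma law
    rng = np.random.default_rng(2)
    speckle = stats.gengamma.rvs(a=2.0, c=1.5, scale=1.0, size=2000,
                                 random_state=rng)

    aic_gg = aic(stats.gengamma, speckle, floc=0)  # generalized Gamma fit
    aic_n = aic(stats.norm, speckle)               # Gaussian baseline
    ```

    The model with the lower AIC is preferred; on positively skewed speckle amplitudes, the generalized Gamma fit should beat a Gaussian baseline, mirroring the paper's comparison.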

  7. A multivariate quadrature based moment method for LES based modeling of supersonic combustion

    NASA Astrophysics Data System (ADS)

    Donde, Pratik; Koo, Heeseok; Raman, Venkat

    2012-07-01

    The transported probability density function (PDF) approach is a powerful technique for large eddy simulation (LES) based modeling of scramjet combustors. In this approach, a high-dimensional transport equation for the joint composition-enthalpy PDF needs to be solved. Quadrature based approaches provide deterministic Eulerian methods for solving the joint-PDF transport equation. In this work, it is first demonstrated that the numerical errors associated with LES require special care in the development of PDF solution algorithms. The direct quadrature method of moments (DQMOM) is one quadrature-based approach developed for supersonic combustion modeling. This approach is shown to generate inconsistent evolution of the scalar moments. Further, gradient-based source terms that appear in the DQMOM transport equations are severely underpredicted in LES leading to artificial mixing of fuel and oxidizer. To overcome these numerical issues, a semi-discrete quadrature method of moments (SeQMOM) is formulated. The performance of the new technique is compared with the DQMOM approach in canonical flow configurations as well as a three-dimensional supersonic cavity stabilized flame configuration. The SeQMOM approach is shown to predict subfilter statistics accurately compared to the DQMOM approach.

  8. Modeling of the jack rabbit series of experiments with a temperature based reactive burn model

    NASA Astrophysics Data System (ADS)

    Desbiens, Nicolas

    2017-01-01

    The Jack Rabbit experiments, performed by Lawrence Livermore National Laboratory, focus on detonation wave corner turning and shock desensitization. Indeed, while important for safety or charge design, the behaviour of explosives in these regimes is poorly understood. In this paper, our temperature based reactive burn model is calibrated for LX-17 and compared to the Jack Rabbit data. It is shown that our model can reproduce the corner turning and shock desensitization behaviour of four out of the five experiments.

  9. Small-kernel, constrained least-squares restoration of sampled image data

    NASA Technical Reports Server (NTRS)

    Hazra, Rajeeb; Park, Stephen K.

    1992-01-01

    Following the work of Park (1989), who extended a derivation of the Wiener filter based on the incomplete discrete/discrete model to a more comprehensive end-to-end continuous/discrete/continuous model, it is shown that a derivation of the constrained least-squares (CLS) filter based on the discrete/discrete model can also be extended to this more comprehensive continuous/discrete/continuous model. This results in an improved CLS restoration filter, which can be efficiently implemented as a small-kernel convolution in the spatial domain.
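
    A one-dimensional, circular-convolution sketch of a CLS restoration filter is shown below; the Laplacian smoothness constraint and the blur kernel are illustrative assumptions, and the paper's continuous/discrete/continuous refinement is not reproduced here.

    ```python
    import numpy as np

    def cls_restore(g, h, lam=1e-3):
        """Constrained least-squares restoration under a circular model:
        F = conj(H) G / (|H|^2 + lam |C|^2), with C a discrete Laplacian
        acting as the smoothness constraint."""
        n = g.size
        H = np.fft.fft(h, n)
        c = np.zeros(n)
        c[0], c[1], c[-1] = -2.0, 1.0, 1.0   # constraint (Laplacian) kernel
        C = np.fft.fft(c)
        F = np.conj(H) * np.fft.fft(g) / (np.abs(H) ** 2 + lam * np.abs(C) ** 2)
        return np.real(np.fft.ifft(F))

    # Demo: blur a spike train with a small [0.25, 0.5, 0.25] kernel
    n = 64
    f = np.zeros(n); f[10] = 1.0; f[30] = 1.0
    h = np.zeros(n); h[0], h[1], h[-1] = 0.5, 0.25, 0.25
    g = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(h)))
    restored = cls_restore(g, h, lam=1e-4)
    ```

    Because the kernel is small, the same filter can equivalently be applied as a short spatial-domain convolution, which is the efficiency point the abstract makes.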

  10. Advanced RF Sources Based on Novel Nonlinear Transmission Lines

    DTIC Science & Technology

    2015-01-26

    microwave (HPM) sources. It is also critical to thin film devices and integrated circuits, carbon nanotube based cathodes and interconnects, field emitters ... line model (TLM) in Fig. 6b. Our model is compared with TLM, shown in Fig. 7a. When the interface resistance rc is small, TLM becomes inaccurate...due to current crowding. Fig. 6. (a) Electrical contact including specific interfacial resistivity ρc, and (b) its transmission line model

  11. Evanescent acoustic waves: Production and scattering by resonant targets

    NASA Astrophysics Data System (ADS)

    Osterhoudt, Curtis F.

    Small targets with acoustic resonances which may be excited by incident acoustic plane waves are shown to possess high-Q modes ("organ-pipe" modes) which may be suitable for ocean-based calibration and ranging purposes. The modes are modeled using a double point-source model; this, along with acoustic reciprocity and inversion symmetry, is shown to adequately model the backscattering form functions of the modes at low frequencies. The backscattering form functions are extended to apply to any bistatic acoustic experiment using the targets when the target response is dominated by the modes in question. An interface between two fluids, each approximating an unbounded half-space, has been produced in the laboratory. The fluids have different sound speeds. When sound is incident on this interface beyond the critical angle from within the first fluid, the second fluid exhibits a region dominated by evanescent acoustic energy. Such a system is shown to be a possible laboratory-based proxy for a flat sediment bottom in the ocean, or a sloped (unrippled) bottom in littoral environments. The evanescent sound field is characterized and shown to have complicated features despite the simplicity of its production. Notable among these features is the presence of dips in the soundfield amplitude, or "quasi-nulls". These are proposed to be extremely important when considering the return from ocean-based experiments. The soundfield features are also shown to be accurately predicted and characterized by wavenumber-integration software. The targets which exhibit organ-pipe modes in the free field are shown to also be excited by the evanescent waves, and may be used as soundfield probes when the target returns are well characterized. Alternately, if the soundfield is well known, the target parameters may be extracted from back- or bistatic-scattering experiments in evanescent fields.
It is shown that the spatial decay rate as measured by a probe directly in the evanescent field is half that as measured by backscattering experiments on horizontal and vertical cylinders driven at the fundamental mode, and it is demonstrated that this is explained by the principle of acoustic reciprocity.
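
    The factor-of-two relation between one-way probe decay and two-way backscatter decay can be checked with a toy fit; pure exponentials stand in for the measured fields, and the decay constant is an illustrative assumption.

    ```python
    import numpy as np

    kappa = 3.0                            # evanescent decay constant (illustrative)
    z = np.linspace(0.0, 1.0, 50)
    probe = np.exp(-kappa * z)             # one-way field sampled by a probe
    backscatter = np.exp(-2 * kappa * z)   # two-way (reciprocal) path

    # Fit the decay rates from the log-amplitudes
    slope_probe = np.polyfit(z, np.log(probe), 1)[0]
    slope_back = np.polyfit(z, np.log(backscatter), 1)[0]
    ratio = slope_back / slope_probe       # reciprocity predicts 2
    ```

    The backscattered signal traverses the decaying region twice (out and back), so its fitted decay rate is exactly twice the probe's, which is the reciprocity argument made above.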

  12. Surface Temperature Anomalies Derived from Night Time ASTER Data Corrected for Solar and Topographic Effects, Archuleta County

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This map shows areas of anomalous surface temperature in Alamosa and Saguache Counties identified from ASTER thermal data and a spatially based insolation model. The temperature is calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then calculated using emissivity derived from the ASTER data. The residual temperature, i.e., the temperature due to solar radiation subtracted from the ASTER temperature, was used to identify thermally anomalous areas. Areas with residual temperatures greater than 2σ are classified as ASTER-modeled "very warm modeled surface temperature" and shown in red on the map; areas with residual temperatures between 1σ and 2σ are classified as ASTER-modeled "warm modeled surface temperature" and shown in yellow. The map also includes the locations of shallow temperature survey points, locations of springs or wells with favorable geochemistry, faults, transmission lines, and areas of modeled basement weakness ("fairways").
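
    The sigma-threshold classification described above can be sketched as follows; the helper name and its inputs are hypothetical, and the mean and standard deviation would come from the residual-temperature raster.

    ```python
    import numpy as np

    def classify_residual(residual, mean, std):
        """Label residual temperatures by how many standard deviations
        (sigma) they sit above the mean: > 2 sigma 'very warm',
        1-2 sigma 'warm', otherwise 'background'."""
        z = (np.asarray(residual, dtype=float) - mean) / std
        labels = np.full(z.shape, "background", dtype=object)
        labels[(z > 1.0) & (z <= 2.0)] = "warm"   # yellow on the map
        labels[z > 2.0] = "very warm"             # red on the map
        return labels

    labels = classify_residual([0.5, 1.5, 2.5], mean=0.0, std=1.0)
    ```

    Applied pixel-by-pixel to the residual raster, this reproduces the red/yellow anomaly map.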

  13. Surface Temperature Anomalies Derived from Night Time ASTER Data Corrected for Solar and Topographic Effects, San Miguel County, Colorado

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This map shows areas of anomalous surface temperature in Alamosa and Saguache Counties identified from ASTER thermal data and a spatially based insolation model. The temperature is calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then calculated using emissivity derived from the ASTER data. The residual temperature, i.e., the temperature due to solar radiation subtracted from the ASTER temperature, was used to identify thermally anomalous areas. Areas with residual temperatures greater than 2σ are classified as ASTER-modeled "very warm modeled surface temperature" and shown in red on the map; areas with residual temperatures between 1σ and 2σ are classified as ASTER-modeled "warm modeled surface temperature" and shown in yellow. The map also includes the locations of shallow temperature survey points, locations of springs or wells with favorable geochemistry, faults, transmission lines, and areas of modeled basement weakness ("fairways").

  14. Surface Temperature Anomalies Derived from Night Time ASTER Data Corrected for Solar and Topographic Effects, Fremont County, Colorado

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This map shows areas of anomalous surface temperature in Alamosa and Saguache Counties identified from ASTER thermal data and a spatially based insolation model. The temperature is calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then calculated using emissivity derived from the ASTER data. The residual temperature, i.e., the temperature due to solar radiation subtracted from the ASTER temperature, was used to identify thermally anomalous areas. Areas with residual temperatures greater than 2σ are classified as ASTER-modeled "very warm modeled surface temperature" and shown in red on the map; areas with residual temperatures between 1σ and 2σ are classified as ASTER-modeled "warm modeled surface temperature" and shown in yellow. The map also includes the locations of shallow temperature survey points, locations of springs or wells with favorable geochemistry, faults, transmission lines, and areas of modeled basement weakness ("fairways").

  15. Surface Temperature Anomalies Derived from Night Time ASTER Data Corrected for Solar and Topographic Effects, Routt County, Colorado

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This map shows areas of anomalous surface temperature in Alamosa and Saguache Counties identified from ASTER thermal data and a spatially based insolation model. The temperature is calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then calculated using emissivity derived from the ASTER data. The residual temperature, i.e., the temperature due to solar radiation subtracted from the ASTER temperature, was used to identify thermally anomalous areas. Areas with residual temperatures greater than 2σ are classified as ASTER-modeled "very warm modeled surface temperature" and shown in red on the map; areas with residual temperatures between 1σ and 2σ are classified as ASTER-modeled "warm modeled surface temperature" and shown in yellow. The map also includes the locations of shallow temperature survey points, locations of springs or wells with favorable geochemistry, faults, transmission lines, and areas of modeled basement weakness ("fairways").

  16. Surface Temperature Anomalies Derived from Night Time ASTER Data Corrected for Solar and Topographic Effects, Alamosa and Saguache Counties, Colorado

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This map shows areas of anomalous surface temperature in Alamosa and Saguache Counties identified from ASTER thermal data and a spatially based insolation model. The temperature is calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then calculated using emissivity derived from the ASTER data. The residual temperature, i.e., the temperature due to solar radiation subtracted from the ASTER temperature, was used to identify thermally anomalous areas. Areas with residual temperatures greater than 2σ are classified as ASTER-modeled "very warm modeled surface temperature" and shown in red on the map; areas with residual temperatures between 1σ and 2σ are classified as ASTER-modeled "warm modeled surface temperature" and shown in yellow. The map also includes the locations of shallow temperature survey points, locations of springs or wells with favorable geochemistry, faults, transmission lines, and areas of modeled basement weakness ("fairways").

  17. Surface Temperature Anomalies Derived from Night Time ASTER Data Corrected for Solar and Topographic Effects, Dolores County

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This map shows areas of anomalous surface temperature in Alamosa and Saguache Counties identified from ASTER thermal data and a spatially based insolation model. The temperature is calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then calculated using emissivity derived from the ASTER data. The residual temperature, i.e., the temperature due to solar radiation subtracted from the ASTER temperature, was used to identify thermally anomalous areas. Areas with residual temperatures greater than 2σ are classified as ASTER-modeled "very warm modeled surface temperature" and shown in red on the map; areas with residual temperatures between 1σ and 2σ are classified as ASTER-modeled "warm modeled surface temperature" and shown in yellow. The map also includes the locations of shallow temperature survey points, locations of springs or wells with favorable geochemistry, faults, transmission lines, and areas of modeled basement weakness ("fairways").

  18. A Microstructure-Based Constitutive Model for Superplastic Forming

    NASA Astrophysics Data System (ADS)

    Jafari Nedoushan, Reza; Farzin, Mahmoud; Mashayekhi, Mohammad; Banabic, Dorel

    2012-11-01

    A constitutive model is proposed for simulations of hot metal forming processes. This model is constructed based on the dominant mechanisms that take part in hot forming and includes intergranular deformation, grain boundary sliding, and grain boundary diffusion. A Taylor-type polycrystalline model is used to predict intergranular deformation. Previous works on grain boundary sliding and grain boundary diffusion are extended to derive three-dimensional macroscopic stress-strain rate relationships for each mechanism. In these relationships, the effect of grain size is also taken into account. The proposed model is first used to simulate step strain-rate tests, and the results are compared with experimental data. It is shown that the model can be used to predict flow stresses for various grain sizes and strain rates. The yield locus is then predicted for multiaxial stress states, and it is observed to lie very close to the von Mises yield criterion. It is also shown that the proposed model can be directly used to simulate hot forming processes. The bulge forming process and gas pressure tray forming are simulated, and the results are compared with experimental data.

  19. Genomic selection models double the accuracy of predicted breeding values for bacterial cold water disease resistance compared to a traditional pedigree-based model in rainbow trout aquaculture

    USDA-ARS?s Scientific Manuscript database

    Previously we have shown that bacterial cold water disease (BCWD) resistance in rainbow trout can be improved using traditional family-based selection, but progress has been limited to exploiting only between-family genetic variation. Genomic selection (GS) is a new alternative enabling exploitation...

  20. Analysis of habitat-selection rules using an individual-based model

    Treesearch

    Steven F. Railsback; Bret C. Harvey

    2002-01-01

    Abstract - Despite their promise for simulating natural complexity, individual-based models (IBMs) are rarely used for ecological research or resource management. Few IBMs have been shown to reproduce realistic patterns of behavior by individual organisms. To test our IBM of stream salmonids and draw conclusions about foraging theory, we analyzed the IBM's ability to...

  1. Computer Synthesis Approaches of Hyperboloid Gear Drives with Linear Contact

    NASA Astrophysics Data System (ADS)

    Abadjiev, Valentin; Kawasaki, Haruhisa

    2014-09-01

    Computer-aided design has advanced through different types of software for scientific research in the field of gearing theory, as well as through adequate scientific support of gear-drive manufacture. Computer programs based on mathematical models resulting from this research are presented here. Modern gear transmissions require the construction of new mathematical approaches to their geometric, technological, and strength analysis. The process of optimization, synthesis, and design is based on adequate iteration procedures to find an optimal solution by varying definite parameters. The study is dedicated to the methodology adopted in creating software for the synthesis of a class of high-reduction hyperboloid gears: Spiroid and Helicon gears (Spiroid and Helicon are trademarks registered by the Illinois Tool Works, Chicago, Ill.). The basic computer products developed belong to software based on original mathematical models. They rest on two mathematical models for the synthesis: "upon a pitch contact point" and "upon a mesh region". Computer programs are worked out on the basis of the described mathematical models, and the relations between them are shown. The application of these approaches to the synthesis of the discussed gear drives is illustrated.

  2. Cellular-based modeling of oscillatory dynamics in brain networks.

    PubMed

    Skinner, Frances K

    2012-08-01

    Oscillatory, population activities have long been known to occur in our brains during different behavioral states. We know that many different cell types exist and that they contribute in distinct ways to the generation of these activities. I review recent papers that involve cellular-based models of brain networks, most of which include theta, gamma and sharp wave-ripple activities. To help organize the modeling work, I present it from a perspective of three different types of cellular-based modeling: 'Generic', 'Biophysical' and 'Linking'. Cellular-based modeling is taken to encompass the four features of experiment, model development, theory/analyses, and model usage/computation. The three modeling types are shown to include these features and interactions in different ways. Copyright © 2012 Elsevier Ltd. All rights reserved.

  3. Micromolecular modeling

    NASA Technical Reports Server (NTRS)

    Guillet, J. E.

    1984-01-01

    A reaction-kinetics-based model of the photodegradation process, which measures all important rate constants, and a computerized model capable of predicting the photodegradation rate and failure modes over a 30-year period were developed. It is shown that the computerized photodegradation model for polyethylene correctly predicts failure of ELVAX 15 and cross-linked ELVAX 150 on outdoor exposure. It is indicated that cross-linking ethylene vinyl acetate (EVA) does not significantly change its degradation rate. It is shown that the effect of the stabilizer package is approximately equivalent on both polymers. The computerized model indicates that peroxide decomposers and UV absorbers are the most effective stabilizers. It is found that a combination of UV absorbers and a hindered amine light stabilizer (HALS) is the most effective stabilizer system.

  4. Composite Stress Rupture: A New Reliability Model Based on Strength Decay

    NASA Technical Reports Server (NTRS)

    Reeder, James R.

    2012-01-01

    A model is proposed to estimate reliability for stress rupture of composite overwrap pressure vessels (COPVs) and similar composite structures. This new reliability model is generated by assuming a strength degradation (or decay) over time. The model suggests that most of the strength decay occurs late in life. The strength decay model will be shown to predict a response similar to that predicted by a traditional reliability model for stress rupture based on tests at a single stress level. In addition, the model predicts that even though there is strength decay due to proof loading, a significant overall increase in reliability is gained by eliminating any weak vessels, which would fail early. The model predicts that there should be significant periods of safe life following proof loading, because time is required for the strength to decay from the proof stress level to the subsequent loading level. Suggestions for testing the strength decay reliability model have been made. If the strength decay reliability model predictions are shown through testing to be accurate, COPVs may be designed to carry a higher level of stress than is currently allowed, which will enable the production of lighter structures.
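
    The proof-screening argument can be illustrated with a toy Monte Carlo under an assumed late-life decay shape S(t) = S0(1 − (t/T)^p); the strength distribution and all parameters below are illustrative assumptions, not the paper's calibration.

    ```python
    import numpy as np

    def reliability(s0, stress, t, T=1.0, p=8):
        """Fraction of vessels surviving at time t under the strength
        decay S(t) = S0 * (1 - (t/T)**p), a shape that concentrates
        most of the decay late in life (large p)."""
        s = s0 * (1.0 - (t / T) ** p)
        return float(np.mean(s > stress))

    rng = np.random.default_rng(0)
    s0 = rng.lognormal(mean=0.1, sigma=0.1, size=100_000)  # initial strengths
    proof = 1.0    # proof stress
    oper = 0.9     # subsequent operating stress

    screened = s0[s0 > proof]          # vessels that survived proof loading
    r_all = reliability(s0, oper, t=0.5)
    r_screened = reliability(screened, oper, t=0.5)
    ```

    Screening removes the weak tail of the strength distribution, so the proof-tested population shows higher reliability at the same time and stress, matching the model's qualitative prediction.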

  5. Data Driven Model Development for the Supersonic Semispan Transport (S(sup 4)T)

    NASA Technical Reports Server (NTRS)

    Kukreja, Sunil L.

    2011-01-01

    We investigate two common approaches to model development for robust control synthesis in the aerospace community: reduced-order aeroservoelastic modelling based on structural finite-element and computational fluid dynamics based aerodynamic models, and a data-driven system identification procedure. It is shown, via analysis of experimental SuperSonic SemiSpan Transport (S4T) wind-tunnel data using a system identification approach, that it is possible to estimate a model at a fixed Mach number which is parsimonious and robust across varying dynamic pressures.
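
    A data-driven identification step of the kind described, reduced to a first-order ARX model fitted by least squares, can be sketched as follows; the simulated system and noise level are illustrative assumptions, not the S4T dynamics.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 500
    u = rng.normal(size=n)               # excitation input
    y = np.zeros(n)
    a_true, b_true = 0.8, 0.5            # "plant" parameters to recover
    for k in range(1, n):
        y[k] = a_true * y[k - 1] + b_true * u[k - 1] + 0.01 * rng.normal()

    # Least-squares fit of the ARX model y[k] = a y[k-1] + b u[k-1]
    X = np.column_stack([y[:-1], u[:-1]])
    a_hat, b_hat = np.linalg.lstsq(X, y[1:], rcond=None)[0]
    ```

    The regression recovers a parsimonious two-parameter model directly from input-output data, the same spirit as identifying a fixed-Mach model from wind-tunnel records.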

  6. DEVS representation of dynamical systems - Event-based intelligent control. [Discrete Event System Specification

    NASA Technical Reports Server (NTRS)

    Zeigler, Bernard P.

    1989-01-01

    It is shown how systems can be advantageously represented as discrete-event models by using DEVS (discrete-event system specification), a set-theoretic formalism. Such DEVS models provide a basis for the design of event-based logic control. In this control paradigm, the controller expects to receive confirming sensor responses to its control commands within definite time windows determined by its DEVS model of the system under control. The event-based control paradigm is applied in advanced robotic and intelligent automation, showing how classical process control can be readily interfaced with rule-based symbolic reasoning systems.
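
    The time-window idea of event-based control can be sketched with a minimal, DEVS-flavored checker; the class and its event format are our illustrative assumptions, not Zeigler's formalism in full.

    ```python
    class EventWindowController:
        """After issuing a command, expect a confirming sensor event
        within [t_min, t_max]; anything outside that window (or an
        unsolicited event) signals a fault."""

        def __init__(self, t_min, t_max):
            self.t_min, self.t_max = t_min, t_max
            self.t_cmd = None

        def command(self, t):
            self.t_cmd = t              # external transition: command issued

        def sensor(self, t):
            """Classify a confirming sensor event arriving at time t."""
            if self.t_cmd is None:
                return "unexpected"
            dt = t - self.t_cmd
            self.t_cmd = None
            if self.t_min <= dt <= self.t_max:
                return "ok"
            return "fault"

    ctrl = EventWindowController(t_min=0.1, t_max=1.0)
    ctrl.command(0.0)
    status = ctrl.sensor(0.4)           # confirmation inside the window
    ```

    A sensor response outside the model-determined window is exactly the condition that would be escalated to a rule-based diagnostic layer.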

  7. Particle-based membrane model for mesoscopic simulation of cellular dynamics

    NASA Astrophysics Data System (ADS)

    Sadeghi, Mohsen; Weikl, Thomas R.; Noé, Frank

    2018-01-01

    We present a simple and computationally efficient coarse-grained and solvent-free model for simulating lipid bilayer membranes. In order to be used in concert with particle-based reaction-diffusion simulations, the model is purely based on interacting and reacting particles, each representing a coarse patch of a lipid monolayer. Particle interactions include nearest-neighbor bond-stretching and angle-bending and are parameterized so as to reproduce the local membrane mechanics given by the Helfrich energy density over a range of relevant curvatures. In-plane fluidity is implemented with Monte Carlo bond-flipping moves. The physical accuracy of the model is verified by five tests: (i) Power spectrum analysis of equilibrium thermal undulations is used to verify that the particle-based representation correctly captures the dynamics predicted by the continuum model of fluid membranes. (ii) It is verified that the input bending stiffness, against which the potential parameters are optimized, is accurately recovered. (iii) Isothermal area compressibility modulus of the membrane is calculated and is shown to be tunable to reproduce available values for different lipid bilayers, independent of the bending rigidity. (iv) Simulation of two-dimensional shear flow under a gravity force is employed to measure the effective in-plane viscosity of the membrane model and show the possibility of modeling membranes with specified viscosities. (v) Interaction of the bilayer membrane with a spherical nanoparticle is modeled as a test case for large membrane deformations and budding involved in cellular processes such as endocytosis. The results are shown to coincide well with the predicted behavior of continuum models, and the membrane model successfully mimics the expected budding behavior. We expect our model to be of high practical usability for ultra coarse-grained molecular dynamics or particle-based reaction-diffusion simulations of biological systems.

  8. Simple nonlinear modelling of earthquake response in torsionally coupled R/C structures: A preliminary study

    NASA Astrophysics Data System (ADS)

    Saiidi, M.

    1982-07-01

    An equivalent single-degree-of-freedom (SDOF) nonlinear model, the Q-Model-13, was examined. The study intended to: (1) determine the seismic response of a torsionally coupled building based on multidegree-of-freedom (MDOF) and SDOF nonlinear models; and (2) develop a simple SDOF nonlinear model to calculate the displacement history of structures with eccentric centers of mass and stiffness. It is shown that planar models are able to yield qualitative estimates of the response of the building. The model is used to estimate the response of a hypothetical six-story frame-wall reinforced concrete building with torsional coupling, using two different earthquake intensities. It is shown that the Q-Model-13 can lead to a satisfactory estimate of the response of the structure in both cases.

  9. A hybrid model for traffic flow and crowd dynamics with random individual properties.

    PubMed

    Schleper, Veronika

    2015-04-01

    Based on an established mathematical model for the behavior of large crowds, a new model is derived that is able to take into account the statistical variation of individual maximum walking speeds. The same model is shown to be valid also in traffic flow situations, where for instance the statistical variation of preferred maximum speeds can be considered. The model involves explicit bounds on the state variables, such that a special Riemann solver is derived that is proved to respect the state constraints. Some care is devoted to a valid construction of random initial data, necessary for the use of the new model. The article also includes a numerical method that is shown to respect the bounds on the state variables and illustrative numerical examples, explaining the properties of the new model in comparison with established models.

  10. A Model Based Mars Climate Database for the Mission Design

    NASA Technical Reports Server (NTRS)

    2005-01-01

    A viewgraph presentation on a model-based climate database is shown. The topics include: 1) Why a model-based climate database?; 2) Mars Climate Database v3.1: who uses it? (approx. 60 users!); 3) The new Mars Climate Database MCD v4.0; 4) MCD v4.0: what's new?; 5) Simulation of water ice clouds; 6) Simulation of the water ice cycle; 7) A new tool for surface pressure prediction; 8) Access to the database MCD 4.0; 9) How to access the database; and 10) New web access.

  11. Temperature variation effects on stochastic characteristics for low-cost MEMS-based inertial sensor error

    NASA Astrophysics Data System (ADS)

    El-Diasty, M.; El-Rabbany, A.; Pagiatakis, S.

    2007-11-01

    We examine the effect of varying temperature on MEMS inertial sensors' noise models using Allan variance and least-squares spectral analysis (LSSA). Allan variance is a method of representing root-mean-square random drift error as a function of averaging time. LSSA is an alternative to the classical Fourier methods and has been applied successfully by a number of researchers in the study of the noise characteristics of experimental series. Static data sets are collected at different temperature points using two MEMS-based IMUs, namely the MotionPakII and the Crossbow AHRS300CC. The performance of the two MEMS inertial sensors is predicted from the Allan variance estimation results at different temperature points, and the LSSA is used to study the noise characteristics and define the sensors' stochastic model parameters. It is shown that the stochastic characteristics of MEMS-based inertial sensors can be identified using Allan variance estimation and LSSA, and that the sensors' stochastic model parameters are temperature dependent. Also, a Kaiser-window FIR low-pass filter is used to investigate the effect of the de-noising stage on the stochastic model. It is shown that the stochastic model is also dependent on the chosen cut-off frequency.
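    The Allan variance computation used in such analyses can be sketched as follows. The estimator is the standard non-overlapping cluster form; the synthetic white-noise gyro signal is illustrative rather than MotionPakII or AHRS300CC data.

```python
import numpy as np

def allan_variance(rate, fs, m):
    """Non-overlapping Allan variance of `rate` (sampled at `fs` Hz) for a
    cluster size of m samples, i.e. an averaging time tau = m / fs."""
    tau = m / fs
    n_clusters = len(rate) // m
    # Average the signal over consecutive, non-overlapping clusters.
    means = rate[: n_clusters * m].reshape(n_clusters, m).mean(axis=1)
    # Allan variance: half the mean squared difference of successive cluster means.
    return tau, 0.5 * np.mean(np.diff(means) ** 2)

# Synthetic gyro output containing only white noise (angle random walk): the
# Allan deviation should then fall off as 1/sqrt(tau).
rng = np.random.default_rng(1)
fs = 100.0
rate = rng.normal(0.0, 0.1, size=400_000)

tau1, av1 = allan_variance(rate, fs, 100)    # tau = 1 s
tau2, av2 = allan_variance(rate, fs, 1000)   # tau = 10 s
print(np.sqrt(av1) / np.sqrt(av2))           # near sqrt(10) for white noise
```

    On real sensor data, the slope of log Allan deviation versus log tau identifies the dominant noise process (angle random walk, bias instability, rate random walk) at each averaging time.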

  12. Ultrasound thermography: A new temperature reconstruction model and in vivo results

    NASA Astrophysics Data System (ADS)

    Bayat, Mahdi; Ballard, John R.; Ebbini, Emad S.

    2017-03-01

    The recursive echo strain filter (RESF) model is presented as a new echo-shift-based ultrasound temperature estimation model. The model is shown to have an infinite impulse response (IIR) filter realization of a differentiator-integrator operator. This model is then used for tracking sub-therapeutic temperature changes due to high intensity focused ultrasound (HIFU) shots in the hind limb of Copenhagen rats in vivo. In addition to the reconstruction filter, a motion compensation method is presented which takes advantage of the deformation field outside the region of interest to correct motion errors during temperature tracking. The combination of the RESF model and the motion compensation algorithm is shown to greatly enhance the accuracy of in vivo temperature estimation using ultrasound echo shifts.

  13. On the use of the energy probability distribution zeros in the study of phase transitions

    NASA Astrophysics Data System (ADS)

    Mól, L. A. S.; Rodrigues, R. G. M.; Stancioli, R. A.; Rocha, J. C. S.; Costa, B. V.

    2018-04-01

    This contribution covers some technical aspects related to the use of the recently proposed energy probability distribution zeros in the study of phase transitions. The method is based on partial knowledge of the partition function zeros and has been shown to be extremely efficient at precisely locating phase transition temperatures. It relies on an iterative scheme in which the transition temperature can be approached to any desired precision. The iterative method is detailed, and some convergence issues that have been observed in its application to the 2D Ising model and to an artificial spin ice model are shown, together with ways to circumvent them.

  14. Mathematical models for non-parametric inferences from line transect data

    USGS Publications Warehouse

    Burnham, K.P.; Anderson, D.R.

    1976-01-01

    A general mathematical theory of line transects is developed which supplies a framework for nonparametric density estimation based on either right-angle or sighting distances. The probability of observing a point given its right-angle distance (y) from the line is generalized to an arbitrary function g(y). Given only that g(0) = 1, it is shown that there are nonparametric approaches to density estimation using the observed right-angle distances. The model is then generalized to include sighting distances (r). Let f(y | r) be the conditional distribution of right-angle distance given sighting distance. It is shown that nonparametric estimation based only on sighting distances requires that we know the transformation of r given by f(0 | r).
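    A nonparametric estimate of f(0) of the kind this framework supports can be sketched with a cosine-series estimator (one classical choice in the line-transect literature); the truncation width w, series length m, transect length L, and survey numbers below are all illustrative.

```python
import numpy as np

def fourier_f0(y, w, m=4):
    """Nonparametric Fourier-series estimate of f(0), the density of
    right-angle distances evaluated at the line, with truncation width w."""
    n = len(y)
    k = np.arange(1, m + 1)
    # Cosine-series coefficients a_k estimated from the observed distances.
    a_k = (2.0 / (n * w)) * np.cos(np.outer(k, y) * np.pi / w).sum(axis=1)
    return 1.0 / w + a_k.sum()

# Synthetic survey: detection falls off as a half-normal g(y) with g(0) = 1,
# truncated at w = 3, so f(0) = 1 / integral_0^w exp(-y^2 / 2) dy (about 0.80).
rng = np.random.default_rng(2)
y = np.abs(rng.normal(0.0, 1.0, size=20_000))
y = y[y < 3.0][:5_000]

f0_hat = fourier_f0(y, w=3.0)
n, L = len(y), 50.0                  # L: total transect length (illustrative)
D_hat = n * f0_hat / (2.0 * L)       # classic line-transect density estimator
print(f"f(0) estimate: {f0_hat:.3f}")
```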

  15. Effect of Pt Doping on Nucleation and Crystallization in Li2O.2SiO2 Glass: Experimental Measurements and Computer Modeling

    NASA Technical Reports Server (NTRS)

    Narayan, K. Lakshmi; Kelton, K. F.; Ray, C. S.

    1996-01-01

    Heterogeneous nucleation and its effects on the crystallization of lithium disilicate glass containing small amounts of Pt are investigated. Measurements of the nucleation frequencies and induction times with and without Pt are shown to be consistent with predictions based on the classical nucleation theory. A realistic computer model for the transformation is presented. Computed differential thermal analysis data (such as crystallization rates as a function of time and temperature) are shown to be in good agreement with experimental results. This modeling provides a new, more quantitative method for analyzing calorimetric data.

  16. Designing for Damage: Robust Flight Control Design using Sliding Mode Techniques

    NASA Technical Reports Server (NTRS)

    Vetter, T. K.; Wells, S. R.; Hess, Ronald A.; Bacon, Barton (Technical Monitor); Davidson, John (Technical Monitor)

    2002-01-01

    A brief review of sliding mode control is undertaken, with particular emphasis upon the effects of neglected parasitic dynamics. Sliding mode control design is interpreted in the frequency domain. The inclusion of asymptotic observers and control 'hedging' is shown to reduce the effects of neglected parasitic dynamics. An investigation into the application of observer-based sliding mode control to the robust longitudinal control of a highly unstable aircraft is described. The sliding mode controller is shown to exhibit stability and performance robustness superior to that of a classical loop-shaped design when significant changes in vehicle and actuator dynamics are employed to model airframe damage.

  17. Comparing Mapped Plot Estimators

    Treesearch

    Paul C. Van Deusen

    2006-01-01

    Two alternative derivations of estimators for mean and variance from mapped plots are compared by considering the models that support the estimators and by simulation. It turns out that both models lead to the same estimator for the mean but lead to very different variance estimators. The variance estimators based on the least valid model assumptions are shown to...

  18. Understanding of Relation Structures of Graphical Models by Lower Secondary Students

    ERIC Educational Resources Information Center

    van Buuren, Onne; Heck, André; Ellermeijer, Ton

    2016-01-01

    A learning path has been developed on system dynamical graphical modelling, integrated into the Dutch lower secondary physics curriculum. As part of the developmental research for this learning path, students' understanding of the relation structures shown in the diagrams of graphical system dynamics based models has been investigated. One of our…

  19. Thermodynamic analysis of biofuels as fuels for high temperature fuel cells

    NASA Astrophysics Data System (ADS)

    Milewski, Jarosław; Bujalski, Wojciech; Lewandowski, Janusz

    2011-11-01

    Based on mathematical modeling and numerical simulations, the influence of various biofuels on high temperature fuel cell performance is presented. Governing equations of high temperature fuel cell modeling are given. Appropriate simulators of both the solid oxide fuel cell (SOFC) and the molten carbonate fuel cell (MCFC) have been developed and are described. The performance of these fuel cells with different biofuels is shown; selected characteristics are given and discussed. Advantages and disadvantages of the various biofuels from the system performance point of view are pointed out. An analysis of various biofuels as potential fuels for SOFC and MCFC is presented, with results compared against both methane and hydrogen as reference fuels. The biofuels are characterized by both lower efficiency and lower fuel utilization factors compared with methane. The presented results are based on a 0D mathematical model in design point calculations; the governing equations of the model are also presented. A technical and financial analysis of high temperature fuel cells (SOFC and MCFC) is shown. High temperature fuel cells can be fed by biofuels such as biogas, bioethanol, and biomethanol. Operational costs and possible incomes of those installation types were estimated and analyzed, and a comparison against classic power generation units is shown. A basic indicator, net present value (NPV), was estimated for the projects and discussed.
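    The NPV indicator mentioned above discounts each year's net cash flow to the present. A minimal sketch with invented cash flows (the paper's cost and income figures are not reproduced here):

```python
# Minimal NPV sketch for a fuel-cell installation (all figures invented).
capex = 1_000_000.0          # initial investment
annual_income = 150_000.0    # income from electricity and heat sales
annual_opex = 40_000.0       # fuel and maintenance costs
r, years = 0.08, 15          # discount rate and project lifetime

# NPV = -capex + sum of discounted net cash flows over the project lifetime.
npv = -capex + sum((annual_income - annual_opex) / (1.0 + r) ** t
                   for t in range(1, years + 1))
print(f"NPV: {npv:,.0f}")    # negative NPV flags an uneconomic project
```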

  20. Thermodynamic analysis of biofuels as fuels for high temperature fuel cells

    NASA Astrophysics Data System (ADS)

    Milewski, Jarosław; Bujalski, Wojciech; Lewandowski, Janusz

    2013-02-01

    Based on mathematical modeling and numerical simulations, the influence of various biofuels on high temperature fuel cell performance is presented. Governing equations of high temperature fuel cell modeling are given. Appropriate simulators of both the solid oxide fuel cell (SOFC) and the molten carbonate fuel cell (MCFC) have been developed and are described. The performance of these fuel cells with different biofuels is shown; selected characteristics are given and discussed. Advantages and disadvantages of the various biofuels from the system performance point of view are pointed out. An analysis of various biofuels as potential fuels for SOFC and MCFC is presented, with results compared against both methane and hydrogen as reference fuels. The biofuels are characterized by both lower efficiency and lower fuel utilization factors compared with methane. The presented results are based on a 0D mathematical model in design point calculations; the governing equations of the model are also presented. A technical and financial analysis of high temperature fuel cells (SOFC and MCFC) is shown. High temperature fuel cells can be fed by biofuels such as biogas, bioethanol, and biomethanol. Operational costs and possible incomes of those installation types were estimated and analyzed, and a comparison against classic power generation units is shown. A basic indicator, net present value (NPV), was estimated for the projects and discussed.

  1. Impact of different NWM-derived mapping functions on VLBI and GPS analysis

    NASA Astrophysics Data System (ADS)

    Nikolaidou, Thalia; Balidakis, Kyriakos; Nievinski, Felipe; Santos, Marcelo; Schuh, Harald

    2018-06-01

    In recent years, numerical weather models (NWMs) have shown the potential to provide a good representation of the electrically neutral atmosphere, a fact that has been exploited in the modeling of space geodetic observations. The Vienna Mapping Functions 1 (VMF1) are the NWM-based mapping functions recommended by the latest IERS Conventions; they are produced every 6 hours from the European Centre for Medium-Range Weather Forecasts operational model. The UNB-VMF1 provide meteorological parameters aiding neutral atmosphere modeling for VLBI and GNSS based on the same concept, but utilize the Canadian Meteorological Centre model. This study presents comparisons between the VMF1 and the UNB-VMF1 in both the delay and the position domain, using global networks of VLBI and GPS stations. It is shown that the zenith delays agree to better than 3.5 mm (hydrostatic) and 20 mm (wet), which implies an equivalent predicted height error of less than 2 mm. In the position domain, comparison of the weighted root-mean-square error (wrms) of the height component in the VLBI analysis showed a maximum difference of 1.7 mm. For 48% of the stations, the use of VMF1 reduced the height wrms by 2.6% on average, compared with a reduction of 1.7% for the 41% of stations employing the UNB-VMF1. For the subset of VLBI stations participating in a large number of sessions, neither mapping function outranked the other. GPS analysis using Precise Point Positioning showed a sub-millimetre difference between the two, while the wrms of the individual solutions had a maximum value of 12 mm for the year-long analysis. A clear advantage of one NWM over the other was not shown, and the statistics proved that the two mapping functions yield equal results in geodetic analysis.

  2. Kinetics of DSB rejoining and formation of simple chromosome exchange aberrations

    NASA Technical Reports Server (NTRS)

    Cucinotta, F. A.; Nikjoo, H.; O'Neill, P.; Goodhead, D. T.

    2000-01-01

    PURPOSE: To investigate the role of kinetics in the processing of DNA double strand breaks (DSB) and the formation of simple chromosome exchange aberrations following X-ray exposure of mammalian cells, based on an enzymatic approach. METHODS: Using computer simulations based on a biochemical approach, rate equations were formulated that describe the processing of DSB through the formation of a DNA-enzyme complex. A second model that allows for competition between two processing pathways was also formulated. The formation of simple exchange aberrations was modelled as misrepair during the recombination of single DSB with undamaged DNA. Non-linear coupled differential equations corresponding to the biochemical pathways were solved numerically by fitting to experimental data. RESULTS: When mediated by a DSB repair enzyme complex, the processing of single DSB showed a complex behaviour that gives the appearance of fast and slow components of rejoining. This is due to the time delay caused by the action time of enzymes in biomolecular reactions. It is shown that the kinetic and dose responses of simple chromosome exchange aberrations are well described by a recombination model of DSB interacting with undamaged DNA when aberration formation increases with linear dose dependence. Competition between two or more recombination processes is shown to lead to the formation of simple exchange aberrations with a dose dependence similar to that of a linear-quadratic model. CONCLUSIONS: Using a minimal number of assumptions, the kinetics and dose responses observed experimentally for DSB rejoining and the formation of simple chromosome exchange aberrations are shown to be consistent with kinetic models based on enzymatic reaction approaches. A non-linear dose response for simple exchange aberrations is possible in a model of recombination of DNA containing a DSB with undamaged DNA when two or more pathways compete for DSB repair.
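    The single-pathway scheme can be sketched as a pair of coupled rate equations for free breaks B and DNA-enzyme complexes C, with the enzyme pool conserved; the rate constants below are assumed for illustration, not the paper's fitted values. Saturation of the enzyme pool is what produces the apparent fast and slow rejoining phases.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal enzymatic DSB-processing scheme (illustrative rates, not the paper's):
# break B + repair enzyme E <-> complex C -> repaired.
k_on, k_off, k_cat = 0.5, 0.05, 0.1   # per-hour rate constants (assumed)
E_total = 20.0                        # total enzyme pool
B0 = 100.0                            # initial number of DSBs

def rhs(t, y):
    B, C = y
    E_free = E_total - C              # enzyme conservation
    return [-k_on * B * E_free + k_off * C,
            k_on * B * E_free - (k_off + k_cat) * C]

sol = solve_ivp(rhs, (0.0, 48.0), [B0, 0.0], dense_output=True, rtol=1e-8)

# Unrejoined breaks = free breaks plus those still held in complexes.
t = np.linspace(0.0, 48.0, 5)
unrejoined = sol.sol(t)[0] + sol.sol(t)[1]
print(unrejoined)  # monotone decay; enzyme saturation gives the slow phase
```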

  3. Family of new operations equivalency of neuro-fuzzy logic: optoelectronic realization and applications

    NASA Astrophysics Data System (ADS)

    Krasilenko, Vladimir G.; Nikolsky, Alexander I.; Yatskovsky, Victor I.; Ogorodnik, K. V.; Lischenko, Sergey

    2002-07-01

    Perspectives for neural network equivalental models (EMs), based on vector-matrix procedures with basic operations of continuous and neuro-fuzzy logic (equivalence, absolute difference), are shown. The storage capacity of EM-based networks exceeds the number of neurons by a factor of about 2.5, which is larger than in other neural network paradigms, and such networks may contain 10,000-20,000 neurons. The basic operations in EMs are normalized equivalency operations. A family of new equivalency and non-equivalency operations of neuro-fuzzy logic is shown, which we have elaborated on the basis of such generalized fuzzy-logic operations as fuzzy negation, t-norm, and s-norm. Generalized rules for constructing new equivalency functions (operations), which use the relations of the t-norm and s-norm to fuzzy negation, are proposed. Among the required elements the following should be underlined: (1) an element which fulfills the operation of limited difference; (2) an element which forms an algebraic product (an amplifier with controlled transmission coefficient, or an analog-signal multiplier); and (3) an element which performs summation (union) of signals, including during normalization. Synthesized structures which realize, on the basis of these elements, the whole spectrum of required operations (t-norm, s-norm, and the new equivalency operations) are shown. Their realizations on the basis of new multifunctional optoelectronic BISPIN devices (MOEBD) are circuits with constant and pulsed optical input signals; they model the limited-difference operation and realize frequency-dynamic neuron models and neural networks. Experimental results for these MOEBD and equivalency circuits, which fulfill the limited-difference operation, are discussed.
    For effective realization of neural networks on the basis of EMs, as shown in the report, picture elements are required as the main nodes implementing the elementwise equivalence ('non-equivalence') operations of neuro-fuzzy logic.
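    One plausible instantiation of the normalized equivalency operations described above, built from the limited-difference element; the paper's generalized family admits other t-norm/s-norm choices, so this is an assumed, minimal variant.

```python
import numpy as np

def bounded_diff(x, y):
    """The 'limited difference' element: max(0, x - y)."""
    return np.maximum(0.0, x - y)

def nonequivalence(x, y):
    # |x - y| assembled from two limited differences and a bounded sum.
    return np.minimum(1.0, bounded_diff(x, y) + bounded_diff(y, x))

def equivalence(x, y):
    return 1.0 - nonequivalence(x, y)

def normalized_equivalence(a, b):
    """Scalar 'normalized equivalency' of two fuzzy vectors: the mean
    elementwise equivalence (one simple normalization choice)."""
    return equivalence(a, b).mean()

a = np.array([0.9, 0.1, 0.5, 0.7])
print(normalized_equivalence(a, a))        # identical patterns give 1.0
print(normalized_equivalence(a, 1.0 - a))  # complementary patterns score lower
```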

  4. Model-Based Reasoning in Humans Becomes Automatic with Training.

    PubMed

    Economides, Marcos; Kurth-Nelson, Zeb; Lübbert, Annika; Guitart-Masip, Marc; Dolan, Raymond J

    2015-09-01

    Model-based and model-free reinforcement learning (RL) have been suggested as algorithmic realizations of goal-directed and habitual action strategies. Model-based RL is more flexible than model-free but requires sophisticated calculations using a learnt model of the world. This has led model-based RL to be identified with slow, deliberative processing, and model-free RL with fast, automatic processing. In support of this distinction, it has recently been shown that model-based reasoning is impaired by placing subjects under cognitive load--a hallmark of non-automaticity. Here, using the same task, we show that cognitive load does not impair model-based reasoning if subjects receive prior training on the task. This finding is replicated across two studies and a variety of analysis methods. Thus, task familiarity permits use of model-based reasoning in parallel with other cognitive demands. The ability to deploy model-based reasoning in an automatic, parallelizable fashion has widespread theoretical implications, particularly for the learning and execution of complex behaviors. It also suggests a range of important failure modes in psychiatric disorders.
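    The model-based versus model-free distinction can be made concrete with a sketch in the spirit of the two-step task used in these studies; the transition matrix, reward estimates, and mixing weight below are all illustrative, not the authors' fitted values.

```python
import numpy as np

# Two-step-task-style sketch (illustrative numbers): stage-1 actions lead
# probabilistically to one of two stage-2 states with current reward estimates.
T = np.array([[0.7, 0.3],     # P(stage-2 state | action 0)
              [0.3, 0.7]])    # P(stage-2 state | action 1)
R = np.array([0.2, 0.8])      # reward estimates of the two stage-2 states

# Model-free value: whatever the cached stage-1 Q-values happen to be,
# e.g. because action 0 was rewarded on recent trials.
Q_mf = np.array([0.6, 0.4])

# Model-based value: computed on the fly from the transition model.
Q_mb = T @ R                  # expected stage-2 reward per stage-1 action

w = 0.7                       # weight on model-based control
Q = w * Q_mb + (1.0 - w) * Q_mf
print(Q_mb, Q)                # model-based evaluation favors action 1 here
```

    In this toy setting the model-based values reverse the model-free preference, which is exactly the kind of divergence the two-step task is designed to expose.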

  5. Interactive Inverse Groundwater Modeling - Addressing User Fatigue

    NASA Astrophysics Data System (ADS)

    Singh, A.; Minsker, B. S.

    2006-12-01

    This paper builds on ongoing research on developing an interactive and multi-objective framework to solve the groundwater inverse problem. In this work we solve the classic groundwater inverse problem of estimating a spatially continuous conductivity field, given field measurements of hydraulic heads. The proposed framework is based on an interactive multi-objective genetic algorithm (IMOGA) that not only considers quantitative measures such as calibration error and degree of regularization, but also takes into account expert knowledge about the structure of the underlying conductivity field, expressed as subjective rankings of potential conductivity fields by the expert. The IMOGA converges to the optimal Pareto front representing the best trade-off among the qualitative as well as quantitative objectives. However, since the IMOGA is a population-based iterative search, it requires the user to evaluate hundreds of solutions. This leads to the problem of 'user fatigue'. We propose a two-step methodology to combat user fatigue in such interactive systems. The first step is choosing only a few highly representative solutions to be shown to the expert for ranking. Spatial clustering is used to group the search space based on the similarity of the conductivity fields. Sampling is then carried out from different clusters to improve the diversity of solutions shown to the user. Once the expert has ranked representative solutions from each cluster, a machine learning model is used to 'learn user preference' and extrapolate these rankings to the solutions not ranked by the expert. We investigate different machine learning models, such as decision trees, Bayesian learning models, and instance-based weighting, to model user preference. In addition, we also investigate ways to improve the performance of these models by providing information about the spatial structure of the conductivity fields (which is what the expert bases his or her ranking on). 
Results are shown for each of these machine learning models and the advantages and disadvantages for each approach are discussed. These results indicate that using the proposed two-step methodology leads to significant reduction in user-fatigue without deteriorating the solution quality of the IMOGA.

  6. Flash flood forecasting using simplified hydrological models, radar rainfall forecasts and data assimilation

    NASA Astrophysics Data System (ADS)

    Smith, P. J.; Beven, K.; Panziera, L.

    2012-04-01

    The issuing of timely flood alerts may depend upon the ability to predict future values of water level or discharge at locations where observations are available. Catchments at risk of flash flooding often have a rapid natural response time, typically less than the forecast lead time desired for issuing alerts. This work focuses on the provision of short-range (up to 6 hours lead time) predictions of discharge in small catchments based on utilising radar forecasts to drive a hydrological model. An example analysis based upon the Verzasca catchment (Ticino, Switzerland) is presented. Parsimonious time series models with a mechanistic interpretation (so-called Data-Based Mechanistic models) have been shown to provide reliable, accurate forecasts in many hydrological situations. In this study such a model is developed to predict the discharge at an observed location from observed precipitation data. The model is shown to capture the snow melt response at this site. Observed discharge data are assimilated to improve the forecasts, of up to two hours lead time, that can be generated from observed precipitation. To generate forecasts with greater lead time, ensemble precipitation forecasts are utilised. In this study the Nowcasting ORographic precipitation in the Alps (NORA) product, outlined in more detail elsewhere (Panziera et al. Q. J. R. Meteorol. Soc. 2011; DOI:10.1002/qj.878), is utilised. NORA precipitation forecasts are derived from historical analogues based on the radar field and upper atmospheric conditions. As such, they avoid the need to explicitly model the evolution of the rainfall field through, for example, Lagrangian diffusion. The uncertainty in the forecasts is represented by characterisation of the joint distribution of the observed discharge, the discharge forecast using the (in operational conditions unknown) future observed precipitation, and the forecast utilising the NORA ensembles. 
Constructing the joint distribution in this way allows the full historic record of data at the site to inform the predictive distribution. It is shown that, in part due to the limited availability of forecasts, the uncertainty in the relationship between the NORA based forecasts and other variates dominated the resulting predictive uncertainty.

  7. Model predictive control based on reduced order models applied to belt conveyor system.

    PubMed

    Chen, Wei; Li, Xin

    2016-11-01

    In the paper, a model predictive controller based on a reduced order model is proposed to control a belt conveyor system, which is an electro-mechanical complex system with a long visco-elastic belt. Firstly, in order to design a low-order controller, the balanced truncation method is used for belt conveyor model reduction. Secondly, an MPC algorithm based on the reduced order model of the belt conveyor system is presented. Because of the error bound between the full-order model and the reduced order model, two Kalman state estimators are applied in the control scheme to achieve better system performance. Finally, simulation experiments show that the balanced truncation method can significantly reduce the model order with high accuracy and that model predictive control based on the reduced model performs well in controlling the belt conveyor system. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
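    Square-root balanced truncation, the reduction step named above, can be sketched as follows; since the belt conveyor model itself is not given in the abstract, a random stable state-space system stands in. The truncation error at any frequency is bounded by twice the sum of the discarded Hankel singular values.

```python
import numpy as np
from scipy.linalg import cholesky, solve_continuous_lyapunov, svd

# Square-root balanced truncation of a stable LTI system (A, B, C).
rng = np.random.default_rng(3)
n, r = 6, 3                                   # full and reduced orders
A = rng.normal(size=(n, n))
A -= (np.abs(np.linalg.eigvals(A).real).max() + 1.0) * np.eye(n)  # shift: stable
B = rng.normal(size=(n, 1))
C = rng.normal(size=(1, n))

# Gramians: A Wc + Wc A' = -B B'  and  A' Wo + Wo A = -C' C.
Wc = solve_continuous_lyapunov(A, -B @ B.T)
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)

# Balancing transformation from Cholesky factors and one SVD.
Lc = cholesky(Wc, lower=True)
Lo = cholesky(Wo, lower=True)
U, s, Vt = svd(Lo.T @ Lc)                     # s: Hankel singular values
T = Lc @ Vt.T @ np.diag(s ** -0.5)
Tinv = np.diag(s ** -0.5) @ U.T @ Lo.T

# Keep the r states with the largest Hankel singular values.
Ar = (Tinv @ A @ T)[:r, :r]
Br = (Tinv @ B)[:r]
Cr = (C @ T)[:, :r]

dc_full = (C @ np.linalg.solve(-A, B)).item()
dc_red = (Cr @ np.linalg.solve(-Ar, Br)).item()
print(s, dc_full, dc_red)  # |dc_full - dc_red| <= 2 * sum of discarded s
```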

  8. Malaria transmission rates estimated from serological data.

    PubMed Central

    Burattini, M. N.; Massad, E.; Coutinho, F. A.

    1993-01-01

    A mathematical model was used to estimate malaria transmission rates based on serological data. The model is minimally stochastic and assumes an age-dependent force of infection for malaria. The transmission rates estimated were applied to a simple compartmental model in order to mimic malaria transmission. The model showed good capacity to reproduce serological and parasite prevalence data. PMID:8270011
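    A minimal sketch of estimating a transmission parameter from serological data: the paper assumes an age-dependent force of infection, while this toy version fits a constant force of infection in a simple catalytic model, with invented survey numbers.

```python
import numpy as np
from scipy.optimize import curve_fit

# Simple catalytic model: under a constant force of infection lam,
# P(seropositive at age a) = 1 - exp(-lam * a).
def seroprevalence(age, lam):
    return 1.0 - np.exp(-lam * age)

# Synthetic serological survey: true lam = 0.1 per year, binomial sampling noise.
rng = np.random.default_rng(4)
ages = np.arange(1, 41, dtype=float)
n_per_age = 200
positives = rng.binomial(n_per_age, seroprevalence(ages, 0.1))
observed = positives / n_per_age

lam_hat, _ = curve_fit(seroprevalence, ages, observed, p0=[0.05])
print(f"estimated force of infection: {lam_hat[0]:.3f} per year")
```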

  9. From Agents to Continuous Change via Aesthetics: Learning Mechanics with Visual Agent-Based Computational Modeling

    ERIC Educational Resources Information Center

    Sengupta, Pratim; Farris, Amy Voss; Wright, Mason

    2012-01-01

    Novice learners find motion as a continuous process of change challenging to understand. In this paper, we present a pedagogical approach based on agent-based, visual programming to address this issue. Integrating agent-based programming, in particular, Logo programming, with curricular science has been shown to be challenging in previous research…

  10. Expression for time travel based on diffusive wave theory: applicability and considerations

    NASA Astrophysics Data System (ADS)

    Aguilera, J. C.; Escauriaza, C. R.; Passalacqua, P.; Gironas, J. A.

    2017-12-01

    Prediction of hydrological response is of utmost importance when dealing with urban planning, risk assessment, or water resources management issues. With the advent of climate change, special care must be taken with respect to variations in rainfall and runoff due to rising temperature averages. Nowadays, while typical workstations have adequate power to run distributed routing hydrological models, it is still not enough for modeling on the fly, a crucial ability in a natural disaster context, where rapid decisions must be made. Semi-distributed travel time models, which compute a watershed's hydrograph without explicitly solving the full shallow water equations, are an attractive approach to rainfall-runoff modeling since, like fully distributed models, they superimpose a grid on the watershed and compute runoff based on cell parameter values. These models depend heavily on the travel time expression for an individual cell. Many models make use of expressions based on kinematic wave theory, which is not applicable in cases where watershed storage is important, such as on mild slopes. This work presents a new expression for concentration times in overland flow, based on diffusive wave theory, which considers not only the effects of storage but also the effects of upstream contribution. Setting the upstream contribution equal to zero gives an expression consistent with previous work on diffusive wave theory; on the other hand, neglecting storage effects (i.e., diffusion) is shown to be equivalent to kinematic wave theory, currently used in many spatially distributed travel time models. The new expression is shown to be dependent on plane discretization, particularly when dealing with strongly non-kinematic cases. This is the result of the upstream contribution, which grows downstream, relative to plane length. 
    This result also sheds some light on the limits of applicability of the expression: when a certain kinematic threshold is reached, the expression is no longer valid, and one must fall back on kinematic wave theory, for lack of a better option. This expression could be used to improve currently published spatially distributed travel time models, since they would become applicable in many new cases.

  11. New molecular descriptors based on local properties at the molecular surface and a boiling-point model derived from them.

    PubMed

    Ehresmann, Bernd; de Groot, Marcel J; Alex, Alexander; Clark, Timothy

    2004-01-01

    New molecular descriptors based on statistical descriptions of the local ionization potential, local electron affinity, and the local polarizability at the surface of the molecule are proposed. The significance of these descriptors has been tested by calculating them for the Maybridge database in addition to our set of 26 descriptors reported previously. The new descriptors show little correlation with those already in use. Furthermore, the principal components of the extended set of descriptors for the Maybridge data show that especially the descriptors based on the local electron affinity extend the variance in our set of descriptors, which we have previously shown to be relevant to physical properties. The first nine principal components are shown to be most significant. As an example of the usefulness of the new descriptors, we have set up a QSPR model for boiling points using both the old and new descriptors.
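    The descriptor-to-property workflow can be sketched on synthetic data; the actual Maybridge descriptor values and the paper's QSPR model are not reproduced here, and the latent-factor structure below is an assumption made purely for illustration.

```python
import numpy as np

# Synthetic "descriptor" matrix with a low-rank latent structure driving both
# the descriptors and the boiling points (an assumed toy structure).
rng = np.random.default_rng(5)
n_mol, n_desc, n_latent = 300, 26, 3
latent = rng.normal(size=(n_mol, n_latent))
X = latent @ rng.normal(size=(n_latent, n_desc)) \
    + 0.1 * rng.normal(size=(n_mol, n_desc))
bp = 350.0 + latent @ np.array([40.0, -15.0, 8.0]) + rng.normal(0.0, 5.0, n_mol)

# Standardize the descriptors, then PCA via SVD of the centered matrix.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
explained = s**2 / (s**2).sum()               # variance captured per component

# Ordinary least squares on the leading principal-component scores.
k = 3
scores = Z @ Vt[:k].T
design = np.c_[np.ones(n_mol), scores]
coef, *_ = np.linalg.lstsq(design, bp, rcond=None)
pred = design @ coef
r2 = 1.0 - ((bp - pred) ** 2).sum() / ((bp - bp.mean()) ** 2).sum()
print(f"top-{k} PCs explain {explained[:k].sum():.1%} of X; R^2 = {r2:.3f}")
```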

  12. Methodologies for validating ray-based forward model using finite element method in ultrasonic array data simulation

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; Nixon, Andrew; Barber, Tom; Budyn, Nicolas; Bevan, Rhodri; Croxford, Anthony; Wilcox, Paul

    2018-04-01

    In this paper, a methodology for using a finite element (FE) model to validate a ray-based model in the simulation of full matrix capture (FMC) ultrasonic array data sets is proposed. The overall aim is to separate signal contributions from different interactions in the FE results so that each individual component of the ray-based model results can be compared more easily. This is achieved by combining the results from multiple FE models of the system of interest that include progressively more geometrical features while preserving the same mesh structure. It is shown that the proposed techniques allow the interactions from a large number of different ray paths to be isolated in the FE results and compared directly to the results from a ray-based forward model.
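The subtraction idea at the core of the methodology: because the FE models share one mesh, traces from a run without a geometric feature can be subtracted sample-by-sample from a run that includes it, leaving that feature's contribution. A toy sketch with synthetic time traces (all signals hypothetical, not FE output):

```python
import numpy as np

t = np.linspace(0.0, 10e-6, 1000)                 # time axis (s)

def pulse(t0):
    """A 2 MHz tone burst with a Gaussian envelope centered at t0."""
    return np.exp(-((t - t0) / 0.2e-6) ** 2) * np.sin(2 * np.pi * 2e6 * t)

# Two runs on the same mesh: without and with an extra reflector
baseline = pulse(3e-6)                            # direct arrival only
with_feature = pulse(3e-6) + 0.4 * pulse(7e-6)    # direct arrival + feature echo

feature_echo = with_feature - baseline            # isolated contribution
print("isolated echo peaks near t =", t[np.argmax(np.abs(feature_echo))])
```

Keeping the mesh identical between runs is what makes the direct arrival cancel exactly; with different meshes, discretization noise would leak into the difference.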

  13. Free-Suspension Residual Flexibility Testing of Space Station Pathfinder: Comparison to Fixed-Base Results

    NASA Technical Reports Server (NTRS)

    Tinker, Michael L.

    1998-01-01

    Application of the free-suspension residual flexibility modal test method to the International Space Station Pathfinder structure is described. The Pathfinder, a large structure of the general size and weight of Space Station module elements, was also tested in a large fixed-base fixture to simulate Shuttle Orbiter payload constraints. After correlation of the Pathfinder finite element model to residual flexibility test data, the model was coupled to a fixture model, and constrained modes and frequencies were compared to fixed-base test modes. The residual flexibility model compared very favorably to results of the fixed-base test. This is the first known direct comparison of free-suspension residual flexibility and fixed-base test results for a large structure. The model correlation approach used by the author for residual flexibility data is presented. Frequency response functions (FRF) for the regions of the structure that interface with the environment (a test fixture or another structure) are shown to be the primary model correlation tools that distinguish or characterize the residual flexibility approach. A number of critical issues related to the use of the structure interface FRF for correlating the model are then identified and discussed, including (1) the requirement of prominent stiffness lines, (2) overcoming measurement noise, which makes the antiresonances or minima in the functions difficult to identify, and (3) the use of interface stiffness and lumped mass perturbations to bring the analytical responses into agreement with test data. It is shown that good comparison of analytical to experimental FRF is the key to obtaining good agreement of the residual flexibility values.

  14. ANN modeling of DNA sequences: new strategies using DNA shape code.

    PubMed

    Parbhane, R V; Tambe, S S; Kulkarni, B D

    2000-09-01

    Two new encoding strategies, namely, wedge and twist codes, which are based on the DNA helical parameters, are introduced to represent DNA sequences in artificial neural network (ANN)-based modeling of biological systems. The performance of the new coding strategies has been evaluated by conducting three case studies involving mapping (modeling) and classification applications of ANNs. The proposed coding schemes have been compared rigorously and shown to outperform the existing coding strategies especially in situations wherein limited data are available for building the ANN models.

  15. Mg I as a probe of the solar chromosphere - The atomic model

    NASA Technical Reports Server (NTRS)

    Mauas, Pablo J.; Avrett, Eugene H.; Loeser, Rudolf

    1988-01-01

    This paper presents a complete atomic model for Mg I line synthesis, where all the atomic parameters are based on recent experimental and theoretical data. It is shown how the computed profiles at 4571 A and 5173 A are influenced by the choice of these parameters and the number of levels included in the model atom. In addition, observed profiles of the 5173 A b2 line and theoretical profiles for comparison (based on a recent atmospheric model for the average quiet sun) are presented.

  16. Introducing Model-Based System Engineering Transforming System Engineering through Model-Based Systems Engineering

    DTIC Science & Technology

    2014-03-31

    BPMN). This is when the MITRE Acquisition Guidance Model (AGM) model effort was... developed using the iGrafx6 tool with BPMN [12]. The AGM provides a high-level characterization of... the activities, events and messages using a BPMN notation as shown in Figure 14. It

  17. Exponential growth kinetics for Polyporus versicolor and Pleurotus ostreatus in submerged culture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carroad, P.A.; Wilke, C.R.

    1977-04-01

    Simple mathematical models for a batch culture of pellet-forming fungi in submerged culture were tested on growth data for Polyporus versicolor (ATCC 12679) and Pleurotus ostreatus (ATCC 9415). A kinetic model based on a growth rate proportional to the two-thirds power of the cell mass was shown to be satisfactory. A model based on a growth rate directly proportional to the cell mass fitted the data equally well, however, and may be preferable because of mathematical simplicity.
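The two-thirds-power model dX/dt = k·X^(2/3) has a closed-form "cube-root" solution, X(t) = (X0^(1/3) + kt/3)^3, which a quick numerical check confirms (parameter values illustrative, not the fitted values from the study):

```python
def cube_root_growth(x0, k, t):
    """Closed-form solution of dX/dt = k * X**(2/3) (pellet growth)."""
    return (x0 ** (1 / 3) + k * t / 3.0) ** 3

def euler(x0, k, t_end, dt=1e-3):
    """Forward-Euler integration of the same ODE, for comparison."""
    x, t = x0, 0.0
    while t < t_end:
        x += dt * k * x ** (2 / 3)
        t += dt
    return x

x_analytic = cube_root_growth(1.0, 0.5, 10.0)
x_numeric = euler(1.0, 0.5, 10.0)
print(x_analytic, x_numeric)
```

By contrast, the directly proportional model dX/dt = μX gives the familiar exponential X(t) = X0·exp(μt), which is why the abstract notes its mathematical simplicity.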

  18. Introducing Model Based Systems Engineering Transforming System Engineering through Model-Based Systems Engineering

    DTIC Science & Technology

    2014-03-31

    BPMN). This is when the... to a model-centric approach. The AGM was developed using the iGrafx6 tool with BPMN [12... BPMN notation as shown in Figure 14. It provides a time-sequenced perspective on the process

  19. Analysis of axial-induction-based wind plant control using an engineering and a high-order wind plant model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Annoni, Jennifer; Gebraad, Pieter M. O.; Scholbrock, Andrew K.

    2015-08-14

    Wind turbines are typically operated to maximize their performance without considering the impact of wake effects on nearby turbines. Wind plant control concepts aim to increase overall wind plant performance by coordinating the operation of the turbines. This paper focuses on axial-induction-based wind plant control techniques, in which the generator torque or blade pitch degrees of freedom of the wind turbines are adjusted. The paper addresses discrepancies between a high-order wind plant model and an engineering wind plant model. Changes in the engineering model are proposed to better capture the effects of axial-induction-based control shown in the high-order model.

  20. Fatigue of concrete subjected to biaxial loading in the tension region

    NASA Astrophysics Data System (ADS)

    Subramaniam, Kolluru V. L.

    Rigid airport pavement structures are subjected to repeated high-amplitude loads resulting from passing aircraft. The resulting stress-state in the concrete is a biaxial combination of compression and tension. It is of interest to model the response of plain concrete to such loading conditions and develop accurate fatigue-based material models for implementation in mechanistic pavement design procedures. The objective of this work is to characterize the quasi-static and low-cycle fatigue response of concrete subjected to biaxial stresses in the tensile-compression-tension (t-C-T) region, where the principal tensile stress is larger in magnitude than the principal compressive stress. An experimental investigation of material behavior in the biaxial t-C-T region is conducted. The experimental setup consists of the following test configurations: (a) notched concrete beams tested in a three-point bend configuration, and (b) hollow concrete cylinders subjected to torsion with or without a superimposed axial tensile force. The damage imparted to the material is examined using mechanical measurements and an independent nondestructive evaluation (NDE) technique based on vibration measurements. The failure of concrete in the t-C-T region is shown to be a local phenomenon under quasi-static and fatigue loading, wherein the specimen fails owing to a single crack. The crack propagation is studied using the principles of fracture mechanics. It is shown that the crack propagation resulting from the t-C-T loading can be predicted using mode I fracture parameters. It is observed that crack growth in constant-amplitude fatigue loading is a two-phase process: a deceleration phase followed by an acceleration stage. The quasi-static load envelope is shown to predict the crack length at fatigue failure. A fracture-based fatigue failure criterion is proposed, wherein the fatigue failure can be predicted using the critical mode I stress intensity factor. 
A material model for the damage evolution during fatigue loading of concrete, in terms of crack propagation, is proposed. The crack growth acceleration stage is shown to follow the Paris law. The model parameters obtained from uniaxial fatigue tests are shown to be sufficient for predicting the considered biaxial fatigue response.
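The Paris-law relation governing the acceleration stage, da/dN = C·(ΔK)^m, can be integrated numerically to estimate fatigue life. The parameters below are illustrative placeholders, not values from the study, and a constant geometry factor Y is assumed:

```python
import math

def cycles_to_failure(a0, ac, C, m, dsigma, Y=1.0, da=1e-5):
    """Integrate the Paris law da/dN = C * dK**m from crack length a0 to ac.

    dK = Y * dsigma * sqrt(pi * a); a in m, dsigma in MPa, so dK is in
    MPa*sqrt(m) and C must be given in the matching units.
    """
    N, a = 0.0, a0
    while a < ac:
        dK = Y * dsigma * math.sqrt(math.pi * a)
        N += da / (C * dK ** m)   # cycles spent growing the crack by da
        a += da
    return N

# Illustrative (hypothetical) parameters: 1 mm initial crack, 10 mm critical
N = cycles_to_failure(a0=1e-3, ac=1e-2, C=1e-11, m=3.0, dsigma=100.0)
print(f"{N:.3g} cycles to failure")
```

The strong sensitivity of N to the stress range (roughly Δσ^-m) is what makes calibrating C and m from uniaxial tests, as the abstract describes, so consequential for biaxial life prediction.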

  1. Creating "Intelligent" Climate Model Ensemble Averages Using a Process-Based Framework

    NASA Astrophysics Data System (ADS)

    Baker, N. C.; Taylor, P. C.

    2014-12-01

    The CMIP5 archive contains future climate projections from over 50 models provided by dozens of modeling centers around the world. Individual model projections, however, are subject to biases created by structural model uncertainties. As a result, ensemble averaging of multiple models is often used to add value to model projections: consensus projections have been shown to consistently outperform individual models. Previous IPCC reports establish climate change projections based on an equal-weighted average of all model projections. However, certain models reproduce climate processes better than others. Should models be weighted based on performance? Unequal ensemble averages have previously been constructed using a variety of mean-state metrics. Which metrics are most relevant for constraining future climate projections? This project develops a framework for systematically testing metrics in models to identify optimal metrics for unequally weighting multi-model ensembles. A unique aspect of this project is the construction and testing of climate process-based model evaluation metrics. A climate process-based metric is defined as a metric based on the relationship between two physically related climate variables, e.g., outgoing longwave radiation and surface temperature. Metrics are constructed using high-quality Earth radiation budget data from NASA's Clouds and the Earth's Radiant Energy System (CERES) instrument and surface temperature data sets. It is found that regional values of tested quantities can vary significantly when comparing weighted and unweighted model ensembles. For example, one tested metric weights the ensemble by how well models reproduce the time-series probability distribution of the cloud forcing component of reflected shortwave radiation. 
The weighted ensemble for this metric indicates lower simulated precipitation (up to 0.7 mm/day) in tropical regions than the unweighted ensemble; since CMIP5 models have been shown to overproduce precipitation, this result could indicate that the metric is effective in identifying models that simulate more realistic precipitation. Ultimately, the goal of the framework is to identify performance metrics that inform better methods for ensemble averaging and thereby better climate predictions.
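The weighting step can be sketched generically. Inverse-error weighting below is one simple choice, not necessarily the scheme used in the study, and all numbers are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 12
# Hypothetical per-model error against an observation-based metric (e.g. CERES)
skill_error = rng.uniform(0.1, 1.0, size=n)
# Hypothetical projected change for each model (e.g. warming in K)
projection = rng.normal(2.0, 0.5, size=n)

w = 1.0 / skill_error          # inverse-error weights (one simple choice)
w /= w.sum()                   # normalize so the weights sum to 1

print("unweighted ensemble mean:", projection.mean())
print("weighted ensemble mean:  ", float(w @ projection))
```

The weighted mean is still a convex combination of the individual projections, so it always stays inside the ensemble spread; what changes is how much each model's projection counts.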

  2. Becoming Syntactic

    ERIC Educational Resources Information Center

    Chang, Franklin; Dell, Gary S.; Bock, Kathryn

    2006-01-01

    Psycholinguistic research has shown that the influence of abstract syntactic knowledge on performance is shaped by particular sentences that have been experienced. To explore this idea, the authors applied a connectionist model of sentence production to the development and use of abstract syntax. The model makes use of (a) error-based learning to…

  3. The Two-Capacitor Problem Revisited: A Mechanical Harmonic Oscillator Model Approach

    ERIC Educational Resources Information Center

    Lee, Keeyung

    2009-01-01

    The well-known two-capacitor problem, in which exactly half the stored energy disappears when a charged capacitor is connected to an identical capacitor, is discussed based on the mechanical harmonic oscillator model approach. In the mechanical harmonic oscillator model, it is shown first that "exactly half" the work done by a constant applied…
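The headline result, that exactly half the stored energy disappears, follows from charge conservation alone and can be verified in a few lines:

```python
C = 1e-6      # capacitance of each (identical) capacitor, in farads
V0 = 10.0     # initial voltage on the charged capacitor

E_initial = 0.5 * C * V0**2          # energy before the connection

# Charge conservation: Q = C*V0 spreads over total capacitance 2C,
# so both capacitors settle at V0/2.
V_final = V0 / 2
E_final = 2 * (0.5 * C * V_final**2)

print(E_final / E_initial)   # -> 0.5: exactly half the energy remains
```

The missing half is dissipated (or radiated) during the transient, which is exactly what the mechanical oscillator analogy in the paper is used to account for.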

  4. Assessing the toxicity of Pb- and Sn-based perovskite solar cells in model organism Danio rerio

    NASA Astrophysics Data System (ADS)

    Babayigit, Aslihan; Duy Thanh, Dinh; Ethirajan, Anitha; Manca, Jean; Muller, Marc; Boyen, Hans-Gerd; Conings, Bert

    2016-01-01

    Intensive development of organometal halide perovskite solar cells has led to a dramatic surge in power conversion efficiency, up to 20%. Unfortunately, the most efficient perovskite solar cells all contain lead (Pb), an unsettling flaw that raises severe environmental concerns and is therefore a stumbling block for their large-scale application. With the aim of retaining favorable electro-optical properties, tin (Sn) has been considered the most likely substitute. Preliminary studies have shown, however, that Sn-based perovskites are highly unstable; moreover, Sn is also listed as a harmful chemical, with similar concerns regarding environment and health. To bring more clarity to the appropriateness of both metals in perovskite solar cells, we provide a case study with a systematic comparison of the environmental impact of Pb- and Sn-based perovskites, using zebrafish (Danio rerio) as a model organism. Uncovering an unexpected route of intoxication in the form of acidification, it is shown that Sn-based perovskite may not be the ideal Pb surrogate.

  5. Conceptual Integration of Covalent Bond Models by Algerian Students

    ERIC Educational Resources Information Center

    Salah, Hazzi; Dumon, Alain

    2014-01-01

    The concept of covalent bonding is characterized by an interconnected knowledge framework based on Lewis and quantum models of atoms and molecules. Several research studies have shown that students at all levels of chemistry learning find the quantum model to be one of the most difficult subjects to understand. We have tried in this paper to…

  6. Assessing the toxicity of Pb- and Sn-based perovskite solar cells in model organism Danio rerio

    PubMed Central

    Babayigit, Aslihan; Duy Thanh, Dinh; Ethirajan, Anitha; Manca, Jean; Muller, Marc; Boyen, Hans-Gerd; Conings, Bert

    2016-01-01

    Intensive development of organometal halide perovskite solar cells has led to a dramatic surge in power conversion efficiency, up to 20%. Unfortunately, the most efficient perovskite solar cells all contain lead (Pb), an unsettling flaw that raises severe environmental concerns and is therefore a stumbling block for their large-scale application. With the aim of retaining favorable electro-optical properties, tin (Sn) has been considered the most likely substitute. Preliminary studies have shown, however, that Sn-based perovskites are highly unstable; moreover, Sn is also listed as a harmful chemical, with similar concerns regarding environment and health. To bring more clarity to the appropriateness of both metals in perovskite solar cells, we provide a case study with a systematic comparison of the environmental impact of Pb- and Sn-based perovskites, using zebrafish (Danio rerio) as a model organism. Uncovering an unexpected route of intoxication in the form of acidification, it is shown that Sn-based perovskite may not be the ideal Pb surrogate. PMID:26759068

  7. Data Driven Model Development for the SuperSonic SemiSpan Transport (S(sup 4)T)

    NASA Technical Reports Server (NTRS)

    Kukreja, Sunil L.

    2011-01-01

    In this report, we investigate two common approaches to model development for robust control synthesis in the aerospace community: reduced-order aeroservoelastic modelling based on structural finite-element and computational-fluid-dynamics-based aerodynamic models, and a data-driven system identification procedure. It is shown via analysis of experimental SuperSonic SemiSpan Transport (S4T) wind-tunnel data that a system identification approach makes it possible to estimate a model at a fixed Mach number that is parsimonious and robust across varying dynamic pressures.

  8. A rheological model for elastohydrodynamic contacts based on primary laboratory data

    NASA Technical Reports Server (NTRS)

    Bair, S.; Winer, W. O.

    1979-01-01

    A shear rheological model based on primary laboratory data is proposed for concentrated contact lubrication. The model is a Maxwell model modified with a limiting shear stress. Three material properties are required: the low-shear-stress viscosity, the limiting elastic shear modulus, and the limiting shear stress the material can withstand. All three are functions of temperature and pressure. In applying the model to EHD contacts, the predicted response possesses the characteristics expected from several experiments reported in the literature and, in one specific case where direct comparison could be made, good numerical agreement is shown.

  9. Cogenerating and pre-annihilating dark matter by a new gauge interaction in a unified model

    DOE PAGES

    Barr, S. M.; Scherrer, Robert J.

    2016-05-31

    Here, grand unified theories based on large groups (with rank ≥ 6) are a natural context for dark matter models. They contain Standard-Model-singlet fermions that could be dark matter candidates, and can contain new non-abelian interactions whose sphalerons convert baryons, leptons, and dark matter into each other, "cogenerating" a dark matter asymmetry comparable to the baryon asymmetry. In this paper it is shown that the same non-abelian interactions can "pre-annihilate" the symmetric component of heavy dark matter particles χ, which then decay late into light stable dark matter particles ζ that inherit their asymmetry. We derive cosmological constraints on the parameters of such models. The mass of χ must be < 3000 TeV and their decays must happen when 2 × 10^-7 < T_dec/m_χ < 10^-4. It is shown that such decays can come from d = 5 operators with coefficients of order 1/M_GUT or 1/M_Pl. We present a simple realization of our model based on the group SU(7).

  10. Prospects for Alpha Particle Heating in JET in the Hot Ion Regime

    NASA Astrophysics Data System (ADS)

    Cordey, J. G.; Keilhacker, M.; Watkins, M. L.

    1987-01-01

    The prospects for alpha particle heating in JET are discussed. A computational model is developed to represent adequately the neutron yield from JET plasmas heated by neutral beam injection. This neutral beam model, augmented by a simple plasma model, is then used to determine the neutron yields and fusion Q-values anticipated for different heating schemes in future operation of JET with tritium. The relative importance of beam-thermal and thermal-thermal reactions is pointed out and the dependence of the results on, for example, plasma density, temperature, energy confinement and purity is shown. Full 1½-D transport code calculations, based on models developed for ohmic, ICRF and NBI heated JET discharges, are used also to provide a power scan for JET operation in tritium in the low density, high ion temperature regime. The results are shown to be in good agreement with the estimates made using the simple plasma model and indicate that, based on present knowledge, a fusion Q-value in the plasma centre above unity should be achieved in JET.

  11. Assessment of corneal properties based on statistical modeling of OCT speckle

    PubMed Central

    Jesus, Danilo A.; Iskander, D. Robert

    2016-01-01

    A new approach to assessing the properties of the corneal micro-structure in vivo, based on statistical modeling of the speckle obtained from Optical Coherence Tomography (OCT), is presented. A number of statistical models were proposed to fit the corneal speckle data obtained from raw OCT images. Short-term changes in corneal properties were studied by inducing corneal swelling, whereas age-related changes were observed by analyzing data from sixty-five subjects aged between twenty-four and seventy-three years. The Generalized Gamma distribution was shown to be the best model, in terms of Akaike’s Information Criterion, for fitting the OCT corneal speckle. Its parameters showed statistically significant differences (Kruskal-Wallis, p < 0.001) for short-term and age-related corneal changes. In addition, it was observed that age-related changes influence the corneal biomechanical behaviour when corneal swelling is induced. This study shows that the Generalized Gamma distribution can be used to model corneal speckle in OCT in vivo, providing complementary quantified information where the micro-structure of corneal tissue is of essence. PMID:28101409
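The model-selection step can be sketched with SciPy's generalized Gamma distribution and the Akaike Information Criterion. The "speckle" below is a synthetic Rayleigh stand-in, not OCT data, and SciPy availability is assumed:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical stand-in for OCT speckle amplitudes (Rayleigh-like)
speckle = stats.rayleigh.rvs(scale=1.0, size=2000, random_state=rng)

def aic(dist, data, **fit_kw):
    """AIC = 2k - 2 ln L.  len(params) (which also counts the fixed loc)
    is used as a rough k; the offset is the same for both candidates."""
    params = dist.fit(data, **fit_kw)
    loglike = dist.logpdf(data, *params).sum()
    return 2 * len(params) - 2 * loglike

# Generalized Gamma nests the Rayleigh/Gamma/Weibull families,
# so it should fit the Rayleigh-like data at least as well as Gamma.
print("gengamma AIC:", aic(stats.gengamma, speckle, floc=0))
print("gamma AIC:   ", aic(stats.gamma, speckle, floc=0))
```

Lower AIC wins; the criterion penalizes the extra shape parameter of the generalized Gamma, so it is only preferred when the better fit outweighs the added complexity.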

  12. Model-Based Fatigue Prognosis of Fiber-Reinforced Laminates Exhibiting Concurrent Damage Mechanisms

    NASA Technical Reports Server (NTRS)

    Corbetta, M.; Sbarufatti, C.; Saxena, A.; Giglio, M.; Goebel, K.

    2016-01-01

    Prognostics of large composite structures is a topic of increasing interest in the field of structural health monitoring for aerospace, civil, and mechanical systems. Along with recent advancements in real-time structural health data acquisition and processing for damage detection and characterization, model-based stochastic methods for life prediction are showing promising results in the literature. Among various model-based approaches, particle-filtering algorithms are particularly capable of coping with uncertainties associated with the process. These include uncertainties in the information on damage extent and the inherent uncertainties of the damage propagation process. Some efforts have shown successful applications of particle-filtering-based frameworks for predicting the matrix crack evolution and structural stiffness degradation caused by repetitive fatigue loads. Effects of other damage modes such as delamination, however, are not incorporated in these works. It is well established that delamination and matrix cracks not only co-exist in most laminate structures during the fatigue degradation process but also affect each other's progression. Furthermore, delamination significantly alters the stress state in the laminates and accelerates material degradation leading to catastrophic failure. Therefore, the work presented herein proposes a particle-filtering-based framework for predicting a structure's remaining useful life with consideration of multiple co-existing damage mechanisms. The framework uses an energy-based model from the composite modeling literature. The multiple-damage-mode model has been shown to suitably estimate the energy release rate of cross-ply laminates as affected by matrix cracks and delamination modes. The model is also able to estimate the reduction in stiffness of the damaged laminate. This information is then used in the algorithms for life prediction capabilities. 
First, a brief summary of the energy-based damage model is provided. Then, the paper describes how the model is embedded within the prognostic framework and how the prognostic performance is assessed using observations from run-to-failure experiments.

  13. Merging first principle structure studies and few-body reaction formalism

    NASA Astrophysics Data System (ADS)

    Crespo, R.; Cravo, E.; Arriaga, A.; Wiringa, R.; Deltuva, A.; Diego, R.

    2018-02-01

    Calculations for nucleon knockout from a 7Li beam in collision with a proton target at 400 MeV/u are shown, based on ab initio Quantum Monte Carlo (QMC) and conventional shell-model nuclear structure approaches used to describe the relative motion between the knocked-out particle and the heavy fragment of the projectile. Structure effects on the total cross section are shown.

  14. Adaptive estimation of state of charge and capacity with online identified battery model for vanadium redox flow battery

    NASA Astrophysics Data System (ADS)

    Wei, Zhongbao; Tseng, King Jet; Wai, Nyunt; Lim, Tuti Mariana; Skyllas-Kazacos, Maria

    2016-11-01

    Reliable state estimation depends largely on an accurate battery model. However, the parameters of a battery model are time varying, with operating-condition variation and battery aging. Existing co-estimation methods address the model uncertainty by integrating online model identification with state estimation and have shown improved accuracy. However, cross interference may arise from the integrated framework and compromise numerical stability and accuracy. This paper therefore proposes decoupling model identification from state estimation to eliminate the possibility of cross interference. The model parameters are adapted online with the recursive least squares (RLS) method, based on which a novel joint estimator based on the extended Kalman Filter (EKF) is formulated to estimate the state of charge (SOC) and capacity concurrently. The proposed joint estimator effectively compresses the filter order, which leads to substantial improvements in computational efficiency and numerical stability. A lab-scale experiment on a vanadium redox flow battery shows that the proposed method is highly accurate, with good robustness to varying operating conditions and battery aging. The proposed method is further compared with some existing methods and shown to be superior in terms of accuracy, convergence speed, and computational cost.
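The identification half of such a decoupled scheme can be illustrated with a generic recursive-least-squares recursion. The sketch below identifies a hypothetical first-order discrete battery model and is not the paper's full RLS + EKF co-estimator:

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.999):
    """One recursive-least-squares step for y ≈ phi @ theta,
    with forgetting factor lam to track slowly drifting parameters."""
    K = P @ phi / (lam + phi @ P @ phi)       # gain vector
    theta = theta + K * (y - phi @ theta)     # parameter correction
    P = (P - np.outer(K, phi) @ P) / lam      # covariance update
    return theta, P

# Synthetic first-order model y_k = a*y_{k-1} + b*u_k (hypothetical a, b)
rng = np.random.default_rng(3)
a_true, b_true = 0.95, 0.2
theta, P = np.zeros(2), np.eye(2) * 100.0
y_prev = 0.0
for _ in range(2000):
    u = rng.normal()                                   # excitation input
    y = a_true * y_prev + b_true * u + 1e-3 * rng.normal()
    theta, P = rls_update(theta, P, np.array([y_prev, u]), y)
    y_prev = y

print("identified parameters:", theta)   # should approach [0.95, 0.2]
```

In the paper's framework, the parameters identified this way would then feed the EKF-based joint SOC/capacity estimator, with the two loops running separately to avoid cross interference.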

  15. A Cyber-Attack Detection Model Based on Multivariate Analyses

    NASA Astrophysics Data System (ADS)

    Sakai, Yuto; Rinsaka, Koichiro; Dohi, Tadashi

    In the present paper, we propose a novel cyber-attack detection model that applies two multivariate-analysis methods to the audit data observed on a host machine. The statistical techniques used here are the well-known Hayashi's quantification method IV and cluster analysis. We quantify the observed qualitative audit event sequences via quantification method IV, and group similar audit event sequences based on the cluster analysis. It is shown in simulation experiments that our model can improve cyber-attack detection accuracy in some realistic cases where both normal and attack activities are intermingled.

  16. Spacecraft Stabilization and Control for Capture of Non-Cooperative Space Objects

    NASA Technical Reports Server (NTRS)

    Joshi, Suresh; Kelkar, Atul G.

    2014-01-01

    This paper addresses stabilization and control issues in the autonomous capture and manipulation of non-cooperative space objects such as asteroids, space debris, and orbital spacecraft in need of servicing. Such objects are characterized by unknown mass-inertia properties, unknown rotational motion, and irregular shapes, which makes this a challenging control problem. The problem is further compounded by the presence of inherent nonlinearities, significant elastic modes with low damping, and parameter uncertainties in the spacecraft. Robust dissipativity-based control laws are presented and shown to provide global asymptotic stability in spite of model uncertainties and nonlinearities. It is shown that robust stabilization can be accomplished via model-independent dissipativity-based controllers using thrusters alone, while stabilization with attitude and position control can be accomplished using thrusters and torque actuators.

  17. A probabilistic union model with automatic order selection for noisy speech recognition.

    PubMed

    Jancovic, P; Ming, J

    2001-09-01

    A critical issue in exploiting the potential of the sub-band-based approach to robust speech recognition is the method of combining the sub-band observations, for selecting the bands unaffected by noise. A new method for this purpose, the probabilistic union model, was recently introduced. This model has been shown to be capable of dealing with band-limited corruption, requiring no knowledge of the position or statistical distribution of the noise. A parameter within the model, which we call its order, gives the best results when it equals the number of noisy bands. Since this information may not be available in practice, in this paper we introduce an automatic algorithm for selecting the order, based on the state duration pattern generated by the hidden Markov model (HMM). The algorithm has been tested on the TIDIGITS database corrupted by various types of additive band-limited noise with unknown noisy bands. The results show that the union model equipped with the new algorithm can achieve recognition performance similar to that achieved when the number of noisy bands is known. The results show a very significant improvement over the traditional full-band model, without requiring prior information on either the position or the number of noisy bands. The principle of the algorithm for selecting the order based on state duration may also be applied to other sub-band combination methods.
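The abstract does not spell out the model's equations. One published formulation of the order-m union combines sub-band likelihoods as a sum, over all subsets that exclude m bands, of the product of the remaining bands' likelihoods. The sketch below assumes that form, omits the normalizing constant, and uses made-up likelihood values:

```python
from itertools import combinations
from math import prod

def union_score(band_likelihoods, order):
    """Probabilistic union model of a given order (simplified sketch).

    Sums the product of per-band likelihoods over every subset that
    excludes `order` bands, so one badly corrupted band cannot veto
    an otherwise good match.  Normalizing constant omitted.
    """
    bands = range(len(band_likelihoods))
    keep = len(band_likelihoods) - order
    return sum(prod(band_likelihoods[i] for i in subset)
               for subset in combinations(bands, keep))

clean = [0.9, 0.8, 0.85, 0.9]      # per-band likelihoods, 4 sub-bands
noisy = [0.9, 0.8, 0.85, 1e-6]     # one band wiped out by noise

print(union_score(noisy, order=0))   # full-band product: collapses toward 0
print(union_score(noisy, order=1))   # order-1 union: clean subset dominates
```

This is exactly why matching the order to the number of noisy bands matters: order 0 lets the corrupted band dominate, while the correct order lets the clean-band subset carry the score.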

  18. Applying the relaxation model of interfacial heat transfer to calculate the liquid outflow with supercritical initial parameters

    NASA Astrophysics Data System (ADS)

    Alekseev, M. V.; Vozhakov, I. S.; Lezhnin, S. I.; Pribaturin, N. A.

    2017-09-01

    A comparative numerical simulation of supercritical fluid outflow using thermodynamic-equilibrium and non-equilibrium relaxation models of the phase transition, for different relaxation times, has been performed. A model with a fixed relaxation time, based on the experimentally determined radius of liquid droplets, was compared with a model with a dynamically changing relaxation time, calculated by formula (7) and depending on local parameters. It is shown that the relaxation time varies significantly depending on the thermodynamic conditions of the two-phase medium during outflow. Applying the proposed model with dynamic relaxation time leads to qualitatively correct results. The model can be used for both vaporization and condensation processes. It is shown that the model can be improved by processing experimental data on the distribution of the droplet sizes formed during the break-up of the liquid jet.

  19. A Critical Plane-energy Model for Multiaxial Fatigue Life Prediction of Homogeneous and Heterogeneous Materials

    NASA Astrophysics Data System (ADS)

    Wei, Haoyang

    A new critical plane-energy model is proposed in this thesis for multiaxial fatigue life prediction of homogeneous and heterogeneous materials. A brief review of existing methods, especially critical-plane-based and energy-based methods, is given first. Special focus is on one critical plane approach which has been shown to work for both brittle and ductile metals. The key idea is to automatically change the critical plane orientation with respect to different materials and stress states. One potential drawback of that model is that it needs an empirical calibration parameter for non-proportional multiaxial loadings, since only the strain terms are used and the out-of-phase hardening cannot be considered. The energy-based model using the critical plane concept is therefore proposed, with the help of the Mroz-Garud hardening rule, to explicitly include the effect of non-proportional hardening under cyclic fatigue loadings. Thus, the empirical calibration for non-proportional loading is not needed, since the out-of-phase hardening is naturally included in the stress calculation. The model predictions are compared with experimental data from the open literature, and it is shown that the proposed model can work for both proportional and non-proportional loadings without the empirical calibration. Next, the model is extended to the fatigue analysis of heterogeneous materials by integration with the finite element method. Fatigue crack initiation of a representative volume of heterogeneous material is analyzed using the developed critical plane-energy model, with special focus on the microstructure effect on multiaxial fatigue life predictions. Several conclusions are drawn and future work is outlined based on the proposed study.

  20. Least-squares model-based halftoning

    NASA Astrophysics Data System (ADS)

    Pappas, Thrasyvoulos N.; Neuhoff, David L.

    1992-08-01

    A least-squares model-based approach to digital halftoning is proposed. It exploits both a printer model and a model for visual perception. It attempts to produce an 'optimal' halftoned reproduction by minimizing the squared error between the response of the cascade of the printer and visual models to the binary image and the response of the visual model to the original gray-scale image. Conventional methods, such as clustered ordered dither, use the properties of the eye only implicitly, and resist printer distortions at the expense of spatial and gray-scale resolution. In previous work we showed that our printer model can be used to modify error diffusion to account for printer distortions. The modified error diffusion algorithm has better spatial and gray-scale resolution than conventional techniques, but produces some well-known artifacts and asymmetries because it does not make use of an explicit eye model. Least-squares model-based halftoning uses explicit eye models and relies on printer models that predict distortions and exploit them to increase, rather than decrease, both spatial and gray-scale resolution. We have shown that the one-dimensional least-squares problem, in which each row or column of the image is halftoned independently, can be implemented with Viterbi's algorithm. Unfortunately, no closed-form solution can be found in two dimensions; the two-dimensional least-squares solution is obtained by iterative techniques. Experiments show that least-squares model-based halftoning produces more gray levels and better spatial resolution than conventional techniques. We also show that the least-squares approach eliminates the problems associated with error diffusion. Model-based halftoning can be especially useful in transmission of high-quality documents using high-fidelity gray-scale image encoders. As we have shown, in such cases halftoning can be performed at the receiver, just before printing. Apart from coding efficiency, this approach permits the halftoner to be tuned to the individual printer, whose characteristics may vary considerably from those of other printers, for example, write-black vs. write-white laser printers.
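
The one-dimensional problem can be stated very compactly (a toy sketch under strong simplifications: the blur kernel stands in for the visual model, the printer model is omitted, and exhaustive search replaces the Viterbi recursion, so it is only viable for very short rows):

```python
import itertools

EYE = [0.25, 0.5, 0.25]  # toy visual-perception (blur) kernel

def blur(x):
    """Apply the eye model with replicated borders."""
    pad = [x[0]] + list(x) + [x[-1]]
    return [sum(EYE[k] * pad[i + k] for k in range(3)) for i in range(len(x))]

def halftone_row(gray):
    """Binary row minimizing the squared error between blurred versions."""
    target = blur(gray)
    best, best_err = None, float("inf")
    for bits in itertools.product([0.0, 1.0], repeat=len(gray)):
        rendered = blur(bits)
        err = sum((a - b) ** 2 for a, b in zip(rendered, target))
        if err < best_err:
            best, best_err = bits, err
    return list(best)
```

Because the blur has finite support, the error decomposes locally along the row, which is what makes the dynamic-programming (Viterbi) solution of the full-size problem possible.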

  1. Modelling of induced electric fields based on incompletely known magnetic fields

    NASA Astrophysics Data System (ADS)

    Laakso, Ilkka; De Santis, Valerio; Cruciani, Silvano; Campi, Tommaso; Feliziani, Mauro

    2017-08-01

    Determining the induced electric fields in the human body is a fundamental problem in bioelectromagnetics that is important for both evaluation of safety of electromagnetic fields and medical applications. However, existing techniques for numerical modelling of induced electric fields require detailed information about the sources of the magnetic field, which may be unknown or difficult to model in realistic scenarios. Here, we show how induced electric fields can accurately be determined in the case where the magnetic fields are known only approximately, e.g. based on field measurements. The robustness of our approach is shown in numerical simulations for both idealized and realistic scenarios featuring a personalized MRI-based head model. The approach allows for modelling of the induced electric fields in biological bodies directly based on real-world magnetic field measurements.

  2. Cholesterol and Copper Affect Learning and Memory in the Rabbit

    PubMed Central

    Schreurs, Bernard G.

    2013-01-01

    A rabbit model of Alzheimer's disease based on feeding a cholesterol diet for eight weeks shows sixteen hallmarks of the disease including beta amyloid accumulation and learning and memory changes. Although we have shown that feeding 2% cholesterol and adding copper to the drinking water can retard learning, other studies have shown that feeding dietary cholesterol before learning can improve acquisition and feeding cholesterol after learning can degrade long-term memory. We explore the development of this model, the issues surrounding the role of copper, and the particular contributions of the late D. Larry Sparks. PMID:24073355

  3. Implementation of a channelized Hotelling observer model to assess image quality of x-ray angiography systems.

    PubMed

    Favazza, Christopher P; Fetterly, Kenneth A; Hangiandreou, Nicholas J; Leng, Shuai; Schueler, Beth A

    2015-01-01

    Evaluation of flat-panel angiography equipment through conventional image quality metrics is limited by the scope of standard spatial-domain metrics, such as contrast-to-noise ratio and spatial resolution, or by restricted access to the data needed to calculate Fourier-domain measurements, such as the modulation transfer function, noise power spectrum, and detective quantum efficiency. Observer models have been shown capable of overcoming these limitations and can comprehensively evaluate medical-imaging systems. We present a spatial-domain channelized Hotelling observer model to calculate the detectability index (DI) of different-sized disks and compare the performance of different imaging conditions and angiography systems. When appropriate, changes in DIs were compared to expectations based on the classical Rose model of signal detection to assess linearity of the model with quantum signal-to-noise ratio (SNR) theory. For these experiments, the estimated uncertainty of the DIs was less than 3%, allowing precise comparison of imaging systems and conditions. For most experimental variables, DI changes were linear with expectations based on quantum SNR theory. DIs calculated for the smallest objects demonstrated nonlinearity with quantum SNR theory due to system blur. Two angiography systems with different detector element sizes were shown to perform similarly across the majority of the detection tasks.
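
The detectability-index computation can be outlined as follows (a generic channelized-Hotelling sketch: the two box channels, the "disk" signal, and the Gaussian pixel noise are toy assumptions, not the authors' implementation). Images are reduced to a few channel outputs, and the DI is the Hotelling SNR of the signal-present versus signal-absent channel outputs:

```python
import random

random.seed(0)
NPIX = 64

# toy channels: two disjoint pixel-aggregation profiles (hypothetical)
CH = [[1.0 if i < 16 else 0.0 for i in range(NPIX)],
      [1.0 if 16 <= i < 48 else 0.0 for i in range(NPIX)]]

signal = [0.4 if i < 16 else 0.0 for i in range(NPIX)]  # toy "disk"

def channelize(img):
    """Reduce an image to its channel outputs."""
    return [sum(c * p for c, p in zip(ch, img)) for ch in CH]

absent, present = [], []
for _ in range(3000):
    noise = [random.gauss(0.0, 1.0) for _ in range(NPIX)]
    absent.append(channelize(noise))
    present.append(channelize([n + s for n, s in zip(noise, signal)]))

def mean2(v):
    return [sum(x[k] for x in v) / len(v) for k in range(2)]

ma, mp = mean2(absent), mean2(present)

# sample covariance of the signal-absent channel outputs (2x2)
S = [[sum((x[a] - ma[a]) * (x[b] - ma[b]) for x in absent) / (len(absent) - 1)
      for b in range(2)] for a in range(2)]

dv = [mp[0] - ma[0], mp[1] - ma[1]]               # mean channel difference
det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
w = [(S[1][1] * dv[0] - S[0][1] * dv[1]) / det,   # Hotelling template
     (S[0][0] * dv[1] - S[1][0] * dv[0]) / det]
DI = (dv[0] * w[0] + dv[1] * w[1]) ** 0.5         # detectability index
```

Working in the low-dimensional channel space is what makes the covariance estimable from a practical number of images.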

  4. Unsupervised Learning in an Ensemble of Spiking Neural Networks Mediated by ITDP.

    PubMed

    Shim, Yoonsik; Philippides, Andrew; Staras, Kevin; Husbands, Phil

    2016-10-01

    We propose a biologically plausible architecture for unsupervised ensemble learning in a population of spiking neural network classifiers. A mixture of experts type organisation is shown to be effective, with the individual classifier outputs combined via a gating network whose operation is driven by input timing dependent plasticity (ITDP). The ITDP gating mechanism is based on recent experimental findings. An abstract, analytically tractable model of the ITDP driven ensemble architecture is derived from a logical model based on the probabilities of neural firing events. A detailed analysis of this model provides insights that allow it to be extended into a full, biologically plausible, computational implementation of the architecture which is demonstrated on a visual classification task. The extended model makes use of a style of spiking network, first introduced as a model of cortical microcircuits, that is capable of Bayesian inference, effectively performing expectation maximization. The unsupervised ensemble learning mechanism, based around such spiking expectation maximization (SEM) networks whose combined outputs are mediated by ITDP, is shown to perform the visual classification task well and to generalize to unseen data. The combined ensemble performance is significantly better than that of the individual classifiers, validating the ensemble architecture and learning mechanisms. The properties of the full model are analysed in the light of extensive experiments with the classification task, including an investigation into the influence of different input feature selection schemes and a comparison with a hierarchical STDP based ensemble architecture.

  5. New Theoretical Model of Nerve Conduction in Unmyelinated Nerves

    PubMed Central

    Akaishi, Tetsuya

    2017-01-01

    Nerve conduction in unmyelinated fibers has long been described by the equivalent circuit model and cable theory. However, without the change in ionic concentration gradient across the membrane, there would be no generation or propagation of the action potential. Based on this concept, we employ a new conduction model focusing on the distribution of voltage-gated sodium ion channels and the Coulomb force between electrolytes. In this new model, propagation of nerve conduction is suggested to take place well before the generation of the action potential at each channel. We theoretically showed that propagation of the action potential from one sodium channel to the next, enabled by the increasing Coulomb force produced by inflowing sodium ions, would be inversely proportional to the density of sodium channels on the axon membrane. Because the longitudinal number of sodium channels is proportional to the square root of channel density, the conduction velocity of unmyelinated nerves is theoretically shown to be proportional to the square root of channel density. Also, from the viewpoint of an equilibrium state between channel importation and degradation, channel density is suggested to be proportional to axonal diameter. On this basis, conduction velocity in unmyelinated nerves is theoretically shown to be proportional to the square root of axonal diameter. This new model may also provide a more accurate and understandable picture of the phenomena in unmyelinated nerves, complementing the conventional electric circuit model and cable theory. PMID:29081751
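
The scaling chain of the abstract can be written down directly (a minimal sketch of the two proportionality assumptions; the constants k_rho and k_v are unspecified placeholders, not values from the paper):

```python
import math

def conduction_velocity(d, k_rho=1.0, k_v=1.0):
    """Conduction velocity under the two proportionality assumptions
    (arbitrary units; k_rho and k_v are unspecified material constants)."""
    rho = k_rho * d              # channel density proportional to diameter
    return k_v * math.sqrt(rho)  # velocity proportional to sqrt(density)

# doubling the diameter raises the predicted velocity by sqrt(2):
ratio = conduction_velocity(2.0) / conduction_velocity(1.0)
```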

  6. Unsupervised Learning in an Ensemble of Spiking Neural Networks Mediated by ITDP

    PubMed Central

    Staras, Kevin

    2016-01-01

    We propose a biologically plausible architecture for unsupervised ensemble learning in a population of spiking neural network classifiers. A mixture of experts type organisation is shown to be effective, with the individual classifier outputs combined via a gating network whose operation is driven by input timing dependent plasticity (ITDP). The ITDP gating mechanism is based on recent experimental findings. An abstract, analytically tractable model of the ITDP driven ensemble architecture is derived from a logical model based on the probabilities of neural firing events. A detailed analysis of this model provides insights that allow it to be extended into a full, biologically plausible, computational implementation of the architecture which is demonstrated on a visual classification task. The extended model makes use of a style of spiking network, first introduced as a model of cortical microcircuits, that is capable of Bayesian inference, effectively performing expectation maximization. The unsupervised ensemble learning mechanism, based around such spiking expectation maximization (SEM) networks whose combined outputs are mediated by ITDP, is shown to perform the visual classification task well and to generalize to unseen data. The combined ensemble performance is significantly better than that of the individual classifiers, validating the ensemble architecture and learning mechanisms. The properties of the full model are analysed in the light of extensive experiments with the classification task, including an investigation into the influence of different input feature selection schemes and a comparison with a hierarchical STDP based ensemble architecture. PMID:27760125

  7. A model for life predictions of nickel-base superalloys in high-temperature low cycle fatigue

    NASA Technical Reports Server (NTRS)

    Romanoski, Glenn R.; Pelloux, Regis M.; Antolovich, Stephen D.

    1988-01-01

    Extensive characterization of low-cycle fatigue damage mechanisms was performed on polycrystalline Rene 80 and IN100 tested in the temperature range from 871 to 1000 C. Low-cycle fatigue life was found to be dominated by propagation of microcracks to a critical size governed by the maximum tensile stress. A model was developed which incorporates a threshold stress for crack extension, a stress-based crack growth expression, and a failure criterion. The mathematical equivalence between this mechanistically based model and the strain-life low-cycle fatigue law was demonstrated using cyclic stress-strain relationships. The model was shown to correlate the high-temperature low-cycle fatigue data of the different nickel-base superalloys considered in this study.

  8. Fire flame detection based on GICA and target tracking

    NASA Astrophysics Data System (ADS)

    Rong, Jianzhong; Zhou, Dechuang; Yao, Wei; Gao, Wei; Chen, Juan; Wang, Jian

    2013-04-01

    To improve the video fire detection rate, a robust fire detection algorithm based on the color, motion and pattern characteristics of fire targets was proposed, which achieved a satisfactory detection rate for different fire scenes. In this fire detection algorithm: (a) a rule-based generic color model was developed from analysis of a large quantity of flame pixels; (b) from the traditional GICA (Geometrical Independent Component Analysis) model, a Cumulative Geometrical Independent Component Analysis (C-GICA) model was developed for motion detection without a static background; and (c) a BP neural network fire recognition model based on multiple features of the fire pattern was developed. Fire detection tests on benchmark fire video clips of different scenes have shown the robustness, accuracy and fast response of the algorithm.
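
A rule-based flame-color test of the kind described in step (a) can be sketched as follows (illustrative only: the threshold and the red-dominance rule below are a classic generic heuristic, not the rules the authors fitted to their flame-pixel data set):

```python
R_MIN = 130  # hypothetical brightness threshold

def is_flame_pixel(r, g, b):
    """Red-dominant rule typical of generic flame-color models."""
    return r >= R_MIN and r >= g >= b

def flame_mask(frame):
    """frame: 2-D list of (r, g, b) tuples -> 2-D boolean mask."""
    return [[is_flame_pixel(*px) for px in row] for row in frame]
```

Such a mask is only the first stage; the motion (C-GICA) and pattern (BP network) stages then filter out flame-colored but static or non-flame regions.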

  9. Moist air state above counterflow wet-cooling tower fill based on Merkel, generalised Merkel and Klimanek & Białecky models

    NASA Astrophysics Data System (ADS)

    Hyhlík, Tomáš

    2017-09-01

    The article deals with the evaluation of the moist air state above a counterflow wet-cooling tower fill. The results based on the Klimanek & Białecky model are compared with results of the Merkel model and the generalised Merkel model. Based on the numerical simulation it is shown that temperature is predicted correctly by the generalised Merkel model in the case of saturated or super-saturated air above the fill, but is underpredicted in the case of unsaturated moist air above the fill. The classical Merkel model always underpredicts the temperature above the fill. The density of moist air above the fill, as calculated using the generalised Merkel model, is strongly overpredicted in the case of unsaturated moist air above the fill.

  10. Numerical dissipation vs. subgrid-scale modelling for large eddy simulation

    NASA Astrophysics Data System (ADS)

    Dairay, Thibault; Lamballais, Eric; Laizet, Sylvain; Vassilicos, John Christos

    2017-05-01

    This study presents an alternative way to perform large eddy simulation based on a targeted numerical dissipation introduced by the discretization of the viscous term. It is shown that this regularisation technique is equivalent to the use of spectral vanishing viscosity. The flexibility of the method ensures high-order accuracy while controlling the level and spectral features of this purely numerical viscosity. A Pao-like spectral closure based on physical arguments is used to scale this numerical viscosity a priori. It is shown that this way of approaching large eddy simulation is more efficient and accurate than the use of the very popular Smagorinsky model in both its standard and dynamic versions. The main strength of correctly calibrated numerical dissipation is the possibility of regularising the solution at the mesh scale. Thanks to this property, it is shown that the solution can be seen as numerically converged. Conversely, the two versions of the Smagorinsky model are found unable to ensure regularisation while showing a strong sensitivity to numerical errors. The originality of the present approach is that it can be viewed as implicit large eddy simulation, in the sense that the numerical error is the source of artificial dissipation, but also as explicit subgrid-scale modelling, because of the equivalence with spectral viscosity prescribed on a physical basis.

  11. Assessing the Validity of the Simplified Potential Energy Clock Model for Modeling Glass-Ceramics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jamison, Ryan Dale; Grillet, Anne M.; Stavig, Mark E.

    Glass-ceramic seals may be the future of hermetic connectors at Sandia National Laboratories. They have been shown capable of surviving higher temperatures and pressures than amorphous glass seals. More advanced finite-element material models are required to enable model-based design and provide evidence that the hermetic connectors can meet design requirements. Glass-ceramics are composite materials with both crystalline and amorphous phases. The latter gives rise to (non-linearly) viscoelastic behavior. Given their complex microstructures, glass-ceramics may be thermorheologically complex, a behavior outside the scope of currently implemented constitutive models at Sandia. However, it was desired to assess if the Simplified Potential Energy Clock (SPEC) model is capable of capturing the material response. Available data for SL 16.8 glass-ceramic was used to calibrate the SPEC model. Model accuracy was assessed by comparing model predictions with shear moduli temperature dependence and high temperature 3-point bend creep data. It is shown that the model can predict the temperature dependence of the shear moduli and 3-point bend creep data. Analysis of the results is presented. Suggestions for future experiments and model development are presented. Though further calibration is likely necessary, SPEC has been shown capable of modeling glass-ceramic behavior in the glass transition region but requires further analysis below the transition region.

  12. Combining global and local approximations

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.

    1991-01-01

    A method based on a linear approximation to a scaling factor, designated the 'global-local approximation' (GLA) method, is presented and shown capable of extending the range of usefulness of derivative-based approximations to a more refined model. The GLA approach refines the conventional scaling factor by means of a linearly varying, rather than constant, scaling factor. The capabilities of the method are demonstrated for a simple beam example with a crude and more refined FEM model.
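
The GLA correction can be sketched in one dimension (a hedged sketch: the method is applied to FEM responses in the paper, while generic scalar functions stand in here; the linearly varying factor is fixed by matching the refined model's value and slope at the reference point x0):

```python
import math

def gla(f_crude, f_refined, dfc, dfr, x0):
    """Corrected model f_crude(x) * beta(x), with beta linear in x, chosen
    so the corrected model matches the refined model's value and slope at x0."""
    beta0 = f_refined(x0) / f_crude(x0)
    beta1 = (dfr(x0) - beta0 * dfc(x0)) / f_crude(x0)
    return lambda x: f_crude(x) * (beta0 + beta1 * (x - x0))

# example: correct a crude linear model toward exp(x) around x0 = 1
g = gla(lambda x: 1.0 + x, math.exp, lambda x: 1.0, math.exp, 1.0)
```

A constant scaling factor would only match the value at x0; the linear term also matches the first derivative, which is what extends the range of usefulness of the approximation.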

  13. LiDAR based prediction of forest biomass using hierarchical models with spatially varying coefficients

    Treesearch

    Chad Babcock; Andrew O. Finley; John B. Bradford; Randy Kolka; Richard Birdsey; Michael G. Ryan

    2015-01-01

    Many studies and production inventory systems have shown the utility of coupling covariates derived from Light Detection and Ranging (LiDAR) data with forest variables measured on georeferenced inventory plots through regression models. The objective of this study was to propose and assess the use of a Bayesian hierarchical modeling framework that accommodates both...

  14. A New Perspective of Negotiation-Based Dialog to Enhance Metacognitive Skills in the Context of Open Learner Models

    ERIC Educational Resources Information Center

    Suleman, Raja M.; Mizoguchi, Riichiro; Ikeda, Mitsuru

    2016-01-01

    Negotiation mechanism using conversational agents (chatbots) has been used in Open Learner Models (OLM) to enhance learner model accuracy and provide opportunities for learner reflection. Using chatbots that allow for natural language discussions has shown positive learning gains in students. Traditional OLMs assume a learner to be able to manage…

  15. Automated sample plan selection for OPC modeling

    NASA Astrophysics Data System (ADS)

    Casati, Nathalie; Gabrani, Maria; Viswanathan, Ramya; Bayraktar, Zikri; Jaiswal, Om; DeMaris, David; Abdo, Amr Y.; Oberschmidt, James; Krause, Andreas

    2014-03-01

    It is desired to reduce the time required to produce metrology data for calibration of Optical Proximity Correction (OPC) models while maintaining or improving how well the collected data represents the types of patterns that occur in real circuit designs. Previous work based on clustering in geometry and/or image parameter space has shown some benefit over strictly manual or intuitive selection, but leads to arbitrary pattern exclusion or selection which may not best represent the product. Formulating pattern selection as an optimization problem, which co-optimizes a number of objective functions reflecting modelers' insight and expertise, has been shown to produce models of quality equivalent to the traditional plan of record (POR) set, but in less time.

  16. Equivalent-Continuum Modeling With Application to Carbon Nanotubes

    NASA Technical Reports Server (NTRS)

    Odegard, Gregory M.; Gates, Thomas S.; Nicholson, Lee M.; Wise, Kristopher E.

    2002-01-01

    A method has been proposed for developing structure-property relationships of nano-structured materials. This method serves as a link between computational chemistry and solid mechanics by substituting discrete molecular structures with equivalent-continuum models. It has been shown that this substitution may be accomplished by equating the vibrational potential energy of a nano-structured material with the strain energy of representative truss and continuum models. As important examples with direct application to the development and characterization of single-walled carbon nanotubes and the design of nanotube-based devices, the modeling technique has been applied to determine the effective-continuum geometry and bending rigidity of a graphene sheet. A representative volume element of the chemical structure of graphene has been substituted with equivalent-truss and equivalent-continuum models. As a result, an effective thickness of the continuum model has been determined; this effective thickness has been shown to be significantly larger than the interatomic spacing of graphite. The effective bending rigidity of the equivalent-continuum model of a graphene sheet was determined by equating the vibrational potential energy of the molecular model of a graphene sheet subjected to cylindrical bending with the strain energy of an equivalent continuum plate subjected to cylindrical bending.

  17. Modelling and identification for control of gas bearings

    NASA Astrophysics Data System (ADS)

    Theisen, Lukas R. S.; Niemann, Hans H.; Santos, Ilmar F.; Galeazzi, Roberto; Blanke, Mogens

    2016-03-01

    Gas bearings are popular for their high speed capabilities, low friction and clean operation, but suffer from poor damping, which poses challenges for safe operation in presence of disturbances. Feedback control can achieve enhanced damping but requires low complexity models of the dominant dynamics over its entire operating range. Models from first principles are complex and sensitive to parameter uncertainty. This paper presents an experimental technique for "in situ" identification of a low complexity model of a rotor-bearing-actuator system and demonstrates identification over relevant ranges of rotational speed and gas injection pressure. This is obtained using parameter-varying linear models that are found to capture the dominant dynamics. The approach is shown to be easily applied and to suit subsequent control design. Based on the identified models, decentralised proportional control is designed and shown to obtain the required damping in theory and in a laboratory test rig.

  18. The rabbit as a model for studying lung disease and stem cell therapy.

    PubMed

    Kamaruzaman, Nurfatin Asyikhin; Kardia, Egi; Kamaldin, Nurulain 'Atikah; Latahir, Ahmad Zaeri; Yahaya, Badrul Hisham

    2013-01-01

    No single animal model can reproduce all of the human features of both acute and chronic lung diseases. However, the rabbit is a reliable model and clinically relevant facsimile of human disease. The similarities between rabbits and humans in terms of airway anatomy and responses to inflammatory mediators highlight the value of this species in the investigation of lung disease pathophysiology and in the development of therapeutic agents. The inflammatory responses shown by the rabbit model, especially in the case of asthma, are comparable with those that occur in humans. The allergic rabbit model has been used extensively in drug screening tests, and this model and humans appear to be sensitive to similar drugs. In addition, recent studies have shown that the rabbit serves as a good platform for cell delivery for the purpose of stem-cell-based therapy.

  19. The Rabbit as a Model for Studying Lung Disease and Stem Cell Therapy

    PubMed Central

    Kamaruzaman, Nurfatin Asyikhin; Kamaldin, Nurulain ‘Atikah; Latahir, Ahmad Zaeri; Yahaya, Badrul Hisham

    2013-01-01

    No single animal model can reproduce all of the human features of both acute and chronic lung diseases. However, the rabbit is a reliable model and clinically relevant facsimile of human disease. The similarities between rabbits and humans in terms of airway anatomy and responses to inflammatory mediators highlight the value of this species in the investigation of lung disease pathophysiology and in the development of therapeutic agents. The inflammatory responses shown by the rabbit model, especially in the case of asthma, are comparable with those that occur in humans. The allergic rabbit model has been used extensively in drug screening tests, and this model and humans appear to be sensitive to similar drugs. In addition, recent studies have shown that the rabbit serves as a good platform for cell delivery for the purpose of stem-cell-based therapy. PMID:23653896

  20. Color Sparse Representations for Image Processing: Review, Models, and Prospects.

    PubMed

    Barthélemy, Quentin; Larue, Anthony; Mars, Jérôme I

    2015-11-01

    Sparse representations have been extended to deal with color images composed of three channels. A review of dictionary-learning-based sparse representations for color images is made here, detailing the differences between the models and comparing their results on real and simulated data. These models are considered in a unifying framework based on the degrees of freedom of the linear filtering/transformation of the color channels. Moreover, this framework allows it to be shown that the scalar quaternionic linear model is equivalent to constrained matrix-based color filtering, which highlights the filtering implicitly applied through this model. Based on this reformulation, a new color filtering model is introduced, using unconstrained filters. In this model, spatial morphologies of color images are encoded by atoms, and colors are encoded by color filters. Color variability is no longer captured by increasing the dictionary size but by the color filters, which gives an efficient color representation.

  1. Flatness-based control in successive loops for stabilization of heart's electrical activity

    NASA Astrophysics Data System (ADS)

    Rigatos, Gerasimos; Melkikh, Alexey

    2016-12-01

    The article proposes a new flatness-based control method, implemented in successive loops, which allows stabilization of the heart's electrical activity. The heart's pacemaking function is modeled as a set of coupled oscillators which can potentially exhibit chaotic behavior. It is shown that this model satisfies differential flatness properties. Next, control and stabilization of the model are performed with flatness-based control implemented in cascading loops. By applying a per-row decomposition of the state-space model of the coupled oscillators, a set of nonlinear differential equations is obtained. Differential flatness properties are shown to hold for the subsystem associated with each of these differential equations, and a local flatness-based controller is designed for each subsystem. For the i-th subsystem, state variable xi is chosen as the flat output and state variable xi+1 is taken as a virtual control input. The value of the virtual control input which eliminates the output tracking error for the i-th subsystem then becomes the reference setpoint for the (i+1)-th subsystem. In this manner the entire state-space model is controlled by successive flatness-based control loops. On reaching the n-th row of the state-space model, one computes the control input that can actually be exerted on the biosystem. This real control input of the coupled-oscillator system contains, recursively, all virtual control inputs associated with the previous n-1 rows of the state-space model. This control approach asymptotically eliminates the chaotic oscillation effects and stabilizes the heart's pulsation rhythm. The stability of the proposed control scheme is proven with the use of Lyapunov analysis.
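
The successive-loop idea can be shown on a toy two-state chain (a sketch under strong simplifications, not the coupled-oscillator heart model; the gains k1 and k2 are arbitrary). For x1' = x2, x2' = u, the outer loop treats x2 as a virtual input driving the flat output x1 to its setpoint, and the inner loop computes the real input u that makes x2 track that virtual command:

```python
def step(x1, x2, x1_ref, dt, k1=2.0, k2=8.0):
    """One Euler step of the two-loop controller on x1' = x2, x2' = u."""
    x2_ref = -k1 * (x1 - x1_ref)   # outer loop: virtual control for x1
    u = -k2 * (x2 - x2_ref)        # inner loop: real input tracking x2_ref
    return x1 + dt * x2, x2 + dt * u

# drive the flat output x1 from 1.0 to the setpoint 0.0
x1, x2 = 1.0, 0.0
for _ in range(4000):
    x1, x2 = step(x1, x2, x1_ref=0.0, dt=0.01)
```

For an n-state chain the same pattern repeats row by row: each loop's virtual command becomes the next loop's setpoint, and only the innermost loop produces the physically applied input.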

  2. Multivariate Non-Symmetric Stochastic Models for Spatial Dependence Models

    NASA Astrophysics Data System (ADS)

    Haslauer, C. P.; Bárdossy, A.

    2017-12-01

    A copula-based multivariate framework allows more flexibility in describing different kinds of dependence than models relying on the confining assumption of symmetric Gaussian dependence: different quantiles can be modelled with a different degree of dependence, and it is demonstrated how this can be expected given process understanding. Maximum-likelihood-based multivariate parameter estimation yields stable and reliable results; not only are improved cross-validation-based measures of uncertainty obtained, but also a more realistic spatial structure of uncertainty compared to second-order models of dependence. As much information as is available is included in the parameter estimation: incorporating censored measurements (e.g., below the detection limit, or above the sensitive range of the measurement device) leads to more realistic spatial models, and the proportion of true zeros can be estimated jointly with, and distinguished from, censored measurements, which allows estimates of the age of a contaminant in the system. Secondary information (categorical and on the rational scale) has been used to improve estimation of the primary variable. These copula-based multivariate statistical techniques are demonstrated on hydraulic conductivity observations at the Borden (Canada) site, the MADE site (USA), and a large regional groundwater-quality data set in south-west Germany. Fields of spatially distributed K were simulated with identical marginal distributions and identical second-order spatial moments, yet substantially differing solute-transport characteristics when numerical tracer tests were performed. A statistical methodology is shown that allows the delineation of a boundary layer separating homogeneous parts of a spatial data set. The effects of this boundary layer (macro structure) and of the spatial dependence of K (micro structure) on solute-transport behaviour are shown.

  3. Regional climate model downscaling may improve the prediction of alien plant species distributions

    NASA Astrophysics Data System (ADS)

    Liu, Shuyan; Liang, Xin-Zhong; Gao, Wei; Stohlgren, Thomas J.

    2014-12-01

    Distributions of invasive species are commonly predicted with species distribution models that build upon the statistical relationships between observed species presence data and climate data. We used field observations, climate station data, and Maximum Entropy species distribution models for 13 invasive plant species in the United States, and then compared the models with inputs from a General Circulation Model (hereafter GCM-based models) and a downscaled Regional Climate Model (hereafter RCM-based models). We also compared species distributions based on either GCM-based or RCM-based models for the present (1990-1999) to the future (2046-2055). RCM-based species distribution models replicated observed distributions remarkably better than GCM-based models for all invasive species under the current climate. This was shown for the presence locations of the species, and by using four common statistical metrics to compare modeled distributions. For two widespread invasive taxa (Bromus tectorum or cheatgrass, and Tamarix spp. or tamarisk), GCM-based models failed miserably to reproduce observed species distributions. In contrast, RCM-based species distribution models closely matched observations. Future species distributions may be significantly affected by using GCM-based inputs. Because invasive plant species often show high resilience and low rates of local extinction, RCM-based species distribution models may perform better than GCM-based species distribution models for planning containment programs for invasive species.

  4. Constrained reduced-order models based on proper orthogonal decomposition

    DOE PAGES

    Reddy, Sohail R.; Freno, Brian Andrew; Cizmas, Paul G. A.; ...

    2017-04-09

    A novel approach is presented to constrain reduced-order models (ROM) based on proper orthogonal decomposition (POD). The Karush–Kuhn–Tucker (KKT) conditions were applied to the traditional reduced-order model to constrain the solution to user-defined bounds. The constrained reduced-order model (C-ROM) was applied and validated against the analytical solution to the first-order wave equation. C-ROM was also applied to the analysis of fluidized beds. Lastly, it was shown that the ROM and C-ROM produced accurate results and that C-ROM was less sensitive to error propagation through time than the ROM.
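
    The KKT idea can be shown on the smallest possible example: a bound-constrained least-squares problem whose KKT conditions can be checked explicitly. This toy sketch (with the multiplier absorbing the factor of 2 from the quadratic objective) is not the paper's C-ROM, just the underlying condition set used to hold a solution inside user-defined bounds:

```python
def project_with_kkt(y, lower=0.0):
    """Bound-constrained least squares: min ||x - y||^2 subject to x >= lower.
    KKT conditions (scaled multiplier): stationarity x - y - lam = 0,
    primal feasibility x >= lower, dual feasibility lam >= 0, and
    complementary slackness lam * (x - lower) = 0."""
    x = [max(v, lower) for v in y]
    lam = [max(lower - v, 0.0) for v in y]
    return x, lam

# hypothetical unconstrained reduced-order coefficients, some out of bounds
y = [0.3, -0.2, 1.1, -0.7]
x, lam = project_with_kkt(y)

# verify the KKT conditions explicitly
for xi, yi, li in zip(x, y, lam):
    assert abs(xi - yi - li) < 1e-12          # stationarity
    assert xi >= 0.0 and li >= 0.0            # primal/dual feasibility
    assert abs(li * xi) < 1e-12               # complementary slackness
```

    Active constraints carry a positive multiplier and inactive ones carry zero; the paper applies the same conditions to the POD-coefficient system rather than to a diagonal problem like this one.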

  5. Modeling the electrostatic field localization in nanostructures based on DLC films using the tunneling microscopy methods

    NASA Astrophysics Data System (ADS)

    Yakunin, Alexander N.; Aban'shin, Nikolay P.; Avetisyan, Yuri A.; Akchurin, Georgy G.; Akchurin, Garif G.

    2018-04-01

    A model for calculating the electrostatic field in the system "probe of a tunnel microscope - a nanostructure based on a DLC film" was developed. A finite-element modeling of the localization of the field was carried out, taking into account the morphological and topological features of the nanostructure. The obtained results and their interpretation contribute to the development of the concepts to the model of tunnel electric transport processes. The possibility for effective usage of the tunneling microscopy methods in the development of new nanophotonic devices is shown.

  6. Hydrological time series modeling: A comparison between adaptive neuro-fuzzy, neural network and autoregressive techniques

    NASA Astrophysics Data System (ADS)

    Lohani, A. K.; Kumar, Rakesh; Singh, R. D.

    2012-06-01

    Time series modeling is necessary for the planning and management of reservoirs. More recently, soft computing techniques have been used in hydrological modeling and forecasting. In this study, the potential of artificial neural networks and neuro-fuzzy systems in monthly reservoir inflow forecasting is examined by developing and comparing monthly reservoir inflow prediction models based on autoregressive (AR), artificial neural network (ANN) and adaptive neural-based fuzzy inference system (ANFIS) approaches. To account for the effect of monthly periodicity in the flow data, cyclic terms are also included in the ANN and ANFIS models. Working with time series flow data of the Sutlej River at Bhakra Dam, India, several ANN and adaptive neuro-fuzzy models are trained with different input vectors. To evaluate the performance of the selected ANN and ANFIS models, comparison is made with the autoregressive (AR) models. The ANFIS model trained with an input vector including previous inflows and cyclic terms of monthly periodicity shows a significant improvement in forecast accuracy over the ANFIS models trained with input vectors considering only previous inflows. In all cases ANFIS gives more accurate forecasts than the AR and ANN models. The proposed ANFIS model coupled with the cyclic terms is shown to provide a better representation of monthly inflow forecasting for the planning and operation of reservoirs.
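
    A minimal stand-in for the AR-plus-cyclic-terms baseline is an ordinary least-squares fit of inflow on its previous value plus annual sine/cosine terms. The synthetic inflow series and every coefficient below are hypothetical, not the Sutlej data:

```python
import math
import random

def fit_linear(X, y):
    """Ordinary least squares via normal equations and Gaussian elimination."""
    k, n = len(X[0]), len(X)
    A = [[sum(X[i][p] * X[i][q] for i in range(n)) for q in range(k)] for p in range(k)]
    b = [sum(X[i][p] * y[i] for i in range(n)) for p in range(k)]
    for col in range(k):                       # forward elimination with pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):             # back substitution
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

# synthetic monthly inflow: AR(1) plus an annual cycle plus noise
random.seed(1)
flows = [10.0]
for t in range(1, 240):
    flows.append(5 + 0.5 * flows[-1] + 3 * math.sin(2 * math.pi * t / 12)
                 + random.gauss(0, 0.5))

# regressors: constant, previous inflow, and cyclic terms for monthly periodicity
X = [[1.0, flows[t - 1],
      math.sin(2 * math.pi * t / 12), math.cos(2 * math.pi * t / 12)]
     for t in range(1, 240)]
beta = fit_linear(X, flows[1:])
```

    The fitted coefficients recover the generating AR and cyclic terms; ANFIS replaces the linear map with fuzzy rule-based nonlinearities but uses the same kind of augmented input vector.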

  7. Basic Diagnosis and Prediction of Persistent Contrail Occurrence using High-resolution Numerical Weather Analyses/Forecasts and Logistic Regression. Part II: Evaluation of Sample Models

    NASA Technical Reports Server (NTRS)

    Duda, David P.; Minnis, Patrick

    2009-01-01

    Previous studies have shown that probabilistic forecasting may be a useful method for predicting persistent contrail formation. A probabilistic forecast to accurately predict contrail formation over the contiguous United States (CONUS) is created using hourly meteorological analyses from the Advanced Regional Prediction System (ARPS) and from the Rapid Update Cycle (RUC), as well as GOES water vapor channel measurements, combined with surface and satellite observations of contrails. Two groups of logistic models were created. The first group (SURFACE models) is based on surface-based contrail observations supplemented with satellite observations of contrail occurrence. The second group (OUTBREAK models) is derived from a selected subgroup of satellite-based observations of widespread persistent contrails. The mean accuracies for both the SURFACE and OUTBREAK models typically exceeded 75 percent when based on the RUC or ARPS analysis data, but decreased when the logistic models were derived from ARPS forecast data.
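
    A bare-bones logistic model of the kind described can be sketched as follows; the two predictors (humidity and temperature anomalies) and all of the data are hypothetical, not the ARPS/RUC fields used in the study:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Full-batch gradient descent on the logistic log-loss."""
    w, b, n = [0.0] * len(X[0]), 0.0, len(X)
    for _ in range(epochs):
        gw, gb = [0.0] * len(w), 0.0
        for xi, yi in zip(X, y):
            err = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) - yi
            for j, xj in enumerate(xi):
                gw[j] += err * xj
            gb += err
        w = [wj - lr * g / n for wj, g in zip(w, gw)]
        b -= lr * gb / n
    return w, b

# hypothetical predictors: humidity anomaly and temperature anomaly
random.seed(2)
X = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(400)]
y = [1 if random.random() < sigmoid(2 * h - t) else 0 for h, t in X]

w, b = train_logistic(X, y)
acc = sum((sigmoid(w[0] * h + w[1] * t + b) >= 0.5) == (yi == 1)
          for (h, t), yi in zip(X, y)) / len(y)
```

    On this synthetic problem the fitted model comfortably exceeds the 75 percent accuracy figure quoted in the abstract, because the data are generated from a logistic relation by construction.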

  8. Comparing in Cylinder Pressure Modelling of a DI Diesel Engine Fuelled on Alternative Fuel Using Two Tabulated Chemistry Approaches.

    PubMed

    Ngayihi Abbe, Claude Valery; Nzengwa, Robert; Danwe, Raidandi

    2014-01-01

    The present work presents the comparative simulation of a diesel engine fuelled on diesel fuel and biodiesel fuel. Two models, based on tabulated chemistry, were implemented for the simulation purpose and results were compared with experimental data obtained from a single cylinder diesel engine. The first model is a single zone model based on the Krieger and Bormann combustion model while the second model is a two-zone model based on Olikara and Bormann combustion model. It was shown that both models can predict well the engine's in-cylinder pressure as well as its overall performances. The second model showed a better accuracy than the first, while the first model was easier to implement and faster to compute. It was found that the first method was better suited for real time engine control and monitoring while the second one was better suited for engine design and emission prediction.

  9. Cumulus cloud model estimates of trace gas transports

    NASA Technical Reports Server (NTRS)

    Garstang, Michael; Scala, John; Simpson, Joanne; Tao, Wei-Kuo; Thompson, A.; Pickering, K. E.; Harris, R.

    1989-01-01

    Draft structures in convective clouds are examined with reference to the results of the NASA Amazon Boundary Layer Experiments (ABLE IIa and IIb) and calculations based on a multidimensional time dependent dynamic and microphysical numerical cloud model. It is shown that some aspects of the draft structures can be calculated from measurements of the cloud environment. Estimated residence times in the lower regions of the cloud based on surface observations (divergence and vertical velocities) are within the same order of magnitude (about 20 min) as model trajectory estimates.

  10. A cloud and radiation model-based algorithm for rainfall retrieval from SSM/I multispectral microwave measurements

    NASA Technical Reports Server (NTRS)

    Xiang, Xuwu; Smith, Eric A.; Tripoli, Gregory J.

    1992-01-01

    A hybrid statistical-physical retrieval scheme is explored which combines a statistical approach with an approach based on the development of cloud-radiation models designed to simulate precipitating atmospheres. The algorithm employs the detailed microphysical information from a cloud model as input to a radiative transfer model which generates a cloud-radiation model database. Statistical procedures are then invoked to objectively generate an initial guess composite profile data set from the database. The retrieval algorithm has been tested for a tropical typhoon case using Special Sensor Microwave/Imager (SSM/I) data and has shown satisfactory results.

  11. An Extension of SIC Predictions to the Wiener Coactive Model

    PubMed Central

    Houpt, Joseph W.; Townsend, James T.

    2011-01-01

    The survivor interaction contrast (SIC) is a powerful measure for distinguishing among candidate models of human information processing. One class of models to which SIC analysis can be applied is the coactive, or channel summation, class of models of human information processing. In general, parametric forms of coactive models assume that responses are made based on the first passage time across a fixed threshold of a sum of stochastic processes. Previous work has shown that the SIC for a coactive model based on the sum of Poisson processes has a distinctive down-up-down form, with an early negative region that is smaller than the later positive region. In this note, we demonstrate that a coactive process based on the sum of two Wiener processes has the same SIC form. PMID:21822333

  12. An Extension of SIC Predictions to the Wiener Coactive Model.

    PubMed

    Houpt, Joseph W; Townsend, James T

    2011-06-01

    The survivor interaction contrast (SIC) is a powerful measure for distinguishing among candidate models of human information processing. One class of models to which SIC analysis can be applied is the coactive, or channel summation, class of models of human information processing. In general, parametric forms of coactive models assume that responses are made based on the first passage time across a fixed threshold of a sum of stochastic processes. Previous work has shown that the SIC for a coactive model based on the sum of Poisson processes has a distinctive down-up-down form, with an early negative region that is smaller than the later positive region. In this note, we demonstrate that a coactive process based on the sum of two Wiener processes has the same SIC form.
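
    The Wiener coactive claim can be probed numerically: simulate first-passage times of the summed process under the four factorial salience conditions, then form SIC(t) = S_LL(t) - S_LH(t) - S_HL(t) + S_HH(t) from the empirical survivor functions. The drifts, threshold, and step size below are arbitrary illustrative choices, not values from the paper:

```python
import random

def first_passage(drift1, drift2, c=2.0, dt=0.01, tmax=60.0):
    """First passage time of the SUM of two Wiener processes with drift
    across a fixed threshold c (Euler simulation; summed increment
    variance is 2*dt for unit-variance channels)."""
    x, t = 0.0, 0.0
    step_sd = (2 * dt) ** 0.5
    while x < c and t < tmax:
        x += (drift1 + drift2) * dt + step_sd * random.gauss(0, 1)
        t += dt
    return t

def survivor(rts, t):
    return sum(rt > t for rt in rts) / len(rts)

random.seed(5)
HI, LO, N = 1.5, 0.5, 2000
rts = {cond: [first_passage(m1, m2) for _ in range(N)]
       for cond, (m1, m2) in {'LL': (LO, LO), 'LH': (LO, HI),
                              'HL': (HI, LO), 'HH': (HI, HI)}.items()}

grid = [i * 0.05 for i in range(1, 120)]
sic = [survivor(rts['LL'], t) - survivor(rts['LH'], t)
       - survivor(rts['HL'], t) + survivor(rts['HH'], t) for t in grid]

mean = {c: sum(v) / N for c, v in rts.items()}
mic = (mean['LL'] - mean['LH']) - (mean['HL'] - mean['HH'])
```

    For a coactive model the mean interaction contrast (the area under the SIC) is positive, which the simulation reproduces; resolving the small early negative dip of the down-up-down shape needs far more trials than this sketch runs.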

  13. A primer on thermodynamic-based models for deciphering transcriptional regulatory logic.

    PubMed

    Dresch, Jacqueline M; Richards, Megan; Ay, Ahmet

    2013-09-01

    A rigorous analysis of transcriptional regulation at the DNA level is crucial to the understanding of many biological systems. Mathematical modeling has offered researchers a new approach to understanding this central process. In particular, thermodynamic-based modeling represents the most biophysically informed approach aimed at connecting DNA level regulatory sequences to the expression of specific genes. The goal of this review is to give biologists a thorough description of the steps involved in building, analyzing, and implementing a thermodynamic-based model of transcriptional regulation. The data requirements for this modeling approach are described, the derivation for a specific regulatory region is shown, and the challenges and future directions for the quantitative modeling of gene regulation are discussed. Copyright © 2013 Elsevier B.V. All rights reserved.
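
    The core of a thermodynamic-based model is a Boltzmann-weighted sum over binding configurations. The following sketch, for a hypothetical enhancer with two activator sites and a cooperativity factor, shows the derivation pattern the review walks through (enumerate configurations, assign statistical weights, normalize by the partition function):

```python
from itertools import product

def expression_probability(tf_conc, K, omega):
    """Thermodynamic (Boltzmann-weighted) model of a hypothetical enhancer
    with two activator binding sites. Each bound site contributes a factor
    K*[TF] to a configuration's weight; double occupancy is boosted by a
    cooperativity factor omega. Transcription is assumed to require at
    least one bound activator."""
    weights = {}
    for occ in product([0, 1], repeat=2):
        w = (K * tf_conc) ** sum(occ)
        if sum(occ) == 2:
            w *= omega
        weights[occ] = w
    Z = sum(weights.values())                      # partition function
    active = sum(w for occ, w in weights.items() if sum(occ) >= 1)
    return active / Z

p_neutral = expression_probability(1.0, 1.0, 1.0)  # no cooperativity
p_low = expression_probability(1.0, 1.0, 5.0)
p_high = expression_probability(2.0, 1.0, 5.0)     # more activator
```

    With unit concentration, affinity, and no cooperativity, three of the four equally weighted configurations are active, giving 3/4; raising activator concentration or cooperativity raises the output monotonically, the qualitative behavior such models are built to capture.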

  14. Scalable Entity-Based Modeling of Population-Based Systems, Final LDRD Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cleary, A J; Smith, S G; Vassilevska, T K

    2005-01-27

    The goal of this project has been to develop tools, capabilities and expertise in the modeling of complex population-based systems via scalable entity-based modeling (EBM). Our initial focal application domain has been the dynamics of large populations exposed to disease-causing agents, a topic of interest to the Department of Homeland Security in the context of bioterrorism. In the academic community, discrete simulation technology based on individual entities has shown initial success, but the technology has not been scaled to the problem sizes or computational resources of LLNL. Our developmental emphasis has been on the extension of this technology to parallel computers and maturation of the technology from an academic to a lab setting.

  15. Multichannel Speech Enhancement Based on Generalized Gamma Prior Distribution with Its Online Adaptive Estimation

    NASA Astrophysics Data System (ADS)

    Dat, Tran Huy; Takeda, Kazuya; Itakura, Fumitada

    We present a multichannel speech enhancement method based on MAP speech spectral magnitude estimation using a generalized gamma model of the speech prior distribution, where the model parameters are adapted from the actual noisy speech in a frame-by-frame manner. The use of a more general prior distribution with online adaptive estimation is shown to be effective for speech spectral estimation in noisy environments. Furthermore, the multichannel information, in the form of cross-channel statistics, is shown to be useful for better adapting the prior distribution parameters to the actual observation, resulting in better performance of the speech enhancement algorithm. We tested the proposed algorithm on an in-car speech database and obtained significant improvements in speech recognition performance, particularly under non-stationary noise conditions such as music, air-conditioner and open-window noise.

  16. Ice water path estimation and characterization using passive microwave radiometry

    NASA Technical Reports Server (NTRS)

    Vivekanandan, J.; Turk, J.; Bringi, V. N.

    1991-01-01

    Model computations of top-of-atmospheric microwave brightness temperatures T(B) from layers of precipitation-sized ice of variable bulk density and ice water content (IWC) are presented. It is shown that the 85-GHz T(B) depends essentially on the ice optical thickness. The results demonstrate the potential usefulness of scattering-based channels for characterizing the ice phase and suggest a top-down methodology for retrieval of cloud vertical structure and precipitation estimation from multifrequency passive microwave measurements. Attention is also given to radiative transfer model results based on the multiparameter radar data initialization from the Cooperative Huntsville Meteorological Experiment (COHMEX) in northern Alabama. It is shown that brightness temperature warming effects due to the inclusion of a cloud liquid water profile are especially significant at 85 GHz during later stages of cloud evolution.

  17. Replication: A "Model" Approach to the Healthy Development of Young Children

    ERIC Educational Resources Information Center

    Krieg, Iris; Lewis, Jan

    2005-01-01

    The authors, directors of the Chicago-based Pritzker Early Childhood Foundation, advocate "replication," the adaptation of a successful model program or practice to new locations or to new populations. Studies have shown that successful replications of early childhood programs that help at-risk children and their families can have long-term,…

  18. You've Shown the Program Model Is Effective. Now What?

    ERIC Educational Resources Information Center

    Ellickson, Phyllis L.

    2014-01-01

    Rigorous tests of theory-based programs require faithful implementation. Otherwise, lack of results might be attributable to faulty program delivery, faulty theory, or both. However, once the evidence indicates the model works and merits broader dissemination, implementation issues do not fade away. How can developers enhance the likelihood that…

  19. Neural Coding of Relational Invariance in Speech: Human Language Analogs to the Barn Owl.

    ERIC Educational Resources Information Center

    Sussman, Harvey M.

    1989-01-01

    The neuronal model shown to code sound-source azimuth in the barn owl by H. Wagner et al. in 1987 is used as the basis for a speculative brain-based human model, which can establish contrastive phonetic categories to solve the problem of perception "non-invariance." (SLD)

  20. New Models of Teacher Compensation: "Lessons Learned from Six States". Issue Brief

    ERIC Educational Resources Information Center

    NGA Center for Best Practices, 2011

    2011-01-01

    Governors and other state leaders have shown an increasing interest in creating new models of teacher compensation that would reward educators based on their contributions to student learning. To support this interest, the National Governors Association (NGA) hosted a policy academy focusing on that issue. The policy academy provided teams with…

  1. Community Violence Exposure and Aggression among Urban Adolescents: Testing a Cognitive Mediator Model

    ERIC Educational Resources Information Center

    McMahon, Susan D.; Felix, Erika D.; Halpert, Jane A.; Petropoulos, Lara A. N.

    2009-01-01

    Past research has shown that exposure to violence leads to aggressive behavior, but few community-based studies have examined theoretical models illustrating the mediating social cognitive processes that explain this relation with youth exposed to high rates of violence. This study examines the impact of community violence on behavior through…

  2. Simulation Studies of Satellite Laser CO2 Mission Concepts

    NASA Technical Reports Server (NTRS)

    Kawa, Stephan Randy; Mao, J.; Abshire, J. B.; Collatz, G. J.; Sun X.; Weaver, C. J.

    2011-01-01

    Results of mission simulation studies are presented for a laser-based atmospheric CO2 sounder. The simulations are based on real-time carbon cycle process modeling and data analysis. The mission concept corresponds to ASCENDS as recommended by the US National Academy of Sciences Decadal Survey. Compared to passive sensors, active (lidar) sensing of CO2 from space has several potentially significant advantages that hold promise to advance CO2 measurement capability in the next decade. Although the precision and accuracy requirements remain at unprecedented levels of stringency, analysis of possible instrument technology indicates that such sensors are more than feasible. Radiative transfer model calculations, an instrument model with representative errors, and a simple retrieval approach complete the cycle from "nature" run to "pseudodata" CO2. Several mission and instrument configuration options are examined, and the sensitivity to key design variables is shown. Examples are also shown of how the resulting pseudo-measurements might be used to address key carbon cycle science questions.

  3. Canonical formalism for modelling and control of rigid body dynamics.

    PubMed

    Gurfil, P

    2005-12-01

    This paper develops a new paradigm for stabilization of rigid-body dynamics. The state-space model is formulated using canonical elements, known as the Serret-Andoyer (SA) variables, thus far scarcely used for engineering applications. The main feature of the SA formalism is the reduction of the dynamics via the underlying symmetry stemming from conservation of angular momentum and rotational kinetic energy. The controllability of the system model is examined using the notion of accessibility, and is shown to be accessible from all points. Based on the accessibility proof, two nonlinear asymptotic feedback stabilizers are developed: a damping feedback is designed based on the Jurdjevic-Quinn method, and a Hamiltonian controller is derived by using the Hamiltonian as a natural Lyapunov function for the closed-loop dynamics. It is shown that the Hamiltonian control is both passive and inverse optimal with respect to a meaningful performance index. The performance of the new controllers is examined and compared using simulations of realistic scenarios from the satellite attitude dynamics field.

  4. Analysis of CAD Model-based Visual Tracking for Microassembly using a New Block Set for MATLAB/Simulink

    NASA Astrophysics Data System (ADS)

    Kudryavtsev, Andrey V.; Laurent, Guillaume J.; Clévy, Cédric; Tamadazte, Brahim; Lutz, Philippe

    2015-10-01

    Microassembly is an innovative alternative to the microfabrication process of MOEMS, which is quite complex. It usually implies the use of microrobots controlled by an operator. The reliability of this approach has already been confirmed for micro-optical technologies. However, the characterization of assemblies has shown that the operator is the main source of inaccuracies in teleoperated microassembly, so there is great interest in automating the microassembly process. One of the constraints of automation at the microscale is the lack of high-precision sensors capable of providing full information about the object position. Thus, the use of vision-based feedback is a very promising approach to automating the microassembly process. The purpose of this article is to characterize techniques of object position estimation based on visual data, i.e., visual tracking techniques from the ViSP library. These algorithms estimate the 3-D object pose using a single view of the scene and the CAD model of the object. The performance of three main types of model-based trackers is analyzed and quantified: edge-based, texture-based and hybrid. The problems of visual tracking at the microscale are discussed. The control of the micromanipulation station used in the framework of our project is performed using a new Simulink block set. Experimental results are shown and demonstrate the possibility of obtaining repeatability below 1 µm.

  5. The statistical overlap theory of chromatography using power law (fractal) statistics.

    PubMed

    Schure, Mark R; Davis, Joe M

    2011-12-30

    The chromatographic dimensionality was recently proposed as a measure of retention time spacing based on a power law (fractal) distribution. Using this model, a statistical overlap theory (SOT) for chromatographic peaks is developed that estimates the number of peak maxima as a function of the chromatographic dimension, saturation and scale. Power law models exhibit a threshold region whereby, below a critical saturation value, no loss of peak maxima due to peak fusion occurs as saturation increases. At moderate saturation, behavior is similar to the random (Poisson) peak model. At still higher saturation, the power law model shows loss of peaks nearly independent of the scale and dimension of the model. The physicochemical meaning of the power law scale parameter is discussed and shown to be equal to the Boltzmann-weighted free energy of transfer over the scale limits. A small scale range (small β) is shown to generate more uniform chromatograms. Large scale range chromatograms (large β) are shown to give occasional large excursions of retention time; this is a property of power laws, where "wild" behavior occasionally occurs. Both cases are shown to be useful depending on the chromatographic saturation. A scale-invariant form of the SOT shows very simple relationships between the fraction of peak maxima and the saturation, peak width and number of theoretical plates. These equations provide much insight into separations which follow power law statistics. Copyright © 2011 Elsevier B.V. All rights reserved.
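
    The peak-fusion mechanism behind the SOT can be mimicked with a toy cluster count: component peaks whose centers lie closer than one peak width x0 merge into a single observed maximum. The Pareto spacings below are a hypothetical stand-in for the paper's power-law retention model, not its actual equations:

```python
import random

def count_maxima(positions, x0):
    """Components closer than x0 fuse into one observed peak maximum."""
    pts = sorted(positions)
    clusters = 1
    for a, b in zip(pts, pts[1:]):
        if b - a > x0:
            clusters += 1
    return clusters

random.seed(3)
m, span = 200, 1000.0

# Poisson (random) peak model: uniform retention times
poisson_times = [random.uniform(0, span) for _ in range(m)]

# power-law stand-in: Pareto-distributed spacings rescaled to the same span,
# giving many near-minimal gaps plus occasional "wild" excursions
gaps = [random.paretovariate(1.5) for _ in range(m)]
scale = span / sum(gaps)
powerlaw_times, t = [], 0.0
for g in gaps:
    t += g * scale
    powerlaw_times.append(t)

# saturation grows with the peak width x0; maxima are lost monotonically
n_poisson = count_maxima(poisson_times, 2.0)
n_power = count_maxima(powerlaw_times, 2.0)
```

    Increasing x0 (i.e., saturation) can only merge clusters, never split them, so the count of observed maxima is monotonically non-increasing, the qualitative behavior the SOT quantifies analytically.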

  6. Influence of channel base current and varying return stroke speed on the calculated fields of three important return stroke models

    NASA Technical Reports Server (NTRS)

    Thottappillil, Rajeev; Uman, Martin A.; Diendorfer, Gerhard

    1991-01-01

    Compared here are the calculated fields of the Traveling Current Source (TCS), Modified Transmission Line (MTL), and the Diendorfer-Uman (DU) models with a channel base current assumed in Nucci et al. on the one hand and with the channel base current assumed in Diendorfer and Uman on the other hand. The characteristics of the field wave shapes are shown to be very sensitive to the channel base current, especially the field zero crossing at 100 km for the TCS and DU models, and the magnetic hump after the initial peak at close range for the TCS models. Also, the DU model is theoretically extended to include any arbitrarily varying return stroke speed with height. A brief discussion is presented on the effects of an exponentially decreasing speed with height on the calculated fields for the TCS, MTL, and DU models.

  7. Stochastic simulation and decadal prediction of hydroclimate in the Western Himalayas

    NASA Astrophysics Data System (ADS)

    Robertson, A. W.; Chekroun, M. D.; Cook, E.; D'Arrigo, R.; Ghil, M.; Greene, A. M.; Holsclaw, T.; Kondrashov, D. A.; Lall, U.; Lu, M.; Smyth, P.

    2012-12-01

    Improved estimates of climate over the next 10 to 50 years are needed for long-term planning in water resource and flood management. However, the task of effectively incorporating the results of climate change research into decision-making faces a "double conflict of scales": the temporal scales of climate model projections are too long, while their usable spatial scales (global to planetary) are much larger than those needed for actual decision making (at the regional to local level). This work is designed to help tackle this double conflict in the context of water management over monsoonal Asia, based on dendroclimatic multi-century reconstructions of drought indices and river flows. We identify low-frequency modes of variability with time scales from interannual to interdecadal based on these series, and then generate future scenarios based on (a) empirical model decadal predictions, and (b) stochastic simulations generated with autoregressive models that reproduce the power spectrum of the data. Finally, we consider how such scenarios could be used to develop reservoir optimization models. Results will be presented based on multi-century Upper Indus river discharge reconstructions that exhibit a strong periodicity near 27 years, which is shown to yield some retrospective forecasting skill over the 1700-2000 period at a 15-yr lead time. Stochastic simulations of annual PDSI drought index values over the Upper Indus basin are constructed using Empirical Model Reduction; their power spectra are shown to be quite realistic, with spectral peaks near 5-8 years.
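
    The idea of stochastic simulations that reproduce the power spectrum of the data can be sketched with an AR(2) process tuned to have a spectral peak near a 27-year period, then recovered from an averaged periodogram. All parameter choices below are illustrative, not fitted to any discharge record:

```python
import math
import random

def ar2_series(n, period=27.0, r=0.97, seed=4):
    """AR(2) process whose characteristic roots put a spectral peak near `period`."""
    random.seed(seed)
    w = 2 * math.pi / period
    a1, a2 = 2 * r * math.cos(w), -r * r
    x = [0.0, 0.0]
    for _ in range(n + 200):            # 200-step burn-in
        x.append(a1 * x[-1] + a2 * x[-2] + random.gauss(0, 1))
    return x[202:]

def periodogram(x):
    """Direct (O(n^2)) periodogram, skipping the DC term."""
    n = len(x)
    spec = []
    for k in range(1, n // 2):
        re = sum(v * math.cos(2 * math.pi * k * t / n) for t, v in enumerate(x))
        im = sum(v * math.sin(2 * math.pi * k * t / n) for t, v in enumerate(x))
        spec.append((re * re + im * im) / n)
    return spec

x = ar2_series(4096)
seg, navg = 256, 16
avg = [0.0] * (seg // 2 - 1)
for s in range(navg):                   # average 16 segments to tame noise
    for i, v in enumerate(periodogram(x[s * seg:(s + 1) * seg])):
        avg[i] += v / navg

k_peak = max(range(len(avg)), key=lambda i: avg[i]) + 1
dominant_period = seg / k_peak          # expected near 27 time steps
```

    Reading the dominant period back off the simulated series is the consistency check implied by the abstract: if the autoregressive model is any good, its simulations should put their spectral peak where the reconstruction does.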

  8. 3D shape decomposition and comparison for gallbladder modeling

    NASA Astrophysics Data System (ADS)

    Huang, Weimin; Zhou, Jiayin; Liu, Jiang; Zhang, Jing; Yang, Tao; Su, Yi; Law, Gim Han; Chui, Chee Kong; Chang, Stephen

    2011-03-01

    This paper presents an approach to gallbladder shape comparison by using 3D shape modeling and decomposition. The gallbladder models can be used for shape anomaly analysis and model comparison and selection in image guided robotic surgical training, especially for laparoscopic cholecystectomy simulation. The 3D shape of a gallbladder is first represented as a surface model, reconstructed from the contours segmented in CT data by a scheme of propagation based voxel learning and classification. To better extract the shape feature, the surface mesh is further down-sampled by a decimation filter and smoothed by a Taubin algorithm, followed by applying an advancing front algorithm to further enhance the regularity of the mesh. Multi-scale curvatures are then computed on the regularized mesh for the robust saliency landmark localization on the surface. The shape decomposition is proposed based on the saliency landmarks and the concavity, measured by the distance from the surface point to the convex hull. With a given tolerance the 3D shape can be decomposed and represented as 3D ellipsoids, which reveal the shape topology and anomaly of a gallbladder. The features based on the decomposed shape model are proposed for gallbladder shape comparison, which can be used for new model selection. We have collected 19 sets of abdominal CT scan data with gallbladders, some shown in normal shape and some in abnormal shapes. The experiments have shown that the decomposed shapes reveal important topology features.

  9. Adaptive Non-Interventional Heuristics for Covariation Detection in Causal Induction: Model Comparison and Rational Analysis

    ERIC Educational Resources Information Center

    Hattori, Masasi; Oaksford, Mike

    2007-01-01

    In this article, 41 models of covariation detection from 2 x 2 contingency tables were evaluated against past data in the literature and against data from new experiments. A new model was also included based on a limiting case of the normative phi-coefficient under an extreme rarity assumption, which has been shown to be an important factor in…

  10. DEPSCOR: Research on ARL’s Intelligent Control Architecture: Hierarchical Hybrid-Model Based Design, Verification, Simulation, and Synthesis of Mission Control for Autonomous Underwater Vehicles

    DTIC Science & Technology

    2007-02-01

    shown in Figure 13 and the abstracted commanded environment is shown in Figure 14. (Figure content not recoverable from the source; captions: Figure 13: Driver for loiter module in UPPAAL. Figure 14: Stub for loiter module in UPPAAL.)

  11. Lagrangian Transport Model Forecasts as Useful Support of the Flight Planning During the Intercontinental Transport and Chemical Transformation 2002 (ITCT 2k2) Measurement Campaign

    NASA Astrophysics Data System (ADS)

    Forster, C.; Cooper, O.; Stohl, A.; Eckhardt, S.; James, P.; Dunlea, E.; Nicks, D. K.; Holloway, J. S.; Hübler, G.; Parrish, D. D.; Ryerson, T. B.; Trainer, M.

    2002-12-01

    In this study, the Lagrangian tracer transport model FLEXPART is shown to be a useful forecasting tool for flight planning during the ITCT 2k2 (Intercontinental Transport and Chemical Transformation 2002) aircraft measurement campaign. The advantages of this model are that, compared to chemistry transport models (CTMs), it requires only a short computation time, has a finer spatial resolution, and does not suffer from numerical diffusion. It is a compromise between simple trajectory calculations and complex CTMs that makes best use of available computer hardware. During the campaign FLEXPART provided three-day forecasts for four different anthropogenic CO tracers: Asian, North American, Japanese, and European. The forecasts were based on data from the Aviation model (AVN) of the National Center for Environmental Prediction (NCEP) and relied on the EDGAR emission inventory for the base year 1990. In two case studies, the forecast abilities of FLEXPART are analysed and discussed by comparing the forecasts with measurement data, results from post-analysis modelling, infrared satellite images, and backward trajectories calculated with two different Lagrangian trajectory models. It is shown that intercontinental transport and dispersion of pollution plumes were qualitatively well predicted, and that the aircraft could successfully be directed into the polluted air masses.

  12. Modeling of Wall-Bounded Complex Flows and Free Shear Flows

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Zhu, Jiang; Lumley, John L.

    1994-01-01

    Various wall-bounded flows with complex geometries and free shear flows have been studied with a newly developed realizable Reynolds stress algebraic equation model. The model development is based on the invariant theory in continuum mechanics. This theory enables us to formulate a general constitutive relation for the Reynolds stresses. Pope was the first to introduce this kind of constitutive relation into turbulence modeling. In our study, realizability is imposed on the truncated constitutive relation to determine the coefficients so that, unlike the standard k-ε eddy viscosity model, the present model will not produce negative normal stresses in any situation of rapid distortion. The calculations based on the present model have shown encouraging success in modeling complex turbulent flows.

  13. A rationale for human operator pulsive control behavior

    NASA Technical Reports Server (NTRS)

    Hess, R. A.

    1979-01-01

    When performing tracking tasks which involve demanding controlled elements such as those with K/s-squared dynamics, the human operator often develops discrete or pulsive control outputs. A dual-loop model of the human operator is discussed, the dominant adaptive feature of which is the explicit appearance of an internal model of the manipulator-controlled element dynamics in an inner feedback loop. Using this model, a rationale for pulsive control behavior is offered which is based upon the assumption that the human attempts to reduce the computational burden associated with time integration of sensory inputs. It is shown that such time integration is a natural consequence of having an internal representation of the K/s-squared-controlled element dynamics in the dual-loop model. A digital simulation is discussed in which a modified form of the dual-loop model is shown to be capable of producing pulsive control behavior qualitatively comparable to that obtained in experiment.

  14. Passenger ride quality determined from commercial airline flights

    NASA Technical Reports Server (NTRS)

    Richards, L. G.; Kuhlthau, A. R.; Jacobson, I. D.

    1975-01-01

    The University of Virginia ride-quality research program is reviewed. Data from two flight programs, involving seven types of aircraft, are considered in detail. An apparatus for measuring physical variations in the flight environment and recording the subjective reactions of test subjects is described. Models are presented for predicting the comfort response of test subjects from the physical data, and predicting the overall comfort reaction of test subjects from their moment by moment responses. The correspondence of mean passenger comfort judgments and test subject response is shown. Finally, the models of comfort response based on data from the 5-point and 7-point comfort scales are shown to correspond.

  15. Aromatic hydroxylation by cytochrome P450: model calculations of mechanism and substituent effects.

    PubMed

    Bathelt, Christine M; Ridder, Lars; Mulholland, Adrian J; Harvey, Jeremy N

    2003-12-10

The mechanism and selectivity of aromatic hydroxylation by cytochrome P450 enzymes are explored using new B3LYP density functional theory computations. The calculations, using a realistic porphyrin model system, show that rate-determining addition of compound I to an aromatic carbon atom proceeds via a transition state with partial radical and cationic character. Reactivity is shown to depend strongly on ring substituents, with both electron-withdrawing and -donating groups strongly decreasing the addition barrier in the para position, and it is shown that the calculated barrier heights can be reproduced by a new dual-parameter equation based on radical and cationic Hammett sigma parameters.
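A dual-parameter correlation of this kind has the generic Hammett form ΔE = ρ_r·σ_r + ρ_c·σ_c + c and can be fit by ordinary least squares. A sketch with made-up substituent data; the σ values and barriers below are placeholders, not the paper's computed numbers:

```python
import numpy as np

# Hypothetical (sigma_radical, sigma_cation, barrier) data for a few
# para substituents -- illustrative values only, not from the paper.
sigma_r = np.array([0.00, 0.15, -0.02, 0.47, 0.11])
sigma_c = np.array([0.00, -0.78, -0.31, 0.66, -0.26])
barrier = np.array([15.0, 12.1, 13.9, 13.2, 13.4])   # kcal/mol

# Dual-parameter Hammett form: dE = rho_r*sigma_r + rho_c*sigma_c + c
A = np.column_stack([sigma_r, sigma_c, np.ones_like(sigma_r)])
(rho_r, rho_c, c), *_ = np.linalg.lstsq(A, barrier, rcond=None)

pred = A @ np.array([rho_r, rho_c, c])
print(f"rho_r={rho_r:.2f}, rho_c={rho_c:.2f}, c={c:.2f}")
print("max abs residual:", np.abs(pred - barrier).max())
```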

  16. Experimental characterization and microstructure linked modeling of mechanical behavior of ultra-thin aluminum foils used in packaging

    NASA Astrophysics Data System (ADS)

    Tabourot, Laurent; Charleux, Ludovic; Balland, Pascale; Sène, Ndèye Awa; Andreasson, Eskil

    2018-05-01

This paper is based on the hypothesis that introducing a distribution of mechanical properties is beneficial for modeling all kinds of mechanical behavior, even of ordinary metallic materials. To support this hypothesis, it must first be shown that models built on this assertion can efficiently describe the standard mechanical behavior of materials. In searching for a typical study case, it was found that, at a small scale, yield stresses can be strongly distributed in the ultrathin aluminum foils used in the packaging industry, offering an opportunity to identify their distribution and to demonstrate its role in the mechanical properties. An initially reduced model establishes a valuable connection between the hardening curve and the distribution of local yield stresses; this connection provides initial values for the distribution parameters in a more sophisticated identification procedure. With a limited number of representative classes of local yield stresses (three are enough), it is shown that a 3D finite element simulation involving a limited number of elements reproduces the realistic behavior of an ultrathin aluminum foil subjected to a tensile test, with reference to experimental results. This opens up broad possibilities for modeling complex experimental observations.

  17. A parallel implementation of an off-lattice individual-based model of multicellular populations

    NASA Astrophysics Data System (ADS)

    Harvey, Daniel G.; Fletcher, Alexander G.; Osborne, James M.; Pitt-Francis, Joe

    2015-07-01

    As computational models of multicellular populations include ever more detailed descriptions of biophysical and biochemical processes, the computational cost of simulating such models limits their ability to generate novel scientific hypotheses and testable predictions. While developments in microchip technology continue to increase the power of individual processors, parallel computing offers an immediate increase in available processing power. To make full use of parallel computing technology, it is necessary to develop specialised algorithms. To this end, we present a parallel algorithm for a class of off-lattice individual-based models of multicellular populations. The algorithm divides the spatial domain between computing processes and comprises communication routines that ensure the model is correctly simulated on multiple processors. The parallel algorithm is shown to accurately reproduce the results of a deterministic simulation performed using a pre-existing serial implementation. We test the scaling of computation time, memory use and load balancing as more processes are used to simulate a cell population of fixed size. We find approximate linear scaling of both speed-up and memory consumption on up to 32 processor cores. Dynamic load balancing is shown to provide speed-up for non-regular spatial distributions of cells in the case of a growing population.
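The core of such a parallel algorithm is the geometric partition: each process owns the cells in its strip of the spatial domain and must receive, each time step, the neighbouring cells lying within one interaction radius of its strip boundary (the "halo"). A serial sketch of that bookkeeping, using a hypothetical 1-D slab decomposition for simplicity:

```python
import numpy as np

def partition_cells(positions, n_procs, radius, xmin=0.0, xmax=1.0):
    """Assign cells to equal-width strips along x and find halo cells.

    Returns, for each process, the indices of the cells it owns and the
    indices of cells owned by neighbours that lie within `radius` of its
    strip, which would have to be communicated every step.
    """
    width = (xmax - xmin) / n_procs
    owner = np.clip(((positions[:, 0] - xmin) // width).astype(int),
                    0, n_procs - 1)
    owned, halo = [], []
    for p in range(n_procs):
        lo, hi = xmin + p * width, xmin + (p + 1) * width
        mine = np.where(owner == p)[0]
        near = np.where((owner != p)
                        & (positions[:, 0] > lo - radius)
                        & (positions[:, 0] < hi + radius))[0]
        owned.append(mine)
        halo.append(near)
    return owned, halo

rng = np.random.default_rng(0)
pos = rng.random((1000, 2))
owned, halo = partition_cells(pos, n_procs=4, radius=0.05)
# Every cell has exactly one owning process.
assert sum(len(o) for o in owned) == len(pos)
```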

  18. Analyzing and designing object-oriented missile simulations with concurrency

    NASA Astrophysics Data System (ADS)

    Randorf, Jeffrey Allen

    2000-11-01

A software object model for the six degree-of-freedom missile modeling domain is presented. As a precursor, a domain analysis of the missile modeling domain was conducted, based on the Feature-Oriented Domain Analysis (FODA) technique described by the Software Engineering Institute (SEI). It was subsequently determined that the FODA methodology is functionally equivalent to the Object Modeling Technique (OMT). The analysis used legacy software documentation and code from the ENDOSIM, KDEC, and TFrames 6-DOF modeling tools, as well as other technical literature. The SEI Object Connection Architecture (OCA) was the template for designing the object model. Three variants of the OCA were considered: a reference structure, a recursive structure, and a reference structure augmented for flight vehicle modeling. The reference OCA design option was chosen because it maintains simplicity without compromising the expressive power of the OMT model. The missile architecture was then analyzed for potential areas of concurrent computing. It was shown how protected objects could be used for data passing between OCA object managers, allowing concurrent access without changing the OCA reference design intent or structure. The implementation language was the 1995 release of Ada, and it was shown how OCA software components can be expressed as Ada child packages. While acceleration of several low-level and higher-level operations is possible on appropriate hardware, there was a 33% degradation in the performance of a fourth-order Runge-Kutta integration of two simultaneous ordinary differential equations using Ada tasking on a single-processor machine. The Defense Department's High Level Architecture (HLA) was introduced and explained in context with the OCA. It was shown that the HLA and OCA are not mutually exclusive but complementary architectures: HLA is an interoperability solution, with the OCA as an architectural vehicle for software reuse.
Further directions for implementing a 6-DOF missile modeling environment are discussed.

  19. Transport coefficient computation based on input/output reduced order models

    NASA Astrophysics Data System (ADS)

    Hurst, Joshua L.

The guiding purpose of this thesis is to address the optimal material design problem when the material description is a molecular dynamics model. The end goal is to obtain a simplified and fast model that captures the property of interest such that it can be used in controller design and optimization. The approach is to examine model reduction analysis and methods that capture a specific property of interest, in this case viscosity, or more generally complex modulus or complex viscosity. This property and other transport coefficients are defined by an input/output relationship, and this motivates model reduction techniques that are tailored to preserve input/output behavior. In particular, Singular Value Decomposition (SVD) based methods are investigated. First, simulation methods are identified that are amenable to systems theory analysis. For viscosity, these models are of the Gosling and Lees-Edwards type. They are high-order nonlinear Ordinary Differential Equations (ODEs) that employ periodic boundary conditions (PBC). Properties can be calculated from the state trajectories of these ODEs. In this research, local linear approximations are rigorously derived, and special attention is given to potentials that are evaluated with PBC. For the Gosling description, LTI models are developed from state trajectories but are found to have limited success in capturing the system property, even though it is shown that full-order LTI models can be well approximated by reduced-order LTI models. For the Lees-Edwards SLLOD type model, the nonlinear ODEs are approximated by a Linear Time Varying (LTV) model about a nominal trajectory, and both balanced truncation and Proper Orthogonal Decomposition (POD) are used to assess the plausibility of reduced-order models for this system description. An immediate application of the derived LTV models is Quasilinearization, or Waveform Relaxation.
Quasilinearization is Newton's method applied to the ODE operator equation. It is a recursive method that solves nonlinear ODEs by solving an LTV system at each iteration to obtain a new, closer solution. LTV models are derived for both Gosling and Lees-Edwards type models. Particular attention is given to SLLOD Lees-Edwards models because they are in the form most amenable to Taylor series expansion and are the most commonly used models for examining viscosity. With linear models developed, a method is presented to calculate viscosity based on LTI Gosling models, but it is shown to have some limitations. To address these issues, LTV SLLOD models are analyzed with both balanced truncation and POD, and both show that significant order reduction is possible. By examining the singular values of both techniques, it is shown that balanced truncation has the potential to offer greater reduction, which should be expected since it is based on the input/output mapping rather than just the state information, as in POD. Obtaining reduced-order systems that capture the property of interest is challenging. For balanced truncation, reduced-order models for 1-D LJ and FENE systems are obtained and shown to capture the output of interest fairly well; however, numerical challenges currently limit this analysis to small-order systems. Suggestions are presented to extend this method to larger systems. In addition, reduced second-order systems are obtained from POD. Here the challenge is extending the solution beyond the original period used for the projection, in particular identifying the manifold the solution travels along. The remaining challenges are presented and discussed.
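As a concrete illustration of the POD step, the reduced basis is obtained from the SVD of a snapshot matrix, keeping enough left singular vectors to capture a chosen fraction of the snapshot energy. A generic sketch; the synthetic low-rank trajectory below merely stands in for molecular dynamics state data:

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """Proper Orthogonal Decomposition of a snapshot matrix.

    `snapshots` has shape (n_states, n_times). Returns the leading left
    singular vectors that capture the requested fraction of the
    snapshot energy, plus all singular values.
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    frac = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(frac, energy)) + 1
    return U[:, :r], s

# Synthetic trajectory that genuinely lives on a low-dimensional manifold.
t = np.linspace(0, 10, 200)
modes = np.random.default_rng(1).standard_normal((50, 3))
X = modes @ np.vstack([np.sin(t), np.cos(2 * t), np.exp(-0.3 * t)])

Phi, s = pod_basis(X)
print("reduced order:", Phi.shape[1])   # at most 3 for rank-3 data
err = np.linalg.norm(X - Phi @ (Phi.T @ X)) / np.linalg.norm(X)
print("relative projection error:", err)
```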

  20. A study of helicopter stability and control including blade dynamics

    NASA Technical Reports Server (NTRS)

    Zhao, Xin; Curtiss, H. C., Jr.

    1988-01-01

A linearized model of rotorcraft dynamics has been developed through the use of symbolic automatic equation generating techniques. The dynamic model has been formulated in a unique way such that it can be used to analyze a variety of rotor/body coupling problems, including a rotor mounted on a flexible shaft with a number of modes, as well as free-flight stability and control characteristics. Direct comparison of the time response to longitudinal, lateral, and directional control inputs at various trim conditions shows that the linear model yields good to very good correlation with flight test. In particular, it is shown that a dynamic inflow model is essential to obtain good time response correlation, especially for the hover trim condition. It is also shown that the main rotor wake interaction with the tail rotor and fixed tail surfaces is a significant contributor to the response at translational flight trim conditions. A relatively simple model for the downwash and sidewash at the tail surfaces, based on flat vortex wake theory, is shown to produce good agreement. The influence of rotor flap and lag dynamics on automatic control system feedback gain limitations is then investigated with the model. It is shown that the blade dynamics, especially lagging dynamics, can severely limit the usable values of the feedback gain for simple feedback control, and that multivariable optimal control theory is a powerful tool for designing high-gain augmentation control systems. The frequency-shaped optimal control design can offer much better flight dynamic characteristics and a stability margin for the feedback system without the need to model the lagging dynamics.

  1. Predicting minimum uncertainties in the inversion of ocean color geophysical parameters based on Cramer-Rao bounds.

    PubMed

    Jay, Sylvain; Guillaume, Mireille; Chami, Malik; Minghelli, Audrey; Deville, Yannick; Lafrance, Bruno; Serfaty, Véronique

    2018-01-22

We present an analytical approach based on Cramer-Rao Bounds (CRBs) to investigate the uncertainties in estimated ocean color parameters resulting from the propagation of uncertainties in the bio-optical reflectance modeling through the inversion process. Based on given bio-optical and noise probabilistic models, CRBs can be computed efficiently for any set of ocean color parameters and any sensor configuration, directly providing the minimum estimation variance that can possibly be attained by any unbiased estimator of any targeted parameter. Here, CRBs are explicitly developed using (1) two water reflectance models corresponding to deep and shallow waters, respectively, and (2) four probabilistic models describing the environmental noises observed within four Sentinel-2 MSI, HICO, Sentinel-3 OLCI and MODIS images, respectively. For both deep and shallow waters, CRBs are shown to be consistent with the experimental estimation variances obtained using two published remote-sensing methods, while not requiring one to perform any inversion. CRBs are also used to investigate to what extent perfect a priori knowledge of one or several geophysical parameters can improve the estimation of the remaining unknown parameters. For example, using pre-existing knowledge of bathymetry (e.g., derived from LiDAR) within the inversion is shown to greatly improve the retrieval of bottom cover for shallow waters. Finally, CRBs are shown to provide valuable information on the best estimation performances that may be achieved with the MSI, HICO, OLCI and MODIS configurations for a variety of oceanic, coastal and inland waters. CRBs are thus demonstrated to be an informative and efficient tool to characterize minimum uncertainties in inverted ocean color geophysical parameters.
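For additive Gaussian noise, the CRB computation reduces to inverting the Fisher information matrix F = J^T Sigma^-1 J, where J is the Jacobian of the forward reflectance model and Sigma the noise covariance. A minimal sketch with a hypothetical linear two-parameter model; the paper's bio-optical models are far richer than this:

```python
import numpy as np

def cramer_rao_bounds(jacobian, noise_cov):
    """CRB for an unbiased estimator under additive Gaussian noise.

    For a forward model r(theta) with Jacobian J and noise covariance
    Sigma, the Fisher information is F = J^T Sigma^-1 J, and the
    minimum variance of any unbiased estimate of theta_i is [F^-1]_ii.
    """
    F = jacobian.T @ np.linalg.inv(noise_cov) @ jacobian
    return np.diag(np.linalg.inv(F))

# Toy two-parameter reflectance model over 5 hypothetical bands:
# r(a, b) = a * base + b * slope  (linear, so the Jacobian is constant).
base = np.array([0.02, 0.05, 0.08, 0.04, 0.01])
slope = np.array([1.0, 0.8, 0.5, 0.3, 0.1])
J = np.column_stack([base, slope])
Sigma = np.diag(np.full(5, 1e-6))   # per-band noise variance

crb = cramer_rao_bounds(J, Sigma)
print("minimum std of (a, b):", np.sqrt(crb))
```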

  2. Chaotic itinerancy and power-law residence time distribution in stochastic dynamical systems.

    PubMed

    Namikawa, Jun

    2005-08-01

Chaotic itinerant motion among varieties of ordered states is described by a stochastic model based on the mechanism of chaotic itinerancy. The model consists of a random walk on a half-line and a Markov chain with a transition probability matrix. The stability of attractor ruins in the model is investigated by analyzing the residence time distribution of orbits at attractor ruins. It is shown that the residence time distribution averaged over all attractor ruins can be described by the superposition of (truncated) power-law distributions if the basin of attraction for each attractor ruin has zero measure. This result is confirmed by simulation of models exhibiting chaotic itinerancy. Chaotic itinerancy is also shown to be absent in coupled Milnor attractor systems if the transition probability among attractor ruins can be represented as a Markov chain.
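The abstract does not give the model's parameters, but the flavour of a power-law residence-time distribution can be illustrated generically with the first-return times of a symmetric random walk, whose distribution has a well-known t^(-3/2) tail:

```python
import numpy as np

def first_return_time(rng, max_steps=10_000):
    """Steps a symmetric +-1 random walk takes to first return to 0.

    First-return times of a symmetric walk have a heavy power-law tail
    (~ t**-1.5) -- the kind of distribution the stochastic model
    predicts for residence near an attractor ruin with a zero-measure
    basin. Walks that never return within `max_steps` are truncated.
    """
    steps = rng.choice((-1, 1), size=max_steps)
    pos = np.cumsum(steps)
    hits = np.nonzero(pos == 0)[0]
    return int(hits[0] + 1) if hits.size else max_steps

rng = np.random.default_rng(0)
times = np.array([first_return_time(rng) for _ in range(2000)])
# Heavy tail: the mean is dominated by rare, very long residences.
print("median:", np.median(times), " mean:", times.mean())
```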

  3. Implementation of a channelized Hotelling observer model to assess image quality of x-ray angiography systems

    PubMed Central

    Favazza, Christopher P.; Fetterly, Kenneth A.; Hangiandreou, Nicholas J.; Leng, Shuai; Schueler, Beth A.

    2015-01-01

Evaluation of flat-panel angiography equipment through conventional image quality metrics is limited by the scope of standard spatial-domain image quality metrics, such as contrast-to-noise ratio and spatial resolution, or by restricted access to appropriate data to calculate Fourier-domain measurements, such as modulation transfer function, noise power spectrum, and detective quantum efficiency. Observer models have been shown capable of overcoming these limitations and are able to comprehensively evaluate medical-imaging systems. We present a spatial domain-based channelized Hotelling observer model to calculate the detectability index (DI) of different-sized disk objects and to compare the performance of different imaging conditions and angiography systems. When appropriate, changes in DIs were compared to expectations based on the classical Rose model of signal detection to assess linearity of the model with quantum signal-to-noise ratio (SNR) theory. For these experiments, the estimated uncertainty of the DIs was less than 3%, allowing for precise comparison of imaging systems or conditions. For most experimental variables, DI changes were linear with expectations based on quantum SNR theory. DIs calculated for the smallest objects demonstrated nonlinearity with quantum SNR theory due to system blur. Two angiography systems with different detector element sizes were shown to perform similarly across the majority of the detection tasks. PMID:26158086

  4. Creep Tests and Modeling Based on Continuum Damage Mechanics for T91 and T92 Steels

    NASA Astrophysics Data System (ADS)

    Pan, J. P.; Tu, S. H.; Zhu, X. W.; Tan, L. J.; Hu, B.; Wang, Q.

    2017-12-01

    9-11%Cr ferritic steels play an important role in high-temperature and high-pressure boilers of advanced power plants. In this paper, a continuum damage mechanics (CDM)-based creep model was proposed to study the creep behavior of T91 and T92 steels at high temperatures. Long-time creep tests were performed for both steels under different conditions. The creep rupture data and creep curves obtained from creep tests were captured well by theoretical calculation based on the CDM model over a long creep time. It is shown that the developed model is able to predict creep data for the two ferritic steels accurately up to tens of thousands of hours.
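The CDM approach can be sketched with the classical one-dimensional Kachanov-Rabotnov creep law, in which a damage variable grows with stress and drives the accelerating tertiary stage of the creep curve. The coefficients below are illustrative placeholders, not fitted T91/T92 values, and the explicit Euler scheme is deliberately crude near rupture:

```python
import numpy as np

def creep_curve(sigma, A, n, B, chi, phi, dt=1.0, t_max=1e5):
    """Integrate a Kachanov-Rabotnov CDM creep law (1-D, constant stress).

        d(eps)/dt = A * (sigma / (1 - w))**n
        d(w)/dt   = B * sigma**chi / (1 - w)**phi

    Damage w grows from 0; rupture is reached as w -> 1, which
    reproduces the accelerating tertiary stage of the creep curve.
    Explicit Euler with a fixed step: adequate for a sketch, crude in
    the final moments where damage blows up.
    """
    t, eps, w = 0.0, 0.0, 0.0
    ts, es = [0.0], [0.0]
    while t < t_max and w < 0.99:
        eps += dt * A * (sigma / (1.0 - w))**n
        w += dt * B * sigma**chi / (1.0 - w)**phi
        t += dt
        ts.append(t)
        es.append(eps)
    return np.array(ts), np.array(es), t

# Illustrative coefficients only -- not fitted T91/T92 parameters.
t, eps, t_rupture = creep_curve(sigma=100.0, A=1e-12, n=4.0,
                                B=1e-11, chi=4.0, phi=3.0)
print("rupture time ~", t_rupture, "h")
```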

  5. Is there hope for multi-site complexation modeling?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bickmore, Barry R.; Rosso, Kevin M.; Mitchell, S. C.

    2006-06-06

It has been shown here that the standard formulation of the MUSIC model does not deliver the molecular-scale insight into oxide surface reactions that it promises. The model does not properly divide long-range electrostatic and short-range contributions to acid-base reaction energies, and it does not treat solvation in a physically realistic manner. However, even if the current MUSIC model does not succeed in its ambitions, its ambitions are still reasonable. It was a pioneering attempt in that Hiemstra and coworkers recognized that intrinsic equilibrium constants, where the effects of long-range electrostatics have been removed, must be theoretically constrained prior to model fitting if there is to be any hope of obtaining molecular-scale insights from SCMs. We have also shown, on the other hand, that it may be premature to dismiss all valence-based models of acidity. Not only can some such models accurately predict intrinsic acidity constants, but they can also now be linked to the results of molecular dynamics simulations of solvated systems. Significant challenges remain for those interested in creating SCMs that are accurate at the molecular scale. Only after all model parameters can be predicted from theory, and the models are validated against titration data, will we begin to have some confidence that we really are adequately describing the chemical systems in question.

  6. The analysis and modelling of dilatational terms in compressible turbulence

    NASA Technical Reports Server (NTRS)

    Sarkar, S.; Erlebacher, G.; Hussaini, M. Y.; Kreiss, H. O.

    1991-01-01

    It is shown that the dilatational terms that need to be modeled in compressible turbulence include not only the pressure-dilatation term but also another term - the compressible dissipation. The nature of these dilatational terms in homogeneous turbulence is explored by asymptotic analysis of the compressible Navier-Stokes equations. A non-dimensional parameter which characterizes some compressible effects in moderate Mach number, homogeneous turbulence is identified. Direct numerical simulations (DNS) of isotropic, compressible turbulence are performed, and their results are found to be in agreement with the theoretical analysis. A model for the compressible dissipation is proposed; the model is based on the asymptotic analysis and the direct numerical simulations. This model is calibrated with reference to the DNS results regarding the influence of compressibility on the decay rate of isotropic turbulence. An application of the proposed model to the compressible mixing layer has shown that the model is able to predict the dramatically reduced growth rate of the compressible mixing layer.

  7. The analysis and modeling of dilatational terms in compressible turbulence

    NASA Technical Reports Server (NTRS)

    Sarkar, S.; Erlebacher, G.; Hussaini, M. Y.; Kreiss, H. O.

    1989-01-01

    It is shown that the dilatational terms that need to be modeled in compressible turbulence include not only the pressure-dilatation term but also another term - the compressible dissipation. The nature of these dilatational terms in homogeneous turbulence is explored by asymptotic analysis of the compressible Navier-Stokes equations. A non-dimensional parameter which characterizes some compressible effects in moderate Mach number, homogeneous turbulence is identified. Direct numerical simulations (DNS) of isotropic, compressible turbulence are performed, and their results are found to be in agreement with the theoretical analysis. A model for the compressible dissipation is proposed; the model is based on the asymptotic analysis and the direct numerical simulations. This model is calibrated with reference to the DNS results regarding the influence of compressibility on the decay rate of isotropic turbulence. An application of the proposed model to the compressible mixing layer has shown that the model is able to predict the dramatically reduced growth rate of the compressible mixing layer.

  8. Spatial Epidemic Modelling in Social Networks

    NASA Astrophysics Data System (ADS)

    Simoes, Joana Margarida

    2005-06-01

The spread of infectious diseases is highly influenced by the structure of the underlying social network. The target of this study is not the network of acquaintances, but the social mobility network: the daily movement of people between locations within regions. It has already been shown that this kind of network exhibits small-world characteristics. The model developed is an agent-based model (ABM) comprising a movement model and an infection model. In the movement model, some assumptions are made about its structure, and the daily movement is decomposed into four types: neighborhood, intra-region, inter-region, and random. The model is Geographical Information Systems (GIS) based and uses real data to define its geometry. Because it is a vector model, some optimization techniques were used to increase its efficiency.

  9. A model for identifying and ranking need for trauma service in nonmetropolitan regions based on injury risk and access to services.

    PubMed

    Schuurman, Nadine; Bell, Nathaniel; Hameed, Morad S; Simons, Richard

    2008-07-01

Timely access to definitive trauma care has been shown to improve survival rates after severe injury. Unfortunately, despite the development of sophisticated trauma systems, prompt, definitive trauma care remains unavailable to over 50 million North Americans, particularly in rural areas. Measures that quantify social and geographic isolation may provide important insights for the development of health policy aimed at reducing the burden of injury and improving access to trauma care in presently underserviced populations. Indices of social deprivation based on census data and spatial analyses of access to trauma centers based on street network files were combined into a single index, the Population Isolation Vulnerability Amplifier (PIVA), to characterize vulnerability to trauma in socioeconomically and geographically diverse rural and urban communities across British Columbia. Regions with a sufficient core population that are more than one hour of travel time from existing services were ranked by their level of socioeconomic vulnerability. Ten regions throughout the province were identified as most in need of trauma services based on population, isolation, and vulnerability. Likewise, 10 communities were classified as among the least isolated areas and simultaneously as the least vulnerable populations in the province. The model was verified using trauma services utilization data from the British Columbia Trauma Registry. These data indicate that including vulnerability in the model provided superior results to running the model based only on population and road travel time. Using the PIVA model we have shown that, across Census Urban Areas, there are wide variations in population dependence on, and distances to, accredited tertiary/district trauma centers throughout British Columbia.
Many of the factors that influence access to definitive trauma care can be combined into a single quantifiable model that researchers in the health sector can use to predict where to place new services. The model can also be used to identify optimal locations for any basket of health services.
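The general recipe (threshold on population and travel time, then rank by normalised indicators) can be sketched as a toy composite index. The weights, thresholds, and data below are illustrative only, not the published PIVA formulation:

```python
import numpy as np

def composite_index(population, deprivation, travel_time_h,
                    min_pop=5000, min_travel_h=1.0):
    """Rank regions by need for trauma services (toy composite index).

    Regions below the population threshold, or within one hour of an
    existing centre, score zero; the rest are scored by min-max
    normalised deprivation and travel time, equally weighted.
    """
    population = np.asarray(population, float)
    deprivation = np.asarray(deprivation, float)
    travel = np.asarray(travel_time_h, float)

    def norm(x):
        span = x.max() - x.min()
        return (x - x.min()) / span if span > 0 else np.zeros_like(x)

    score = 0.5 * norm(deprivation) + 0.5 * norm(travel)
    eligible = (population >= min_pop) & (travel > min_travel_h)
    return np.where(eligible, score, 0.0)

pop = [12000, 3000, 45000, 8000]
dep = [0.8, 0.9, 0.2, 0.6]    # higher = more deprived
tt = [3.5, 5.0, 0.5, 2.0]     # hours to nearest trauma centre
scores = composite_index(pop, dep, tt)
print("need ranking (high to low):", np.argsort(-scores))
```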

  10. Empirical Validation of Integrated Learning Performances for Hydrologic Phenomena: 3rd-Grade Students' Model-Driven Explanation-Construction

    ERIC Educational Resources Information Center

    Forbes, Cory T.; Zangori, Laura; Schwarz, Christina V.

    2015-01-01

    Water is a crucial topic that spans the K-12 science curriculum, including the elementary grades. Students should engage in the articulation, negotiation, and revision of model-based explanations about hydrologic phenomena. However, past research has shown that students, particularly early learners, often struggle to understand hydrologic…

  11. Teaching Subtraction and Multiplication with Regrouping Using the Concrete-Representational-Abstract Sequence and Strategic Instruction Model

    ERIC Educational Resources Information Center

    Flores, Margaret M.; Hinton, Vanessa; Strozier, Shaunita D.

    2014-01-01

    Based on Common Core Standards (2010), mathematics interventions should emphasize conceptual understanding of numbers and operations as well as fluency. For students at risk for failure, the concrete-representational-abstract (CRA) sequence and the Strategic Instruction Model (SIM) have been shown effective in teaching computation with an emphasis…

  12. Improving Teaching and Learning in Higher Education: Metaphors and Models for Partnership Consultancy

    ERIC Educational Resources Information Center

    Morrison, Keith

    2003-01-01

    The management of partnerships with external consultants is discussed with reference to seven metaphors of partnership, illuminated by an external consultancy review of teaching and learning in a University Language Centre. Shortcomings are shown in each of the seven metaphors. A model of partnership is advocated, based on Habermas' principles of…

  13. A Group Decision Approach to Developing Concept-Effect Models for Diagnosing Student Learning Problems in Mathematics

    ERIC Educational Resources Information Center

    Hwang, Gwo-Jen; Panjaburee, Patcharin; Triampo, Wannapong; Shih, Bo-Ying

    2013-01-01

    Diagnosing student learning barriers has been recognized as the most fundamental and important issue for improving the learning achievements of students. In the past decade, several learning diagnosis approaches have been proposed based on the concept-effect relationship (CER) model. However, past studies have shown that the effectiveness of this…

  14. Brain mechanisms for perceptual and reward-related decision-making.

    PubMed

    Deco, Gustavo; Rolls, Edmund T; Albantakis, Larissa; Romo, Ranulfo

    2013-04-01

Phenomenological models of decision-making, including the drift-diffusion and race models, are compared with mechanistic, biologically plausible models, such as integrate-and-fire attractor neuronal network models. The attractor network models show how decision confidence is an emergent property; and make testable predictions about the neural processes (including neuronal activity and fMRI signals) involved in decision-making which indicate that the medial prefrontal cortex is involved in reward value-based decision-making. Synaptic facilitation in these models can help to account for sequential vibrotactile decision-making, and for how postponed decision-related responses are made. The randomness in the neuronal spiking-related noise that makes the decision-making probabilistic is shown to be increased by the graded firing rate representations found in the brain, to be decreased by the diluted connectivity, and still to be significant in biologically large networks with thousands of synapses onto each neuron. The stability of these systems is shown to be influenced in different ways by glutamatergic and GABAergic efficacy, leading to a new field of dynamical neuropsychiatry with applications to understanding schizophrenia and obsessive-compulsive disorder. The noise in these systems is shown to be advantageous, and to apply to similar attractor networks involved in short-term memory, long-term memory, attention, and associative thought processes.
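The drift-diffusion model mentioned above as a phenomenological baseline is easy to simulate: evidence accumulates a constant drift plus Gaussian noise until it hits one of two bounds. A minimal Monte Carlo sketch with arbitrary parameters:

```python
import numpy as np

def ddm_trial(drift, threshold, noise=1.0, dt=1e-3, rng=None):
    """Simulate one drift-diffusion decision.

    Evidence x integrates a constant drift plus Gaussian noise until it
    crosses +threshold (choice 1) or -threshold (choice 0). Returns the
    choice and the decision time.
    """
    if rng is None:
        rng = np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x > 0 else 0), t

rng = np.random.default_rng(42)
trials = [ddm_trial(drift=0.8, threshold=1.0, rng=rng) for _ in range(500)]
accuracy = np.mean([c for c, _ in trials])
mean_rt = np.mean([t for _, t in trials])
print(f"accuracy={accuracy:.2f}, mean decision time={mean_rt:.2f}s")
```

With positive drift, choice 1 is the "correct" response; accuracy and decision time trade off against the threshold, which is the model's account of speed-accuracy trade-offs.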

  15. Processor-Based Strong Physical Unclonable Functions with Aging-Based Response Tuning (Preprint)

    DTIC Science & Technology

    2013-01-01

…generated by a quad-tree process variation model [1]. The numbers at the right side of the figures give the Z value of the Gaussian distribution. B. Delay model: …where A and B are technology-dependent constants. As shown in Equation 2, the Vth shift depends heavily on temperature (T) and stress time (t). By applying…

  16. Studies on mineral dust using airborne lidar, ground-based remote sensing, and in situ instrumentation

    NASA Astrophysics Data System (ADS)

    Marenco, Franco; Ryder, Claire; Estellés, Victor; Segura, Sara; Amiridis, Vassilis; Proestakis, Emmanouil; Marinou, Eleni; Tsekeri, Alexandra; Smith, Helen; Ulanowski, Zbigniew; O'Sullivan, Debbie; Brooke, Jennifer; Pradhan, Yaswant; Buxmann, Joelle

    2018-04-01

    In August 2015, the AER-D campaign made use of the FAAM research aircraft based in Cape Verde, and targeted mineral dust. First results will be shown here. The campaign had multiple objectives: (1) lidar dust mapping for the validation of satellite and model products; (2) validation of sunphotometer remote sensing with airborne measurements; (3) coordinated measurements with the CATS lidar on the ISS; (4) radiative closure studies; and (5) the validation of a new model of dustsonde.

  17. Radiation Hardened Electronics for Space Environments (RHESE)

    NASA Technical Reports Server (NTRS)

    Keys, Andrew S.; Adams, James H.; Frazier, Donald O.; Patrick, Marshall C.; Watson, Michael D.; Johnson, Michael A.; Cressler, John D.; Kolawa, Elizabeth A.

    2007-01-01

    Radiation environment modeling is crucial to properly predicting the response of electronics to the radiation environment. When compared with on-orbit data, CREME96 has been shown to be inaccurate in predicting the radiation environment, yet the NEDD bases much of its radiation environment data on CREME96 output. Close coordination and partnership with DoD radiation-hardening efforts will result in leveraged, rather than duplicated or independently developed, technology capabilities: a) radiation-hardened, reconfigurable FPGA-based electronics; and b) high-performance processors.

  18. Estimation of the whole-body averaged SAR of grounded human models for plane wave exposure at respective resonance frequencies.

    PubMed

    Hirata, Akimasa; Yanase, Kazuya; Laakso, Ilkka; Chan, Kwok Hung; Fujiwara, Osamu; Nagaoka, Tomoaki; Watanabe, Soichi; Conil, Emmanuelle; Wiart, Joe

    2012-12-21

    According to the international guidelines, the whole-body averaged specific absorption rate (WBA-SAR) is used as a metric of basic restriction for radio-frequency whole-body exposure. It is well known that the WBA-SAR largely depends on the frequency of the incident wave for a given incident power density. The frequency at which the WBA-SAR becomes maximal is called the 'resonance frequency'. Our previous study proposed a scheme for estimating the WBA-SAR at this resonance frequency based on an analogy between the power absorption characteristic of human models in free space and that of a dipole antenna. However, a scheme for estimating the WBA-SAR in a grounded human has not been discussed sufficiently, even though the WBA-SAR in a grounded human is larger than that in an ungrounded human. In this study, with the use of the finite-difference time-domain method, the grounded condition is confirmed to be the worst-case exposure for human body models in a standing posture. Then, WBA-SARs in grounded human models are calculated at their respective resonance frequencies. A formula for estimating the WBA-SAR of a human standing on the ground is proposed based on an analogy with a quarter-wavelength monopole antenna. First, homogenized human body models are shown to provide a conservative WBA-SAR compared with anatomically based models. Based on the formula proposed here, the WBA-SARs in grounded human models are approximately 10% larger than those in free space. The variability of the WBA-SAR was shown to be ±30% even among humans of the same age, owing to differences in body shape.
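
    The monopole analogy above admits a quick back-of-the-envelope check: if a grounded body of height h resonates like a quarter-wavelength monopole over a ground plane, the resonance frequency is roughly f = c / (4h). This one-liner is only the analogy's leading-order estimate, not the paper's fitted formula:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def monopole_resonance_hz(height_m):
    """Rough resonance estimate for a grounded body of height h treated
    as a quarter-wavelength monopole: f = c / (4 * h)."""
    return C / (4.0 * height_m)
```

    For a 1.75 m adult this gives roughly 43 MHz; the paper's estimation formula refines such an estimate with body-shape information, which is also the source of the reported ±30% variability.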

  19. A Fokker-Planck based kinetic model for diatomic rarefied gas flows

    NASA Astrophysics Data System (ADS)

    Gorji, M. Hossein; Jenny, Patrick

    2013-06-01

    A Fokker-Planck based kinetic model is presented here, which also accounts for internal energy modes characteristic for diatomic gas molecules. The model is based on a Fokker-Planck approximation of the Boltzmann equation for monatomic molecules, whereas phenomenological principles were employed for the derivation. It is shown that the model honors the equipartition theorem in equilibrium and fulfills the Landau-Teller relaxation equations for internal degrees of freedom. The objective behind this approximate kinetic model is accuracy at reasonably low computational cost. This can be achieved due to the fact that the resulting stochastic differential equations are continuous in time; therefore, no collisions between the simulated particles have to be calculated. Besides, because of the devised energy conserving time integration scheme, it is not required to resolve the collisional scales, i.e., the mean collision time and the mean free path of molecules. This, of course, gives rise to much more efficient simulations with respect to other particle methods, especially the conventional direct simulation Monte Carlo (DSMC), for small and moderate Knudsen numbers. To examine the new approach, first the computational cost of the model was compared with respect to DSMC, where significant speed up could be obtained for small Knudsen numbers. Second, the structure of a high Mach shock (in nitrogen) was studied, and the good performance of the model for such out of equilibrium conditions could be demonstrated. At last, a hypersonic flow of nitrogen over a wedge was studied, where good agreement with respect to DSMC (with level to level transition model) for vibrational and translational temperatures is shown.
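
    The computational advantage described above comes from replacing binary collisions with a time-continuous stochastic process for each particle's velocity. The simplest member of this family, sketched below, is a Langevin (Ornstein-Uhlenbeck) velocity model whose Fokker-Planck equation relaxes the gas toward a Maxwellian; it omits the internal energy modes and the energy-conserving integrator of the paper's diatomic model, and all parameters are nondimensional placeholders.

```python
import math
import random

def ou_velocity_step(v, dt, rng, tau=1.0, u=0.0, kT_over_m=1.0):
    """One Euler-Maruyama step of the Langevin velocity model
        dv = -(v - u)/tau * dt + sqrt(2*kT/(m*tau)) * dW,
    whose Fokker-Planck equation drives f(v) to a Gaussian (Maxwellian)
    with mean u and variance kT/m."""
    sigma = math.sqrt(2.0 * kT_over_m / tau)
    return v + (u - v) / tau * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)

def relax(v0=5.0, steps=200_000, dt=1e-3, seed=1):
    """Evolve one particle and estimate the stationary mean and variance
    from the second half of the trajectory."""
    rng = random.Random(seed)
    v = v0
    samples = []
    for i in range(steps):
        v = ou_velocity_step(v, dt, rng)
        if i >= steps // 2:
            samples.append(v)
    mean = sum(samples) / len(samples)
    var = sum((s - mean) ** 2 for s in samples) / len(samples)
    return mean, var
```

    Because each particle evolves independently between outputs, no collision pairs need to be computed, which is the source of the speed-up over DSMC at small Knudsen numbers.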

  20. Modeling the 1958 Lituya Bay mega-tsunami with a PVM-IFCP GPU-based model

    NASA Astrophysics Data System (ADS)

    González-Vida, José M.; Arcas, Diego; de la Asunción, Marc; Castro, Manuel J.; Macías, Jorge; Ortega, Sergio; Sánchez-Linares, Carlos; Titov, Vasily

    2013-04-01

    In this work we present a numerical study, performed in collaboration with the NOAA Center for Tsunami Research (USA), that uses a GPU version of the PVM-IFCP landslide model for the simulation of the 1958 landslide-generated tsunami of Lituya Bay. In this model, a layer composed of fluidized granular material is assumed to flow within an upper layer of an inviscid fluid (e.g. water). The model is discretized using a two-dimensional PVM-IFCP [Fernández - Castro - Parés. On an Intermediate Field Capturing Riemann Solver Based on a Parabolic Viscosity Matrix for the Two-Layer Shallow Water System, J. Sci. Comput., 48 (2011):117-140] finite volume scheme implemented on GPU cards for increased speed-up. The model has been previously validated against the two-dimensional physical laboratory experiment data of H. Fritz [Lituya Bay Landslide Impact Generated Mega-Tsunami 50th Anniversary. Pure Appl. Geophys., 166 (2009) pp. 153-175]. In the present work, the first step was to reconstruct the topobathymetry of Lituya Bay before the event occurred, based on USGS geological survey data. Then, a sensitivity analysis of some model parameters was performed in order to determine the parameters that best fit reality when model results are compared against available event data, such as run-up areas. In this presentation, the reconstruction of the pre-tsunami scenario will be shown, a detailed simulation of the tsunami will be presented, and several comparisons with real data (run-up, wave height, etc.) will be made.

  1. Improvement of a 2D numerical model of lava flows

    NASA Astrophysics Data System (ADS)

    Ishimine, Y.

    2013-12-01

    I propose an improved procedure that reduces an improper dependence of lava flow directions on the orientation of the Digital Elevation Model (DEM) in two-dimensional simulations based on Ishihara et al. (in Lava Flows and Domes, Fink, JH eds., 1990). The numerical model for lava flow simulations proposed by Ishihara et al. (1990) is based on a two-dimensional shallow water model combined with a constitutive equation for a Bingham fluid. It is simple but useful, because it properly reproduces the distributions of actual lava flows. Thus, it is regarded as one of the pioneering works on numerical simulation of lava flows, and it is still widely used in practical hazard prediction maps for civil defense officials in Japan. However, the model includes an improper dependence of lava flow directions on the orientation of the DEM, because it assigns the condition for the lava flow to stop due to yield stress separately for each of the two orthogonal axes of the rectangular calculation grid based on the DEM. This procedure yields a diamond-shaped distribution, as shown in Fig. 1, when calculating a lava flow supplied from a point source on a virtual flat plane, although the distribution should be circular. To remedy this drawback, I propose a modified procedure that uses the absolute value of the yield stress derived from both components of the slope steepness in the two orthogonal directions to assign the stopping condition. This yields a better result, as shown in Fig. 2. Fig. 1. (a) Contour plots calculated with the original model of Ishihara et al. (1990). (b) Contour plots calculated with the proposed model.
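
    The proposed fix can be made concrete with a toy stopping criterion. Below, the yield condition is abstracted into a single critical slope (an illustrative simplification of the Bingham yield-stress condition): checking each axis separately lets a flow stop on a 45-degree slope that is actually steeper than the critical value, while the magnitude-based check is rotation-invariant.

```python
import math

def flows_per_axis(sx, sy, critical_slope):
    """Axis-by-axis check (original-style): the flow may advance along an
    axis only if that slope component exceeds the critical slope. Testing
    the axes separately is not rotation-invariant, which is what produces
    the diamond-shaped deposits."""
    return abs(sx) > critical_slope, abs(sy) > critical_slope

def flows_isotropic(sx, sy, critical_slope):
    """Magnitude check (proposed-style): compare the norm of the slope
    vector with the critical slope; invariant under grid rotation."""
    return math.hypot(sx, sy) > critical_slope

# A slope of magnitude 0.1 oriented 45 degrees off the grid axes:
s45 = 0.1 / math.sqrt(2.0)
```

    With a critical slope of 0.08, the per-axis test wrongly declares the 45-degree slope subcritical in both directions, while the isotropic test keeps the flow moving.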

  2. Power consumption analysis of pump station control systems based on fuzzy controllers with discrete terms in iThink software

    NASA Astrophysics Data System (ADS)

    Muravyova, E. A.; Bondarev, A. V.; Sharipov, M. I.; Galiaskarova, G. R.; Kubryak, A. I.

    2018-03-01

    In this article, the power consumption of pumping station control systems is discussed. To study the issue, two simulation models of oil-level control were developed in the iThink software: one using only a frequency converter, and one using a frequency converter together with a fuzzy controller. The oil-level control was simulated graphically, and plots of the pumps' power consumption were obtained. Based on the initial and obtained data, the efficiency of the two control systems was compared, and their power consumption was shown graphically. The analysis of the models has shown that a control circuit with a frequency converter and a fuzzy controller is more economical and safer.

  3. Model of a thin film optical fiber fluorosensor

    NASA Technical Reports Server (NTRS)

    Egalon, Claudio O.; Rogowski, Robert S.

    1991-01-01

    The efficiency of core-light injection from sources in the cladding of an optical fiber is modeled analytically by means of the exact field solution of a step-profile fiber. The analysis is based on the techniques by Marcuse (1988) in which the sources are treated as infinitesimal electric currents with random phase and orientation that excite radiation fields and bound modes. Expressions are developed based on an infinite cladding approximation which yield the power efficiency for a fiber coated with fluorescent sources in the core/cladding interface. Marcuse's results are confirmed for the case of a weakly guiding cylindrical fiber with fluorescent sources uniformly distributed in the cladding, and the power efficiency is shown to be practically constant for variable wavelengths and core radii. The most efficient fibers have the thin film located at the core/cladding boundary, and fibers with larger differences in the indices of refraction are shown to be the most efficient.

  4. Noise stochastic corrected maximum a posteriori estimator for birefringence imaging using polarization-sensitive optical coherence tomography

    PubMed Central

    Kasaragod, Deepa; Makita, Shuichi; Hong, Young-Joo; Yasuno, Yoshiaki

    2017-01-01

    This paper presents a noise-stochastic corrected maximum a posteriori estimator for birefringence imaging using Jones matrix optical coherence tomography. The estimator is based on the relationship between the probability distribution functions of the measured birefringence and the effective signal-to-noise ratio (ESNR), and those of the true birefringence and the true ESNR. The Monte Carlo method is used to describe this relationship numerically, and adaptive 2D kernel density estimation provides the likelihood for a posteriori estimation of the true birefringence. The new estimator, which incorporates a stochastic model of the ESNR, is shown to improve on the previous estimator; both are based on the Jones matrix noise model. A comparison with the mean estimator is also made. Numerical simulation validates the superiority of the new estimator, as does in vivo measurement of the optic nerve head. PMID:28270974

  5. Interpretation with a Donnan-based concept of the influence of simple salt concentration on the apparent binding of divalent ions to the polyelectrolytes polystyrenesulfonate and dextran sulfate

    USGS Publications Warehouse

    Marinsky, J.A.; Baldwin, Robert F.; Reddy, M.M.

    1985-01-01

    It has been shown that the apparent enhancement of divalent metal ion binding to polyions such as polystyrenesulfonate (PSS) and dextran sulfate (DS) upon decreasing the ionic strength of these mixed counterion systems (M2+, M+, X-, polyion) can be anticipated with the Donnan-based model developed by one of us (J.A.M.). Ion-exchange distribution methods have been employed to measure the removal by the polyion of trace divalent metal ion from simple salt (NaClO4)-polyion (NaPSS) mixtures. These data, together with polyion interaction data published earlier by Mattai and Kwak for the mixed counterion systems MgCl2-LiCl-DS and MgCl2-CsCl-DS, have been shown to be amenable to rather precise analysis by this model. © 1985 American Chemical Society.

  6. Equivalent circuit of radio frequency-plasma with the transformer model

    NASA Astrophysics Data System (ADS)

    Nishida, K.; Mochizuki, S.; Ohta, M.; Yasumoto, M.; Lettry, J.; Mattei, S.; Hatayama, A.

    2014-02-01

    The LINAC4 H- source is a radio frequency (RF) driven source. In the RF system, the load impedance, which includes the H- source, must be matched to that of the final amplifier. We model the RF plasma inside the H- source as circuit elements using a transformer model, so that the characteristics of the load impedance become calculable. It has been shown that modeling based on the transformer model works well in predicting the resistance and inductance of the plasma.
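
    In a transformer model of this kind, the plasma is commonly treated as a single-turn secondary with resistance R_p and inductance L_p, coupled to the RF coil through a mutual inductance M; the impedance seen by the final amplifier is then the coil reactance plus the reflected secondary impedance. The sketch below uses this standard textbook form with illustrative component values, not the LINAC4 parameters:

```python
import math

def load_impedance(omega, L_coil, L_plasma, M, R_plasma):
    """Impedance seen at the RF coil terminals, with the plasma modeled
    as the single-turn secondary of a transformer:
        Z = j*w*L_coil + (w*M)**2 / (R_plasma + j*w*L_plasma)
    The second term is the secondary impedance reflected into the primary."""
    return 1j * omega * L_coil + (omega * M) ** 2 / (R_plasma + 1j * omega * L_plasma)

# Illustrative (not LINAC4) values: 2 MHz drive, 1 uH coil,
# plasma loop of 0.1 uH and 1 ohm, mutual inductance 0.2 uH.
w = 2.0 * math.pi * 2.0e6
Z = load_impedance(w, L_coil=1.0e-6, L_plasma=1.0e-7, M=2.0e-7, R_plasma=1.0)
```

    The real part of Z is the plasma resistance as seen by the generator, which is the quantity a matching network must be designed against.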

  7. Market-oriented Programming Using Small-world Networks for Controlling Building Environments

    NASA Astrophysics Data System (ADS)

    Shigei, Noritaka; Miyajima, Hiromi; Osako, Tsukasa

    The market model, one of the models of economic activity, can be formulated as an agent system, and its application to resource allocation problems has been studied. For air-conditioning control of buildings, which is one such resource allocation problem, an effective auction-based agent method has been proposed as an alternative to the traditional PID controller. This method has been regarded as a form of decentralized control; however, its decentralization is not complete, and its performance is not sufficient. In this paper, we first propose a perfectly decentralized agent model and show its performance. Second, in order to improve the model, we propose an agent model based on the small-world model. The effectiveness of the proposed model is shown by simulation.

  8. A critical issue in model-based inference for studying trait-based community assembly and a solution.

    PubMed

    Ter Braak, Cajo J F; Peres-Neto, Pedro; Dray, Stéphane

    2017-01-01

    Statistical testing of trait-environment association from data is a challenge as there is no common unit of observation: the trait is observed on species, the environment on sites and the mediating abundance on species-site combinations. A number of correlation-based methods, such as the community weighted trait means method (CWM), the fourth-corner correlation method and the multivariate method RLQ, have been proposed to estimate such trait-environment associations. In these methods, valid statistical testing proceeds by performing two separate resampling tests, one site-based and the other species-based, and by assessing significance by the larger of the two p-values (the pmax test). Recently, regression-based methods using generalized linear models (GLM) have been proposed as a promising alternative, with statistical inference via site-based resampling. We investigated the performance of this new approach along with approaches that mimicked the pmax test using GLM instead of the fourth-corner method. By simulation using models with additional random variation in the species response to the environment, the site-based resampling tests using GLM are shown to have severely inflated type I error of up to 90% when the nominal level is set at 5%. In addition, predictive modelling of such data using site-based cross-validation very often identified trait-environment interactions that had no predictive value. The problem that we identify is not an "omitted variable bias" problem, as it occurs even when the additional random variation is independent of the observed trait and environment data. Instead, it is a problem of ignoring a random effect. In the same simulations, the GLM-based pmax test controlled the type I error in all models proposed so far in this context, but still gave slightly inflated error in more complex models that included both missing (but important) traits and missing (but important) environmental variables. For screening the importance of single trait-environment combinations, the fourth-corner test is shown to give almost the same results as the GLM-based tests in far less computing time.
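
    The pmax procedure described above is easy to state in code: compute an association statistic, generate one null distribution by permuting the environment across sites and another by permuting the trait across species, and report the larger of the two permutation p-values. The sketch below uses an unnormalized fourth-corner-type statistic on toy data; the variable names and the statistic's scaling are illustrative.

```python
import random

def fourth_corner_stat(abund, env, trait):
    """Unnormalized fourth-corner-type statistic: sum_ij a_ij * e_i * t_j,
    where a is the site-by-species abundance table."""
    return sum(a * e * t
               for row, e in zip(abund, env)
               for a, t in zip(row, trait))

def perm_p(observed, null_stats):
    """Two-sided permutation p-value with the usual +1 correction."""
    extreme = sum(1 for s in null_stats if abs(s) >= abs(observed))
    return (extreme + 1) / (len(null_stats) + 1)

def pmax_test(abund, env, trait, n_perm=499, seed=0):
    """Assess a trait-environment association by the larger of the
    site-based and species-based permutation p-values."""
    rng = random.Random(seed)
    obs = fourth_corner_stat(abund, env, trait)
    site_null, species_null = [], []
    for _ in range(n_perm):
        e = env[:]
        rng.shuffle(e)                 # permute environment across sites
        site_null.append(fourth_corner_stat(abund, e, trait))
        t = trait[:]
        rng.shuffle(t)                 # permute trait across species
        species_null.append(fourth_corner_stat(abund, env, t))
    return max(perm_p(obs, site_null), perm_p(obs, species_null))
```

    Taking the larger p-value is what guards against the inflated type I error of a purely site-based test: a spurious association must survive both resampling schemes to be declared significant.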

  9. Interpretation of the results of statistical measurements. [search for basic probability model

    NASA Technical Reports Server (NTRS)

    Olshevskiy, V. V.

    1973-01-01

    For random processes, the calculated probability characteristic and the measured statistical estimate are used in a quality functional that defines the difference between the two functions. Based on the assumption that the statistical measurement procedure is organized so that the parameters of a selected model are optimized, it is shown that the interpretation of experimental research amounts to a search for a basic probability model.

  10. On Application of the Ostwald-de Waele Model to Description of Non-Newtonian Fluid Flow in the Nip of Counter-Rotating Rolls

    NASA Astrophysics Data System (ADS)

    Shapovalov, V. M.

    2018-05-01

    The accuracy of the Ostwald-de Waele model in solving the roll-flow problem has been assessed by comparison with the "reference" solution for an Ellis fluid. The analysis has shown that the model based on a power-law equation leads to substantial distortions of the flow pattern.

  11. Laboratory and Field Studies of the Acoustics of Multiphase Ocean Bottom Materials

    DTIC Science & Technology

    2011-09-30

    …data from this measurement campaign are shown in Figs. 1, 2, 3 and 4. The statistical nature of these data will be assessed and compared to models… environmental regulations limiting sound levels in the water, even for this source, which is intended to replace SUS. We conducted an engineering… Finally, a full listing of all grant-related activities is given in the Fiscal Year Publications section below. IMPACT/APPLICATIONS: The Biot-based…

  12. A new approach for assimilation of two-dimensional radar precipitation in a high resolution NWP model

    NASA Astrophysics Data System (ADS)

    Korsholm, Ulrik; Petersen, Claus; Hansen Sass, Bent; Woetman, Niels; Getreuer Jensen, David; Olsen, Bjarke Tobias; GIll, Rasphal; Vedel, Henrik

    2014-05-01

    The DMI nowcasting system has been running in a pre-operational state for the past year. The system consists of hourly simulations with the High Resolution Limited Area weather model, combined with surface and three-dimensional variational assimilation at each restart and nudging of satellite cloud products and radar precipitation. Nudging of a two-dimensional radar reflectivity CAPPI product is achieved using a new method in which low-level horizontal divergence is nudged towards pseudo-observations. The pseudo-observations are calculated from an assumed relation between divergence and precipitation rate, and the strength of the nudging is proportional to the offset between observed and modelled precipitation, leading to increased moisture convergence below cloud base when the model under-produces precipitation relative to the CAPPI product. If the model over-predicts precipitation, the low-level moisture source is reduced, and in-cloud moisture is nudged towards environmental values. In this talk, results will be discussed based on the fractions skill score in cases of heavy precipitation over Denmark. Furthermore, results from simulations combining reflectivity nudging with extrapolation of reflectivity will be shown. The results indicate that the new method leads to fast adjustment of the dynamical state of the model, facilitating precipitation release where the model's precipitation intensity is too low. Removal of precipitation is also shown to be important, and strong improvements were found in the position of the precipitation systems. Bias is reduced for both low and extreme precipitation rates.
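
    The nudging step described in the abstract can be caricatured as follows. The linear relation between precipitation rate and pseudo-observed divergence, the relaxation form, and all constants below are assumptions made purely for illustration; the abstract does not give the DMI scheme's actual functional forms.

```python
def nudge_divergence(div_model, precip_obs, precip_model,
                     k=1e-6, alpha=0.1, dt=60.0):
    """One illustrative nudging step.

    Assumes a pseudo-observed divergence D* = -k * (observed precip rate),
    i.e. observed rain implies low-level convergence, and relaxes the model
    divergence toward D* with a strength proportional to the offset between
    observed and modelled precipitation (all constants are placeholders)."""
    div_pseudo = -k * precip_obs
    strength = alpha * abs(precip_obs - precip_model)
    return div_model + strength * (div_pseudo - div_model) * dt
```

    When the model under-produces precipitation, the update drives the low-level divergence negative (convergence), feeding moisture into the column; when observed and modelled precipitation agree, the state is left untouched.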

  13. Model-based pH monitor for sensor assessment.

    PubMed

    van Schagen, Kim; Rietveld, Luuk; Veersma, Alex; Babuska, Robert

    2009-01-01

    Owing to the nature of the treatment processes, monitoring them based on individual online measurements is difficult or even impossible. However, the measurements (online and laboratory) can be combined with a priori process knowledge, using mathematical models, to objectively monitor the treatment processes and measurement devices. The pH measurement is commonly used at different stages of a drinking water treatment plant, although it is an unreliable instrument requiring significant maintenance. It is shown that, using a grey-box model, it is possible to assess the measurement devices effectively, even if detailed information about the specific processes is unknown.

  14. XFEM-based modeling of successive resections for preoperative image updating

    NASA Astrophysics Data System (ADS)

    Vigneron, Lara M.; Robe, Pierre A.; Warfield, Simon K.; Verly, Jacques G.

    2006-03-01

    We present a new method for modeling organ deformations due to successive resections. We use a biomechanical model of the organ and compute its volume-displacement solution based on the eXtended Finite Element Method (XFEM). The key feature of XFEM is that material discontinuities induced by each new resection can be handled without the remeshing or mesh adaptation that would be required by the conventional Finite Element Method (FEM). We focus on the application of preoperative image updating for image-guided surgery. Proof-of-concept demonstrations are shown for synthetic and real data in the context of neurosurgery.

  15. [Research on airborne hyperspectral identification of red tide organism dominant species based on SVM].

    PubMed

    Ma, Yi; Zhang, Jie; Cui, Ting-wei

    2006-12-01

    Airborne hyperspectral identification of the dominant species of red tide organisms can provide a technique for distinguishing red tides and their toxins, and support for assessing the scale of the disaster. Based on the support vector machine (SVM), the present paper provides an identification model for dominant red tide species. Using this model, the authors performed three identification experiments with the hyperspectral data obtained on 16 July, 19 August, and 25 August 2001. The identification results show that the model has high precision and is not restricted by the high dimensionality of the hyperspectral data.
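
    An SVM classifier of this kind separates pixels by their per-band spectra. The sketch below trains a linear SVM with Pegasos-style subgradient descent on toy two-band data; the paper's training data, feature bands, and (possibly kernelized) SVM formulation are not specified in the abstract, so everything here is an illustrative stand-in.

```python
import random

def train_linear_svm(X, y, lam=0.1, epochs=300, seed=0):
    """Pegasos-style stochastic subgradient training of a linear SVM.
    X: list of feature vectors (e.g. per-band reflectances); y: labels in {-1, +1}."""
    rng = random.Random(seed)
    d = len(X[0])
    w = [0.0] * d
    b = 0.0
    t = 0
    idx = list(range(len(X)))
    for _ in range(epochs):
        rng.shuffle(idx)
        for i in idx:
            t += 1
            eta = 1.0 / (lam * t)  # decaying step size
            margin = y[i] * (sum(wj * xj for wj, xj in zip(w, X[i])) + b)
            if margin < 1.0:       # hinge loss active: move toward the point
                w = [wj - eta * (lam * wj - y[i] * xj) for wj, xj in zip(w, X[i])]
                b += eta * y[i]
            else:                  # only shrink w (regularization)
                w = [wj - eta * lam * wj for wj in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1
```

    A separating hyperplane in band space is what lets the method cope with high-dimensional hyperspectral inputs without an explicit dimensionality reduction step.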

  16. Retrosynthetic Reaction Prediction Using Neural Sequence-to-Sequence Models

    PubMed Central

    2017-01-01

    We describe a fully data driven model that learns to perform a retrosynthetic reaction prediction task, which is treated as a sequence-to-sequence mapping problem. The end-to-end trained model has an encoder–decoder architecture that consists of two recurrent neural networks, which has previously shown great success in solving other sequence-to-sequence prediction tasks such as machine translation. The model is trained on 50,000 experimental reaction examples from the United States patent literature, which span 10 broad reaction types that are commonly used by medicinal chemists. We find that our model performs comparably with a rule-based expert system baseline model, and also overcomes certain limitations associated with rule-based expert systems and with any machine learning approach that contains a rule-based expert system component. Our model provides an important first step toward solving the challenging problem of computational retrosynthetic analysis. PMID:29104927

  17. Formal Analysis of Self-Efficacy in Job Interviewee’s Mental State Model

    NASA Astrophysics Data System (ADS)

    Ajoge, N. S.; Aziz, A. A.; Yusof, S. A. Mohd

    2017-08-01

    This paper presents a formal analysis approach for a self-efficacy model of the interviewee’s mental state during a job interview session. Self-efficacy is a construct that has been hypothesised to combine with motivation and interviewee anxiety to define the state influence of interviewees. The conceptual model was built based on psychological theories and models related to self-efficacy. A number of well-known relations between events and the course of self-efficacy are summarized from the literature, and it is shown that the proposed model exhibits those patterns. In addition, this formal model has been mathematically analysed to find out which stable situations exist. Finally, it is pointed out how this model can be used in a software agent or robot-based platform. Such a platform can provide an interview coaching approach in which support is provided to the user based on their individual mental state during interview sessions.

  18. A model for combined targeting and tracking tasks in computer applications.

    PubMed

    Senanayake, Ransalu; Hoffmann, Errol R; Goonetilleke, Ravindra S

    2013-11-01

    Current models for targeted tracking are discussed and shown to be inadequate as a means of understanding the combined task of tracking, as in Drury's paradigm, with a final target to be aimed at, as in Fitts' paradigm. It is shown that the task has to be split into components that are, in general, performed sequentially, each with a movement time dependent on the difficulty of that component of the task. In some cases, the task time may be controlled by the Fitts' task difficulty, and in others it may be dominated by the Drury's task difficulty. Based on an experiment that captured movement time in combinations of visually controlled and ballistic movements, a model for movement time in targeted tracking was developed.
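
    A sequential-components model of the kind described can be written down directly: each tracking segment contributes a steering-law (Drury-type) time a + b*(A/W), each terminal aim contributes a Fitts time a + b*log2(2A/W), and the total movement time is their sum. The coefficients and the exact difficulty formulations below are illustrative assumptions, not the paper's fitted model.

```python
import math

def fitts_mt(a, b, amplitude, width):
    """Fitts' law: MT = a + b * log2(2A / W)."""
    return a + b * math.log2(2.0 * amplitude / width)

def steering_mt(a, b, path_length, path_width):
    """Drury-type steering-law form: MT = a + b * (A / W)."""
    return a + b * (path_length / path_width)

def combined_mt(segments):
    """Sequential-components model: total MT is the sum of per-segment times.
    segments: list of (kind, a, b, A, W) tuples, kind in {'fitts', 'track'}."""
    total = 0.0
    for kind, a, b, A, W in segments:
        total += fitts_mt(a, b, A, W) if kind == 'fitts' else steering_mt(a, b, A, W)
    return total
```

    Whether the Fitts or the Drury term dominates the total then depends simply on which segment carries the larger difficulty, matching the abstract's observation that either difficulty can control overall task time.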

  19. An inventory model with random demand

    NASA Astrophysics Data System (ADS)

    Mitsel, A. A.; Kritski, O. L.; Stavchuk, LG

    2017-01-01

    The article describes a three-product inventory model with random demand at equal delivery frequencies. A feature of this model is that the additional purchase of required resources is carried out within the scope of their deficit, which allows reducing storage costs. A simulation based on data on the arrival of raw materials at an enterprise in Kazakhstan has been prepared. The proposed model is shown to enable savings of up to 40.8% of working capital.

  20. A Simple Sensor Model for THUNDER Actuators

    NASA Technical Reports Server (NTRS)

    Campbell, Joel F.; Bryant, Robert G.

    2009-01-01

    A quasi-static (low frequency) model is developed for THUNDER actuators configured as displacement sensors, based on a simple Rayleigh-Ritz technique. This model is used to calculate charge as a function of displacement. Using this and the calculated capacitance, voltage vs. displacement and voltage vs. electrical load curves are generated and compared with measurements. It is shown that this model gives acceptable results and is useful for obtaining rough estimates of sensor output for various loads, laminate configurations and thicknesses.

  1. An approach to development of ontological knowledge base in the field of scientific and research activity in Russia

    NASA Astrophysics Data System (ADS)

    Murtazina, M. Sh; Avdeenko, T. V.

    2018-05-01

    The state of the art and progress in the application of semantic technologies in the field of scientific and research activity have been analyzed. Even an elementary empirical comparison has shown that semantic search engines are superior in all respects to conventional search technologies. However, semantic information technologies are still insufficiently used in the field of scientific and research activity in Russia. In the present paper an approach to the construction of an ontological model of a knowledge base is proposed. The ontological model is based on an upper-level ontology and the RDF mechanism for linking several domain ontologies, and is implemented in the Protégé environment.

  2. Fault Detection for Automotive Shock Absorber

    NASA Astrophysics Data System (ADS)

    Hernandez-Alcantara, Diana; Morales-Menendez, Ruben; Amezquita-Brooks, Luis

    2015-11-01

    Fault detection for automotive semi-active shock absorbers is a challenge due to the non-linear dynamics and the strong influence of disturbances such as the road profile. The first obstacle for this task is modeling the fault, which has been shown to be multiplicative in nature, whereas many of the most widespread fault detection schemes consider additive faults. Two model-based fault detection algorithms for semi-active shock absorbers are compared: an observer-based approach and a parameter identification approach. The performance of these schemes is validated and compared using a commercial vehicle model that was experimentally validated. Early results show that the parameter identification approach is more accurate, whereas the observer-based approach is less sensitive to parametric uncertainty.

  3. Inductive reasoning.

    PubMed

    Hayes, Brett K; Heit, Evan; Swendsen, Haruka

    2010-03-01

    Inductive reasoning entails using existing knowledge or observations to make predictions about novel cases. We review recent findings in research on category-based induction as well as theoretical models of these results, including similarity-based models, connectionist networks, an account based on relevance theory, Bayesian models, and other mathematical models. A number of touchstone empirical phenomena that involve taxonomic similarity are described. We also examine phenomena involving more complex background knowledge about premises and conclusions of inductive arguments and the properties referenced. Earlier models are shown to give a good account of similarity-based phenomena but not knowledge-based phenomena. Recent models that aim to account for both similarity-based and knowledge-based phenomena are reviewed and evaluated. Among the most important new directions in induction research are a focus on induction with uncertain premise categories, the modeling of the relationship between inductive and deductive reasoning, and examination of the neural substrates of induction. A common theme in both the well-established and emerging lines of induction research is the need to develop well-articulated and empirically testable formal models of induction. Copyright © 2010 John Wiley & Sons, Ltd.

  4. Processing speed enhances model-based over model-free reinforcement learning in the presence of high working memory functioning

    PubMed Central

    Schad, Daniel J.; Jünger, Elisabeth; Sebold, Miriam; Garbusow, Maria; Bernhardt, Nadine; Javadi, Amir-Homayoun; Zimmermann, Ulrich S.; Smolka, Michael N.; Heinz, Andreas; Rapp, Michael A.; Huys, Quentin J. M.

    2014-01-01

    Theories of decision-making and its neural substrates have long assumed the existence of two distinct and competing valuation systems, variously described as goal-directed vs. habitual, or, more recently and based on statistical arguments, as model-free vs. model-based reinforcement-learning. Though both have been shown to control choices, the cognitive abilities associated with these systems are under ongoing investigation. Here we examine the link to cognitive abilities, and find that individual differences in processing speed covary with a shift from model-free to model-based choice control in the presence of above-average working memory function. This suggests shared cognitive and neural processes; provides a bridge between literatures on intelligence and valuation; and may guide the development of process models of different valuation components. Furthermore, it provides a rationale for individual differences in the tendency to deploy valuation systems, which may be important for understanding the manifold neuropsychiatric diseases associated with malfunctions of valuation. PMID:25566131

  5. Modal analysis of graphene-based structures for large deformations, contact and material nonlinearities

    NASA Astrophysics Data System (ADS)

    Ghaffari, Reza; Sauer, Roger A.

    2018-06-01

    The nonlinear frequencies of pre-stressed graphene-based structures, such as flat graphene sheets and carbon nanotubes, are calculated. These structures are modeled with a nonlinear hyperelastic shell model. The model is calibrated with quantum mechanics data and is valid for high strains. Analytical solutions of the natural frequencies of various plates are obtained for the Canham bending model by assuming infinitesimal strains. These solutions are used for the verification of the numerical results. The performance of the model is illustrated by means of several examples. Modal analysis is performed for square plates under pure dilatation or uniaxial stretch, circular plates under pure dilatation or under the effects of an adhesive substrate, and carbon nanotubes under uniaxial compression or stretch. The adhesive substrate is modeled with van der Waals interaction (based on the Lennard-Jones potential) and a coarse grained contact model. It is shown that the analytical natural frequencies underestimate the real ones, and this should be considered in the design of devices based on graphene structures.

  6. Modeling the data management system of Space Station Freedom with DEPEND

    NASA Technical Reports Server (NTRS)

    Olson, Daniel P.; Iyer, Ravishankar K.; Boyd, Mark A.

    1993-01-01

    Some of the features and capabilities of the DEPEND simulation-based modeling tool are described. A study of a 1553B local bus subsystem of the Space Station Freedom Data Management System (SSF DMS) is used to illustrate some types of system behavior that can be important to reliability and performance evaluations of this type of spacecraft. A DEPEND model of the subsystem is used to illustrate how these types of system behavior can be modeled, and to show what kinds of engineering and design questions can be answered through the use of these modeling techniques. DEPEND's process-based simulation environment is shown to provide a flexible method for modeling complex interactions between hardware and software elements of a fault-tolerant computing system.

  7. The Council of Regional Accrediting Commissions Framework for Competency-Based Education: A Grounded Theory Study

    ERIC Educational Resources Information Center

    Butland, Mark James

    2017-01-01

    Colleges facing pressures to increase student outcomes while reducing costs have shown an increasing interest in competency-based education (CBE) models. Regional accreditors created a joint policy on CBE evaluation. Two years later, through this grounded theory study, I sought to understand from experts the nature of this policy, its impact, and…

  8. Category Rating Is Based on Prototypes and Not Instances: Evidence from Feedback-Dependent Context Effects

    ERIC Educational Resources Information Center

    Petrov, Alexander A.

    2011-01-01

    Context effects in category rating on a 7-point scale are shown to reverse direction depending on feedback. Context (skewed stimulus frequencies) was manipulated between and feedback within subjects in two experiments. The diverging predictions of prototype- and exemplar-based scaling theories were tested using two representative models: ANCHOR…

  9. Polymethylsilsesquioxanes through base-catalyzed redistribution of oligomethylhydridosiloxanes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    RAHIMIAN,KAMYAR; ASSINK,ROGER A.; LOY,DOUGLAS A.

    2000-04-04

    There has been an increasing amount of interest in silsesquioxanes and polysilsesquioxanes. They have been used as models for silica surfaces and have been shown to have great potential for several industrial applications. Typical synthesis of polysilsesquioxanes involves the hydrolysis of organotrichlorosilanes and/or organotrialkoxysilanes in the presence of acid or base catalysts, usually in organic solvents.

  10. Comparing in Cylinder Pressure Modelling of a DI Diesel Engine Fuelled on Alternative Fuel Using Two Tabulated Chemistry Approaches

    PubMed Central

    Ngayihi Abbe, Claude Valery; Nzengwa, Robert; Danwe, Raidandi

    2014-01-01

    This work presents the comparative simulation of a diesel engine fuelled on diesel fuel and biodiesel fuel. Two models, based on tabulated chemistry, were implemented for the simulation, and results were compared with experimental data obtained from a single-cylinder diesel engine. The first model is a single-zone model based on the Krieger and Bormann combustion model, while the second is a two-zone model based on the Olikara and Bormann combustion model. It was shown that both models can predict the engine's in-cylinder pressure well, as well as its overall performance. The second model showed better accuracy than the first, while the first model was easier to implement and faster to compute. It was found that the first method was better suited for real-time engine control and monitoring, while the second was better suited for engine design and emission prediction. PMID:27379306
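    The single-zone formulation referenced in this abstract applies the first law of thermodynamics to the whole cylinder charge. The sketch below is illustrative only: it substitutes a textbook Wiebe heat-release curve for the paper's Krieger-Bormann tabulated chemistry, and every engine parameter is assumed, not taken from the study.

```python
import numpy as np

# Generic single-zone in-cylinder pressure model (first-law form):
#   dp/dtheta = (gamma-1)/V * dQ/dtheta - gamma * p/V * dV/dtheta
gamma = 1.35          # effective ratio of specific heats (assumed constant)
rc, R = 17.0, 3.5     # compression ratio, conrod/crank-radius ratio (assumed)
Vd = 0.5e-3           # displaced volume [m^3] (assumed)
Vc = Vd / (rc - 1)    # clearance volume

def V(theta):
    """Slider-crank cylinder volume as a function of crank angle [rad]."""
    return Vc * (1 + 0.5 * (rc - 1) *
                 (R + 1 - np.cos(theta) - np.sqrt(R**2 - np.sin(theta)**2)))

def wiebe(theta, th0, dth, a=5.0, m=2.0):
    """Cumulative mass fraction burned (Wiebe function)."""
    x = np.clip((theta - th0) / dth, 0.0, None)
    return 1.0 - np.exp(-a * x**(m + 1))

theta = np.linspace(-np.pi, np.pi, 2000)   # -180..180 deg, TDC at 0
Qtot = 1200.0                              # total heat released [J] (illustrative)
xb = wiebe(theta, th0=np.deg2rad(-5), dth=np.deg2rad(50))

p = np.empty_like(theta)
p[0] = 1.0e5                               # charge pressure at BDC [Pa]
for i in range(1, len(theta)):
    dQ = Qtot * (xb[i] - xb[i - 1])        # heat released this step
    dV = V(theta[i]) - V(theta[i - 1])     # volume change this step
    p[i] = p[i - 1] + (gamma - 1) / V(theta[i - 1]) * dQ \
                    - gamma * p[i - 1] / V(theta[i - 1]) * dV
print(f"peak pressure {p.max()/1e5:.1f} bar "
      f"at {np.rad2deg(theta[p.argmax()]):.1f} deg ATDC")
```

    A tabulated-chemistry model, as in the paper, would replace the Wiebe term with heat release interpolated from precomputed chemistry tables, but the pressure update itself has the same structure.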

  11. Reduced-order modeling for hyperthermia: an extended balanced-realization-based approach.

    PubMed

    Mattingly, M; Bailey, E A; Dutton, A W; Roemer, R B; Devasia, S

    1998-09-01

    Accurate thermal models are needed in hyperthermia cancer treatments for such tasks as actuator and sensor placement design, parameter estimation, and feedback temperature control. The complexity of the human body produces full-order models which are too large for effective execution of these tasks, making use of reduced-order models necessary. However, standard balanced-realization (SBR)-based model reduction techniques require a priori knowledge of the particular placement of actuators and sensors for model reduction. Since placement design is intractable (computationally) on the full-order models, SBR techniques must use ad hoc placements. To alleviate this problem, an extended balanced-realization (EBR)-based model-order reduction approach is presented. The new technique allows model order reduction to be performed over all possible placement designs and does not require ad hoc placement designs. It is shown that models obtained using the EBR method are more robust to intratreatment changes in the placement of the applied power field than those models obtained using the SBR method.
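    The extended balanced realization itself is not spelled out in this abstract, but the SBR baseline it improves on is standard. Below is a minimal square-root balanced-truncation sketch on an assumed toy 3-state system (all matrices illustrative, not a thermal model from the paper).

```python
import numpy as np
from scipy.linalg import cholesky, solve_continuous_lyapunov

def balanced_truncation(A, B, C, r):
    """Standard balanced-realization (SBR) reduction via the square-root
    algorithm: balance the controllability/observability Gramians and keep
    the r states with the largest Hankel singular values."""
    Wc = solve_continuous_lyapunov(A, -B @ B.T)    # A Wc + Wc A' = -B B'
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)  # A' Wo + Wo A = -C' C
    Lc = cholesky((Wc + Wc.T) / 2, lower=True)     # symmetrise numerically
    Lo = cholesky((Wo + Wo.T) / 2, lower=True)
    U, hsv, Vt = np.linalg.svd(Lo.T @ Lc)          # Hankel singular values
    S = np.diag(hsv[:r] ** -0.5)
    T = Lc @ Vt[:r].T @ S            # reduced -> full state map
    Tinv = S @ U[:, :r].T @ Lo.T     # left inverse of T
    return Tinv @ A @ T, Tinv @ B, C @ T, hsv

# Toy stable 3-state system (illustrative values only)
A = np.array([[-1.0, 0.1, 0.0], [0.1, -2.0, 0.2], [0.0, 0.2, -5.0]])
B = np.array([[1.0], [0.5], [0.2]])
C = np.array([[1.0, 0.0, 0.5]])
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=2)

# The DC-gain error is bounded by twice the sum of discarded Hankel
# singular values, so the reduced model stays close to the full one.
g_full = (-C @ np.linalg.solve(A, B)).item()
g_red = (-Cr @ np.linalg.solve(Ar, Br)).item()
print(hsv, g_full, g_red)
```

    Note the EBR method of the paper generalizes this so the Gramians do not depend on a particular actuator/sensor placement; that extension is not reproduced here.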

  12. Ancestral haplotype-based association mapping with generalized linear mixed models accounting for stratification.

    PubMed

    Zhang, Z; Guillaume, F; Sartelet, A; Charlier, C; Georges, M; Farnir, F; Druet, T

    2012-10-01

    In many situations, genome-wide association studies are performed in populations presenting stratification. Mixed models including a kinship matrix accounting for genetic relatedness among individuals have been shown to correct for population and/or family structure. Here we extend this methodology to generalized linear mixed models, which properly model data under various distributions. In addition, we perform association with ancestral haplotypes inferred using a hidden Markov model. The method was shown to properly account for stratification under various simulated scenarios presenting population and/or family structure. Use of ancestral haplotypes resulted in higher power than SNPs on simulated datasets. Application to real data demonstrates the usefulness of the developed model. Full analysis of a dataset with 4600 individuals and 500 000 SNPs was performed in 2 h 36 min and required 2.28 Gb of RAM. The software GLASCOW can be freely downloaded from www.giga.ulg.ac.be/jcms/prod_381171/software. Contact: francois.guillaume@jouy.inra.fr. Supplementary data are available at Bioinformatics online.

  13. Description of the Main Ionospheric Trough by the SM-MIT Model. European Longitudinal Sector

    NASA Astrophysics Data System (ADS)

    Leshchinskaya, T. Yu.; Pustovalova, L. V.

    2018-05-01

    As part of the selection of existing ionospheric models for incorporation into the System of Ionospheric Monitoring and Prediction of the Russian Federation, the model of the main ionospheric trough (SM-MIT) is tested against data from ground-based ionospheric observations in the European longitudinal sector. It is shown that the SM-MIT model does not improve accuracy over the foF2 monthly median in describing the equatorial wall of the MIT. The model describes the foF2 values in the MIT minimum with higher accuracy than the foF2 monthly median or the median IRI model; at the same time, however, the deviations of the model foF2 values from the observed values are still considerable: 20-30%. In the MIT minimum, the decrease in the model foF2 values relative to the median values is on average only 10%, which is substantially less than the observed depth of the MIT in the evening sector. The verification results show that the SM-MIT model must be refined for practical use.

  14. Some predictions of the attached eddy model for a high Reynolds number boundary layer.

    PubMed

    Nickels, T B; Marusic, I; Hafez, S; Hutchins, N; Chong, M S

    2007-03-15

    Many flows of practical interest occur at high Reynolds number, at which the flow in most of the boundary layer is turbulent, showing apparently random fluctuations in velocity across a wide range of scales. The range of scales over which these fluctuations occur increases with the Reynolds number and hence high Reynolds number flows are difficult to compute or predict. In this paper, we discuss the structure of these flows and describe a physical model, based on the attached eddy hypothesis, which makes predictions for the statistical properties of these flows and their variation with Reynolds number. The predictions are shown to compare well with the results from recent experiments in a new purpose-built high Reynolds number facility. The model is also shown to provide a clear physical explanation for the trends in the data. The limits of applicability of the model are also discussed.

  15. An AI-based approach to structural damage identification by modal analysis

    NASA Technical Reports Server (NTRS)

    Glass, B. J.; Hanagud, S.

    1990-01-01

    Flexible-structure damage is presently addressed by a combined model- and parameter-identification approach which employs the AI methodologies of classification, heuristic search, and object-oriented model knowledge representation. The conditions for model-space search convergence to the best model are discussed in terms of search-tree organization and initial model parameter error. In the illustrative example of a truss structure presented, the use of both model and parameter identification is shown to lead to smaller parameter corrections than would be required by parameter identification alone.

  16. Reduced Order Modeling for Rapid Simulation of Blast Events of a Military Ground Vehicle and Its Occupants

    DTIC Science & Technology

    2014-08-12

    deformation of the hull due to blast. An LS-DYNA model which included the ConWep function [6] was run, with a charge mass corresponding to a STANAG...different models and blast loadings are shown in Table 3. These responses are based on generic seat properties and assumed dummy position, which can be...Comparison between MADYMO and LS-DYNA models An LS-DYNA model with ConWep blast force applied to all segments of the hull floor and a MADYMO model with

  17. Characterisation and Modelling of Urbanisation Patterns in the Silicon Valley of India

    NASA Astrophysics Data System (ADS)

    Aithal, B. H.

    2015-12-01

    Urbanisation and urban sprawl have led to environmental problems and large losses of arable land in India. In this study, we characterise the pattern of urban growth and model urban sprawl by means of a combination of remote sensing, geographical information systems, spatial metrics and CA-based modelling. The analysis uses time-series data to explore and derive the potential political, socio-economic and land-based driving forces behind urbanisation and urban sprawl, and spatial models under different scenarios to explore the spatio-temporal interactions and development. The study area is Greater Bangalore, for the period from 1973 to 2015. Depletion of water bodies, loss of vegetation and changes in tree cover were also analysed to obtain region-specific results affecting the global climate and regional balance. Agents were successfully integrated into the modelling to understand and foresee landscape-pattern change in urban morphology. The results reveal that built-up paved surfaces have expanded towards the outskirts and into the buffer regions around the city. Population growth, economic and industrial development in the city core, and transportation development remain the main causes of urban sprawl in the region. Agent-based models are considered an alternative to traditional models, and the agent-based modelling approach, as seen in this paper, has clearly shown its effectiveness in capturing micro-dynamics and neighbourhood influence. The greenhouse gas emission inventory highlights the domestic sector as one of the major impact categories in the region. Furthermore, tree cover has reduced drastically, as is evident from the statistics, suggesting that the city is on the verge of chaos in terms of human health and desertification.
The study concludes that the integration of remote sensing, GIS and agent-based modelling offers an excellent opportunity to explore the spatio-temporal variation and visualisation of a sprawling metropolitan region. It gives a complete overview of urbanisation and the effects of urban sprawl in the region, and can help planners and city managers understand future pockets and scenarios of urban growth.

  18. A Ball Lightning Model as a Possible Explanation of Recently Reported Cavity Lights

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fryberger, David; /SLAC

    The salient features of cavity lights, in particular, mobile luminous objects (MLO's), as have been experimentally observed in superconducting accelerator cavities, are summarized. A model based upon standard electromagnetic interactions between a small particle and the 1.5 GHz cavity excitation field is described. This model can explain some features of these data, in particular, the existence of particle orbits without wall contact. While this result is an important success for the model, it is detailed why the model as it stands is incomplete. It is argued that no avenues for a suitable extension of the model through established physics appear evident, which motivates an investigation of a model based upon a more exotic object, ball lightning. As discussed, further motivation derives from the fact that there are significant similarities in many of the qualitative features of ball lightning and MLO's, even though they appear in quite different circumstances and differ in scale by orders of magnitude. The ball lightning model, which incorporates electromagnetic charges and currents, is based on a symmetrized set of Maxwell's equations in which the electromagnetic sources and fields are characterized by a process called dyality rotation. It is shown that a consistent mathematical description of dyality rotation as a physical process can be achieved by adding suitable (phenomenological) current terms to supplement the usual current terms in the symmetrized Maxwell's equations. These currents, which enable the conservation of electric and magnetic charge, are called vacuum currents. It is shown that the proposed ball lightning model offers a good qualitative explanation of the perplexing aspects of the MLO data. Avenues for further study are indicated.

  19. From ancient Greece to the cognitive revolution: A comprehensive view of physical rehabilitation sciences.

    PubMed

    Martínez-Pernía, David; González-Castán, Óscar; Huepe, David

    2017-02-01

    The development of rehabilitation has traditionally focused on measurements of motor disorders and of the improvements produced during the therapeutic process; however, the physical rehabilitation sciences have not focused on understanding the philosophical and scientific principles underlying clinical intervention and how they are interrelated. The main aim of this paper is to explain the foundation stones of the disciplines of physical therapy, occupational therapy, and speech/language therapy in recovery from motor disorder. To reach our goals, the mechanistic view and how it is integrated into physical rehabilitation will first be explained. Next, a classification of mechanistic therapy into an older version (the automaton model) and a technological version (the cyborg model) will be shown. Then, it will be shown how the physical rehabilitation sciences found a new perspective on motor recovery, based on functionalism, during the cognitive revolution of the 1960s. Through this cognitive theory, physical rehabilitation incorporated into motor recovery those therapeutic strategies that solicit the activation of the brain and/or symbolic processing, aspects that were not taken into account in mechanistic therapy. In addition, a classification of functionalist rehabilitation into computational therapy and brain therapy will be shown. At the end of the article, the methodological principles of the physical rehabilitation sciences will be explained. This will allow us to go deeper into the differences and similarities between therapeutic mechanism and therapeutic functionalism.

  20. Mathematical modeling of the aerodynamic characteristics in flight dynamics

    NASA Technical Reports Server (NTRS)

    Tobak, M.; Chapman, G. T.; Schiff, L. B.

    1984-01-01

    Basic concepts involved in the mathematical modeling of the aerodynamic response of an aircraft to arbitrary maneuvers are reviewed. The original formulation of an aerodynamic response in terms of nonlinear functionals is shown to be compatible with a derivation based on the use of nonlinear functional expansions. Extensions of the analysis through its natural connection with ideas from bifurcation theory are indicated.

  1. Using Image Modelling to Teach Newton's Laws with the Ollie Trick

    ERIC Educational Resources Information Center

    Dias, Marco Adriano; Carvalho, Paulo Simeão; Vianna, Deise Miranda

    2016-01-01

    Image modelling is a video-based teaching tool that is a combination of strobe images and video analysis. This tool can enable a qualitative and a quantitative approach to the teaching of physics, in a much more engaging and appealing way than the traditional expositive practice. In a specific scenario shown in this paper, the Ollie trick, we…

  2. A Statistical Framework for Protein Quantitation in Bottom-Up MS-Based Proteomics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karpievitch, Yuliya; Stanley, Jeffrey R.; Taverner, Thomas

    2009-08-15

    Motivation: Quantitative mass spectrometry-based proteomics requires protein-level estimates and associated confidence measures. Challenges include the presence of low quality or incorrectly identified peptides and informative missingness. Furthermore, models are required for rolling peptide-level information up to the protein level. Results: We present a statistical model that carefully accounts for informative missingness in peak intensities and allows unbiased, model-based, protein-level estimation and inference. The model is applicable to both label-based and label-free quantitation experiments. We also provide automated, model-based, algorithms for filtering of proteins and peptides as well as imputation of missing values. Two LC/MS datasets are used to illustrate the methods. In simulation studies, our methods are shown to achieve substantially more discoveries than standard alternatives. Availability: The software has been made available in the open-source proteomics platform DAnTE (http://omics.pnl.gov/software/). Contact: adabney@stat.tamu.edu Supplementary information: Supplementary data are available at Bioinformatics online.

  3. Passivity/Lyapunov based controller design for trajectory tracking of flexible joint manipulators

    NASA Technical Reports Server (NTRS)

    Sicard, Pierre; Wen, John T.; Lanari, Leonardo

    1992-01-01

    A passivity- and Lyapunov-based approach to control design for the trajectory tracking problem of flexible joint robots is presented. The basic structure of the proposed controller is the sum of a model-based feedforward and a model-independent feedback. Feedforward selection and solution are analyzed for a general model for flexible joints, and for more specific and practical model structures. Passivity theory is used to design a motor-state-based controller in order to input-output stabilize the error system formed by the feedforward. Observability conditions for asymptotic stability are stated and verified. In order to accommodate modeling uncertainties and to allow for the implementation of a simplified feedforward compensation, the stability of the system is analyzed in the presence of approximations in the feedforward by using a Lyapunov-based robustness analysis. It is shown that under certain conditions, e.g., when the desired trajectory varies slowly enough, stability is maintained for various approximations of a canonical feedforward.

  4. Learning Based Bidding Strategy for HVAC Systems in Double Auction Retail Energy Markets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yannan; Somani, Abhishek; Carroll, Thomas E.

    In this paper, a bidding strategy is proposed using reinforcement learning for HVAC systems in a double auction market. The bidding strategy does not require a specific model-based representation of behavior, i.e., a functional form to translate indoor house temperatures into bid prices. The results from reinforcement learning based approach are compared with the HVAC bidding approach used in the AEP gridSMART® smart grid demonstration project and it is shown that the model-free (learning based) approach tracks well the results from the model-based behavior. Successful use of model-free approaches to represent device-level economic behavior may help develop similar approaches to represent behavior of more complex devices or groups of diverse devices, such as in a building. Distributed control requires an understanding of decision making processes of intelligent agents so that appropriate mechanisms may be developed to control and coordinate their responses, and model-free approaches to represent behavior will be extremely useful in that quest.

  5. Effective Simulation Strategy of Multiscale Flows using a Lattice Boltzmann model with a Stretched Lattice

    NASA Astrophysics Data System (ADS)

    Yahia, Eman; Premnath, Kannan

    2017-11-01

    Resolving multiscale flow physics (e.g. for boundary layer or mixing layer flows) effectively generally requires the use of different grid resolutions in different coordinate directions. Here, we present a new formulation of a multiple relaxation time (MRT) lattice Boltzmann (LB) model for anisotropic meshes. It is based on a simpler and more stable non-orthogonal moment basis, while the use of MRT introduces additional flexibility, and the model maintains a stream-collide procedure; its second order moment equilibria are augmented with additional velocity gradient terms, dependent on the grid aspect ratio, that fully restore the required isotropy of the transport coefficients of the normal and shear stresses. Furthermore, by introducing additional cubic velocity corrections, it maintains Galilean invariance. The consistency of this stretched-lattice LB scheme with the Navier-Stokes equations is shown via a Chapman-Enskog expansion. Numerical studies for a variety of benchmark flow problems demonstrate its ability for accurate and effective simulations at relatively high Reynolds numbers. The MRT-LB scheme is also shown to be more stable than prior LB models for rectangular grids, even for grid aspect ratios as small as 0.1 and for Reynolds numbers of 10000.

  6. Examination of simplified travel demand model. [Internal volume forecasting model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, R.L. Jr.; McFarlane, W.J.

    1978-01-01

    A simplified travel demand model, the Internal Volume Forecasting (IVF) model, proposed by Low in 1972, is evaluated as an alternative to the conventional urban travel demand modeling process. The calibration of the IVF model for a county-level study area in Central Wisconsin results in what appears to be a reasonable model; however, analysis of the structure of the model reveals two primary mis-specifications. Correction of the mis-specifications leads to a simplified gravity model version of the conventional urban travel demand models. Application of the original IVF model to ''forecast'' 1960 traffic volumes based on the model calibrated for 1970 produces accurate estimates. Shortcut and ad hoc models may appear to provide reasonable results in both the base and horizon years; however, as shown by the IVF model, such models will not always provide a reliable basis for transportation planning and investment decisions.

  7. Estimating the Probability of Rare Events Occurring Using a Local Model Averaging.

    PubMed

    Chen, Jin-Hua; Chen, Chun-Shu; Huang, Meng-Fan; Lin, Hung-Chih

    2016-10-01

    In statistical applications, logistic regression is a popular method for analyzing binary data accompanied by explanatory variables. But when one of the two outcomes is rare, the estimation of model parameters has been shown to be severely biased and hence estimating the probability of rare events occurring based on a logistic regression model would be inaccurate. In this article, we focus on estimating the probability of rare events occurring based on logistic regression models. Instead of selecting a best model, we propose a local model averaging procedure based on a data perturbation technique applied to different information criteria to obtain different probability estimates of rare events occurring. Then an approximately unbiased estimator of Kullback-Leibler loss is used to choose the best one among them. We design complete simulations to show the effectiveness of our approach. For illustration, a necrotizing enterocolitis (NEC) data set is analyzed. © 2016 Society for Risk Analysis.
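    The averaging of probability estimates across models selected by different information criteria can be sketched generically with Akaike weights. Note that the paper's actual procedure uses a data perturbation technique and an approximately unbiased Kullback-Leibler loss estimator; the weighting rule and all numbers below are illustrative assumptions only.

```python
import numpy as np

def aic_weights(aics):
    """Akaike weights: w_m proportional to exp(-dAIC_m / 2), normalised."""
    d = np.asarray(aics, dtype=float) - np.min(aics)
    w = np.exp(-d / 2.0)
    return w / w.sum()

# Hypothetical AICs for three candidate logistic models, and their
# predicted rare-event probabilities for one new case (all numbers
# are illustrative, not from the paper).
aics = [210.4, 212.1, 218.9]
p_hat = np.array([0.012, 0.019, 0.031])

w = aic_weights(aics)
p_avg = float(w @ p_hat)   # model-averaged probability of the rare event
print(w, p_avg)
```

    The averaged estimate falls between the individual models' predictions, with the lowest-AIC model contributing the most weight.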

  8. Development of a recursion RNG-based turbulence model

    NASA Technical Reports Server (NTRS)

    Zhou, YE; Vahala, George; Thangam, S.

    1993-01-01

    Reynolds stress closure models based on the recursion renormalization group theory are developed for the prediction of turbulent separated flows. The proposed model uses a finite wavenumber truncation scheme to account for the spectral distribution of energy. In particular, the model incorporates effects of both local and nonlocal interactions. The nonlocal interactions are shown to yield a contribution identical to that from the epsilon-renormalization group (RNG), while the local interactions introduce higher order dispersive effects. A formal analysis of the model is presented and its ability to accurately predict separated flows is analyzed from a combined theoretical and computational standpoint. Turbulent flow past a backward-facing step is chosen as a test case and the results obtained from detailed computations demonstrate that the proposed recursion-RNG model with finite cut-off wavenumber can yield very good predictions for the backstep problem.

  9. A Data Model for Teleconsultation in Managing High-Risk Pregnancies: Design and Preliminary Evaluation

    PubMed Central

    Deldar, Kolsoum

    2017-01-01

    Background Teleconsultation is a guarantor for virtual supervision of clinical professors over clinical decisions made by medical residents in teaching hospitals. Type, format, volume, and quality of exchanged information have a great influence on the quality of remote clinical decisions or tele-decisions. Thus, it is necessary to develop a reliable and standard model for these clinical relationships. Objective The goal of this study was to design and evaluate a data model for teleconsultation in the management of high-risk pregnancies. Methods This study was implemented in three phases. In the first phase, a systematic review, a qualitative study, and a Delphi approach were done in selected teaching hospitals. Systematic extraction and localization of diagnostic items to develop the tele-decision clinical archetypes were performed as the second phase. Finally, the developed model was evaluated using predefined consultation scenarios. Results Our review study showed that present medical consultations have no specific structure or template for patient information exchange. Furthermore, there are many challenges in the remote medical decision-making process, and some of them are related to the lack of the mentioned structure. The evaluation phase of our research showed that archetype-based consultation scenarios scored higher than routine-based ones on data quality (P<.001), adequacy (P<.001), organization (P<.001), confidence (P<.001), and convenience (P<.001). Conclusions Our archetype-based model achieved higher scores in the data quality, adequacy, organization, confidence, and convenience dimensions than scenarios based on routine practice. It is probable that the suggested archetype-based teleconsultation model may improve the quality of physician-physician remote medical consultations. PMID:29242181

  10. From brain topography to brain topology: relevance of graph theory to functional neuroscience.

    PubMed

    Minati, Ludovico; Varotto, Giulia; D'Incerti, Ludovico; Panzica, Ferruccio; Chan, Dennis

    2013-07-10

    Although several brain regions show significant specialization, higher functions such as cross-modal information integration, abstract reasoning and conscious awareness are viewed as emerging from interactions across distributed functional networks. Analytical approaches capable of capturing the properties of such networks can therefore enhance our ability to make inferences from functional MRI, electroencephalography and magnetoencephalography data. Graph theory is a branch of mathematics that focuses on the formal modelling of networks and offers a wide range of theoretical tools to quantify specific features of network architecture (topology) that can provide information complementing the anatomical localization of areas responding to given stimuli or tasks (topography). Explicit modelling of the architecture of axonal connections and interactions among areas can furthermore reveal peculiar topological properties that are conserved across diverse biological networks, and highly sensitive to disease states. The field is evolving rapidly, partly fuelled by computational developments that enable the study of connectivity at fine anatomical detail and the simultaneous interactions among multiple regions. Recent publications in this area have shown that graph-based modelling can enhance our ability to draw causal inferences from functional MRI experiments, and support the early detection of disconnection and the modelling of pathology spread in neurodegenerative disease, particularly Alzheimer's disease. Furthermore, neurophysiological studies have shown that network topology has a profound link to epileptogenesis and that connectivity indices derived from graph models aid in modelling the onset and spread of seizures. Graph-based analyses may therefore significantly help understand the bases of a range of neurological conditions. 
This review is designed to provide an overview of graph-based analyses of brain connectivity and their relevance to disease aimed principally at general neuroscientists and clinicians.
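The topological quantities such analyses rely on are straightforward to compute from an adjacency structure. As a minimal illustrative sketch (generic graph metrics on a toy undirected network, not any specific neuroimaging toolbox):

```python
from collections import deque

def clustering(adj, node):
    """Local clustering coefficient: the fraction of a node's neighbour
    pairs that are themselves connected."""
    nbrs = list(adj[node])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2.0 * links / (k * (k - 1))

def path_lengths(adj, src):
    """Shortest-path lengths from src by breadth-first search."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

# Toy undirected network: a triangle a-b-c with a pendant node d on c.
g = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
```

Metrics like these (averaged over nodes, they give the characteristic path length and mean clustering) are the building blocks of the small-world and hub analyses the review discusses.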

  11. Value-based procurement of medical devices: Application to devices for mechanical thrombectomy in ischemic stroke.

    PubMed

    Trippoli, Sabrina; Caccese, Erminia; Marinai, Claudio; Messori, Andrea

    2018-03-01

In acute ischemic stroke, endovascular devices have shown promising clinical results and are also likely to represent value for money, as several modeling studies have shown. Pharmacoeconomic evaluations in this field, however, have little impact on the procurement of these devices. The present study explored how complex pharmacoeconomic models that evaluate effectiveness and cost can be incorporated into the in-hospital procurement of thrombectomy devices. As regards clinical modeling, we extracted outcomes at three months from randomized trials conducted for four thrombectomy devices, and we projected long-term results using standard Markov modeling. In estimating QALYs, the same model was run for the four devices. As regards economic modeling, we firstly estimated for each device the net monetary benefit (NMB) per patient (threshold = $60,000 per QALY); then, we simulated a competitive tender across the four products by determining the tender-based score (on a 0-to-100 scale). Prices of individual devices were obtained from manufacturers. Extensive sensitivity testing was applied to our analyses. For the four devices (Solitaire, Trevo, Penumbra, Solumbra), QALYs were 1.86, 1.52, 1.79, 1.35, NMB was $101,824, $83,546, $101,923, $69,440, and tender-based scores were 99.70, 43.43, 100, 0, respectively. Sensitivity analysis confirmed the findings of the base-case analysis. Our results indicate that, in the field of thrombectomy devices, incorporating the typical tools of cost-effectiveness into the processes of tenders and procurement is feasible. Bridging the methodology of cost-effectiveness with the every-day practice of in-hospital procurement can contribute to maximizing the health returns that are generated by in-hospital expenditures for medical devices. Copyright © 2018 Elsevier B.V. All rights reserved.
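The scoring step can be reproduced from the reported figures. Assuming NMB = threshold × QALYs − cost and that the tender-based score is a linear 0-to-100 rescaling of NMB across the competing devices (assumptions consistent with the numbers above, not the authors' published code):

```python
def net_monetary_benefit(qalys, cost, threshold=60_000.0):
    """Per-patient net monetary benefit at a cost-effectiveness threshold."""
    return threshold * qalys - cost

def tender_scores(nmbs):
    """Linear 0-to-100 rescaling of NMB across the tendered devices
    (best device scores 100, worst scores 0)."""
    lo, hi = min(nmbs), max(nmbs)
    return [100.0 * (n - lo) / (hi - lo) for n in nmbs]

# Reported per-patient NMB for Solitaire, Trevo, Penumbra, Solumbra:
scores = tender_scores([101_824, 83_546, 101_923, 69_440])
print([round(s, 2) for s in scores])  # [99.7, 43.43, 100.0, 0.0]
```

The rescaled values match the tender-based scores reported in the abstract, which supports the linear-rescaling reading of the method.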

  12. Adapting an Agent-Based Model of Socio-Technical Systems to Analyze System and Security Failures

    DTIC Science & Technology

    2016-05-09

statistically significant amount, which it did with a p-value < 0.0003 on a simulation of 3125 iterations; the data is shown in the Delegation 1 column of...Blackout metric to a statistically significant amount, with a p-value < 0.0003 on a simulation of 3125 iterations; the data is shown in the Delegation 2...Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems: volume 1-Volume 1, pp. 1007-1014. International Foundation

  13. Improved model for the angular dependence of excimer laser ablation rates in polymer materials

    NASA Astrophysics Data System (ADS)

    Pedder, J. E. A.; Holmes, A. S.; Dyer, P. E.

    2009-10-01

    Measurements of the angle-dependent ablation rates of polymers that have applications in microdevice fabrication are reported. A simple model based on Beer's law, including plume absorption, is shown to give good agreement with the experimental findings for polycarbonate and SU8, ablated using the 193 and 248 nm excimer lasers, respectively. The modeling forms a useful tool for designing masks needed to fabricate complex surface relief by ablation.
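A minimal sketch of such a model, assuming per-pulse etch depth follows Beer's law above a fluence threshold, with a cos θ projection for oblique incidence and exponential attenuation in the plume (the functional form and parameter names are assumptions for illustration, not the paper's fitted model):

```python
import math

def etch_depth(fluence, alpha, f_threshold, theta_deg=0.0,
               plume_mu=0.0, plume_len=0.0):
    """Per-pulse etch depth from Beer's law (hypothetical parameter names).

    The effective fluence is reduced by the cos(theta) projection onto the
    tilted surface and by exponential absorption over the plume path."""
    theta = math.radians(theta_deg)
    f_eff = fluence * math.cos(theta) * math.exp(-plume_mu * plume_len)
    if f_eff <= f_threshold:
        return 0.0                       # below the ablation threshold
    return math.log(f_eff / f_threshold) / alpha
```

Sweeping `theta_deg` with such a function is the kind of calculation used when designing masks for angled surface relief.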

  14. Computational studies of horizontal axis wind turbines

    NASA Astrophysics Data System (ADS)

    Xu, Guanpeng

    A numerical technique has been developed for efficiently simulating fully three-dimensional viscous fluid flow around horizontal axis wind turbines (HAWT) using a zonal approach. The flow field is viewed as a combination of viscous regions, inviscid regions and vortices. The method solves the costly unsteady Reynolds averaged Navier-Stokes (RANS) equations only in the viscous region around the turbine blades. It solves the full potential equation in the inviscid region where flow is irrotational and isentropic. The tip vortices are simulated using a Lagrangean approach, thus removing the need to accurately resolve them on a fine grid. The hybrid method is shown to provide good results with modest CPU resources. A full Navier-Stokes based methodology has also been developed for modeling wind turbines at high wind conditions where extensive stall may occur. An overset grid based version that can model rotor-tower interactions has been developed. Finally, a blade element theory based methodology has been developed for the purpose of developing improved tip loss models and stall delay models. The effects of turbulence are simulated using a zero equation eddy viscosity model, or a one equation Spalart-Allmaras model. Two transition models, one based on the Eppler's criterion, and the other based on Michel's criterion, have been developed and tested. The hybrid method has been extensively validated for axial wind conditions for three rotors---NREL Phase II, Phase III, and Phase VI configurations. A limited set of calculations has been done for rotors operating under yaw conditions. Preliminary simulations have also been carried out to assess the effects of the tower wake on the rotor. In most of these cases, satisfactory agreement has been obtained with measurements. Using the numerical results from present methodologies as a guide, Prandtl's tip loss model and Corrigan's stall delay model were correlated with present calculations. 
An improved tip loss model has been obtained. A correction to the Corrigan's stall delay model has also been developed. Incorporation of these corrections is shown to considerably improve power predictions, even when a very simple aerodynamic theory---blade element method with annular inflow---is used.
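Prandtl's tip loss model, the baseline that the improved model corrects, can be sketched in its standard textbook form (this is the classical factor, not the paper's corrected version):

```python
import math

def prandtl_tip_loss(r, R, B, phi):
    """Prandtl's tip loss factor F for a B-bladed rotor at local radius r,
    tip radius R, and local inflow angle phi (radians). F falls from ~1
    inboard to 0 at the tip, attenuating the predicted blade loading."""
    f = (B / 2.0) * (R - r) / (r * math.sin(phi))
    return (2.0 / math.pi) * math.acos(math.exp(-f))
```

In a blade element code, the momentum-balance induction factors are multiplied by F at each radial station.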

  15. Covering complete proteomes with X-ray structures: A current snapshot

    DOE PAGES

    Mizianty, Marcin J.; Fan, Xiao; Yan, Jing; ...

    2014-10-23

Structural genomics programs have developed and applied structure-determination pipelines to a wide range of protein targets, facilitating the visualization of macromolecular interactions and the understanding of their molecular and biochemical functions. The fundamental question of whether three-dimensional structures of all proteins and all functional annotations can be determined using X-ray crystallography is investigated. A first-of-its-kind large-scale analysis of crystallization propensity for all proteins encoded in 1953 fully sequenced genomes was performed. It is shown that current X-ray crystallographic knowhow combined with homology modeling can provide structures for 25% of modeling families (protein clusters for which structural models can be obtained through homology modeling), with at least one structural model produced for each Gene Ontology functional annotation. The coverage varies between superkingdoms, with 19% for eukaryotes, 35% for bacteria and 49% for archaea, and with those of viruses following the coverage values of their hosts. It is shown that the crystallization propensities of proteomes from the taxonomic superkingdoms are distinct. The use of knowledge-based target selection is shown to substantially increase the ability to produce X-ray structures. It is demonstrated that the human proteome has one of the highest attainable coverage values among eukaryotes, and GPCR membrane proteins suitable for X-ray structure determination were identified.

  16. Testing Small Variance Priors Using Prior-Posterior Predictive p Values.

    PubMed

    Hoijtink, Herbert; van de Schoot, Rens

    2017-04-03

Muthén and Asparouhov (2012) propose to evaluate model fit in structural equation models based on approximate (using small variance priors) instead of exact equality of (combinations of) parameters to zero. This is an important development that adequately addresses Cohen's (1994) The Earth is Round (p < .05), which stresses that point null-hypotheses are so precise that small and irrelevant differences from the null-hypothesis may lead to their rejection. It is tempting to evaluate small variance priors using readily available approaches like the posterior predictive p value and the DIC. However, as will be shown, neither is suited to the evaluation of models based on small variance priors. In this article, a well-behaved alternative, the prior-posterior predictive p value, will be introduced. It will be shown that it is consistent, the distributions under the null and alternative hypotheses will be elaborated, and it will be applied to testing whether the difference between 2 means and the size of a correlation are relevantly different from zero. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
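The generic Monte Carlo recipe behind any predictive p value, prior-posterior or otherwise, is to compare the observed discrepancy against discrepancies simulated from a predictive distribution. A toy sketch of that recipe (not the authors' exact procedure or its calibration guarantees):

```python
import random
import statistics

def predictive_p_value(observed_stat, simulate_stat, n_sim=2000, seed=1):
    """Generic Monte Carlo predictive p value: the share of discrepancy
    statistics drawn from a predictive distribution (prior- or
    posterior-based) that are at least as extreme as the observed one."""
    random.seed(seed)
    sims = [simulate_stat() for _ in range(n_sim)]
    return sum(s >= observed_stat for s in sims) / n_sim

# Toy discrepancy: |difference of two sample means|, n = 20 per group,
# simulated under a null where both means are (approximately) equal.
def sim_diff():
    g1 = [random.gauss(0.0, 1.0) for _ in range(20)]
    g2 = [random.gauss(0.0, 1.0) for _ in range(20)]
    return abs(statistics.mean(g1) - statistics.mean(g2))

p_small_diff = predictive_p_value(0.05, sim_diff)  # compatible with the null
p_large_diff = predictive_p_value(1.50, sim_diff)  # clearly incompatible
```

The article's contribution concerns which predictive distribution to simulate from so that the resulting p value is well calibrated; the sampling loop itself looks like the above.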

  17. Investigation of Small-Caliber Primer Function Using a Multiphase Computational Model

    DTIC Science & Technology

    2008-07-01

all solid walls along with specified inflow at the primer orifice (-0.102 cm < Y < 0.102 cm at X = 0). Initially, the entire flowfield is filled...to explicitly treat both the gas and solid phase. The model is based on the One Dimensional Turbulence modeling approach that has recently emerged as...a powerful tool in multiphase simulations. Initial results are shown for the model run as a stand-alone code and are compared to recent experiments

  18. Modelling Seasonally Freezing Ground Conditions

    DTIC Science & Technology

    1989-05-01

used as the ’snow input’ in the larger hydrological models, e.g. Pangburn (1987). The most advanced index model is Anderson’s (1973) model. This bases...source as the soils) is shown in figures 32 and 33. Table 10 shows the percentage areas of Hydrologic Soil Groups, Land Use and Slope Distribution for...

  19. Graphs and Tracks Revisited

    NASA Astrophysics Data System (ADS)

    Christian, Wolfgang; Belloni, Mario

    2013-04-01

    We have recently developed a Graphs and Tracks model based on an earlier program by David Trowbridge, as shown in Fig. 1. Our model can show position, velocity, acceleration, and energy graphs and can be used for motion-to-graphs exercises. Users set the heights of the track segments, and the model displays the motion of the ball on the track together with position, velocity, and acceleration graphs. This ready-to-run model is available in the ComPADRE OSP Collection at www.compadre.org/osp/items/detail.cfm?ID=12023.
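For a frictionless track, the graphed quantities follow directly from energy conservation. A minimal sketch (not the ComPADRE program itself; g = 9.8 m/s^2 assumed):

```python
import math

G = 9.8  # gravitational acceleration, m/s^2 (assumed)

def speed_on_track(heights, h_start, v_start=0.0):
    """Speed of a frictionless ball at each track segment height, from
    energy conservation: v = sqrt(v0^2 + 2 g (h_start - h)).
    Returns None for segments the ball cannot reach."""
    speeds = []
    for h in heights:
        v_sq = v_start ** 2 + 2.0 * G * (h_start - h)
        speeds.append(math.sqrt(v_sq) if v_sq >= 0.0 else None)
    return speeds

# Ball released from rest at 1.0 m over segments at 1.0, 0.5, and 0.0 m:
print(speed_on_track([1.0, 0.5, 0.0], h_start=1.0))
```

The velocity graph the program displays is this function of position; the acceleration graph follows from each segment's slope.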

  20. Search-based model identification of smart-structure damage

    NASA Technical Reports Server (NTRS)

    Glass, B. J.; Macalou, A.

    1991-01-01

This paper describes the use of a combined model and parameter identification approach, based on modal analysis and artificial intelligence (AI) techniques, for identifying damage or flaws in a rotating truss structure incorporating embedded piezoceramic sensors. This smart structure example is representative of a class of structures commonly found in aerospace systems and next generation space structures. Artificial intelligence techniques of classification, heuristic search, and an object-oriented knowledge base are used in an AI-based model identification approach. A finite model space is classified into a search tree, over which a variant of best-first search is used to identify the model whose stored response most closely matches that of the input. Newly-encountered models can be incorporated into the model space. This adaptiveness demonstrates the potential for learning control. Following this output-error model identification, numerical parameter identification is used to further refine the identified model. Given the rotating truss example in this paper, noisy data corresponding to various damage configurations are input to both this approach and a conventional parameter identification method. The combination of the AI-based model identification with parameter identification is shown to lead to smaller parameter corrections than required by the use of parameter identification alone.
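The search component can be sketched generically: best-first expansion of a tree-classified model space, ranked by output error (a simplified stand-in for the paper's variant of best-first search; the model names and error values below are hypothetical):

```python
import heapq

def best_first_identify(root, children, mismatch):
    """Best-first search over a tree-structured model space: always expand
    the candidate whose stored response least mismatches the measured one.
    `children(m)` lists refinements of model m; `mismatch(m)` is the
    output-error score (lower is better)."""
    frontier = [(mismatch(root), root)]
    best = (float("inf"), None)
    while frontier:
        score, model = heapq.heappop(frontier)
        if score < best[0]:
            best = (score, model)
        for child in children(model):
            heapq.heappush(frontier, (mismatch(child), child))
    return best[1]

# Toy model space: refine "root" toward the stored model with lowest error.
tree = {"root": ["coarse", "damaged"], "coarse": ["fine"],
        "damaged": [], "fine": []}
err = {"root": 5.0, "coarse": 3.0, "damaged": 4.0, "fine": 0.5}
best = best_first_identify("root", tree.__getitem__, err.__getitem__)
```

In the paper's setting, the returned model would then be handed to numerical parameter identification for refinement.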

  1. Modeling Carbon-Black/Polymer Composite Sensors

    PubMed Central

    Lei, Hua; Pitt, William G.; McGrath, Lucas K.; Ho, Clifford K.

    2012-01-01

    Conductive polymer composite sensors have shown great potential in identifying gaseous analytes. To more thoroughly understand the physical and chemical mechanisms of this type of sensor, a mathematical model was developed by combining two sub-models: a conductivity model and a thermodynamic model, which gives a relationship between the vapor concentration of analyte(s) and the change of the sensor signals. In this work, 64 chemiresistors representing eight different carbon concentrations (8–60 vol% carbon) were constructed by depositing thin films of a carbon-black/polyisobutylene composite onto concentric spiral platinum electrodes on a silicon chip. The responses of the sensors were measured in dry air and at various vapor pressures of toluene and trichloroethylene. Three parameters in the conductivity model were determined by fitting the experimental data. It was shown that by applying this model, the sensor responses can be adequately predicted for given vapor pressures; furthermore the analyte vapor concentrations can be estimated based on the sensor responses. This model will guide the improvement of the design and fabrication of conductive polymer composite sensors for detecting and identifying mixtures of organic vapors. PMID:22518071

  2. GENERALIZED VISCOPLASTIC MODELING OF DEBRIS FLOW.

    USGS Publications Warehouse

    Chen, Cheng-lung

    1988-01-01

The earliest model developed by R. A. Bagnold was based on the concept of the 'dispersive' pressure generated by grain collisions. Some efforts have recently been made by theoreticians in non-Newtonian fluid mechanics to modify or improve Bagnold's concept or model. A viable rheological model should consist both of a rate-independent part and a rate-dependent part. A generalized viscoplastic fluid (GVF) model that has both parts as well as two major rheological properties (i.e., the normal stress effect and soil yield criterion) is shown to be sufficiently accurate, yet practical for general use in debris-flow modeling. In fact, Bagnold's model is found to be only a particular case of the GVF model. Analytical solutions for (steady) uniform debris flows in wide channels are obtained from the GVF model based on Bagnold's simplified assumption of constant grain concentration.

  3. Toward a Model-Based Predictive Controller Design in Brain–Computer Interfaces

    PubMed Central

    Kamrunnahar, M.; Dias, N. S.; Schiff, S. J.

    2013-01-01

A first step in designing a robust and optimal model-based predictive controller (MPC) for brain–computer interface (BCI) applications is presented in this article. An MPC has the potential to achieve improved BCI performance compared to the performance achieved by current ad hoc, nonmodel-based filter applications. The parameters in designing the controller were extracted as model-based features from motor imagery task-related human scalp electroencephalography. Although the parameters can be generated from any model, linear or non-linear, we here adopted a simple autoregressive model that has well-established applications in BCI task discriminations. It was shown that the parameters generated for the controller design can also be used for motor imagery task discriminations with performance (with 8–23% task discrimination errors) comparable to the discrimination performance of commonly used features such as frequency-specific band powers and the AR model parameters directly used. An optimal MPC has significant implications for high performance BCI applications. PMID:21267657
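As an illustration of the model-based feature extraction described above, a least-squares AR fit can be sketched in a few lines (a generic estimator, not the authors' exact pipeline):

```python
def ar_features(signal, order=2):
    """Least-squares AR coefficients of a 1-D signal, a simple stand-in for
    model-based EEG features. Solves the normal equations by Gaussian
    elimination (pure Python; fine for small orders)."""
    n, p = len(signal), order
    rows = range(p, n)
    X = [[signal[t - k - 1] for k in range(p)] for t in rows]  # lagged samples
    y = [signal[t] for t in rows]
    # Normal equations A a = b with A = X^T X, b = X^T y.
    A = [[sum(r[j] * r[k] for r in X) for k in range(p)] for j in range(p)]
    b = [sum(r[j] * yi for r, yi in zip(X, y)) for j in range(p)]
    # Gaussian elimination with partial pivoting.
    for col in range(p):
        piv = max(range(col, p), key=lambda i: abs(A[i][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for i in range(col + 1, p):
            f = A[i][col] / A[col][col]
            for k in range(col, p):
                A[i][k] -= f * A[col][k]
            b[i] -= f * b[col]
    a = [0.0] * p
    for i in range(p - 1, -1, -1):
        a[i] = (b[i] - sum(A[i][k] * a[k] for k in range(i + 1, p))) / A[i][i]
    return a

# Sanity check on a noiseless AR(2) signal x[t] = 1.2 x[t-1] - 0.5 x[t-2]:
x = [1.0, 0.5]
for _ in range(50):
    x.append(1.2 * x[-1] - 0.5 * x[-2])
coeffs = ar_features(x)   # approximately [1.2, -0.5]
```

Per-channel, per-epoch coefficient vectors like `coeffs` are what get fed to the task discriminator or, here, to the controller design.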

  4. Toward a model-based predictive controller design in brain-computer interfaces.

    PubMed

    Kamrunnahar, M; Dias, N S; Schiff, S J

    2011-05-01

A first step in designing a robust and optimal model-based predictive controller (MPC) for brain-computer interface (BCI) applications is presented in this article. An MPC has the potential to achieve improved BCI performance compared to the performance achieved by current ad hoc, nonmodel-based filter applications. The parameters in designing the controller were extracted as model-based features from motor imagery task-related human scalp electroencephalography. Although the parameters can be generated from any model, linear or non-linear, we here adopted a simple autoregressive model that has well-established applications in BCI task discriminations. It was shown that the parameters generated for the controller design can also be used for motor imagery task discriminations with performance (with 8-23% task discrimination errors) comparable to the discrimination performance of commonly used features such as frequency-specific band powers and the AR model parameters directly used. An optimal MPC has significant implications for high performance BCI applications.

5. Novel model of an AlGaN/GaN high electron mobility transistor based on an artificial neural network

    NASA Astrophysics Data System (ADS)

    Cheng, Zhi-Qun; Hu, Sha; Liu, Jun; Zhang, Qi-Jun

    2011-03-01

    In this paper we present a novel approach to modeling AlGaN/GaN high electron mobility transistor (HEMT) with an artificial neural network (ANN). The AlGaN/GaN HEMT device structure and its fabrication process are described. The circuit-based Neuro-space mapping (neuro-SM) technique is studied in detail. The EEHEMT model is implemented according to the measurement results of the designed device, which serves as a coarse model. An ANN is proposed to model AlGaN/GaN HEMT based on the coarse model. Its optimization is performed. The simulation results from the model are compared with the measurement results. It is shown that the simulation results obtained from the ANN model of AlGaN/GaN HEMT are more accurate than those obtained from the EEHEMT model. Project supported by the National Natural Science Foundation of China (Grant No. 60776052).

  6. Numerical modeling and performance analysis of zinc oxide (ZnO) thin-film based gas sensor

    NASA Astrophysics Data System (ADS)

    Punetha, Deepak; Ranjan, Rashmi; Pandey, Saurabh Kumar

    2018-05-01

This manuscript describes the modeling and analysis of a zinc oxide thin-film based gas sensor. The conductance and sensitivity of the sensing layer have been described as functions of temperature and gas concentration. The analysis has been done for reducing and oxidizing agents. Simulation results revealed the change in resistance and sensitivity of the sensor with respect to temperature and different gas concentrations. To check the feasibility of the model, all the simulated results have been compared against different experimentally reported works. Wolkenstein theory has been used to model the proposed sensor, and the simulation results have been obtained using device simulation software.

  7. Modelling of a holographic interferometry based calorimeter for radiation dosimetry

    NASA Astrophysics Data System (ADS)

    Beigzadeh, A. M.; Vaziri, M. R. Rashidian; Ziaie, F.

    2017-08-01

    In this research work, a model for predicting the behaviour of holographic interferometry based calorimeters for radiation dosimetry is introduced. Using this technique for radiation dosimetry via measuring the variations of refractive index due to energy deposition of radiation has several considerable advantages such as extreme sensitivity and ability of working without normally used temperature sensors that disturb the radiation field. We have shown that the results of our model are in good agreement with the experiments performed by other researchers under the same conditions. This model also reveals that these types of calorimeters have the additional and considerable merits of transforming the dose distribution to a set of discernible interference fringes.

  8. A hybrid phenomenological model for ferroelectroelastic ceramics. Part II: Morphotropic PZT ceramics

    NASA Astrophysics Data System (ADS)

    Stark, S.; Neumeister, P.; Balke, H.

    2016-10-01

    In this part II of a two part series, the rate-independent hybrid phenomenological constitutive model introduced in part I is modified to account for the material behavior of morphotropic lead zirconate titanate ceramics (PZT ceramics). The modifications are based on a discussion of the available literature results regarding the micro-structure of these materials. In particular, a monoclinic phase and a highly simplified representation of the hierarchical structure of micro-domains and nano-domains observed experimentally are incorporated into the model. It is shown that experimental data for the commercially available morphotropic PZT material PIC151 (PI Ceramic GmbH, Lederhose, Germany) can be reproduced and predicted based on the modified hybrid model.

  9. Classical and quantum Big Brake cosmology for scalar field and tachyonic models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamenshchik, A. Yu.; Manti, S.

We study a relation between the cosmological singularities in classical and quantum theory, comparing the classical and quantum dynamics in some models possessing the Big Brake singularity - the model based on a scalar field and two models based on a tachyon-pseudo-tachyon field. It is shown that the effect of quantum avoidance is absent for the soft singularities of the Big Brake type while it is present for the Big Bang and Big Crunch singularities. Thus, there is some kind of a classical - quantum correspondence, because soft singularities are traversable in classical cosmology, while the strong Big Bang and Big Crunch singularities are not traversable.

  10. Gain degradation and amplitude scintillation due to tropospheric turbulence

    NASA Technical Reports Server (NTRS)

    Theobold, D. M.; Hodge, D. B.

    1978-01-01

    It is shown that a simple physical model is adequate for the prediction of the long term statistics of both the reduced signal levels and increased peak-to-peak fluctuations. The model is based on conventional atmospheric turbulence theory and incorporates both amplitude and angle of arrival fluctuations. This model predicts the average variance of signals observed under clear air conditions at low elevation angles on earth-space paths at 2, 7.3, 20 and 30 GHz. Design curves based on this model for gain degradation, realizable gain, amplitude fluctuation as a function of antenna aperture size, frequency, and either terrestrial path length or earth-space path elevation angle are presented.

  11. Improved model reduction and tuning of fractional-order PI(λ)D(μ) controllers for analytical rule extraction with genetic programming.

    PubMed

    Das, Saptarshi; Pan, Indranil; Das, Shantanu; Gupta, Amitava

    2012-03-01

    Genetic algorithm (GA) has been used in this study for a new approach of suboptimal model reduction in the Nyquist plane and optimal time domain tuning of proportional-integral-derivative (PID) and fractional-order (FO) PI(λ)D(μ) controllers. Simulation studies show that the new Nyquist-based model reduction technique outperforms the conventional H(2)-norm-based reduced parameter modeling technique. With the tuned controller parameters and reduced-order model parameter dataset, optimum tuning rules have been developed with a test-bench of higher-order processes via genetic programming (GP). The GP performs a symbolic regression on the reduced process parameters to evolve a tuning rule which provides the best analytical expression to map the data. The tuning rules are developed for a minimum time domain integral performance index described by a weighted sum of error index and controller effort. From the reported Pareto optimal front of the GP-based optimal rule extraction technique, a trade-off can be made between the complexity of the tuning formulae and the control performance. The efficacy of the single-gene and multi-gene GP-based tuning rules has been compared with the original GA-based control performance for the PID and PI(λ)D(μ) controllers, handling four different classes of representative higher-order processes. These rules are very useful for process control engineers, as they inherit the power of the GA-based tuning methodology, but can be easily calculated without the requirement for running the computationally intensive GA every time. Three-dimensional plots of the required variation in PID/fractional-order PID (FOPID) controller parameters with reduced process parameters have been shown as a guideline for the operator. Parametric robustness of the reported GP-based tuning rules has also been shown with credible simulation examples. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.

  12. Cosmology based on f(R) gravity admits 1 eV sterile neutrinos.

    PubMed

    Motohashi, Hayato; Starobinsky, Alexei A; Yokoyama, Jun'ichi

    2013-03-22

    It is shown that the tension between recent neutrino oscillation experiments, favoring sterile neutrinos with masses of the order of 1 eV, and cosmological data which impose stringent constraints on neutrino masses from the free streaming suppression of density fluctuations, can be resolved in models of the present accelerated expansion of the Universe based on f(R) gravity.

  13. Using recurrent neural networks for adaptive communication channel equalization.

    PubMed

    Kechriotis, G; Zervas, E; Manolakos, E S

    1994-01-01

    Nonlinear adaptive filters based on a variety of neural network models have been used successfully for system identification and noise-cancellation in a wide class of applications. An important problem in data communications is that of channel equalization, i.e., the removal of interferences introduced by linear or nonlinear message corrupting mechanisms, so that the originally transmitted symbols can be recovered correctly at the receiver. In this paper we introduce an adaptive recurrent neural network (RNN) based equalizer whose small size and high performance makes it suitable for high-speed channel equalization. We propose RNN based structures for both trained adaptation and blind equalization, and we evaluate their performance via extensive simulations for a variety of signal modulations and communication channel models. It is shown that the RNN equalizers have comparable performance with traditional linear filter based equalizers when the channel interferences are relatively mild, and that they outperform them by several orders of magnitude when either the channel's transfer function has spectral nulls or severe nonlinear distortion is present. In addition, the small-size RNN equalizers, being essentially generalized IIR filters, are shown to outperform multilayer perceptron equalizers of larger computational complexity in linear and nonlinear channel equalization cases.

  14. Nitrogen feedbacks increase future terrestrial ecosystem carbon uptake in an individual-based dynamic vegetation model

    NASA Astrophysics Data System (ADS)

    Wårlind, D.; Smith, B.; Hickler, T.; Arneth, A.

    2014-11-01

    Recently a considerable amount of effort has been put into quantifying how interactions of the carbon and nitrogen cycle affect future terrestrial carbon sinks. Dynamic vegetation models, representing the nitrogen cycle with varying degree of complexity, have shown diverging constraints of nitrogen dynamics on future carbon sequestration. In this study, we use LPJ-GUESS, a dynamic vegetation model employing a detailed individual- and patch-based representation of vegetation dynamics, to evaluate how population dynamics and resource competition between plant functional types, combined with nitrogen dynamics, have influenced the terrestrial carbon storage in the past and to investigate how terrestrial carbon and nitrogen dynamics might change in the future (1850 to 2100; one representative "business-as-usual" climate scenario). Single-factor model experiments of CO2 fertilisation and climate change show generally similar directions of the responses of C-N interactions, compared to the C-only version of the model as documented in previous studies using other global models. Under an RCP 8.5 scenario, nitrogen limitation suppresses potential CO2 fertilisation, reducing the cumulative net ecosystem carbon uptake between 1850 and 2100 by 61%, and soil warming-induced increase in nitrogen mineralisation reduces terrestrial carbon loss by 31%. When environmental changes are considered conjointly, carbon sequestration is limited by nitrogen dynamics up to the present. However, during the 21st century, nitrogen dynamics induce a net increase in carbon sequestration, resulting in an overall larger carbon uptake of 17% over the full period. This contrasts with previous results with other global models that have shown an 8 to 37% decrease in carbon uptake relative to modern baseline conditions. Implications for the plausibility of earlier projections of future terrestrial C dynamics based on C-only models are discussed.

  15. Individual welfare maximization in electricity markets including consumer and full transmission system modeling

    NASA Astrophysics Data System (ADS)

    Weber, James Daniel

    1999-11-01

    This dissertation presents a new algorithm that allows a market participant to maximize its individual welfare in the electricity spot market. The use of such an algorithm in determining market equilibrium points, called Nash equilibria, is also demonstrated. The start of the algorithm is a spot market model that uses the optimal power flow (OPF), with a full representation of the transmission system. The OPF is also extended to model consumer behavior, and a thorough mathematical justification for the inclusion of the consumer model in the OPF is presented. The algorithm utilizes price and dispatch sensitivities, available from the Hessian matrix of the OPF, to help determine an optimal change in an individual's bid. The algorithm is shown to be successful in determining local welfare maxima, and the prospects for scaling the algorithm up to realistically sized systems are very good. Assuming a market in which all participants maximize their individual welfare, economic equilibrium points, called Nash equilibria, are investigated. This is done by iteratively solving the individual welfare maximization algorithm for each participant until a point is reached where all individuals stop modifying their bids. It is shown that these Nash equilibria can be located in this manner. However, it is also demonstrated that equilibria do not always exist, and are not always unique when they do exist. It is also shown that individual welfare is a highly nonconcave function resulting in many local maxima. As a result, a more global optimization technique, using a genetic algorithm (GA), is investigated. The genetic algorithm is successfully demonstrated on several systems. It is also shown that a GA can be developed using special niche methods, which allow a GA to converge to several local optima at once. Finally, the last chapter of this dissertation covers the development of a new computer visualization routine for power system analysis: contouring. 
The contouring algorithm is demonstrated to be useful in visualizing bus-based and transmission line-based quantities.
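The equilibrium-finding loop described above, iterating each participant's individual welfare maximization until no bid changes, can be sketched on a toy market (a stylized Cournot-style example with discrete quantities, not the dissertation's OPF-based model):

```python
def best_response(payoff, options, other_action):
    """The action maximizing this participant's welfare, opponent fixed."""
    return max(options, key=lambda a: payoff(a, other_action))

def iterate_to_nash(payoff1, payoff2, options1, options2, start=(0, 0),
                    max_iter=100):
    """Iterated individual welfare maximization: players alternately
    re-optimize; a fixed point, where neither changes its action,
    is a pure-strategy Nash equilibrium."""
    a1, a2 = start
    for _ in range(max_iter):
        n1 = best_response(payoff1, options1, a2)
        n2 = best_response(payoff2, options2, n1)
        if (n1, n2) == (a1, a2):
            return (a1, a2)      # no participant wants to deviate
        a1, a2 = n1, n2
    return None                  # cycling: no pure-strategy equilibrium found

# Toy market: price = 10 - total quantity, zero cost, quantities 0..5.
opts = range(6)
def profit(q_me, q_other):
    return q_me * (10 - q_me - q_other)

print(iterate_to_nash(profit, profit, opts, opts))  # (3, 3)
```

As the dissertation notes, this loop need not terminate: the `None` branch corresponds to the cases where no pure-strategy equilibrium exists.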

  16. Dose-dependent DNA adduct formation by cinnamaldehyde and other food-borne α,β-unsaturated aldehydes predicted by physiologically based in silico modelling.

    PubMed

    Kiwamoto, R; Ploeg, D; Rietjens, I M C M; Punt, A

    2016-03-01

    Genotoxicity of α,β-unsaturated aldehydes shown in vitro raises a concern for the use of the aldehydes as food flavourings, while at low dose exposures the formation of DNA adducts may be prevented by detoxification. Unlike many α,β-unsaturated aldehydes for which in vivo data are absent, cinnamaldehyde was shown to be not genotoxic or carcinogenic in vivo. The present study aimed at comparing dose-dependent DNA adduct formation by cinnamaldehyde and 18 acyclic food-borne α,β-unsaturated aldehydes using physiologically based kinetic/dynamic (PBK/D) modelling. In rats, cinnamaldehyde was predicted to induce higher DNA adducts levels than 6 out of the 18 α,β-unsaturated aldehydes, indicating that these 6 aldehydes may also test negative in vivo. At the highest cinnamaldehyde dose that tested negative in vivo, cinnamaldehyde was predicted to form at least three orders of magnitude higher levels of DNA adducts than the 18 aldehydes at their respective estimated daily intake. These results suggest that for all the 18 α,β-unsaturated aldehydes DNA adduct formation at doses relevant for human dietary exposure may not raise a concern. The present study illustrates a possible use of physiologically based in silico modelling to facilitate a science-based comparison and read-across on the possible risks posed by DNA reactive agents. Copyright © 2015 Elsevier B.V. All rights reserved.
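The qualitative mechanism, saturable detoxification protecting against DNA binding at low doses, can be sketched with a toy flux model (all parameters hypothetical; this is a conceptual illustration, not the authors' PBK/D model):

```python
def adduct_fraction(dose, vmax=1.0, km=0.1, k_bind=0.01):
    """Toy flux split (hypothetical parameters): an aldehyde dose is cleared
    by saturable Michaelis-Menten detoxification while a small first-order
    flux binds DNA. Once detoxification saturates, the DNA-bound share
    rises much faster than the dose itself."""
    detox = vmax * dose / (km + dose)   # saturable detoxification flux
    bound = k_bind * dose               # first-order DNA-binding flux
    return bound / (bound + detox)      # share of flux forming adducts
```

This nonlinearity is why in vitro genotoxicity at high concentrations need not translate into adduct formation at dietary exposure levels.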

  17. Families from supergroups and predictions for leptonic CP violation

    NASA Astrophysics Data System (ADS)

    Barr, S. M.; Chen, Heng-Yu

    2017-10-01

    As was shown in 1984 by Caneschi, Farrar, and Schwimmer, decomposing representations of the supergroup SU(M|N) can give interesting anomaly-free sets of fermion representations of SU(M) × SU(N) × U(1). It is shown here that such groups can be used to construct realistic grand unified models with non-abelian gauged family symmetries. A particularly simple three-family example based on SU(5) × SU(2) × U(1) is studied. The forms of the mass matrices, including that of the right-handed neutrinos, are determined in terms of SU(2) Clebsch coefficients, and the model is able to fit the lepton sector and predict the Dirac CP-violating phase of the neutrinos. Models of this type would have a rich phenomenology if part of the family symmetry is broken near the electroweak scale.

  18. The ten thousand Kims

    NASA Astrophysics Data System (ADS)

    Baek, Seung Ki; Minnhagen, Petter; Kim, Beom Jun

    2011-07-01

    In Korean culture, the names of family members are recorded in special family books. This makes it possible to follow the distribution of Korean family names far back in history. It is shown here that these name distributions are well described by a simple null model, the random group formation (RGF) model. This model makes it possible to predict how the name distributions change and these predictions are shown to be borne out. In particular, the RGF model predicts that for married women entering a collection of family books in a certain year, the occurrence of the most common family name 'Kim' should be directly proportional to the total number of married women with the same proportionality constant for all the years. This prediction is also borne out to a high degree. We speculate that it reflects some inherent social stability in the Korean culture. In addition, we obtain an estimate of the total population of the Korean culture down to the year 500 AD, based on the RGF model, and find about ten thousand Kims.

  19. Modeling human perception and estimation of kinematic responses during aircraft landing

    NASA Technical Reports Server (NTRS)

    Schmidt, David K.; Silk, Anthony B.

    1988-01-01

    The thrust of this research is to determine estimation accuracy of aircraft responses based on observed cues. By developing the geometric relationships between the outside visual scene and the kinematics during landing, visual and kinesthetic cues available to the pilot were modeled. Both foveal and peripheral vision were examined. The objective was first to determine estimation accuracy in a variety of flight conditions, and second to ascertain which parameters are most important and lead to the best achievable accuracy in estimating the actual vehicle response. It was found that altitude estimation was very sensitive to the field of view (FOV). For this model the motion cue of perceived vertical acceleration was shown to be less important than the visual cues. The inclusion of runway geometry in the visual scene increased estimation accuracy in most cases. Finally, it was shown that for this model, if the pilot has an incorrect internal model of the system kinematics, the choice of observations thought to be 'optimal' may in fact be suboptimal.

  20. Urn model for products’ shares in international trade

    NASA Astrophysics Data System (ADS)

    Barbier, Matthieu; Lee, D.-S.

    2017-12-01

    International trade fluxes evolve as countries revise their portfolios of trade products towards economic development. Accordingly products’ shares in international trade vary with time, reflecting the transfer of capital between distinct industrial sectors. Here we analyze the share of hundreds of product categories in world trade for four decades and find a scaling law obeyed by the annual variation of product share, which informs us of how capital flows and interacts over the product space. A model of stochastic transfer of capital between products based on the observed scaling relation is proposed and shown to reproduce exactly the empirical share distribution. The model allows analytic solutions as well as numerical simulations, which predict a pseudo-condensation of capital onto few product categories and when it will occur. At the individual level, our model finds certain products unpredictable, the excess or deficient growth of which with respect to the model prediction is shown to be correlated with the nature of goods.
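    A minimal sketch of such a stochastic capital-transfer process (a generic Moran/urn-type kernel in which both donor and recipient are drawn proportionally to current share, not the paper's empirically calibrated transfer rule) could look like:

```python
import random

def moran_transfer(shares, steps=1000, rng=None):
    # shares: unit counts of capital per product category; total is conserved.
    # Each step moves one unit from a donor to a recipient, both drawn with
    # probability proportional to current share (rich-get-richer noise).
    rng = rng or random.Random(0)
    shares = list(shares)
    total = sum(shares)
    for _ in range(steps):
        donor = rng.choices(range(len(shares)), weights=shares)[0]
        recipient = rng.choices(range(len(shares)), weights=shares)[0]
        if shares[donor] > 0:
            shares[donor] -= 1
            shares[recipient] += 1
    assert sum(shares) == total  # capital only moves, it is never created
    return shares
```

    Run long enough, multiplicative transfer kernels of this type concentrate capital onto a few categories, the pseudo-condensation behaviour the abstract describes.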

  1. Multi-fluid CFD analysis in Process Engineering

    NASA Astrophysics Data System (ADS)

    Hjertager, B. H.

    2017-12-01

    An overview of modelling and simulation of flow processes in gas/particle and gas/liquid systems is presented. Particular emphasis is given to computational fluid dynamics (CFD) models that use multi-dimensional multi-fluid techniques. Turbulence modelling strategies for gas/particle flows based on the kinetic theory for granular flows are given. Sub-models for the interfacial transfer processes and chemical kinetics modelling are presented. Examples are shown for some gas/particle systems, including flow and chemical reaction in risers, as well as gas/liquid systems, including bubble columns and stirred tanks.

  2. Simulation for transthoracic echocardiography of aortic valve

    PubMed Central

    Nanda, Navin C.; Kapur, K. K.; Kapoor, Poonam Malhotra

    2016-01-01

    Simulation allows interactive transthoracic echocardiography (TTE) learning using a virtual three-dimensional model of the heart and may aid in the acquisition of the cognitive and technical skills needed to perform TTE. The ability to link probe manipulation, cardiac anatomy, and echocardiographic images using a simulator has been shown to be an effective model for training anesthesiology residents in transesophageal echocardiography. A proposed alternative to real-life, patient-based learning is simulation-based training that allows anesthesiologists to learn complex concepts and procedures, especially for specific structures such as the aortic valve. PMID:27397455

  3. Spectral analysis for nonstationary and nonlinear systems: a discrete-time-model-based approach.

    PubMed

    He, Fei; Billings, Stephen A; Wei, Hua-Liang; Sarrigiannis, Ptolemaios G; Zhao, Yifan

    2013-08-01

    A new frequency-domain analysis framework for nonlinear time-varying systems is introduced based on parametric time-varying nonlinear autoregressive with exogenous input models. It is shown how the time-varying effects can be mapped to the generalized frequency response functions (FRFs) to track nonlinear features in frequency, such as intermodulation and energy transfer effects. A new mapping to the nonlinear output FRF is also introduced. A simulated example and the application to intracranial electroencephalogram data are used to illustrate the theoretical results.

  4. Statistical description of turbulent transport for flux driven toroidal plasmas

    NASA Astrophysics Data System (ADS)

    Anderson, J.; Imadera, K.; Kishimoto, Y.; Li, J. Q.; Nordman, H.

    2017-06-01

    A novel methodology to analyze non-Gaussian probability distribution functions (PDFs) of intermittent turbulent transport in global full-f gyrokinetic simulations is presented. In this work, the auto-regressive integrated moving average (ARIMA) model is applied to time series data of intermittent turbulent heat transport to separate noise and oscillatory trends, allowing for the extraction of non-Gaussian features of the PDFs. It was shown that the non-Gaussian tails of the PDFs from first-principles-based gyrokinetic simulations agree with an analytical estimation based on a two-fluid model.
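    The separation step can be illustrated with a drastically simplified stand-in for ARIMA: fit an autoregressive trend by least squares, keep the residuals, and quantify non-Gaussian tails via excess kurtosis (the AR(1) order and all names here are illustrative, not the paper's actual model):

```python
def ar1_residuals(x):
    # Least-squares fit of x[t] = c + phi * x[t-1]; the residuals play the role
    # of the fluctuations left after ARIMA-style trend/autocorrelation removal.
    n = len(x) - 1
    xs, ys = x[:-1], x[1:]
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    var = sum((a - mx) ** 2 for a in xs)
    phi = cov / var
    c = my - phi * mx
    return [b - (c + phi * a) for a, b in zip(xs, ys)]

def excess_kurtosis(r):
    # Heavy (non-Gaussian) tails show up as positive excess kurtosis.
    n = len(r)
    m = sum(r) / n
    m2 = sum((v - m) ** 2 for v in r) / n
    m4 = sum((v - m) ** 4 for v in r) / n
    return m4 / (m2 ** 2) - 3.0
```

    On a series that exactly follows the AR(1) recurrence the residuals vanish; on real transport data the residual distribution is where the intermittent, non-Gaussian tails would appear.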

  5. An adaptive finite element method for the inequality-constrained Reynolds equation

    NASA Astrophysics Data System (ADS)

    Gustafsson, Tom; Rajagopal, Kumbakonam R.; Stenberg, Rolf; Videman, Juha

    2018-07-01

    We present a stabilized finite element method for the numerical solution of cavitation in lubrication, modeled as an inequality-constrained Reynolds equation. The cavitation model is written as a variable coefficient saddle-point problem and approximated by a residual-based stabilized method. Based on our recent results on the classical obstacle problem, we present optimal a priori estimates and derive novel a posteriori error estimators. The method is implemented as a Nitsche-type finite element technique and shown in numerical computations to be superior to the usually applied penalty methods.

  6. A flow-control mechanism for distributed systems

    NASA Technical Reports Server (NTRS)

    Maitan, J.

    1991-01-01

    A new approach to the rate-based flow control in store-and-forward networks is evaluated. Existing methods display oscillations in the presence of transport delays. The proposed scheme is based on the explicit use of an embedded dynamic model of a store-and-forward buffer in a controller's feedback loop. It is shown that the use of the model eliminates the oscillations caused by the transport delays. The paper presents simulation examples and assesses the applicability of the scheme in the new generation of high-speed photonic networks where transport delays must be considered.
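    The idea of embedding a buffer model in the controller's feedback loop can be sketched as a Smith-predictor-style discrete simulation (a scalar fluid-buffer model with a delayed measurement path; all parameters are illustrative):

```python
from collections import deque

def simulate(steps=50, d=4, mu=2.0, target=20.0, gain=0.4):
    # Fluid model of a store-and-forward buffer: b[k+1] = b[k] + u[k] - mu,
    # where mu is the drain rate. The controller only receives the d-step-old
    # measurement b[k-d]; an embedded copy of the buffer model rolls that old
    # measurement forward through the inputs issued in the meantime, so the
    # feedback acts on a delay-free estimate instead of oscillating.
    b = 0.0
    meas = deque([0.0] * d, maxlen=d)   # measurements in transit: b[k-d..k-1]
    pipe = deque([mu] * d, maxlen=d)    # inputs issued since then: u[k-d..k-1]
    history = []
    for _ in range(steps):
        est = meas[0] + sum(pipe) - d * mu   # model rollout of the old state
        history.append((b, est))
        u = max(0.0, mu + gain * (target - est))
        meas.append(b)                        # b[k] reaches controller at k+d
        b = b + u - mu                        # plant update (no empty clamp)
        pipe.append(u)
    return history
```

    Because the embedded model is exact here, the estimate tracks the true occupancy perfectly and the loop settles at the target without delay-induced oscillation; model mismatch would reintroduce a bounded error.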

  7. Expanding the Capabilities of the JPL Electronic Nose for an International Space Station Technology Demonstration

    NASA Technical Reports Server (NTRS)

    Ryan, Margaret A.; Shevade, A. V.; Taylor, C. J.; Homer, M. L.; Jewell, A. D.; Kisor, A.; Manatt, K. S.; Yen, S. P. S.; Blanco, M.; Goddard, W. A., III

    2006-01-01

    An array-based sensing system based on polymer/carbon composite conductometric sensors is under development at JPL for use as an environmental monitor in the International Space Station. Sulfur dioxide has been added to the analyte set for this phase of development. Using molecular modeling techniques, the interaction energy between SO2 and polymer functional groups has been calculated, and polymers selected as potential SO2 sensors. Experiment has validated the model and two selected polymers have been shown to be promising materials for SO2 detection.

  8. Nuclear fragmentation energy and momentum transfer distributions in relativistic heavy-ion collisions

    NASA Technical Reports Server (NTRS)

    Khandelwal, Govind S.; Khan, Ferdous

    1989-01-01

    An optical model description of energy and momentum transfer in relativistic heavy-ion collisions, based upon composite particle multiple scattering theory, is presented. Transverse and longitudinal momentum transfers to the projectile are shown to arise from the real and absorptive part of the optical potential, respectively. Comparisons of fragment momentum distribution observables with experiments are made and trends outlined based on our knowledge of the underlying nucleon-nucleon interaction. Corrections to the above calculations are discussed. Finally, use of the model as a tool for estimating collision impact parameters is indicated.

  9. Climatology of Ultra Violet (UV) irradiance as measured through the Belgian ground-based monitoring network during the time period of 1995-2014

    NASA Astrophysics Data System (ADS)

    Pandey, Praveen; Gillotay, Didier; Depiesse, Cedric

    2016-04-01

    In this study we describe the network of ground-based ultraviolet (UV) radiation monitoring stations in Belgium. The evolution of the entire network, together with the details of the measuring instruments, is given. The observed cumulative irradiances (UVB, UVA, and total solar irradiance, TSI) over the course of measurement are shown for three stations: a northern one (Ostende), a central one (Uccle), and a southern one (Redu). The longest series of measurement shown in this study is at Uccle, Brussels, from 1995 to 2014. Thus, the variation of the UV index (UVI), together with the variation of irradiances during summer and winter months at Uccle, is shown as a part of this climatological study. The trend of UVB irradiance over the above-mentioned three stations is shown. This UVB trend is studied in conjunction with the long-term satellite-based total column ozone value over Belgium, which shows two distinct trends marked by a change point. The total column ozone trend following the change point is positive. It is also seen that the UVB trend is positive for the urban/sub-urban sites, Uccle and Redu, whereas the UVB trend at Ostende, which is a coastal site, is not positive. A possible explanation of this relation between total column ozone and UVB trend could be associated with aerosols, which is shown in this paper by means of a radiative transfer model-based study, as a part of a preliminary investigation. It is seen that the UVI is influenced by the type of aerosols.

  10. Accounting for large deformations in real-time simulations of soft tissues based on reduced-order models.

    PubMed

    Niroomandi, S; Alfaro, I; Cueto, E; Chinesta, F

    2012-01-01

    Model reduction techniques have been shown to constitute a valuable tool for real-time simulation in surgical environments and other fields. However, some limitations imposed by real-time constraints have not yet been overcome. One such limitation is the severe time constraint (a resolution frequency of 500 Hz is required) that precludes the use of Newton-like schemes for solving non-linear models such as the ones usually employed for modeling biological tissues. In this work we present a technique able to deal with geometrically non-linear models, based on the use of model reduction techniques together with an efficient non-linear solver. Examples of the performance of the technique are given. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  11. A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Rinehart, Aidan W.

    2014-01-01

    This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model-predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information based on steady-state information extracted from available nominal engine measurement data is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.

  12. A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Rinehart, Aidan Walker

    2015-01-01

    This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model-predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information based on steady-state information extracted from available nominal engine measurement data is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.
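    The residual-monitoring core of such an architecture can be sketched as follows (a generic moving-average residual check with an illustrative threshold; the piecewise-linear engine model itself is beyond this sketch, so model predictions are taken as given):

```python
def detect_anomalies(measured, predicted, threshold=3.0, window=5):
    # Monitor residuals r = measured - model prediction; flag a sample when
    # the moving average of the residual magnitude exceeds the threshold,
    # which smooths out isolated sensor noise spikes.
    resid = [m - p for m, p in zip(measured, predicted)]
    flags = []
    for i in range(len(resid)):
        lo = max(0, i - window + 1)
        avg = sum(abs(r) for r in resid[lo:i + 1]) / (i + 1 - lo)
        flags.append(avg > threshold)
    return flags
```

    A persistent bias between sensed and predicted outputs, such as a seeded fault, raises the averaged residual and trips the flag, while fault-free data stays below the threshold.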

  13. Multiscaling Edge Effects in an Agent-based Money Emergence Model

    NASA Astrophysics Data System (ADS)

    Oświęcimka, P.; Drożdż, S.; Gębarowski, R.; Górski, A. Z.; Kwapień, J.

    An agent-based computational economic toy model for the emergence of money from initial barter trading, inspired by Menger's postulate that money can spontaneously emerge in a commodity exchange economy, is extensively studied. The model considered, while manageable, is nevertheless significantly complex. It is already able to reveal phenomena that can be interpreted as the emergence and collapse of money, as well as the related competition effects. In particular, it is shown that, as an extra emerging effect, the money lifetimes near the critical threshold value develop multiscaling, which allows one to draw parallels to critical phenomena and, thus, to real financial markets.

  14. Atmospheric Transmission Modeling: Proposed Aerosol Methodology with Application to the Grafenwoehr Atmospheric Optics Data Base

    DTIC Science & Technology

    1976-12-01

    ATMOSPHERIC OPTICS DATA BASE. Robert E. Roberts. December 1976. IDA, Institute for Defense Analyses, Science and Technology Division, 400 Army-Navy… …distributions, including three haze models and three cloud models, are shown in Fig. 2 for 0.7 μm and 10 μm radiation. As one proceeds from the various… [Figure A-1 residue: transmission in the 3.4-4.1 μm and 0.8-1.1 μm bands; transmission axis 0.1-1.0 (8.1-12.0 microns).]

  15. Enthalpy-Based Thermal Evolution of Loops: III. Comparison of Zero-Dimensional Models

    NASA Technical Reports Server (NTRS)

    Cargill, P. J.; Bradshaw, Stephen J.; Klimchuk, James A.

    2012-01-01

    Zero-dimensional (0D) hydrodynamic models provide a simple and quick way to study the thermal evolution of coronal loops subjected to time-dependent heating. This paper presents a comparison of a number of 0D models that have been published in the past and is intended to provide a guide for those interested in either using the old models or developing new ones. The principal difference between the models is the way the exchange of mass and energy between the corona, transition region, and chromosphere is treated, as plasma cycles into and out of a loop during a heating-cooling cycle. It is shown that models based on the principles of mass and energy conservation can give satisfactory results at some, or, in the case of the Enthalpy-Based Thermal Evolution of Loops (EBTEL) model, all stages of the loop evolution. Empirical models can lead to low coronal densities, spurious delays between the peak density and temperature, and, for short heating pulses, overly short loop lifetimes.

  16. Novel schemes for measurement-based quantum computation.

    PubMed

    Gross, D; Eisert, J

    2007-06-01

    We establish a framework which allows one to construct novel schemes for measurement-based quantum computation. The technique develops tools from many-body physics, based on finitely correlated or projected entangled pair states, to go beyond the cluster-state-based one-way computer. We identify resource states radically different from the cluster state, in that they exhibit nonvanishing correlations, can be prepared using nonmaximally entangling gates, or have very different local entanglement properties. In the computational models, randomness is compensated in a different manner. It is shown that there exist resource states which are locally arbitrarily close to a pure state. We comment on the possibility of tailoring computational models to specific physical systems.

  17. Data-driven non-Markovian closure models

    NASA Astrophysics Data System (ADS)

    Kondrashov, Dmitri; Chekroun, Mickaël D.; Ghil, Michael

    2015-03-01

    This paper has two interrelated foci: (i) obtaining stable and efficient data-driven closure models by using a multivariate time series of partial observations from a large-dimensional system; and (ii) comparing these closure models with the optimal closures predicted by the Mori-Zwanzig (MZ) formalism of statistical physics. Multilayer stochastic models (MSMs) are introduced as both a generalization and a time-continuous limit of existing multilevel, regression-based approaches to closure in a data-driven setting; these approaches include empirical model reduction (EMR), as well as more recent multi-layer modeling. It is shown that the multilayer structure of MSMs can provide a natural Markov approximation to the generalized Langevin equation (GLE) of the MZ formalism. A simple correlation-based stopping criterion for an EMR-MSM model is derived to assess how well it approximates the GLE solution. Sufficient conditions are derived on the structure of the nonlinear cross-interactions between the constitutive layers of a given MSM to guarantee the existence of a global random attractor. This existence ensures that no blow-up can occur for a broad class of MSM applications, a class that includes non-polynomial predictors and nonlinearities that do not necessarily preserve quadratic energy invariants. The EMR-MSM methodology is first applied to a conceptual, nonlinear, stochastic climate model of coupled slow and fast variables, in which only slow variables are observed. It is shown that the resulting closure model with energy-conserving nonlinearities efficiently captures the main statistical features of the slow variables, even when there is no formal scale separation and the fast variables are quite energetic. Second, an MSM is shown to successfully reproduce the statistics of a partially observed, generalized Lotka-Volterra model of population dynamics in its chaotic regime. 
The challenges here include the rarity of strange attractors in the model's parameter space and the existence of multiple attractor basins with fractal boundaries. The positivity constraint on the solutions' components replaces here the quadratic-energy-preserving constraint of fluid-flow problems and it successfully prevents blow-up.
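    The multilevel regression idea behind EMR/MSM closures can be sketched in one dimension: a main (level-0) regression of the increments on the state, plus a level-1 model of the level-0 residuals in terms of their own past. This is a two-layer caricature with synthetic data, not the authors' full methodology:

```python
import random

def fit_linear(xs, ys):
    # ordinary least squares for y = a + b*x
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def emr_two_level(series):
    # Level 0 ("main" model): regress increments dx[t] on the state x[t].
    # Level 1 (hidden layer): regress the level-0 residuals on their own past,
    # mimicking the multilevel regression structure of EMR/MSM closures.
    x = series[:-1]
    dx = [b - a for a, b in zip(series[:-1], series[1:])]
    a0, b0 = fit_linear(x, dx)
    r = [d - (a0 + b0 * xv) for xv, d in zip(x, dx)]
    a1, b1 = fit_linear(r[:-1], r[1:])
    return (a0, b0), (a1, b1)

def demo_series(n=500, phi=0.8, noise=0.1, seed=1):
    # synthetic AR(1) sample path for trying the fit: x[t+1] = phi*x[t] + eta
    rng = random.Random(seed)
    x = [0.0]
    for _ in range(n):
        x.append(phi * x[-1] + rng.gauss(0.0, noise))
    return x
```

    In a full MSM the level-1 (and deeper) layers are driven by their own noise and fed back into level 0, which is what produces the Markov approximation to the generalized Langevin equation discussed above.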

  18. Biology, Philosophy, and Scientific Method.

    ERIC Educational Resources Information Center

    Hill, L.

    1985-01-01

    The limits of falsification are discussed and the historically based models of science described by Lakatos and Kuhn are shown to offer greater insights into the practice of science. The theory of natural selection is used to relate biology to philosophy and scientific method. (Author/JN)

  19. Collective plasma effects associated with the continuous injection model of solar flare particle streams

    NASA Technical Reports Server (NTRS)

    Vlahos, L.; Papadopoulos, K.

    1979-01-01

    A modified continuous injection model for impulsive solar flares that includes self-consistent plasma nonlinearities based on the concept of marginal stability is presented. A quasi-stationary state is established, composed of a hot truncated electron Maxwellian distribution confined by acoustic turbulence on the top of the loop and energetic electron beams precipitating in the chromosphere. It is shown that the radiation properties of the model are in accordance with observations.

  20. Exact solutions and low-frequency instability of the adiabatic auroral arc model

    NASA Technical Reports Server (NTRS)

    Cornwall, John M.

    1988-01-01

    The adiabatic auroral arc model couples a kinetic theory parallel current driven by mirror forces to horizontal ionospheric currents; the resulting equations are nonlinear. Some exact stationary solutions to these equations, some of them based on the Liouville equation, are developed, with both latitudinal and longitudinal spatial variations. These Liouville equation exact solutions are related to stability boundaries of low-frequency instabilities such as Kelvin-Helmholtz, as shown by a study of a simplified model.

  1. Discrete analysis of spatial-sensitivity models

    NASA Technical Reports Server (NTRS)

    Nielsen, Kenneth R. K.; Wandell, Brian A.

    1988-01-01

    Procedures for reducing the computational burden of current models of spatial vision are described, the simplifications being consistent with the prediction of the complete model. A method for using pattern-sensitivity measurements to estimate the initial linear transformation is also proposed which is based on the assumption that detection performance is monotonic with the vector length of the sensor responses. It is shown how contrast-threshold data can be used to estimate the linear transformation needed to characterize threshold performance.

  2. Non-Invasive Cell-Based Therapy for Traumatic Optic Neuropathy

    DTIC Science & Technology

    2013-10-01

    Morgans, Sergey Girman, Raymond Lund and Shaomei Wang. Retinal Morphological and Functional Changes in an Animal Model of Retinitis Pigmentosa. Vis… model was created. 2. Rat MSC and M-Sch were reliably produced for experiments. 3. Systemic administration of MSC significantly preserved retinal … TON also promote retinal ganglion cell survival. From the first-year study, we have shown that systemic administration of MSC can significantly

  3. Brief Report: Effects of Video-Based Group Instruction on Spontaneous Social Interaction of Adolescents with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Plavnick, Joshua B.; Dueñas, Ana D.

    2018-01-01

    Four adolescents with autism spectrum disorder (ASD) were taught to interact with peers by asking social questions or commenting about others during game play or group activities. Participants were shown a video model and then given an opportunity to perform the social behavior depicted in the model when playing a game with one another. All…

  4. Improved model predictive control of resistive wall modes by error field estimator in EXTRAP T2R

    NASA Astrophysics Data System (ADS)

    Setiadi, A. C.; Brunsell, P. R.; Frassinetti, L.

    2016-12-01

    Many implementations of a model-based approach for toroidal plasma have shown better control performance compared to the conventional type of feedback controller. One prerequisite of model-based control is the availability of a control oriented model. This model can be obtained empirically through a systematic procedure called system identification. Such a model is used in this work to design a model predictive controller to stabilize multiple resistive wall modes in EXTRAP T2R reversed-field pinch. Model predictive control is an advanced control method that can optimize the future behaviour of a system. Furthermore, this paper will discuss an additional use of the empirical model which is to estimate the error field in EXTRAP T2R. Two potential methods are discussed that can estimate the error field. The error field estimator is then combined with the model predictive control and yields better radial magnetic field suppression.
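    The receding-horizon principle can be sketched on a scalar unstable mode (a generic stand-in for an unstable mode such as a resistive wall mode; the growth rate, cost weights, and brute-force input search are all illustrative, and no plasma model is implied):

```python
def mpc_step(x, a=1.2, b=1.0, horizon=5, candidates=None):
    # One receding-horizon step for an unstable scalar system x' = a*x + b*u
    # (|a| > 1): pick the first input of the constant-input sequence that
    # minimizes a quadratic state+input cost over the horizon. A brute-force
    # search over a fixed input grid keeps the sketch dependency-free.
    candidates = candidates or [i * 0.1 for i in range(-50, 51)]
    best_u, best_cost = 0.0, float("inf")
    for u in candidates:
        xi, cost = x, 0.0
        for _ in range(horizon):
            xi = a * xi + b * u
            cost += xi * xi + 0.01 * u * u
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

def run_closed_loop(x0=1.0, steps=20):
    # apply only the first optimized input, then re-optimize: receding horizon
    x = x0
    for _ in range(steps):
        x = 1.2 * x + 1.0 * mpc_step(x)
    return x
```

    Left alone, this mode grows as 1.2^k; under the receding-horizon loop it is driven toward zero, up to the quantization of the input grid.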

  5. Accuracy estimates for some global analytical models of the Earth's main magnetic field on the basis of data on gradient magnetic surveys at stratospheric balloons

    NASA Astrophysics Data System (ADS)

    Tsvetkov, Yu. P.; Brekhov, O. M.; Bondar, T. N.; Filippov, S. V.; Petrov, V. G.; Tsvetkova, N. M.; Frunze, A. Kh.

    2014-03-01

    Two global analytical models of the main magnetic field of the Earth (MFE) have been used to determine their potential in deriving an anomalous MFE from balloon magnetic surveys conducted at altitudes of ~30 km. The daily mean spherical harmonic model (DMSHM), constructed from satellite data on the day of the balloon magnetic surveys, was analyzed. This model for the day of magnetic surveys was shown to be almost free of errors associated with secular variations and can be recommended for deriving an anomalous MFE. The error of the enhanced magnetic model (EMM) was estimated depending on the number of harmonics used in the model. The model limited to the first 13 harmonics was shown to lead to errors in the main MFE of around 15 nT. The EMM developed to n = m = 720 and constructed on the basis of satellite and ground-based magnetic data fails to adequately simulate the anomalous MFE at altitudes of 30 km. To construct a representative model developed to n = m = 720, ground-based magnetic data should be replaced by data of balloon magnetic surveys at altitudes of ~30 km. The results of the investigations were confirmed by a balloon experiment conducted by the Pushkov Institute of Terrestrial Magnetism, Ionosphere, and Radio Wave Propagation of the Russian Academy of Sciences and the Moscow Aviation Institute.

  6. Multialternative drift-diffusion model predicts the relationship between visual fixations and choice in value-based decisions.

    PubMed

    Krajbich, Ian; Rangel, Antonio

    2011-08-16

    How do we make decisions when confronted with several alternatives (e.g., on a supermarket shelf)? Previous work has shown that accumulator models, such as the drift-diffusion model, can provide accurate descriptions of the psychometric data for binary value-based choices, and that the choice process is guided by visual attention. However, the computational processes used to make choices in more complicated situations involving three or more options are unknown. We propose a model of trinary value-based choice that generalizes what is known about binary choice, and test it using an eye-tracking experiment. We find that the model provides a quantitatively accurate description of the relationship between choice, reaction time, and visual fixation data using the same parameters that were estimated in previous work on binary choice. Our findings suggest that the brain uses similar computational processes to make binary and trinary choices.
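    The trinary accumulation process can be sketched as follows (an aDDM-style simulation with illustrative parameter values and a simplified random-gaze policy, not the authors' fitted model or their empirical fixation data):

```python
import random

def trinary_addm(values, d=0.002, theta=0.3, sigma=0.02, barrier=1.0,
                 dwell=300, max_ms=200000, rng=None):
    # Three evidence accumulators; the fixated item integrates its value at
    # full weight, unfixated items are discounted by theta (attention bias).
    # The first item to lead BOTH rivals by `barrier` wins; time is in ms.
    rng = rng or random.Random(0)
    e = [0.0, 0.0, 0.0]
    fixated = rng.randrange(3)
    for t in range(1, max_ms + 1):
        if t % dwell == 0:                     # simplistic gaze switching
            fixated = rng.randrange(3)
        for i, v in enumerate(values):
            w = 1.0 if i == fixated else theta
            e[i] += d * w * v + rng.gauss(0.0, sigma)
        for i in range(3):
            if all(e[i] - e[j] >= barrier for j in range(3) if j != i):
                return i, t                    # (choice, reaction time)
    return max(range(3), key=lambda i: e[i]), max_ms
```

    Replacing the random-gaze policy with empirical fixation sequences is what lets a model of this family link choice, reaction time, and visual fixations quantitatively, as in the study above.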

  7. Using fuzzy rule-based knowledge model for optimum plating conditions search

    NASA Astrophysics Data System (ADS)

    Solovjev, D. S.; Solovjeva, I. A.; Litovka, Yu V.; Arzamastsev, A. A.; Glazkov, V. P.; L’vov, A. A.

    2018-03-01

    The paper discusses existing approaches to plating process modeling aimed at decreasing the non-uniformity of the coating thickness distribution. However, these approaches do not take into account the experience, knowledge, and intuition of the decision-makers when searching for the optimal conditions of the electroplating technological process. An original approach to the search for optimal conditions for applying electroplated coatings, which uses a rule-based model of knowledge and allows one to reduce the unevenness of the product thickness distribution, is proposed. The block diagrams of a conventional control system of a galvanic process, as well as of a system based on the production model of knowledge, are considered. It is shown that the fuzzy production model of knowledge in the control system makes it possible to obtain galvanic coatings of a given thickness unevenness with a high degree of adequacy to the experimental data. The described experimental results confirm the theoretical conclusions.
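    A minimal zero-order Sugeno-style inference step illustrates the rule-based idea; the membership functions, rule base, and current-correction consequents below are invented for illustration and are not taken from the paper:

```python
def tri(x, a, b, c):
    # triangular membership function: 0 outside (a, c), peaking at 1 at x = b
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def plating_current_correction(unevenness):
    # Zero-order Sugeno inference: rule weights come from fuzzy membership of
    # the measured thickness unevenness (%), consequents are crisp current
    # corrections (A/dm^2). Rules encode operator heuristics like
    # "high unevenness -> reduce current strongly".
    rules = [
        (tri(unevenness, -1.0, 0.0, 5.0), 0.0),    # low -> keep current
        (tri(unevenness, 2.0, 8.0, 14.0), -0.5),   # medium -> reduce slightly
        (tri(unevenness, 10.0, 20.0, 30.0), -1.5), # high -> reduce strongly
    ]
    num = sum(w * u for w, u in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

    The weighted average smoothly interpolates between rules where memberships overlap, which is how fuzzy systems turn a handful of expert heuristics into a continuous control surface.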

  8. A determinant-based criterion for working correlation structure selection in generalized estimating equations.

    PubMed

    Jaman, Ajmery; Latif, Mahbub A H M; Bari, Wasimul; Wahed, Abdus S

    2016-05-20

    In generalized estimating equations (GEE), the correlation between the repeated observations on a subject is specified with a working correlation matrix. Correct specification of the working correlation structure ensures efficient estimators of the regression coefficients. Among the criteria used in practice for selecting a working correlation structure, the Rotnitzky-Jewell criterion, the Quasi Information Criterion (QIC), and the Correlation Information Criterion (CIC) are based on the fact that if the assumed working correlation structure is correct, then the model-based (naive) and the sandwich (robust) covariance estimators of the regression coefficient estimators should be close to each other. The sandwich covariance estimator, used in defining the Rotnitzky-Jewell, QIC, and CIC criteria, is biased downward and has a larger variability than the corresponding model-based covariance estimator. Motivated by this fact, a new criterion based on the bias-corrected sandwich covariance estimator is proposed in this paper for selecting an appropriate working correlation structure in GEE. A comparison of the proposed and the competing criteria is shown using simulation studies with correlated binary responses. The results revealed that the proposed criterion generally performs better than the competing criteria. An example of selecting the appropriate working correlation structure has also been shown using the data from the Madras Schizophrenia Study. Copyright © 2015 John Wiley & Sons, Ltd.
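    The naive-versus-sandwich comparison underlying these criteria can be illustrated in the simplest possible setting, the overall mean of clustered data under a working independence assumption. This is a scalar caricature of the CIC idea with an ad hoc m/(m-1) small-sample correction, not the paper's actual bias-corrected estimator:

```python
def working_independence_criterion(clusters):
    # Compare the model-based (naive) variance of the overall mean, which
    # assumes independent observations, with the cluster-robust sandwich
    # variance. A ratio near 1 supports the working independence structure;
    # within-cluster correlation pushes the ratio above 1. The factor
    # m/(m-1) is a simple small-sample bias correction for the sandwich.
    obs = [y for c in clusters for y in c]
    n, m = len(obs), len(clusters)
    mu = sum(obs) / n
    s2 = sum((y - mu) ** 2 for y in obs) / (n - 1)
    naive = s2 / n                               # model-based variance
    cluster_sums = [sum(y - mu for y in c) for c in clusters]
    sandwich = sum(s ** 2 for s in cluster_sums) / n ** 2
    corrected = sandwich * m / (m - 1)
    return corrected / naive                     # CIC-like ratio for p = 1
```

    With perfectly correlated observations inside each cluster the ratio approaches 2 for two-observation clusters, flagging the independence structure as misspecified.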

  9. Particle tracking acceleration via signed distance fields in direct-accelerated geometry Monte Carlo

    DOE PAGES

    Shriwise, Patrick C.; Davis, Andrew; Jacobson, Lucas J.; ...

    2017-08-26

    Computer-aided design (CAD)-based Monte Carlo radiation transport is of value to the nuclear engineering community for its ability to conduct transport on high-fidelity models of nuclear systems, but it is more computationally expensive than native geometry representations. This work describes the adaptation of a rendering data structure, the signed distance field, as a geometric query tool for accelerating CAD-based transport in the direct-accelerated geometry Monte Carlo toolkit. Demonstrations of its effectiveness are shown for several problems. The beginnings of a predictive model for the data structure's utilization based on various problem parameters are also introduced.
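    The acceleration idea can be illustrated with a generic sphere-tracing sketch (this is not the DAGMC code): a signed distance field guarantees that no surface lies closer than the queried distance, so a particle can safely advance by that amount without any surface-intersection tests. The geometry and step logic below are purely illustrative:

```python
import numpy as np

def sdf_sphere(q, center, radius):
    """Signed distance from point q to a sphere surface (negative inside)."""
    return np.linalg.norm(q - center) - radius

def safe_advance(p, direction, sdf, max_steps=64, eps=1e-6):
    """March a particle along `direction`, stepping by the queried signed
    distance each time: the SDF guarantees no surface is closer than that,
    so no intersection tests are needed until a surface is reached."""
    direction = direction / np.linalg.norm(direction)
    traveled = 0.0
    for _ in range(max_steps):
        d = sdf(p)
        if d < eps:                     # on (or inside) a surface: stop
            break
        p = p + d * direction
        traveled += d
    return p, traveled

# A single sphere 5 units down the z-axis; the particle starts at the origin
# and stops on the near surface after travelling 4 units.
center, radius = np.array([0.0, 0.0, 5.0]), 1.0
hit, dist = safe_advance(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                         lambda q: sdf_sphere(q, center, radius))
```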

  10. A control method for bilateral teleoperating systems

    NASA Astrophysics Data System (ADS)

    Strassberg, Yesayahu

    1992-01-01

    The thesis focuses on the control of bilateral master-slave teleoperators. The bilateral control issue of teleoperators is studied, and a new scheme that overcomes basic unsolved problems is proposed. A performance measure, based on the multiport modeling method, is introduced in order to evaluate and understand the limitations of earlier published bilateral control laws. Based on the study evaluating the different methods, the objective of the thesis is stated. The proposed control law is then introduced, its ideal performance is demonstrated, and conditions for stability and robustness are derived. It is shown that stability, desired performance, and robustness can be obtained under the assumption that the deviation of the model from the actual system satisfies certain norm inequalities and the measurement uncertainties are bounded. The proposed scheme is validated by numerical simulation. The simulated system is based on the configuration of the RAL (Robotics and Automation Laboratory) telerobot. From the simulation results it is shown that good tracking performance can be obtained. In order to verify the performance of the proposed scheme when applied to real hardware, an experimental setup of a three-degree-of-freedom master-slave teleoperator (i.e., a three-degree-of-freedom master and a three-degree-of-freedom slave robot) was built. Three basic experiments were conducted to verify the performance of the proposed control scheme. The first experiment verified the master control law and its contribution to the robustness and performance of the entire system. The second experiment demonstrated the actual performance of the system while performing a free-motion teleoperating task. From the experimental results, it is shown that the control law has good performance and is robust to uncertainties in the models of the master and slave.

  11. DNA-based approaches to the treatment of allergies.

    PubMed

    Spiegelberg, Hans L; Raz, Eyal

    2002-02-01

    Although excellent pharmacological treatments for allergies exist, they do not change the underlying pathogenesis of allergic diseases and do not cure the disease. Only allergen-specific immunotherapy, the injection of small but increasing amounts of allergen, has been shown to change a pre-existing allergic Th2 immune response to a non-allergic Th1 response. However, since injection of allergen is associated with the risk of allergic and sometimes even life-threatening anaphylactic reactions, immunotherapy is no longer used as extensively as in the past. In the search for a novel immunotherapy having a low risk-to-benefit ratio, immunostimulatory CpG motif DNA sequences have recently been shown to provide an excellent tool for designing safer and more efficient forms of allergen immunotherapy. These DNA-based immunotherapeutics include allergen gene vaccines, immunization with allergen-DNA conjugates and immunomodulation with immunostimulatory oligodeoxynucleotides. All three DNA-based immunotherapeutics have been shown to be very effective in animal models of allergic diseases and, at present, allergen-DNA conjugates are being tested for their safety and efficacy in allergic patients. This review describes the preclinical findings and the data of the first clinical trials in allergic patients of DNA-based immunotherapeutics for allergic disorders.

  12. Measurement and analysis of acoustic flight test data for two advanced design high speed propeller models

    NASA Technical Reports Server (NTRS)

    Brooks, B. M.; Mackall, K. G.

    1984-01-01

    The recent test program, in which the SR-2 and SR-3 Prop-Fan models were acoustically tested in flight, is described and the results of analysis of noise data acquired are discussed. The trends of noise levels with flight operating parameters are shown. The acoustic benefits of the SR-3 design with swept blades relative to the SR-2 design with straight blades are shown. Noise data measured on the surface of a small-diameter microphone boom mounted above the fuselage and on the surface of the airplane fuselage are compared to show the effects of acoustic propagation through a boundary layer. Noise level estimates made using a theoretically based prediction methodology are compared with measurements.

  13. Modeling of the Nitric Oxide Transport in the Human Lungs.

    PubMed

    Karamaoun, Cyril; Van Muylem, Alain; Haut, Benoît

    2016-01-01

    In the human lungs, nitric oxide (NO) acts as a bronchodilator, relaxing the bronchial smooth muscles, and is closely linked to the inflammatory status of the lungs, owing to its antimicrobial activity. Furthermore, the molar fraction of NO in the exhaled air has been shown to be higher for asthmatic patients than for healthy subjects. Owing to the complex structure of the lungs, multiple models have been developed to characterize the NO dynamics in them. Indeed, direct measurements in the lungs are difficult and, therefore, these models are valuable tools for interpreting experimental data. In this work, a new model of the NO transport in the human lungs is proposed. It belongs to the family of morphological models and is based on the morphometric model of Weibel (1963). When compared to previously published models, its main new features are the layered representation of the airway wall and the possibility to simulate the influence of bronchoconstriction (BC) and of the presence of mucus on the NO transport in the lungs. The model is based on a geometrical description of the lungs, at rest and during a respiratory cycle, coupled with transport equations written in the layers composing an airway wall and in the lumen of the airways. First, it is checked that the model is able to reproduce experimental information available in the literature. Second, the model is used to discuss some features of the NO transport in healthy and unhealthy lungs. The simulation results are analyzed, especially when BC has occurred in the lungs. For instance, it is shown that BC can have a significant influence on the NO transport in the tissues composing an airway wall. It is also shown that the relation between BC and the molar fraction of NO in the exhaled air is complex. Indeed, BC might lead to an increase or to a decrease of this molar fraction, depending on the extent of the BC and on the possible presence of mucus.
This should be confirmed experimentally and might provide an interesting way to characterize the extent of BC in unhealthy patients.

  14. Bifurcation phenomena in an impulsive model of non-basal testosterone regulation

    NASA Astrophysics Data System (ADS)

    Zhusubaliyev, Zhanybai T.; Churilov, Alexander N.; Medvedev, Alexander

    2012-03-01

    Complex nonlinear dynamics in a recent mathematical model of non-basal testosterone regulation are investigated. In agreement with biological evidence, the pulsatile (non-basal) secretion of testosterone is modeled by frequency- and amplitude-modulated feedback. It is shown that, in addition to the already known periodic motions with one and two pulses in the least period of a closed-loop system solution, cycles of higher periodicity and chaos are present in the model at hand. The broad range of exhibited dynamic behaviors makes the model highly promising for model-based signal processing of hormone data.

  15. A generic biokinetic model for noble gases with application to radon.

    PubMed

    Leggett, Rich; Marsh, James; Gregoratto, Demetrio; Blanchardon, Eric

    2013-06-01

    To facilitate the estimation of radiation doses from intake of radionuclides, the International Commission on Radiological Protection (ICRP) publishes dose coefficients (dose per unit intake) based on reference biokinetic and dosimetric models. The ICRP generally has not provided biokinetic models or dose coefficients for intake of noble gases, but plans to provide such information for (222)Rn and other important radioisotopes of noble gases in a forthcoming series of reports on occupational intake of radionuclides (OIR). This paper proposes a generic biokinetic model framework for noble gases and develops parameter values for radon. The framework is tailored to applications in radiation protection and is consistent with a physiologically based biokinetic modelling scheme adopted for the OIR series. Parameter values for a noble gas are based largely on a blood flow model and physical laws governing transfer of a non-reactive and soluble gas between materials. Model predictions for radon are shown to be consistent with results of controlled studies of its biokinetics in human subjects.

  16. Application of an OCT data-based mathematical model of the foveal pit in Parkinson disease.

    PubMed

    Ding, Yin; Spund, Brian; Glazman, Sofya; Shrier, Eric M; Miri, Shahnaz; Selesnick, Ivan; Bodis-Wollner, Ivan

    2014-11-01

    Spectral-domain optical coherence tomography (OCT) has shown remarkable utility in the study of retinal disease and has helped to characterize the fovea in Parkinson disease (PD) patients. We developed a detailed mathematical model based on raw OCT data to allow differentiation of the foveae of PD patients from those of healthy controls. Of the various models we tested, a difference of a Gaussian and a polynomial was found to have the best fit. The decision was based on mathematical evaluation of the fit of the model to the data of 45 control eyes versus 50 PD eyes. We compared the model parameters in the two groups using receiver-operating characteristics (ROC). A single parameter discriminated 70% of PD eyes from controls, while using seven of the eight parameters of the model allowed 76% to be discriminated. The future clinical utility of mathematical modeling in the study of diffuse neurodegenerative conditions that also affect the fovea is discussed.

  17. A method of real-time fault diagnosis for power transformers based on vibration analysis

    NASA Astrophysics Data System (ADS)

    Hong, Kaixing; Huang, Hai; Zhou, Jianping; Shen, Yimin; Li, Yujie

    2015-11-01

    In this paper, a novel probability-based classification model is proposed for real-time fault detection of power transformers. First, the transformer vibration principle is introduced, and two effective feature extraction techniques are presented. Next, the details of the classification model based on support vector machine (SVM) are shown. The model also includes a binary decision tree (BDT) which divides transformers into different classes according to health state. The trained model produces posterior probabilities of membership to each predefined class for a tested vibration sample. During the experiments, the vibrations of transformers under different conditions are acquired, and the corresponding feature vectors are used to train the SVM classifiers. The effectiveness of this model is illustrated experimentally on typical in-service transformers. The consistency between the results of the proposed model and the actual condition of the test transformers indicates that the model can be used as a reliable method for transformer fault detection.

  18. A physiologically based pharmacokinetic model for developmental exposure to BDE-47 in rats

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Emond, Claude, E-mail: claude.emond@umontreal.c; BioSimulation Consulting Inc., Newark, DE 19711; Raymer, James H.

    2010-02-01

    Polybrominated diphenyl ethers (PBDEs) are used commercially as additive flame retardants and have been shown to transfer into environmental compartments, where they have the potential to bioaccumulate in wildlife and humans. Of the 209 possible PBDEs, 2,2',4,4'-tetrabromodiphenyl ether (BDE-47) is usually the dominant congener found in human blood and milk samples. BDE-47 has been shown to have endocrine activity and produce developmental, reproductive, and neurotoxic effects. The objective of this study was to develop a physiologically based pharmacokinetic (PBPK) model for BDE-47 in male and female (pregnant and non-pregnant) adult rats to facilitate investigations of developmental exposure. This model consists of eight compartments: liver, brain, adipose tissue, kidney, placenta, fetus, blood, and the rest of the body. Concentrations of BDE-47 from the literature and from maternal-fetal pharmacokinetic studies conducted at RTI International were used to parameterize and evaluate the model. The results showed that the model simulated BDE-47 tissue concentrations in adult male, maternal, and fetal compartments within the standard deviations of the experimental data. The model's ability to estimate BDE-47 concentrations in the fetus after maternal exposure will be useful to design in utero exposure/effect studies. This PBPK model is the first one designed for any PBDE pharmaco/toxicokinetic description. The next steps will be to expand this model to simulate BDE-47 pharmacokinetics and distributions across species (mice), and then extrapolate it to humans. After mouse and human model development, additional PBDE congeners will be incorporated into the model and simulated as a mixture.

  19. A scalable variational inequality approach for flow through porous media models with pressure-dependent viscosity

    NASA Astrophysics Data System (ADS)

    Mapakshi, N. K.; Chang, J.; Nakshatrala, K. B.

    2018-04-01

    Mathematical models for flow through porous media typically enjoy the so-called maximum principles, which place bounds on the pressure field. It is highly desirable to preserve these bounds on the pressure field in predictive numerical simulations; that is, one needs to satisfy discrete maximum principles (DMP). Unfortunately, many of the existing formulations for flow through porous media models do not satisfy DMP. This paper presents a robust, scalable numerical formulation based on variational inequalities (VI) to model non-linear flows through heterogeneous, anisotropic porous media without violating DMP. VI is an optimization technique that places bounds on the numerical solutions of partial differential equations. To crystallize the ideas, a modification of the Darcy equations that accounts for pressure-dependent viscosity will be discretized using the lowest-order Raviart-Thomas (RT0) and Variational Multi-scale (VMS) finite element formulations. It will be shown that these formulations violate DMP and that the violations increase with an increase in anisotropy. It will then be shown that the proposed VI-based formulation provides a viable route to enforce DMP. Moreover, it will be shown that the proposed formulation is scalable and can work with any numerical discretization and weak form. A series of numerical benchmark problems are solved to demonstrate the effects of heterogeneity, anisotropy, and non-linearity on DMP violations under the two chosen formulations (RT0 and VMS), and that of non-linearity on solver convergence for the proposed VI-based formulation. Parallel scalability on modern computational platforms will be illustrated through strong-scaling studies, which will demonstrate the efficiency of the proposed formulation in a parallel setting. Algorithmic scalability as the problem size is scaled up will be demonstrated through novel static-scaling studies. 
The performed static-scaling studies can serve as a guide for users to be able to select an appropriate discretization for a given problem size.

  20. Guided by Theory, Informed by Practice: Training and Support for the Good Behavior Game, a Classroom-Based Behavior Management Strategy

    ERIC Educational Resources Information Center

    Poduska, Jeanne M.; Kurki, Anja

    2014-01-01

    Moving evidence-based practices for classroom behavior management into real-world settings is a high priority for education and public health. This article describes the development and use of a model of training and support for the Good Behavior Game (GBG), one of the few preventive interventions shown to have positive outcomes for elementary…

  1. Response surface method in geotechnical/structural analysis, phase 1

    NASA Astrophysics Data System (ADS)

    Wong, F. S.

    1981-02-01

    In the response surface approach, an approximating function is fit to a long-running computer code based on a limited number of code calculations. The approximating function, called the response surface, is then used to replace the code in the subsequent repetitive computations required in a statistical analysis. The procedure of response surface development and the feasibility of the method are shown using a sample problem in slope stability, which is based on data from centrifuge experiments on model soil slopes and involves five random soil parameters. It is shown that a response surface can be constructed from as few as four code calculations and that the response surface is computationally extremely efficient compared to the code calculation. Potential applications of this research include probabilistic analysis of dynamic, complex, nonlinear soil/structure systems such as slope stability, liquefaction, and nuclear reactor safety.
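    The three-step procedure — a few expensive code runs, a fitted response surface, then cheap repetitive statistical analysis on the surrogate — can be sketched as follows. The "code" and its two soil parameters are stand-ins invented for the example, not the report's slope-stability model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for an expensive code: a safety factor as a function of two
# soil parameters (purely illustrative, not the report's slope model).
def expensive_code(c, phi):
    return 0.8 + 0.02 * c + 0.05 * phi - 0.0004 * c * phi

# Step 1: a handful of code runs at chosen design points.
design = np.array([[10.0, 20.0], [30.0, 20.0], [10.0, 35.0], [30.0, 35.0]])
f = np.array([expensive_code(c, p) for c, p in design])

# Step 2: fit a response surface F ~ a0 + a1*c + a2*phi + a3*c*phi.
A = np.column_stack([np.ones(len(design)), design[:, 0], design[:, 1],
                     design[:, 0] * design[:, 1]])
coef = np.linalg.lstsq(A, f, rcond=None)[0]

def surface(c, phi):
    return coef[0] + coef[1] * c + coef[2] * phi + coef[3] * c * phi

# Step 3: cheap Monte Carlo on the surrogate instead of on the code.
c_s = rng.normal(20.0, 4.0, size=10000)
phi_s = rng.normal(27.0, 3.0, size=10000)
p_fail = np.mean(surface(c_s, phi_s) < 1.0)   # estimated failure probability
```

    Four code runs suffice here because the surrogate has four coefficients; the 10,000 Monte Carlo samples then cost essentially nothing, which is the efficiency gain the report describes.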

  2. On the stability of the exact solutions of the dual-phase lagging model of heat conduction.

    PubMed

    Ordonez-Miranda, Jose; Alvarado-Gil, Juan Jose

    2011-04-13

    The dual-phase lagging (DPL) model has been considered one of the most promising theoretical approaches to generalizing the classical Fourier law for heat conduction involving short time and space scales. Its applicability, potential, equivalences, and possible drawbacks have been discussed in the current literature. In this study, the implications of the exact solution of the DPL model of heat conduction in a three-dimensional bounded domain are explored. Based on the principle of causality, it is shown that the temperature gradient must always be the cause and the heat flux must be the effect in the process of heat transfer under the DPL model. This fact establishes explicitly that the single- and dual-phase lagging models, despite their different physical origins, are mathematically equivalent. In addition, taking into account the properties of the Lambert W function and requiring that the temperature remain stable, in the sense that it does not go to infinity as time increases, it is shown that the DPL model in its exact form cannot provide a general description of heat conduction phenomena.

  3. A high-resolution physically-based global flood hazard map

    NASA Astrophysics Data System (ADS)

    Kaheil, Y.; Begnudelli, L.; McCollum, J.

    2016-12-01

    We present the results from a physically-based global flood hazard model. The model uses a physically-based hydrologic model to simulate river discharges, and a 2D hydrodynamic model to simulate inundation. The model is set up such that it allows the application of large-scale flood hazard modeling through efficient use of parallel computing. For hydrology, we use the Hillslope River Routing (HRR) model. HRR accounts for surface hydrology using Green-Ampt parameterization. The model is calibrated against observed discharge data from the Global Runoff Data Centre (GRDC) network, among other publicly-available datasets. The parallel-computing framework takes advantage of the river network structure to minimize cross-processor messages, and thus significantly increases computational efficiency. For inundation, we implemented a computationally-efficient 2D finite-volume model with wetting/drying. The approach consists of simulating flood along the river network by forcing the hydraulic model with the streamflow hydrographs simulated by HRR, scaled up to certain return levels, e.g. 100 years. The model is distributed such that each available processor takes the next simulation. Given an approximate criterion, the simulations are ordered from most demanding to least demanding to ensure that all processors finish almost simultaneously. Upon completing all simulations, the maximum envelope of flood depth is taken to generate the final map. The model is applied globally, with selected results shown from different continents and regions. The maps shown depict flood depth and extent at different return periods. These maps, which are currently available at 3 arc-sec resolution (~90 m), can be made available at higher resolutions where high-resolution DEMs are available. 
The maps can be utilized by flood risk managers at the national, regional, and even local levels to further understand their flood risk exposure, exercise certain measures of mitigation, and/or transfer the residual risk financially through flood insurance programs.

  4. A validated approach for modeling collapse of steel structures

    NASA Astrophysics Data System (ADS)

    Saykin, Vitaliy Victorovich

    A civil engineering structure is faced with many hazardous conditions such as blasts, earthquakes, hurricanes, tornadoes, floods, and fires during its lifetime. Even though structures are designed for credible events that can happen during a lifetime of the structure, extreme events do happen and cause catastrophic failures. Understanding the causes and effects of structural collapse is now at the core of critical areas of national need. One factor that makes studying structural collapse difficult is the lack of full-scale structural collapse experimental test results against which researchers could validate their proposed collapse modeling approaches. The goal of this work is the creation of an element deletion strategy based on fracture models for use in validated prediction of collapse of steel structures. The current work reviews the state-of-the-art of finite element deletion strategies for use in collapse modeling of structures. It is shown that current approaches to element deletion in collapse modeling do not take into account stress triaxiality in vulnerable areas of the structure, which is important for proper fracture and element deletion modeling. The report then reviews triaxiality and its role in fracture prediction. It is shown that fracture in ductile materials is a function of triaxiality. It is also shown that, depending on the triaxiality range, different fracture mechanisms are active and should be accounted for. An approach using semi-empirical fracture models as a function of triaxiality are employed. The models to determine fracture initiation, softening and subsequent finite element deletion are outlined. This procedure allows for stress-displacement softening at an integration point of a finite element in order to subsequently remove the element. This approach avoids abrupt changes in the stress that would create dynamic instabilities, thus making the results more reliable and accurate. 
The calibration and validation of these models are shown. The calibration is performed using a particle swarm optimization algorithm to establish accurate parameters when calibrated to circumferentially notched tensile coupons. It is shown that consistent, accurate predictions are attained using the chosen models. The variation of triaxiality in steel material during plastic hardening and softening is reported. The range of triaxiality in steel structures undergoing collapse is investigated in detail and the accuracy of the chosen finite element deletion approaches is discussed. This is done through validation of different structural components and structural frames undergoing severe fracture and collapse.
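    A minimal sketch of a triaxiality-dependent fracture criterion of the kind described, using an illustrative Johnson-Cook-style form with made-up constants rather than the thesis's calibrated models: ductility decreases as stress triaxiality increases, so damage accumulates faster in high-triaxiality regions and triggers element deletion sooner.

```python
import math

def fracture_strain(triaxiality, d1=0.05, d2=1.0, d3=1.5):
    """Johnson-Cook-style fracture strain: ductility drops as the stress
    triaxiality (mean stress over von Mises stress) increases.
    Constants d1..d3 are illustrative, not calibrated values."""
    return d1 + d2 * math.exp(-d3 * triaxiality)

def damage_increment(d_eps_p, triaxiality):
    """Damage accumulated by a plastic strain increment; the element is
    deleted once the summed damage D reaches 1."""
    return d_eps_p / fracture_strain(triaxiality)

# The same plastic strain history accumulates more damage in a
# high-triaxiality region (e.g. a notch) than in a low-triaxiality one.
D_notch = sum(damage_increment(0.01, 0.9) for _ in range(30))
D_shear = sum(damage_increment(0.01, 0.1) for _ in range(30))
```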

  5. Probability model for atmospheric sulfur dioxide concentrations in the area of Venice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buttazzoni, C.; Lavagnini, I.; Marani, A.

    1986-09-01

    This paper deals with a comparative screening of existing air quality models based on their ability to simulate the distribution of sulfur dioxide data in the Venetian area. Investigations have been carried out on sulfur dioxide dispersion in the atmosphere of the Venetian area. The studies have been mainly focused on transport models (Gaussian, plume, and K-models) aiming at meaningful correlations of sources and receptors. Among the results, a noteworthy disagreement between simulated and experimental data, due to the lack of thorough knowledge of source field conditions and of the local meteorology of the sea-land transition area, has been shown. Investigations with receptor-oriented models (based, e.g., on time series analysis, Fourier analysis, or statistical distributions) have also been performed.

  6. Investigation into discretization methods of the six-parameter Iwan model

    NASA Astrophysics Data System (ADS)

    Li, Yikun; Hao, Zhiming; Feng, Jiaquan; Zhang, Dingguo

    2017-02-01

    The Iwan model is widely applied for the purpose of describing nonlinear mechanisms of jointed structures. In this paper, parameter identification procedures for the six-parameter Iwan model, based on joint experiments with different preload techniques, are performed. Four kinds of discretization methods deduced from the stiffness equation of the six-parameter Iwan model are provided, which can be used to discretize the integral-form Iwan model into a finite sum of Jenkins elements. In finite element simulation, the influences of the discretization methods and of the number of Jenkins elements on computing accuracy are discussed. Simulation results indicate that a higher accuracy can be obtained with larger numbers of Jenkins elements. It is also shown that, compared with the other three kinds of discretization methods, the geometric-series discretization based on stiffness provides the highest computing accuracy.
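    A minimal sketch of the kind of discretization being compared (with assumed parameter values, not those of the paper): the integral Iwan model is approximated by a parallel array of Jenkins elements whose slip strengths follow a geometric series, giving a concave, saturating monotonic force-displacement curve.

```python
import numpy as np

def jenkins_force(x, k, f_slip):
    """Monotonic-loading force of one Jenkins element: a spring of
    stiffness k in series with a Coulomb slider of strength f_slip."""
    return np.minimum(k * x, f_slip)

def iwan_force(x, n=50, k_total=1.0e6, f_max=100.0, ratio=0.8):
    """Force of an Iwan model discretized into n parallel Jenkins elements
    whose slip strengths follow a geometric series (illustrative values)."""
    f_slip = f_max * ratio ** np.arange(n)   # geometric-series slip strengths
    k = k_total / n                          # equal spring stiffness per element
    return float(sum(jenkins_force(x, k, f) for f in f_slip))

xs = np.linspace(0.0, 0.02, 200)
forces = np.array([iwan_force(x) for x in xs])
# The curve rises, softens as sliders slip one by one, and saturates near
# the total slider strength f_max * (1 - ratio**n) / (1 - ratio) ~ 500.
```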

  7. A fluid-mechanic-based model for the sedimentation of flocculated suspensions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chhabra, R.P.; Prasad, D.

    1991-02-01

    Due to the wide occurrence of suspensions of fine particles in the mineral and chemical processing industries, considerable interest has been shown in modeling the hydrodynamic behavior of such systems. A fluid-mechanic-based analysis is presented for the settling behavior of flocculated suspensions. Flocs have been modeled as composite spheres consisting of a solid core embedded in a shell of homogeneous and isotropic porous medium. Theoretical estimates of the rates of sedimentation for flocculated suspensions are obtained by solving the equations of continuity and of motion. The interparticle interactions are incorporated into the analysis by employing the Happel free-surface cell model. The results reported embrace wide ranges of conditions of floc size and concentration.

  8. Tissue multifractality and hidden Markov model based integrated framework for optimum precancer detection

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Sabyasachi; Das, Nandan K.; Kurmi, Indrajit; Pradhan, Asima; Ghosh, Nirmalya; Panigrahi, Prasanta K.

    2017-10-01

    We report the application of a hidden Markov model (HMM) on multifractal tissue optical properties derived via the Born approximation-based inverse light scattering method for effective discrimination of precancerous human cervical tissue sites from the normal ones. Two global fractal parameters, generalized Hurst exponent and the corresponding singularity spectrum width, computed by multifractal detrended fluctuation analysis (MFDFA), are used here as potential biomarkers. We develop a methodology that makes use of these multifractal parameters by integrating with different statistical classifiers like the HMM and support vector machine (SVM). It is shown that the MFDFA-HMM integrated model achieves significantly better discrimination between normal and different grades of cancer as compared to the MFDFA-SVM integrated model.
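    For orientation, ordinary (q = 2) detrended fluctuation analysis — the special case to which MFDFA reduces for q = 2 — can be sketched in a few lines of numpy. The series, scales, and seed below are arbitrary choices for the example, not the tissue data of the paper:

```python
import numpy as np

def dfa_hurst(x, scales=(8, 16, 32, 64, 128)):
    """Plain (q = 2) detrended fluctuation analysis; MFDFA reduces to this
    for q = 2. Returns the scaling exponent H of F(s) ~ s^H."""
    y = np.cumsum(x - np.mean(x))          # profile of the series
    fluct = []
    for s in scales:
        n_seg = len(y) // s
        f2 = []
        for i in range(n_seg):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear detrend
            f2.append(np.mean((seg - trend) ** 2))
        fluct.append(np.sqrt(np.mean(f2)))
    slope, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
    return slope

rng = np.random.default_rng(42)
h_white = dfa_hurst(rng.normal(size=4096))             # uncorrelated noise: H near 0.5
h_brown = dfa_hurst(np.cumsum(rng.normal(size=4096)))  # random walk: much larger H
```

    The generalized Hurst exponent used in the paper extends this by raising the segment fluctuations to a power q before averaging, so that different q probe small and large fluctuations separately.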

  9. An empirically-based model for the lift coefficients of twisted airfoils with leading-edge tubercles

    NASA Astrophysics Data System (ADS)

    Ni, Zao; Su, Tsung-chow; Dhanak, Manhar

    2018-04-01

    Experimental data for untwisted airfoils are utilized to propose a model for predicting the lift coefficients of twisted airfoils with leading-edge tubercles. The effectiveness of the empirical model is verified through comparison with results of a corresponding computational fluid-dynamic (CFD) study. The CFD study is carried out for both twisted and untwisted airfoils with tubercles, the latter shown to compare well with available experimental data. Lift coefficients of twisted airfoils predicted from the proposed empirically-based model match well with the corresponding coefficients determined using the verified CFD study. Flow details obtained from the latter provide better insight into the underlying mechanism and behavior at stall of twisted airfoils with leading edge tubercles.

  10. Prediction of Spatiotemporal Patterns of Neural Activity from Pairwise Correlations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marre, O.; El Boustani, S.; Fregnac, Y.

    We designed a model-based analysis to predict the occurrence of population patterns in distributed spiking activity. Using a maximum entropy principle with a Markovian assumption, we obtain a model that accounts for both spatial and temporal pairwise correlations among neurons. This model is tested on data generated with a Glauber spin-glass system and is shown to correctly predict the occurrence probabilities of spatiotemporal patterns significantly better than Ising models based only on spatial correlations. This increase of predictability was also observed on experimental data recorded in parietal cortex during slow-wave sleep. This approach can also be used to generate surrogates that reproduce the spatial and temporal correlations of a given data set.

  11. An object-based approach for detecting small brain lesions: application to Virchow-Robin spaces.

    PubMed

    Descombes, Xavier; Kruggel, Frithjof; Wollny, Gert; Gertz, Hermann Josef

    2004-02-01

    This paper is concerned with the detection of multiple small brain lesions from magnetic resonance imaging (MRI) data. A model based on the marked point process framework is designed to detect Virchow-Robin spaces (VRSs). These tubular shaped spaces are due to retraction of the brain parenchyma from its supplying arteries. VRS are described by simple geometrical objects that are introduced as small tubular structures. Their radiometric properties are embedded in a data term. A prior model includes interactions describing the clustering property of VRS. A Reversible Jump Markov Chain Monte Carlo algorithm (RJMCMC) optimizes the proposed model, obtained by multiplying the prior and the data model. Example results are shown on T1-weighted MRI datasets of elderly subjects.

  12. Model-based occluded object recognition using Petri nets

    NASA Astrophysics Data System (ADS)

    Zhou, Chuan; Hura, Gurdeep S.

    1998-09-01

    This paper discusses the use of Petri nets to model the process of object matching between an image and a model under different 2D geometric transformations. Such matching finds applications in sensor-based robot control, flexible manufacturing systems, industrial inspection, etc. A description approach for object structure is presented based on its topological relations, called the Point-Line Relation Structure (PLRS). It is shown how Petri nets can be used to model the matching process, and an optimal or near-optimal matching can be obtained by tracking the reachability graph of the net. The experimental results show that objects can be successfully identified and located under 2D transformations such as translations, rotations, scale changes, and distortions due to partial occlusion.

  13. EPR-based material modelling of soils

    NASA Astrophysics Data System (ADS)

    Faramarzi, Asaad; Alani, Amir M.

    2013-04-01

    In the past few decades, as a result of the rapid developments in computational software and hardware, alternative computer-aided pattern recognition approaches have been introduced to model many engineering problems, including constitutive modelling of materials. The main idea behind pattern recognition systems is that they learn adaptively from experience and extract various discriminants, each appropriate for its purpose. In this work an approach is presented for developing material models for soils based on evolutionary polynomial regression (EPR). EPR is a recently developed hybrid data mining technique that searches for structured mathematical equations (representing the behaviour of a system) using a genetic algorithm and the least squares method. Stress-strain data from triaxial tests are used to train and develop EPR-based material models for soil. The developed models are compared with some well-known conventional material models, and it is shown that EPR-based models can provide a better prediction of the behaviour of soils. The main benefits of EPR-based material models are that they provide a unified approach to constitutive modelling of all materials (i.e., all aspects of material behaviour can be implemented within a unified environment of an EPR model) and that they do not require any arbitrary choice of constitutive (mathematical) models. In EPR-based material models there are no material parameters to be identified, and as the model is trained directly from experimental data, EPR-based material models are the shortest route from experimental research (data) to numerical modelling. Another advantage of EPR-based constitutive models is that as more experimental data become available, the quality of the EPR prediction can be improved by learning from the additional data, so the EPR model can become more effective and robust. The developed EPR-based material models can be incorporated in finite element (FE) analysis.

  14. Nonlinear unitary quantum collapse model with self-generated noise

    NASA Astrophysics Data System (ADS)

    Geszti, Tamás

    2018-04-01

    Collapse models including some external noise of unknown origin are routinely used to describe phenomena on the quantum-classical border; in particular, quantum measurement. Although containing nonlinear dynamics and thereby exposed to the possibility of superluminal signaling in individual events, such models are widely accepted on the basis of fully reproducing the non-signaling statistical predictions of quantum mechanics. Here we present a deterministic nonlinear model without any external noise, in which randomness—instead of being universally present—emerges in the measurement process, from deterministic irregular dynamics of the detectors. The treatment is based on a minimally nonlinear von Neumann equation for a Stern–Gerlach or Bell-type measuring setup, containing coordinate and momentum operators in a self-adjoint skew-symmetric, split scalar product structure over the configuration space. The microscopic states of the detectors act as a nonlocal set of hidden parameters, controlling individual outcomes. The model is shown to display pumping of weights between setup-defined basis states, with a single winner randomly selected and the rest collapsing to zero. Environmental decoherence has no role in the scenario. Through stochastic modelling, based on Pearle’s ‘gambler’s ruin’ scheme, outcome probabilities are shown to obey Born’s rule under a no-drift or ‘fair-game’ condition. This fully reproduces quantum statistical predictions, implying that the proposed non-linear deterministic model satisfies the non-signaling requirement. Our treatment is still vulnerable to hidden signaling in individual events, which remains to be handled by future research.

  15. Modeling of direct detection Doppler wind lidar. I. The edge technique.

    PubMed

    McKay, J A

    1998-09-20

    Analytic models, based on a convolution of a Fabry-Perot etalon transfer function with a Gaussian spectral source, are developed for the shot-noise-limited measurement precision of Doppler wind lidars based on the edge filter technique by use of either molecular or aerosol atmospheric backscatter. The Rayleigh backscatter formulation yields a map of theoretical sensitivity versus etalon parameters, permitting design optimization and showing that the optimal system will have a Doppler measurement uncertainty no better than approximately 2.4 times that of a perfect, lossless receiver. An extension of the models to include the effect of limited etalon aperture leads to a condition for the minimum aperture required to match light collection optics. It is shown that, depending on the choice of operating point, the etalon aperture finesse must be 4-15 to avoid degradation of measurement precision. A convenient, closed-form expression for the measurement precision is obtained for spectrally narrow backscatter and is shown to be useful for backscatter that is spectrally broad as well. The models are extended to include extrinsic noise, such as solar background or the Rayleigh background on an aerosol Doppler lidar. A comparison of the model predictions with experiment has not yet been possible, but a comparison with detailed instrument modeling by McGill and Spinhirne shows satisfactory agreement. The models derived here will be more conveniently implemented than McGill and Spinhirne's and more readily permit physical insights to the optimization and limitations of the double-edge technique.

  16. Assessing rear-end crash potential in urban locations based on vehicle-by-vehicle interactions, geometric characteristics and operational conditions.

    PubMed

    Dimitriou, Loukas; Stylianou, Katerina; Abdel-Aty, Mohamed A

    2018-03-01

    Rear-end crashes are one of the most frequently occurring crash types, especially in urban networks. An understanding of the contributing factors and their significant association with rear-end crashes is of practical importance and will help in the development of effective countermeasures. The objective of this study is to assess rear-end crash potential at a microscopic level in an urban environment, by investigating vehicle-by-vehicle interactions. To do so, several traffic parameters at the individual vehicle level have been taken into consideration, for capturing car-following characteristics and vehicle interactions, and to investigate their effect on potential rear-end crashes. In this study rear-end crash potential was estimated based on the stopping distance between two consecutive vehicles, and four rear-end crash potential cases were developed. The results indicated that 66.4% of the observations were estimated as rear-end crash potentials. It was also shown that rear-end crash potential was present when traffic flow and the standard deviation of speed were higher. Locational characteristics such as lane of travel and location in the network were found to affect drivers' car-following decisions, and it was shown that speeds were lower and headways higher when Heavy Goods Vehicles lead. Finally, a model-based behavioral analysis based on Multinomial Logit regression was conducted to systematically identify the statistically significant variables explaining rear-end risk potential. The modeling results highlighted the significance of the explanatory variables associated with rear-end crash potential; however, it was shown that their effect varied among different model configurations. The outcome of the results can be of significant value for several purposes, such as real-time monitoring of risk potential, allocating enforcement units in urban networks and designing targeted proactive safety policies. Copyright © 2018 Elsevier Ltd. All rights reserved.
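
    The stopping-distance criterion described in the abstract can be illustrated with a minimal sketch. The reaction time, deceleration rate, and the single threshold rule below are assumptions for illustration only, not the paper's four crash-potential cases.

```python
def rear_end_crash_potential(gap_m, v_lead, v_follow,
                             reaction_s=1.0, decel=6.0):
    """Stopping-distance test for rear-end crash potential between two
    consecutive vehicles. gap_m is the bumper-to-bumper spacing in metres,
    speeds are in m/s, decel is an assumed braking deceleration in m/s^2.

    The follower has crash potential if, after its reaction time, it cannot
    stop within the gap plus the leader's own stopping distance.
    """
    d_lead = v_lead ** 2 / (2 * decel)                       # leader stops
    d_follow = v_follow * reaction_s + v_follow ** 2 / (2 * decel)
    return d_follow > gap_m + d_lead
```

At equal speeds of 20 m/s, a 5 m gap fails the test (crash potential) while a 30 m gap passes it, since the follower's reaction-time travel of 20 m is the deciding term.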

  17. Research on the equivalent circuit model of a circular flexural-vibration-mode piezoelectric transformer with moderate thickness.

    PubMed

    Huang, Yihua; Huang, Wenjin; Wang, Qinglei; Su, Xujian

    2013-07-01

    The equivalent circuit model of a piezoelectric transformer is useful in designing and optimizing the related driving circuits. Based on previous work, an equivalent circuit model for a circular flexural-vibration-mode piezoelectric transformer with moderate thickness is proposed and validated by finite element analysis. The input impedance, voltage gain, and efficiency of the transformer are determined through computation. The basic behaviors of the transformer are shown by numerical results.

  18. Piezoelectric transformer structural modeling--a review.

    PubMed

    Yang, Jiashi

    2007-06-01

    A review of piezoelectric transformer structural modeling is presented. The operating principle and the basic behavior of piezoelectric transformers as governed by the linear theory of piezoelectricity are shown by a simple theoretical analysis of a Rosen transformer based on extensional modes of a nonhomogeneous ceramic rod. Various transformers are classified according to their structural shapes, operating modes, and voltage-transforming capability. Theoretical and numerical modeling results from the theory of piezoelectricity are reviewed. More advanced modeling of thermal and nonlinear effects is also discussed. The article contains 167 references.

  19. A one-dimensional stochastic approach to the study of cyclic voltammetry with adsorption effects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samin, Adib J.

    In this study, a one-dimensional stochastic model based on the random walk approach is used to simulate cyclic voltammetry. The model takes into account mass transport, kinetics of the redox reactions, adsorption effects and changes in the morphology of the electrode. The model is shown to display the expected behavior. Furthermore, the model shows consistent qualitative agreement with a finite difference solution. This approach allows for an understanding of phenomena on a microscopic level and may be useful for analyzing qualitative features observed in experimentally recorded signals.
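
    A minimal sketch of a random-walk cyclic voltammetry simulation of this kind is shown below. The lattice geometry, the Butler-Volmer-style transfer probabilities, and all parameter values are illustrative assumptions, not the model of the abstract (which also includes adsorption and electrode morphology changes).

```python
import math
import random

def simulate_cv(n_particles=300, n_sites=20, steps_per_scan=300,
                e_start=-0.3, e_vertex=0.3, k0=0.2, alpha=0.5,
                f=38.9, seed=0):
    """Toy 1D random-walk cyclic voltammogram over one triangular sweep.

    Particles walk on a lattice; at the electrode (site 0) a particle is
    oxidised or reduced with potential-dependent, Butler-Volmer-style
    probabilities. The net conversions per step play the role of current.
    f ~ F/RT at 25 C in 1/V; all other names and values are illustrative.
    """
    rng = random.Random(seed)
    pos = [rng.randrange(1, n_sites) for _ in range(n_particles)]
    oxidised = [False] * n_particles
    current = []
    for t in range(2 * steps_per_scan):
        frac = t / steps_per_scan          # triangular potential waveform
        e = (e_start + frac * (e_vertex - e_start) if frac <= 1.0
             else e_vertex - (frac - 1.0) * (e_vertex - e_start))
        p_ox = min(1.0, k0 * math.exp(alpha * f * e))
        p_red = min(1.0, k0 * math.exp(-(1.0 - alpha) * f * e))
        i_net = 0
        for j in range(n_particles):
            # unbiased random step, clamped at the walls
            pos[j] = min(max(pos[j] + rng.choice((-1, 1)), 0), n_sites - 1)
            if pos[j] == 0:                # electron transfer at electrode
                if not oxidised[j] and rng.random() < p_ox:
                    oxidised[j] = True
                    i_net += 1
                elif oxidised[j] and rng.random() < p_red:
                    oxidised[j] = False
                    i_net -= 1
        current.append(i_net)
    return current
```

Plotting the returned counts against the triangular potential reproduces the qualitative anodic/cathodic peak shape that the abstract's comparison with a finite-difference solution refers to.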

  20. Simulink Model of the Ares I Upper Stage Main Propulsion System

    NASA Technical Reports Server (NTRS)

    Burchett, Bradley T.

    2008-01-01

    A numerical model of the Ares I upper stage main propulsion system is formulated based on first principles. Equations are written as non-linear ordinary differential equations. The GASP Fortran code is used to compute thermophysical properties of the working fluids. Complicated algebraic constraints are numerically solved. The model is implemented in Simulink and provides a rudimentary simulation of the time history of important pressures and temperatures during re-pressurization, boost and upper stage firing. The model is validated against an existing reliable code, and typical results are shown.

  1. A differential geometry model for the perceived colors space

    NASA Astrophysics Data System (ADS)

    Provenzi, Edoardo

    2016-06-01

    The space of perceived colors, before acquiring an industrial interest, received systematic theoretical attention from philosophers, physicists and mathematicians, and research on this topic is still active today. This paper presents a critical overview of a model based on differential geometry proposed by H. L. Resnikoff in 1974. It is shown that, while some fundamental and elegant ideas behind this model can still be used as a guiding principle, other parts of the model must be updated to comply with modern findings about color perception.

  2. A one-dimensional stochastic approach to the study of cyclic voltammetry with adsorption effects

    NASA Astrophysics Data System (ADS)

    Samin, Adib J.

    2016-05-01

    In this study, a one-dimensional stochastic model based on the random walk approach is used to simulate cyclic voltammetry. The model takes into account mass transport, kinetics of the redox reactions, adsorption effects and changes in the morphology of the electrode. The model is shown to display the expected behavior. Furthermore, the model shows consistent qualitative agreement with a finite difference solution. This approach allows for an understanding of phenomena on a microscopic level and may be useful for analyzing qualitative features observed in experimentally recorded signals.

  3. Numerical Analysis of Modeling Based on Improved Elman Neural Network

    PubMed Central

    Jie, Shao

    2014-01-01

    A model based on the improved Elman neural network (IENN) is proposed to analyze nonlinear circuits with memory effects. In this model, the hidden-layer neurons are activated by a group of Chebyshev orthogonal basis functions instead of sigmoid functions. The error curves of the sum of squared error (SSE) varying with the number of hidden neurons and the iteration step are studied to determine the number of hidden-layer neurons. Simulation results for the half-bridge class-D power amplifier (CDPA) with two-tone and broadband input signals show that the proposed behavioral model can reconstruct the system of CDPAs accurately and depict the memory effect of CDPAs well. Compared with the Volterra-Laguerre (VL) model, the Chebyshev neural network (CNN) model, and the basic Elman neural network (BENN) model, the proposed model has better performance. PMID:25054172
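
    The idea of replacing sigmoid activations with a Chebyshev orthogonal basis can be sketched as follows. The network sizes, the weight initialisation, and the use of tanh to keep the hidden state inside the Chebyshev domain [-1, 1] are assumptions for illustration, not details from the paper.

```python
import numpy as np

def chebyshev_basis(x, order):
    """First-kind Chebyshev polynomials T_0..T_{order-1}, built with the
    recurrence T_{n+1}(x) = 2 x T_n(x) - T_{n-1}(x)."""
    T = [np.ones_like(x), x]
    for _ in range(2, order):
        T.append(2 * x * T[-1] - T[-2])
    return np.stack(T[:order], axis=-1)

class ChebyshevElman:
    """Minimal Elman-style recurrent cell whose hidden state is expanded
    in a Chebyshev basis instead of passing through sigmoids (hypothetical
    sketch; weight shapes and initialisation are assumptions)."""

    def __init__(self, n_in, n_hidden, order=4, seed=0):
        rng = np.random.default_rng(seed)
        self.order = order
        self.w_in = rng.normal(0.0, 0.3, (n_hidden, n_in))
        self.w_rec = rng.normal(0.0, 0.3, (n_hidden, n_hidden))
        self.w_out = rng.normal(0.0, 0.3, (n_hidden * order,))
        self.h = np.zeros(n_hidden)

    def step(self, x):
        # tanh keeps the hidden state inside [-1, 1], the Chebyshev domain
        self.h = np.tanh(self.w_in @ x + self.w_rec @ self.h)
        phi = chebyshev_basis(self.h, self.order).ravel()
        return float(self.w_out @ phi)   # scalar output
```

The recurrent context (the Elman feedback through `w_rec`) is what lets such a model capture the memory effect the abstract describes, while the basis expansion adds polynomial nonlinearity at no extra recurrent cost.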

  4. The Effects of Implementing TopModel Concepts in the Noah Model

    NASA Technical Reports Server (NTRS)

    Peters-Lidard, C. D.; Houser, Paul R. (Technical Monitor)

    2002-01-01

    Topographic effects on runoff generation have been documented observationally (e.g., Dunne and Black, 1970) and are the subject of the physically based rainfall-runoff model TOPMODEL (Beven and Kirkby, 1979; Beven, 1986a;b) and its extensions, which incorporate variable soil transmissivity effects (Sivapalan et al, 1987, Wood et al., 1988; 1990). These effects have been shown to exert significant control over the spatial distribution of runoff, soil moisture and evapotranspiration, and by extension, the latent and sensible heat fluxes

  5. How effective is advertising in duopoly markets?

    NASA Astrophysics Data System (ADS)

    Sznajd-Weron, K.; Weron, R.

    2003-06-01

    A simple Ising spin model which can describe the mechanism of advertising in a duopoly market is proposed. In contrast to other agent-based models, the influence does not flow inward from the surrounding neighbors to the center site, but spreads outward from the center to the neighbors. The model thus describes the spread of opinions among customers. It is shown via standard Monte Carlo simulations that very simple rules and the inclusion of an external field (an advertising campaign) lead to phase transitions.
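
    A toy version of such outward-influence dynamics with an advertising field can be sketched as follows. The single-site convincing rule and the field strength are simplifying assumptions for illustration, not necessarily the authors' exact update rules.

```python
import random

def advertising_model(n=200, steps=20000, h=0.05, seed=1):
    """Toy outward-influence opinion model on a ring. Spins +1/-1 stand
    for the two brands in a duopoly; a randomly chosen customer convinces
    both neighbours (influence spreads outward from the center site), and
    with probability h an advertising 'field' pushes a random customer
    toward brand +1. A hypothetical sketch, not the paper's model.
    """
    rng = random.Random(seed)
    s = [rng.choice((-1, 1)) for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)
        s[(i - 1) % n] = s[i]          # outward influence on both
        s[(i + 1) % n] = s[i]          # nearest neighbours
        if rng.random() < h:           # advertising campaign for brand +1
            s[rng.randrange(n)] = 1
    return sum(s) / n                  # magnetisation = market-share bias
```

Sweeping `h` and averaging the returned magnetisation over many seeds is the standard Monte Carlo way to look for the field-induced transition the abstract mentions.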

  6. Analysis of Physical and Numerical Factors for Prediction of UV Radiation from High Altitude Two-Phase Plumes

    DTIC Science & Technology

    2008-05-30

    varies from continuum inside the nozzle, to transitional in the near field, to free molecular in the far field of the plume. The scales of interest vary...unity based on the rocket length. This results in the formation of a viscous shock layer characterized by a bimodal molecular velocity distribution. The...transfer model. Previous analysis21 have shown that the heat transfer model implemented in CFD++ is reproduced closely by the free molecular model

  7. Linear viscoelasticity of a single semiflexible polymer with internal friction.

    PubMed

    Hiraiwa, Tetsuya; Ohta, Takao

    2010-07-28

    The linear viscoelastic behaviors of single semiflexible chains with internal friction are studied based on the wormlike-chain model. It is shown that the frequency dependence of the complex compliance in the high frequency limit is the same as that of the Voigt model. This asymptotic behavior appears also for the Rouse model with internal friction. We derive the characteristic times for both the high frequency limit and the low frequency limit and compare the results with those obtained by Khatri et al.

  8. Dynamic Modelling for Planar Extensible Continuum Robot Manipulators

    DTIC Science & Technology

    2006-01-01

    5a. CONTRACT NUMBER 5b. GRANT NUMBER 5c. PROGRAM ELEMENT NUMBER 6. AUTHOR(S) 5d. PROJECT NUMBER 5e. TASK NUMBER 5f. WORK UNIT NUMBER 7... octopus arm [18]. The OCTARM, shown in Figure 1, is a three-section robot with nine degrees of freedom. Aside from two axis bending with constant... octopus arm. However, while allowing extensibility, the model is based on an approximation (by a Þnite number of linear models) to the true continuum

  9. Application of enthalpy model for floating zone silicon crystal growth

    NASA Astrophysics Data System (ADS)

    Krauze, A.; Bergfelds, K.; Virbulis, J.

    2017-09-01

    A 2D simplified crystal growth model based on the enthalpy method and coupled with a low-frequency harmonic electromagnetic model is developed to simulate the silicon crystal growth near the external triple point (ETP) and crystal melting on the open melting front of a polycrystalline feed rod in FZ crystal growth systems. Simulations of the crystal growth near the ETP show significant influence of inhomogeneities of the EM power distribution on the crystal growth rate for a 4-inch floating zone (FZ) system. The generated growth rate fluctuations are shown to be larger in the system with the higher crystal pull rate. Simulations of crystal melting on the open melting front of the polycrystalline rod show the development of melt-filled grooves at the open melting front surface. The distance between the grooves is shown to grow with increasing skin-layer depth in the solid material.

  10. DEMONSTRATION OF EQUIVALENCY OF CANE AND SOFTWOOD BASED CELOTEX FOR MODEL 9975 SHIPPING PACKAGES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watkins, R; Jason Varble, J

    2008-05-27

    Cane-based Celotex™ has been used extensively in various Department of Energy (DOE) packages as a thermal insulator and impact absorber. Cane-based Celotex™ fiberboard was only manufactured by Knight-Celotex Fiberboard at their Marrero Plant in Louisiana. However, Knight-Celotex Fiberboard shut down their Marrero Plant in early 2007 due to impacts from hurricane Katrina and other economic factors. Therefore, cane-based Celotex™ fiberboard is no longer available for use in the manufacture of new shipping packages requiring the material as a component. Current consolidation plans for the DOE Complex require the procurement of several thousand new Model 9975 shipping packages requiring cane-based Celotex™ fiberboard. Therefore, an alternative to cane-based Celotex™ fiberboard is needed. Knight-Celotex currently manufactures Celotex™ fiberboard from other cellulosic materials, such as hardwood and softwood. A review of the relevant literature has shown that softwood-based Celotex™ meets all parameters important to the Model 9975 shipping package.

  11. Measurement of Online Student Engagement: Utilization of Continuous Online Student Behavior Indicators as Items in a Partial Credit Rasch Model

    ERIC Educational Resources Information Center

    Anderson, Elizabeth

    2017-01-01

    Student engagement has been shown to be essential to the development of research-based best practices for K-12 education. It has been defined and measured in numerous ways. The purpose of this research study was to develop a measure of online student engagement for grades 3 through 8 using a partial credit Rasch model and validate the measure…

  12. Monte Carlo calculation of dynamical properties of the two-dimensional Hubbard model

    NASA Technical Reports Server (NTRS)

    White, S. R.; Scalapino, D. J.; Sugar, R. L.; Bickers, N. E.

    1989-01-01

    A new method is introduced for analytically continuing imaginary-time data from quantum Monte Carlo calculations to the real-frequency axis. The method is based on a least-squares-fitting procedure with constraints of positivity and smoothness on the real-frequency quantities. Results are shown for the single-particle spectral-weight function and density of states for the half-filled, two-dimensional Hubbard model.

  13. Simulational nanoengineering: Molecular dynamics implementation of an atomistic Stirling engine.

    PubMed

    Rapaport, D C

    2009-04-01

    A nanoscale-sized Stirling engine with an atomistic working fluid has been modeled using molecular dynamics simulation. The design includes heat exchangers based on thermostats, pistons attached to a flywheel under load, and a regenerator. Key aspects of the behavior, including the time-dependent flows, are described. The model is shown to be capable of stable operation while producing net work at a moderate level of efficiency.

  14. The use of fractional order derivatives for eddy current non-destructive testing

    NASA Astrophysics Data System (ADS)

    Sikora, Ryszard; Grzywacz, Bogdan; Chady, Tomasz

    2018-04-01

    The paper presents the possibility of using the fractional derivatives for non-destructive testing when a multi-frequency method based on eddy current is applied. It is shown that frequency characteristics obtained during tests can be approximated by characteristics of a proposed model in the form of fractional order transfer function, and values of parameters of this model can be utilized for detection and identification of defects.
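
    The fractional-order behaviour referred to above can be illustrated with the standard Grünwald-Letnikov discretisation of a fractional derivative. This textbook formula is shown only for orientation and is not claimed to be the authors' identification procedure.

```python
def gl_fractional_derivative(f_samples, alpha, h):
    """Grünwald-Letnikov estimate of the order-alpha derivative of a
    uniformly sampled signal:

        D^alpha f(t_n) ~ h**(-alpha) * sum_{k=0..n} w_k * f(t_{n-k}),

    with binomial weights w_0 = 1, w_k = w_{k-1} * (1 - (alpha + 1) / k).
    For alpha = 1 the weights reduce to (1, -1, 0, ...), i.e. the usual
    first difference.
    """
    n = len(f_samples)
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return [
        sum(w[k] * f_samples[i - k] for k in range(i + 1)) / h ** alpha
        for i in range(n)
    ]
```

A non-integer `alpha` (e.g. 0.5) gives the long-memory frequency response that a fractional-order transfer function models, which is what makes such parameters usable as defect signatures in multi-frequency eddy current testing.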

  15. The impact of using different modern climate data sets in pollen-based paleoclimate reconstructions of North America

    NASA Astrophysics Data System (ADS)

    Ladd, M.; Way, R. G.; Viau, A. E.

    2015-03-01

    The use of different modern climate data sets is shown to impact a continental-scale pollen-based reconstruction of mean July temperature (TJUL) over the last 2000 years for North America. Data from climate stations, data physically modeled from climate stations, and reanalysis products are used to calibrate the reconstructions. Results show that the use of reanalysis products yields warmer and/or smoother reconstructions compared with station-based data sets. The reconstructions during the period 1050-1550 CE are shown to be more variable because of a high-latitude cold bias in the modern TJUL data. The ultra-high-resolution WorldClim gridded data may only be useful if the modern pollen sites have at least the same spatial precision as the gridded dataset. Hence we justify the use of the lapse-rate-corrected University of East Anglia Climate Research Unit (CRU) based Whitmore modern climate data set for North American pollen-based climate reconstructions.

  16. Identifying Seizure Onset Zone From the Causal Connectivity Inferred Using Directed Information

    NASA Astrophysics Data System (ADS)

    Malladi, Rakesh; Kalamangalam, Giridhar; Tandon, Nitin; Aazhang, Behnaam

    2016-10-01

    In this paper, we developed a model-based and a data-driven estimator for directed information (DI) to infer the causal connectivity graph between electrocorticographic (ECoG) signals recorded from the brain and to identify the seizure onset zone (SOZ) in epileptic patients. Directed information, an information-theoretic quantity, is a general metric to infer causal connectivity between time series and is not restricted to a particular class of models, unlike the popular metrics based on Granger causality or transfer entropy. The proposed estimators are shown to be almost surely convergent. Causal connectivity between ECoG electrodes in five epileptic patients is inferred using the proposed DI estimators, after validating their performance on simulated data. We then proposed a model-based and a data-driven SOZ identification algorithm to identify the SOZ from the causal connectivity inferred using the model-based and data-driven DI estimators, respectively. The data-driven SOZ identification outperforms the model-based SOZ identification algorithm when benchmarked against visual analysis by a neurologist, the current clinical gold standard. The causal connectivity analysis presented here is the first step towards developing novel non-surgical treatments for epilepsy.

  17. A bootstrap based space-time surveillance model with an application to crime occurrences

    NASA Astrophysics Data System (ADS)

    Kim, Youngho; O'Kelly, Morton

    2008-06-01

    This study proposes a bootstrap-based space-time surveillance model. Designed to find emerging hotspots in near-real time, the bootstrap-based model is characterized by its use of past occurrence information and bootstrap permutations. Many existing space-time surveillance methods, using population-at-risk data to generate expected values, produce hotspots bounded by administrative area units and are of limited use for near-real-time applications because of the population data needed. However, this study generates expected values for local hotspots from past occurrences rather than population at risk. Also, bootstrap permutations of previous occurrences are used for significance tests. Consequently, the bootstrap-based model, without the requirement of population-at-risk data, (1) is free from administrative area restriction, (2) enables more frequent surveillance for continuously updated registry databases, and (3) is readily applicable to criminology and epidemiology surveillance. The bootstrap-based model performs better for space-time surveillance than the space-time scan statistic. This is shown by means of simulations and an application to residential crime occurrences in Columbus, OH, in the year 2000.
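
    The core bootstrap idea, comparing a current count against values resampled from past occurrences instead of population-at-risk expectations, can be sketched as follows. This one-location toy ignores the spatial scanning component of the full model and is a hypothetical illustration, not the authors' algorithm.

```python
import random

def bootstrap_hotspot_pvalue(history, current, n_boot=2000, seed=0):
    """Flag an emerging hotspot by comparing the current period's event
    count with counts resampled (with replacement) from past periods.

    history: list of event counts for the same location in past periods.
    current: event count observed in the present period.
    Returns a bootstrap p-value for 'current is unusually high'; small
    values suggest an emerging hotspot.
    """
    rng = random.Random(seed)
    exceed = 0
    for _ in range(n_boot):
        if rng.choice(history) >= current:   # resampled pseudo-period
            exceed += 1
    # add-one correction keeps the p-value strictly positive
    return (exceed + 1) / (n_boot + 1)
```

Running this per location on a continuously updated registry gives the administrative-boundary-free, near-real-time surveillance the abstract emphasises, at the cost of needing enough past periods per location.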

  18. A GPS Phase-Locked Loop Performance Metric Based on the Phase Discriminator Output

    PubMed Central

    Stevanovic, Stefan; Pervan, Boris

    2018-01-01

    We propose a novel GPS phase-lock loop (PLL) performance metric based on the standard deviation of tracking error (defined as the discriminator’s estimate of the true phase error), and explain its advantages over the popular phase jitter metric using theory, numerical simulation, and experimental results. We derive an augmented GPS phase-lock loop (PLL) linear model, which includes the effect of coherent averaging, to be used in conjunction with this proposed metric. The augmented linear model allows more accurate calculation of tracking error standard deviation in the presence of additive white Gaussian noise (AWGN) as compared to traditional linear models. The standard deviation of tracking error, with a threshold corresponding to half of the arctangent discriminator pull-in region, is shown to be a more reliable/robust measure of PLL performance under interference conditions than the phase jitter metric. In addition, the augmented linear model is shown to be valid up until this threshold, which facilitates efficient performance prediction, so that time-consuming direct simulations and costly experimental testing can be reserved for PLL designs that are much more likely to be successful. The effect of varying receiver reference oscillator quality on the tracking error metric is also considered. PMID:29351250

  19. Adaptive regularization network based neural modeling paradigm for nonlinear adaptive estimation of cerebral evoked potentials.

    PubMed

    Zhang, Jian-Hua; Böhme, Johann F

    2007-11-01

    In this paper we report an adaptive regularization network (ARN) approach to realizing fast blind separation of cerebral evoked potentials (EPs) from background electroencephalogram (EEG) activity with no need to make any explicit assumption on the statistical (or deterministic) signal model. The ARNs are proposed to construct nonlinear EEG and EP signal models. A novel adaptive regularization training (ART) algorithm is proposed to improve the generalization performance of the ARN. Two adaptive neural modeling methods based on the ARN are developed and their implementation and performance analysis are also presented. The computer experiments using simulated and measured visual evoked potential (VEP) data have shown that the proposed ARN modeling paradigm yields computationally efficient and more accurate VEP signal estimation owing to its intrinsic model-free and nonlinear processing characteristics.

  20. Modeling and performance analysis of QoS data

    NASA Astrophysics Data System (ADS)

    Strzeciwilk, Dariusz; Zuberek, Włodzimierz M.

    2016-09-01

    The article presents the results of modeling and analysis of data transmission performance in systems that support quality of service. Models are designed and tested taking into account a multiservice network architecture, i.e. one supporting the transmission of data belonging to different classes of traffic. Traffic-shaping mechanisms based on Priority Queuing were studied, both with a single integrated data source and with various generated data sources. The basic problems of QoS-supporting architectures and queuing systems are discussed. Models based on Petri nets, supported by temporal logics, were designed and built, and simulation tools were used to verify the traffic-shaping mechanisms with the applied queuing algorithms. It is shown that temporal Petri net models can be effectively used in modeling and analyzing the performance of computer networks.

  1. Viral Booster Vaccines Improve Mycobacterium bovis BCG-Induced Protection Against Bovine Tuberculosis

    USDA-ARS?s Scientific Manuscript database

    Previous work in small animal laboratory models of tuberculosis have shown that vaccination strategies based on heterologous prime-boost protocols using Mycobacterium bovis bacille Calmette-Guerin (BCG) to prime and Modified Vaccinia Ankara strain (MVA85A) or recombinant attenuated adenoviruses (Ad8...

  2. A PHYSIOLOGICALLY-BASED PHARMACOKINETIC MODEL FOR DEVELOPMENTAL EXPOSURE TO PBDE-47 IN RODENTS

    EPA Science Inventory

    Polybrominated diphenyl ethers (PBDEs) are used commercially as additive flame retardants and have been shown to transfer into environmental compartments where they have the potential to bioaccumulate in wildlife and in people. These compounds have been detected in blood and oth...

  3. An Optimal Control Modification to Model-Reference Adaptive Control for Fast Adaptation

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Krishnakumar, Kalmanje; Boskovic, Jovan

    2008-01-01

    This paper presents a method that can achieve fast adaptation for a class of model-reference adaptive control. It is well known that standard model-reference adaptive control exhibits high-gain control behaviors when a large adaptive gain is used to achieve fast adaptation in order to reduce tracking error rapidly. High-gain control creates high-frequency oscillations that can excite unmodeled dynamics and can lead to instability. The fast adaptation approach is based on the minimization of the squares of the tracking error, which is formulated as an optimal control problem. The necessary condition of optimality is used to derive an adaptive law using the gradient method. This adaptive law is shown to result in uniform boundedness of the tracking error by means of Lyapunov's direct method. Furthermore, this adaptive law allows a large adaptive gain to be used without causing undesired high-gain control effects. The method is shown to be more robust than standard model-reference adaptive control. Simulations demonstrate the effectiveness of the proposed method.
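
    For orientation, a scalar model-reference adaptive control loop with the standard gradient adaptive law can be sketched as below. This is the textbook baseline that the paper modifies, not the proposed optimal control modification, and all gains and plant parameters are illustrative assumptions.

```python
def mrac_scalar(a=1.0, b=1.0, a_m=-2.0, b_m=2.0, gamma=10.0,
                dt=0.001, t_end=10.0):
    """Scalar model-reference adaptive control, gradient adaptive law.

    Plant:            x'   = a * x + b * u      (a, b unknown to the law)
    Reference model:  x_m' = a_m * x_m + b_m * r
    Controller:       u    = kx * x + kr * r
    Adaptation:       kx' = -gamma * e * x,  kr' = -gamma * e * r,
                      with tracking error e = x - x_m.
    Forward-Euler integration; a unit step command r = 1 is applied.
    """
    x, x_m, kx, kr = 0.0, 0.0, 0.0, 0.0
    r = 1.0
    for _ in range(int(t_end / dt)):
        e = x - x_m                        # tracking error
        u = kx * x + kr * r
        kx -= gamma * e * x * dt           # gradient adaptive laws
        kr -= gamma * e * r * dt
        x += (a * x + b * u) * dt
        x_m += (a_m * x_m + b_m * r) * dt
    return x, x_m, abs(x - x_m)
```

Raising `gamma` speeds up the error decay but makes the gain trajectories increasingly oscillatory, which is exactly the high-gain effect the paper's optimal control modification is designed to suppress.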

  4. Complex networks untangle competitive advantage in Australian football

    NASA Astrophysics Data System (ADS)

    Braham, Calum; Small, Michael

    2018-05-01

    We construct player-based complex network models of Australian football teams for the 2014 Australian Football League season, modelling the passes between players as weighted, directed edges. We show that analysis of a range of network measures can give insight into the underlying structure and strategy of Australian football teams, quantitatively distinguishing different playing styles. The relationships observed between network properties and match outcomes suggest that successful teams exhibit well-connected passing networks with the passes distributed between all 22 players as evenly as possible. Linear regression models of team scores and match margins show significant improvements in R2 and Bayesian information criterion when network measures are added to models that use conventional measures, demonstrating that network analysis measures contain useful, extra information. Several measures, particularly the mean betweenness centrality, are shown to be useful in predicting the outcomes of future matches, suggesting they measure some aspect of the intrinsic strength of teams. In addition, several local centrality measures are shown to be useful in analysing individual players' differing contributions to the team's structure.
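
    The evenness of a team's pass distribution can be quantified in many ways; a simple proxy (not one of the paper's network measures) is the normalized Shannon entropy of each player's pass count. The tallies below are made up for illustration.

```python
# Normalized Shannon entropy of a team's pass-count distribution: 1.0 means
# passes are spread perfectly evenly among the 22 players. The tallies below
# are hypothetical, for illustration only.
import math

def pass_evenness(pass_counts):
    total = sum(pass_counts)
    probs = [c / total for c in pass_counts if c > 0]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(len(pass_counts))

even_team = [20] * 22               # ball shared by all 22 players
uneven_team = [200] + [2] * 21      # play funneled through one player
```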

  5. Complex networks untangle competitive advantage in Australian football.

    PubMed

    Braham, Calum; Small, Michael

    2018-05-01

    We construct player-based complex network models of Australian football teams for the 2014 Australian Football League season, modelling the passes between players as weighted, directed edges. We show that analysis of a range of network measures can give insight into the underlying structure and strategy of Australian football teams, quantitatively distinguishing different playing styles. The relationships observed between network properties and match outcomes suggest that successful teams exhibit well-connected passing networks with the passes distributed between all 22 players as evenly as possible. Linear regression models of team scores and match margins show significant improvements in R2 and Bayesian information criterion when network measures are added to models that use conventional measures, demonstrating that network analysis measures contain useful, extra information. Several measures, particularly the mean betweenness centrality, are shown to be useful in predicting the outcomes of future matches, suggesting they measure some aspect of the intrinsic strength of teams. In addition, several local centrality measures are shown to be useful in analysing individual players' differing contributions to the team's structure.

  6. Steady state numerical solutions for determining the location of MEMS on projectile

    NASA Astrophysics Data System (ADS)

    Abiprayu, K.; Abdigusna, M. F. F.; Gunawan, P. H.

    2018-03-01

    This paper compares numerical solutions of the steady and unsteady state heat distribution models on a projectile. Here, the best location for installing MEMS on the projectile, based on the surface temperature, is investigated. The numerical iteration methods of Jacobi and Gauss-Seidel are used to solve the steady state heat distribution model on the projectile. The results using Jacobi and Gauss-Seidel are identical, but the iteration costs of the two methods differ: Jacobi's method requires 350 iterations, whereas Gauss-Seidel requires only 188, converging faster than Jacobi's method. The comparison of the steady state simulation with an unsteady state model from a reference is satisfactory. Moreover, the best candidate location for installing MEMS on the projectile is observed at point T(10, 0), which has the lowest temperature of all points considered. The temperatures using Jacobi and Gauss-Seidel for scenarios 1 and 2 at T(10, 0) are 307 and 309 Kelvin, respectively.
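
    The two iteration schemes can be sketched side by side on a toy steady-state (Laplace) temperature problem; the grid size, boundary temperatures and tolerance below are assumptions, not the paper's projectile geometry.

```python
# Jacobi vs Gauss-Seidel iteration for a steady-state (Laplace) temperature
# field on a small square plate -- a toy stand-in for the projectile surface
# model; grid size, boundary temperatures and tolerance are assumptions.

def solve(method, n=20, tol=1e-4, hot=400.0, cold=300.0):
    T = [[cold] * n for _ in range(n)]
    for i in range(n):
        T[i][0] = hot                      # left edge held hot
    iters = 0
    while True:
        iters += 1
        # Jacobi reads only the previous iterate; Gauss-Seidel reads values
        # already updated in the current sweep, which speeds convergence.
        src = [row[:] for row in T] if method == "jacobi" else T
        diff = 0.0
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                new = 0.25 * (src[i-1][j] + src[i+1][j] + src[i][j-1] + src[i][j+1])
                diff = max(diff, abs(new - T[i][j]))
                T[i][j] = new
        if diff < tol:
            return T, iters

_, jac_iters = solve("jacobi")
_, gs_iters = solve("gauss-seidel")
```

    Both methods converge to the same field, but Gauss-Seidel needs roughly half the iterations, consistent with the 350 versus 188 iterations reported above.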

  7. A numerical procedure for recovering true scattering coefficients from measurements with wide-beam antennas

    NASA Technical Reports Server (NTRS)

    Wang, Qinglin; Gogineni, S. P.

    1991-01-01

    A numerical procedure is presented for estimating the true scattering coefficient, sigma(sup 0), from measurements made using wide-beam antennas. The use of wide-beam antennas results in an inaccurate estimate of sigma(sup 0) if the narrow-beam approximation is used in the retrieval process. To reduce this error, a correction procedure was proposed that estimates the error resulting from the narrow-beam approximation and uses it to obtain a more accurate estimate of sigma(sup 0). An exponential model was assumed to account for the variation of sigma(sup 0) with incidence angle, and the model parameters are estimated from measured data. Based on the model and knowledge of the antenna pattern, the procedure calculates the error due to the narrow-beam approximation. The procedure is shown to provide a significant improvement in the estimation of sigma(sup 0) obtained with wide-beam antennas, and to be insensitive to the assumed sigma(sup 0) model.
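
    An exponential angular model of this kind can be fitted by log-linear least squares; a minimal sketch with synthetic, noise-free data (the form sigma0(theta) = A*exp(-B*theta) and every number below are assumptions for illustration):

```python
# Log-linear least-squares fit of an assumed exponential angular model
#   sigma0(theta) = A * exp(-B * theta)
# for the scattering coefficient; angles and parameter values are synthetic.
import math

def fit_exponential(angles, sigma0):
    # linearize: ln(sigma0) = ln(A) - B * theta, then ordinary least squares
    ys = [math.log(s) for s in sigma0]
    n = len(angles)
    mx, my = sum(angles) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(angles, ys))
             / sum((x - mx) ** 2 for x in angles))
    return math.exp(my - slope * mx), -slope      # A, B

angles = [10.0, 20.0, 30.0, 40.0, 50.0]           # incidence angles (deg)
A_true, B_true = 0.5, 0.08
data = [A_true * math.exp(-B_true * t) for t in angles]
A_est, B_est = fit_exponential(angles, data)
```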

  8. Accurate electromagnetic modeling of terahertz detectors

    NASA Technical Reports Server (NTRS)

    Focardi, Paolo; McGrath, William R.

    2004-01-01

    Twin slot antennas coupled to superconducting devices have been developed over the years as single-pixel detectors in the terahertz (THz) frequency range for space-based astronomy applications. Used either for mixing or direct detection, they have been the object of several investigations and are currently being developed for several missions funded or co-funded by NASA. Although they have shown promising performance in terms of noise and sensitivity, they have usually also shown a considerable disagreement between calculated and measured performance, especially in center frequency and bandwidth. In this paper we present a thorough and accurate electromagnetic model of the complete detector and compare the results of calculations with measurements. Starting from a model of the embedding circuit, the effect of all the other elements of the detector on the coupled power has been analyzed. An extensive variety of measured and calculated data, as presented in this paper, demonstrates the effectiveness and reliability of the electromagnetic model at frequencies between 600 GHz and 2.5 THz.

  9. Noise source and reactor stability estimation in a boiling water reactor using a multivariate autoregressive model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kanemoto, S.; Andoh, Y.; Sandoz, S.A.

    1984-10-01

    A method for evaluating reactor stability in boiling water reactors has been developed. The method is based on multivariate autoregressive (M-AR) modeling of steady-state neutron and process noise signals. In this method, two kinds of power spectral densities (PSDs) for the measured neutron signal and the corresponding noise source signal are separately identified by the M-AR modeling. The closed- and open-loop stability parameters are evaluated from these PSDs. The method is applied to actual plant noise data that were measured together with artificial perturbation test data. Stability parameters identified from noise data are compared to those from perturbation test data, and it is shown that both results are in good agreement. In addition to these stability estimations, driving noise sources for the neutron signal are evaluated by the M-AR modeling. Contributions from void, core flow, and pressure noise sources are quantitatively evaluated, and the void noise source is shown to be the most dominant.
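
    The autoregressive identification step can be sketched in the scalar case (the paper's model is multivariate; the AR(2) coefficients and series length here are illustrative): the AR coefficients are obtained from the signal's autocovariances via the Yule-Walker equations.

```python
# Yule-Walker identification of a scalar AR(2) model -- a univariate sketch
# of the autoregressive fitting step (the paper's method is multivariate).
# The true coefficients and series length are illustrative.
import random

def acov(x, lag):
    n = len(x)
    m = sum(x) / n
    return sum((x[t] - m) * (x[t + lag] - m) for t in range(n - lag)) / n

def yule_walker_ar2(x):
    r0, r1, r2 = acov(x, 0), acov(x, 1), acov(x, 2)
    # Yule-Walker equations: [[r0, r1], [r1, r0]] @ [a1, a2] = [r1, r2]
    det = r0 * r0 - r1 * r1
    a1 = (r0 * r1 - r1 * r2) / det
    a2 = (r0 * r2 - r1 * r1) / det
    return a1, a2

random.seed(0)
a1_true, a2_true = 0.6, -0.3
x = [0.0, 0.0]
for _ in range(20000):
    x.append(a1_true * x[-1] + a2_true * x[-2] + random.gauss(0.0, 1.0))
a1_est, a2_est = yule_walker_ar2(x[2:])
```

    From the fitted coefficients the AR power spectral density follows as sigma^2 / |1 - a1*e^(-iw) - a2*e^(-2iw)|^2, which is the kind of PSD the stability parameters are read from.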

  10. The Minimum-Mass Surface Density of the Solar Nebula using the Disk Evolution Equation

    NASA Technical Reports Server (NTRS)

    Davis, Sanford S.

    2005-01-01

    The Hayashi minimum-mass power law representation of the pre-solar nebula (Hayashi 1981, Prog. Theor. Phys. 70, 35) is revisited using analytic solutions of the disk evolution equation. A new cumulative-planetary-mass model (an integrated form of the surface density) is shown to predict a smoother surface density compared with methods based on direct estimates of surface density from planetary data. First, a best-fit transcendental function is applied directly to the cumulative planetary mass data, with the surface density obtained by direct differentiation. Next, a solution to the time-dependent disk evolution equation is parametrically adapted to the planetary data. The latter model indicates a decay rate of r^(-1/2) in the inner disk followed by a rapid decay, which results in a sharper outer boundary than predicted by the minimum-mass model. The model is shown to be a good approximation to the finite-size early Solar Nebula and, by extension, to extrasolar protoplanetary disks.

  11. Predicting dense nonaqueous phase liquid dissolution using a simplified source depletion model parameterized with partitioning tracers

    NASA Astrophysics Data System (ADS)

    Basu, Nandita B.; Fure, Adrian D.; Jawitz, James W.

    2008-07-01

    Simulations of nonpartitioning and partitioning tracer tests were used to parameterize the equilibrium stream tube model (ESM) that predicts the dissolution dynamics of dense nonaqueous phase liquids (DNAPLs) as a function of the Lagrangian properties of DNAPL source zones. Lagrangian, or stream-tube-based, approaches characterize source zones with as few as two trajectory-integrated parameters, in contrast to the potentially thousands of parameters required to describe the point-by-point variability in permeability and DNAPL in traditional Eulerian modeling approaches. The spill and subsequent dissolution of DNAPLs were simulated in two-dimensional domains having different hydrologic characteristics (variance of the log conductivity field = 0.2, 1, and 3) using the multiphase flow and transport simulator UTCHEM. Nonpartitioning and partitioning tracers were used to characterize the Lagrangian properties (travel time and trajectory-integrated DNAPL content statistics) of DNAPL source zones, which were in turn shown to be sufficient for accurate prediction of source dissolution behavior using the ESM throughout the relatively broad range of hydraulic conductivity variances tested here. The results were found to be relatively insensitive to travel time variability, suggesting that dissolution could be accurately predicted even if the travel time variance was only coarsely estimated. Estimation of the ESM parameters was also demonstrated using an approximate technique based on Eulerian data in the absence of tracer data; however, determining the minimum amount of such data required remains for future work. Finally, the stream tube model was shown to be a more unique predictor of dissolution behavior than approaches based on the ganglia-to-pool model for source zone characterization.

  12. A seasonal Bartlett-Lewis Rectangular Pulse model

    NASA Astrophysics Data System (ADS)

    Ritschel, Christoph; Agbéko Kpogo-Nuwoklo, Komlan; Rust, Henning; Ulbrich, Uwe; Névir, Peter

    2016-04-01

    Precipitation time series with a high temporal resolution are needed as input for several hydrological applications, e.g. river runoff or sewer system models. As adequate observational data sets are often not available, simulated precipitation series come to use. Poisson-cluster models are commonly applied to generate these series. It has been shown that this class of stochastic precipitation models is able to reproduce important characteristics of observed rainfall well. For the gauge-based case study presented here, the Bartlett-Lewis rectangular pulse model (BLRPM) has been chosen. As certain model parameters have been shown to vary with season in a midlatitude moderate climate, due to different rainfall mechanisms dominating in winter and summer, model parameters are typically estimated separately for individual seasons or individual months. Here, we suggest a simultaneous parameter estimation for the whole year under the assumption that the seasonal variation of parameters can be described with harmonic functions. We use an observational precipitation series from Berlin with a high temporal resolution to exemplify the approach. We estimate BLRPM parameters with and without this seasonal extension and compare the results in terms of model performance and robustness of the estimation.
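
    A harmonic description of seasonal parameter variation can be sketched as a least-squares fit of a first-order annual harmonic; exploiting the orthogonality of sine and cosine over a complete year, the fit reduces to three projections. The synthetic daily "parameter" series below is an assumption for illustration.

```python
# Least-squares fit of a first-order harmonic seasonal cycle
#   p(d) = a0 + a1*cos(2*pi*d/365) + b1*sin(2*pi*d/365)
# to a daily parameter series, using the orthogonality of the harmonics over
# a full year. The synthetic series is illustrative.
import math

def fit_harmonic(y):                       # y: one value per day, len(y) = 365
    n = len(y)
    w = 2 * math.pi / n
    a0 = sum(y) / n
    a1 = 2 / n * sum(v * math.cos(w * d) for d, v in enumerate(y))
    b1 = 2 / n * sum(v * math.sin(w * d) for d, v in enumerate(y))
    return a0, a1, b1

days = range(365)
series = [1.2 + 0.4 * math.cos(2 * math.pi * d / 365)
          - 0.1 * math.sin(2 * math.pi * d / 365) for d in days]
a0, a1, b1 = fit_harmonic(series)
```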

  13. Global parameter optimization of a Mather-type plasma focus in the framework of the Gratton-Vargas two-dimensional snowplow model

    NASA Astrophysics Data System (ADS)

    Auluck, S. K. H.

    2014-12-01

    Dense plasma focus (DPF) is known to produce highly energetic ions, electrons and plasma environment which can be used for breeding short-lived isotopes, plasma nanotechnology and other material processing applications. Commercial utilization of DPF in such areas would need a design tool that can be deployed in an automatic search for the best possible device configuration for a given application. The recently revisited (Auluck 2013 Phys. Plasmas 20 112501) Gratton-Vargas (GV) two-dimensional analytical snowplow model of plasma focus provides a numerical formula for dynamic inductance of a Mather-type plasma focus fitted to thousands of automated computations, which enables the construction of such a design tool. This inductance formula is utilized in the present work to explore global optimization, based on first-principles optimality criteria, in a four-dimensional parameter-subspace of the zero-resistance GV model. The optimization process is shown to reproduce the empirically observed constancy of the drive parameter over eight decades in capacitor bank energy. The optimized geometry of plasma focus normalized to the anode radius is shown to be independent of voltage, while the optimized anode radius is shown to be related to capacitor bank inductance.

  14. Physics based performance model of a UV missile seeker

    NASA Astrophysics Data System (ADS)

    James, I.

    2017-10-01

    Electro-optically (EO) guided surface-to-air missiles (SAMs) have developed to use ultraviolet (UV) wavebands supplementary to the more common infrared (IR) wavebands. Missiles such as the US Stinger have been around for some time; these have recently been joined by the Chinese FN-16 and Russian SA-29 (Verba), and there is a much higher potential proliferation risk. The purpose of this paper is to introduce a first-principles, physics-based model of a typical seeker arrangement. The model is constructed from various calculations that aim to characterise the physical effects that affect the performance of the system. Data has been gathered from a number of sources to provide realism to the variables within the model. It will be demonstrated that many of the variables have the power to dramatically alter the performance of the system as a whole. Further, data will be shown to illustrate the expected detection range of a typical UV detector within a SAM against a variety of target sizes. The trend of detection range against aircraft size and skin reflectivity will be shown to be non-linear; this is to be expected owing to the exponential decay of a signal through the atmosphere. Future work will validate the model against real-world performance data for cameras (when this is available) to ensure that it operates within acceptable errors.
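
    The non-linear range trend follows from inverse-square spreading combined with exponential extinction; a hedged sketch of such a calculation, where the signal model and every constant are assumptions chosen only to illustrate the trend, not the paper's physics:

```python
# Detection range from an assumed signal model with inverse-square spreading
# and exponential atmospheric extinction:
#   S(R) = k * area * reflectivity * exp(-alpha * R) / R**2
# All constants are illustrative assumptions.
import math

def detection_range(area, reflectivity, k=1e9, alpha=0.3, threshold=1.0):
    def signal(R):
        return k * area * reflectivity * math.exp(-alpha * R) / R ** 2
    lo, hi = 1e-6, 1e6        # signal(lo) > threshold > signal(hi)
    for _ in range(200):      # bisect: signal is monotonically decreasing
        mid = 0.5 * (lo + hi)
        if signal(mid) > threshold:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

r_small = detection_range(area=1.0, reflectivity=0.05)
r_large = detection_range(area=2.0, reflectivity=0.05)
```

    Without extinction, doubling the target area would stretch the range by a factor of sqrt(2); with extinction the gain is much smaller, which is the non-linear trend noted above.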

  15. A multi-level strategy for anticipating future glacier lake formation and associated hazard potentials

    NASA Astrophysics Data System (ADS)

    Frey, H.; Haeberli, W.; Linsbauer, A.; Huggel, C.; Paul, F.

    2010-02-01

    In the course of glacier retreat, new glacier lakes can develop. As such lakes can be a source of natural hazards, strategies for predicting future glacier lake formation are important for an early planning of safety measures. In this article, a multi-level strategy for the identification of overdeepened parts of the glacier beds and, hence, sites with potential future lake formation, is presented. At the first two of the four levels of this strategy, glacier bed overdeepenings are estimated qualitatively and over large regions based on a digital elevation model (DEM) and digital glacier outlines. On level 3, more detailed and laborious models are applied for modeling the glacier bed topography over smaller regions; and on level 4, special situations must be investigated in-situ with detailed measurements such as geophysical soundings. The approaches of the strategy are validated using historical data from Trift Glacier, where a lake formed over the past decade. Scenarios of future glacier lakes are shown for the two test regions Aletsch and Bernina in the Swiss Alps. In the Bernina region, potential future lake outbursts are modeled, using a GIS-based hydrological flow routing model. As shown by a corresponding test, the ASTER GDEM and the SRTM DEM are both suitable to be used within the proposed strategy. Application of this strategy in other mountain regions of the world is therefore possible as well.

  16. A Context-Aware Model to Provide Positioning in Disaster Relief Scenarios

    PubMed Central

    Moreno, Daniel; Ochoa, Sergio F.; Meseguer, Roc

    2015-01-01

    The effectiveness of the work performed during disaster relief efforts is highly dependent on the coordination of activities conducted by the first responders deployed in the affected area. Such coordination, in turn, depends on an appropriate management of geo-referenced information. Therefore, enabling first responders to count on positioning capabilities during these activities is vital to increase the effectiveness of the response process. The positioning methods used in this scenario must assume a lack of infrastructure-based communication and electrical energy, which usually characterizes affected areas. Although positioning systems such as the Global Positioning System (GPS) have been shown to be useful, we cannot assume that all devices deployed in the area (or most of them) will have positioning capabilities by themselves. Typically, many first responders carry devices that are not capable of performing positioning on their own, but that require such a service. In order to help increase the positioning capability of first responders in disaster-affected areas, this paper presents a context-aware positioning model that allows mobile devices to estimate their position based on information gathered from their surroundings. The performance of the proposed model was evaluated using simulations, and the obtained results show that mobile devices without positioning capabilities were able to use the model to estimate their position. Moreover, the accuracy of the positioning model has been shown to be suitable for conducting most first response activities. PMID:26437406
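
    One simple way a device without positioning capabilities can estimate its position from its surroundings is an inverse-distance weighted centroid of peers that do know their positions; this is an illustrative stand-in for the paper's context-aware model, with made-up coordinates.

```python
# Inverse-distance weighted centroid: a device estimates its position from
# peers whose positions are known, weighting each peer by the inverse of the
# estimated distance to it (e.g. inferred from received signal strength).
# Peer layout and distances are illustrative.
import math

def weighted_centroid(peers):
    """peers: list of ((x, y), estimated_distance) pairs."""
    weights = [1.0 / max(d, 1e-9) for _, d in peers]
    wsum = sum(weights)
    x = sum(w * p[0] for (p, _), w in zip(peers, weights)) / wsum
    y = sum(w * p[1] for (p, _), w in zip(peers, weights)) / wsum
    return x, y

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
true_pos = (5.0, 5.0)
est = weighted_centroid([(a, math.dist(a, true_pos)) for a in anchors])
```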

  17. Evaluation of one dimensional analytical models for vegetation canopies

    NASA Technical Reports Server (NTRS)

    Goel, Narendra S.; Kuusk, Andres

    1992-01-01

    The SAIL model for one-dimensional homogeneous vegetation canopies has been modified to include the specular reflectance and hot spot effects. This modified model and the Nilson-Kuusk model are evaluated by comparing the reflectances given by them against those given by a radiosity-based computer model, Diana, for a set of canopies, characterized by different leaf area index (LAI) and leaf angle distribution (LAD). It is shown that for homogeneous canopies, the analytical models are generally quite accurate in the visible region, but not in the infrared region. For architecturally realistic heterogeneous canopies of the type found in nature, these models fall short. These shortcomings are quantified.

  18. Descriptive Linear modeling of steady-state visual evoked response

    NASA Technical Reports Server (NTRS)

    Levison, W. H.; Junker, A. M.; Kenner, K.

    1986-01-01

    A study is being conducted to explore use of the steady-state visual-evoked electrocortical response as an indicator of cognitive task loading. Application of linear descriptive modeling to steady-state Visual Evoked Response (VER) data is summarized. Two aspects of linear modeling are reviewed: (1) unwrapping the phase-shift portion of the frequency response, and (2) parsimonious characterization of task-loading effects in terms of changes in model parameters. Model-based phase unwrapping appears to be most reliable in applications, such as manual control, where theoretical models are available. Linear descriptive modeling of the VER has not yet been shown to provide consistent and readily interpretable results.
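
    Phase unwrapping removes the artificial 2*pi jumps from a wrapped frequency-response phase; a minimal sketch for a pure-delay system, where the delay and frequency grid are illustrative assumptions:

```python
# Removing the artificial 2*pi jumps from a wrapped frequency-response phase.
# The example uses a pure delay exp(-j*w*tau), whose true phase -w*tau is
# linear in frequency; tau and the frequency grid are illustrative.
import math

def unwrap(phases):
    out = [phases[0]]
    for p in phases[1:]:
        d = p - out[-1]
        d = (d + math.pi) % (2 * math.pi) - math.pi   # wrap step into (-pi, pi]
        out.append(out[-1] + d)
    return out

tau = 0.05
freqs = [2.0 * i for i in range(50)]                  # rad/s
true_phase = [-w * tau for w in freqs]
wrapped = [math.atan2(math.sin(p), math.cos(p)) for p in true_phase]
recovered = unwrap(wrapped)
```

    This elementwise scheme works when successive phase increments stay below pi; a model-based unwrapper, as noted above, is more reliable when a theoretical frequency response is available.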

  19. Verifying Multi-Agent Systems via Unbounded Model Checking

    NASA Technical Reports Server (NTRS)

    Kacprzak, M.; Lomuscio, A.; Lasica, T.; Penczek, W.; Szreter, M.

    2004-01-01

    We present an approach to the problem of verification of epistemic properties in multi-agent systems by means of symbolic model checking. In particular, it is shown how to extend the technique of unbounded model checking from a purely temporal setting to a temporal-epistemic one. In order to achieve this, we base our discussion on interpreted systems semantics, a popular semantics used in the multi-agent systems literature. We give details of the technique and show how it can be applied to the well-known train, gate and controller problem. Keywords: model checking, unbounded model checking, multi-agent systems

  20. Lattice hydrodynamic model based traffic control: A transportation cyber-physical system approach

    NASA Astrophysics Data System (ADS)

    Liu, Hui; Sun, Dihua; Liu, Weining

    2016-11-01

    The lattice hydrodynamic model is a typical continuum traffic flow model which properly describes the jamming transition of traffic flow. Previous studies of the lattice hydrodynamic model have shown that control methods have the potential to improve traffic conditions. In this paper, a new control method is applied to the lattice hydrodynamic model from a transportation cyber-physical system approach, in which only one lattice site needs to be controlled. The simulation verifies the feasibility and validity of this method, which can ensure efficient and smooth operation of the traffic flow.

  1. Nonlinearity measure and internal model control based linearization in anti-windup design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perev, Kamen

    2013-12-18

    This paper considers the problem of internal model control based linearization in anti-windup design. The nonlinearity measure concept is used for quantifying the control system's degree of nonlinearity. The linearizing effect of a modified internal model control structure is presented by comparing the nonlinearity measures of the open-loop and closed-loop systems. It is shown that the linearization properties are improved by increasing the control system's local feedback gain. However, it is emphasized that at the same time the stability of the system deteriorates. The conflicting goals of stability and linearization are resolved by solving the design problem in different frequency ranges.

  2. Estimation of clear-sky insolation using satellite and ground meteorological data

    NASA Technical Reports Server (NTRS)

    Staylor, W. F.; Darnell, W. L.; Gupta, S. K.

    1983-01-01

    Ground based pyranometer measurements were combined with meteorological data from the Tiros N satellite in order to estimate clear-sky insolations at five U.S. sites for five weeks during the spring of 1979. The estimates were used to develop a semi-empirical model of clear-sky insolation for the interpretation of input data from the Tiros Operational Vertical Sounder (TOVS). Using only satellite data, the estimated standard errors in the model were about 2 percent. The introduction of ground based data reduced errors to around 1 percent. It is shown that although the errors in the model were reduced by only 1 percent, TOVS data products are still adequate for estimating clear-sky insolation.

  3. Micromechanical modeling of damage growth in titanium based metal-matrix composites

    NASA Technical Reports Server (NTRS)

    Sherwood, James A.; Quimby, Howard M.

    1994-01-01

    The thermomechanical behavior of continuous-fiber reinforced titanium based metal-matrix composites (MMC) is studied using the finite element method. A thermoviscoplastic unified state variable constitutive theory is employed to capture inelastic and strain-rate sensitive behavior in the Timetal-21s matrix. The SCS-6 fibers are modeled as thermoelastic. The effects of residual stresses generated during the consolidation process on the tensile response of the composites are investigated. Unidirectional and cross-ply geometries are considered. Differences between the tensile responses of composites with perfectly bonded and completely debonded fiber/matrix interfaces are discussed. Model simulations for the completely debonded-interface condition are shown to correlate well with experimental results.

  4. Influence of government controls over the currency exchange rate in the evolution of Hurst's exponent: An autonomous agent-based model

    NASA Astrophysics Data System (ADS)

    Chávez Muñoz, Pablo; Fernandes da Silva, Marcus; Vivas Miranda, José; Claro, Francisco; Gomez Diniz, Raimundo

    2007-12-01

    We have studied the behavior of the Hurst exponent associated with the currency exchange rate in Brazil and Chile. It is shown that this index maps the degree of government control over the exchange rate. A model of supply and demand based on autonomous agents is proposed that simulates a virtual market of sales and purchases, where buyers or sellers are forced to negotiate through an intermediary. According to this model, the average price of daily transactions corresponds to the theoretical equilibrium predicted by the law of supply and demand. The influence of an added tendency factor is also analyzed.
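
    The Hurst exponent can be estimated in several ways; a simple aggregated-variance sketch (not necessarily the authors' estimator) uses the scaling of the variance of block means with block size.

```python
# Aggregated-variance estimator of the Hurst exponent: block means of a
# series with Hurst exponent H scale as var ~ m**(2H - 2) with block size m,
# so H follows from the slope of log(var) vs log(m). This is a generic
# estimator, not necessarily the one used in the paper.
import math
import random

def hurst_aggvar(x, block_sizes=(10, 20, 40, 80, 160)):
    logs_m, logs_v = [], []
    for m in block_sizes:
        nblocks = len(x) // m
        means = [sum(x[i*m:(i+1)*m]) / m for i in range(nblocks)]
        mu = sum(means) / nblocks
        var = sum((v - mu) ** 2 for v in means) / nblocks
        logs_m.append(math.log(m))
        logs_v.append(math.log(var))
    n = len(logs_m)
    mx, my = sum(logs_m) / n, sum(logs_v) / n
    slope = (sum((p - mx) * (q - my) for p, q in zip(logs_m, logs_v))
             / sum((p - mx) ** 2 for p in logs_m))
    return 1.0 + slope / 2.0          # slope = 2H - 2

random.seed(42)
white_noise = [random.gauss(0.0, 1.0) for _ in range(100000)]
H = hurst_aggvar(white_noise)         # uncorrelated increments: H near 0.5
```

    Uncorrelated (free-market-like) returns give H near 0.5; the paper relates persistent deviations from that value to the degree of government control.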

  5. Polarimetric SAR image classification based on discriminative dictionary learning model

    NASA Astrophysics Data System (ADS)

    Sang, Cheng Wei; Sun, Hong

    2018-03-01

    Polarimetric SAR (PolSAR) image classification is one of the important applications of PolSAR remote sensing. It is a difficult high-dimensional nonlinear mapping problem, and sparse representations based on learned overcomplete dictionaries have shown great potential for solving such problems. The overcomplete dictionary plays an important role in PolSAR image classification; however, in complex PolSAR scenes, features shared by different classes weaken the discrimination of the learned dictionary and degrade classification performance. In this paper, we propose a novel overcomplete dictionary learning model to enhance the discrimination of the dictionary. The overcomplete dictionary learned by the proposed model is more discriminative and well suited to PolSAR classification.

  6. Studies on tribology

    PubMed Central

    Hori, Yukio; Kato, Koji

    2008-01-01

    In high-speed rotating machines such as turbines and generators, vibrations of a rotating shaft often hinder the smooth operation of the machine or even cause failure. Oil whip is one such vibration, due to the oil film action of the journal bearing. Its mechanism is explained and a preventive method proposed in this paper. Further theoretical and experimental analyses are made considering heat generation and temperature rise in hydrodynamic lubrication. The usefulness of the lubrication theory based on the k–ε model is also shown for bearings with high eccentricity ratios. In the latter half of this paper, water lubrication, nitrogen gas lubrication and tribo-coated indium lubrication are shown as new promising methods; their mechanisms are discussed and the importance of the tribo-layer is explained. Some mechanisms of wear are introduced for a better understanding of the tribo-layer. In the last part of this paper, the mechanisms generating static friction are shown for the cases of plastic contact and elastic contact, which form the basis for understanding how macroscopic sliding is initiated. PMID:18941304

  7. Studies on tribology.

    PubMed

    Hori, Yukio; Kato, Koji

    2008-01-01

    In high-speed rotating machines such as turbines and generators, vibrations of a rotating shaft often hinder the smooth operation of the machine or even cause failure. Oil whip is one such vibration, due to the oil film action of the journal bearing. Its mechanism is explained and a preventive method proposed in this paper. Further theoretical and experimental analyses are made considering heat generation and temperature rise in hydrodynamic lubrication. The usefulness of the lubrication theory based on the k-epsilon model is also shown for bearings with high eccentricity ratios. In the latter half of this paper, water lubrication, nitrogen gas lubrication and tribo-coated indium lubrication are shown as new promising methods; their mechanisms are discussed and the importance of the tribo-layer is explained. Some mechanisms of wear are introduced for a better understanding of the tribo-layer. In the last part of this paper, the mechanisms generating static friction are shown for the cases of plastic contact and elastic contact, which form the basis for understanding how macroscopic sliding is initiated.

  8. Overview of physical models of liquid entrainment in annular gas-liquid flow

    NASA Astrophysics Data System (ADS)

    Cherdantsev, Andrey V.

    2018-03-01

    A number of recent papers devoted to the development of physically-based models for predicting liquid entrainment in the annular regime of two-phase flow are analyzed. In these models, shearing off the crests of disturbance waves by the gas drag force is assumed to be the physical mechanism of entrainment. The models rest on a number of assumptions about the wavy structure, including the inception of disturbance waves due to Kelvin-Helmholtz instability, a linear velocity profile inside the liquid film, and a high degree of three-dimensionality of the disturbance waves. The validity of these assumptions is analyzed by comparison with modern experimental observations. It is shown that nearly every assumption is in strong qualitative and quantitative disagreement with experiment, which leads to massive discrepancies between the modeled and real properties of the disturbance waves. As a result, such models over-predict the entrained fraction by several orders of magnitude. The discrepancy is usually reduced using various kinds of empirical corrections. This, combined with the empiricism already included in the models, turns them into another kind of empirical correlation rather than physically-based models.

  9. Modeling subjective evaluation of soundscape quality in urban open spaces: An artificial neural network approach.

    PubMed

    Yu, Lei; Kang, Jian

    2009-09-01

    This research aims to explore the feasibility of using computer-based models to predict the soundscape quality evaluation of potential users in urban open spaces at the design stage. With the data from large scale field surveys in 19 urban open spaces across Europe and China, the importance of various physical, behavioral, social, demographical, and psychological factors for the soundscape evaluation has been statistically analyzed. Artificial neural network (ANN) models have then been explored at three levels. It has been shown that for both subjective sound level and acoustic comfort evaluation, a general model for all the case study sites is less feasible due to the complex physical and social environments in urban open spaces; models based on individual case study sites perform well but the application range is limited; and specific models for certain types of location/function would be reliable and practical. The performance of acoustic comfort models is considerably better than that of sound level models. Based on the ANN models, soundscape quality maps can be produced and this has been demonstrated with an example.

  10. A level set approach for shock-induced α-γ phase transition of RDX

    NASA Astrophysics Data System (ADS)

    Josyula, Kartik; Rahul; De, Suvranu

    2018-02-01

    We present a thermodynamically consistent level set approach, based on a regularization energy functional, that can be directly incorporated into a Galerkin finite element framework to model interface motion. The regularization energy leads to a diffusive form of flux, embedded within the level set evolution equation, which maintains the signed distance property of the level set function. The scheme is shown to compare well with the velocity extension method in capturing the interface position. The proposed level set approach is employed to study the α-γ phase transformation in an RDX single crystal shocked along the (100) plane. Example problems in one and three dimensions are presented. We observe smooth evolution of the phase interface along the shock direction in both models, with no diffusion of the interface during the zero level set evolution in the three-dimensional model. The level set approach is shown to capture the characteristics of the shock-induced α-γ phase transformation, such as stress relaxation behind the phase interface and the finite time required for the phase transformation to complete. The regularization-energy-based level set approach is efficient, robust, and easy to implement.
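
    The essential mechanics, a zero level set advected by an interface velocity while a small diffusive regularization term helps preserve the signed distance property, can be illustrated in one dimension. The sketch below is a finite-difference toy, not the paper's Galerkin finite element scheme; the grid size, speed, and regularization coefficient are invented.

```python
# 1-D toy: advect a signed distance function phi at constant speed v and
# track its zero level set (the "interface"); a small diffusive term plays
# the role of the regularization flux that keeps phi near a signed distance.
N, L = 200, 1.0
dx = L / N
v, eps = 0.5, 1e-4
dt = 0.4 * dx / v                       # CFL-stable time step
x = [i * dx for i in range(N)]
phi = [xi - 0.25 for xi in x]           # interface initially at x = 0.25

t_end = 0.5                             # interface should end up at 0.25 + v*t_end
for _ in range(round(t_end / dt)):
    new = phi[:]
    for i in range(1, N - 1):
        adv = v * (phi[i] - phi[i - 1]) / dx                         # upwind (v > 0)
        diff = eps * (phi[i + 1] - 2 * phi[i] + phi[i - 1]) / dx**2  # regularization
        new[i] = phi[i] - dt * adv + dt * diff
    new[0] = 2 * new[1] - new[2]        # linear extrapolation at the boundaries
    new[-1] = 2 * new[-2] - new[-3]
    phi = new

# locate the zero crossing by linear interpolation
pos = next(x[i] - phi[i] * dx / (phi[i + 1] - phi[i])
           for i in range(N - 1) if phi[i] <= 0 < phi[i + 1])
```

    On this linear profile the interface translates cleanly to x = 0.5 and the slope of phi stays at one, i.e. the signed distance property is preserved.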

  11. Optimization-based image reconstruction in x-ray computed tomography by sparsity exploitation of local continuity and nonlocal spatial self-similarity

    NASA Astrophysics Data System (ADS)

    Han-Ming, Zhang; Lin-Yuan, Wang; Lei, Li; Bin, Yan; Ai-Long, Cai; Guo-En, Hu

    2016-07-01

    The additional sparse prior of images has been the subject of much research in sparse-view computed tomography (CT) reconstruction. Methods employing image gradient sparsity are often used to reduce the sampling rate and have been shown to remove unwanted artifacts while preserving sharp edges, but they may cause blocky or patchy artifacts. To eliminate this drawback, we propose a novel sparsity-exploitation-based model for CT image reconstruction. In the presented model, the sparse representation and sparsity exploitation of both the gradient and the nonlocal gradient are investigated. The new model offers the potential for better results by introducing similarity prior information about the image structure. An effective alternating direction minimization algorithm is then developed to optimize the objective function with a robust convergence result. Qualitative and quantitative evaluations have been carried out on both simulated and real data in terms of accuracy and resolution properties. The results indicate that the proposed method achieves better image quality with the theoretically expected preservation of detailed features. Project supported by the National Natural Science Foundation of China (Grant No. 61372172).
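
    The gradient-sparsity idea behind such reconstructions can be illustrated with a 1-D toy: minimizing a data-fidelity term plus a total-variation penalty by subgradient descent smooths noise in flat regions while keeping the sharp edge. This is a deliberately simplified stand-in for the paper's alternating direction minimization of gradient plus nonlocal-gradient sparsity; all values are invented.

```python
import random

random.seed(7)

def sign(v):
    return (v > 0) - (v < 0)

def tv_denoise(y, lam=0.2, lr=0.1, iters=300):
    """Minimise 0.5*||x - y||^2 + lam * sum |x[i+1] - x[i]| by subgradient descent."""
    x = list(y)
    n = len(y)
    for _ in range(iters):
        g = [x[i] - y[i] for i in range(n)]        # data-fidelity gradient
        for i in range(n - 1):
            s = sign(x[i + 1] - x[i])              # TV subgradient
            g[i] -= lam * s
            g[i + 1] += lam * s
        x = [x[i] - lr * g[i] for i in range(n)]
    return x

# noisy piecewise-constant signal: TV should flatten noise but keep the jump
truth = [0.0] * 32 + [1.0] * 32
noisy = [t + random.uniform(-0.2, 0.2) for t in truth]
den = tv_denoise(noisy)
err_noisy = sum(abs(a - b) for a, b in zip(noisy, truth)) / len(truth)
err_den = sum(abs(a - b) for a, b in zip(den, truth)) / len(truth)
```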

  12. The Application of Simulation Method in Isothermal Elastic Natural Gas Pipeline

    NASA Astrophysics Data System (ADS)

    Xing, Chunlei; Guan, Shiming; Zhao, Yue; Cao, Jinggang; Chu, Yanji

    2018-02-01

    An elastic pipeline mathematical model is of crucial importance in natural gas pipeline simulation because of its fidelity to practical industrial cases. The numerical model of an elastic pipeline, however, introduces nonlinear complexity into the discretized equations, so the Newton-Raphson method cannot achieve fast convergence for this kind of problem. A new Newton-based method with the Powell-Wolfe condition is therefore presented to simulate isothermal elastic pipeline flow. Results obtained by the new method are given for the defined boundary conditions. It is shown that the method converges in all cases and significantly reduces computational cost.
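
    The core idea, a Newton iteration whose step is accepted only when a sufficient-decrease condition holds, can be sketched for a scalar nonlinear residual. The sketch below uses simple backtracking with an Armijo-style test as a stand-in for the Powell-Wolfe conditions of the paper; the cubic residual is an invented example, not a pipeline equation.

```python
def damped_newton(f, df, x0, tol=1e-10, max_iter=50):
    """Newton's method with backtracking: each step is shortened until the
    residual magnitude decreases sufficiently, which globalises convergence."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        step = -fx / df(x)
        t = 1.0
        # accept the damped step only if |f| shrinks by a sufficient margin
        while abs(f(x + t * step)) > (1 - 1e-4 * t) * abs(fx) and t > 1e-8:
            t *= 0.5
        x += t * step
    return x

# invented scalar test problem: root of p^3 - 2p - 5 = 0
root = damped_newton(lambda p: p**3 - 2*p - 5, lambda p: 3*p**2 - 2, 3.0)
```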

  13. Brief Report: Effects of Video-Based Group Instruction on Spontaneous Social Interaction of Adolescents with Autism Spectrum Disorders.

    PubMed

    Plavnick, Joshua B; Dueñas, Ana D

    2018-06-01

    Four adolescents with autism spectrum disorder (ASD) were taught to interact with peers by asking social questions or commenting about others during game play or group activities. Participants were shown a video model and then given an opportunity to perform the social behavior depicted in the model when playing a game with one another. All participants demonstrated an increase in both social interaction skills, replicating previous research on video-based group instruction for adolescents with ASD. The results suggest the procedure may be useful for teaching social skills that occur under natural conditions.

  14. Modeling colorimetric characteristics of ON-OFF behavior of photochromic dyes based on bis-azospiropyrans

    NASA Astrophysics Data System (ADS)

    Kandi, Saeideh Gorji; Nourmohammadian, Farahnaz

    2013-10-01

    In the present study, the colorimetric characteristics of six photochromic dyes based on bis-azospiropyrans are investigated. The colorimetric axes of these dyes, in terms of hue, chroma, and lightness, are tracked as a function of UV exposure time. Results showed that the hue angle of these dyes changes rapidly during the first moments of UV irradiation and then remains almost constant, whereas chroma and lightness change continuously for about 80-240 s, depending on the dye. In addition, it is shown that the variations of lightness and chroma with UV exposure time follow an exponential trend, which can be modeled mathematically.
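
    The exponential trend reported for lightness and chroma can be illustrated by fitting a saturating-exponential model y(t) = a(1 − e^(−kt)) to exposure-time data via a simple grid search over the two parameters. The data and parameter values below are invented for illustration; the paper fits measured colorimetric data.

```python
import math

def model(t, a, k):
    # saturating exponential: value approaches a with rate constant k
    return a * (1.0 - math.exp(-k * t))

# synthetic chroma-change data over 0-240 s of UV exposure (invented values)
true_a, true_k = 30.0, 0.025
ts = list(range(0, 241, 10))
ys = [model(t, true_a, true_k) for t in ts]

# brute-force least squares over a coarse parameter grid
best_sse, a_hat, k_hat = float("inf"), None, None
for i in range(41):                      # a in [20, 40]
    a = 20.0 + 0.5 * i
    for j in range(46):                  # k in [0.005, 0.050]
        k = 0.005 + 0.001 * j
        sse = sum((model(t, a, k) - y) ** 2 for t, y in zip(ts, ys))
        if sse < best_sse:
            best_sse, a_hat, k_hat = sse, a, k
```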

  15. Default contagion risks in Russian interbank market

    NASA Astrophysics Data System (ADS)

    Leonidov, A. V.; Rumyantsev, E. L.

    2016-06-01

    Systemic risks of default contagion in the Russian interbank market are investigated. The analysis is based on the bow-tie structure of the weighted oriented graph describing the structure of interbank loans. A probabilistic model of interbank contagion is developed that explicitly takes into account the empirical bow-tie structure, which reflects the functionality of the corresponding nodes (borrowers, lenders, or both simultaneously), as well as the degree distributions and disassortativity of the interbank network, all derived from empirical data. The characteristics of contagion-related systemic risk calculated with this model are shown to agree with those of explicit stress tests.
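
    The contagion mechanism itself can be sketched with a minimal default cascade on a small directed graph of interbank exposures: a bank defaults once its losses on loans to already-defaulted counterparties reach its capital. The four-bank network and capital figures below are invented; the paper's model additionally uses the empirical bow-tie structure, degree distributions, and disassortativity.

```python
# exposures[lender][borrower] = amount lent (invented toy network)
exposures = {
    'A': {'B': 6, 'C': 2},
    'B': {'C': 5},
    'C': {},
    'D': {'A': 4},
}
capital = {'A': 5, 'B': 4, 'C': 1, 'D': 5}

def cascade(initial_defaults):
    """Iterate the default cascade until no further bank fails."""
    defaulted = set(initial_defaults)
    changed = True
    while changed:
        changed = False
        for bank, loans in exposures.items():
            if bank in defaulted:
                continue
            loss = sum(amt for borrower, amt in loans.items() if borrower in defaulted)
            if loss >= capital[bank]:      # losses wipe out capital -> default
                defaulted.add(bank)
                changed = True
    return defaulted

result = cascade({'C'})
```

    Here the failure of C topples B (loss 5 against capital 4) and then A (loss 8 against capital 5), while D survives its exposure to A.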

  16. Gummel Symmetry Test on charge based drain current expression using modified first-order hyperbolic velocity-field expression

    NASA Astrophysics Data System (ADS)

    Singh, Kirmender; Bhattacharyya, A. B.

    2017-03-01

    The Gummel Symmetry Test (GST) is a benchmark industry standard for MOSFET models and is considered one of the important tests by the modeling community. The BSIM4 MOSFET model fails the GST because its drain current equation is not symmetrical: the drain and source potentials are not referenced to the bulk. The BSIM6 MOSFET model overcomes this limitation by referencing all terminal biases to the bulk and using a proper velocity-saturation (v-E) model. The drain current equation in BSIM6 is charge based and continuous in all regions of operation; it adopts, however, a complicated method to compute the source and drain charges. In this work we propose to use the conventional charge-based method formulated by Enz to obtain a simpler analytical drain current expression that passes the GST. For this purpose we adopt two steps: (i) first, we use a modified first-order hyperbolic v-E model with adjustable coefficients that is integrable, simple, and accurate; (ii) second, we use a multiplying factor in the modified first-order hyperbolic v-E expression to obtain the correct monotonic asymptotic behavior around the origin of the lateral electric field. This factor is of empirical form and is a function of the drain voltage (vd) and source voltage (vs). After both steps we obtain a drain current expression whose accuracy is similar to that obtained from a second-order hyperbolic v-E model. If vd and vs in the modified first-order hyperbolic v-E expression are replaced by smoothing functions for the effective drain voltage (vdeff) and effective source voltage (vseff), the expression also handles the discontinuity between the linear and saturation regions of operation. The condition of symmetry is shown to be satisfied by the drain current and its higher-order derivatives, since both are odd functions and their even-order derivatives pass smoothly through the origin. In the strong inversion region at the 22 nm technology node the GST is shown to pass up to the sixth-order derivative, and in weak inversion up to the fifth-order derivative. The drain current expression takes into account major short-channel phenomena such as vertical-field mobility reduction, velocity saturation, and velocity overshoot.
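
    The symmetry requirement on the velocity-field model can be made concrete with a first-order hyperbolic v-E expression: written with |E| in the denominator it is an odd function of the lateral field, which is exactly the property the GST probes. The parameter values below are illustrative, and the paper's model further adds adjustable coefficients and an empirical multiplying factor that this sketch omits.

```python
def v_drift(E, mu0=400.0, Ec=1.5e4):
    """First-order hyperbolic velocity-field model (illustrative parameters:
    mu0 in cm^2/V/s, Ec in V/cm). Odd in E, so drain/source interchange
    flips its sign cleanly; it saturates at mu0*Ec for large |E|."""
    return mu0 * E / (1.0 + abs(E) / Ec)
```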

  17. Portfolio optimization by using linear programming models based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Sukono; Hidayat, Y.; Lesmana, E.; Putra, A. S.; Napitupulu, H.; Supian, S.

    2018-01-01

    In this paper, we discuss investment portfolio optimization using a linear programming model based on genetic algorithms. It is assumed that portfolio risk is measured by the absolute standard deviation and that each investor has a risk tolerance for the investment portfolio. The investment portfolio optimization problem is formulated as a linear programming model, and the optimum solution is then determined using a genetic algorithm. As a numerical illustration, we analyze some of the stocks traded on the capital market in Indonesia. The analysis shows that portfolio optimization performed with the genetic algorithm approach produces a more efficient portfolio than optimization performed with a linear programming algorithm approach. Genetic algorithms can therefore be considered an alternative for determining the optimal investment portfolio, particularly with linear programming models.
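
    The optimization loop can be sketched with a small genetic algorithm over long-only portfolio weights, using mean return penalized by mean absolute deviation as the fitness. The three-stock scenario returns and GA settings below are invented, and mean absolute deviation stands in for the paper's absolute standard deviation risk measure.

```python
import random

random.seed(1)

# scenario returns for three hypothetical stocks (rows = scenarios)
R = [[0.02, -0.01, 0.03],
     [0.01, 0.02, -0.02],
     [-0.01, 0.03, 0.01],
     [0.03, 0.00, 0.02]]
lam = 0.5   # weight of the risk term (reflects the investor's risk tolerance)

def normalise(w):
    w = [max(wi, 1e-9) for wi in w]       # long-only weights
    s = sum(w)
    return [wi / s for wi in w]           # enforce the budget constraint sum(w) = 1

def fitness(w):
    port = [sum(wi * ri for wi, ri in zip(w, row)) for row in R]
    mean = sum(port) / len(port)
    mad = sum(abs(p - mean) for p in port) / len(port)
    return mean - lam * mad               # return penalised by absolute deviation

pop = [normalise([random.random() for _ in range(3)]) for _ in range(30)]
for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                      # elitist selection
    children = []
    while len(children) < 20:
        a, b = random.sample(elite, 2)
        t = random.random()
        child = [t * x + (1 - t) * y for x, y in zip(a, b)]   # blend crossover
        if random.random() < 0.3:                             # mutation
            child[random.randrange(3)] += random.uniform(-0.1, 0.1)
        children.append(normalise(child))
    pop = elite + children
best = max(pop, key=fitness)
```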

  18. Unified computational model of transport in metal-insulating oxide-metal systems

    NASA Astrophysics Data System (ADS)

    Tierney, B. D.; Hjalmarson, H. P.; Jacobs-Gedrim, R. B.; Agarwal, Sapan; James, C. D.; Marinella, M. J.

    2018-04-01

    A unified physics-based model of electron transport in metal-insulator-metal (MIM) systems is presented. In this model, transport through metal-oxide interfaces occurs by electron tunneling between the metal electrodes and oxide defect states. Transport in the oxide bulk is dominated by hopping, modeled as a series of tunneling events that alter the electron occupancy of defect states. Electron transport in the oxide conduction band is treated by the drift-diffusion formalism and defect chemistry reactions link all the various transport mechanisms. It is shown that the current-limiting effect of the interface band offsets is a function of the defect vacancy concentration. These results provide insight into the underlying physical mechanisms of leakage currents in oxide-based capacitors and steady-state electron transport in resistive random access memory (ReRAM) MIM devices. Finally, an explanation of ReRAM bipolar switching behavior based on these results is proposed.

  19. Spatial generalised linear mixed models based on distances.

    PubMed

    Melo, Oscar O; Mateu, Jorge; Melo, Carlos E

    2016-10-01

    Risk models derived from environmental data have been widely shown to be effective in delineating geographical areas of risk because they are intuitively easy to understand. We present a new method based on distances, which allows the modelling of continuous and non-continuous random variables through distance-based spatial generalised linear mixed models. The parameters are estimated using Markov chain Monte Carlo maximum likelihood, which is a feasible and useful technique. The proposed method depends on a detrending step built from continuous or categorical explanatory variables, or a mixture among them, by using an appropriate Euclidean distance. The method is illustrated through the analysis of the variation in the prevalence of Loa loa among a sample of village residents in Cameroon, where the explanatory variables included elevation, together with the maximum normalised-difference vegetation index and the standard deviation of the normalised-difference vegetation index calculated from repeated satellite scans over time. © The Author(s) 2013.

  20. Modeling of copper sorption onto GFH and design of full-scale GFH adsorbers.

    PubMed

    Steiner, Michele; Pronk, Wouter; Boller, Markus A

    2006-03-01

    During rain events, copper washed off copper roofs poses an environmental hazard. In this study, columns filled with granulated ferric hydroxide (GFH) were used to treat copper-containing roof runoff, and it was shown that copper could be removed to a high extent. A model was developed to describe this removal process, based on the Two Region Model (TRM) extended with an additional diffusion zone. The extended model was able to describe the copper removal in long-term experiments (up to 125 days) with variable flow rates reflecting realistic runoff events. The four parameters of the model were estimated from data obtained in dedicated column experiments designed for maximum sensitivity to each parameter. After model validation, the parameter set was used for the design of full-scale adsorbers, which show high removal rates over extended periods of time.

  1. Use of DNS Data for the Evaluation of Closure Models for Rotating Turbulent Channel Flow

    NASA Astrophysics Data System (ADS)

    Hsieh, Alan; Biringen, Sedat; Kucala, Alec

    2013-11-01

    A direct numerical simulation (DNS) of a turbulent channel flow rotating about the spanwise axis was conducted at a Reynolds number of 8000 (based on the centerline velocity and channel half height), a Prandtl number of 0.71, and a Rossby number of 26. Several Reynolds-Averaged Navier-Stokes (RANS) turbulence models for rotating flows were analyzed and tested. The closure approximations in the pressure-strain correlation term proposed by the Speziale, Sarkar, and Gatski (SSG) RSM model were shown to be more accurate than the Girimaji EARSM model. The Reynolds stresses, primarily the shear stresses, produced by the Girimaji model were compared to the DNS data and revealed an evident discontinuity in the modeled Reynolds stress profiles; consequently, a smoothing function was generated and applied as a correction, yielding significantly better agreement between the Reynolds shear stress profiles from the DNS data and the modified Girimaji model.

  2. Model-Based Battery Management Systems: From Theory to Practice

    NASA Astrophysics Data System (ADS)

    Pathak, Manan

    Lithium-ion batteries are now extensively used as a primary storage source. Capacity and power fade and slow recharging times are key issues that restrict their use in many applications, and battery management systems are critical to addressing these issues while ensuring safety. This dissertation explores various control strategies for lithium-ion batteries using detailed physics-based electrochemical models developed previously, which could be used in advanced battery management systems. Optimal charging profiles for minimizing capacity fade based on SEI-layer formation are derived, and the benefits of such control strategies are shown by testing them experimentally on a 16 Ah NMC-based pouch cell. The dissertation also explores different time-discretization strategies for nonlinear models, which give an improved order of convergence for optimal control problems. Lastly, it presents a physics-based model for predicting the linear impedance of a battery, implemented in a freeware code that is extremely robust and computationally fast. Such a code could be used for estimating transport, kinetic, and material properties of the battery from the linear impedance spectra.

  3. A rigorous multiple independent binding site model for determining cell-based equilibrium dissociation constants.

    PubMed

    Drake, Andrew W; Klakamp, Scott L

    2007-01-10

    A new 4-parameter nonlinear equation based on the standard multiple independent binding site model (MIBS) is presented for fitting cell-based ligand titration data in order to calculate the ligand/cell receptor equilibrium dissociation constant and the number of receptors/cell. The most commonly used linear (Scatchard Plot) or nonlinear 2-parameter model (a single binding site model found in commercial programs like Prism(R)) used for analysis of ligand/receptor binding data assumes only the K(D) influences the shape of the titration curve. We demonstrate using simulated data sets that, depending upon the cell surface receptor expression level, the number of cells titrated, and the magnitude of the K(D) being measured, this assumption of always being under K(D)-controlled conditions can be erroneous and can lead to unreliable estimates for the binding parameters. We also compare and contrast the fitting of simulated data sets to the commonly used cell-based binding equation versus our more rigorous 4-parameter nonlinear MIBS model. It is shown through these simulations that the new 4-parameter MIBS model, when used for cell-based titrations under optimal conditions, yields highly accurate estimates of all binding parameters and hence should be the preferred model to fit cell-based experimental nonlinear titration data.
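
    The distinction the authors draw between K(D)-controlled and receptor-controlled conditions comes from ligand depletion, which the exact quadratic solution of the binding equilibrium captures and the common single-site hyperbola does not. A minimal sketch of that algebra (generic equilibrium chemistry, not the authors' 4-parameter fitting equation):

```python
import math

def bound_complex(L_tot, R_tot, Kd):
    """Exact bound-complex concentration for a single class of independent
    sites, accounting for ligand depletion (all quantities in the same
    concentration units)."""
    b = R_tot + L_tot + Kd
    return (b - math.sqrt(b * b - 4.0 * R_tot * L_tot)) / 2.0

def hyperbola(L_tot, R_tot, Kd):
    """Single-site hyperbola, valid only when ligand depletion is negligible."""
    return R_tot * L_tot / (Kd + L_tot)
```

    With scarce receptors (R_tot much smaller than Kd) the two expressions agree; with abundant receptors the hyperbola can predict more bound ligand than exists, while the quadratic form never does.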

  4. Enabling full-field physics-based optical proximity correction via dynamic model generation

    NASA Astrophysics Data System (ADS)

    Lam, Michael; Clifford, Chris; Raghunathan, Ananthan; Fenger, Germain; Adam, Kostas

    2017-07-01

    As extreme ultraviolet lithography comes closer to reality for high-volume production, its peculiar modeling challenges related to both inter- and intrafield effects have necessitated building an optical proximity correction (OPC) infrastructure that operates with field-position dependency. Previous state-of-the-art approaches to modeling field dependency used piecewise constant models, in which static input models are assigned to specific x/y-positions within the field, and OPC and simulation assign the proper static model based on simulation-level placement. However, at 7 and 5 nm feature sizes, small discontinuities in OPC from piecewise constant model changes can cause unacceptable levels of edge placement error. The introduction of dynamic model generation (DMG) is shown to effectively avoid these dislocations by providing unique mask and optical models per simulation region, allowing a near continuum of models through the field. DMG allows unique models for the electromagnetic field, apodization, aberrations, etc. to vary through the entire field and provides a capability to precisely and accurately model systematic field signatures.

  5. Neural Network Based Modeling and Analysis of LP Control Surface Allocation

    NASA Technical Reports Server (NTRS)

    Langari, Reza; Krishnakumar, Kalmanje; Gundy-Burlet, Karen

    2003-01-01

    This paper presents an approach to interpretive modeling of LP based control allocation in intelligent flight control. The emphasis is placed on a nonlinear interpretation of the LP allocation process as a static map to support analytical study of the resulting closed loop system, albeit in approximate form. The approach makes use of a bi-layer neural network to capture the essential functioning of the LP allocation process. It is further shown via Lyapunov based analysis that under certain relatively mild conditions the resulting closed loop system is stable. Some preliminary conclusions from a study at Ames are stated and directions for further research are given at the conclusion of the paper.

  6. Modeling languages for biochemical network simulation: reaction vs equation based approaches.

    PubMed

    Wiechert, Wolfgang; Noack, Stephan; Elsheikh, Atya

    2010-01-01

    Biochemical network modeling and simulation is an essential task in any systems biology project. The systems biology markup language (SBML) was established as a standardized model exchange language for mechanistic models. A specific strength of SBML is that numerous tools for formulating, processing, simulation and analysis of models are freely available. Interestingly, in the field of multidisciplinary simulation, the problem of model exchange between different simulation tools occurred much earlier. Several general modeling languages like Modelica have been developed in the 1990s. Modelica enables an equation based modular specification of arbitrary hierarchical differential algebraic equation models. Moreover, libraries for special application domains can be rapidly developed. This contribution compares the reaction based approach of SBML with the equation based approach of Modelica and explains the specific strengths of both tools. Several biological examples illustrating essential SBML and Modelica concepts are given. The chosen criteria for tool comparison are flexibility for constraint specification, different modeling flavors, hierarchical, modular and multidisciplinary modeling. Additionally, support for spatially distributed systems, event handling and network analysis features is discussed. As a major result it is shown that the choice of the modeling tool has a strong impact on the expressivity of the specified models but also strongly depends on the requirements of the application context.

  7. Unique Practice, Unique Place: Exploring Two Assertive Community Treatment Teams in Maine.

    PubMed

    Schroeder, Rebecca A

    2018-06-01

    Assertive Community Treatment (ACT) is a model of care that provides comprehensive community-based psychiatric care for persons with serious mental illness. The model has been widely documented and has been shown to be an evidence-based model of care for reducing hospitalizations in this targeted population. Critical ingredients of the ACT model are the holistic nature of its services, a team-based approach to treatment, and nurses who assist with illness management, medication monitoring, and provider collaboration. Although the model remains strong, there are clear differences between urban and rural teams. This article describes present-day practice in two disparate ACT programs in urban and rural Maine. It offers a new perspective on an evolving and innovative program of services for those with serious mental illness, along with a review of literature pertinent to the ACT model and recommendations for future nursing practice. The success and longevity of these two ACT programs are a testament to the quality of care and the commitment of the staff who work with seriously mentally ill consumers. Integrative care models such as these community-based treatment teams, together with nursing-driven interventions, are prime elements of this successful model.

  8. Incorporating video modeling into a school-based intervention for students with autism spectrum disorders.

    PubMed

    Wilson, Kaitlyn P

    2013-01-01

    Video modeling is an intervention strategy that has been shown to be effective in improving the social and communication skills of students with autism spectrum disorders, or ASDs. The purpose of this tutorial is to outline empirically supported, step-by-step instructions for the use of video modeling by school-based speech-language pathologists (SLPs) serving students with ASDs. This tutorial draws from the many reviews and meta-analyses of the video modeling literature that have been conducted over the past decade, presenting empirically supported considerations for school-based SLPs who are planning to incorporate video modeling into their service delivery for students with ASD. The 5 overarching procedural phases presented in this tutorial are (a) preparation, (b) recording of the video model, (c) implementation of the video modeling intervention, (d) monitoring of the student's response to the intervention, and (e) planning of the next steps. Video modeling is not only a promising intervention strategy for students with ASD, but it is also a practical and efficient tool that is well-suited to the school setting. This tutorial will facilitate school-based SLPs' incorporation of this empirically supported intervention into their existing strategies for intervention for students with ASD.

  9. Damage evaluation by a guided wave-hidden Markov model based method

    NASA Astrophysics Data System (ADS)

    Mei, Hanfei; Yuan, Shenfang; Qiu, Lei; Zhang, Jinjin

    2016-02-01

    Guided wave based structural health monitoring has shown great potential in aerospace applications. However, one of the key challenges for practical engineering applications is the accurate interpretation of guided wave signals under time-varying environmental and operational conditions. This paper presents a guided wave-hidden Markov model (HMM) based method to improve the damage evaluation reliability of real aircraft structures under time-varying conditions. In the proposed approach, an HMM-based unweighted moving average trend estimation method, which can capture the trend of damage propagation from the posterior probability obtained by HMM modeling, is used to achieve a probabilistic evaluation of the structural damage. To validate the developed method, experiments are performed on a hole-edge crack specimen under fatigue loading and on a real aircraft wing spar under changing structural boundary conditions. Experimental results show the advantage of the proposed method.
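
    The probabilistic core, a posterior damage probability from an HMM followed by an unweighted moving average to estimate the trend, can be sketched with a two-state model and Gaussian emissions. Everything below (the damage-index sequence, emission parameters, transition probabilities) is invented for illustration, and a filtered rather than smoothed posterior is used for brevity.

```python
import math

def gauss(x, mu, sig):
    return math.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * math.sqrt(2 * math.pi))

def damage_posterior(obs, mu=(0.2, 0.8), sig=0.25, p_stay=0.95):
    """Filtered posterior P(damaged | observations so far) for a 2-state HMM
    with Gaussian emissions; damage is assumed irreversible."""
    p = [0.99, 0.01]                       # initial belief: almost surely healthy
    post = []
    for y in obs:
        pred = [p[0] * p_stay, p[0] * (1 - p_stay) + p[1]]   # transition step
        up = [pred[0] * gauss(y, mu[0], sig), pred[1] * gauss(y, mu[1], sig)]
        s = up[0] + up[1]
        p = [up[0] / s, up[1] / s]         # measurement update (normalised)
        post.append(p[1])
    return post

def moving_average(xs, w=3):
    # unweighted moving average over a trailing window of width w
    return [sum(xs[max(0, i - w + 1):i + 1]) / (i - max(0, i - w + 1) + 1)
            for i in range(len(xs))]

# invented damage-index sequence drifting from the healthy to the damaged regime
obs = [0.18, 0.22, 0.25, 0.30, 0.45, 0.60, 0.75, 0.80, 0.82, 0.85]
trend = moving_average(damage_posterior(obs))
```

    The averaged posterior rises smoothly from near zero toward one as the damage index drifts upward, which is the kind of propagation trend the method extracts.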

  10. Cross-Beam Energy Transfer Driven by Incoherent Laser Beams with Frequency Detuning

    NASA Astrophysics Data System (ADS)

    Maximov, A.; Myatt, J. F.; Short, R. W.; Igumenshchev, I. V.; Seka, W.

    2015-11-01

    In the direct-drive approach to inertial confinement fusion (ICF), the coupling of laser energy to target plasmas is strongly influenced by cross-beam energy transfer (CBET) between multiple driving laser beams. The laser-plasma interaction (LPI) model of CBET is based on nonparaxial laser light propagation coupled with the low-frequency, ion-acoustic-domain plasma response. Common ion waves driven by multiple laser beams play a very important role in CBET. The effect of frequency detuning (colors) in the driving laser beams is studied and is shown to significantly reduce the level of common ion waves and therefore the level of CBET. The differences between the LPI-based CBET model and the ray-based CBET model used in hydrocodes are discussed. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.

  11. Teaching genetics using hands-on models, problem solving, and inquiry-based methods

    NASA Astrophysics Data System (ADS)

    Hoppe, Stephanie Ann

    Teaching genetics can be challenging because of the difficulty of the content and the misconceptions students might hold. This thesis focused on using hands-on model activities, problem solving, and inquiry-based teaching/learning methods to increase student understanding of genetics in an introductory biology class. Various activities using these three methods were implemented in the classes to address misconceptions and increase student learning of the difficult concepts. The activities were shown to be successful based on a comparison of pre- and post-assessment scores. The students were assessed on inheritance patterns, meiosis, and protein synthesis and demonstrated growth in all of these areas. It was found that hands-on models, problem solving, and inquiry-based activities were more successful at teaching genetics concepts, and engaged students more, than traditional lecture styles.

  12. LMI-based stability analysis of fuzzy-model-based control systems using approximated polynomial membership functions.

    PubMed

    Narimani, Mohammand; Lam, H K; Dilmaghani, R; Wolfe, Charles

    2011-06-01

    Relaxed linear-matrix-inequality-based stability conditions for fuzzy-model-based control systems with imperfect premise matching are proposed. First, the derivative of the Lyapunov function, containing the product terms of the fuzzy model and fuzzy controller membership functions, is derived. Then, in the partitioned operating domain of the membership functions, the relations between the state variables and the mentioned product terms are represented by approximated polynomials in each subregion. Next, the stability conditions containing the information of all subsystems and the approximated polynomials are derived. In addition, the concept of the S-procedure is utilized to release the conservativeness caused by considering the whole operating region for approximated polynomials. It is shown that the well-known stability conditions can be special cases of the proposed stability conditions. Simulation examples are given to illustrate the validity of the proposed approach.

  13. Analytical Modelling of a Refractive Index Sensor Based on an Intrinsic Micro Fabry-Perot Interferometer

    PubMed Central

    Vargas-Rodriguez, Everardo; Guzman-Chavez, Ana D.; Cano-Contreras, Martin; Gallegos-Arellano, Eloisa; Jauregui-Vazquez, Daniel; Hernández-García, Juan C.; Estudillo-Ayala, Julian M.; Rojas-Laguna, Roberto

    2015-01-01

    In this work, a refractive index sensor based on a combination of the non-dispersive sensing (NDS) and Tunable Laser Spectroscopy (TLS) principles is presented. In order to have one reference and one measurement channel, a single-beam dual-path configuration is used to implement the NDS principle. These channels are monitored with a pair of identical optical detectors, which are correlated to calculate the overall sensor response, called here the depth of modulation; this is shown to be useful for minimizing drift errors due to source power variations. Furthermore, a comprehensive analysis of a refractive index sensing setup based on an intrinsic micro Fabry-Perot interferometer (FPI) is described, in which the changes in the FPI pattern as the exit refractive index is varied are analytically modelled using the characteristic matrix method. The simulated results are supported by experimental measurements, which are also provided. Finally, it is shown that with this principle a simple refractive index sensor with a resolution on the order of 2.15 × 10−4 RIU can be implemented using a pair of standard, low-cost photodetectors. PMID:26501277
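
    The characteristic matrix calculation referred to in the abstract can be sketched for the simplest case: a single lossless film at normal incidence, whose reflectance shifts as the exit refractive index changes, which is the sensing principle. The parameter values below are illustrative (a quarter-wave, silica-like cavity), not taken from the paper.

```python
import cmath, math

def film_reflectance(n_film, d, wavelength, n_in=1.0, n_exit=1.33):
    """Normal-incidence reflectance of a single dielectric film via the 2x2
    characteristic matrix (lossless media, illustrative parameter values)."""
    delta = 2.0 * math.pi * n_film * d / wavelength   # film phase thickness
    m11 = cmath.cos(delta)
    m12 = 1j * cmath.sin(delta) / n_film
    m21 = 1j * n_film * cmath.sin(delta)
    m22 = cmath.cos(delta)
    B = m11 + m12 * n_exit                # [B, C]^T = M [1, n_exit]^T
    C = m21 + m22 * n_exit
    r = (n_in * B - C) / (n_in * B + C)   # amplitude reflection coefficient
    return abs(r) ** 2

lam0 = 1.55e-6                            # wavelength in m
n_f = 1.45                                # silica-like cavity index
d = lam0 / (4.0 * n_f)                    # quarter-wave cavity for a clean check
R_water = film_reflectance(n_f, d, lam0, n_exit=1.33)
R_shift = film_reflectance(n_f, d, lam0, n_exit=1.34)
```

    For a quarter-wave film the matrix result reduces to the textbook closed form R = ((n_exit − n_f²)/(n_exit + n_f²))², and a 0.01 change in exit index measurably shifts the reflectance, which is what the sensor reads out.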

  14. Rapid perfusion quantification using Welch-Satterthwaite approximation and analytical spectral filtering

    NASA Astrophysics Data System (ADS)

    Krishnan, Karthik; Reddy, Kasireddy V.; Ajani, Bhavya; Yalavarthy, Phaneendra K.

    2017-02-01

    CT and MR perfusion weighted imaging (PWI) enable quantification of perfusion parameters in stroke studies. These parameters are calculated from the residual impulse response function (IRF) based on a physiological model for tissue perfusion. The standard approach for estimating the IRF is deconvolution using oscillatory-limited singular value decomposition (oSVD) or Frequency Domain Deconvolution (FDD). FDD is widely recognized as the fastest approach currently available for deconvolution of CT Perfusion/MR PWI. In this work, three faster methods are proposed. The first is a direct (model based) crude approximation to the final perfusion quantities (Blood flow, Blood volume, Mean Transit Time and Delay) using the Welch-Satterthwaite approximation for gamma fitted concentration time curves (CTC). The second method is a fast accurate deconvolution method, we call Analytical Fourier Filtering (AFF). The third is another fast accurate deconvolution technique using Showalter's method, we call Analytical Showalter's Spectral Filtering (ASSF). Through systematic evaluation on phantom and clinical data, the proposed methods are shown to be computationally more than twice as fast as FDD. The two deconvolution based methods, AFF and ASSF, are also shown to be quantitatively accurate compared to FDD and oSVD.
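
    The spectral-filtering idea shared by FDD and the proposed analytical filters can be sketched with a naive DFT and a Tikhonov-style filter: the tissue concentration curve is the circular convolution of an arterial input with the residual impulse response, and dividing in the frequency domain with a small regularizer recovers the IRF. The curves below are invented, and this generic filter is only a stand-in for the paper's AFF and ASSF methods.

```python
import cmath

def dft(x, inverse=False):
    """Naive O(n^2) discrete Fourier transform (adequate for a small demo)."""
    n = len(x)
    sgn = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(sgn * 2j * cmath.pi * m * k / n) for k in range(n))
           for m in range(n)]
    return [v / n for v in out] if inverse else out

def circular_convolve(h, x):
    n = len(x)
    return [sum(h[k] * x[(i - k) % n] for k in range(n)) for i in range(n)]

def deconvolve(y, h, lam=1e-4):
    """Spectral-filtered (Tikhonov-style) Fourier deconvolution: X = Y H* / (|H|^2 + lam)."""
    Y, H = dft(y), dft(h)
    X = [Yj * Hj.conjugate() / (abs(Hj) ** 2 + lam) for Yj, Hj in zip(Y, H)]
    return [v.real for v in dft(X, inverse=True)]

n = 16
irf_true = [0.0, 1.0, 0.8, 0.5, 0.2] + [0.0] * (n - 5)   # invented residue function
aif = [0.5, 0.3, 0.15, 0.05] + [0.0] * (n - 4)           # invented arterial input
ctc = circular_convolve(aif, irf_true)                   # simulated tissue curve
irf_est = deconvolve(ctc, aif)
```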

  15. Analytical modelling of a refractive index sensor based on an intrinsic micro Fabry-Perot interferometer.

    PubMed

    Vargas-Rodriguez, Everardo; Guzman-Chavez, Ana D; Cano-Contreras, Martin; Gallegos-Arellano, Eloisa; Jauregui-Vazquez, Daniel; Hernández-García, Juan C; Estudillo-Ayala, Julian M; Rojas-Laguna, Roberto

    2015-10-15

    In this work a refractive index sensor based on a combination of the non-dispersive sensing (NDS) and the tunable laser spectroscopy (TLS) principles is presented. Here, in order to have one reference and one measurement channel, a single-beam dual-path configuration is used for implementing the NDS principle. These channels are monitored with a pair of identical optical detectors, which are correlated to calculate the overall sensor response, called here the depth of modulation. It is shown that this is useful for minimizing drift errors due to source power variations. Furthermore, a comprehensive analysis of a refractive index sensing setup based on an intrinsic micro Fabry-Perot interferometer (FPI) is described. Here, the changes in the FPI pattern as the exit refractive index is varied are analytically modelled using the characteristic matrix method. Additionally, our simulated results are supported by experimental measurements, which are also provided. Finally, it is shown that by using this principle a simple refractive index sensor with a resolution in the order of 2.15 × 10−4 RIU can be implemented using a pair of standard, low-cost photodetectors.
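
    The characteristic (transfer) matrix method used above can be illustrated for a single-layer cavity at normal incidence: a 2×2 matrix propagates the field across the cavity, and the reflectance follows from the boundary indices. The indices and cavity length below are placeholder values, not the sensor's actual parameters.

```python
import numpy as np

def fpi_reflectance(wavelength, n_fiber=1.45, n_cavity=1.0, d=50e-6, n_exit=1.33):
    """Reflectance of a single-layer Fabry-Perot cavity via the 2x2
    characteristic matrix method, normal incidence. All parameter
    defaults are illustrative, not the paper's sensor values."""
    delta = 2 * np.pi * n_cavity * d / wavelength      # phase thickness
    M = np.array([[np.cos(delta), 1j * np.sin(delta) / n_cavity],
                  [1j * n_cavity * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_exit])                 # boundary fields
    r = (n_fiber * B - C) / (n_fiber * B + C)          # amplitude reflectance
    return np.abs(r) ** 2
```

    Sweeping the wavelength traces the fringe pattern; changing `n_exit` (the measured refractive index) shifts the fringes, which is the sensing mechanism described in the abstract.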

  16. The immune system in space, including Earth-based benefits of space-based research.

    PubMed

    Sonnenfeld, Gerald

    2005-08-01

    Exposure to space flight conditions has been shown to alter immune responses. Immune responses of humans and experimental animals, as well as of cultured lymphoid cells, have been shown to change during and after space flight. Exposure of subjects to ground-based models of space flight conditions, such as hindlimb unloading of rodents or chronic bed rest of humans, has also resulted in changes in the immune system. The relationship of these changes to compromised resistance to infection or tumors in space flight has not been fully established, but results from model systems suggest that alterations in the immune system that occur under space flight conditions may be related to decreases in resistance to infection. Establishing such a relationship could lead to the development of countermeasures that prevent or ameliorate any compromise in resistance to infection resulting from exposure to space flight conditions. An understanding of the mechanisms by which space flight conditions affect the immune response, and the development of countermeasures to prevent these effects, could also contribute to treatments for compromised immunity on Earth.

  17. 30 CFR 250.302 - Definitions concerning air quality.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... and 250.304 of this part: Air pollutant means any combination of agents for which the Environmental... shown by monitored data or which is calculated by air quality modeling (or other methods determined by... standards established by EPA. Best available control technology (BACT) means an emission limitation based on...

  18. Computer simulation for integrated pest management of spruce budworms

    Treesearch

    Carroll B. Williams; Patrick J. Shea

    1982-01-01

    Some field studies of the effects of various insecticides on the spruce budworm (Choristoneura sp.) and their parasites have shown severe suppression of host (budworm) populations and increased parasitism after treatment. Computer simulation using hypothetical models of spruce budworm-parasite systems based on these field data revealed that (1)...

  19. Uncertainty plus Prior Equals Rational Bias: An Intuitive Bayesian Probability Weighting Function

    ERIC Educational Resources Information Center

    Fennell, John; Baddeley, Roland

    2012-01-01

    Empirical research has shown that when making choices based on probabilistic options, people behave as if they overestimate small probabilities, underestimate large probabilities, and treat positive and negative outcomes differently. These distortions have been modeled using a nonlinear probability weighting function, which is found in several…
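
    The inverse-S-shaped distortion described here (small probabilities overweighted, large ones underweighted) is commonly parameterized with one-parameter weighting functions. The sketch below uses Prelec's form as a stand-in; it is not the Bayesian weighting function the paper itself derives.

```python
import math

def prelec_weight(p, alpha=0.65):
    """Prelec probability weighting function w(p) = exp(-(-ln p)^alpha).
    For alpha < 1 this gives the classic inverse-S shape: small p are
    overweighted, large p underweighted, with a fixed point at p = 1/e."""
    if p <= 0.0:
        return 0.0
    if p >= 1.0:
        return 1.0
    return math.exp(-(-math.log(p)) ** alpha)
```

    For example, with alpha = 0.65 a 1% probability is weighted as if it were several times larger, while a 90% probability is weighted below its true value.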

  20. Parasitic nematode-induced modulation of body weight and associated metabolic dysfunction in mouse models of obesity

    USDA-ARS?s Scientific Manuscript database

    Obesity is associated with a chronic low grade inflammation characterized by high level of pro-inflammatory cytokines and mediators implicated in disrupted metabolic homeostasis. Parasitic nematode infection induces a polarized Th2 cytokine response and has been shown to modulate immune-based pathol...

  1. Teacher Burnout and Perceived Job Security (Dynamics and Implications).

    ERIC Educational Resources Information Center

    Smith, Roy L.; McCarthy, Marilyn Bartlett

    Research has shown that: (1) Physiological and psychological aspects of stress and burnout are equated with emotional exhaustion and (2) Individual responses to relationships and the working environment are based, to a large extent, upon the individual's expectations. A model was developed that accounts for individual perceptions of reasonable…

  2. A Model-Based Approach to Inventory Stratification

    Treesearch

    Ronald E. McRoberts

    2006-01-01

    Forest inventory programs report estimates of forest variables for areas of interest ranging in size from municipalities to counties to States and Provinces. Classified satellite imagery has been shown to be an effective source of ancillary data that, when used with stratified estimation techniques, contributes to increased precision with little corresponding increase...

  3. A Probabilistic Corpus-Based Model of Syntactic Parallelism

    ERIC Educational Resources Information Center

    Dubey, Amit; Keller, Frank; Sturt, Patrick

    2008-01-01

    Work in experimental psycholinguistics has shown that the processing of coordinate structures is facilitated when the two conjuncts share the same syntactic structure [Frazier, L., Munn, A., & Clifton, C. (2000). "Processing coordinate structures." "Journal of Psycholinguistic Research," 29(4) 343-370]. In the present paper, we argue that this…

  4. Estimating error cross-correlations in soil moisture data sets using extended collocation analysis

    USDA-ARS?s Scientific Manuscript database

    Consistent global soil moisture records are essential for studying the role of hydrologic processes within the larger earth system. Various studies have shown the benefit of assimilating satellite-based soil moisture data into water balance models or merging multi-source soil moisture retrievals int...

  5. Replication of Child-Parent Psychotherapy in Community Settings: Models for Training

    ERIC Educational Resources Information Center

    Van Horn, Patricia; Osofsky, Joy D.; Henderson, Dorothy; Korfmacher, Jon; Thomas, Kandace; Lieberman, Alicia F.

    2012-01-01

    Child-parent psychotherapy (CPP), an evidence-based dyadic therapeutic intervention for very young children exposed to trauma, is becoming the go-to therapeutic intervention for infant mental health practitioners. Although CPP has been shown to be effective for rebuilding the parent-child relationship, reducing trauma symptoms, and reducing…

  6. Higher Education and Employment Markets in France.

    ERIC Educational Resources Information Center

    Mingat, Alain; Eicher, J. C.

    1982-01-01

    The link between educational investment and individual earnings is discussed based on a 1974 study in Dijon. An investigation of the relationship between these two elements is shown to need to consider the choice model used by the individual student entering higher education. Academic and social background influence the student's choice.…

  7. The effect of family processes on school achievement as moderated by socioeconomic context.

    PubMed

    Oxford, Monica L; Lee, Jungeun Olivia

    2011-10-01

    This longitudinal study examined a model of early school achievement in reading and math, as it varies by socioeconomic context, using data from the NICHD Study of Early Child Care and Youth Development. A conceptual model was tested that included features of family stress, early parenting, and school readiness, through both a single-group and a multiple-group analysis. Latent profile analysis was used to identify subgroups of more advantaged and less advantaged families. Family stress and parenting were shown to operate differently depending on the socioeconomic context, whereas child-based school readiness characteristics were shown to operate similarly across sociodemographic contexts. Implications for intervention are discussed. Copyright © 2011 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.

  8. A geometrical optics polarimetric bidirectional reflectance distribution function for dielectric and metallic surfaces.

    PubMed

    Hyde, M W; Schmidt, J D; Havrilla, M J

    2009-11-23

    A polarimetric bidirectional reflectance distribution function (pBRDF), based on geometrical optics, is presented. The pBRDF incorporates a visibility (shadowing/masking) function and a Lambertian (diffuse) component, which distinguishes it from other geometrical optics pBRDFs in the literature. It is shown that these additions keep the pBRDF bounded (and thus a more realistic physical model) as the angle of incidence or observation approaches grazing, and make it better able to model the behavior of light scattered from rough, reflective surfaces. In this paper, the theoretical development of the pBRDF is shown and discussed. Simulation results for a rough, perfectly reflecting surface obtained using an exact electromagnetic solution, and experimental Mueller matrix results for two rough metallic samples, are presented to validate the pBRDF.

  9. 3D/2D model-to-image registration by imitation learning for cardiac procedures.

    PubMed

    Toth, Daniel; Miao, Shun; Kurzendorfer, Tanja; Rinaldi, Christopher A; Liao, Rui; Mansi, Tommaso; Rhode, Kawal; Mountney, Peter

    2018-05-12

    In cardiac interventions, such as cardiac resynchronization therapy (CRT), image guidance can be enhanced by involving preoperative models. Multimodality 3D/2D registration for image guidance, however, remains a significant research challenge for fundamentally different image data, i.e., MR to X-ray. Registration methods must account for differences in intensity, contrast levels, resolution, dimensionality, and field of view. Furthermore, the same anatomical structures may not be visible in both modalities. Current approaches have focused on developing modality-specific solutions for individual clinical use cases, by introducing constraints or identifying cross-modality information manually. Machine learning approaches have the potential to create more general registration platforms. However, training image-to-image methods would require large multimodal datasets and ground truth for each target application. This paper proposes a model-to-image registration approach instead, because it is common in image-guided interventions to create anatomical models for diagnosis, planning or guidance prior to procedures. An imitation learning-based method, trained on 702 datasets, is used to register preoperative models to intraoperative X-ray images. Accuracy is demonstrated on cardiac models and artificial X-rays generated from CTs. The registration error was [Formula: see text] on 1000 test cases, superior to that of manual ([Formula: see text]) and gradient-based ([Formula: see text]) registration. High robustness is shown in 19 clinical CRT cases. Besides demonstrating the proposed method's feasibility in a clinical environment, the evaluation has shown good accuracy and high robustness, indicating that it could be applied in image-guided interventions.

  10. Models and Frameworks: A Synergistic Association for Developing Component-Based Applications

    PubMed Central

    Sánchez-Ledesma, Francisco; Sánchez, Pedro; Pastor, Juan A.; Álvarez, Bárbara

    2014-01-01

    The use of frameworks and components has been shown to be effective in improving software productivity and quality. However, the results in terms of reuse and standardization show a dearth of portability either of designs or of component-based implementations. This paper, which is based on the model driven software development paradigm, presents an approach that separates the description of component-based applications from their possible implementations for different platforms. This separation is supported by automatic integration of the code obtained from the input models into frameworks implemented using object-oriented technology. Thus, the approach combines the benefits of modeling applications from a higher level of abstraction than objects, with the higher levels of code reuse provided by frameworks. In order to illustrate the benefits of the proposed approach, two representative case studies that use both an existing framework and an ad hoc framework, are described. Finally, our approach is compared with other alternatives in terms of the cost of software development. PMID:25147858

  11. Models and frameworks: a synergistic association for developing component-based applications.

    PubMed

    Alonso, Diego; Sánchez-Ledesma, Francisco; Sánchez, Pedro; Pastor, Juan A; Álvarez, Bárbara

    2014-01-01

    The use of frameworks and components has been shown to be effective in improving software productivity and quality. However, the results in terms of reuse and standardization show a dearth of portability either of designs or of component-based implementations. This paper, which is based on the model driven software development paradigm, presents an approach that separates the description of component-based applications from their possible implementations for different platforms. This separation is supported by automatic integration of the code obtained from the input models into frameworks implemented using object-oriented technology. Thus, the approach combines the benefits of modeling applications from a higher level of abstraction than objects, with the higher levels of code reuse provided by frameworks. In order to illustrate the benefits of the proposed approach, two representative case studies that use both an existing framework and an ad hoc framework, are described. Finally, our approach is compared with other alternatives in terms of the cost of software development.

  12. Evaluating the robustness of conceptual rainfall-runoff models under climate variability in northern Tunisia

    NASA Astrophysics Data System (ADS)

    Dakhlaoui, H.; Ruelland, D.; Tramblay, Y.; Bargaoui, Z.

    2017-07-01

    To evaluate the impact of climate change on water resources at the catchment scale, not only future climate projections are necessary but also robust rainfall-runoff models that remain fairly reliable under changing climate conditions. The aim of this study was thus to assess the robustness of three conceptual rainfall-runoff models (GR4J, HBV and IHACRES) on five basins in northern Tunisia under long-term climate variability, in the light of available future climate scenarios for this region. The robustness of the models was evaluated using a differential split-sample test based on a climate classification of the observation period that simultaneously accounted for precipitation and temperature conditions. The study catchments include the main hydrographical basins in northern Tunisia, which produce most of the surface water resources in the country. A 30-year period (1970-2000) was used to capture a wide range of hydro-climatic conditions. Calibration was based on the Kling-Gupta efficiency (KGE) criterion, while model transferability was evaluated based on the Nash-Sutcliffe efficiency criterion and volume error. The three hydrological models were shown to behave similarly under climate variability. The models simulated the runoff pattern better when transferred to wetter and colder conditions than to drier and warmer ones. Their robustness became unacceptable when climate conditions involved a decrease of more than 25% in annual precipitation and an increase of more than 1.75 °C in annual mean temperature. The reduction in model robustness may be partly due to the climate dependence of some parameters. When compared to precipitation and temperature projections for the region, the limits of transferability obtained in this study are generally respected in the short and medium term. For long-term projections under the most pessimistic emission scenarios, however, the limits of transferability are generally not respected, which may hamper the use of conceptual models for hydrological projections in northern Tunisia.
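
    The two evaluation criteria named above have standard closed forms; a minimal sketch with the usual equal weighting of the three KGE components (correlation, variability ratio, bias ratio):

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, < 0 worse than the mean."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(sim, obs):
    """Kling-Gupta efficiency: combines correlation r, variability ratio
    alpha and bias ratio beta; 1 is a perfect fit."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)
```

    In a split-sample test such as the one above, these scores are computed on the validation period after calibrating on a climatically different period.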

  13. Support of surgical process modeling by using adaptable software user interfaces

    NASA Astrophysics Data System (ADS)

    Neumuth, T.; Kaschek, B.; Czygan, M.; Goldstein, D.; Strauß, G.; Meixensberger, J.; Burgert, O.

    2010-03-01

    Surgical Process Modeling (SPM) is a powerful method for acquiring data about the evolution of surgical procedures. Surgical process models are used in a variety of use cases, including evaluation studies, requirements analysis and procedure optimization, surgical education, and workflow management scheme design. This work proposes the use of adaptive, situation-aware user interfaces in observation support software for SPM. We developed a method to support the observer's modeling task by using an ontological knowledge base, which drives the graphical user interface so that the terminology search space is restricted depending on the current situation. The evaluation study shows that the observer's workload was decreased significantly by using adaptive user interfaces. 54 SPM observation protocols were analyzed using the NASA Task Load Index, and it was shown that the adaptive user interface significantly disburdens the observer in the workload criteria of effort, mental demand and temporal demand, helping the observer to concentrate on the essential task of modeling the surgical process.

  14. Thermal barrier coating life prediction model development, phase 1

    NASA Technical Reports Server (NTRS)

    Demasi, Jeanine T.; Ortiz, Milton

    1989-01-01

    The objective of this program was to establish a methodology to predict thermal barrier coating (TBC) life on gas turbine engine components. The approach involved experimental life measurement coupled with analytical modeling of the relevant degradation modes. Evaluation of experimental and flight service components indicates the predominant failure mode to be thermomechanical spallation of the ceramic coating layer resulting from propagation of a dominant near-interface crack. Examination of fractionally exposed specimens indicated that dominant crack formation results from progressive structural damage in the form of subcritical microcrack link-up. Tests conducted to isolate important life drivers have shown MCrAlY oxidation to significantly affect the rate of damage accumulation. Mechanical property testing has shown the plasma-deposited ceramic to exhibit a nonlinear stress-strain response, creep, and fatigue. The fatigue-based life prediction model developed accounts for this unusual ceramic behavior and also incorporates an experimentally determined oxide growth rate model. The model predicts that the growth of this oxide scale influences the intensity of the mechanical driving force resulting from cyclic strains and stresses caused by thermally induced and externally imposed mechanical loads.

  15. Water immersion and its computer simulation as analogs of weightlessness

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1982-01-01

    Experimental studies and computer simulations of water immersion are summarized and discussed with regard to their utility as analogs of weightlessness. Emphasis is placed on describing and interpreting the renal, endocrine, fluid, and circulatory changes that take place during immersion. A mathematical model, based on concepts of fluid volume regulation, is shown to be well suited to simulate the dynamic responses to water immersion. Further, it is shown that such a model provides a means to study specific mechanisms and pathways involved in the immersion response. A number of hypotheses are evaluated with the model related to the effects of dehydration, venous pressure disturbances, the control of ADH, and changes in plasma-interstitial volume. By inference, it is suggested that most of the model's responses to water immersion are plausible predictions of the acute changes expected, but not yet measured, during space flight. One important prediction of the model is that previous attempts to measure a diuresis during space flight failed because astronauts may have been dehydrated and urine samples were pooled over 24-hour periods.

  16. A nonlinear control method based on ANFIS and multiple models for a class of SISO nonlinear systems and its application.

    PubMed

    Zhang, Yajun; Chai, Tianyou; Wang, Hong

    2011-11-01

    This paper presents a novel nonlinear control strategy for a class of uncertain single-input and single-output discrete-time nonlinear systems with unstable zero-dynamics. The proposed method combines an adaptive-network-based fuzzy inference system (ANFIS) with multiple models, where a linear robust controller, an ANFIS-based nonlinear controller and a switching mechanism are integrated using the multiple models technique. It has been shown that the linear controller can ensure the boundedness of the input and output signals and that the nonlinear controller can improve the dynamic performance of the closed-loop system. Moreover, it has also been shown that the use of the switching mechanism can simultaneously guarantee closed-loop stability and improve performance. As a result, the controller has three outstanding features compared with existing control strategies. First, this method relaxes the commonly used assumption of uniform boundedness on the unmodeled dynamics and thus enhances its applicability. Second, since ANFIS is used to estimate and compensate for the effect caused by the unmodeled dynamics, the convergence rate of neural network learning is increased. Third, a "one-to-one mapping" technique is adopted to guarantee the universal approximation property of ANFIS. The proposed controller is applied to a numerical example and to a pulverizing process of an alumina sintering system, where its effectiveness has been justified.

  17. Modeling Fetal Weight for Gestational Age: A Comparison of a Flexible Multi-level Spline-based Model with Other Approaches

    PubMed Central

    Villandré, Luc; Hutcheon, Jennifer A; Perez Trejo, Maria Esther; Abenhaim, Haim; Jacobsen, Geir; Platt, Robert W

    2011-01-01

    We present a model for longitudinal measures of fetal weight as a function of gestational age. We use a linear mixed model, with a Box-Cox transformation of fetal weight values, and restricted cubic splines, in order to flexibly but parsimoniously model median fetal weight. We systematically compare our model to other proposed approaches. All proposed methods are shown to yield similar median estimates, as evidenced by overlapping pointwise confidence bands, except after 40 completed weeks, where our method seems to produce estimates more consistent with observed data. Sex-based stratification affects the estimates of the random effects variance-covariance structure, without significantly changing sex-specific fitted median values. We illustrate the benefits of including sex-gestational age interaction terms in the model over stratification. The comparison leads to the conclusion that the selection of a model for fetal weight for gestational age can be based on the specific goals and configuration of a given study without affecting the precision or value of median estimates for most gestational ages of interest. PMID:21931571
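
    The Box-Cox transformation applied to the fetal weight values has a simple closed form; a sketch with its inverse, used to map spline predictions back to the weight scale (the λ values here are illustrative, whereas the paper estimates λ from the data):

```python
import numpy as np

def box_cox(y, lam):
    """Box-Cox transform: (y^lam - 1)/lam for lam != 0, log(y) for lam == 0.
    Applied to positive weights so residuals are closer to Gaussian."""
    y = np.asarray(y, float)
    if lam == 0.0:
        return np.log(y)
    return (y ** lam - 1.0) / lam

def inv_box_cox(z, lam):
    """Back-transform model predictions to the original weight scale."""
    z = np.asarray(z, float)
    if lam == 0.0:
        return np.exp(z)
    return (lam * z + 1.0) ** (1.0 / lam)
```

    In a model like the one above, the mixed-effects spline is fitted on the transformed scale and median curves are back-transformed for reporting.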

  18. Potential formulation of sleep dynamics

    NASA Astrophysics Data System (ADS)

    Phillips, A. J. K.; Robinson, P. A.

    2009-02-01

    A physiologically based model of the mechanisms that control the human sleep-wake cycle is formulated in terms of an equivalent nonconservative mechanical potential. The potential is analytically simplified and reduced to a quartic two-well potential, matching the bifurcation structure of the original model. This yields a dynamics-based model that is analytically simpler and has fewer parameters than the original model, allowing easier fitting to experimental data. This model is first demonstrated to semiquantitatively match the dynamics of the physiologically based model from which it is derived, and is then fitted directly to a set of experimentally derived criteria. These criteria place rigorous constraints on the parameter values, and within these constraints the model is shown to reproduce normal sleep-wake dynamics and recovery from sleep deprivation. Furthermore, this approach enables insights into the dynamics by direct analogies to phenomena in well studied mechanical systems. These include the relation between friction in the mechanical system and the timecourse of neurotransmitter action, and the possible relation between stochastic resonance and napping behavior. The model derived here also serves as a platform for future investigations of sleep-wake phenomena from a dynamical perspective.
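
    The reduced model class described above can be sketched as overdamped motion in a quartic two-well potential V(x) = x⁴/4 − x²/2 − D(t)x, with the two wells standing in for wake and sleep and D(t) a 24 h sinusoidal drive. The drive amplitude and timescales below are illustrative assumptions, not the parameter values fitted in the paper.

```python
import math

def simulate_two_well(hours=48.0, dt=0.01, drive_amp=0.8):
    """Euler integration of dx/dt = -dV/dx for the driven quartic
    two-well potential. When the drive exceeds the saddle-node value
    2/(3*sqrt(3)) the current well vanishes and the state switches,
    giving one wake-sleep transition per half cycle."""
    x, t, states = 1.0, 0.0, []            # start in the "wake" well
    while t < hours:
        drive = drive_amp * math.sin(2 * math.pi * t / 24.0)
        x += -(x ** 3 - x - drive) * dt    # dx/dt = -dV/dx
        t += dt
        states.append(x)
    return states
```

    The bifurcation structure, rather than the sinusoidal forcing itself, is what this reduction shares with the physiologically based model it approximates.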

  19. Efficient operation scheduling for adsorption chillers using predictive optimization-based control methods

    NASA Astrophysics Data System (ADS)

    Bürger, Adrian; Sawant, Parantapa; Bohlayer, Markus; Altmann-Dieses, Angelika; Braun, Marco; Diehl, Moritz

    2017-10-01

    In this work, the benefits of using predictive control methods for the operation of Adsorption Cooling Machines (ACMs) are shown in a simulation study. Since the internal control decisions of series-manufactured ACMs often cannot be influenced, the work focuses on optimized scheduling of an ACM considering its internal functioning as well as forecasts of load and driving energy occurrence. For illustration, an assumed solar thermal climate system is introduced and a system model suitable for use within gradient-based optimization methods is developed. The results of a system simulation using a conventional scheme for ACM scheduling are compared to those of a predictive, optimization-based scheduling approach for the same exemplary scenario of load and driving energy occurrence. The benefits of the latter approach are shown, and future steps toward applying these methods to system control are addressed.

  20. Robustness of delayed multistable systems with application to droop-controlled inverter-based microgrids

    NASA Astrophysics Data System (ADS)

    Efimov, Denis; Schiffer, Johannes; Ortega, Romeo

    2016-05-01

    Motivated by the problem of phase-locking in droop-controlled inverter-based microgrids with delays, the recently developed theory of input-to-state stability (ISS) for multistable systems is extended to the case of multistable systems with delayed dynamics. Sufficient conditions for ISS of delayed systems are presented using Lyapunov-Razumikhin functions. It is shown that ISS multistable systems are robust with respect to delays in a feedback. The derived theory is applied to two examples. First, the ISS property is established for the model of a nonlinear pendulum and delay-dependent robustness conditions are derived. Second, it is shown that, under certain assumptions, the problem of phase-locking analysis in droop-controlled inverter-based microgrids with delays can be reduced to the stability investigation of the nonlinear pendulum. For this case, corresponding delay-dependent conditions for asymptotic phase-locking are given.

  1. Structures in general relativity

    NASA Astrophysics Data System (ADS)

    Tieu, Steven

    Structures within general relativity are examined. The differences between man-made structures and those predicted by the Einstein differential equations are very subtle. Exotic structures such as the Godel universe and the Gott cosmic string are examined with emphasis on closed time-like curves. Newtonian models are seen to have an exotic aspect as well, in that a vast halo of unknown matter dominates the galaxy. We introduce a model for galaxies based on a general relativity framework with the goal of excluding such artifacts from the system while describing the flat rotation curves. Structures within this model were speculated to be exotic, but after close scrutiny their nature is shown to be benign. Numerical approaches are applied to model four galaxies: the Milky Way, NGC 3031, NGC 3198 and NGC 7331. Density and mass are deduced from these models and compared to the Newtonian models. Within the visible/HI region, there is a 30% reduction in total mass. Extending the model to 10 times beyond the HI region using various fall-off scenarios, it is shown that there is only a modest increase in the accumulated mass. In comparison to the Newtonian approach to galactic dynamics, massive halos are not required to account for the flat velocity profiles.

  2. Validation Assessment of a Glass-to-Metal Seal Finite-Element Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jamison, Ryan Dale; Buchheit, Thomas E.; Emery, John M

    Sealing glasses are ubiquitous in high pressure and temperature engineering applications, such as hermetic feed-through electrical connectors. A common connector technology is the glass-to-metal seal, in which a metal shell compresses a sealing glass to create a hermetic seal. Though finite-element analysis has been used to understand and design glass-to-metal seals for many years, there has been little validation of these models. An indentation technique was employed to measure the residual stress on the surface of a simple glass-to-metal seal. Recently developed rate-dependent material models of both Schott 8061 and 304L VAR stainless steel have been applied to a finite-element model of the simple glass-to-metal seal. Model predictions of residual stress based on the evolution of the material models are shown and compared to measured data. Validity of the finite-element predictions is discussed. It is shown that the finite-element model of the glass-to-metal seal accurately predicts the mean residual stress in the glass near the glass-to-metal interface and is valid for this quantity of interest.

  3. Dependability and performability analysis

    NASA Technical Reports Server (NTRS)

    Trivedi, Kishor S.; Ciardo, Gianfranco; Malhotra, Manish; Sahner, Robin A.

    1993-01-01

    Several practical issues regarding specifications and solution of dependability and performability models are discussed. Model types with and without rewards are compared. Continuous-time Markov chains (CTMC's) are compared with (continuous-time) Markov reward models (MRM's) and generalized stochastic Petri nets (GSPN's) are compared with stochastic reward nets (SRN's). It is shown that reward-based models could lead to more concise model specifications and solution of a variety of new measures. With respect to the solution of dependability and performability models, three practical issues were identified: largeness, stiffness, and non-exponentiality, and a variety of approaches are discussed to deal with them, including some of the latest research efforts.
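
    The difference between a plain CTMC measure and a Markov reward measure can be made concrete with the simplest case, the steady-state expected reward rate. A minimal sketch; the two-state up/down generator and the rates in the usage note are illustrative, not taken from the paper:

```python
import numpy as np

def steady_state_reward(Q, rewards):
    """Steady-state expected reward rate of a CTMC with generator Q:
    solve pi Q = 0 with sum(pi) = 1, then return pi . rewards.
    With rewards = (1, 0) on a two-state up/down model this reduces
    to steady-state availability."""
    Q = np.asarray(Q, float)
    n = Q.shape[0]
    # replace one balance equation with the normalization constraint
    A = np.vstack([Q.T[:-1], np.ones(n)])
    b = np.zeros(n)
    b[-1] = 1.0
    pi = np.linalg.solve(A, b)
    return pi @ np.asarray(rewards, float)
```

    For example, with failure rate 0.01/h and repair rate 1/h, the expected reward with rewards (1, 0) is the classical availability 1/1.01.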

  4. An accurate nonlinear stochastic model for MEMS-based inertial sensor error with wavelet networks

    NASA Astrophysics Data System (ADS)

    El-Diasty, Mohammed; El-Rabbany, Ahmed; Pagiatakis, Spiros

    2007-12-01

    The integration of the Global Positioning System (GPS) with an Inertial Navigation System (INS) has been widely used in many applications for positioning and orientation purposes. Traditionally, random walk (RW), Gauss-Markov (GM), and autoregressive (AR) processes have been used to develop the stochastic model in classical Kalman filters. The main disadvantage of the classical Kalman filter is the potentially unstable linearization of the nonlinear dynamic system; consequently, a nonlinear stochastic model is not optimal in derivative-based filters due to the expected linearization error. With a derivativeless filter such as the unscented Kalman filter or the divided difference filter, the filtering process of a complicated, highly nonlinear dynamic system is possible without linearization error. This paper develops a novel nonlinear stochastic model for inertial sensor error using a wavelet network (WN). A wavelet network is a highly nonlinear model which has recently been introduced as a powerful tool for modelling and prediction. Static and kinematic data sets are collected using a MEMS-based IMU (DQI-100) to develop the stochastic model in the static mode and then implement it in the kinematic mode. The derivativeless filtering method using GM, AR, and the proposed WN-based processes is used to validate the new model. It is shown that the first-order WN-based nonlinear stochastic model gives superior positioning results to the first-order GM and AR models, with an overall improvement of 30% when 30 s and 60 s GPS outages are introduced.
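
    For context, the first-order Gauss-Markov process that serves as the classical baseline here can be simulated in a few lines using its standard discretization; this is a generic textbook construction, not code from the paper, and dt, tau, and sigma below are hypothetical values:

```python
import math, random

def gauss_markov(n, dt, tau, sigma, seed=0):
    """Simulate a first-order Gauss-Markov process:
    x[k+1] = beta * x[k] + w[k], with beta = exp(-dt/tau) and
    w[k] ~ N(0, sigma^2 * (1 - beta^2)), giving stationary
    standard deviation sigma and correlation time tau."""
    rng = random.Random(seed)
    beta = math.exp(-dt / tau)
    q = sigma * math.sqrt(1.0 - beta * beta)  # driving-noise std dev
    x, out = 0.0, []
    for _ in range(n):
        x = beta * x + rng.gauss(0.0, q)
        out.append(x)
    return out

# Hypothetical values: 100 Hz sampling, 1 s correlation time.
samples = gauss_markov(n=50_000, dt=0.01, tau=1.0, sigma=0.05)
```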

  5. Beta-decay half-lives for short neutron rich nuclei involved into the r-process

    NASA Astrophysics Data System (ADS)

    Panov, I.; Lutostansky, Yu; Thielemann, F.-K.

    2018-01-01

    The beta-strength function model based on the theory of finite Fermi systems is applied to calculations of the beta-decay half-lives of short-lived neutron-rich nuclei involved in the r-process. It is shown that the accuracy of the calculated beta-decay half-lives of short-lived neutron-rich nuclei improves with increasing neutron excess, so the model can be used for modeling the nucleosynthesis of heavy nuclei in the r-process.

  6. Micromechanism Based Modeling of Structural Life in Metal Matrix Composites

    DTIC Science & Technology

    1997-03-23

    ... dilatational eigenstrain; and the embrittlement of material at the metal-oxide interface. In addition, the influence of various heat ... of two factors: the development of a surface layer consisting primarily of stoichiometric TiO2, which induces a dilatational eigenstrain; and the ... as the dilatational eigenstrain in order to capture the life reduction mechanism. As shown in Fig. 5, for the case of monotonic loading, the model ...

  7. TeV-photon paradox and space with SU(2) fuzziness

    NASA Astrophysics Data System (ADS)

    Shariati, A.; Khorrami, M.; Fatollahi, A. H.

    2008-02-01

    The possibility is examined that a model based on space noncommutativity of the linear type can explain why photons from distant sources with multi-TeV energies can reach the Earth. In particular, within a model in which the space coordinates satisfy the algebra of the SU(2) Lie group, it is shown that pair production through the interaction of energetic photons with CMB photons may be kinematically forbidden.

  8. Fatigue crack propagation behavior of stainless steel welds

    NASA Astrophysics Data System (ADS)

    Kusko, Chad S.

    The fatigue crack propagation behavior of austenitic and duplex stainless steel base and weld metals has been investigated using various fatigue crack growth test procedures, ferrite measurement techniques, light optical microscopy, stereomicroscopy, scanning electron microscopy, and optical profilometry. The compliance offset method was incorporated to measure crack closure during testing in order to determine a stress ratio at which such closure is overcome. Based on this method, an empirically determined stress ratio of 0.60 has been shown to be very successful in overcoming crack closure at all da/dN for gas metal arc and laser welds. This empirically determined stress ratio of 0.60 has been applied to testing of stainless steel base metal and weld metal to understand the influence of microstructure. Regarding the base metal investigation, for 316L and AL6XN base metals, grain size and grain-plus-twin size have been shown to influence the resulting crack growth behavior. The cyclic plastic zone size model has been applied to accurately model crack growth behavior for austenitic stainless steels when the average grain-plus-twin size is considered. Additionally, the effect of the tortuous crack paths observed for the larger grain size base metals can be explained by a literature model for crack deflection. Constant Delta K testing has been used to characterize the crack growth behavior across various regions of the gas metal arc and laser welds at the empirically determined stress ratio of 0.60. Despite an extensive range of stainless steel weld metal Ferrite Number and delta-ferrite morphologies, neither significantly influenced the room temperature crack growth behavior. However, variations in weld metal da/dN can be explained by local surface roughness resulting from large columnar grains and tortuous crack paths in the weld metal.

  9. Artificial retina model for the retinally blind based on wavelet transform

    NASA Astrophysics Data System (ADS)

    Zeng, Yan-an; Song, Xin-qiang; Jiang, Fa-gang; Chang, Da-ding

    2007-01-01

    An artificial retina is aimed at the stimulation of remaining retinal neurons in patients with degenerated photoreceptors. Microelectrode arrays have been developed for this purpose as part of a stimulator. Designing such microelectrode arrays first requires a suitable mathematical model of human retinal information processing. In this paper, a flexible and adjustable model for extracting human visual information is presented, based on the wavelet transform. Given the flexibility of the wavelet transform for image information processing and its consistency with human visual information extraction, wavelet transform theory is applied to the artificial retina model for the retinally blind. The response of the model to a synthetic image is shown. The simulated experiment demonstrates that the model behaves in a manner qualitatively similar to biological retinas and thus may serve as a basis for the development of an artificial retina.
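
    One level of a 2-D Haar wavelet decomposition, the kind of multiresolution split such a wavelet-based retina model builds on, can be sketched as follows (a generic textbook construction, not the authors' implementation):

```python
def haar2d_level(img):
    """One level of a 2-D Haar wavelet decomposition. Returns the
    approximation (LL) and detail (LH, HL, HH) sub-bands; the image
    must have even dimensions."""
    rows, cols = len(img), len(img[0])
    ll = [[0.0] * (cols // 2) for _ in range(rows // 2)]
    lh = [[0.0] * (cols // 2) for _ in range(rows // 2)]
    hl = [[0.0] * (cols // 2) for _ in range(rows // 2)]
    hh = [[0.0] * (cols // 2) for _ in range(rows // 2)]
    for i in range(0, rows, 2):
        for j in range(0, cols, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            ll[i // 2][j // 2] = (a + b + c + d) / 4.0  # local average
            lh[i // 2][j // 2] = (a - b + c - d) / 4.0  # horizontal detail
            hl[i // 2][j // 2] = (a + b - c - d) / 4.0  # vertical detail
            hh[i // 2][j // 2] = (a - b - c + d) / 4.0  # diagonal detail
    return ll, lh, hl, hh

# A featureless patch produces zero detail coefficients:
ll, lh, hl, hh = haar2d_level([[5.0] * 4 for _ in range(4)])
```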

  10. Calibration and LOD/LOQ estimation of a chemiluminescent hybridization assay for residual DNA in recombinant protein drugs expressed in E. coli using a four-parameter logistic model.

    PubMed

    Lee, K R; Dipaolo, B; Ji, X

    2000-06-01

    Calibration is the process of fitting a model based on reference data points (x, y), then using the model to estimate an unknown x based on a new measured response, y. In the DNA assay, x is the concentration and y is the measured signal volume. A four-parameter logistic model is frequently used for the calibration of immunoassays when the response is optical density, for enzyme-linked immunosorbent assay (ELISA), or adjusted radioactivity count, for radioimmunoassay (RIA). Here, it is shown that the same model, or a linearized version of the curve, is equally useful for the calibration of a chemiluminescent hybridization assay for residual DNA in recombinant protein drugs and for calculation of performance measures of the assay.
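
    A minimal sketch of the four-parameter logistic curve and the inversion step used in calibration is shown below, using one common parameterization; the fitted parameter values are hypothetical, not taken from the assay in the paper:

```python
def four_pl(x, a, b, c, d):
    """Four-parameter logistic curve: a = response at zero dose,
    d = response at infinite dose, c = inflection point (EC50),
    b = slope factor."""
    return d + (a - d) / (1.0 + (x / c) ** b)

def inverse_four_pl(y, a, b, c, d):
    """Calibration step: recover the unknown concentration x from a
    measured signal y on the fitted curve."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

# Hypothetical fitted parameters for a standard curve:
a, b, c, d = 0.05, 1.2, 10.0, 2.0
y = four_pl(3.0, a, b, c, d)       # signal produced by concentration 3.0
x_hat = inverse_four_pl(y, a, b, c, d)
```

    The round trip recovers the concentration, which is the essence of reading unknowns off a fitted standard curve.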

  11. Neural control of fast nonlinear systems--application to a turbocharged SI engine with VCT.

    PubMed

    Colin, Guillaume; Chamaillard, Yann; Bloch, Gérard; Corde, Gilles

    2007-07-01

    Today, (engine) downsizing using turbocharging appears to be a major way of reducing the fuel consumption and pollutant emissions of spark ignition (SI) engines. In this context, efficient control of the air actuators [throttle, turbo wastegate, and variable camshaft timing (VCT)] is needed for engine torque control. This paper proposes a nonlinear model-based control scheme which combines separate, but coordinated, control modules. These modules are based on different control strategies: internal model control (IMC), model predictive control (MPC), and optimal control. It is shown how neural models can be used at different levels and included in the control modules to replace physical models, which are too complex to be embedded online, or to estimate nonmeasured variables. The results obtained from two different test benches show the real-time applicability and good control performance of the proposed methods.

  12. Increasing the relevance of GCM simulations for Climate Services

    NASA Astrophysics Data System (ADS)

    Smith, L. A.; Suckling, E.

    2012-12-01

    The design and interpretation of model simulations for climate services differ significantly from experimental design for the advancement of the fundamental research on predictability that underpins it. Climate services consider the sources of the best information available today; this calls for a frank evaluation of model skill against statistical benchmarks defined by empirical models. The fact that physical simulation models are thought to provide the only reliable method for extrapolating into conditions not previously observed has no bearing on whether or not today's simulation models outperform empirical models. Evidence on the length scales at which today's simulation models fail to outperform empirical benchmarks is presented; it is illustrated that this occurs even on global scales in decadal prediction. At all timescales considered thus far (as of July 2012), predictions based on simulation models are improved by blending with the output of statistical models. Blending is shown to be more interesting in the climate context than in the weather context, where blending with a history-based climatology is straightforward. As GCMs improve and as the Earth's climate moves further from that of the last century, the skill of simulation models and their relevance to climate services are expected to increase. Examples from both seasonal and decadal forecasting are used to discuss a third approach that may increase the role of current GCMs more quickly. Specifically, aspects of the experimental design in previous hindcast experiments are shown to hinder the use of GCM simulations for climate services, and alternative designs are proposed. The value of revisiting Thompson's classic approach to improving weather forecasting in the fifties, in the context of climate services, is discussed.
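
    The blending the abstract refers to can be illustrated with a least-squares choice of blend weight over a hindcast set; this is a generic sketch under simple assumptions (a single scalar weight, MSE as the skill measure, model and empirical forecasts that differ), and the data are hypothetical:

```python
def blend_weight(model, empirical, observed):
    """Least-squares weight for the blend alpha*model + (1-alpha)*empirical.
    Minimizing hindcast MSE gives the closed form
    alpha = sum((m-e)*(o-e)) / sum((m-e)^2), clipped to [0, 1]."""
    num = sum((m - e) * (o - e) for m, e, o in zip(model, empirical, observed))
    den = sum((m - e) ** 2 for m, e in zip(model, empirical))
    return max(0.0, min(1.0, num / den))

# Hypothetical hindcast values (model run, empirical benchmark, truth):
model     = [1.2, 0.8, 1.5, 0.9]
empirical = [1.0, 1.0, 1.0, 1.0]
observed  = [1.1, 0.9, 1.3, 0.95]
alpha = blend_weight(model, empirical, observed)
blend = [alpha * m + (1 - alpha) * e for m, e in zip(model, empirical)]
```

    Because the objective is convex in alpha, the blend is never worse (in hindcast MSE) than the better of the two inputs.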

  13. Physically-Based Reduced Order Modelling of a Uni-Axial Polysilicon MEMS Accelerometer

    PubMed Central

    Ghisi, Aldo; Mariani, Stefano; Corigliano, Alberto; Zerbini, Sarah

    2012-01-01

    In this paper, the mechanical response of a commercial off-the-shelf, uni-axial polysilicon MEMS accelerometer subject to drops is numerically investigated. To speed up the calculations, a simplified physically-based (beams and plate), two-degrees-of-freedom model of the movable parts of the sensor is adopted. The capability and the accuracy of the model are assessed against three-dimensional finite element simulations, and against the outcomes of experiments on instrumented samples. It is shown that the reduced order model provides accurate outcomes for the system dynamics. To also obtain rather accurate results in terms of stress fields within regions that are prone to fail upon high-g shocks, a correction factor is proposed by accounting for the local stress amplification induced by re-entrant corners. PMID:23202031

  14. CABS-flex: Server for fast simulation of protein structure fluctuations.

    PubMed

    Jamroz, Michal; Kolinski, Andrzej; Kmiecik, Sebastian

    2013-07-01

    The CABS-flex server (http://biocomp.chem.uw.edu.pl/CABSflex) implements CABS-model-based protocol for the fast simulations of near-native dynamics of globular proteins. In this application, the CABS model was shown to be a computationally efficient alternative to all-atom molecular dynamics--a classical simulation approach. The simulation method has been validated on a large set of molecular dynamics simulation data. Using a single input (user-provided file in PDB format), the CABS-flex server outputs an ensemble of protein models (in all-atom PDB format) reflecting the flexibility of the input structure, together with the accompanying analysis (residue mean-square-fluctuation profile and others). The ensemble of predicted models can be used in structure-based studies of protein functions and interactions.

  15. Climatology of Ultra Violet(UV) Irradiance at the Surface of the Earth as Measured by the Belgian UV Radiation Monitoring Network

    NASA Astrophysics Data System (ADS)

    Pandey, Praveen; Gillotay, Didier; Depiesse, Cedric

    2016-08-01

    In this paper we describe the network of ground-based ultraviolet (UV) radiation monitoring stations in Belgium. The evolution of the entire network, together with the details of the measuring instruments, is given. The observed cumulative irradiations (UVB, UVA, and total solar irradiation, TSI) over the course of measurement are shown for three stations: a northern station (Ostende), a central station (Uccle), and a southern station (Redu). The longest series of measurements shown in this study is at Uccle, Brussels, from 1995 to 2014. The variation of the UV index, together with the variation of irradiations during summer and winter months at Uccle, is shown as part of this climatological study. The trend of UVB irradiance over the three stations mentioned above is shown. This UVB trend is studied in conjunction with the long-term satellite-based total column ozone value over Belgium, which shows two distinct trends marked by a change point; the total column ozone trend following the change point is positive. The UVB trend is also positive for the urban and sub-urban sites, Uccle and Redu, whereas the UVB trend at Ostende, a coastal site, is not. A possible explanation of this relation between total column ozone and the UVB trend is the influence of aerosols, which is shown in this paper by means of a radiative transfer model based study, as part of a preliminary investigation. It is seen that the UV index is influenced by the type of aerosols.

  16. System Model for MEMS based Laser Ultrasonic Receiver

    NASA Technical Reports Server (NTRS)

    Wilson, William C.

    2002-01-01

    A need has been identified for more advanced nondestructive evaluation technologies for assuring the integrity of airframe structures, wiring, etc. Laser ultrasonic inspection instruments have been shown to detect flaws in structures. However, these instruments are generally too bulky to be used in the confined spaces that are typical of aerospace vehicles. Microsystems technology is one key to reducing the size of current instruments and enabling increased inspection coverage in areas that were previously inaccessible due to instrument size and weight. This paper investigates the system modeling of a Micro OptoElectroMechanical System (MOEMS) based laser ultrasonic receiver. The system model is constructed in software using MATLAB's dynamic simulator, Simulink. The optical components are modeled using geometrical matrix methods and include some image processing. The system model includes a test bench which simulates input stimuli and models the behavior of the material under test.
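
    The geometrical matrix methods mentioned above are commonly implemented with ABCD (ray transfer) matrices; a minimal sketch follows, where the focal length and propagation distance are hypothetical, not taken from the receiver design:

```python
def mat_mul(m1, m2):
    """Multiply two 2x2 ray transfer matrices (tuples of row tuples)."""
    (a1, b1), (c1, d1) = m1
    (a2, b2), (c2, d2) = m2
    return ((a1 * a2 + b1 * c2, a1 * b2 + b1 * d2),
            (c1 * a2 + d1 * c2, c1 * b2 + d1 * d2))

def free_space(d):   # propagation over distance d (metres)
    return ((1.0, d), (0.0, 1.0))

def thin_lens(f):    # thin lens of focal length f (metres)
    return ((1.0, 0.0), (-1.0 / f, 1.0))

# Collimated ray 10 mm off axis through a hypothetical f = 0.1 m lens,
# then 0.1 m of free space: it should cross the axis at the focal plane.
system = mat_mul(free_space(0.1), thin_lens(0.1))
(a, b), (c, d) = system
y0, th0 = 0.01, 0.0
y1, th1 = a * y0 + b * th0, c * y0 + d * th0
```

    Chaining such matrices (rightmost element first) models an entire optical train with one 2x2 product.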

  17. Geoinformation modeling system for analysis of atmosphere pollution impact on vegetable biosystems using space images

    NASA Astrophysics Data System (ADS)

    Polichtchouk, Yuri; Ryukhko, Viatcheslav; Tokareva, Olga; Alexeeva, Mary

    2002-02-01

    The structure of a geoinformation modeling system for assessment of the environmental impact of atmospheric pollution on the forest-swamp ecosystems of West Siberia is considered. A complex approach to the assessment of man-caused impact, based on the combination of sanitary-hygienic and landscape-geochemical approaches, is reported. Methodical problems of analyzing the impact of atmospheric pollution on vegetation using geoinformation systems and remote sensing data are developed. The landscape structure of oil production territories in the southern part of West Siberia is determined by processing space images from the spaceborne Resource-O satellite. The particularities of modeling atmospheric pollution zones caused by gas flaring on the territories of oil fields are considered; for instance, pollution zones were revealed by modeling contaminant dispersal in the atmosphere with a standard model. Polluted landscape areas are calculated as a function of oil production volume. It is shown that the calculated data are well approximated by polynomial models.

  18. Parameter dimensionality reduction of a conceptual model for streamflow prediction in Canadian, snowmelt dominated ungauged basins

    NASA Astrophysics Data System (ADS)

    Arsenault, Richard; Poissant, Dominique; Brissette, François

    2015-11-01

    This paper evaluated the effects of parametric reduction of a hydrological model on five regionalization methods and 267 catchments in the province of Quebec, Canada. The Sobol' variance-based sensitivity analysis was used to rank the model parameters by their influence on the model results, and sequential parameter fixing was performed. The reduction in parameter correlations improved parameter identifiability; however, this improvement was minimal and did not carry over to the regionalization mode. It was shown that 11 of the HSAMI model's 23 parameters could be fixed with little or no loss in regionalization skill. The main conclusions were that (1) the conceptual lumped models used in this study did not represent physical processes sufficiently well to warrant parameter reduction for physics-based regionalization methods for the Canadian basins examined, and (2) catchment descriptors did not adequately represent the relevant hydrological processes, namely snow accumulation and melt.
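
    A crude pick-freeze Monte Carlo estimator for first-order Sobol' indices, the quantity used above to rank parameters before fixing them, can be sketched on a toy model (not the HSAMI model) as follows:

```python
import random

def sobol_first_order(f, k, n=50_000, seed=1):
    """Crude pick-freeze Monte Carlo estimate of the first-order Sobol'
    indices S_i = V(E[Y|X_i]) / V(Y) for f defined on [0, 1]^k."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(k)] for _ in range(n)]
    B = [[rng.random() for _ in range(k)] for _ in range(n)]
    yA = [f(x) for x in A]
    f0 = sum(yA) / n
    V = sum((y - f0) ** 2 for y in yA) / n
    S = []
    for i in range(k):
        # C_i: resample every input from B except X_i, kept from A.
        yC = [f([a[j] if j == i else b[j] for j in range(k)])
              for a, b in zip(A, B)]
        S.append((sum(ya * yc for ya, yc in zip(yA, yC)) / n - f0 ** 2) / V)
    return S

# Toy additive model Y = X1 + 0.5*X2: exact indices are S1 = 0.8, S2 = 0.2.
S = sobol_first_order(lambda x: x[0] + 0.5 * x[1], k=2)
```

    Parameters whose index falls near zero are candidates for fixing, which is the spirit of the sequential reduction described above.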

  19. Simulating an underwater vehicle self-correcting guidance system with Simulink

    NASA Astrophysics Data System (ADS)

    Fan, Hui; Zhang, Yu-Wen; Li, Wen-Zhe

    2008-09-01

    Underwater vehicles have already adopted self-correcting directional guidance algorithms based on multi-beam self-guidance systems, without waiting for research to determine the most effective algorithms. The main challenges facing research on these guidance systems have been effective modeling of the guidance algorithm and a means to analyze the simulation results. A simulation structure based on Simulink that deals with both issues is proposed. Initially, a mathematical model of the relative motion between the vehicle and the target was developed, which was then encapsulated as a subsystem. Next, the steps for constructing a model of the self-correcting guidance algorithm based on the Stateflow module were examined in detail. Finally, a 3-D model of the vehicle and target was created in VRML, and by processing the mathematical results, the model was shown moving in a visual environment. This process gives more intuitive results for analyzing the simulation. The results showed that the simulation structure performs well. The simulation program makes heavy use of modularization and encapsulation, so it has broad applicability to simulations of other dynamic systems.

  20. Transmission-line model to design matching stage for light coupling into two-dimensional photonic crystals.

    PubMed

    Miri, Mehdi; Khavasi, Amin; Mehrany, Khashayar; Rashidian, Bizhan

    2010-01-15

    The transmission-line analogy of the planar electromagnetic reflection problem is exploited to obtain a transmission-line model that can be used to design effective, robust, and wideband interference-based matching stages. The proposed model based on a new definition for a scalar impedance is obtained by using the reflection coefficient of the zeroth-order diffracted plane wave outside the photonic crystal. It is shown to be accurate for in-band applications, where the normalized frequency is low enough to ensure that the zeroth-order diffracted plane wave is the most important factor in determining the overall reflection. The frequency limitation of employing the proposed approach is explored, highly dispersive photonic crystals are considered, and wideband matching stages based on binomial impedance transformers are designed to work at the first two photonic bands.
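
    The binomial impedance transformers mentioned above follow a standard synthesis rule for the section impedances; a minimal sketch, with hypothetical source and load impedances, is:

```python
import math

def binomial_transformer(z0, zl, n):
    """Characteristic impedances Z1..Zn of an n-section binomial
    (maximally flat) quarter-wave transformer between Z0 and ZL, from
    ln(Z[k+1]/Z[k]) = 2^-n * C(n, k) * ln(ZL/Z0); the remaining
    C(n, n) step takes the last section to the load."""
    z = [z0]
    for k in range(n):
        z.append(z[-1] * math.exp(2.0 ** -n * math.comb(n, k) * math.log(zl / z0)))
    return z  # [Z0, Z1, ..., Zn]

# Hypothetical 3-section match from 50 ohm to 100 ohm:
z = binomial_transformer(50.0, 100.0, 3)
```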

  1. Device and circuit analysis of a sub 20 nm double gate MOSFET with gate stack using a look-up-table-based approach

    NASA Astrophysics Data System (ADS)

    Chakraborty, S.; Dasgupta, A.; Das, R.; Kar, M.; Kundu, A.; Sarkar, C. K.

    2017-12-01

    In this paper, we explore the possibility of mapping devices designed in a TCAD environment to modeled versions developed in the Cadence Virtuoso environment using a look-up table (LUT) approach. Circuit simulation of newly designed devices in a TCAD environment is a very slow and tedious process involving complex scripting; hence, the LUT-based modeling approach is proposed as a faster and easier alternative in the Cadence environment. The LUTs are prepared by extracting data from the device characteristics obtained from device simulation in TCAD. A comparative study between the TCAD simulation and the LUT-based alternative showcases the accuracy of the modeled devices. Finally, the look-up table approach is used to evaluate the performance of circuits implemented using the 14 nm nMOSFET.
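
    The LUT approach rests on interpolating tabulated device characteristics; a minimal sketch of a bilinear look-up of drain current versus (Vgs, Vds) follows, with a hypothetical coarse grid rather than actual TCAD data:

```python
import bisect

def lut_lookup(vgs_grid, vds_grid, id_table, vgs, vds):
    """Bilinear interpolation in a 2-D drain-current look-up table,
    where id_table[i][j] holds Id at (vgs_grid[i], vds_grid[j])."""
    i = max(0, min(len(vgs_grid) - 2, bisect.bisect_right(vgs_grid, vgs) - 1))
    j = max(0, min(len(vds_grid) - 2, bisect.bisect_right(vds_grid, vds) - 1))
    tx = (vgs - vgs_grid[i]) / (vgs_grid[i + 1] - vgs_grid[i])
    ty = (vds - vds_grid[j]) / (vds_grid[j + 1] - vds_grid[j])
    return ((1 - tx) * (1 - ty) * id_table[i][j]
            + tx * (1 - ty) * id_table[i + 1][j]
            + (1 - tx) * ty * id_table[i][j + 1]
            + tx * ty * id_table[i + 1][j + 1])

# Hypothetical coarse grid (volts) and drain currents (amperes):
vgs_grid = [0.0, 0.4, 0.8]
vds_grid = [0.0, 0.4, 0.8]
id_table = [[0.0, 0.0,    0.0],
            [0.0, 1.0e-5, 1.2e-5],
            [0.0, 4.0e-5, 5.0e-5]]
id_mid = lut_lookup(vgs_grid, vds_grid, id_table, 0.6, 0.4)
```

    A circuit simulator evaluates such a look-up in place of a compact-model equation, which is what makes the approach fast once the table is extracted.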

  2. Power law-based local search in spider monkey optimisation for lower order system modelling

    NASA Astrophysics Data System (ADS)

    Sharma, Ajay; Sharma, Harish; Bhargava, Annapurna; Sharma, Nirmala

    2017-01-01

    Nature-inspired algorithms (NIAs) have shown efficiency in solving many complex real-world optimisation problems. The efficiency of NIAs is measured by their ability to find adequate results within a reasonable amount of time, rather than an ability to guarantee the optimal solution. This paper presents a solution for lower order system modelling using the spider monkey optimisation (SMO) algorithm to obtain a better approximation for lower order systems, one that retains most of the characteristics of the original higher order system. Further, a local search strategy, namely power law-based local search, is incorporated into SMO; the proposed strategy is named power law-based local search in SMO (PLSMO). The efficiency, accuracy and reliability of the proposed algorithm are tested on 20 well-known benchmark functions. The PLSMO algorithm is then applied to solve the lower order system modelling problem.
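
    The idea of a power law-based local search, many small refining steps with occasional large jumps, can be sketched generically as follows; this is an illustration of the concept on a sphere function, not the exact PLSMO operator:

```python
import random

def power_law_step(rng, alpha=2.0, s_min=1e-3, s_max=1.0):
    """Sample a step length from a bounded power law p(s) ~ s^-alpha
    (alpha != 1) by inverse-transform sampling: mostly tiny refining
    steps, with occasional large exploratory jumps."""
    a = 1.0 - alpha
    u = rng.random()
    return (s_min ** a + u * (s_max ** a - s_min ** a)) ** (1.0 / a)

def local_search(f, x0, iters=3000, seed=3):
    """Greedy local search around the incumbent using power-law steps."""
    rng = random.Random(seed)
    best, fbest = list(x0), f(x0)
    for _ in range(iters):
        cand = [xi + rng.choice((-1.0, 1.0)) * power_law_step(rng)
                for xi in best]
        fc = f(cand)
        if fc < fbest:
            best, fbest = cand, fc
    return best, fbest

sphere = lambda x: sum(xi * xi for xi in x)
best, fbest = local_search(sphere, [0.7, -0.4])
```

    In a full NIA, such a local search would be applied around the swarm's best solution each generation rather than run stand-alone.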

  3. Influence of spatial beam inhomogeneities on the parameters of a petawatt laser system based on multi-stage parametric amplification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frolov, S A; Trunov, V I; Pestryakov, Efim V

    2013-05-31

    We have developed a technique for investigating the evolution of spatial inhomogeneities in high-power laser systems based on multi-stage parametric amplification. A linearised model of the inhomogeneity development is first devised for parametric amplification with small-scale self-focusing taken into account. It is shown that the application of this model gives results consistent (with high accuracy and in a wide range of inhomogeneity parameters) with the calculation without approximations. Using the linearised model, we have analysed the development of spatial inhomogeneities in a petawatt laser system based on multi-stage parametric amplification, developed at the Institute of Laser Physics, Siberian Branch of the Russian Academy of Sciences (ILP SB RAS).

  4. System Identification of a Heaving Point Absorber: Design of Experiment and Device Modeling

    DOE PAGES

    Bacelli, Giorgio; Coe, Ryan; Patterson, David; ...

    2017-04-01

    Empirically based modeling is an essential aspect of design for a wave energy converter. These models are used in structural, mechanical and control design processes, as well as for performance prediction. The design of experiments and the methods used to produce models from collected data have a strong impact on the quality of the model. This study considers the system identification and model validation process based on data collected from a wave tank test of a model-scale wave energy converter. Experimental design and data processing techniques based on general system identification procedures are discussed and compared with the practices often followed for wave tank testing. The general system identification processes are shown to have a number of advantages. The experimental data is then used to produce multiple models for the dynamics of the device. These models are validated and their performance is compared against one another. Furthermore, while most models of wave energy converters use a formulation with wave elevation as an input, this study shows that a model using a hull pressure sensor to incorporate the wave excitation phenomenon has better accuracy.

  5. System Identification of a Heaving Point Absorber: Design of Experiment and Device Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bacelli, Giorgio; Coe, Ryan; Patterson, David

    Empirically based modeling is an essential aspect of design for a wave energy converter. These models are used in structural, mechanical and control design processes, as well as for performance prediction. The design of experiments and the methods used to produce models from collected data have a strong impact on the quality of the model. This study considers the system identification and model validation process based on data collected from a wave tank test of a model-scale wave energy converter. Experimental design and data processing techniques based on general system identification procedures are discussed and compared with the practices often followed for wave tank testing. The general system identification processes are shown to have a number of advantages. The experimental data is then used to produce multiple models for the dynamics of the device. These models are validated and their performance is compared against one another. Furthermore, while most models of wave energy converters use a formulation with wave elevation as an input, this study shows that a model using a hull pressure sensor to incorporate the wave excitation phenomenon has better accuracy.

  6. PharmDock: a pharmacophore-based docking program

    PubMed Central

    2014-01-01

    Background Protein-based pharmacophore models are enriched with information about potential interactions between ligands and the protein target. We have shown in a previous study that protein-based pharmacophore models can be applied to ligand pose prediction and pose ranking. In this publication, we present a new pharmacophore-based docking program, PharmDock, that combines pose sampling and ranking based on optimized protein-based pharmacophore models with local optimization using an empirical scoring function. Results Tests of PharmDock on ligand pose prediction, binding affinity estimation, compound ranking and virtual screening yielded performance comparable or superior to existing and widely used docking programs. The docking program comes with an easy-to-use GUI within PyMOL. Two features have been incorporated in the program suite that allow for user-defined guidance of the docking process based on previous experimental data. Docking with those features demonstrated superior performance compared to unbiased docking. Conclusion A protein-pharmacophore-based docking program, PharmDock, has been made available with a PyMOL plugin. PharmDock and the PyMOL plugin are freely available from http://people.pharmacy.purdue.edu/~mlill/software/pharmdock. PMID:24739488

  7. Electromagnetic on-aircraft antenna radiation in the presence of composite plates

    NASA Technical Reports Server (NTRS)

    Kan, S. H-T.; Rojas, R. G.

    1994-01-01

    The UTD-based NEWAIR3 code is modified such that it can model modern aircraft by composite plates. One good model of conductor-backed composites is the impedance boundary condition where the composites are replaced by surfaces with complex impedances. This impedance-plate model is then used to model the composite plates in the NEWAIR3 code. In most applications, the aircraft distorts the desired radiation pattern of the antenna. However, test examples conducted in this report have shown that the undesired scattered fields are minimized if the right impedance values are chosen for the surface impedance plates.
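
    The impedance boundary condition described above reduces, at normal incidence, to the familiar reflection coefficient between the free-space wave impedance and the surface impedance; a minimal sketch with illustrative impedance values:

```python
def reflection_coefficient(zs, z0=376.73):
    """Normal-incidence reflection from a surface described by an
    impedance boundary condition with surface impedance Zs (ohms);
    z0 is the free-space wave impedance."""
    return (zs - z0) / (zs + z0)

gamma_pec = reflection_coefficient(0.0)                    # perfect conductor
gamma_matched = reflection_coefficient(376.73)             # reflectionless
gamma_lossy = reflection_coefficient(complex(50.0, 20.0))  # a lossy composite
```

    Choosing Zs to drive the relevant reflection toward zero is the sense in which "the right impedance values" minimize the undesired scattered fields.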

  8. Two Strain Dengue Model with Temporary Cross Immunity and Seasonality

    NASA Astrophysics Data System (ADS)

    Aguiar, Maíra; Ballesteros, Sebastien; Stollenwerk, Nico

    2010-09-01

    Models of dengue fever epidemiology have previously shown critical fluctuations with power law distributions and also deterministic chaos in some parameter regions, due to the multi-strain structure of the disease pathogen. In our first model, which includes well-known biological features, we found a rich dynamical structure: limit cycles, symmetry-breaking bifurcations, torus bifurcations, coexisting attractors including isola solutions, and deterministic chaos (as indicated by positive Lyapunov exponents) in a much larger parameter region, one that is also biologically more plausible than in previous results by other researchers. Based on these findings we will investigate the model structures further, including seasonality.

  9. The properties of an ion selective enzymatic asymmetric synthetic membrane.

    NASA Technical Reports Server (NTRS)

    Mitz, M. A.

    1971-01-01

    With the aid of a simple model membrane system, the properties of cellulose enzymes and of membrane selectivity and pump-like action are considered. The model is based on materials possibly present on a primitive earth, as well as on a membrane able to sort or concentrate these materials. An overview of the model membrane system is presented in terms of how it is constructed, what its properties are, and what to expect in performance characteristics. The model system is shown to be useful for studying the selective and in some cases accelerated transfer of nutrients and metabolites.

  10. Modelling and control of a diffusion/LPCVD furnace

    NASA Astrophysics Data System (ADS)

    Dewaard, H.; Dekoning, W. L.

    1988-12-01

    Heat transfer inside a cylindrical resistance diffusion/Low Pressure Chemical Vapor Deposition (LPCVD) furnace is studied with the aim of developing an improved temperature controller. A model of the thermal behavior is derived, which covers the important class of furnaces equipped with semitransparent quartz process tubes. The model takes into account the thermal behavior of the thermocouples. Currently used temperature controllers are shown to be highly inefficient for very large scale integration applications. Based on the model an alternative temperature controller of the LQG (linear quadratic Gaussian) type is proposed which features direct wafer temperature control. Some simulation results are given.
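
    The state-feedback half of such an LQG design can be sketched for a scalar first-order plant via value iteration of the discrete Riccati equation; the plant coefficients below are hypothetical, not identified from the furnace model:

```python
def dlqr_scalar(a, b, q, r, iters=500):
    """Scalar discrete-time LQR gain by fixed-point (value) iteration of
    the Riccati equation  P = q + a^2 P - (a b P)^2 / (r + b^2 P);
    the optimal feedback is u = -k x with k = a b P / (r + b^2 P)."""
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    k = a * b * p / (r + b * b * p)
    return k, p

# Hypothetical first-order thermal model x[k+1] = 0.98 x[k] + 0.05 u[k]:
k, p = dlqr_scalar(a=0.98, b=0.05, q=1.0, r=0.1)
closed_loop_pole = 0.98 - 0.05 * k
```

    A full LQG controller pairs this gain with a Kalman filter state estimate, which is what permits direct wafer temperature control from thermocouple readings.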

  11. An averaging battery model for a lead-acid battery operating in an electric car

    NASA Technical Reports Server (NTRS)

    Bozek, J. M.

    1979-01-01

    A battery model is developed based on time-averaging the current or power, and it is shown to be an effective means of predicting the performance of a lead-acid battery. The effectiveness of this battery model was tested on battery discharge profiles expected during the operation of an electric vehicle following the various SAE J227a driving schedules. The averaging model predicts the performance of a battery that is periodically charged (regenerated) if the regeneration energy is assumed to be converted to retrievable electrochemical energy on a one-to-one basis.
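
    A minimal sketch of the averaging idea, here combined with Peukert's relation to turn the averaged current into a usable-time estimate, is given below; the capacity constant, Peukert exponent, and current profile are all hypothetical, and the paper's one-to-one regeneration assumption is mimicked by letting regeneration enter as negative current:

```python
def usable_hours(current_profile, capacity_1a=120.0, peukert=1.2):
    """Averaging battery model sketch: the discharge current is averaged
    over one driving-schedule period (regeneration entered as negative
    current, credited one-to-one), then Peukert's relation t = C / I^k
    estimates the time to depletion at that average current."""
    i_avg = sum(current_profile) / len(current_profile)
    return capacity_1a / i_avg ** peukert

# One repeating schedule period (amperes): acceleration, cruise, coast,
# and a short regenerative-braking interval.
profile = [40.0, 40.0, 25.0, 10.0, -8.0, 0.0]
hours = usable_hours(profile)
```

    Because regeneration lowers the average discharge current, it extends the predicted usable time, consistent with the one-to-one energy-credit assumption above.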

  12. Two Strain Dengue Model with Temporary Cross Immunity and Seasonality

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aguiar, Maira; Ballesteros, Sebastien; Stollenwerk, Nico

    Models of dengue fever epidemiology have previously shown critical fluctuations with power-law distributions, and also deterministic chaos in some parameter regions, due to the multi-strain structure of the disease pathogen. In our first model, which includes well-known biological features, we found a rich dynamical structure including limit cycles, symmetry-breaking bifurcations, torus bifurcations, coexisting attractors including isola solutions, and deterministic chaos (as indicated by positive Lyapunov exponents) in a much larger parameter region, one that is also biologically more plausible than the regions reported by other researchers. Based on these findings we will investigate the model structures further, including seasonality.
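    The seasonal forcing mentioned above can be sketched with a minimal seasonally forced SIR integration. A single strain is shown for brevity (the paper's model couples two strains with temporary cross immunity), and all rates are illustrative, not fitted values:

    ```python
    import math

    beta0, eta = 400.0, 0.1   # mean transmission rate, seasonal amplitude
    gamma, mu = 52.0, 1 / 65.0  # recovery and birth/death rates (per year)

    def derivs(t, s, i):
        beta = beta0 * (1 + eta * math.cos(2 * math.pi * t))  # seasonality
        ds = mu * (1 - s) - beta * s * i
        di = beta * s * i - (gamma + mu) * i
        return ds, di

    # Forward-Euler integration over 10 years
    s, i, t, dt = 0.2, 1e-3, 0.0, 1e-4
    for _ in range(int(10 / dt)):
        ds, di = derivs(t, s, i)
        s, i, t = s + dt * ds, i + dt * di, t + dt
    print(0.0 < s < 1.0 and 0.0 <= i < 1.0)   # fractions stay physical
    ```

    Extending this to two strains adds compartments for primary and secondary infections and a temporary cross-immunity class, which is where the multi-strain bifurcation structure arises.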

  13. Interference Path Loss Prediction in A319/320 Airplanes Using Modulated Fuzzy Logic and Neural Networks

    NASA Technical Reports Server (NTRS)

    Jafri, Madiha J.; Ely, Jay J.; Vahala, Linda L.

    2007-01-01

    In this paper, neural network (NN) modeling is combined with fuzzy logic to estimate Interference Path Loss measurements on Airbus 319 and 320 airplanes. Interference patterns inside the aircraft are classified and predicted based on the locations of the doors, windows, aircraft structures and the communication/navigation system-of-concern. Modeled results are compared with measured data. Combining fuzzy logic and NN modeling is shown to improve estimates of measured data over estimates obtained with NN alone. A plan is proposed to enhance the modeling for better prediction of electromagnetic coupling problems inside aircraft.
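    The flavor of combining fuzzy logic with learned estimators can be sketched as a fuzzy-weighted blend of two predictors. The membership functions, geometry, and the stand-in "NN" estimators below are illustrative assumptions, not the paper's trained models:

    ```python
    def tri(x, a, b, c):
        """Triangular fuzzy membership on [a, c] peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def predict_path_loss(dist_to_window_m):
        near = tri(dist_to_window_m, -1.0, 0.0, 3.0)   # "near a window"
        far = 1.0 - near                               # complement
        # Stand-ins for two trained NN estimators (dB):
        nn_near = 60.0 + 2.0 * dist_to_window_m
        nn_far = 75.0 + 1.0 * dist_to_window_m
        # Fuzzy-weighted combination of the two estimates
        return (near * nn_near + far * nn_far) / (near + far)

    print(round(predict_path_loss(1.0), 2))
    ```

    The fuzzy layer lets structural knowledge (doors, windows) gate which learned estimator dominates, which is the intuition behind the improvement over NN modeling alone.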

  14. Remote sensing inputs to landscape models which predict future spatial land use patterns for hydrologic models

    NASA Technical Reports Server (NTRS)

    Miller, L. D.; Tom, C.; Nualchawee, K.

    1977-01-01

    A tropical forest area of Northern Thailand provided a test case of the application of the approach in more natural surroundings. Remote sensing imagery subjected to proper computer analysis has been shown to be a very useful means of collecting spatial data for the science of hydrology. Remote sensing products provide direct input to hydrologic models and practical data bases for planning large and small-scale hydrologic developments. Combining the available remote sensing imagery together with available map information in the landscape model provides a basis for substantial improvements in these applications.

  15. Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update.

    PubMed

    Gao, Changxin; Shi, Huizhang; Yu, Jin-Gang; Sang, Nong

    2016-04-15

    Appearance representation and the observation model are the most important components in designing a robust visual tracking algorithm for video-based sensors. Additionally, the exemplar-based linear discriminant analysis (ELDA) model has shown good performance in object tracking. Based on that, we improve the ELDA tracking algorithm by deep convolutional neural network (CNN) features and adaptive model update. Deep CNN features have been successfully used in various computer vision tasks. Extracting CNN features on all of the candidate windows is time consuming. To address this problem, a two-step CNN feature extraction method is proposed by separately computing convolutional layers and fully-connected layers. Due to the strong discriminative ability of CNN features and the exemplar-based model, we update both object and background models to improve their adaptivity and to deal with the tradeoff between discriminative ability and adaptivity. An object updating method is proposed to select the "good" models (detectors), which are quite discriminative and uncorrelated to other selected models. Meanwhile, we build the background model as a Gaussian mixture model (GMM) to adapt to complex scenes, which is initialized offline and updated online. The proposed tracker is evaluated on a benchmark dataset of 50 video sequences with various challenges. It achieves the best overall performance among the compared state-of-the-art trackers, which demonstrates the effectiveness and robustness of our tracking algorithm.
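    The online GMM background model mentioned above can be sketched in miniature. Scalar features and the update constants are illustrative (a Stauffer-Grimson-style update, which the paper does not necessarily use verbatim):

    ```python
    import math

    class OnlineGMM:
        """Tiny online Gaussian mixture for a background model."""
        def __init__(self, means, var=25.0, lr=0.05):
            self.comps = [[1.0 / len(means), m, var] for m in means]  # w, mu, var
            self.lr = lr

        def update(self, x, match_sigmas=2.5):
            matched = False
            for c in self.comps:
                w, mu, var = c
                if not matched and abs(x - mu) < match_sigmas * math.sqrt(var):
                    matched = True
                    c[1] += self.lr * (x - mu)               # move mean toward x
                    c[2] += self.lr * ((x - mu) ** 2 - var)  # adapt variance
                    c[0] = w + self.lr * (1 - w)             # grow weight
                else:
                    c[0] = w * (1 - self.lr)                 # decay weight
            total = sum(c[0] for c in self.comps)
            for c in self.comps:
                c[0] /= total                                # renormalize

    bg = OnlineGMM(means=[50.0, 200.0])   # offline initialization
    for _ in range(100):
        bg.update(52.0)                   # online updates from the scene
    dominant = max(bg.comps, key=lambda c: c[0])
    print(abs(dominant[1] - 52.0) < 2.0)  # dominant mode tracks the data
    ```

    The offline-initialized, online-updated pattern matches the abstract's description: the mixture adapts to a changing scene while keeping multiple modes for complex backgrounds.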

  16. Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update

    PubMed Central

    Gao, Changxin; Shi, Huizhang; Yu, Jin-Gang; Sang, Nong

    2016-01-01

    Appearance representation and the observation model are the most important components in designing a robust visual tracking algorithm for video-based sensors. Additionally, the exemplar-based linear discriminant analysis (ELDA) model has shown good performance in object tracking. Based on that, we improve the ELDA tracking algorithm by deep convolutional neural network (CNN) features and adaptive model update. Deep CNN features have been successfully used in various computer vision tasks. Extracting CNN features on all of the candidate windows is time consuming. To address this problem, a two-step CNN feature extraction method is proposed by separately computing convolutional layers and fully-connected layers. Due to the strong discriminative ability of CNN features and the exemplar-based model, we update both object and background models to improve their adaptivity and to deal with the tradeoff between discriminative ability and adaptivity. An object updating method is proposed to select the “good” models (detectors), which are quite discriminative and uncorrelated to other selected models. Meanwhile, we build the background model as a Gaussian mixture model (GMM) to adapt to complex scenes, which is initialized offline and updated online. The proposed tracker is evaluated on a benchmark dataset of 50 video sequences with various challenges. It achieves the best overall performance among the compared state-of-the-art trackers, which demonstrates the effectiveness and robustness of our tracking algorithm. PMID:27092505

  17. Modeling International Space Station (ISS) Floating Potentials

    NASA Technical Reports Server (NTRS)

    Ferguson, Dale C.; Gardner, Barbara

    2002-01-01

    The floating potential of the International Space Station (ISS) as a function of the electron current collection of its high voltage solar array panels is derived analytically. Based on Floating Potential Probe (FPP) measurements of the ISS potential and ambient plasma characteristics, it is shown that the ISS floating potential is a strong function of the electron temperature of the surrounding plasma. While the ISS floating potential has so far not attained the pre-flight predicted highly negative values, it is shown that for future mission builds, ISS must continue to provide two-fault-tolerant arc-hazard protection for astronauts on EVA.
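    The underlying current balance can be sketched numerically: a body in plasma charges to the potential at which collected electron and ion currents cancel. The current models and constants below are simplified textbook assumptions, not the paper's ISS-specific derivation:

    ```python
    import math

    def net_current(v, t_e=0.15, i_e0=1.0, i_i0=0.02):
        """Retarded electron current (v in volts, t_e in eV) minus ion current."""
        i_electron = i_e0 * math.exp(v / t_e)
        return i_electron - i_i0

    # Bisection for the floating potential (electrons are faster, so v < 0)
    lo, hi = -10.0, 0.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if net_current(mid) > 0:
            hi = mid
        else:
            lo = mid
    v_float = 0.5 * (lo + hi)
    print(round(v_float, 3))
    ```

    Because the electron current depends exponentially on v / t_e, the solution scales directly with the electron temperature, which echoes the abstract's finding that the floating potential is a strong function of t_e.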

  18. Magnetic Random Access Memory for Embedded Computing

    DTIC Science & Technology

    2007-10-29

    ... the free layer, whose resistance ... 2. Develop and model data storage circuits based on the MTJ cells. 3. Integrate the MTJ cells into a CMOS ... suggested the two methods shown in Fig. 4.5 [95]. The circuit shown at the top of the figure uses NMOS pass transistors to load data, which is the simplest method but requires careful design to avoid charge sharing and to accommodate the data-dependent loading seen at the DATA input. With additional ...

  19. Vector-based model of elastic bonds for simulation of granular solids.

    PubMed

    Kuzkin, Vitaly A; Asonov, Igor E

    2012-11-01

    A model (further referred to as the V model) for the simulation of granular solids, such as rocks, ceramics, concrete, nanocomposites, and agglomerates, composed of bonded particles (rigid bodies), is proposed. It is assumed that the bonds, usually representing some additional gluelike material connecting particles, cause both forces and torques acting on the particles. Vectors rigidly connected with the particles are used to describe the deformation of a single bond. The expression for potential energy of the bond and corresponding expressions for forces and torques are derived. Formulas connecting parameters of the model with longitudinal, shear, bending, and torsional stiffnesses of the bond are obtained. It is shown that the model makes it possible to describe any values of the bond stiffnesses exactly; that is, the model is applicable for the bonds with arbitrary length/thickness ratio. Two different calibration procedures depending on bond length/thickness ratio are proposed. It is shown that parameters of the model can be chosen so that under small deformations the bond is equivalent to either a Bernoulli-Euler beam or a Timoshenko beam or short cylinder connecting particles. Simple analytical expressions, relating parameters of the V model with geometrical and mechanical characteristics of the bond, are derived. Two simple examples of computer simulation of thin granular structures using the V model are given.
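    The idea of particle-attached vectors can be sketched in 2D: each particle carries an orientation rigidly attached to it, and the bond resists both stretching and the misalignment of those orientations with the bond direction. The energy form and stiffnesses below are illustrative stand-ins for the paper's V-model expressions:

    ```python
    import math

    k_long, k_bend, r0 = 100.0, 10.0, 1.0   # assumed stiffnesses, rest length

    def bond_energy(x1, x2, th1, th2):
        """Particles at x1, x2 with attached-vector angles th1, th2."""
        dx = x2[0] - x1[0]
        dy = x2[1] - x1[1]
        r = math.hypot(dx, dy)
        stretch = 0.5 * k_long * (r - r0) ** 2
        # Penalize each particle's vector deviating from the bond direction
        bond_angle = math.atan2(dy, dx)
        bend = 0.5 * k_bend * ((th1 - bond_angle) ** 2 + (th2 - bond_angle) ** 2)
        return stretch + bend

    # Undeformed bond: zero energy
    e0 = bond_energy((0, 0), (1, 0), 0.0, 0.0)
    # Stretched by 10%: pure longitudinal energy 0.5 * k_long * 0.1**2
    e1 = bond_energy((0, 0), (1.1, 0), 0.0, 0.0)
    print(e0, round(e1, 3))
    ```

    Forces and torques on the particles follow as gradients of this energy with respect to positions and angles; the paper's calibration then maps the stiffnesses onto beam-equivalent longitudinal, shear, bending, and torsional values.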

  20. Bayesian Inference for Functional Dynamics Exploring in fMRI Data.

    PubMed

    Guo, Xuan; Liu, Bing; Chen, Le; Chen, Guantao; Pan, Yi; Zhang, Jing

    2016-01-01

    This paper aims to review state-of-the-art Bayesian-inference-based methods applied to functional magnetic resonance imaging (fMRI) data. Particularly, we focus on one specific long-standing challenge in the computational modeling of fMRI datasets: how to effectively explore typical functional interactions from fMRI time series and the corresponding boundaries of temporal segments. Bayesian inference is a method of statistical inference which has been shown to be a powerful tool for encoding dependence relationships among variables under uncertainty. Here we provide an introduction to a group of Bayesian-inference-based methods for fMRI data analysis, which were designed to detect magnitude or functional connectivity change points and to infer their functional interaction patterns based on the corresponding temporal boundaries. We also provide a comparison of three popular Bayesian models, that is, the Bayesian Magnitude Change Point Model (BMCPM), the Bayesian Connectivity Change Point Model (BCCPM), and the Dynamic Bayesian Variable Partition Model (DBVPM), and give a summary of their applications. We envision that more sophisticated Bayesian inference models will emerge and play increasingly important roles in modeling brain functions in the years to come.
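    A toy sketch of Bayesian magnitude change-point inference, in the spirit of BMCPM: place a uniform prior on the change location and score each candidate split by the Gaussian likelihood of the two segments. The signal and noise model are synthetic, not fMRI data:

    ```python
    import math

    def log_gauss_fit(seg):
        """Log-likelihood of a segment under its own mean (unit variance)."""
        m = sum(seg) / len(seg)
        return -0.5 * sum((x - m) ** 2 for x in seg)

    def change_point_posterior(series):
        logps = [log_gauss_fit(series[:k]) + log_gauss_fit(series[k:])
                 for k in range(1, len(series))]
        mx = max(logps)
        ws = [math.exp(lp - mx) for lp in logps]
        z = sum(ws)
        return [w / z for w in ws]   # posterior over change at k = 1..n-1

    # Synthetic series: the mean jumps from 0 to 3 at index 10
    series = [0.0] * 10 + [3.0] * 10
    post = change_point_posterior(series)
    best_k = post.index(max(post)) + 1
    print(best_k)
    ```

    The connectivity-oriented models (BCCPM, DBVPM) replace the per-segment mean with a per-segment dependence structure, but the posterior-over-boundaries machinery is the same.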
