Sample records for n-point asperity model

  1. Multi-asperity models of slow slip and tremor

    NASA Astrophysics Data System (ADS)

    Ampuero, Jean Paul; Luo, Yingdi; Lengline, Olivier; Inbal, Asaf

    2016-04-01

    Field observations of exhumed faults indicate that fault zones can comprise mixtures of materials with different dominant deformation mechanisms, including contrasts in strength, frictional stability and hydrothermal transport properties. Computational modeling helps quantify the potential effects of fault zone heterogeneity on fault slip styles ranging from seismic to aseismic slip, including slow slip and tremor phenomena, foreshock sequences and swarms, and high- and low-frequency radiation during large earthquakes. We will summarize results of ongoing modeling studies of slow slip and tremor in which the fault zone structure comprises a collection of frictionally unstable patches capable of seismic slip (tremorgenic asperities) embedded in a frictionally stable matrix hosting aseismic transient slip. Such models are consistent with the current view that tremor results from repeated shear failure of multiple asperities as Low Frequency Earthquakes (LFEs). The collective behavior of asperities embedded in creeping faults generates a rich spectrum of tremor migration patterns, as observed in natural faults, whose seismicity rate, recurrence time and migration speed can be mechanically related to the underlying transient slow slip rate. Tremor activity and slow slip also respond to periodic loading induced by tides or surface waves, and models relate tremor tidal sensitivity to frictional properties, fluid pressure and creep rate. The overall behavior of a heterogeneous fault is affected by structural parameters, such as the ratio of stable to unstable materials, but also by time-dependent variables, such as pore pressure and loading rate. Some behaviors are well predicted by homogenization theory based on spatially averaged frictional properties, but others are somewhat unexpected, such as seismic slip behavior found in asperities that are much smaller than their nucleation size. Two end-member regimes are obtained in rate-and-state models with velocity-weakening asperities
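
    As a point of reference for the rate-and-state framework invoked above, the sketch below evaluates the standard Dieterich-Ruina steady-state friction law, where b > a makes an asperity velocity weakening, the ingredient attributed to the tremorgenic patches. Parameter values are illustrative assumptions, not taken from the study.

    ```python
    # Minimal sketch (not the authors' code): Dieterich-Ruina rate-and-state friction
    # at steady state, mu_ss = mu0 + (a - b) * ln(V / V0). With b > a the contact is
    # velocity weakening. All parameter values are illustrative assumptions.
    import numpy as np

    mu0, V0 = 0.6, 1.0e-6   # reference friction coefficient and slip rate (m/s), assumed
    a, b = 0.008, 0.012     # rate-and-state parameters; b > a -> velocity weakening (assumed)

    def mu_steady(V):
        """Steady-state friction coefficient at slip rate V."""
        return mu0 + (a - b) * np.log(V / V0)

    for V in (1e-9, 1e-6, 1e-3):   # plate-like, reference, and fast slip rates
        print(f"V = {V:.0e} m/s -> mu_ss = {mu_steady(V):.4f}")
    ```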

  2. Contact stiffness of regularly patterned multi-asperity interfaces

    NASA Astrophysics Data System (ADS)

    Li, Shen; Yao, Quanzhou; Li, Qunyang; Feng, Xi-Qiao; Gao, Huajian

    2018-02-01

    Contact stiffness is a fundamental mechanical index of solid surfaces and relevant to a wide range of applications. Although the correlation between contact stiffness, contact size and load has long been explored for single-asperity contacts, our understanding of the contact stiffness of rough interfaces is less clear. In this work, the contact stiffness of hexagonally patterned multi-asperity interfaces is studied using a discrete asperity model. We confirm that the elastic interaction among asperities is critical in determining the mechanical behavior of rough contact interfaces. More importantly, in contrast to the common wisdom that the interplay of asperities is solely dictated by the inter-asperity spacing, we show that the number of asperities in contact (or equivalently, the apparent size of contact) also plays an indispensable role. Based on the theoretical analysis, we propose a new parameter for gauging the closeness of asperities. Our theoretical model is validated by a set of experiments. To facilitate the application of the discrete asperity model, we present a general equation for contact stiffness estimation of regularly rough interfaces, which is further proved to be applicable for interfaces with single-scale random roughness.
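
    As a rough point of comparison with the quantities discussed above, the sketch below estimates interface stiffness in the non-interacting limit, where each circular Hertzian contact contributes a normal stiffness k = 2 E* a; the paper's central point is that elastic interaction and the number of asperities in contact modify this simple sum. All numerical values are assumptions.

    ```python
    # Minimal sketch (assumptions only, not the paper's discrete asperity model):
    # non-interacting estimate of normal contact stiffness for N identical Hertzian
    # asperities. Elastic coupling between asperities would lower this estimate.
    E_star = 1.0e9      # effective contact modulus, Pa (assumed)
    R      = 10e-6      # asperity tip radius of curvature, m (assumed)
    F      = 1e-3       # normal load carried by each asperity, N (assumed)
    N      = 100        # number of asperities in contact (assumed)

    a = (3 * F * R / (4 * E_star)) ** (1 / 3)   # Hertzian contact radius
    k_single = 2 * E_star * a                   # normal stiffness of one circular contact
    k_total = N * k_single                      # non-interacting (upper-bound) estimate
    print(f"a = {a * 1e6:.2f} um, k_single = {k_single:.3e} N/m, "
          f"k_total (no interaction) = {k_total:.3e} N/m")
    ```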

  3. Interactions and triggering in a 3D rate and state asperity model

    NASA Astrophysics Data System (ADS)

    Dublanchet, P.; Bernard, P.

    2012-12-01

    Precise relocation of micro-seismicity and careful analysis of seismic source parameters have progressively imposed the concept of seismic asperities embedded in a creeping fault segment as one of the most important aspects that should appear in a realistic representation of micro-seismic sources. Another important issue concerning micro-seismic activity is the existence of robust empirical laws describing the temporal and magnitude distributions of earthquakes, such as the Omori law, the distribution of inter-event times and the Gutenberg-Richter law. In this framework, this study aims at understanding the statistical properties of earthquakes by generating synthetic catalogs with a 3D, quasi-dynamic, continuous rate and state asperity model that takes into account a realistic geometry of asperities. Our approach contrasts with ETAS models (Kagan and Knopoff, 1981) usually implemented to produce earthquake catalogs, in the sense that the non-linearity observed in rock friction experiments (Dieterich, 1979) is fully taken into account by the use of a rate and state friction law. Furthermore, our model differs from discrete models of faults (Ziv and Cochard, 2006) because the continuity allows us to define realistic geometries and distributions of asperities by assembling sub-critical computational cells that always fail in a single event. Moreover, this model allows us to address the question of the influence of barriers and the distribution of asperities on the event statistics. After recalling the main observations of asperities in the specific case of the Parkfield segment of the San Andreas Fault, we analyse earthquake statistical properties computed for this area. Then, we present synthetic statistics obtained by our model that allow us to discuss the role of barriers on clustering and triggering phenomena among a population of sources. It appears that an effective size of barrier, which depends on its frictional strength, controls the presence or the absence, in the
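
    Since the study compares synthetic catalogs against empirical laws such as the Gutenberg-Richter relation, the sketch below shows one common way a catalog's b-value can be summarized, using the Aki maximum-likelihood estimator on a randomly generated magnitude list; this is illustrative only and not the authors' analysis.

    ```python
    # Minimal sketch (not the authors' code): Gutenberg-Richter b-value of a synthetic
    # catalog via the Aki maximum-likelihood estimator, b = log10(e) / (mean(M) - Mc).
    # The catalog here is random and purely illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    b_true, Mc = 1.0, 2.0
    # For a G-R catalog, magnitudes above the completeness level Mc are exponential
    mags = Mc + rng.exponential(scale=np.log10(np.e) / b_true, size=5000)

    b_est = np.log10(np.e) / (mags.mean() - Mc)
    print(f"true b = {b_true}, estimated b = {b_est:.3f}")
    ```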

  4. Numerical Modeling Describing the Effects of Heterogeneous Distributions of Asperities on the Quasi-static Evolution of Frictional Slip

    NASA Astrophysics Data System (ADS)

    Selvadurai, P. A.; Parker, J. M.; Glaser, S. D.

    2017-12-01

    A better understanding of how slip accumulates along faults and its relation to the breakdown of shear stress is beneficial to many engineering disciplines, such as hydraulic fracturing and the study of induced seismicity, among others. Asperities forming along a preexisting fault resist the relative motion of the two sides of the interface and occur due to the interaction of the surface topographies. Here, we employ a finite element model to simulate circular partial slip asperities along a nominally flat frictional interface. The shear behavior of our partial slip asperity model closely matched the theory described by Cattaneo. The asperity model was employed to simulate a small section of an experimental fault formed between two bodies of polymethyl methacrylate, which consisted of multiple asperities whose locations and sizes were directly measured using a pressure sensitive film. The quasi-static shear behavior of the interface was modeled for cyclical loading conditions, and the frictional dissipation (hysteresis) was normal stress dependent. We further our understanding by synthetically modeling lognormal size distributions of asperities that were randomly distributed in space. Synthetic distributions conserved the real contact area and aspects of the size distributions from the experimental case, allowing us to compare the constitutive behaviors based solely on spacing effects. The traction-slip behavior of the experimental interface appears to be considerably affected by spatial clustering of asperities that was not present in the randomly spaced, synthetic asperity distributions. Estimates of bulk interfacial shear stiffness were determined from the constitutive traction-slip behavior and were comparable to theoretical estimates for multi-contact interfaces with non-interacting asperities.
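
    For reference, the Cattaneo partial-slip solution mentioned above predicts that, for a circular contact under tangential load Q < μP, slip is confined to an annulus outside a stick radius c = a(1 − Q/(μP))^(1/3). The sketch below evaluates this relation with assumed values; it is a textbook check, not the authors' finite element model.

    ```python
    # Minimal sketch (assumed illustrative values, not the authors' FE model):
    # Cattaneo/Mindlin partial slip of a circular asperity contact. Under a tangential
    # load Q < mu*P, slip occurs in the annulus c < r < a, with stick radius
    # c = a * (1 - Q / (mu * P)) ** (1/3).
    mu, P, a = 0.5, 10.0, 1e-4   # friction coefficient, normal load (N), contact radius (m); assumed

    for Q_frac in (0.1, 0.5, 0.9, 0.99):
        Q = Q_frac * mu * P
        c = a * (1.0 - Q / (mu * P)) ** (1.0 / 3.0)
        print(f"Q/(mu*P) = {Q_frac:.2f}: stick radius c/a = {c / a:.3f}")
    ```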

  5. Complex interplay between stress perturbations and viscoelastic relaxation in a two-asperity fault model

    NASA Astrophysics Data System (ADS)

    Lorenzano, Emanuele; Dragoni, Michele

    2018-03-01

    We consider a plane fault with two asperities embedded in a shear zone, subject to a uniform strain rate owing to tectonic loading. After an earthquake, the static stress field is relaxed by viscoelastic deformation in the asthenosphere. We treat the fault as a discrete dynamical system with 3 degrees of freedom: the slip deficits of the asperities and the variation of their difference due to viscoelastic deformation. The evolution of the fault is described in terms of inter-seismic intervals and slip episodes, which may involve the slip of a single asperity or both. We consider the effect of stress transfers connected to earthquakes produced by neighbouring faults. The perturbation alters the slip deficits of both asperities and the stress redistribution on the fault associated with viscoelastic relaxation. The interplay between the stress perturbation and the viscoelastic relaxation significantly complicates the evolution of the fault and its seismic activity. We show that the presence of viscoelastic relaxation prevents any simple correlation between the change of Coulomb stresses on the asperities and the anticipation or delay of their failures. As an application, we study the effects of the 1999 Hector Mine, California, earthquake on the post-seismic evolution of the fault that generated the 1992 Landers, California, earthquake, which we model as a two-mode event associated with the consecutive failure of two asperities.

  6. Modulation of carbon tetrachloride-induced nephrotoxicity in rats by n-hexane extract of Sonchus asper.

    PubMed

    Khan, Rahmat Ali; Khan, Muhammad Rashid; Shah, Naseer Ali; Sahreen, Sumaira; Siddiq, Pakiza

    2015-10-01

    Sonchus asper is traditionally used in the treatment of renal dysfunction. In the present study, the protective effects of S. asper against carbon tetrachloride (CCl4)-induced nephrotoxicity in rats were determined. In this study, 24 male albino rats (190-200 g) were equally divided into four groups. Group I (control group) was given saline (1 ml/kg body weight (b.w.), 0.85% NaCl) and dimethyl sulfoxide (1 ml/kg b.w.); group II was treated with CCl4 (1 ml/kg b.w. intraperitoneally); groups III and IV were administered CCl4 and, after 48 h, S. asper n-hexane extract (SHE; 100 and 200 mg/kg b.w.). All the treatments were given twice a week for 4 weeks. The results revealed that CCl4 induced oxidative stress, as evidenced by significant depletion of antioxidant enzymes, namely superoxide dismutase, catalase, peroxidase, glutathione-S-transferase, glutathione peroxidase and glutathione reductase, and of glutathione contents, together with increased lipid peroxidation (thiobarbituric acid-reactive substances). Administration of SHE significantly ameliorated (p < 0.01) the activity of antioxidant enzymes and reduced lipid peroxides. Coadministration revealed that S. asper extract can protect the kidney against CCl4-mediated oxidative damage by restoring the activity of antioxidant enzymes, owing to the presence of bioactive plant constituents. © The Author(s) 2013.

  7. Effects of asperity contact on stick-slip dynamics

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Tetsuo

    2017-04-01

    It is believed that asperity contact plays an important role in friction, in particular in the onset of dynamic slip or stick-slip motion. However, there remain very few studies that control asperities and observe their effects on macroscopic stick-slip behavior or frictional constitutive laws. Here we perform stick-slip friction experiments between compliant gels with well-controlled asperity shapes, sizes and configurations produced by a molding technique. We find that, as the curvature radius of the asperity becomes larger and the normal stress becomes smaller, the velocity dependence turns from rate-strengthening to rate-weakening and, accordingly, the frictional behavior transitions from steady sliding through slow slip to fast slip. In this talk, we discuss the asperity size effects based on microscopic/macroscopic observations as well as a theoretical argument.

  8. Atomistic Simulation of Single Asperity Contact

    NASA Astrophysics Data System (ADS)

    Kromer, Philip; Marder, Michael

    2003-03-01

    In the standard (Bowden and Tabor) model of friction, the macroscopic behavior of sliding results from the deformation of microscopic asperities in contact. A recent idea instead extracts macroscopic friction from the aggregate behavior of traveling, self-healing interfacial cracks: certain families of cracks are found to be mathematically forbidden, and the envelope of allowed cracks dictates the familiar Coulomb law of friction. To explore the connection between the new and traditional pictures of friction, we conducted molecular dynamics (MD) simulations of single-asperity contact subjected to an oscillatory sliding force -- a geometry important for the problem of fretting (damage due to small-scale vibratory contact). Our simulations reveal the importance of traveling interface cracks to the dynamics of slip at the interface, and illuminate the dynamics of crack initiation and suppression.

  9. Neuroprotective effect of the methanolic extract of Hibiscus asper leaves in 6-hydroxydopamine-lesioned rat model of Parkinson's disease.

    PubMed

    Hritcu, Lucian; Foyet, Harquin Simplice; Stefan, Marius; Mihasan, Marius; Asongalem, Acha Emmanuel; Kamtchouing, Pierre

    2011-09-01

    While Hibiscus asper Hook.f. (Malvaceae) is a traditional herb largely used in tropical regions of Africa as a vegetable and as a potent sedative, tonic, restorative, anti-inflammatory and antidepressive drug, there are very few scientific data concerning its efficacy. The antioxidant and antiapoptotic activities of the methanolic extract of Hibiscus asper leaves (50 and 100 mg/kg) were assessed using superoxide dismutase (SOD), glutathione peroxidase (GPX) and catalase (CAT) specific activities, total glutathione (GSH) content, malondialdehyde (MDA) level (lipid peroxidation) and DNA fragmentation assays in male Wistar rats subjected to unilateral 6-hydroxydopamine (6-OHDA) lesion. In 6-OHDA-lesioned rats, the methanolic extract of Hibiscus asper leaves showed potent antioxidant and antiapoptotic activities. Chronic administration of the methanolic extract (50 and 100 mg/kg, i.p., daily, for 7 days) significantly increased antioxidant enzyme activities (SOD, GPX and CAT) and total GSH content and reduced lipid peroxidation (MDA level) in rat temporal lobe homogenates, suggesting antioxidant activity. Also, DNA cleavage patterns were absent in the 6-OHDA-lesioned rats treated with the methanolic extract of Hibiscus asper leaves, suggesting antiapoptotic activity. Taken together, our results suggest that the methanolic extract of Hibiscus asper leaves possesses neuroprotective activity against 6-OHDA-induced toxicity, through antioxidant and antiapoptotic activities, in this Parkinson's disease model. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  10. Characteristics of Asperity Damage and Its Influence on the Shear Behavior of Granite Joints

    NASA Astrophysics Data System (ADS)

    Meng, Fanzhen; Zhou, Hui; Wang, Zaiquan; Zhang, Chuanqing; Li, Shaojun; Zhang, Liming; Kong, Liang

    2018-02-01

    Surface roughness significantly affects the shear behavior of rock joints; thus, studies on the asperity damage characteristics and its influence on the shear behavior of joints are extremely important. In this paper, shear tests were conducted on tensile granite joints; asperity damage was evaluated based on acoustic emission (AE) events; and the influence of asperity damage on joint shear behavior was analyzed. The results indicated that the total AE events tended to increase with normal stress. In addition, the asperity damage initiation shear stress, which is defined as the transition point from slow growth to rapid growth in the cumulative events curve, was approximately 0.485 of the peak shear strength regardless of the normal stress. Moreover, 63-85% of the AE events were generated after the peak shear stress, indicating that most of the damage occurred in this stage. Both the dilation and the total AE events decreased with shear cycles because of the damage inflicted on asperities during the previous shear cycle. Two stages were observed in the normal displacement curves under low normal stress, whereas three stages (compression, dilation and compression again) were observed at a higher normal stress; the second compression stage may be caused by tensile failure outside the shear plane. The magnitude of the normal stress and the state of asperity are two important factors controlling the post-peak stress drop and stick-slip of granite joints. Serious deterioration of asperities will stop stick-slip from recurring under the same normal stress because the ability to accumulate energy is decreased. The AE b-value increases with the number of shear cycles, indicating that the stress concentration inside the fault plane is reduced because of asperity damage; thus, the potential for dynamic disasters, such as fault-slip rockbursts, will be decreased.

  11. An Analytical Model for Two-Order Asperity Degradation of Rock Joints Under Constant Normal Stiffness Conditions

    NASA Astrophysics Data System (ADS)

    Li, Yingchun; Wu, Wei; Li, Bo

    2018-05-01

    Jointed rock masses during underground excavation are commonly located under the constant normal stiffness (CNS) condition. This paper presents an analytical formulation to predict the shear behaviour of rough rock joints under the CNS condition. The dilatancy and deterioration of two-order asperities are quantified by considering the variation of normal stress. We separately consider the dilation angles of waviness and unevenness, which decrease to zero as the normal stress approaches the transitional stress. The sinusoidal function naturally yields the decay of the dilation angle as a function of relative normal stress. We assume that the magnitude of the transitional stress is proportional to the square root of the asperity geometric area. The comparison between the analytical prediction and experimental data shows the reliability of the analytical model. All the parameters involved in the analytical model possess explicit physical meanings and are measurable from laboratory tests. The proposed model is potentially practicable for assessing the stability of underground structures at various field scales.
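
    As an illustration only, the sketch below uses one sinusoidal decay consistent with the stated limits (full dilation angle at zero normal stress, zero at the transitional stress); the paper's exact expression may differ, and the parameter values here are assumptions.

    ```python
    # Illustrative sketch only: one sinusoidal decay consistent with the limits stated in
    # the abstract (d0 at zero normal stress, zero at the transitional stress sigma_T).
    # The paper's exact expression may differ; d0 and sigma_T are assumed values.
    import numpy as np

    d0      = 15.0   # initial dilation angle of an asperity order, degrees (assumed)
    sigma_T = 5.0e6  # transitional normal stress, Pa (assumed)

    def dilation_angle(sigma_n):
        """Sinusoidal decay: d0 at sigma_n = 0, zero at sigma_n = sigma_T."""
        ratio = np.clip(sigma_n / sigma_T, 0.0, 1.0)
        return d0 * np.cos(0.5 * np.pi * ratio)

    for s in (0.0, 1e6, 3e6, 5e6):
        print(f"sigma_n = {s / 1e6:.0f} MPa -> dilation angle = {dilation_angle(s):.2f} deg")
    ```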

  12. Two-dimensional random surface model for asperity-contact in elastohydrodynamic lubrication

    NASA Technical Reports Server (NTRS)

    Coy, J. J.; Sidik, S. M.

    1979-01-01

    Relations for the asperity-contact time function during elastohydrodynamic lubrication of a ball bearing are presented. The analysis is based on a two-dimensional random surface model, and actual profile traces of the bearing surfaces are used as statistical sample records. The results of the analysis show that the transition from 90 percent contact to 1 percent contact occurs within a dimensionless film thickness range of approximately four to five. This thickness ratio is several times larger than reported in the literature where one-dimensional random surface models were used. It is shown that low-pass filtering of the statistical records will bring agreement between the present results and those in the literature.
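
    For orientation, the sketch below gives a toy pointwise contact-probability estimate for Gaussian composite roughness as a function of the dimensionless film thickness Λ = h/σ; it is not the report's two-dimensional random surface model, and the report's contact-time results differ from such simple estimates.

    ```python
    # Toy calculation (not the report's 2D random-surface model): for Gaussian composite
    # roughness of standard deviation sigma and dimensionless film thickness
    # Lambda = h / sigma, the pointwise probability that the roughness exceeds the film
    # is P = 0.5 * erfc(Lambda / sqrt(2)).
    import math

    for lam in (1.0, 2.0, 3.0, 4.0, 5.0):
        p = 0.5 * math.erfc(lam / math.sqrt(2.0))
        print(f"Lambda = {lam:.1f} -> pointwise contact probability ~ {p:.2e}")
    ```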

  13. Microscopic asperity contact and deformation of ultrahigh molecular weight polyethylene bearing surfaces.

    PubMed

    Wang, F C; Jin, Z M; McEwen, H M J; Fisher, J

    2003-01-01

    The effect of the roughness and topography of ultrahigh molecular weight polyethylene (UHMWPE) bearing surfaces on the microscopic contact mechanics with a metallic counterface was investigated in the present study. Both simple sinusoidal roughness forms, with a wide range of amplitudes and wavelengths, and real surface topographies, measured before and after wear testing in a simple pin-on-plate machine, were considered in the theoretical analysis. The finite difference method was used to solve the microscopic contact between the rough UHMWPE bearing surface and a smooth hard counterface. The fast Fourier transform (FFT) was used to cope with the large number of mesh points required to represent the surface topography of the UHMWPE bearing surface. It was found that only isolated asperity contacts occurred under physiological loading, and the real contact area was only a small fraction of the nominal contact area. Consequently, the average contact pressure experienced at the articulating surfaces was significantly higher than the nominal contact pressure. Furthermore, it was shown that the majority of asperities on the worn UHMWPE pin were deformed in the elastic region, and consideration of the plastic deformation only resulted in a negligible increase in the predicted asperity contact area. Microscopic asperity contact and deformation mechanisms may play an important role in the understanding of the wear mechanisms of UHMWPE bearing surfaces.

  14. An artificial stress asperity for initialization of spontaneous rupture propagation - a parametric study of a dynamic model with linear slip-weakening friction

    NASA Astrophysics Data System (ADS)

    Galis, M.; Pelties, C.; Kristek, J.; Moczo, P.

    2012-04-01

    Artificial procedures are used to initiate spontaneous rupture on faults with the linear slip-weakening (LSW) friction law. Probably the most frequent technique is the stress asperity. It is important to minimize the effects of the artificial initialization on the phase of spontaneous rupture propagation. The effects may strongly depend on the geometry and size of the asperity, the spatial distribution of the stress in and around the asperity, and the maximum stress-overshoot value. A square initialization zone with the stress discontinuously falling at the asperity border to the level of the initial stress has frequently been applied (e.g., in the SCEC verification exercise). Galis et al. (2010) and Bizzarri (2010) independently introduced the elliptical asperity with a smooth spatial stress distribution in and around the asperity. In both papers the width of the smoothing/tapering zone was defined only ad hoc. Numerical simulations indicate that the ADER-DG method can account for a discontinuous-stress initialization more accurately than the FE method. Taking the ADER-DG solution as a reference, we performed numerical simulations in order to define the width of the smoothing/tapering zone to be used in the FE and FD-FE hybrid methods for spontaneous rupture propagation. We considered different sizes of the initialization zone, different shapes of the initialization zone (square, circle, ellipse), different spatial distributions of stress (smooth, discontinuous), and different stress-overshoot values to investigate the conditions of spontaneous rupture propagation. We compare our numerical results with the 2D and 3D estimates by Andrews (1976a,b), Day (1982), Campillo & Ionescu (1997), Favreau et al. (1999) and Uenishi & Rice (2003, 2004). The results of our study may help modelers to better set up the initialization zone in order to avoid, e.g., a too-large initialization zone, and to reduce numerical artifacts.
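
    For reference, the linear slip-weakening (LSW) friction law named above lowers the shear strength linearly from a static level τs to a dynamic level τd over a slip-weakening distance Dc. The sketch below evaluates this law with assumed values; it is not the authors' rupture code.

    ```python
    # Minimal sketch (assumed illustrative values, not the authors' code): linear
    # slip-weakening friction. Strength drops linearly from the static level tau_s to
    # the dynamic level tau_d over the slip-weakening distance Dc, then stays at tau_d.
    def lsw_strength(slip, tau_s=10e6, tau_d=6e6, Dc=0.4):
        """Linear slip-weakening strength in Pa; slip and Dc in metres (values assumed)."""
        return tau_s - (tau_s - tau_d) * min(slip, Dc) / Dc

    for slip in (0.0, 0.1, 0.2, 0.4, 1.0):
        print(f"slip = {slip:.2f} m -> strength = {lsw_strength(slip) / 1e6:.2f} MPa")
    ```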

  15. Snake venomics and antivenomics of Bothrops colombiensis, a medically important pitviper of the Bothrops atrox-asper complex endemic to Venezuela: Contributing to its taxonomy and snakebite management.

    PubMed

    Calvete, Juan J; Borges, Adolfo; Segura, Alvaro; Flores-Díaz, Marietta; Alape-Girón, Alberto; Gutiérrez, José María; Diez, Nardy; De Sousa, Leonardo; Kiriakos, Demetrio; Sánchez, Eladio; Faks, José G; Escolano, José; Sanz, Libia

    2009-03-06

    The taxonomic status of the medically important pitviper of the Bothrops atrox-asper complex endemic to Venezuela, which has been classified as Bothrops colombiensis, remains incertae sedis. To help resolve this question, the venom proteome of B. colombiensis was characterized by reverse-phase HPLC fractionation followed by analysis of each chromatographic fraction by SDS-PAGE, N-terminal sequencing, MALDI-TOF mass fingerprinting, and collision-induced dissociation tandem mass spectrometry of tryptic peptides. The venom contained proteins belonging to eight protein families. PI Zn(2+)-metalloproteinases and K49 PLA(2) molecules comprise over 65% of the venom proteins. Other venom protein families comprised PIII Zn(2+)-metalloproteinases (11.3%), D49 PLA(2)s (10.2%), L-amino acid oxidase (5.7%), the medium-sized disintegrin colombistatin (5.6%), serine proteinases (1%), bradykinin-potentiating peptides (0.8%), a DC-fragment (0.5%), and a CRISP protein (0.1%). A comparison of the venom proteomes of B. colombiensis and B. atrox did not support the suggested synonymy between these two species. The closest homologues to B. colombiensis venom proteins appeared to be toxins from B. asper. A rough estimation of the similarity between the venoms of B. colombiensis and B. asper indicated that these species share approximately 65-70% of their venom proteomes. The close kinship of B. colombiensis and B. asper points at the ancestor of B. colombiensis as the founding Central American B. asper ancestor. This finding may be relevant for reconstructing the natural history and cladogenesis of Bothrops. Further, the virtually indistinguishable immunological crossreactivity of a Venezuelan ABC antiserum (raised against a mixture of B. colombiensis and Crotalus durissus cumanensis venoms) and the Costa Rican ICP polyvalent antivenom (generated against a mixture of B. asper, Crotalus simus, and Lachesis stenophrys venoms) towards the venoms of B. colombiensis and B. asper, supports this

  16. Investigating Alkylsilane Monolayer Tribology at a Single-Asperity Contact with Molecular Dynamics Simulation.

    PubMed

    Summers, Andrew Z; Iacovella, Christopher R; Cummings, Peter T; McCabe, Clare

    2017-10-24

    Chemisorbed monolayer films are known to possess favorable characteristics for nanoscale lubrication of micro- and nanoelectromechanical systems (MEMS/NEMS). Prior studies have shown that the friction observed for monolayer-coated surfaces features a strong dependence on the geometry of contact. Specifically, tip-like geometries have been shown to penetrate into monolayer films, inducing defects in the monolayer chains and leading to plowing mechanisms during shear, which result in higher coefficients of friction (COF) than those observed for planar geometries. In this work, we use molecular dynamics simulations to examine the tribology of model silica single-asperity contacts under shear with monolayer-coated substrates featuring various film densities. It is observed that lower monolayer densities lead to reduced COFs, in contrast to results for planar systems where COF is found to be nearly independent of monolayer density. This is attributed to a liquid-like response to shear, whereby fewer defects are imparted in monolayer chains from the asperity, and chains are easily displaced by the tip as a result of the higher free volume. This transition in the mechanism of molecular plowing suggests that liquid-like films should provide favorable lubrication at single-asperity contacts.

  17. 'Two go together': Near-simultaneous moment release of two asperities during the 2016 Mw 6.6 Muji, China earthquake

    NASA Astrophysics Data System (ADS)

    Bie, Lidong; Hicks, Stephen; Garth, Thomas; Gonzalez, Pablo; Rietbrock, Andreas

    2018-06-01

    On 25 November 2016, a Mw 6.6 earthquake ruptured the Muji fault in western Xinjiang, China. We investigate the earthquake rupture independently using geodetic observations from Interferometric Synthetic Aperture Radar (InSAR) and regional seismic recordings. To constrain the fault geometry and slip distribution, we test different combinations of fault dip and slip direction to reproduce the InSAR observations. Both the InSAR observations and the optimal distributed slip model suggest buried rupture of two asperities separated by a gap of greater than 5 km. Additional seismic gaps exist at the ends of both asperities that failed in the 2016 earthquake. To reveal the dynamic history of asperity failure, we inverted regional seismic waveforms for multiple centroid moment tensors and constructed a moment rate function. The results show a small centroid time gap of 2.6 s between the two sub-events. Considering the >5 km gap between the two asperities and the short time interval, we propose that the two asperities failed near-simultaneously, rather than in a cascading rupture propagation style. The second sub-event is located ∼39 km to the east of the epicenter, with a centroid time of 10.7 s. This leads to an estimate of the average velocity of 3.7 km/s as an upper bound, consistent with the upper-crust shear wave velocity in this region. We interpret that the rupture front propagates at sub-shear wave velocities, but that the second sub-event has a reduced or asymmetric rupture time, leading to the apparent near-simultaneous moment release of the two asperities.

  18. Asperity-Level Origins of Transition from Mild to Severe Wear

    NASA Astrophysics Data System (ADS)

    Aghababaei, Ramin; Brink, Tobias; Molinari, Jean-François

    2018-05-01

    Wear is the inevitable damage process of surfaces during sliding contact. According to the well-known Archard wear law, the wear volume scales with the real contact area and as a result is proportional to the load. Decades of wear experiments, however, show that this relation only holds up to a certain load limit, above which the linearity is broken and a transition from mild to severe wear occurs. We investigate the microscopic origins of this breakdown and the corresponding wear transition at the asperity level. Our atomistic simulations reveal that the interaction between subsurface stress fields of neighboring contact spots promotes the transition from mild to severe wear. The results show that this interaction triggers the deep propagation of subsurface cracks and the eventual formation of large debris particles, with a size corresponding to the apparent contact area of neighboring contact spots. This observation explains the breakdown of the linear relation between the wear volume and the normal load in the severe wear regime. This new understanding highlights the critical importance of studying contact beyond the elastic limit and beyond single-asperity models.

  19. Habitat separation of prickly sculpin, Cottus asper, and coastrange sculpin, Cottus aleuticus, in the mainstem Smith River, northwestern California

    Treesearch

    Jason L. White; Bret Harvey

    1999-01-01

    Sympatric coastrange sculpin, Cottus aleuticus, and prickly sculpin, C. asper, occupied distinct habitats in the mainstem Smith River, northwestern California. For example, 90% of coastrange sculpin (n = 294) used habitat with water velocity > 5 cm s-1, whereas 89% of prickly sculpin (n = 981) used...

  20. Trench Parallel Bouguer Anomaly (TPBA): A robust measure for statically detecting asperities along the forearc of subduction zones

    NASA Astrophysics Data System (ADS)

    Raeesi, M.

    2009-05-01

    During the 1970s, some researchers noticed that large earthquakes occur repeatedly at the same locations. These observations led to the asperity hypothesis. At the same time, some researchers noticed that there was a relationship between the location of great interplate earthquakes and the submarine structures, basins in particular, over the rupture area in the forearc regions. Despite these observations there was no comprehensive and reliable hypothesis explaining the relationship, and there were numerous pros and cons to the various hypotheses offered in this regard. In their pioneering study, Song and Simons (2003) approached the problem using gravity data. This was a turning point in seismology. Although their approach was correct, an appropriate gravity anomaly had to be used in order to reveal the location and extent of the asperities. Following the method of Song and Simons (2003), but using the Bouguer gravity anomaly that we call the "Trench Parallel Bouguer Anomaly", TPBA, we found a strong, logical, and convincing relation between the TPBA-derived asperities and the slip distribution as well as the earthquake distribution, foreshocks and aftershocks in particular. Various parameters with different levels of importance are known to affect the contact between the subducting and the overriding plates, and we found that the TPBA can show which are the important factors. Because the TPBA-derived asperities are based on static physical properties (gravity and elevation), they do not suffer from instabilities due to trade-offs, as happens for asperities derived in dynamic studies such as waveform inversion. Comparison of the TPBA-derived asperities with the rupture processes of well-studied great earthquakes reveals the high level of accuracy of the TPBA. This new measure opens a forensic viewpoint on the rupture process along subduction zones. The TPBA reveals the reason behind magnitude 9+ earthquakes and explains where and why they occur. The TPBA reveals the areas that can

  1. EVALUATION OF ANTIHYPERTENSIVE ACTIVITY OF SONCHUS ASPER L. IN RATS.

    PubMed

    Mushtaq, Muhammad Naveed; Akhtar, Muhammad Shoaib; Alamgeer; Ahmad, Taseer; Khan, Hafeez Ullah; Maheen, Safirah; Ahsan, Haseeb; Naz, Huma; Asif, Hira; Younis, Waqas; Tabassum, Nazia

    2016-01-01

    The present investigation was carried out to evaluate the effect of the aerial parts of Sonchus asper L. in normotensive rats and in rats with hypertension induced by glucose and an egg-feed diet. An aqueous-methanolic extract of Sonchus asper at doses of 250, 500 and 1000 mg/kg was studied in normotensive and glucose-induced hypertensive rats using a non-invasive technique. The results obtained showed that the extract significantly (p < 0.5 - p < 0.001) decreased blood pressure and heart rate in a dose-dependent manner. The 1000 mg/kg dose of the extract produced the maximum antihypertensive effect and was selected for further experiments. The extract was found to prevent the rise in blood pressure of egg- and glucose-fed rats as compared to the control group in a 21-day study. The LD50 of the plant extract was 3500 mg/kg b.w. in mice, and a sub-chronic toxicity study showed that there was no significant alteration in the blood chemistry of the extract-treated rats. It is conceivable, therefore, that the aqueous-methanolic extract of Sonchus asper exerts considerable antihypertensive activity in rats and duly supports the traditional medicinal use of the plant in hypertension.

  2. ASPER Research and Evaluation Projects 1970-79.

    ERIC Educational Resources Information Center

    1980

    This inventory of research and evaluation projects completed during calendar years 1970-1979 for the Office of the Assistant Secretary for Policy, Education, and Research (ASPER) of the Department of Labor contains summaries of projects on economic, social, and policy background; the labor market; the nature and impact of Department of Labor…

  3. Isolation and Functional Characterization of an Acidic Myotoxic Phospholipase A₂ from Colombian Bothrops asper Venom.

    PubMed

    Posada Arias, Silvia; Rey-Suárez, Paola; Pereáñez J, Andrés; Acosta, Cristian; Rojas, Mauricio; Delazari Dos Santos, Lucilene; Ferreira, Rui Seabra; Núñez, Vitelbina

    2017-10-26

    Myotoxic phospholipases A₂ (PLA₂) are responsible for many clinical manifestations in envenomation by Bothrops snakes. A new myotoxic acidic Asp49 PLA₂ (BaCol PLA₂) was isolated from Colombian Bothrops asper venom using reverse-phase high performance liquid chromatography (RP-HPLC). BaCol PLA₂ had a molecular mass of 14,180.69 Da (by mass spectrometry) and an isoelectric point of 4.4. The complete amino acid sequence was obtained by cDNA cloning (GenBank accession No. MF319968) and revealed a mature product of 124 amino acids with Asp at position 49. BaCol PLA₂ showed structural homology with other acidic PLA₂ isolated from Bothrops venoms, including a non-myotoxic PLA₂ from Costa Rican B. asper. In vitro studies showed cell membrane damage without exposure of phosphatidylserine, an early apoptosis hallmark. BaCol PLA₂ had high indirect hemolytic activity and moderate anticoagulant action. In mice, BaCol PLA₂ caused marked edema and myotoxicity, the latter seen as an increase in plasma creatine kinase and histological damage to gastrocnemius muscle fibers that included vacuolization and hyalinization necrosis of the sarcoplasm.

  4. A 3D contact analysis approach for the visualization of the electrical contact asperities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roussos, Constantinos C.; Swingler, Jonathan

    The electrical contact is an important phenomenon that should be taken into consideration to achieve better performance and long-term reliability in the design of devices. Based upon this importance, the electrical contact interface has been visualized as a "3D Contact Map" and used in order to investigate the contact asperities. The contact asperities describe the structures above and below the contact spots (the contact spots define the 3D contact map) belonging to the two conductors which make up the contact system. Investigating the contact asperities requires the discretization of the 3D microstructure of the contact system into voxels. A contact analysis approach has been developed and introduced in this paper which shows the way to the 3D visualization of the contact asperities of a given contact system. For the discretization of the 3D microstructure of the contact system into voxels, the X-ray Computed Tomography (CT) method is used to collect data on a 250 V, 16 A rated AC single-pole rocker switch, which is used as the contact system for investigation.

  5. A 3D contact analysis approach for the visualization of the electrical contact asperities

    PubMed Central

    Swingler, Jonathan

    2017-01-01

    The electrical contact is an important phenomenon that should be taken into consideration to achieve better performance and long-term reliability in the design of devices. Based upon this importance, the electrical contact interface has been visualized as a "3D Contact Map" and used in order to investigate the contact asperities. The contact asperities describe the structures above and below the contact spots (the contact spots define the 3D contact map) belonging to the two conductors which make up the contact system. Investigating the contact asperities requires the discretization of the 3D microstructure of the contact system into voxels. A contact analysis approach has been developed and introduced in this paper which shows the way to the 3D visualization of the contact asperities of a given contact system. For the discretization of the 3D microstructure of the contact system into voxels, the X-ray Computed Tomography (CT) method is used to collect data on a 250 V, 16 A rated AC single-pole rocker switch, which is used as the contact system for investigation. PMID:28105383

  6. A 3D contact analysis approach for the visualization of the electrical contact asperities

    DOE PAGES

    Roussos, Constantinos C.; Swingler, Jonathan

    2017-01-11

    The electrical contact is an important phenomenon that should be taken into consideration to achieve better performance and long-term reliability in the design of devices. Based upon this importance, the electrical contact interface has been visualized as a "3D Contact Map" and used in order to investigate the contact asperities. The contact asperities describe the structures above and below the contact spots (the contact spots define the 3D contact map) belonging to the two conductors which make up the contact system. Investigating the contact asperities requires the discretization of the 3D microstructure of the contact system into voxels. A contact analysis approach has been developed and introduced in this paper which shows the way to the 3D visualization of the contact asperities of a given contact system. For the discretization of the 3D microstructure of the contact system into voxels, the X-ray Computed Tomography (CT) method is used to collect data on a 250 V, 16 A rated AC single-pole rocker switch, which is used as the contact system for investigation.
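
    As a purely illustrative sketch of what a voxel-based contact-map computation might look like (not the authors' X-ray CT pipeline), the snippet below labels synthetic voxels as void or as one of two conductors and marks locations where the two conductors are adjacent along the contact normal.

    ```python
    # Illustrative sketch only (synthetic data, not the authors' CT workflow): in a
    # voxelized two-material volume, mark "contact spot" voxels where conductor A is
    # directly adjacent to conductor B along the z (contact-normal) axis, and project
    # the result into a 2D toy contact map.
    import numpy as np

    rng = np.random.default_rng(1)
    labels = rng.integers(0, 3, size=(4, 64, 64))  # 0 = void, 1 = conductor A, 2 = conductor B

    a = labels[:-1] == 1
    b = labels[1:] == 2
    contact_voxels = a & b                      # an A voxel with a B voxel immediately above it
    contact_map = contact_voxels.any(axis=0)    # project along z -> 2D contact map
    print(f"contact spots cover {contact_map.mean():.1%} of the apparent area")
    ```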

  7. "Habitat separation of prickly sculpin, Cottus asper, and coastrange sculpin, Cottus aleuticus, in the mainstem Smith River, northwestern California"

    Treesearch

    Jason L. White; Bret C. Harvey

    1999-01-01

    Sympatric coastrange sculpin, Cottus aleuticus, and prickly sculpin, C. asper, occupied distinct habitats in the mainstem Smith River, northwestern California. For example, 90% of coastrange sculpin (n = 294) used habitat with water velocity > 5 cm s-1, whereas 89% of prickly sculpin (n = 981) used habitat with water velocity ≤ 5 cm s-1. Sixty-five percent of...

  8. The 2014, MW6.9 North Aegean earthquake: seismic and geodetic evidence for coseismic slip on persistent asperities

    NASA Astrophysics Data System (ADS)

    Konca, Ali Ozgun; Cetin, Seda; Karabulut, Hayrullah; Reilinger, Robert; Dogan, Ugur; Ergintav, Semih; Cakir, Ziyadin; Tari, Ergin

    2018-05-01

    We report that asperities with the highest coseismic slip in the 2014 MW6.9 North Aegean earthquake persisted through the interseismic, coseismic and immediate post-seismic periods. We use GPS and seismic data to obtain the source model of the 2014 earthquake, which is located on the western extension of the North Anatolian Fault (NAF). The earthquake ruptured a bilateral, 90 km strike-slip fault with three slip patches: one asperity located west of the hypocentre and two to the east with a rupture duration of 40 s. Relocated pre-earthquake seismicity and aftershocks show that zones with significant coseismic slip were relatively quiet during both the 7 yr of interseismic and the 3-month aftershock periods, while the surrounding regions generated significant seismicity during both the interseismic and post-seismic periods. We interpret the unusually long fault length and source duration, and distribution of pre- and post-main-shock seismicity as evidence for a rupture of asperities that persisted through strain accumulation and coseismic strain release in a partially coupled fault zone. We further suggest that the association of seismicity with fault creep may characterize the adjacent Izmit, Marmara Sea and Saros segments of the NAF. Similar behaviour has been reported for sections of the San Andreas Fault, and some large subduction zones, suggesting that the association of seismicity with creeping fault segments and rapid relocking of asperities may characterize many large earthquake faults.

  9. Phylogeography of the Central American lancehead Bothrops asper (SERPENTES: VIPERIDAE).

    PubMed

    Saldarriaga-Córdoba, Mónica; Parkinson, Christopher L; Daza, Juan M; Wüster, Wolfgang; Sasa, Mahmood

    2017-01-01

    The uplift and final connection of the Central American land bridge is considered the major event that allowed biotic exchange between vertebrate lineages of northern and southern origin in the New World. However, given the complex tectonics that shaped Middle America, there is still substantial controversy over details of this geographical reconnection, and its role in determining biogeographic patterns in the region. Here, we examine the phylogeography of Bothrops asper, a widely distributed pitviper in Middle America and northwestern South America, in an attempt to evaluate how the final Isthmian uplift and other biogeographical boundaries in the region influenced genealogical lineage divergence in this species. We examined sequence data from two mitochondrial genes (MT-CYB and MT-ND4) from 111 specimens of B. asper, representing 70 localities throughout the species' distribution. We reconstructed phylogeographic patterns using maximum likelihood and Bayesian methods and estimated divergence time using the Bayesian relaxed clock method. Within the nominal species, an early split led to two divergent lineages of B. asper: one includes five phylogroups distributed in Caribbean Middle America and southwestern Ecuador, and the other comprises five other groups scattered in the Pacific slope of Isthmian Central America and northwestern South America. Our results provide evidence of a complex transition that involves at least two dispersal events into Middle America during the final closure of the Isthmus.

  10. Phylogeography of the Central American lancehead Bothrops asper (SERPENTES: VIPERIDAE)

    PubMed Central

    Parkinson, Christopher L.; Daza, Juan M.; Wüster, Wolfgang

    2017-01-01

    The uplift and final connection of the Central American land bridge is considered the major event that allowed biotic exchange between vertebrate lineages of northern and southern origin in the New World. However, given the complex tectonics that shaped Middle America, there is still substantial controversy over details of this geographical reconnection, and its role in determining biogeographic patterns in the region. Here, we examine the phylogeography of Bothrops asper, a widely distributed pitviper in Middle America and northwestern South America, in an attempt to evaluate how the final Isthmian uplift and other biogeographical boundaries in the region influenced genealogical lineage divergence in this species. We examined sequence data from two mitochondrial genes (MT-CYB and MT-ND4) from 111 specimens of B. asper, representing 70 localities throughout the species’ distribution. We reconstructed phylogeographic patterns using maximum likelihood and Bayesian methods and estimated divergence time using the Bayesian relaxed clock method. Within the nominal species, an early split led to two divergent lineages of B. asper: one includes five phylogroups distributed in Caribbean Middle America and southwestern Ecuador, and the other comprises five other groups scattered in the Pacific slope of Isthmian Central America and northwestern South America. Our results provide evidence of a complex transition that involves at least two dispersal events into Middle America during the final closure of the Isthmus. PMID:29176806

  11. Reproductive biology of the endangered percid Zingel asper in captivity: a histological description of the male reproductive cycle.

    PubMed

    Chevalier, Christine; de Conto, Christine; Exbrayat, Jean-Marie

    2011-01-01

    The endemic Rhodanian percid Zingel asper (Linnaeus, 1758) is usually found throughout the Rhône basin, but this fish is now in sharp decline. Understanding its reproductive physiology is important in order to be able to artificially control its reproduction with a view to re-introducing it. This study was carried out on a population obtained by artificial fertilization and bred in external tanks. Fishes were observed from the juvenile stages through to adulthood. Patterns of testicular development were defined from histological observations. The testes of Z. asper were paired, elongated and fusiform dorsocaudal organs. The two lobes of each gonad joined together to form a duct that extended to the urogenital papillae. They showed a lobular structure. The testicular lobules were of the unrestricted spermatogonial type. Up to 10 months of age, most of the males were immature: their testes showed only type A spermatogonia. The appearance of type B spermatogonia in the lobules of a testis indicated the beginning of spermatogenesis in 10-month-old fish. Spermiogenesis occurred 24 months after fertilization and, in 26-month-old fish, the cysts opened and released spermatozoa into the lumen of the lobules. Spermiation was of the cystic type. During the third year, histological observations pointed to the same evolution of the adult gonads as during the second year. Sexual maturity was reached in captive Z. asper after two years. Spawning occurred in May under the breeding conditions.

  12. A mechanistic understanding of the wear coefficient: From single to multiple asperities contact

    NASA Astrophysics Data System (ADS)

    Frérot, Lucas; Aghababaei, Ramin; Molinari, Jean-François

    2018-05-01

    Sliding contact between solids leads to material detaching from their surfaces in the form of debris particles, a process known as wear. According to the well-known Archard wear model, the wear volume (i.e. the volume of detached particles) is proportional to the load and the sliding distance, while being inversely proportional to the hardness. The influences of other parameters are empirically merged into a factor, referred to as the wear coefficient, which does not stem from any theoretical development, thus limiting the predictive capacity of the model. Based on a recent understanding of a critical length scale controlling wear particle formation, we present two novel derivations of the wear coefficient: one based on Archard's interpretation of the wear coefficient as the probability of wear particle detachment, and one that follows naturally from the up-scaling of asperity-level physics into a generic multi-asperity wear model. As a result, the variations of the wear rate and wear coefficient are discussed in terms of the properties of the interface, surface roughness parameters and applied load for various rough contact situations. Both new wear interpretations are evaluated analytically and numerically, and recover some key features of wear observed in experiments. This work shines new light on the understanding of wear, potentially opening a pathway for calculating the wear coefficient from first principles.
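
    For context, the Archard relation referenced above can be written V = K F s / H, i.e. wear volume from the wear coefficient K, normal load F, sliding distance s, and hardness H. The sketch below evaluates it with placeholder values; the paper's contribution is precisely a physical derivation of K, which is treated here as an assumed constant.

    ```python
    # Minimal sketch (not the authors' derivation): the Archard relation V = K * F * s / H,
    # with an assumed wear coefficient K. All numerical values are placeholders.
    K = 1e-3        # dimensionless wear coefficient (assumed)
    F = 50.0        # normal load, N (assumed)
    s = 100.0       # sliding distance, m (assumed)
    H = 2.0e9       # hardness, Pa (assumed)

    V = K * F * s / H   # wear volume, m^3
    print(f"wear volume V = {V:.3e} m^3  ({V * 1e9:.3f} mm^3)")
    ```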

  13. Fixed point and anomaly mediation in partial N = 2 supersymmetric standard models

    NASA Astrophysics Data System (ADS)

    Yin, Wen

    2018-01-01

    Motivated by the simple toroidal compactification of extra-dimensional SUSY theories, we investigate a partial N = 2 supersymmetric (SUSY) extension of the standard model which has an N = 2 SUSY sector and an N = 1 SUSY sector. We point out that below the scale of the partial breaking of N = 2 to N = 1, the ratio of Yukawa to gauge couplings embedded in the original N = 2 gauge interaction in the N = 2 sector becomes greater due to a fixed point. Since at the partial breaking scale the sfermion masses in the N = 2 sector are suppressed due to the N = 2 non-renormalization theorem, the anomaly mediation effect becomes important. If dominant, the anomaly-induced masses for the sfermions in the N = 2 sector are almost UV-insensitive due to the fixed point. Interestingly, these masses are always positive, i.e. there is no tachyonic slepton problem. From an example model, we show interesting phenomena differing from the ordinary MSSM. In particular, the dark matter particle can be a sbino, i.e. the scalar component of the N = 2 vector multiplet of U(1)_Y. To obtain the correct dark matter abundance, the mass of the sbino, as well as those of the MSSM sparticles in the N = 2 sector which have a typical mass pattern of anomaly mediation, is required to be small. Therefore, this scenario can be tested and confirmed at the LHC and may be further confirmed by the measurement of the N = 2 Yukawa couplings at future colliders. This model can explain dark matter, the muon g-2 anomaly, and gauge coupling unification, and relaxes some of the usual problems within the MSSM. It is also compatible with thermal leptogenesis.

  14. Toxoid preparation from the hemorrhagic fraction of the venom of Bothrops asper (snake from Central and South America) [Preparación toxoide a partir de la fracción hemorrágica del veneno de Bothrops asper (serpiente de América Central y del Sur)].

    PubMed

    Rodríguez-Acosta, A; Aguilar, I; Girón, M E

    1993-01-01

    A technique is described for preparing a toxoid from the hemorrhagic fraction of Bothrops asper venom. This method conserves a high degree of immunogenicity while eliminating lethal effects. None of the animals vaccinated with the toxoid prepared from this fraction developed hemorrhagic lesions after being injected with the hemorrhagic fraction of the venom.

  15. Scanning tunneling microscope-quartz crystal microbalance study of temperature gradients at an asperity contact.

    PubMed

    Pan, L; Krim, J

    2013-01-01

    Investigations of atomic-scale friction frequently involve setups where a tip and substrate are initially at different temperatures. The temperature of the sliding interface upon contact has thus become a topic of interest. A method for detecting initial tip-sample temperature differences at an asperity contact is described, which consists of a scanning tunneling microscope (STM) tip in contact with the surface electrode of a quartz crystal microbalance (QCM). The technique makes use of the fact that a QCM is extremely sensitive to abrupt changes in temperature. In order to demonstrate the technique's capabilities, QCM frequency shifts were recorded for varying initial tip-substrate temperature differences as an STM tip was brought into and out of contact. The results are interpreted within the context of a recent model for thermal heat conduction at an asperity contact, and it is concluded that the transient frequency response is attributable to small changes in temperature close to the region of contact rather than a change in the overall temperature of the QCM itself. For the assumed model parameters, the results moreover reveal substantial temperature discontinuities at the boundary between the tip and the sample, for example, on the order of 10-15 °C for initial temperature differences of 20 °C.

  16. Scanning tunneling microscope-quartz crystal microbalance study of temperature gradients at an asperity contact

    NASA Astrophysics Data System (ADS)

    Pan, L.; Krim, J.

    2013-01-01

    Investigations of atomic-scale friction frequently involve setups where a tip and substrate are initially at different temperatures. The temperature of the sliding interface upon contact has thus become a topic of interest. A method for detecting initial tip-sample temperature differences at an asperity contact is described, which consists of a scanning tunneling microscope (STM) tip in contact with the surface electrode of a quartz crystal microbalance (QCM). The technique makes use of the fact that a QCM is extremely sensitive to abrupt changes in temperature. In order to demonstrate the technique's capabilities, QCM frequency shifts were recorded for varying initial tip-substrate temperature differences as an STM tip was brought into and out of contact. The results are interpreted within the context of a recent model for thermal heat conduction at an asperity contact, and it is concluded that the transient frequency response is attributable to small changes in temperature close to the region of contact rather than a change in the overall temperature of the QCM itself. For the assumed model parameters, the results moreover reveal substantial temperature discontinuities at the boundary between the tip and the sample, for example, on the order of 10-15 °C for initial temperature differences of 20 °C.

  17. A prototype of the procedure of strong ground motion prediction for intraslab earthquake based on characterized source model

    NASA Astrophysics Data System (ADS)

    Iwata, T.; Asano, K.; Sekiguchi, H.

    2011-12-01

    We propose a prototype procedure to construct source models for strong motion prediction during intraslab earthquakes based on the characterized source model (Irikura and Miyake, 2011). The key is the characterized source model, which is based on empirical scaling relationships for intraslab earthquakes and involves the correspondence between the SMGA (strong motion generation area, Miyake et al., 2003) and the asperity (large slip area). Iwata and Asano (2011) obtained empirical relationships of the rupture area (S) and the total asperity area (Sa) to the seismic moment (Mo), assuming a 2/3-power dependence of S and Sa on Mo: S (km^2) = 6.57 × 10^(-11) × Mo^(2/3) (N m) (1), and Sa (km^2) = 1.04 × 10^(-11) × Mo^(2/3) (N m) (2). Iwata and Asano (2011) also pointed out that the position and size of the SMGA approximately correspond to the asperity area for several intraslab events. Based on these empirical relationships, we give a procedure for constructing source models of intraslab earthquakes for strong motion prediction: [1] Give the seismic moment, Mo. [2] Obtain the total rupture area and the total asperity area according to the empirical scaling relationships between S, Sa, and Mo given by Iwata and Asano (2011). [3] Assume a square rupture area and square asperities. [4] Assume the source mechanism to be the same as that of small events in the source region. [5] Prepare plural scenarios covering a variety of numbers of asperities and rupture starting points. We apply this procedure by simulating strong ground motions for several observed events to confirm the methodology.
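
    A minimal sketch of the empirical scaling in Eqs. (1) and (2): given a seismic moment Mo, compute the rupture area S and the total asperity area Sa. The Mw-to-Mo conversion and the example magnitude are added here for illustration and are not part of the abstract.

    ```python
    # Minimal sketch of the abstract's empirical scaling (Eqs. 1-2): rupture area S and
    # total asperity area Sa from seismic moment Mo. The Hanks-Kanamori Mw-to-Mo
    # conversion and the example magnitude are illustrative additions.
    def areas_from_moment(Mo_Nm):
        S  = 6.57e-11 * Mo_Nm ** (2.0 / 3.0)   # total rupture area, km^2  (Eq. 1)
        Sa = 1.04e-11 * Mo_Nm ** (2.0 / 3.0)   # total asperity area, km^2 (Eq. 2)
        return S, Sa

    Mw = 7.0                        # example magnitude (assumed)
    Mo = 10 ** (1.5 * Mw + 9.1)     # seismic moment, N*m (Hanks & Kanamori conversion)
    S, Sa = areas_from_moment(Mo)
    print(f"Mw {Mw}: Mo = {Mo:.2e} N*m, S = {S:.0f} km^2, Sa = {Sa:.0f} km^2 "
          f"(Sa/S = {Sa / S:.2f})")
    ```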

  18. Rediscovery of the enigmatic fungus-farming ant "Mycetosoritis" asper Mayr (Hymenoptera: Formicidae): Implications for taxonomy, phylogeny, and the evolution of agriculture in ants

    PubMed Central

    Ješovnik, Ana; Vasconcelos, Heraldo L.; Bacci, Mauricio; Schultz, Ted R.

    2017-01-01

    We report the rediscovery of the exceedingly rarely collected and enigmatic fungus-farming ant species Mycetosoritis asper. Since the description of the type specimen in 1887, only four additional specimens are known to have been added to the world's insect collections. Its biology is entirely unknown and its phylogenetic position within the fungus-farming ants has remained puzzling due to its aberrant morphology. In 2014 we excavated and collected twenty-one colonies of M. asper in the Floresta Nacional de Chapecó in Santa Catarina, Brazil. We describe here for the first time the male and larva of the species and complement the previous descriptions of both the queen and the worker. We describe, also for the first time, M. asper biology, nest architecture, and colony demographics, and identify its fungal cultivar. Molecular phylogenetic analyses indicate that both M. asper and M. clorindae are members of the genus Cyphomyrmex, which we show to be paraphyletic as currently defined. More precisely, M. asper is a member of the Cyphomyrmex strigatus group, which we also show to be paraphyletic with respect to the genus Mycetophylax. Based on these results, and in the interest of taxonomic stability, we transfer the species M. asper, M. clorindae, and all members of the C. strigatus group to the genus Mycetophylax, the oldest available name for this clade. Based on ITS sequence data, Mycetophylax asper practices lower agriculture, cultivating a fungal species that belongs to lower-attine fungal Clade 2, subclade F. PMID:28489860

  19. Biochemical Characterization, Action on Macrophages, and Superoxide Anion Production of Four Basic Phospholipases A2 from Panamanian Bothrops asper Snake Venom

    PubMed Central

    Rueda, Aristides Quintero; Rodríguez, Isela González; Arantes, Eliane C.; Setúbal, Sulamita S.; Calderon, Leonardo de A.; Zuliani, Juliana P.; Stábeli, Rodrigo G.; Soares, Andreimar M.

    2013-01-01

    Bothrops asper (Squamata: Viperidae) is the most important venomous snake in Central America, being responsible for the majority of snakebite accidents. Four basic PLA2s (pMTX-I to -IV) were purified from the crude venom by single-step chromatography on a CM-Sepharose ion-exchange column (1.5 × 15 cm). Analysis of the N-terminal sequences demonstrated that pMTX-I and III belong to the catalytically active Asp49 phospholipase A2 subclass, whereas pMTX-II and IV belong to the enzymatically inactive Lys49 PLA2-like subclass. The PLA2s isolated from Panamanian Bothrops asper venom (pMTX-I, II, III, and IV) are able to induce myotoxic activity and an inflammatory reaction, mainly leukocyte migration into muscle, and to activate J774A.1 macrophages, triggering phagocytic activity and superoxide production. PMID:23509779

  20. N-point functions in rolling tachyon background

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jokela, Niko; Keski-Vakkuri, Esko; Department of Physics, P.O. Box 64, FIN-00014, University of Helsinki

    2009-04-15

    We study n-point boundary correlation functions in timelike boundary Liouville theory, relevant for open string multiproduction by a decaying unstable D brane. We give an exact result for the one-point function of the tachyon vertex operator and show that it is consistent with a previously proposed relation to a conserved charge in string theory. We also discuss when the one-point amplitude vanishes. Using a straightforward perturbative expansion, we find an explicit expression for a tachyon n-point amplitude for all n; however, the result is still a toy model. The calculation uses a new asymptotic approximation for Toeplitz determinants, derived by relating the system to a Dyson gas at finite temperature.

  1. Basin-centered asperities in great subduction zone earthquakes: A link between slip, subsidence, and subduction erosion?

    USGS Publications Warehouse

    Wells, R.E.; Blakely, R.J.; Sugiyama, Y.; Scholl, D. W.; Dinterman, P.A.

    2003-01-01

    Published areas of high coseismic slip, or asperities, for 29 of the largest Circum-Pacific megathrust earthquakes are compared to forearc structure revealed by satellite free-air gravity, bathymetry, and seismic profiling. On average, 71% of an earthquake's seismic moment and 79% of its asperity area occur beneath the prominent gravity low outlining the deep-sea terrace; 57% of an earthquake's asperity area, on average, occurs beneath the forearc basins that lie within the deep-sea terrace. In SW Japan, slip in the 1923, 1944, 1946, and 1968 earthquakes was largely centered beneath five forearc basins whose landward edge overlies the 350 °C isotherm on the plate boundary, the inferred downdip limit of the locked zone. Basin-centered coseismic slip also occurred along the Aleutian, Mexico, Peru, and Chile subduction zones but was ambiguous for the great 1964 Alaska earthquake. Beneath intrabasin structural highs, seismic slip tends to be lower, possibly due to higher temperatures and fluid pressures. Kilometers of late Cenozoic subsidence and crustal thinning above some of the source zones are indicated by seismic profiling and drilling and are thought to be caused by basal subduction erosion. The deep-sea terraces and basins may evolve not just by growth of the outer arc high but also by interseismic subsidence not recovered during earthquakes. Basin-centered asperities could indicate a link between subsidence, subduction erosion, and seismogenesis. Whatever the cause, forearc basins may be useful indicators of long-term seismic moment release. The source zone for Cascadia's 1700 A.D. earthquake contains five large, basin-centered gravity lows that may indicate potential asperities at depth. The gravity gradient marking the inferred downdip limit to large coseismic slip lies offshore, except in northwestern Washington, where the low extends landward beneath the coast. Transverse gravity highs between the basins suggest that the margin is seismically segmented and

  2. Protective effects of Sonchus asper against KBrO3 induced lipid peroxidation in rats

    PubMed Central

    2012-01-01

    Background Sonchus asper is traditionally used in Pakistan for the treatment of reproductive dysfunction and oxidative stress. The present investigation aimed to evaluate the chloroform extract of Sonchus asper (SACE) against potassium bromate-induced reproductive stress in male rats. Methods Potassium bromate (KBrO3) at 20 mg/kg body weight (b.w.) was administered to 36 rats for four weeks, and the protective efficacy of SACE was assessed with respect to hormonal imbalances, alterations of antioxidant enzymes, and DNA fragmentation. High performance liquid chromatography (HPLC) was used to identify the bioactive constituents responsible. Results The level of hormonal secretion was significantly altered by potassium bromate. DNA fragmentation (%), the activities of antioxidant enzymes (catalase, CAT; peroxidase, POD; superoxide dismutase, SOD) and phase II metabolizing enzymes (glutathione reductase, GSR; glutathione peroxidase, GSHpx; glutathione-S-transferase, GST), and reduced glutathione (GSH) were decreased, while hydrogen peroxide contents and thiobarbituric acid reactive substances (TBARS) were increased with KBrO3 treatment. Treatment with SACE effectively ameliorated these alterations in the biochemical markers at hormonal and molecular levels, while HPLC characterization revealed the presence of catechin, kaempferol, rutin and quercetin. Conclusion The protective effects of Sonchus asper against KBrO3-induced lipid peroxidation might be due to bioactive compounds present in SACE. PMID:23186106

  3. Protective effects of Sonchus asper against KBrO3 induced lipid peroxidation in rats.

    PubMed

    Khan, Rahmat Ali; Khan, Muhammad Rashid; Sahreen, Sumaira

    2012-11-27

    Sonchus asper is traditionally used in Pakistan for the treatment of reproductive dysfunction and oxidative stress. The present investigation aimed to evaluate the chloroform extract of Sonchus asper (SACE) against potassium bromate-induced reproductive stress in male rats. Potassium bromate (KBrO3) at 20 mg/kg body weight (b.w.) was administered to 36 rats for four weeks, and the protective efficacy of SACE was assessed with respect to hormonal imbalances, alterations of antioxidant enzymes, and DNA fragmentation. High performance liquid chromatography (HPLC) was used to identify the bioactive constituents responsible. The level of hormonal secretion was significantly altered by potassium bromate. DNA fragmentation (%), the activities of antioxidant enzymes (catalase, CAT; peroxidase, POD; superoxide dismutase, SOD) and phase II metabolizing enzymes (glutathione reductase, GSR; glutathione peroxidase, GSHpx; glutathione-S-transferase, GST), and reduced glutathione (GSH) were decreased, while hydrogen peroxide contents and thiobarbituric acid reactive substances (TBARS) were increased with KBrO3 treatment. Treatment with SACE effectively ameliorated these alterations in the biochemical markers at hormonal and molecular levels, while HPLC characterization revealed the presence of catechin, kaempferol, rutin and quercetin. The protective effects of Sonchus asper against KBrO3-induced lipid peroxidation might be due to bioactive compounds present in SACE.

  4. Drag penalty due to the asperities in the substrate of super-hydrophobic and liquid infused surfaces

    NASA Astrophysics Data System (ADS)

    Garcia Cartagena, Edgardo J.; Arenas, Isnardo; Leonardi, Stefano

    2017-11-01

    Direct numerical simulations of two superposed fluids in a turbulent channel with a textured surface made of pinnacles of random height have been performed. The viscosity ratios of the two fluids are N = μo/μi = 50 (μo and μi are the viscosities of the outer and inner fluid, respectively), mimicking a super-hydrophobic surface (water over air), and N = 2.5 (water over heptane), resembling a liquid-infused surface. Two sets of simulations have been performed varying the Reynolds number, Reτ = 180 and Reτ = 390. The interface between the two fluids is kept flat, simulating infinite surface tension. The position of the interface between the two fluids has been varied in the vertical direction from the base of the substrate (what would be a rough wall) to the highest point of the roughness. Drag reduction is very sensitive to the position of the interface between the two fluids. Asperities above the interface induce a large form drag and diminish considerably the drag reduction. When the mean height of the surface measured from the interface in the outer fluid is greater than one wall unit, k+ > 1, the drag increases with respect to a smooth wall. The present results provide a guideline to the accuracy required in manufacturing super-hydrophobic and liquid-infused surfaces. This work was supported under ONR MURI Grants N00014-12-0875 and N00014-12-1-0962, Program Manager Dr. Ki-Han Kim. Numerical simulations were performed at the Texas Advanced Computing Center.

  5. Deterministic seismogenic scenarios based on asperities spatial distribution to assess tsunami hazard on northern Chile (18°S to 24°S)

    NASA Astrophysics Data System (ADS)

    González-Carrasco, J. F.

    2016-12-01

    Southern Peru and northern Chile coastal areas, extending between 12°S and 24°S, have been recognized as a mature seismic gap with a high seismogenic potential associated with the seismic moment deficit accumulated since 1877. An important scientific question, and one relevant from a hazard assessment perspective, is what the rupture pattern of a future megathrust earthquake will be. During the last decade, the occurrence of three major subduction earthquakes has provided outstanding geophysical and geological information on the behavior of these phenomena. An interesting result is the relationship between the maximum slip areas and the spatial distribution of asperities in subduction zones. In this contribution, we propose a methodology to identify a regional pattern of main asperities in order to construct reliable seismogenic scenarios in a seismic gap. We follow a deterministic approach to explore the distribution of asperity segmentation using geophysical and geodetic data such as trench-parallel gravity anomaly (TPGA), interseismic coupling (ISC), b-value, historical moment release, and residual bathymetric and gravity anomalies. The combined information represents physical constraints on short- and long-term candidate regions for future megathrust earthquakes. To illuminate the asperity distribution, we construct profiles of all proxies in fault coordinates, along-strike and down-dip, to define the boundaries of major asperities (> 100 km). The geometry of a major asperity is used to define a finite set of deterministic seismogenic scenarios to evaluate tsunamigenic hazard in the main cities of northern Chile (18°S to 24°S).

  6. [Evaluation of the inhibitory effect of extracts from leaves of Renealmia alpinia Rottb. Maas (Zingiberaceae) on the venom of Bothrops asper (mapaná)].

    PubMed

    Patiño, Arley Camilo; López, Jéssica; Aristizábal, Mónica; Quintana, Juan Carlos; Benjumea, Dora

    2012-09-01

    Traditional medicine is an invaluable source of research into new medicines to supplement the treatment of snakebite, which is considered a serious public health problem worldwide. Extracts of the medicinal plant Renealmia alpinia have been used traditionally by indigenous people of Chocó (Colombia) against the bite of Bothrops asper, the snake responsible for the majority of snakebite accidents in Colombia. Extracts of R. alpinia leaves were tested for their ability to neutralize the hemorrhagic, coagulant and proteolytic effects of B. asper venom. The acute toxicity and analgesic activity of R. alpinia were evaluated in vivo. In addition, in vitro tests were undertaken to demonstrate inhibition of the coagulant, haemolytic and proteolytic activities of B. asper venom. Results. Renealmia alpinia extracts had no toxic effects in experimental animals and provided analgesic and antiophidian effects as well as protection against the lethal effects of B. asper venom. Renealmia alpinia was an effective therapeutic alternative in association with antivenom treatment in the event of a B. asper snakebite accident, protecting against the lethal effects of the venom and providing analgesic properties.

  7. Electric field enhancement due to a saw-tooth asperity in a channel and implications on microscale gas breakdown

    NASA Astrophysics Data System (ADS)

    Venkattraman, Ayyaswamy

    2014-10-01

    The electric field enhancement due to an isolated saw-tooth asperity in an infinite channel is considered with the goal of providing some inputs to the choice of field enhancement factors used to describe microscale gas breakdown. The Schwarz-Christoffel transformation is used to map the interior of the channel to the upper half of the transformed plane. The expression for the electric field in the transformed plane is then used to determine the electric field distribution in the channel as well as field enhancement near the asperity. The effective field enhancement factor is determined and its dependence on operating and geometrical parameters is studied. While the effective field enhancement factor depends only weakly on the height of the asperity in comparison to the channel, it is influenced significantly by the base angles of the asperity. Due to the strong dependence of field emission current density on electric field, the effective field enhancement factor (βeff) is shown to vary rapidly with the applied electric field irrespective of the geometrical parameters. This variation is included in the analysis of microscale gas breakdown and compared with results obtained using a constant βeff as is done traditionally. Even though results for a varying βeff may be approximately reproduced using an equivalent constant βeff independent of the E-field, the field dependence of βeff might still be important over a range of operating conditions. This is confirmed by extracting βeff from experimental data for breakdown in argon microgaps with plane-parallel cathodes and comparing its dependence on the E-field. While the use of two-dimensional asperities is shown to be a minor disadvantage of the proposed approach in its current form, it can potentially help in developing predictive capabilities as opposed to treating βeff as a curve-fitting parameter.
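    To illustrate why a field-dependent enhancement factor matters, the sketch below feeds βeff(E) into a Fowler-Nordheim-type emission current density. The Fowler-Nordheim form and its constants are the standard textbook ones rather than values from this paper, and the power-law βeff(E) parameterization, the work function, and the field range are purely illustrative assumptions.

```python
import math

# Standard (approximate) Fowler-Nordheim constants; not taken from this paper.
A_FN = 1.54e-6  # A eV V^-2
B_FN = 6.83e9   # eV^(-3/2) V m^-1

def fn_current_density(e_applied: float, beta: float, work_fn_ev: float = 4.5) -> float:
    """Fowler-Nordheim current density (A/m^2) for local field beta * e_applied."""
    f_local = beta * e_applied  # enhanced local field, V/m
    return (A_FN * f_local ** 2 / work_fn_ev) * math.exp(-B_FN * work_fn_ev ** 1.5 / f_local)

def beta_eff(e_applied: float, beta0: float = 20.0, e_ref: float = 5e8, p: float = 0.1) -> float:
    """Hypothetical field-dependent effective enhancement factor (assumed power law)."""
    return beta0 * (e_applied / e_ref) ** p

if __name__ == "__main__":
    for e in (2e8, 5e8, 1e9):  # illustrative applied fields, V/m
        j_const = fn_current_density(e, beta=20.0)       # constant beta_eff
        j_var = fn_current_density(e, beta=beta_eff(e))  # field-dependent beta_eff
        print(f"E = {e:.1e} V/m  J(const beta) = {j_const:.3e}  J(beta_eff(E)) = {j_var:.3e} A/m^2")
```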

  8. Fault creep and persistent asperities on the western section of the North Anatolian Fault, Turkey

    NASA Astrophysics Data System (ADS)

    Floyd, M.; Reilinger, R. E.; Ergintav, S.; Karabulut, H.; Vernant, P.; Konca, A. O.; Dogan, U.; Cetin, S.; Cakir, Z.; Mencin, D.; Bilham, R. G.; King, R. W.

    2017-12-01

    We interpret new geodetic and seismic observations along the western section of the North Anatolian Fault (NAF) in Turkey as evidence for persistent asperities on the fault surface. Analysis of geodetic and seismic observations of seven segments of the fault at different stages of the earthquake cycle suggest that areas of the fault surface that are accumulating strain (i.e. asperities) are deficient in interseismic seismicity and earthquake aftershocks compared to areas between asperities that are failing at least in part by fault creep. From west to east, these segments include the 2014 M6.9 Gokceada earthquake and 1912 M7.4 Ganos earthquake segments, the Sea of Marmara and Princes' Islands seismic "gaps", the 1999 M7.6/7.2 Izmit/Duzce earthquake segments, and the 1944 M7.4 Ismetpasa segment, which remains actively creeping. Aspects of each segment contribute to our interpretation of overall fault behavior. The most well-defined distribution of coseismic slip in relation to pre- and post-earthquake seismicity is for the 2014 Gokceada event. The most complete set of geodetic observations (pre-, co-, and short- and long-term post-seismic) come from the 1999 Izmit and Duzce events. Simple three-layer elastic models including a middle layer that is fully locked between earthquakes, and shallow and deeper layers that are allowed to creep, can account for these observations of the deformation cycle. Recent observations from InSAR, creepmeters and small-aperture GPS profiles indicate ongoing surface and shallow fault creep rates, as allowed by the upper layer of the three-layer model. Conceptually, creep in the deeper layer represents the deep healing of the fault following the earthquake. For the Izmit and Duzce earthquake segments, healing from prior earthquakes was complete before the 1999 sequence. More generally, the consistent pattern of strain accumulation along the full length of the NAF, including the long eastern segments that ruptured in major earthquakes in

  9. Critical Two-Point Function for Long-Range O(n) Models Below the Upper Critical Dimension

    NASA Astrophysics Data System (ADS)

    Lohmann, Martin; Slade, Gordon; Wallace, Benjamin C.

    2017-12-01

    We consider the n-component |φ|^4 lattice spin model (n ≥ 1) and the weakly self-avoiding walk (n = 0) on Z^d, in dimensions d = 1, 2, 3. We study long-range models based on the fractional Laplacian, with spin-spin interactions or walk step probabilities decaying with distance r as r^{-(d+α)} with α ∈ (0, 2). The upper critical dimension is d_c = 2α. For ε > 0 and α = (d+ε)/2, the dimension d = d_c - ε is below the upper critical dimension. For small ε, weak coupling, and all integers n ≥ 0, we prove that the two-point function at the critical point decays with distance as r^{-(d-α)}. This "sticking" of the critical exponent at its mean-field value was first predicted in the physics literature in 1972. Our proof is based on a rigorous renormalisation group method. The treatment of observables differs from that used in recent work on the nearest-neighbour 4-dimensional case, via our use of a cluster expansion.

  10. Critical points of the O(n) loop model on the martini and the 3-12 lattices

    NASA Astrophysics Data System (ADS)

    Ding, Chengxiang; Fu, Zhe; Guo, Wenan

    2012-06-01

    We derive the critical line of the O(n) loop model on the martini lattice as a function of the loop weight n, based on the critical points on the honeycomb lattice conjectured by Nienhuis [Phys. Rev. Lett. 49, 1062 (1982)]. In the limit n→0 we prove that the connective constant of self-avoiding walks on the martini lattice is μ=1.7505645579⋯. A finite-size scaling analysis based on transfer matrix calculations is also performed. The numerical results coincide with the theoretical predictions to very high accuracy. Using similar numerical methods, we also study the O(n) loop model on the 3-12 lattice. We obtain similarly precise agreement with the critical points given by Batchelor [J. Stat. Phys. 92, 1203 (1998)].

  11. Extracts of Renealmia alpinia (Rottb.) MAAS Protect against Lethality and Systemic Hemorrhage Induced by Bothrops asper Venom: Insights from a Model with Extract Administration before Venom Injection.

    PubMed

    Patiño, Arley Camilo; Quintana, Juan Carlos; Gutiérrez, José María; Rucavado, Alexandra; Benjumea, Dora María; Pereañez, Jaime Andrés

    2015-04-30

    Renealmia alpinia (Rottb.) MAAS, obtained by micropropagation (in vitro) and wild forms have previously been shown to inhibit some toxic activities of Bothrops asper snake venom if preincubated before injection. In this study, assays were performed in a murine model in which extracts were administered for three days before venom injection. R. alpinia extracts inhibited lethal activity of B. asper venom injected by intraperitoneal route. Median Effective Dose (ED50) values were 36.6 ± 3.2 mg/kg and 31.7 ± 5.4 mg/kg (p > 0.05) for R. alpinia wild and in vitro extracts, respectively. At a dose of 75 mg/kg, both extracts totally inhibited the lethal activity of the venom. Moreover, this dose prolonged survival time of mice receiving a lethal dose of venom by the intravenous route. At 75 mg/kg, both extracts of R. alpinia reduced the extent of venom-induced pulmonary hemorrhage by 48.0% (in vitro extract) and 34.7% (wild extract), in agreement with histological observations of lung tissue. R. alpinia extracts also inhibited hemorrhage in heart and kidneys, as evidenced by a decrease in mg of hemoglobin/g of organ. These results suggest the possibility of using R. alpinia as a prophylactic agent in snakebite, a hypothesis that needs to be further explored.

  12. Direct measurement of asperity contact growth in quartz at hydrothermal conditions

    NASA Astrophysics Data System (ADS)

    Beeler, N. M.; Hickman, S. H.

    2008-12-01

    Room-temperature friction and indentation experiments suggest that fault strengthening during the interseismic period results from increases in asperity contact area due to solid-state deformation. However, field observations on exhumed fault zones indicate that solution-transport processes, pressure solution, crack healing and contact overgrowth, influence fault zone rheology near the base of the seismogenic zone. Contact overgrowths result from gradients in surface curvature, where material is dissolved from the pore walls, diffuses through the fluid and precipitates at the contact between two asperities, cementing the asperities together without convergence normal to the contact. To determine the mechanisms and kinetics of asperity cementation, we conducted laboratory experiments in which convex and flat lenses prepared from quartz single crystals were pressed together in an externally heated pressure vessel equipped with an optical observation port. Convergence between the two lenses and contact morphology were continuously monitored during these experiments using reflected-light interferometry through a long-working-distance microscope. The contact normal force was held constant, with an initial effective normal stress of 1.7 MPa. Four single-phase experiments were conducted at temperatures between 350 and 530 °C at 150 MPa water pressure, along with two controls: one single-phase, dry, at 425 °C and one bimaterial (qtz/sapphire) at 425 °C and 150 MPa water pressure. No contact growth or convergence was observed in either of the controls. For wet single-phase contacts, however, growth was initially rapid and then decreased with time, following an inverse squared dependence of contact radius on aperture. No convergence was observed over the duration of these experiments, suggesting that neither significant pressure solution nor crystal plasticity occurred at these stresses and temperatures. The formation of fluid inclusions between the lenses indicates that the contact is not uniformly

  13. Complementary Ruptures of Surface Ruptures and Deep Asperity during the 2014 Northern Nagano, Japan, Earthquake (MW 6.3)

    NASA Astrophysics Data System (ADS)

    Asano, K.; Iwata, T.; Kubo, H.

    2015-12-01

    A thrust earthquake of MW 6.3 occurred along the northern part of the Itoigawa-Shizuoka Tectonic Line (ISTL) in the northern Nagano prefecture, central Japan, on November 22, 2014. This event was reported to be related to an active fault, the Kamishiro fault belonging to the ISTL (e.g., HERP, 2014). The surface rupture is observed along the Kamishiro fault (e.g., Lin et al., 2015; Okada et al., 2015). We estimated the kinematic source rupture process of this earthquake through the multiple time-window linear waveform inversion method (Hartzell and Heaton, 1983). We used velocity waveforms in 0.05-1 Hz from 12 strong motion stations of K-NET, KiK-net (NIED), JMA, and Nagano prefecture (SK-net, ERI). In order to enhance the reliability in Green's functions, we assumed one-dimensional velocity structure models different for the different stations, which were extracted from the nation-wide three-dimensional velocity structure model, Japan Integrated Velocity Structure Model (JIVSM, Koketsu et al., 2012). Considering the spatial distribution of aftershocks (Sakai et al., 2015) and surface ruptures, the assumed fault model consisted of two dip-bending fault segments with different dip angles between the northern and southern segments. The total length and width of the fault plane are 20 km and 13 km, respectively, and the fault model is divided into 260 subfaults of 1 km × 1 km in space and six smoothed ramp functions in time. An asperity or large slip area with a peak slip of 1.9 m was estimated in the lower plane of the northern segment in the approximate depth range of 4 to 8 km. The depth extent of this asperity is consistent with the seismogenic zone revealed by past studies (e.g., Panayotopoulos et al., 2014). In contrast, the slip in the southern segment is relatively concentrated in the shallow portion of the segment where the surface ruptures were found along the Kamishiro fault. The overall spatial rupture pattern of the source fault, in which the deep asperity

  14. Determination of the pore fluid pressure ratio at seismogenic megathrusts in subduction zones: Implications for strength of asperities and Andean-type mountain building

    NASA Astrophysics Data System (ADS)

    Seno, Tetsuzo

    2009-05-01

    We construct the differential stress profile across the fore arc in a subduction zone from the force balance between the shear stress, τ, at the seismogenic megathrust and the lithostatic pressure. We assume that τ is given by μ(1 - λ)σn, where λ is the pore fluid pressure ratio, μ is the coefficient of static friction, and σn is the normal stress. Given a density structure of the fore-arc wedge, we determine λ by comparing calculated fore-arc stresses with observed ones, as 0.95-0.98 in Shikoku, Miyagi, Peru, north Chile, and south Chile and 0.90-0.93 in south Vancouver Island and Washington. The shear stress τ averaged over the seismogenic megathrust is of the order of ~10 MPa. Stress drops of great earthquakes in these zones amount to 14-87% of τ and are not a constant fraction of it; they do, however, increase linearly with 1 - λ. We propose a simple fault model in which the area of asperities as a fraction of the total fault area is proportional to 1 - λ. Variation of the fractional area of asperities may thus explain the observed correlation and the regional variation of λ. Assuming that the differential stress is zero at the summit of the Andean mountains, rather than at the coast as observed at present, we determine λ to be 0.84 in north Chile during the mountain-building stage. Such a smaller value of λ, along with λ < ~0.4 previously obtained in collision zones and > ~0.9 in subduction zones, suggests that variation of λ controls the tectonic style of the Earth.
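    A small worked illustration of the assumed strength relation τ = μ(1 - λ)σn: the values of μ and σn below are hypothetical round numbers chosen only to show how λ near 0.95 yields shear stresses of order 10 MPa, and are not taken from the paper.

```python
def megathrust_shear_stress(mu: float, lam: float, sigma_n_mpa: float) -> float:
    """Shear stress tau = mu * (1 - lambda) * sigma_n, in MPa."""
    return mu * (1.0 - lam) * sigma_n_mpa

# Hypothetical round numbers: Byerlee-like friction and a few hundred MPa normal stress.
for lam in (0.90, 0.95, 0.98):
    tau = megathrust_shear_stress(mu=0.6, lam=lam, sigma_n_mpa=300.0)
    print(f"lambda = {lam:.2f} -> tau = {tau:.1f} MPa")
```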

  15. Dissipative N-point-vortex Models in the Plane

    NASA Astrophysics Data System (ADS)

    Shashikanth, Banavara N.

    2010-02-01

    A method is presented for constructing point vortex models in the plane that dissipate the Hamiltonian function at any prescribed rate and yet conserve the level sets of the invariants of the Hamiltonian model arising from the SE(2) symmetries. The method is purely geometric in that it uses the level sets of the Hamiltonian and the invariants to construct the dissipative field and is based on elementary classical geometry in ℝ³. Extension to higher-dimensional spaces, such as the point vortex phase space, is done using exterior algebra. The method is in fact general enough to apply to any smooth finite-dimensional system with conserved quantities, and, for certain special cases, the dissipative vector field constructed can be associated with an appropriately defined double Nambu-Poisson bracket. The most interesting feature of this method is that it allows for an infinite sequence of such dissipative vector fields to be constructed by repeated application of a symmetric linear operator (matrix) at each point of the intersection of the level sets.

  16. Earthquake slip weakening and asperities explained by thermal pressurization.

    PubMed

    Wibberley, Christopher A J; Shimamoto, Toshihiko

    2005-08-04

    An earthquake occurs when a fault weakens during the early portion of its slip at a faster rate than the release of tectonic stress driving the fault motion. This slip weakening occurs over a critical distance, Dc. Understanding the controls on Dc in nature is severely limited, however, because the physical mechanism of weakening is unconstrained. Conventional friction experiments, typically conducted at slow slip rates and small displacements, have obtained Dc values that are orders of magnitude lower than values estimated from modelling seismological data for natural earthquakes. Here we present data on fluid transport properties of slip zone rocks and on the slip zone width in the centre of the Median Tectonic Line fault zone, Japan. We show that the discrepancy between laboratory and seismological results can be resolved if thermal pressurization of the pore fluid is the slip-weakening mechanism. Our analysis indicates that a planar fault segment with an impermeable and narrow slip zone will become very unstable during slip and is likely to be the site of a seismic asperity.

  17. Extracts of Renealmia alpinia (Rottb.) MAAS Protect against Lethality and Systemic Hemorrhage Induced by Bothrops asper Venom: Insights from a Model with Extract Administration before Venom Injection

    PubMed Central

    Patiño, Arley Camilo; Quintana, Juan Carlos; Gutiérrez, José María; Rucavado, Alexandra; Benjumea, Dora María; Pereañez, Jaime Andrés

    2015-01-01

    Renealmia alpinia (Rottb.) MAAS, obtained by micropropagation (in vitro) and wild forms have previously been shown to inhibit some toxic activities of Bothrops asper snake venom if preincubated before injection. In this study, assays were performed in a murine model in which extracts were administered for three days before venom injection. R. alpinia extracts inhibited lethal activity of B. asper venom injected by intraperitoneal route. Median Effective Dose (ED50) values were 36.6 ± 3.2 mg/kg and 31.7 ± 5.4 mg/kg (p > 0.05) for R. alpinia wild and in vitro extracts, respectively. At a dose of 75 mg/kg, both extracts totally inhibited the lethal activity of the venom. Moreover, this dose prolonged survival time of mice receiving a lethal dose of venom by the intravenous route. At 75 mg/kg, both extracts of R. alpinia reduced the extent of venom-induced pulmonary hemorrhage by 48.0% (in vitro extract) and 34.7% (wild extract), in agreement with histological observations of lung tissue. R. alpinia extracts also inhibited hemorrhage in heart and kidneys, as evidenced by a decrease in mg of hemoglobin/g of organ. These results suggest the possibility of using R. alpinia as a prophylactic agent in snakebite, a hypothesis that needs to be further explored. PMID:25941768

  18. Flash Points of Secondary Alcohol and n-Alkane Mixtures.

    PubMed

    Esina, Zoya N; Miroshnikov, Alexander M; Korchuganova, Margarita R

    2015-11-19

    The flash point is one of the most important characteristics used to assess the ignition hazard of mixtures of flammable liquids. To determine the flash points of mixtures of secondary alcohols with n-alkanes, it is necessary to calculate the activity coefficients. In this paper, we use a model that allows us to obtain enthalpy of fusion and enthalpy of vaporization data of the pure components to calculate the liquid-solid equilibrium (LSE) and vapor-liquid equilibrium (VLE). Enthalpy of fusion and enthalpy of vaporization data of secondary alcohols in the literature are limited; thus, the prediction of these characteristics was performed using the method of thermodynamic similarity. Additionally, the empirical models provided the critical temperatures and boiling temperatures of the secondary alcohols. The modeled melting enthalpy and enthalpy of vaporization as well as the calculated LSE and VLE flash points were determined for the secondary alcohol and n-alkane mixtures.

  19. Point defect reduction in MOCVD (Al)GaN by chemical potential control and a comprehensive model of C incorporation in GaN

    NASA Astrophysics Data System (ADS)

    Reddy, Pramod; Washiyama, Shun; Kaess, Felix; Kirste, Ronny; Mita, Seiji; Collazo, Ramon; Sitar, Zlatko

    2017-12-01

    A theoretical framework that provides a quantitative relationship between point defect formation energies and growth process parameters is presented. It enables systematic point defect reduction by chemical potential control in metalorganic chemical vapor deposition (MOCVD) of III-nitrides. Experimental corroboration is provided by a case study of C incorporation in GaN. The theoretical model is shown to be successful in providing quantitative predictions of CN defect incorporation in GaN as a function of growth parameters and provides valuable insights into boundary phases and other impurity chemical reactions. The metal supersaturation is found to be the primary factor in determining the chemical potential of III/N and consequently incorporation or formation of point defects which involves exchange of III or N atoms with the reservoir. The framework is general and may be extended to other defect systems in (Al)GaN. The utility of equilibrium formalism typically employed in density functional theory in predicting defect incorporation in non-equilibrium and high temperature MOCVD growth is confirmed. Furthermore, the proposed theoretical framework may be used to determine optimal growth conditions to achieve minimum compensation within any given constraints such as growth rate, crystal quality, and other practical system limitations.

  20. Novel Concordance Between Geographic, Environmental, and Genetic Structure in the Ecological Generalist Prickly Sculpin (Cottus asper) in California.

    PubMed

    Baumsteiger, Jason; Kinziger, Andrew P; Aguilar, Andres

    2016-11-01

    Ecological generalists may contain a wealth of information concerning diversity, ecology, and geographic connectivity throughout their range. We explored these ideas in prickly sculpin (Cottus asper), a small generalist freshwater fish species where coastal forms have potentially undergone radiations into inland lacustrine and riverine environments. Using a 962 bp cytochrome b mtDNA marker and 11 microsatellites, we estimated diversity, divergence times, gene flow, and structure among populations at 43 locations throughout California. We then incorporated genetic and GIS data into ecological niche models to assess ecological conditions within identified groups. Though not reciprocally monophyletic, unique mtDNA haplotypes, microsatellite clustering, and measures of isolation by distance (Coastal: r = 0.960, P < 0.001; Inland: r = 0.277, P = 0.148) suggest 2 novel taxonomic groups, Coastal and Inland (constrained to Great Central Valley). Divergence estimates of 41-191 kya combined with the regional biogeographic history suggest geographic barriers are absent between groups since divergence, but ecological niche modeling revealed significant environmental differences (t = 10.84, P < 0.001). Introgressed individuals were also discovered between groups in an ecologically and geographically intermediate region. Population structure was limited, predominately found in tributaries of the San Joaquin basin in the Inland group. Overall, C. asper exhibited substantial genetic diversity, despite its ecological generality, reflecting California's historically unique and complex hydrology. More broadly, this study illustrates variable environments within the range of a generalist species may mask genetic divergences and should not be overlooked in biodiversity assessments. © The American Genetic Association 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  1. Tribology. Mechanisms of antiwear tribofilm growth revealed in situ by single-asperity sliding contacts.

    PubMed

    Gosvami, N N; Bares, J A; Mangolini, F; Konicek, A R; Yablon, D G; Carpick, R W

    2015-04-03

    Zinc dialkyldithiophosphates (ZDDPs) form antiwear tribofilms at sliding interfaces and are widely used as additives in automotive lubricants. The mechanisms governing the tribofilm growth are not well understood, which limits the development of replacements that offer better performance and are less likely to degrade automobile catalytic converters over time. Using atomic force microscopy in ZDDP-containing lubricant base stock at elevated temperatures, we monitored the growth and properties of the tribofilms in situ in well-defined single-asperity sliding nanocontacts. Surface-based nucleation, growth, and thickness saturation of patchy tribofilms were observed. The growth rate increased exponentially with either applied compressive stress or temperature, consistent with a thermally activated, stress-assisted reaction rate model. Although some models rely on the presence of iron to catalyze tribofilm growth, the films grew regardless of the presence of iron on either the tip or substrate, highlighting the critical role of stress and thermal activation. Copyright © 2015, American Association for the Advancement of Science.
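    A minimal sketch of a thermally activated, stress-assisted rate law of the kind invoked above, rate = A·exp(-(ΔU - σΔV)/(kB·T)); the prefactor, activation energy, and activation volume in the example are placeholders, not fitted values from the study.

```python
import math

K_B = 1.380649e-23       # Boltzmann constant, J/K
EV_TO_J = 1.602176634e-19

def tribofilm_growth_rate(stress_pa: float, temp_k: float,
                          prefactor: float = 1.0,       # arbitrary units (placeholder)
                          delta_u_ev: float = 0.8,      # activation energy (placeholder)
                          delta_v_m3: float = 2.0e-29   # activation volume (placeholder)
                          ) -> float:
    """Stress-assisted, thermally activated rate: A * exp(-(dU - sigma*dV)/(kB*T))."""
    barrier = delta_u_ev * EV_TO_J - stress_pa * delta_v_m3
    return prefactor * math.exp(-barrier / (K_B * temp_k))

if __name__ == "__main__":
    # Rate rises exponentially with contact stress at fixed temperature ...
    for sigma in (1e9, 3e9, 5e9):  # GPa-level single-asperity contact stresses
        print(f"sigma = {sigma:.0e} Pa -> rate = {tribofilm_growth_rate(sigma, 373.0):.3e}")
    # ... and with temperature at fixed stress.
    for temp in (353.0, 373.0, 393.0):
        print(f"T = {temp:.0f} K     -> rate = {tribofilm_growth_rate(3e9, temp):.3e}")
```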

  2. In Vitro Antiplasmodial Activity of Phospholipases A2 and a Phospholipase Homologue Isolated from the Venom of the Snake Bothrops asper

    PubMed Central

    Castillo, Juan Carlos Quintana; Vargas, Leidy Johana; Segura, Cesar; Gutiérrez, José María; Pérez, Juan Carlos Alarcón

    2012-01-01

    The antimicrobial and antiparasite activity of phospholipase A2 (PLA2) from snakes and bees has been extensively explored. We studied the antiplasmodial effect of the whole venom of the snake Bothrops asper and of two fractions purified by ion-exchange chromatography: one containing catalytically-active phospholipases A2 (PLA2) (fraction V) and another containing a PLA2 homologue devoid of enzymatic activity (fraction VI). The antiplasmodial effect was assessed on in vitro cultures of Plasmodium falciparum. The whole venom of B. asper, as well as its fractions V and VI, were active against the parasite at 0.13 ± 0.01 µg/mL, 1.42 ± 0.56 µg/mL and 22.89 ± 1.22 µg/mL, respectively. Differences in the cytotoxic activity on peripheral blood mononuclear cells between the whole venom and fractions V and VI were observed, fraction V showing higher toxicity than total venom and fraction VI. Regarding toxicity in mice, the whole venom showed the highest lethal effect in comparison to fractions V and VI. These results suggest that B. asper PLA2 and its homologue have antiplasmodial potential. PMID:23242318

  3. Evaluation of phenolic contents and antioxidant activity of various solvent extracts of Sonchus asper (L.) Hill

    PubMed Central

    2012-01-01

    Background Sonchus asper (SA) is traditionally used for the treatment of various ailments associated with the liver, lungs and kidneys. This study aimed to investigate the therapeutic potential of nonpolar (hexane, SAHE; ethyl acetate, SAEE; and chloroform, SACE) and polar (methanol, SAME) crude extracts of the whole plant. Methods To achieve these goals, several parameters including free-radical (DPPH•, ABTS•+, H2O2 and •OH) scavenging, iron chelating activity, scavenging of superoxide radicals, total flavonoids and total phenolic content (TPC) were examined. Results The SA extracts presented a remarkable capacity to scavenge all the tested reactive species, with IC50 values at the μg/mL level. SAME was shown to have the highest TPC and the lowest IC50 values for the DPPH• and ABTS•+ radical scavenging capacities and iron chelating efficiency; moreover, SAME had the best activities in scavenging superoxide radicals and hydrogen peroxide and potently scavenged hydroxyl radicals. Conclusion These results suggest the potential of S. asper as a medicine against free-radical-associated oxidative damage. PMID:22305477

  4. Strain accumulation across the Prince William Sound asperity, Southcentral Alaska

    NASA Astrophysics Data System (ADS)

    Savage, J. C.; Svarc, J. L.; Lisowski, M.

    2015-03-01

    The surface velocities predicted by the conventional subduction model are compared to velocities measured in a GPS array (surveyed in 1993, 1995, 1997, 2000, and 2004) spanning the Prince William Sound asperity. The observed velocities in the comparison have been corrected to remove the contributions from postseismic (1964 Alaska earthquake) mantle relaxation. Except at the most seaward monument (located on Middleton Island at the seaward edge of the continental shelf, just 50 km landward of the deformation front in the Aleutian Trench), the corrected velocities qualitatively agree with those predicted by an improved, two-dimensional, back slip, subduction model in which the locked megathrust coincides with the plate interface identified by seismic refraction surveys, and the back slip rate is equal to the plate convergence rate. A better fit to the corrected velocities is furnished by either a back slip rate 20% greater than the plate convergence rate or a 30% shallower megathrust. The shallow megathrust in the latter fit may be an artifact of the uniform half-space Earth model used in the inversion. Backslip at the plate convergence rate on the megathrust mapped by refraction surveys would fit the data as well if the rigidity of the underthrust plate was twice that of the overlying plate, a rigidity contrast higher than expected. The anomalous motion at Middleton Island is attributed to continuous slip at near the plate convergence rate on a postulated, listric fault that splays off the megathrust at depth of about 12 km and outcrops on the continental slope south-southeast of Middleton Island.

  5. Strain accumulation across the Prince William Sound asperity, Southcentral Alaska

    USGS Publications Warehouse

    Savage, James C.; Svarc, Jerry L.; Lisowski, Michael

    2015-01-01

    The surface velocities predicted by the conventional subduction model are compared to velocities measured in a GPS array (surveyed in 1993, 1995, 1997, 2000, and 2004) spanning the Prince William Sound asperity. The observed velocities in the comparison have been corrected to remove the contributions from postseismic (1964 Alaska earthquake) mantle relaxation. Except at the most seaward monument (located on Middleton Island at the seaward edge of the continental shelf, just 50 km landward of the deformation front in the Aleutian Trench), the corrected velocities qualitatively agree with those predicted by an improved, two-dimensional, back slip, subduction model in which the locked megathrust coincides with the plate interface identified by seismic refraction surveys, and the back slip rate is equal to the plate convergence rate. A better fit to the corrected velocities is furnished by either a back slip rate 20% greater than the plate convergence rate or a 30% shallower megathrust. The shallow megathrust in the latter fit may be an artifact of the uniform half-space Earth model used in the inversion. Backslip at the plate convergence rate on the megathrust mapped by refraction surveys would fit the data as well if the rigidity of the underthrust plate was twice that of the overlying plate, a rigidity contrast higher than expected. The anomalous motion at Middleton Island is attributed to continuous slip at near the plate convergence rate on a postulated, listric fault that splays off the megathrust at depth of about 12 km and outcrops on the continental slope south-southeast of Middleton Island.

  6. Z_n clock models and chains of so(n)_2 non-Abelian anyons: symmetries, integrable points and low energy properties

    NASA Astrophysics Data System (ADS)

    Finch, Peter E.; Flohr, Michael; Frahm, Holger

    2018-02-01

    We study two families of quantum models which have been used previously to investigate the effect of topological symmetries in one-dimensional correlated matter. Various striking similarities are observed between certain Z_n quantum clock models, spin chains generalizing the Ising model, and chains of non-Abelian anyons constructed from the so(n)_2 fusion category for odd n, both subject to periodic boundary conditions. In spite of the differences between these two types of quantum chains, e.g. their Hilbert spaces being spanned by tensor products of local spin states or fusion paths of anyons, the symmetries of the lattice models are shown to be closely related. Furthermore, under a suitable mapping between the parameters describing the interaction between spins and anyons the respective Hamiltonians share part of their energy spectrum (although their degeneracies may differ). This spin-anyon correspondence can be extended by fine-tuning of the coupling constants leading to exactly solvable models. We show that the algebraic structures underlying the integrability of the clock models and the anyon chain are the same. For n = 3, 5, 7 we perform an extensive finite size study—both numerical and based on the exact solution—of these models to map out their ground state phase diagram and to identify the effective field theories describing their low energy behaviour. We observe that the continuum limit at the integrable points can be described by rational conformal field theories with extended symmetry algebras which can be related to the discrete ones of the lattice models.

  7. [Evaluation of neutralizing ability of four commercially available antivenoms against the venom of Bothrops asper from Costa Rica]

    PubMed

    Bogarín, G; Segura, E; Durán, G; Lomonte, B; Rojas, G; Gutiérrez, J M

    1995-09-01

    We studied the ability of four commercially available antivenoms to neutralize several toxic and enzymatic activities of Bothrops asper (terciopelo) venom from Costa Rica. Experiments with preincubation of venom and antivenom were carried out to test the neutralization of lethal, hemorrhagic, coagulant and indirect hemolytic activities. In addition, antibody titers against crude venom and myotoxin II purified from this venom were determined by enzyme immunoassay (ELISA). Results indicate that polyvalent antivenom from Instituto Clodomiro Picado (Costa Rica) has the highest neutralizing ability against lethal, coagulant and indirect hemolytic activities, whereas MYN polyvalent antivenom (México) has the highest neutralization against hemorrhagic activity. Antivenom from Instituto Clodomiro Picado also has the highest antibody titers against crude B. asper venom and against myotoxin II. Antivenoms from Universidad Central de Venezuela (Venezuela), Vencofarma (Brazil) and MYN (México) failed to neutralize the lethal effect of this venom. These results stress the need for rigorous quality control systems to evaluate the neutralizing ability of antivenoms in Central America.

  8. Venom of Bothrops asper from Mexico and Costa Rica: intraspecific variation and cross-neutralization by antivenoms.

    PubMed

    Segura, Alvaro; Herrera, María; Villalta, Mauren; Vargas, Mariángela; Uscanga-Reynell, Alfredo; de León-Rosales, Samuel Ponce; Jiménez-Corona, María Eugenia; Reta-Mares, José Francisco; Gutiérrez, José María; León, Guillermo

    2012-01-01

    Bothrops asper is the species that induces the highest incidence of snakebite envenomation in southern Mexico, Central America and parts of northern South America. The intraspecies variability in HPLC profile and toxicological activities between the venoms from specimens collected in Mexico (Veracruz) and Costa Rica (Caribbean and Pacific populations) was investigated, as well as the cross-neutralization by antivenoms manufactured in these countries. Venoms differ in their HPLC profiles and in their toxicity, since venom from Mexican population showed higher lethal and defibrinogenating activities, whereas those from Costa Rica showed higher hemorrhagic and in vitro coagulant activities. In general, antivenoms were more effective in the neutralization of homologous venoms. Overall, both antivenoms effectively neutralized the various toxic effects of venoms from the two populations of B. asper. However, antivenom raised against venom from Costa Rican specimens showed a higher efficacy in the neutralization of defibrinogenating and coagulant activities, thus highlighting immunochemical differences in the toxins responsible for these effects associated with hemostatic disturbances in snakebite envenoming. These observations illustrate how intraspecies venom variation may influence antivenom neutralizing profile. Copyright © 2011 Elsevier Ltd. All rights reserved.

  9. Cardiac Glycoside Constituents of Streblus asper with Potential Antineoplastic Activity.

    PubMed

    Ren, Yulin; Chen, Wei-Lun; Lantvit, Daniel D; Sass, Ellen J; Shriwas, Pratik; Ninh, Tran Ngoc; Chai, Hee-Byung; Zhang, Xiaoli; Soejarto, Djaja D; Chen, Xiaozhuo; Lucas, David M; Swanson, Steven M; Burdette, Joanna E; Kinghorn, A Douglas

    2017-03-24

    Three new (1-3) and two known (4 and 5) cytotoxic cardiac glycosides were isolated and characterized from a medicinal plant, Streblus asper Lour. (Moraceae), collected in Vietnam, with six new analogues and one known derivative (5a-g) synthesized from (+)-strebloside (5). A preliminary structure-activity relationship study indicated that the C-10 formyl and C-5 and C-14 hydroxy groups and C-3 sugar unit play important roles in the mediation of the cytotoxicity of (+)-strebloside (5) against HT-29 human colon cancer cells. When evaluated in NCr nu/nu mice implanted intraperitoneally with hollow fibers facilitated with either MDA-MB-231 human breast or OVCAR3 human ovarian cancer cells, (+)-strebloside (5) showed significant cell growth inhibitory activity in both cases, in the dose range 5-30 mg/kg.

  10. Relationship between the frequency magnitude distribution and the visibility graph in the synthetic seismicity generated by a simple stick-slip system with asperities.

    PubMed

    Telesca, Luciano; Lovallo, Michele; Ramirez-Rojas, Alejandro; Flores-Marquez, Leticia

    2014-01-01

    By using the method of the visibility graph (VG), the synthetic seismicity generated by a simple stick-slip system with asperities is analysed. The stick-slip system mimics the interaction between tectonic plates, whose asperities are represented by sandpapers of different granularity degrees. The VG properties of the seismic sequences have been related to the typical seismological parameter, the b-value of the Gutenberg-Richter law. A close linear relationship is found between the b-value of the synthetic seismicity and the slope of the least-squares line fitting the k-M plot (the relationship between the magnitude M of each synthetic event and its connectivity degree k); this relationship is also verified for real seismicity.
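    A compact sketch of the natural visibility graph construction commonly used in such analyses (events a and b are linked if no intermediate event blocks the straight line between them), applied to a synthetic magnitude sequence; the random catalogue and the pairing of degrees k with magnitudes M below are purely illustrative.

```python
import random

def visibility_graph_degrees(series):
    """Natural visibility graph: events a and b are linked if every intermediate
    event c lies strictly below the straight line joining (a, y_a) and (b, y_b)."""
    n = len(series)
    degree = [0] * n
    for a in range(n):
        for b in range(a + 1, n):
            y_a, y_b = series[a], series[b]
            visible = all(
                series[c] < y_b + (y_a - y_b) * (b - c) / (b - a)
                for c in range(a + 1, b)
            )
            if visible:
                degree[a] += 1
                degree[b] += 1
    return degree

if __name__ == "__main__":
    random.seed(0)
    # Illustrative synthetic "catalogue": exponentially distributed magnitudes,
    # roughly Gutenberg-Richter-like with b ~ 1 (rate = b * ln(10)).
    magnitudes = [random.expovariate(2.3) for _ in range(200)]
    degrees = visibility_graph_degrees(magnitudes)
    # k-M pairs: the largest-magnitude events tend to become hubs of the graph.
    for m, k in sorted(zip(magnitudes, degrees), reverse=True)[:5]:
        print(f"M = {m:.2f}  k = {k}")
```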

  11. Saturation point model for the formation of metal nitrate in nitrogen tetroxide oxidizer

    NASA Technical Reports Server (NTRS)

    Torrance, Paul R.

    1991-01-01

    A model was developed for the formation of metal nitrate in nitrogen tetroxide (N2O4). The basis of this model is the saturation point of metal nitrate in N2O4, chosen mainly because of the White Sands Test Facility's experience with metal nitrate in N2O4. Means of reaching the saturation point are examined, and a relationship is derived between the reaction/formation rate and the diffusion rate of metal nitrate in N2O4.

  12. Thermo-Mechanical Cracking in Coated Media with a Cavity by a Moving Asperity Friction.

    DTIC Science & Technology

    1988-03-01

    Technical Report No. ME-144(88)ONR-233-3, Mechanical Engineering Department, College of Engineering, The University of New Mexico, Albuquerque. Thermo-Mechanical Cracking in Coated Media with a Cavity by a Moving Asperity Friction, by Frederick D. Ju and Tsu-Yen Chen. Work performed under an ONR Grant (DTIC accession number AD-A193 311).

  13. ASYMPTOTICS FOR CHANGE-POINT MODELS UNDER VARYING DEGREES OF MIS-SPECIFICATION

    PubMed Central

    SONG, RUI; BANERJEE, MOULINATH; KOSOROK, MICHAEL R.

    2015-01-01

    Change-point models are widely used by statisticians to model drastic changes in the pattern of observed data. Least squares/maximum likelihood based estimation of change-points leads to curious asymptotic phenomena. When the change-point model is correctly specified, such estimates generally converge at a fast rate (n) and are asymptotically described by minimizers of a jump process. Under complete mis-specification by a smooth curve, i.e. when a change-point model is fitted to data described by a smooth curve, the rate of convergence slows down to n^{1/3} and the limit distribution changes to that of the minimizer of a continuous Gaussian process. In this paper we provide a bridge between these two extreme scenarios by studying the limit behavior of change-point estimates under varying degrees of model mis-specification by smooth curves, which can be viewed as local alternatives. We find that the limiting regime depends on how quickly the alternatives approach a change-point model. We unravel a family of 'intermediate' limits that can transition, at least qualitatively, to the limits in the two extreme scenarios. The theoretical results are illustrated via a set of carefully designed simulations. We also demonstrate how inference for the change-point parameter can be performed in the absence of knowledge of the underlying scenario by resorting to subsampling techniques that involve estimation of the convergence rate. PMID:26681814
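    A short sketch of least-squares change-point estimation in the two extreme regimes discussed above: data generated from a true step function (correctly specified) versus from a smooth curve (fully mis-specified). All simulation settings are illustrative assumptions.

```python
import math
import random

def lse_change_point(x, y):
    """Least-squares change-point estimate: the split minimizing the total SSE
    of piecewise-constant fits on the two sides."""
    best_idx, best_sse = 1, float("inf")
    for i in range(1, len(y)):  # candidate split: left = y[:i], right = y[i:]
        left, right = y[:i], y[i:]
        mean_l, mean_r = sum(left) / len(left), sum(right) / len(right)
        sse = sum((v - mean_l) ** 2 for v in left) + sum((v - mean_r) ** 2 for v in right)
        if sse < best_sse:
            best_idx, best_sse = i, sse
    return x[best_idx]

if __name__ == "__main__":
    random.seed(1)
    n = 500
    xs = [i / n for i in range(n)]
    # (i) correctly specified: a true jump at x = 0.4
    y_step = [(0.0 if x < 0.4 else 1.0) + random.gauss(0, 0.3) for x in xs]
    # (ii) mis-specified: a smooth logistic curve with no true jump
    y_smooth = [1.0 / (1.0 + math.exp(-20.0 * (x - 0.4))) + random.gauss(0, 0.3) for x in xs]
    print("step data   -> estimated change point:", lse_change_point(xs, y_step))
    print("smooth data -> estimated change point:", lse_change_point(xs, y_smooth))
```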

  14. Static and sliding contact of rough surfaces: Effect of asperity-scale properties and long-range elastic interactions

    NASA Astrophysics Data System (ADS)

    Hulikal, Srivatsan; Lapusta, Nadia; Bhattacharya, Kaushik

    2018-07-01

    Friction in static and sliding contact of rough surfaces is important in numerous physical phenomena. We seek to understand macroscopically observed static and sliding contact behavior as the collective response of a large number of microscopic asperities. To that end, we build on Hulikal et al. (2015) and develop an efficient numerical framework that can be used to investigate how the macroscopic response of multiple frictional contacts depends on long-range elastic interactions, different constitutive assumptions about the deforming contacts and their local shear resistance, and surface roughness. We approximate the contact between two rough surfaces as that between a regular array of discrete deformable elements attached to an elastic block and a rigid rough surface. The deformable elements are viscoelastic or elasto/viscoplastic with a range of relaxation times, and the elastic interaction between contacts is long-range. We find that the model reproduces the main macroscopic features of evolution of contact and friction for a range of constitutive models of the elements, suggesting that macroscopic frictional response is robust with respect to the microscopic behavior. Viscoelasticity/viscoplasticity contributes to the increase of friction with contact time and leads to a subtle history dependence. Interestingly, long-range elastic interactions only change the results quantitatively compared to the mean-field response. The developed numerical framework can be used to study how specific observed macroscopic behavior depends on the microscale assumptions. For example, we find that sustained increase in the static friction coefficient during long hold times suggests viscoelastic response of the underlying material with multiple relaxation time scales. We also find that the experimentally observed proportionality of the direct effect in velocity jump experiments to the logarithm of the velocity jump points to a complex material-dependent shear resistance at the
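    A toy illustration (not the authors' code) of one qualitative point above: a population of contacts with a broad, log-spaced range of relaxation times yields roughly logarithmic growth of contact strength with hold time, whereas a single relaxation time saturates quickly. The saturating creep form and all parameters are assumptions.

```python
import math

def ensemble_contact_growth(t_hold: float, relaxation_times) -> float:
    """Normalized extra contact 'strength' after hold time t, summing saturating
    creep contributions 1 - exp(-t/tau_j) over the contact population."""
    return sum(1.0 - math.exp(-t_hold / tau) for tau in relaxation_times) / len(relaxation_times)

if __name__ == "__main__":
    broad = [10.0 ** k for k in range(-3, 6)]  # log-spaced relaxation times, 1 ms to 1e5 s
    single = [1.0]                             # a single relaxation time, 1 s
    for t in (0.01, 0.1, 1.0, 10.0, 100.0, 1000.0):
        print(f"t = {t:8.2f} s  broad spectrum: {ensemble_contact_growth(t, broad):.3f}"
              f"  single tau: {ensemble_contact_growth(t, single):.3f}")
```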

  15. On two-point boundary correlations in the six-vertex model with domain wall boundary conditions

    NASA Astrophysics Data System (ADS)

    Colomo, F.; Pronko, A. G.

    2005-05-01

    The six-vertex model with domain wall boundary conditions on an N × N square lattice is considered. The two-point correlation function describing the probability of having two vertices in a given state at opposite (top and bottom) boundaries of the lattice is calculated. It is shown that this two-point boundary correlator is expressible in a very simple way in terms of the one-point boundary correlators of the model on N × N and (N - 1) × (N - 1) lattices. In alternating sign matrix (ASM) language this result implies that the doubly refined x-enumerations of ASMs are just appropriate combinations of the singly refined ones.

  16. Model Breaking Points Conceptualized

    ERIC Educational Resources Information Center

    Vig, Rozy; Murray, Eileen; Star, Jon R.

    2014-01-01

    Current curriculum initiatives (e.g., National Governors Association Center for Best Practices and Council of Chief State School Officers 2010) advocate that models be used in the mathematics classroom. However, despite their apparent promise, there comes a point when models break, a point in the mathematical problem space where the model cannot,…

  17. Bothrops asper snake venom and its metalloproteinase BaP-1 activate the complement system. Role in leucocyte recruitment.

    PubMed Central

    Farsky, S H; Gonçalves, L R; Gutiérrez, J M; Correa, A P; Rucavado, A; Gasque, P; Tambourgi, D V

    2000-01-01

    The venom of the snake Bothrops asper, the most important poisonous snake in Central America, evokes an inflammatory response, the mechanisms of which are not well characterized. The objectives of this study were to investigate whether B. asper venom and its purified toxins (phospholipases and a metalloproteinase) activate the complement system, and the contribution of this effect to leucocyte recruitment. In vitro chemotaxis assays were performed using Boyden's chamber model to investigate the ability of serum incubated with venom and its purified toxins to induce neutrophil migration. The complement consumption by the venom was evaluated using an in vitro haemolytic assay. The importance of complement activation by the venom on neutrophil migration was investigated in vivo by injecting the venom into the peritoneal cavity of C5-deficient mice. Data obtained demonstrated that serum incubated with crude venom and its purified metalloproteinase BaP-1 is able to induce rat neutrophil chemotaxis, probably mediated by agent(s) derived from the complement system. This hypothesis was corroborated by the capacity of the venom to activate this system in vitro. The involvement of C5a in neutrophil chemotaxis induced by venom-activated serum was demonstrated by the abolition of migration when neutrophils were pre-incubated with anti-rat C5a receptor antibody. The relevance of the complement system in in vivo leucocyte mobilization was further demonstrated by the drastic decrease of this response in C5-deficient mice. Pre-incubation of serum with the soluble human recombinant complement receptor type 1 (sCR1) did not prevent the response induced by the venom, but abolished the migration evoked by metalloproteinase-activated serum. These data show the role of the complement system in bothropic envenomation and the participation of the metalloproteinase in this effect. Also, they suggest that the venom may contain other component(s) which can cause direct activation of C5a. PMID:11200361

  18. Double vibrational collision-induced Raman scattering by SF₆-N₂: Beyond the point-polarizable molecule model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verzhbitskiy, I. A.; Chrysos, M.; Kouzov, A. P.

    2010-11-15

    Collision-induced Raman bandshapes and zeroth-order spectral moments are calculated both for the depolarized spectrum and for the extremely weak isotropic spectrum of the SF₆(ν₁) + N₂(ν₁) double-Raman-scattering band. A critical comparison is made with experiments conducted recently by the authors [Phys. Rev. A 81, 012702 (2010); 81, 042705 (2010)]. The study of this transition, hitherto restricted to the model framework of two point-polarizable molecules, is now completed to incorporate effects beyond the point-molecule approximation. Whereas the extended model offers a few percent improvement in the depolarized spectrum, it reveals a huge 80% increase in the isotropic spectrum and its moment, owing essentially to the polarizability anisotropy of N₂. For both spectra, agreement between quantum-mechanical calculations and our experiments is found, provided that the best ab initio data for the (hyper)polarizability parameters are used. This refined study shows clearly the need to include all mechanisms and data to a high level of accuracy and allows one to decide between alternatives about difficult and controversial issues such as the intermolecular potential or the sensitive Hamaker force constants.

  19. Effect of the quartic gradient terms on the critical exponents of the Wilson-Fisher fixed point in O(N) models

    NASA Astrophysics Data System (ADS)

    Péli, Zoltán; Nagy, Sándor; Sailer, Kornel

    2018-02-01

    The effect of the O(∂⁴) terms of the gradient expansion on the anomalous dimension η and the correlation length's critical exponent ν of the Wilson-Fisher fixed point has been determined for the Euclidean 3-dimensional O(N) models with N ≥ 2. Wetterich's effective average action renormalization group method is used with field-independent derivative couplings and Litim's optimized regulator. It is shown that the critical theory is well approximated by the effective average action preserving O(N) symmetry with an accuracy of O(η).

  20. A second generation distributed point polarizable water model.

    PubMed

    Kumar, Revati; Wang, Fang-Fang; Jenness, Glen R; Jordan, Kenneth D

    2010-01-07

    A distributed point polarizable model (DPP2) for water, with explicit terms for charge penetration, induction, and charge transfer, is introduced. The DPP2 model accurately describes the interaction energies in small and large water clusters and also gives an average internal energy per molecule and radial distribution functions of liquid water in good agreement with experiment. A key to the success of the model is its accurate description of the individual terms in the n-body expansion of the interaction energies.
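
    The n-body expansion credited above for the model's accuracy is the generic decomposition E = Σ E_i + Σ ΔE_ij + Σ ΔE_ijk + ..., where each correction term is what remains after subtracting all lower-order contributions from the energy of a sub-cluster. The sketch below computes the 1-, 2- and 3-body terms for an arbitrary energy function; the placeholder pairwise energy is an assumption for illustration and does not reproduce the DPP2 terms themselves.

```python
from itertools import combinations

def many_body_decomposition(fragments, energy):
    """Generic n-body expansion of a cluster interaction energy, truncated at 3-body terms.

    `fragments` is a list of fragment descriptors (e.g. water monomers) and
    `energy(subset)` returns the total energy of that sub-cluster.
    """
    E1 = {(i,): energy([fragments[i]]) for i in range(len(fragments))}
    E2, E3 = {}, {}
    for i, j in combinations(range(len(fragments)), 2):
        E2[(i, j)] = energy([fragments[i], fragments[j]]) - E1[(i,)] - E1[(j,)]
    for i, j, k in combinations(range(len(fragments)), 3):
        E3[(i, j, k)] = (energy([fragments[i], fragments[j], fragments[k]])
                         - E2[(i, j)] - E2[(i, k)] - E2[(j, k)]
                         - E1[(i,)] - E1[(j,)] - E1[(k,)])
    return E1, E2, E3

# Toy usage with a fake pairwise-additive energy, so all 3-body terms vanish:
frags = [0.0, 1.0, 2.5]    # 1-D "positions" standing in for fragments
def pairwise(cluster):
    return -sum(1.0 / abs(a - b) for a, b in combinations(cluster, 2))

E1, E2, E3 = many_body_decomposition(frags, pairwise)
print("2-body terms:", E2)
print("3-body terms (zero for a pairwise-additive toy energy):", E3)
```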

  1. Two-point functions in a holographic Kondo model

    NASA Astrophysics Data System (ADS)

    Erdmenger, Johanna; Hoyos, Carlos; O'Bannon, Andy; Papadimitriou, Ioannis; Probst, Jonas; Wu, Jackson M. S.

    2017-03-01

    We develop the formalism of holographic renormalization to compute two-point functions in a holographic Kondo model. The model describes a (0 + 1)-dimensional impurity spin of a gauged SU(N) interacting with a (1 + 1)-dimensional, large-N, strongly-coupled Conformal Field Theory (CFT). We describe the impurity using Abrikosov pseudo-fermions, and define an SU(N)-invariant scalar operator O built from a pseudo-fermion and a CFT fermion. At large N the Kondo interaction is of the form O†O, which is marginally relevant, and generates a Renormalization Group (RG) flow at the impurity. A second-order mean-field phase transition occurs in which O condenses below a critical temperature, leading to the Kondo effect, including screening of the impurity. Via holography, the phase transition is dual to holographic superconductivity in (1 + 1)-dimensional Anti-de Sitter space. At all temperatures, spectral functions of O exhibit a Fano resonance, characteristic of a continuum of states interacting with an isolated resonance. In contrast to Fano resonances observed for example in quantum dots, our continuum and resonance arise from a (0 + 1)-dimensional UV fixed point and RG flow, respectively. In the low-temperature phase, the resonance comes from a pole in the Green's function of the form −i⟨O⟩², which is characteristic of a Kondo resonance.

  2. Graphene microsheets enter cells through spontaneous membrane penetration at edge asperities and corner sites

    PubMed Central

    Li, Yinfeng; Yuan, Hongyan; von dem Bussche, Annette; Creighton, Megan; Hurt, Robert H.; Kane, Agnes B.; Gao, Huajian

    2013-01-01

    Understanding and controlling the interaction of graphene-based materials with cell membranes is key to the development of graphene-enabled biomedical technologies and to the management of graphene health and safety issues. Very little is known about the fundamental behavior of cell membranes exposed to ultrathin 2D synthetic materials. Here we investigate the interactions of graphene and few-layer graphene (FLG) microsheets with three cell types and with model lipid bilayers by combining coarse-grained molecular dynamics (MD), all-atom MD, analytical modeling, confocal fluorescence imaging, and electron microscopic imaging. The imaging experiments show edge-first uptake and complete internalization for a range of FLG samples of 0.5- to 10-μm lateral dimension. In contrast, the simulations show large energy barriers relative to kBT for membrane penetration by model graphene or FLG microsheets of similar size. More detailed simulations resolve this paradox by showing that entry is initiated at corners or asperities that are abundant along the irregular edges of fabricated graphene materials. Local piercing by these sharp protrusions initiates membrane propagation along the extended graphene edge and thus avoids the high energy barrier calculated in simple idealized MD simulations. We propose that this mechanism allows cellular uptake of even large multilayer sheets of micrometer-scale lateral dimension, which is consistent with our multimodal bioimaging results for primary human keratinocytes, human lung epithelial cells, and murine macrophages. PMID:23840061

  3. AGT, N-Burge partitions and W_N minimal models

    NASA Astrophysics Data System (ADS)

    Belavin, Vladimir; Foda, Omar; Santachiara, Raoul

    2015-10-01

    Let B^{p,p',H}_{N,n} be a conformal block, with n consecutive channels χ_ι, ι = 1, ..., n, in the conformal field theory M^{p,p'}_N × M^H, where M^{p,p'}_N is a W_N minimal model, generated by chiral spin-2, ..., spin-N currents, and labeled by two co-prime integers p and p', 1 < p < p', while M^H is a free boson conformal field theory. B^{p,p',H}_{N,n} is the expectation value of vertex operators between an initial and a final state. Each vertex operator is labelled by a charge vector that lives in the weight lattice of the Lie algebra A_{N-1}, spanned by the weight vectors ω_1, ..., ω_{N-1}. We restrict our attention to conformal blocks with vertex operators whose charge vectors point along ω_1. The charge vectors that label the initial and final states can point in any direction.

  4. Asymmetric interaction of point defects and heterophase interfaces in ZrN/TaN multilayered nanofilms.

    PubMed

    Lao, Yuanxia; Hu, Shuanglin; Shi, Yunlong; Deng, Yu; Wang, Fei; Du, Hao; Zhang, Haibing; Wang, Yuan

    2017-01-05

    Materials with a high density of heterophase interfaces, which are capable of absorbing and annihilating radiation-induced point defects, can exhibit a superior radiation tolerance. In this paper, we investigated the interaction behaviors of point defects and heterophase interfaces by implanting helium atoms into the ZrN/TaN multilayered nanofilms. It was found that the point defect-interface interaction on the two sides of the ZrN/TaN interface was asymmetric, likely due to the difference in the vacancy formation energies of ZrN and TaN. The helium bubbles could migrate from the ZrN layers into the TaN layers through the heterophase interfaces, resulting in a better crystallinity of the ZrN layers and a complete amorphization of the TaN layers. The findings provided some clues to the fundamental behaviors of point defects near the heterophase interfaces, which make us re-examine the design rules of advanced radiation-tolerant materials.

  5. Asymmetric interaction of point defects and heterophase interfaces in ZrN/TaN multilayered nanofilms

    NASA Astrophysics Data System (ADS)

    Lao, Yuanxia; Hu, Shuanglin; Shi, Yunlong; Deng, Yu; Wang, Fei; Du, Hao; Zhang, Haibing; Wang, Yuan

    2017-01-01

    Materials with a high density of heterophase interfaces, which are capable of absorbing and annihilating radiation-induced point defects, can exhibit a superior radiation tolerance. In this paper, we investigated the interaction behaviors of point defects and heterophase interfaces by implanting helium atoms into the ZrN/TaN multilayered nanofilms. It was found that the point defect-interface interaction on the two sides of the ZrN/TaN interface was asymmetric, likely due to the difference in the vacancy formation energies of ZrN and TaN. The helium bubbles could migrate from the ZrN layers into the TaN layers through the heterophase interfaces, resulting in a better crystallinity of the ZrN layers and a complete amorphization of the TaN layers. The findings provided some clues to the fundamental behaviors of point defects near the heterophase interfaces, which make us re-examine the design rules of advanced radiation-tolerant materials.

  6. Asymmetric interaction of point defects and heterophase interfaces in ZrN/TaN multilayered nanofilms

    PubMed Central

    Lao, Yuanxia; Hu, Shuanglin; Shi, Yunlong; Deng, Yu; Wang, Fei; Du, Hao; Zhang, Haibing; Wang, Yuan

    2017-01-01

    Materials with a high density of heterophase interfaces, which are capable of absorbing and annihilating radiation-induced point defects, can exhibit a superior radiation tolerance. In this paper, we investigated the interaction behaviors of point defects and heterophase interfaces by implanting helium atoms into the ZrN/TaN multilayered nanofilms. It was found that the point defect-interface interaction on the two sides of the ZrN/TaN interface was asymmetric, likely due to the difference in the vacancy formation energies of ZrN and TaN. The helium bubbles could migrate from the ZrN layers into the TaN layers through the heterophase interfaces, resulting in a better crystallinity of the ZrN layers and a complete amorphization of the TaN layers. The findings provided some clues to the fundamental behaviors of point defects near the heterophase interfaces, which make us re-examine the design rules of advanced radiation-tolerant materials. PMID:28053307

  7. The Effects of 4-Hydroxybenzoic Acid Identified from Bamboo (Dendrocalamus asper) Shoots on Kv1.4 Channel

    PubMed Central

    Mohamad, Fatin H.; Wong, Jia Hui; Mohamad, Habsah; Ismail, Abdul Hadi; Mohamed Yusoff, Abdul Aziz; Osman, Hasnah; Wong, Kok Tong; Idris, Zamzuri; Abdullah, Jafri Malin

    2018-01-01

    Background Bamboo shoot has been used as a treatment for epilepsy in traditional Chinese medicine for generations to treat neuronal disorders such as convulsions, dizziness and headaches. 4-hydroxybenzoic acid (4-hba) is a non-flavonoid phenol found abundantly in Dendrocalamus asper shoots (bamboo), fruits (strawberries and apples) and flowers. Kv1.4 is a rapidly inactivating Shaker-related member of the voltage-gated potassium channels with two inactivation mechanisms, the fast N-type and slow C-type. It plays vital roles in repolarisation, hyperpolarisation and signaling the restoration of resting membrane potential through the regulation of the movement of K+ across the cellular membrane. Methods Chemical compounds from Dendrocalamus asper bamboo shoots were purified and identified as major palmitic acids mixed with other minor fatty acids, palmitic acid, 4-hydroxybenzaldehyde, lauric acid, 4-hydroxybenzoic acid and cholest-4-ene-3-one. The response of synthetic 4-hydroxybenzoic acid was tested on the Kv1.4 potassium channel, which was injected into viable oocytes extracted from Xenopus laevis. The currents were detected by two-microelectrode voltage clamp, with a holding potential starting from −80 mV and 20 mV steps up to +80 mV. Readings of treatments with 0.1% DMSO, 4-hba concentrations and K channel blockers were taken at +60 mV. The ratio of tail/peak amplitude is the index of the activity of the Kv1.4 channels, with n ≥ 6 (number of oocytes tested). The decreases of the ratios at five different concentrations (1 μM, 10 μM, 100 μM, 1 mM and 2.5 mM) were compared with 0.1% DMSO as the control. Results All concentrations showed statistically significant results with P < 0.05 except for 100 μM. The normalised currents at the 4-hba concentrations were compared with potassium channel blockers (TEA and 4-AP) and all groups showed statistically significant results. This study also showed that the time taken for each concentration to affect Kv1.4 does not play

  8. Experimental development of low-frequency shear modulus and attenuation measurements in mated rock fractures: Shear mechanics due to asperity contact area changes with normal stress

    DOE PAGES

    Saltiel, Seth; Selvadurai, Paul A.; Bonner, Brian P.; ...

    2017-02-16

    Reservoir core measurements can help guide seismic monitoring of fluid-induced pressure variations in tight fractured reservoirs including those targeted for supercritical CO2 injection. We present the first seismic-frequency ‘room-dry’ measurements of fracture specific shear stiffness, using artificially fractured standard granite samples with different degrees of mating, a well-mated tensile fracture from a dolomite reservoir core, as well as simple roughened polymethyl methacrylate (PMMA) surfaces. We have adapted a low-frequency (0.01 to 100 Hz) shear modulus and attenuation apparatus to explore the seismic signature of fractures and understand the mechanics of asperity contacts under a range of normal stress conditions. Our instrument is unique in its ability to measure at low normal stresses (0.5 – 20 MPa), simulating 'open' fractures in shallow or high fluid pressure reservoirs. The accuracy of our instrument is demonstrated by calibration and comparison to ultrasonic measurements and low-frequency direct shear measurements of intact samples from the literature. Pressure sensitive film was used to measure real contact area of the fracture surfaces. The fractured shear modulus for the majority of the samples shows an exponential dependence on real contact area. A simple numerical model, with one bonded circular asperity, predicts this behavior and matches the data for the simple PMMA surfaces. The rock surfaces reach their intact moduli at lower contact area than the model predicts, likely due to more complex geometry. Lastly, we apply our results to a Linear-Slip Interface Model to estimate reflection coefficients and calculate shear wave time delays due to the lower wave velocities through the fractured zone. We find that cross-well surveys could detect even well-mated hard rock fractures assuming the availability of high repeatability acquisition systems.
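
    The Linear-Slip Interface step mentioned at the end of this abstract can be sketched as follows. For the standard Schoenberg-type linear-slip interface between identical elastic half-spaces at normal incidence, continuity of traction plus a displacement jump proportional to traction gives a shear-wave reflection coefficient R = -ix/(1 - ix) with x = ωZ/(2κ), where κ is the fracture specific shear stiffness and Z = ρVs the shear impedance. The rock properties and stiffness values below are assumed round numbers for illustration, not the paper's measured values.

```python
import numpy as np

# Normal-incidence SH reflection off a linear-slip (Schoenberg-type) interface
# between identical half-spaces:  R = -i*x / (1 - i*x),  x = omega*Z / (2*kappa).
# Assumed granite-like properties; kappa values are illustrative only.
rho = 2650.0                 # density (kg/m^3)
vs = 3000.0                  # shear-wave speed (m/s)
Z = rho * vs                 # shear impedance (kg m^-2 s^-1)

def shear_reflection(freq_hz, kappa):
    """Reflection coefficient magnitude and phase for specific stiffness kappa (Pa/m)."""
    x = 2.0 * np.pi * freq_hz * Z / (2.0 * kappa)
    R = -1j * x / (1.0 - 1j * x)
    return np.abs(R), np.angle(R)

freqs = np.array([10.0, 100.0, 1000.0])      # Hz
for kappa in (1e11, 1e12, 1e13):             # Pa/m, spanning 'open' to well-mated
    mag, _ = shear_reflection(freqs, kappa)
    print(f"kappa = {kappa:.0e} Pa/m  ->  |R| at 10/100/1000 Hz:", np.round(mag, 4))
```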

  9. Experimental development of low-frequency shear modulus and attenuation measurements in mated rock fractures: Shear mechanics due to asperity contact area changes with normal stress

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saltiel, Seth; Selvadurai, Paul A.; Bonner, Brian P.

    Reservoir core measurements can help guide seismic monitoring of fluid-induced pressure variations in tight fractured reservoirs including those targeted for supercritical CO2 injection. We present the first seismic-frequency ‘room-dry’ measurements of fracture specific shear stiffness, using artificially fractured standard granite samples with different degrees of mating, a well-mated tensile fracture from a dolomite reservoir core, as well as simple roughened polymethyl methacrylate (PMMA) surfaces. We have adapted a low-frequency (0.01 to 100 Hz) shear modulus and attenuation apparatus to explore the seismic signature of fractures and understand the mechanics of asperity contacts under a range of normal stress conditions. Our instrument is unique in its ability to measure at low normal stresses (0.5 – 20 MPa), simulating 'open' fractures in shallow or high fluid pressure reservoirs. The accuracy of our instrument is demonstrated by calibration and comparison to ultrasonic measurements and low-frequency direct shear measurements of intact samples from the literature. Pressure sensitive film was used to measure real contact area of the fracture surfaces. The fractured shear modulus for the majority of the samples shows an exponential dependence on real contact area. A simple numerical model, with one bonded circular asperity, predicts this behavior and matches the data for the simple PMMA surfaces. The rock surfaces reach their intact moduli at lower contact area than the model predicts, likely due to more complex geometry. Lastly, we apply our results to a Linear-Slip Interface Model to estimate reflection coefficients and calculate shear wave time delays due to the lower wave velocities through the fractured zone. We find that cross-well surveys could detect even well-mated hard rock fractures assuming the availability of high repeatability acquisition systems.

  10. Smooth random change point models.

    PubMed

    van den Hout, Ardo; Muniz-Terrera, Graciela; Matthews, Fiona E

    2011-03-15

    Change point models are used to describe processes over time that show a change in direction. An example of such a process is cognitive ability, where a decline a few years before death is sometimes observed. A broken-stick model consists of two linear parts and a breakpoint where the two lines intersect. Alternatively, models can be formulated that imply a smooth change between the two linear parts. Change point models can be extended by adding random effects to account for variability between subjects. A new smooth change point model is introduced and examples are presented that show how change point models can be estimated using functions in R for mixed-effects models. The Bayesian inference using WinBUGS is also discussed. The methods are illustrated using data from a population-based longitudinal study of ageing, the Cambridge City over 75 Cohort Study. The aim is to identify how many years before death individuals experience a change in the rate of decline of their cognitive ability. Copyright © 2010 John Wiley & Sons, Ltd.
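
    A minimal sketch of the broken-stick mean function and one common smooth variant (a quadratic "bent-cable" transition of half-width gamma; this is an illustrative smoothing choice, not necessarily the exact formulation of the paper) is given below; the random effects and the R/WinBUGS estimation machinery described above are omitted.

```python
import numpy as np

def bent_cable(t, b0, b1, b2, tau, gamma):
    """Broken-stick mean with a smooth quadratic transition of half-width gamma.

    As gamma -> 0 this reduces to the classical broken-stick model with a
    breakpoint at tau; b1 is the pre-change slope and b1 + b2 the post-change slope.
    """
    t = np.asarray(t, dtype=float)
    u = t - tau
    q = np.where(u < -gamma, 0.0,
                 np.where(u > gamma, u, (u + gamma) ** 2 / (4.0 * gamma)))
    return b0 + b1 * t + b2 * q

# Illustration (assumed numbers): cognitive score flat until ~5 years before death,
# then declining by 1.5 points per year.
years_before_death = np.linspace(-15.0, 0.0, 7)
score = bent_cable(years_before_death, b0=25.0, b1=0.0, b2=-1.5, tau=-5.0, gamma=1.0)
print(np.round(score, 2))
```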

  11. Coseismic slip of two large Mexican earthquakes from teleseismic body waveforms - Implications for asperity interaction in the Michoacan plate boundary segment

    NASA Astrophysics Data System (ADS)

    Mendoza, Carlos

    1993-05-01

    The distributions and depths of coseismic slip are derived for the October 25, 1981 Playa Azul and September 21, 1985 Zihuatanejo earthquakes in western Mexico by inverting the recorded teleseismic body waves. Rupture during the Playa Azul earthquake appears to have occurred in two separate zones both updip and downdip of the point of initial nucleation, with most of the slip concentrated in a circular region of 15-km radius downdip from the hypocenter. Coseismic slip occurred entirely within the area of reduced slip between the two primary shallow sources of the Michoacan earthquake that occurred on September 19, 1985, almost 4 years later. The slip of the Zihuatanejo earthquake was concentrated in an area adjacent to one of the main sources of the Michoacan earthquake and appears to be the southeastern continuation of rupture along the Cocos-North America plate boundary. The zones of maximum slip for the Playa Azul, Zihuatanejo, and Michoacan earthquakes may be considered asperity regions that control the occurrence of large earthquakes along the Michoacan segment of the plate boundary.

  12. Venom from the snake Bothrops asper Garman. Purification and characterization of three phospholipases A2

    PubMed Central

    Anagón, Alejandro C.; Molinar, Ricardo R.; Possani, Lourival D.; Fletcher, Paul L.; Cronan, John E.; Julia, Jordi Z.

    1980-01-01

    The water-soluble venom of Bothrops asper Garman (San Juan Evangelista, Veracruz, México) showed 15 polypeptide bands on polyacrylamide-gel electrophoresis. This material exhibited phospholipase, hyaluronidase, N-benzoyl-l-arginine ethyl hydrolase, N-benzoyl-l-tyrosine ethyl hydrolase and phosphodiesterase activity, but no alkaline phosphatase or acid phosphatase activity. Fractionation on Sephadex G-75 afforded seven protein fractions, which were apparently less toxic than the whole venom (LD50=4.3μg/g mouse wt.). Subsequent separation of the phospholipase-positive fraction (II) on DEAE-cellulose with potassium phosphate buffers (pH7.55) gave several fractions, two being phospholipase-positive (II.6 and II.8). These fractions were further purified on DEAE-cellulose columns with potassium phosphate buffers (pH8.6). Fraction II.8.4 was rechromatographed in the same DEAE-cellulose column, giving a pure protein designated phospholipase 1. The fraction II.6.3 was further separated by gel disc electrophoresis yielding two more pure proteins designated phospholipase 2 and phospholipase 3. Analysis of phospholipids hydrolysed by these enzymes has shown that all three phospholipases belong to type A2. Amino acid analysis has shown that phospholipase A2 (type 1) has 97 residues with a calculated mol.wt. of 10978±11. Phospholipase A2 (type 2) has 96 residues with a mol.wt. of 10959±11. Phospholipase A2 (type 3) has 266 residues with 16 half-cystine residues and a calculated mol.wt. of 29042±31. Automated Edman degradation showed the N-terminal sequence to be: Asx-Leu-Trp-Glx-Phe-Gly-Glx-Met-Met-Ser-Asx-Val-Met-Arg-Lys-Asx-Val-Val-Phe-Lys-Tyr-Leu- for phospholipase A2 (type 2). PMID:7387631

  13. Present coupling along the Peruvian subduction asperity that devastated Lima while breaking during the 1746 earthquake

    NASA Astrophysics Data System (ADS)

    Cavalié, O.; Chlieh, M.; Villegas Lanza, J. C.

    2016-12-01

    Subduction zones are particularly prone to generating large earthquakes due to their wide lateral extension. In order to understand where, and possibly when, large earthquakes will occur, observation of interseismic deformation provides key information because it allows mapping of the asperities that accumulate stress on the plate interface. The South American subduction zone is one of the longest worldwide, running all along the west coast of the continent. Combined with the relatively fast convergence rate between the Nazca plate and the South American continent, Chile and Peru regularly experience M>7.5 earthquakes. In this study, we focused on the Peruvian subduction margin and more precisely on the Central segment containing Lima, where the seismic risk is the highest in the country due to the large population that lives in the Peruvian capital. On the Central segment (10°S and 15°S), we used over 50 GPS interseismic measurements from campaign and continuous sites, as well as InSAR data, to map coupling along the subduction interface. GPS data come from the Peruvian GPS network and InSAR data are from the Envisat satellite. We selected two tracks covering the central segment (including Lima) and with enough SAR image acquisitions between 2003 and 2010 to get a robust deformation estimate. GPS and InSAR data show a consistent tectonic signal with a maximum of surface displacement by the coast: the maximum horizontal velocity from GPS is about 20 mm and InSAR finds 12-13 mm in the LOS component. In addition, InSAR reveals lateral variations along the coast: the maximum motion is measured around Lima (11°S) and fades on either side. By inverting the geodetic data, we were able to map the coupling along the segment. The inversion reveals a main asperity where interseismic stress is accumulating. However, compared to previously published models based on GPS only, the coupling in the central segment appears more heterogeneous. Finally, we compared the deficit of seismic moment accumulating in the

  14. Present coupling along the Peruvian subduction asperity that devastated Lima while breaking during the 1746 earthquake

    NASA Astrophysics Data System (ADS)

    Cavalié, O.; Chlieh, M.; Villegas Lanza, J. C.

    2017-12-01

    Subduction zones are particularly prone to generating large earthquakes due to their wide lateral extension. In order to understand where, and possibly when, large earthquakes will occur, observation of interseismic deformation provides key information because it allows mapping of the asperities that accumulate stress on the plate interface. The South American subduction zone is one of the longest worldwide, running all along the west coast of the continent. Combined with the relatively fast convergence rate between the Nazca plate and the South American continent, Chile and Peru regularly experience M>7.5 earthquakes. In this study, we focused on the Peruvian subduction margin and more precisely on the Central segment containing Lima, where the seismic risk is the highest in the country due to the large population that lives in the Peruvian capital. On the Central segment (10°S and 15°S), we used over 50 GPS interseismic measurements from campaign and continuous sites, as well as InSAR data, to map coupling along the subduction interface. GPS data come from the Peruvian GPS network and InSAR data are from the Envisat satellite. We selected two tracks covering the central segment (including Lima) and with enough SAR image acquisitions between 2003 and 2010 to get a robust deformation estimate. GPS and InSAR data show a consistent tectonic signal with a maximum of surface displacement by the coast: the maximum horizontal velocity from GPS is about 20 mm and InSAR finds 12-13 mm in the LOS component. In addition, InSAR reveals lateral variations along the coast: the maximum motion is measured around Lima (11°S) and fades on either side. By inverting the geodetic data, we were able to map the coupling along the segment. The inversion reveals a main asperity where interseismic stress is accumulating. However, compared to previously published models based on GPS only, the coupling in the central segment appears more heterogeneous. Finally, we compared the deficit of seismic moment accumulating in the

  15. Microseismic Analysis of Fracture of an Intact Rock Asperity Traversing a Sawcut Fault

    NASA Astrophysics Data System (ADS)

    Mclaskey, G.; Lockner, D. A.

    2017-12-01

    Microseismic events carry information related to stress state, fault geometry, and other subsurface properties, but their relationship to large and potentially damaging earthquakes is not well defined. We conducted laboratory rock mechanics experiments that highlight the interaction between a sawcut fault and an asperity composed of an intact rock "pin". The sample is a 76 mm diameter cylinder of Westerly granite with a 21 mm diameter cylinder (the pin) of intact Westerly granite that crosses the sawcut fault. Upon loading to 80 MPa in a triaxial machine, we first observed a slip event that ruptured the sawcut fault, slipped about 35 mm, but was halted by the rock pin. With continued loading, the rock pin failed in a swarm of thousands of M -7 seismic events similar to the localized microcracking that occurs during the final fracture nucleation phase in an intact rock sample. Once the pin was fractured to a critical point, it permitted complete rupture events on the sawcut fault (stick-slip instabilities). No seismicity was detected on the sawcut fault plane until the pin was sheared. Subsequent slip events were preceded by tens of foreshocks, all located on the fault plane. We also identified an aseismic zone on the fault plane surrounding the fractured rock pin. A post-mortem analysis of the sample showed a thick gouge layer where the pin intersected the fault, suggesting that this gouge propped open the fault and prevented microseismic events in its vicinity. This experiment is an excellent case study in microseismicity since the events separate neatly into three categories: slip on the sawcut fault, fracture of the intact rock pin, and off-fault seismicity associated with pin-related rock joints. The distinct locations, timing, and focal mechanisms of the different categories of microseismic events allow us to study how their occurrence is related to the mechanics of the deforming rock.

  16. An Empirical Point Error Model for TLS Derived Point Clouds

    NASA Astrophysics Data System (ADS)

    Ozendi, Mustafa; Akca, Devrim; Topan, Hüseyin

    2016-06-01

    The random error pattern of point clouds has a significant effect on the quality of the final 3D model. The magnitude and distribution of random errors should be modelled numerically. This work aims at developing such an anisotropic point error model, specifically for terrestrial laser scanner (TLS) acquired 3D point clouds. A priori precisions of the basic TLS observations, which are the range, horizontal angle and vertical angle, are determined by predefined and practical measurement configurations, performed at real-world test environments. A priori precisions of the horizontal (σθ) and vertical (σα) angles are constant for each point of a data set, and can directly be determined through repetitive scanning of the same environment. In our practical tests, precisions of the horizontal and vertical angles were found to be σθ = ±36.6 cc and σα = ±17.8 cc, respectively. On the other hand, the a priori precision of the range observation (σρ) is assumed to be a function of range, incidence angle of the incoming laser ray, and reflectivity of the object surface. Hence, it is a variable, and is computed for each point individually by employing an empirically developed formula varying as σρ = ±2 mm to ±12 mm for a FARO Focus X330 laser scanner. This procedure was followed by the computation of error ellipsoids of each point using the law of variance-covariance propagation. The direction and size of the error ellipsoids were computed by the principal components transformation. The usability and feasibility of the model was investigated in real-world scenarios. These investigations validated the suitability and practicality of the proposed method.
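
    The variance-covariance propagation step described above can be sketched as follows: map the per-point precisions of (range ρ, horizontal angle θ, vertical angle α) into a Cartesian covariance matrix through the Jacobian of the spherical-to-Cartesian conversion, then take its eigen-decomposition to obtain the error-ellipsoid axes. The angle convention and the conversion of the angular precisions (assumed here to be centesimal seconds, 1 cc = 10⁻⁴ gon) are assumptions for illustration, not the paper's exact conventions.

```python
import numpy as np

CC_TO_RAD = (np.pi / 200.0) * 1e-4   # assuming 'cc' denotes 1e-4 gon

def error_ellipsoid(rho, theta, alpha, s_rho, s_theta_cc, s_alpha_cc):
    """Propagate (range, horizontal angle, vertical angle) precisions to a
    Cartesian covariance and return the error-ellipsoid semi-axes and directions.

    Assumed convention: x = rho*cos(a)*cos(t), y = rho*cos(a)*sin(t), z = rho*sin(a),
    with the vertical angle a measured from the horizon.
    """
    s_t, s_a = s_theta_cc * CC_TO_RAD, s_alpha_cc * CC_TO_RAD
    ca, sa, ct, st = np.cos(alpha), np.sin(alpha), np.cos(theta), np.sin(theta)
    J = np.array([[ca * ct, -rho * ca * st, -rho * sa * ct],
                  [ca * st,  rho * ca * ct, -rho * sa * st],
                  [sa,       0.0,            rho * ca     ]])
    C_obs = np.diag([s_rho ** 2, s_t ** 2, s_a ** 2])
    C_xyz = J @ C_obs @ J.T                    # law of variance-covariance propagation
    eigval, eigvec = np.linalg.eigh(C_xyz)     # principal components transformation
    return np.sqrt(eigval), eigvec             # 1-sigma semi-axes and their directions

# Example with the precisions quoted in the abstract and an assumed range/geometry:
axes, _ = error_ellipsoid(rho=20.0, theta=0.3, alpha=0.1,
                          s_rho=0.004, s_theta_cc=36.6, s_alpha_cc=17.8)
print("1-sigma semi-axes (m):", np.round(axes, 5))
```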

  17. The three-point function as a probe of models for large-scale structure

    NASA Astrophysics Data System (ADS)

    Frieman, Joshua A.; Gaztanaga, Enrique

    1994-04-01

    We analyze the consequences of models of structure formation for higher order (n-point) galaxy correlation functions in the mildly nonlinear regime. Several variations of the standard Omega = 1 cold dark matter model with scale-invariant primordial perturbations have recently been introduced to obtain more power on large scales, Rp is approximately 20/h Mpc, e.g., low matter-density (nonzero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower et al. We show that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale dependence leads to a dramatic decrease of the hierarchical amplitudes QJ at large scales, r is greater than or approximately Rp. Current observational constraints on the three-point amplitudes Q3 and S3 can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.
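
    For orientation, the hierarchical three-point amplitude discussed above is conventionally defined from the connected three-point function ζ and the two-point function ξ, and the skewness parameter S3 from moments of the smoothed density contrast δ (standard definitions from the large-scale structure literature, quoted here for reference):

```latex
Q_3 \;=\; \frac{\zeta(r_{12}, r_{23}, r_{31})}
               {\xi(r_{12})\,\xi(r_{23}) + \xi(r_{23})\,\xi(r_{31}) + \xi(r_{31})\,\xi(r_{12})},
\qquad
S_3 \;=\; \frac{\langle \delta^{3} \rangle}{\langle \delta^{2} \rangle^{2}} .
```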

  18. TARDEC FIXED HEEL POINT (FHP): DRIVER CAD ACCOMMODATION MODEL VERIFICATION REPORT

    DTIC Science & Technology

    2017-11-09

    Easy-to-use Computer-Aided Design (CAD) tools, known as accommodation models, are needed by the ground vehicle... designers when developing the interior workspace for the occupant. The TARDEC Fixed Heel Point (FHP): Driver CAD Accommodation Model described in this... is intended to provide the composite boundaries representing the body of the defined target design population, including posture prediction

  19. Triple point temperature of neon isotopes: Dependence on nitrogen impurity and sealed-cell model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pavese, F.; Steur, P. P. M.; Giraudi, D.

    2013-09-11

    This paper illustrates a study conducted at INRIM, to further check how some quantities influence the value of the triple point temperature of the neon high-purity isotopes ²⁰Ne and ²²Ne. The influence of nitrogen as a chemical impurity in neon is critical with regard to the present best total uncertainty achieved in the measurement of these triple points, but only one determination is available in the literature. Checks are reported, performed on two different samples of ²²Ne known to contain a N₂ amount of 157·10⁻⁶, using two different models of sealed cells. The model of the cell can, in principle, have some effects on the shape of the melting plateau or on the triple point temperature observed for the sample sealed in it. This can be due to cell thermal parameters, or because the INRIM cell element mod. c contains many copper wires closely packed, which can, in principle, constrain the interface and induce a premelting-like effect. The reported results on a cell mod. Bter show no evident effect from the cell model and provide a value for the effect of N₂ on the Ne liquidus point of 8.6(1.9) μK per ppm N₂, only slightly different from the literature datum.
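
    As a quick check of the figures quoted above (assuming the quoted sensitivity applies linearly over this impurity range), the 157·10⁻⁶ (157 ppm) N₂ content of the ²²Ne samples corresponds to an expected liquidus-point shift of roughly 8.6 μK/ppm × 157 ppm ≈ 1.35 mK:

```python
# Rough linear estimate (assumption: the quoted sensitivity holds over this range).
sensitivity_uK_per_ppm = 8.6        # from the abstract, +/- 1.9 uK per ppm N2
n2_content_ppm = 157.0              # N2 amount of the 22Ne samples
shift_mK = sensitivity_uK_per_ppm * n2_content_ppm / 1000.0
uncert_mK = 1.9 * n2_content_ppm / 1000.0
print(f"expected liquidus-point shift: {shift_mK:.2f} +/- {uncert_mK:.2f} mK")
```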

  20. Aspergillus asper sp. nov. and Aspergillus collinsii sp. nov., from Aspergillus section Usti.

    PubMed

    Jurjevic, Zeljko; Peterson, Stephen W

    2016-07-01

    In sampling fungi from the built environment, two isolates that could not confidently be placed in described species were encountered. Phenotypic analysis suggested that they belonged in Aspergillus sect. Usti. In order to verify the sectional placement and to assure that they were undescribed rather than phenotypically aberrant isolates, DNA was isolated and sequenced at the beta-tubulin, calmodulin, internal transcribed spacer and RNA polymerase II loci and sequences compared with those from other species in the genus Aspergillus. At each locus, each new isolate was distant from existing species. Phylogenetic trees calculated from these data and GenBank data for species of the section Usti excluded the placement of these isolates in existing species, with statistical support. Because they were excluded from existing taxa, the distinct species Aspergillus asper (type strain NRRL 35910T) and Aspergillus collinsii (type strain NRRL 66196T) in sect. Usti are proposed to accommodate these strains.

  1. Model for Semantically Rich Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Poux, F.; Neuville, R.; Hallot, P.; Billen, R.

    2017-10-01

    This paper proposes an interoperable model for managing high dimensional point clouds while integrating semantics. Point clouds from sensors are a direct source of information physically describing a 3D state of the recorded environment. As such, they are an exhaustive representation of the real world at every scale: 3D reality-based spatial data. Their generation is increasingly fast, but processing routines and data models lack the knowledge needed to reason from information extraction rather than interpretation. The enhanced Smart Point Cloud model developed here brings intelligence to point clouds via three connected meta-models, while linking available knowledge and classification procedures that permit semantic injection. Interoperability drives the model's adaptation to potentially many applications through specialized domain ontologies. A first prototype is implemented in Python and a PostgreSQL database and allows semantic and spatial concepts to be combined for basic hybrid queries on different point clouds.

  2. The three-point function as a probe of models for large-scale structure

    NASA Technical Reports Server (NTRS)

    Frieman, Joshua A.; Gaztanaga, Enrique

    1993-01-01

    The consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly non-linear regime are analyzed. Several variations of the standard Omega = 1 cold dark matter model with scale-invariant primordial perturbations were recently introduced to obtain more power on large scales, R_p is approximately 20 h^-1 Mpc, e.g., low-matter-density (non-zero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower et al. It is shown that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale-dependence leads to a dramatic decrease of the hierarchical amplitudes Q_J at large scales, r is approximately greater than R_p. Current observational constraints on the three-point amplitudes Q_3 and S_3 can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.

  3. Influence of the Density Structure of the Caribbean Plate Forearc on the Static Stress State and Asperity Distribution along the Costa Rican Seismogenic Zone

    NASA Astrophysics Data System (ADS)

    Lücke, O. H.; Gutknecht, B. D.

    2014-12-01

    Most of the forearc region along the Central American Subduction Zone shows a series of trench-parallel, positive gravity anomalies with corresponding gravity lows along the trench and toward the coast. These features extend from Guatemala to northern Nicaragua. However, the Costa Rican segment of the forearc does not follow this pattern. In this region, the along-trench gravity low is segmented, the coastal low is absent, and the forearc gravity high is located onshore at the Nicoya Peninsula which overlies the seismogenic zone. Geodetic and seismological studies along the Costa Rican Subduction Zone suggest the presence of coupled areas beneath the Nicoya Peninsula prior to the 2012, magnitude Mw 7.6 earthquake. These areas had previously been associated with asperities. Previous publications have proposed a mechanical model for the generation of asperities along the Chilean convergent margin based on the structure of the overriding plate above the seismogenic zone in which dense igneous bodies disturb the state of stress on the seismogenic zone and may influence seismogenic processes. In Costa Rica, surface geology and gravity data indicate the presence of dense basalt/gabbro crust overlying the seismogenic zone where the coupling is present. Bouguer anomaly values in this region reach up to 120×10⁻⁵ m/s², which are the highest for Costa Rica. In this work, the state of stress on the Cocos-Caribbean plate interface is calculated based on the geometry and mass distribution of a 3D density model of the subduction zone as interpreted from gravity data from combined geopotential models. Results show a correlation between the coupled areas at the Nicoya Peninsula and the presence of stress anomalies on the plate interface. The stress anomalies are calculated for the normal component of the vertical stress on the seismogenic zone and are interpreted as being generated by the dense material which makes up the forearc in the area. The dense material of the Nicoya

  4. Exact results for the O(N) model with quenched disorder

    NASA Astrophysics Data System (ADS)

    Delfino, Gesualdo; Lamsen, Noel

    2018-04-01

    We use scale invariant scattering theory to exactly determine the lines of renormalization group fixed points for O(N)-symmetric models with quenched disorder in two dimensions. Random fixed points are characterized by two disorder parameters: a modulus that vanishes when approaching the pure case, and a phase angle. The critical lines fall into three classes depending on the values of the disorder modulus. Besides the class corresponding to the pure case, a second class has maximal value of the disorder modulus and includes Nishimori-like multicritical points as well as zero temperature fixed points. The third class contains critical lines that interpolate, as N varies, between the first two classes. For positive N, it contains a single line of infrared fixed points spanning the values of N from √2 − 1 to 1. The symmetry sector of the energy density operator is superuniversal (i.e. N-independent) along this line. For N = 2 a line of fixed points exists only in the pure case, but accounts also for the Berezinskii-Kosterlitz-Thouless phase observed in presence of disorder.

  5. Rate and State Friction Relation for Nanoscale Contacts: Thermally Activated Prandtl-Tomlinson Model with Chemical Aging

    NASA Astrophysics Data System (ADS)

    Tian, Kaiwen; Goldsby, David L.; Carpick, Robert W.

    2018-05-01

    Rate and state friction (RSF) laws are widely used empirical relationships that describe macroscale to microscale frictional behavior. They entail a linear combination of the direct effect (the increase of friction with sliding velocity due to the reduced influence of thermal excitations) and the evolution effect (the change in friction with changes in contact "state," such as the real contact area or the degree of interfacial chemical bonds). Recent atomic force microscope (AFM) experiments and simulations found that nanoscale single-asperity amorphous silica-silica contacts exhibit logarithmic aging (increasing friction with time) over several decades of contact time, due to the formation of interfacial chemical bonds. Here we establish a physically based RSF relation for such contacts by combining the thermally activated Prandtl-Tomlinson (PTT) model with an evolution effect based on the physics of chemical aging. This thermally activated Prandtl-Tomlinson model with chemical aging (PTTCA), like the PTT model, uses the loading point velocity for describing the direct effect, not the tip velocity (as in conventional RSF laws). Also, in the PTTCA model, the combination of the evolution and direct effects may be nonlinear. We present AFM data consistent with the PTTCA model whereby in aging tests, for a given hold time, static friction increases with the logarithm of the loading point velocity. Kinetic friction also increases with the logarithm of the loading point velocity at sufficiently high velocities, but at a different increasing rate. The discrepancy between the rates of increase of static and kinetic friction with velocity arises from the fact that appreciable aging during static contact changes the energy landscape. Our approach extends the PTT model, originally used for crystalline substrates, to amorphous materials. It also establishes how conventional RSF laws can be modified for nanoscale single-asperity contacts to provide a physically based friction
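
    The conventional rate-and-state relation the abstract starts from is μ = μ0 + a ln(V/V0) + b ln(V0 θ/Dc), with the aging-law state evolution dθ/dt = 1 − Vθ/Dc. The sketch below integrates a velocity-jump test with typical assumed parameters; it illustrates the direct and evolution effects referred to above, not the PTTCA model itself.

```python
import numpy as np

# Conventional rate-and-state friction (Dieterich aging law), shown only to
# illustrate the "direct effect" and "evolution effect"; parameters are typical
# assumed values, and the PTTCA model itself is not implemented here.
a, b = 0.010, 0.012            # direct- and evolution-effect coefficients
Dc = 1e-6                      # critical slip distance (m)
mu0, V0 = 0.60, 1e-6           # reference friction and reference velocity (m/s)

def velocity_jump(V1, V2, t_jump=10.0, t_end=20.0, dt=1e-3):
    """Integrate friction through a step change in sliding velocity V1 -> V2."""
    t = np.arange(0.0, t_end, dt)
    theta = Dc / V1                              # start at steady state
    mu = np.empty_like(t)
    for i, ti in enumerate(t):
        V = V1 if ti < t_jump else V2
        mu[i] = mu0 + a * np.log(V / V0) + b * np.log(V0 * theta / Dc)
        theta += (1.0 - V * theta / Dc) * dt     # aging-law state evolution
    return t, mu

t, mu = velocity_jump(1e-6, 1e-5)
i_before, i_after = np.searchsorted(t, [9.99, 10.05])
print("friction before the jump, just after, and at the new steady state:",
      round(mu[i_before], 4), round(mu[i_after], 4), round(mu[-1], 4))
```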

  6. Why are the prevalence and diversity of helminths in the endemic Pyrenean brook newt Calotriton asper (Amphibia, Salamandridae) so low?

    PubMed

    Comas, M; Ribas, A

    2015-03-01

    A cornerstone in parasitology is why some species or populations are more parasitized than others. Here we examine the influence of host characteristics and habitat on parasite prevalence. We studied the helminths parasitizing the Pyrenean brook newt Calotriton asper (n= 167), paying special attention to the relationship between parasites and ecological factors such as habitat, sex, ontogeny, body size and age of the host. We detected two species of parasites, Megalobatrachonema terdentatum (Nematoda: Kathlaniidae) and Brachycoelium salamandrae (Trematoda: Brachycoeliidae), with a prevalence of 5.99% and 1.2%, respectively. Marginally significant differences were found in the prevalence between sexes, with females being more parasitized than males. The present results show significant differences in the body length of paedomorphic and metamorphic individuals, the former being smaller. Nevertheless, no significant correlations between parasite prevalence and either newt body length, ontogenetic stage or age were found. In comparison with other Salamandridae living in ponds, prevalence and diversity values were low. This may be due to a long hibernation period, the species' lotic habitat and its reophilous lifestyle, which probably do not allow for a high parasite load.

  7. Model for a Ferromagnetic Quantum Critical Point in a 1D Kondo Lattice

    NASA Astrophysics Data System (ADS)

    Komijani, Yashar; Coleman, Piers

    2018-04-01

    Motivated by recent experiments, we study a quasi-one-dimensional model of a Kondo lattice with ferromagnetic coupling between the spins. Using bosonization and dynamical large-N techniques, we establish the presence of a Fermi liquid and a magnetic phase separated by a local quantum critical point, governed by the Kondo breakdown picture. Thermodynamic properties are studied and a gapless charged mode at the quantum critical point is highlighted.

  8. The Φ₄³ and Φ₆³ matricial QFT models have reflection positive two-point function

    NASA Astrophysics Data System (ADS)

    Grosse, Harald; Sako, Akifumi; Wulkenhaar, Raimar

    2018-01-01

    We extend our previous work (on D = 2) to give an exact solution of the Φ_D³ large-N matrix model (or renormalised Kontsevich model) in D = 4 and D = 6 dimensions. Induction proofs and the difficult combinatorics are unchanged compared with D = 2, but the renormalisation - performed according to Zimmermann - is much more involved. As the main result we prove that the Schwinger 2-point function resulting from the Φ_D³-QFT model on Moyal space satisfies, for real coupling constant, reflection positivity in D = 4 and D = 6 dimensions. The Källén-Lehmann mass spectrum of the associated Wightman 2-point function describes a scattering part |p|² ≥ 2μ² and an isolated broadened mass shell around |p|² = μ².

  9. Drainage Asperities on Subduction Megathrusts

    NASA Astrophysics Data System (ADS)

    Sibson, R. H.

    2012-12-01

    the stress-state in the forearc hanging-wall switches from compressional reverse-slip faulting before failure to extensional normal-slip faulting postfailure, as occurred during the 2011 Mw 9.0 Tohoku megathrust rupture. Mean stress and fault-normal stress then change from being greater than the vertical stress prefailure to less than the vertical stress postfailure. Postfailure reductions in overpressure are expected from a combination of poroelastic effects and fluid loss through fault-fracture networks, enhancing vertical permeability. Mineralised fault-fracture meshes in exhumed fore-arc assemblages (e.g. the Alaska-Juneau Au-quartz vein swarm) testify to the episodic discharge of substantial volumes of hydrothermal fluid (< tens of km³). Localized drainage from the subduction interface shear zone increases frictional strength significantly, giving rise to postfailure strength asperities. Anticipated strength increases from such fluid discharge depend on the magnitude of the drop in overpressure but are potentially large (< hundreds of MPa). The time to the subsequent failure is then governed by reaccumulation of fluid overpressure as well as shear stress along the subduction interface.

  10. A renormalization group model for the stick-slip behavior of faults

    NASA Technical Reports Server (NTRS)

    Smalley, R. F., Jr.; Turcotte, D. L.; Solla, S. A.

    1983-01-01

    A fault is treated as an array of asperities with a prescribed statistical distribution of strengths. For a linear array the stress is transferred to a single adjacent asperity, and for a two-dimensional array to three adjacent asperities. It is shown that the solutions bifurcate at a critical applied stress. At stresses less than the critical stress virtually no asperities fail on a large scale and the fault is locked. At the critical stress the solution bifurcates and asperity failure cascades away from the nucleus of failure. It is found that the stick-slip behavior of most faults can be attributed to the distribution of asperities on the fault. This explains the observation of stick-slip behavior on faults rather than stable sliding, why the observed level of seismicity on a locked fault is very small, and why the stress on a fault is less than that predicted by a standard value of the coefficient of friction.
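
    A minimal sketch of the kind of cascade described above is given below (a stand-in toy with equal load sharing and an assumed Weibull strength distribution, not the renormalization-group calculation itself): asperities have random strengths, the load of each failed asperity is redistributed over the survivors, and sweeping the applied stress shows an abrupt transition from essentially no failures to system-wide failure near a critical stress.

```python
import numpy as np

rng = np.random.default_rng(1)

def cascade_fraction(applied_stress, n=2000):
    """Fraction of asperities failed after the stress-transfer cascade settles.

    Equal-load-sharing toy model: each failed asperity's load is redistributed
    uniformly over all surviving asperities; strengths follow an assumed
    Weibull distribution.
    """
    strength = rng.weibull(2.0, n)            # prescribed strength distribution
    stress = np.full(n, float(applied_stress))
    alive = np.ones(n, dtype=bool)
    while True:
        failing = alive & (stress >= strength)
        if not failing.any():
            break
        load = stress[failing].sum()
        alive[failing] = False
        if not alive.any():
            break
        stress[alive] += load / alive.sum()   # redistribute load to survivors
    return 1.0 - alive.mean()

for s in (0.2, 0.4, 0.6, 0.8):
    print(f"applied stress {s:.1f}  ->  failed fraction {cascade_fraction(s):.3f}")
```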

  11. Strong Ground Motion Simulation and Source Modeling of the April 1, 2006 Tai-Tung Earthquake Using Empirical Green's Function Method

    NASA Astrophysics Data System (ADS)

    Huang, H.; Lin, C.

    2010-12-01

    The Tai-Tung earthquake (ML=6.2) occurred at the southeastern part of Taiwan on April 1, 2006. We examine the source model of this event using the seismograms observed by the CWBSN at five stations surrounding the source area. An objective estimation method was used to obtain the parameters N and C, which are needed for the empirical Green's function method of Irikura (1986). This method, called the “source spectral ratio fitting method”, gives estimates of the seismic moment ratio between a large and a small event and of their corner frequencies by fitting the observed source spectral ratio with the ratio of source spectra that obeys the model (Miyake et al., 1999). This method has the advantage of removing site effects in evaluating the parameters. The best source model of the 2006 Tai-Tung mainshock was estimated by comparing the observed waveforms with synthetics computed using the empirical Green's function method. The size of the asperity is about 3.5 km in length along the strike direction by 7.0 km in width along the dip direction. The rupture started at the left-bottom of the asperity and extended radially toward the right-upper direction.
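
    The spectral-ratio step referenced above (and in the two Tapu studies that follow) amounts to fitting the ratio of two omega-squared source spectra to the observed large-event/small-event spectral ratio: its low-frequency plateau gives the seismic moment ratio and its two bends give the corner frequencies, from which the empirical Green's function scaling parameters are conventionally taken as N ≈ fc_small/fc_large and C ≈ (M0_large/M0_small)/N³. The sketch below fits that functional form to synthetic numbers; it is an illustration of the general procedure, not the authors' code or data.

```python
import numpy as np
from scipy.optimize import curve_fit

def spectral_ratio(f, moment_ratio, fc_large, fc_small):
    """Ratio of two omega-squared source spectra (large event / small event)."""
    return moment_ratio * (1.0 + (f / fc_small) ** 2) / (1.0 + (f / fc_large) ** 2)

# Synthetic "observed" ratio (assumed values, for illustration only)
f = np.logspace(-1, 1.3, 60)                         # 0.1 - 20 Hz
truth = spectral_ratio(f, moment_ratio=400.0, fc_large=0.4, fc_small=2.8)
rng = np.random.default_rng(3)
observed = truth * rng.lognormal(0.0, 0.15, f.size)  # multiplicative noise

popt, _ = curve_fit(spectral_ratio, f, observed, p0=(100.0, 1.0, 5.0))
M0_ratio, fc_L, fc_S = popt
N = fc_S / fc_L                                      # conventional scaling relations
C = M0_ratio / N ** 3
print(f"moment ratio ~ {M0_ratio:.0f}, fc_large ~ {fc_L:.2f} Hz, fc_small ~ {fc_S:.2f} Hz")
print(f"=> N ~ {N:.1f}, C ~ {C:.2f}")
```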

  12. Strong Ground Motion Simulation and Source Modeling of the December 16, 1993 Tapu Earthquake, Taiwan, Using Empirical Green's Function Method

    NASA Astrophysics Data System (ADS)

    Huang, H.-C.; Lin, C.-Y.

    2012-04-01

    The Tapu earthquake (ML 5.7) occurred at the southwestern part of Taiwan on December 16, 1993. We examine the source model of this event using the seismograms observed by the CWBSN at eight stations surrounding the source area. An objective estimation method is used to obtain the parameters N and C, which are needed for the empirical Green's function method of Irikura (1986). This method, called the "source spectral ratio fitting method", gives estimates of the seismic moment ratio between a large and a small event and of their corner frequencies by fitting the observed source spectral ratio with the ratio of source spectra that obeys the model (Miyake et al., 1999). This method has the advantage of removing site effects in evaluating the parameters. The best source model of the 1993 Tapu mainshock is estimated by comparing the observed waveforms with the synthetic ones computed using the empirical Green's function method. The size of the asperity is about 2.1 km in length along the strike direction by 1.5 km in width along the dip direction. The rupture started at the right-bottom of the asperity and extended radially toward the left-upper direction.

  13. Strong Ground Motion Simulation and Source Modeling of the December 16, 1993 Tapu Earthquake, Taiwan, Using Empirical Green's Function Method

    NASA Astrophysics Data System (ADS)

    Huang, H.; Lin, C.

    2012-12-01

    The Tapu earthquake (ML 5.7) occurred at the southwestern part of Taiwan on December 16, 1993. We examine the source model of this event using the seismograms observed by the CWBSN at eight stations surrounding the source area. An objective estimation method is used to obtain the parameters N and C, which are needed for the empirical Green's function method of Irikura (1986). This method, called the "source spectral ratio fitting method", gives estimates of the seismic moment ratio between a large and a small event and of their corner frequencies by fitting the observed source spectral ratio with the ratio of source spectra that obeys the model (Miyake et al., 1999). This method has the advantage of removing site effects in evaluating the parameters. The best source model of the 1993 Tapu mainshock is estimated by comparing the observed waveforms with the synthetic ones computed using the empirical Green's function method. The size of the asperity is about 2.1 km in length along the strike direction by 1.5 km in width along the dip direction. The rupture started at the right-bottom of the asperity and extended radially toward the left-upper direction.

  14. Inferring Models of Bacterial Dynamics toward Point Sources

    PubMed Central

    Jashnsaz, Hossein; Nguyen, Tyler; Petrache, Horia I.; Pressé, Steve

    2015-01-01

    Experiments have shown that bacteria can be sensitive to small variations in chemoattractant (CA) concentrations. Motivated by these findings, our focus here is on a regime rarely studied in experiments: bacteria tracking point CA sources (such as food patches or even prey). In tracking point sources, the CA detected by bacteria may show very large spatiotemporal fluctuations which vary with distance from the source. We present a general statistical model to describe how bacteria locate point sources of food on the basis of stochastic event detection, rather than CA gradient information. We show how all model parameters can be directly inferred from single cell tracking data even in the limit of high detection noise. Once parameterized, our model recapitulates bacterial behavior around point sources such as the “volcano effect”. In addition, while the search by bacteria for point sources such as prey may appear random, our model identifies key statistical signatures of a targeted search for a point source given any arbitrary source configuration. PMID:26466373

  15. Thermal Effect on Fracture Integrity in Enhanced Geothermal Systems

    NASA Astrophysics Data System (ADS)

    Zeng, C.; Deng, W.; Wu, C.; Insall, M.

    2017-12-01

    In enhanced geothermal systems (EGS), cold fluid is injected and heated for electricity generation, and pre-existing fractures are the major conduits for fluid transport. Because the injected fluid is relatively cold, the rock-fluid temperature difference induces thermal stress along the fracture wall. Such large thermal stress could cause the failure of self-propping asperities and therefore change the fracture integrity, which could affect the heat recovery efficiency and fluid recycling. To study the thermal effect on fracture integrity, two mechanisms pertinent to thermal stress are proposed to cause asperity contact failure: (1) crushing between two paired asperities leads to failure at the contact area, and (2) thermal spalling expedites this process. Finite element modeling is used to investigate both failure mechanisms by idealizing the asperities as hemispheres. In the numerical analysis, we implemented a meso-scale damage model to investigate the coupled failure mechanism induced by the thermomechanical stress field and the original overburden pressure in the vicinity of the contact point. Our results show that both the overburden pressure and a critical temperature determine the threshold of asperity failure. Since the overburden pressure reflects the depth of fractures in an EGS and the critical temperature reflects the distance of fractures from the injection well, our ultimate goal is to locate the region of an EGS where the fracture integrity is vulnerable to this thermal effect and to estimate its influence.
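
    As a rough orientation for the loading discussed above, the magnitude of the thermal stress induced along a fracture wall that is constrained against in-plane expansion is often estimated with the standard thermoelastic relation below; this is an order-of-magnitude sketch, not the meso-scale damage model used in the study.

```latex
\sigma_{\mathrm{th}} \;\approx\; \frac{E\,\alpha\,\Delta T}{1-\nu}
```

    Here E is Young's modulus, α the linear thermal expansion coefficient, ΔT the rock-fluid temperature difference, and ν Poisson's ratio; larger ΔT (fractures closer to the injection well) and stiffer rock push the contact stresses toward the asperity failure threshold.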

  16. SEM Based CARMA Time Series Modeling for Arbitrary N.

    PubMed

    Oud, Johan H L; Voelkle, Manuel C; Driver, Charles C

    2018-01-01

    This article explains in detail the state space specification and estimation of first and higher-order autoregressive moving-average models in continuous time (CARMA) in an extended structural equation modeling (SEM) context for N = 1 as well as N > 1. To illustrate the approach, simulations will be presented in which a single panel model (T = 41 time points) is estimated for a sample of N = 1,000 individuals as well as for samples of N = 100 and N = 50 individuals, followed by estimating 100 separate models for each of the one-hundred N = 1 cases in the N = 100 sample. Furthermore, we will demonstrate how to test the difference between the full panel model and each N = 1 model by means of a subject-group-reproducibility test. Finally, the proposed analyses will be applied in an empirical example, in which the relationships between mood at work and mood at home are studied in a sample of N = 55 women. All analyses are carried out by ctsem, an R-package for continuous time modeling, interfacing to OpenMx.

  17. N-point statistics of large-scale structure in the Zel'dovich approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tassev, Svetlin, E-mail: tassev@astro.princeton.edu

    2014-06-01

    Motivated by the results presented in a companion paper, here we give a simple analytical expression for the matter n-point functions in the Zel'dovich approximation (ZA) both in real and in redshift space (including the angular case). We present numerical results for the 2-dimensional redshift-space correlation function, as well as for the equilateral configuration for the real-space 3-point function. We compare those to the tree-level results. Our analysis is easily extendable to include Lagrangian bias, as well as higher-order perturbative corrections to the ZA. The results should be especially useful for modelling probes of large-scale structure in the linear regime, such as the Baryon Acoustic Oscillations. We make the numerical code used in this paper freely available.

  18. Spacing distribution functions for 1D point island model with irreversible attachment

    NASA Astrophysics Data System (ADS)

    Gonzalez, Diego; Einstein, Theodore; Pimpinelli, Alberto

    2011-03-01

    We study the configurational structure of the point island model for epitaxial growth in one dimension. In particular, we calculate the island gap and capture zone distributions. Our model is based on an approximate description of nucleation inside the gaps. Nucleation is described by the joint probability density p_xy^n(x, y), which represents the probability density for nucleation at position x within a gap of size y. Our proposed functional form for p_xy^n(x, y) describes the statistical behavior of the system excellently. We compare our analytical model with extensive numerical simulations. Our model retains the most relevant physical properties of the system. This work was supported by the NSF-MRSEC at the University of Maryland, Grant No. DMR 05-20471, with ancillary support from the Center for Nanophysics and Advanced Materials (CNAM).

  19. Calculating the n-point correlation function with general and efficient python code

    NASA Astrophysics Data System (ADS)

    Genier, Fred; Bellis, Matthew

    2018-01-01

    There are multiple approaches to understanding the evolution of large-scale structure in our universe and with it the role of baryonic matter, dark matter, and dark energy at different points in history. One approach is to calculate the n-point correlation function estimator for galaxy distributions, sometimes choosing a particular type of galaxy, such as luminous red galaxies. The standard way to calculate these estimators is with pair counts (for the 2-point correlation function) and with triplet counts (for the 3-point correlation function). These are O(n^2) and O(n^3) problems, respectively, and with the number of galaxies that will be characterized in future surveys, having efficient and general code will be of increasing importance. Here we show a proof-of-principle approach to the 2-point correlation function that relies on pre-calculating galaxy locations in coarse "voxels", thereby reducing the total number of necessary calculations. The code is written in python, making it easily accessible and extensible, and is open-sourced to the community. Basic results and performance tests using SDSS/BOSS data will be shown and we discuss the application of this approach to the 3-point correlation function.
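
    The voxel idea can be illustrated with the short sketch below: galaxies are binned into coarse cells no smaller than the maximum separation of interest, so that pair counting only loops over neighbouring cells. The array names, voxel size, and toy catalogue are assumptions for illustration; this is not the authors' released code.

```python
# Hedged sketch of voxelized pair counting for the 2-point correlation
# function: only pairs in the same or adjacent voxels can be closer than
# r_max, so the O(n^2) brute-force loop is restricted to neighbouring cells.
import numpy as np
from collections import defaultdict
from itertools import product

def pair_counts(positions, r_max, voxel=None):
    """Count pairs with separation < r_max using a coarse voxel grid."""
    voxel = voxel or r_max
    grid = defaultdict(list)
    for i, p in enumerate(positions):
        grid[tuple((p // voxel).astype(int))].append(i)

    n_pairs = 0
    for key, members in grid.items():
        for off in product((-1, 0, 1), repeat=3):
            neigh = tuple(k + o for k, o in zip(key, off))
            for i in members:
                for j in grid.get(neigh, ()):
                    if j > i and np.linalg.norm(positions[i] - positions[j]) < r_max:
                        n_pairs += 1
    return n_pairs

rng = np.random.default_rng(0)
galaxies = rng.uniform(0.0, 100.0, size=(2000, 3))   # toy catalogue in a 100^3 box
print(pair_counts(galaxies, r_max=5.0))              # DD count; DR and RR follow the same pattern
```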

  20. An Observation of Repeating Events at local asperities during a Laboratory Stick-slip Experiment of a Saw-cut Cylindrical Lucite Sample

    NASA Astrophysics Data System (ADS)

    Gu, C.; Mighani, S.; Prieto, G. A.; Mok, U.; Evans, J. B.; Hager, B. H.; Toksoz, M. N.

    2017-12-01

    Repeating earthquakes have been found in subduction zones and interpreted as repeated ruptures of small local asperities. Repeating earthquakes have also been found in oil/gas fields, interpreted as the reactivation of pre-existing faults due to fluid injection/extraction. To mimic the rupture of a fault with local asperities, we designed a "stick-slip" experiment using a saw-cut cylindrical Lucite sample, which had sharp localized ridges parallel to the strike of the fault plane. The sample was subjected to conventional triaxial loading with a constant confining pressure of 10 MPa. The axial load was then increased to 6 MPa at a constant rate of 0.12 MPa/sec until sliding occurred along the fault plane. Ultrasonic acoustic emissions (AEs) were monitored with eight PZT sensors. Two cycles of AEs were detected, with an occurrence rate that decreased from the beginning to the end of each cycle while the relative magnitudes increased. Correlation analysis indicated that these AEs clustered into two groups: those with frequency content between 200-300 kHz and a second group with frequency content between 10-50 kHz. The locations of the high-frequency events, which have almost identical waveforms, show that these events come from the sharp localized ridges on the saw-cut plane. The locations of the low-frequency events show that they approached the high-frequency events during each cycle. In this single experiment, the proximity of the low-frequency events correlated with the subsequent triggering of large high-frequency repeating events.

  1. Accuracy limit of rigid 3-point water models

    NASA Astrophysics Data System (ADS)

    Izadi, Saeed; Onufriev, Alexey V.

    2016-08-01

    Classical 3-point rigid water models are most widely used due to their computational efficiency. Recently, we introduced a new approach to constructing classical rigid water models [S. Izadi et al., J. Phys. Chem. Lett. 5, 3863 (2014)], which permits a virtually exhaustive search for globally optimal model parameters in the sub-space that is most relevant to the electrostatic properties of the water molecule in the liquid phase. Here we apply the approach to develop a 3-point Optimal Point Charge (OPC3) water model. OPC3 is significantly more accurate than the commonly used water models of the same class (TIP3P and SPCE) in reproducing a comprehensive set of liquid bulk properties over a wide range of temperatures. Beyond bulk properties, we show that OPC3 predicts the intrinsic charge hydration asymmetry (CHA) of water — a characteristic dependence of hydration free energy on the sign of the solute charge — in very close agreement with experiment. Two other recent 3-point rigid water models, TIP3PFB and H2ODC, each developed by its own, completely different optimization method, approach the global accuracy optimum represented by OPC3 in both the parameter space and the accuracy of bulk properties. Thus, we argue that an accuracy limit of practical 3-point rigid non-polarizable models has effectively been reached; remaining accuracy issues are discussed.

  2. Gradient flow of O(N) nonlinear sigma model at large N

    DOE PAGES

    Aoki, Sinya; Kikuchi, Kengo; Onogi, Tetsuya

    2015-04-28

    Here, we study the gradient flow equation for the O(N) nonlinear sigma model in two dimensions at large N. We parameterize the solution of the field at flow time t in powers of the bare fields by introducing the coefficient function X_n for the n-th power term (n = 1, 3, ...). Reducing the flow equation by keeping only the contributions at leading order in large N, we obtain a set of equations for the X_n's, which can be solved iteratively starting from n = 1. For the n = 1 case, we find an explicit form of the exact solution. Using this solution, we show that the two-point function at finite flow time t is finite. As an application, we obtain the non-perturbative running coupling defined from the energy density. We also discuss the solution for the n = 3 case.

  3. A New Simplified Source Model to Explain Strong Ground Motions from a Mega-Thrust Earthquake - Application to the 2011 Tohoku Earthquake (Mw9.0) -

    NASA Astrophysics Data System (ADS)

    Nozu, A.

    2013-12-01

    A new simplified source model is proposed to explain strong ground motions from a mega-thrust earthquake. The proposed model is simpler, and involves fewer model parameters, than the conventional characterized source model, which itself is a simplified expression of the actual earthquake source. In the proposed model, the spatio-temporal distribution of slip within a subevent is not modeled. Instead, the source spectrum associated with the rupture of a subevent is modeled and assumed to follow the omega-square model. By multiplying the source spectrum with the path effect and the site amplification factor, the Fourier amplitude at a target site can be obtained. Then, combining it with the Fourier phase characteristics of a smaller event, the time history of strong ground motions from the subevent can be calculated. Finally, by summing up contributions from the subevents, strong ground motions from the entire rupture can be obtained. The source model consists of six parameters for each subevent, namely, longitude, latitude, depth, rupture time, seismic moment and corner frequency of the subevent. The finite size of a subevent can be taken into account because its corner frequency, which is inversely proportional to the length of the subevent, is included in the model. Thus, the proposed model is referred to as the 'pseudo point-source model'. To examine the applicability of the model, a pseudo point-source model was developed for the 2011 Tohoku earthquake. The model comprises nine subevents, located off Miyagi Prefecture through Ibaraki Prefecture. The velocity waveforms (0.2-1 Hz), the velocity envelopes (0.2-10 Hz) and the Fourier spectra (0.2-10 Hz) at 15 sites calculated with the pseudo point-source model agree well with the observed ones, indicating the applicability of the model. Then the results were compared with the results of a super-asperity (SPGA) model of the same earthquake (Nozu, 2012, AGU), which can be considered as an
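
    The recipe for one subevent can be sketched as below: an omega-square amplitude spectrum is multiplied by path and site factors, the Fourier phase of a smaller event is attached, and the product is inverse-transformed. All constants and the functional forms of the path and site factors are placeholders, not the values used for the Tohoku model.

```python
# Hedged sketch of the 'pseudo point-source' recipe: build the Fourier
# amplitude of one subevent from an omega-square source spectrum times simple
# path and site factors, borrow the Fourier phase of a small event, and
# inverse-transform. Every number and functional form here is a placeholder.
import numpy as np

dt, n = 0.01, 4096
freq = np.fft.rfftfreq(n, dt)
freq[0] = 1e-6                                     # avoid division by zero at f = 0

m0, fc = 1.0e18, 0.2                               # subevent moment (N m) and corner frequency (Hz)
R, Q, beta = 30.0e3, 100.0, 3500.0                 # distance (m), quality factor, S-wave speed (m/s)

source = m0 / (1.0 + (freq / fc) ** 2)             # omega-square source spectrum
path = np.exp(-np.pi * freq * R / (Q * beta)) / R  # anelastic attenuation and 1/R spreading
site = np.ones_like(freq)                          # site amplification factor (placeholder)
amplitude = source * path * site

# Fourier phase borrowed from a recorded small event (synthetic noise here).
small_event = np.random.default_rng(1).normal(size=n)
phase = np.angle(np.fft.rfft(small_event))

subevent_wave = np.fft.irfft(amplitude * np.exp(1j * phase), n=n)
# Summing such traces over all subevents, delayed by their rupture times,
# would give the motion from the entire rupture.
print(subevent_wave[:5])
```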

  4. Assimilating Flow Data into Complex Multiple-Point Statistical Facies Models Using Pilot Points Method

    NASA Astrophysics Data System (ADS)

    Ma, W.; Jafarpour, B.

    2017-12-01

    We develop a new pilot points method for conditioning discrete multiple-point statistical (MPS) facies simulation on dynamic flow data. While conditioning MPS simulation on static hard data is straightforward, calibration against nonlinear flow data is nontrivial. The proposed method generates conditional models from a conceptual model of geologic connectivity, known as a training image (TI), by strategically placing and estimating pilot points. To place pilot points, a score map is generated based on three sources of information: (i) the uncertainty in facies distribution, (ii) the model response sensitivity information, and (iii) the observed flow data. Once the pilot points are placed, the facies values at these points are inferred from production data and are used, along with available hard data at well locations, to simulate a new set of conditional facies realizations. While facies estimation at the pilot points can be performed using different inversion algorithms, in this study the ensemble smoother (ES) and its multiple data assimilation variant (ES-MDA) are adopted to update permeability maps from production data, which are then used to statistically infer facies types at the pilot point locations. The developed method combines the information in the flow data and the TI by using the former to infer facies values at select locations away from the wells and the latter to ensure consistent facies structure and connectivity away from measurement locations. Several numerical experiments are used to evaluate the performance of the developed method and to discuss its important properties.
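
    A minimal sketch of the pilot-point placement step is given below: the three information sources are normalized, summed into a score map, and the highest-scoring cells are selected. The equal weighting, the toy inputs, and all names are assumptions for illustration rather than the paper's exact scoring rule.

```python
# Hedged sketch of pilot-point placement from a score map built from
# (i) facies uncertainty across an ensemble, (ii) a sensitivity map, and
# (iii) observed-data mismatch projected onto the grid. Inputs are synthetic.
import numpy as np

rng = np.random.default_rng(0)
nx, ny, n_real = 50, 50, 100

facies_ensemble = rng.integers(0, 2, size=(n_real, nx, ny))   # 0/1 facies realizations
p_sand = facies_ensemble.mean(axis=0)
uncertainty = 1.0 - np.abs(2.0 * p_sand - 1.0)                # largest where p is near 0.5

sensitivity = rng.random((nx, ny))      # placeholder model-response sensitivity map
data_mismatch = rng.random((nx, ny))    # placeholder flow-data mismatch mapped onto the grid

def normalize(a):
    return (a - a.min()) / (a.max() - a.min() + 1e-12)

score = normalize(uncertainty) + normalize(sensitivity) + normalize(data_mismatch)

n_pilot = 20
best = np.argsort(score.ravel())[::-1][:n_pilot]
pilot_ij = np.column_stack(np.unravel_index(best, score.shape))
print(pilot_ij[:5])                     # (i, j) indices of the highest-scoring cells
```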

  5. Determination of the boiling-point distribution by simulated distillation from n-pentane through n-tetratetracontane in 70 to 80 seconds.

    PubMed

    Lubkowitz, Joaquin A; Meneghini, Roberto I

    2002-01-01

    This work presents the determination of boiling-point distributions by simulated distillation with direct-column heating rather than oven-column heating. Column-heating rates of 300 degrees C/min are obtained, yielding retention times of 73 s for n-tetratetracontane. The calibration curves of retention time versus boiling point, in the range of n-pentane to n-tetratetracontane, are identical to those obtained at slower oven-heating rates. The boiling-point distribution of the reference gas oil is compared with that obtained with column oven heating at rates of 15 to 40 degrees C/min. The results show boiling-point distribution values nearly the same (1-2 degrees F) as those obtained with oven column heating from the initial boiling point to 80% distilled off. Slightly larger differences (3-4 degrees F) are obtained for the interval from 80% distilled to the final boiling point. Nonetheless, allowed consensus differences are never exceeded. The precision of the boiling-point distributions (expressed as standard deviations) is 0.1-0.3% for the data obtained in the direct column-heating mode.

  6. Comprehensive overview of the Point-by-Point model of prompt emission in fission

    NASA Astrophysics Data System (ADS)

    Tudora, A.; Hambsch, F.-J.

    2017-08-01

    The investigation of prompt emission in fission is very important for understanding the fission process and for improving the quality of evaluated nuclear data required for new applications. In the last decade remarkable efforts have been made both in the development of prompt emission models and in the experimental investigation of the properties of fission fragments and of prompt neutron and γ-ray emission. The accurate experimental data concerning the prompt neutron multiplicity as a function of fragment mass and total kinetic energy for 252Cf(SF) and 235U(n,f) recently measured at JRC-Geel (as well as various other prompt emission data) allow a consistent and very detailed validation of the Point-by-Point (PbP) deterministic model of prompt emission. The PbP model results describe very well a large variety of experimental data, starting from the multi-parametric matrices of prompt neutron multiplicity ν(A,TKE) and γ-ray energy Eγ(A,TKE), which validate the model itself, passing through different average prompt emission quantities as a function of A (e.g., ν(A), Eγ(A), <ε>(A), etc.) and as a function of TKE (e.g., ν(TKE), Eγ(TKE)), up to the prompt neutron distribution P(ν) and the total average prompt neutron spectrum. The PbP model does not use free or adjustable parameters. To calculate the multi-parametric matrices it needs only data included in the Reference Input Parameter Library (RIPL) of the IAEA. To provide average prompt emission quantities as a function of A, as a function of TKE, and total average quantities, the multi-parametric matrices are averaged over reliable experimental fragment distributions. The PbP results are also in agreement with the results of the Monte Carlo prompt emission codes FIFRELIN, CGMF and FREYA. The good description of a large variety of experimental data proves the capability of the PbP model to be used in nuclear data evaluations and its reliability to predict prompt emission data for fissioning nuclei and incident energies for

  7. Impact of confinement housing on study end-points in the calf model of cryptosporidiosis.

    PubMed

    Graef, Geneva; Hurst, Natalie J; Kidder, Lance; Sy, Tracy L; Goodman, Laura B; Preston, Whitney D; Arnold, Samuel L M; Zambriski, Jennifer A

    2018-04-01

    Diarrhea is the second leading cause of death in children < 5 years globally, and the parasite genus Cryptosporidium is a leading cause of that diarrhea. The global disease burden attributable to cryptosporidiosis is substantial, and the only approved chemotherapeutic, nitazoxanide, has poor efficacy in HIV-positive children. Chemotherapeutic development is dependent on the calf model of cryptosporidiosis, which is the best approximation of human disease. However, the model is not consistently applied across research studies. Data collection commonly occurs using two different methods: Complete Fecal Collection (CFC), which requires use of confinement housing, and Interval Collection (IC), which permits use of box stalls. CFC mimics human challenge model methodology, but it is unknown whether confinement housing impacts study end-points and whether data gathered via this method are suitable for generalization to human populations. Using a modified crossover study design we compared CFC and IC and evaluated the impact of housing on study end-points. At birth, calves were randomly assigned to confinement (n = 14) or box stall housing (n = 9), were challenged with 5 x 10^7 C. parvum oocysts, and were followed for 10 days. Study end-points included fecal oocyst shedding, severity of diarrhea, degree of dehydration, and plasma cortisol. Calves in confinement had no significant differences in mean log oocysts enumerated per gram of fecal dry matter between CFC and IC samples (P = 0.6), nor were there diurnal variations in oocyst shedding (P = 0.1). Confinement-housed calves shed significantly more oocysts (P = 0.05), had higher plasma cortisol (P = 0.001), and required more supportive care (P = 0.0009) than calves in box stalls. Housing method confounds study end-points in the calf model of cryptosporidiosis. Due to increased stress, data collected from calves in confinement housing may not accurately estimate the efficacy of chemotherapeutics targeting C. parvum.

  8. MCNP-REN - A Monte Carlo Tool for Neutron Detector Design Without Using the Point Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abhold, M.E.; Baker, M.C.

    1999-07-25

    The development of neutron detectors makes extensive use of the predictions of detector response through the use of Monte Carlo techniques in conjunction with the point reactor model. Unfortunately, the point reactor model fails to accurately predict detector response in common applications. For this reason, the general Monte Carlo N-Particle code (MCNP) was modified to simulate the pulse streams that would be generated by a neutron detector and normally analyzed by a shift register. This modified code, MCNP - Random Exponentially Distributed Neutron Source (MCNP-REN), along with the Time Analysis Program (TAP), predicts neutron detector response without using the point reactor model, making it unnecessary for the user to decide whether or not the assumptions of the point model are met for their application. MCNP-REN is capable of simulating standard neutron coincidence counting as well as neutron multiplicity counting. Measurements of MOX fresh fuel made using the Underwater Coincidence Counter (UWCC) as well as measurements of HEU reactor fuel using the active neutron Research Reactor Fuel Counter (RRFC) are compared with calculations. The method used in MCNP-REN is demonstrated to be fundamentally sound and shown to eliminate the need to use the point model for detector performance predictions.
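
    The pulse-stream idea can be illustrated as below: detection times are generated and, for each trigger pulse, the pulses falling inside a delayed gate are counted, which is what a shift-register analyzer does. The rates, predelay, and gate width are arbitrary illustration values, and the purely Poisson stream used here carries no correlated (Reals) component.

```python
# Hedged sketch of the kind of output MCNP-REN feeds into time analysis:
# simulate a detector pulse stream and count, for each trigger pulse, how many
# other pulses fall in a (predelay + gate) window, as in shift-register
# coincidence counting. All settings below are illustrative.
import numpy as np

rng = np.random.default_rng(42)
rate, duration = 5.0e3, 10.0            # counts/s, seconds
pulses = np.sort(rng.uniform(0.0, duration, rng.poisson(rate * duration)))

predelay, gate = 4.5e-6, 64.0e-6        # assumed shift-register settings

def gated_counts(times, predelay, gate):
    """Pulses in (t + predelay, t + predelay + gate] for every trigger t."""
    lo = np.searchsorted(times, times + predelay, side="right")
    hi = np.searchsorted(times, times + predelay + gate, side="right")
    return hi - lo

rpa = gated_counts(pulses, predelay, gate)           # Reals + Accidentals gate
acc = gated_counts(pulses, predelay + 1.0e-3, gate)  # long-delay Accidentals gate
print(rpa.mean(), acc.mean())   # nearly equal here: a pure Poisson stream has no correlated pairs
```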

  9. Hierarchy of N-point functions in the ΛCDM and ReBEL cosmologies

    NASA Astrophysics Data System (ADS)

    Hellwing, Wojciech A.; Juszkiewicz, Roman; van de Weygaert, Rien

    2010-11-01

    In this work we investigate higher-order statistics for the ΛCDM and ReBEL scalar-interacting dark matter models by analyzing ensembles of 180 h^-1 Mpc dark matter N-body simulations. The N-point correlation functions and the related hierarchical amplitudes, such as skewness and kurtosis, are computed using the counts-in-cells method. Our studies demonstrate that the hierarchical amplitudes S_n of the scalar-interacting dark matter model deviate significantly from the values in the ΛCDM cosmology on scales comparable to and smaller than the screening length r_s of a given scalar-interacting model. The corresponding additional forces that enhance the total attractive force exerted on dark matter particles at galaxy scales lower the values of the hierarchical amplitudes S_n. We conclude that hypothetical additional exotic interactions in the dark matter sector should leave detectable markers in the higher-order correlation statistics of the density field. We focus in detail on the redshift evolution of the dark matter field's skewness and kurtosis. From this investigation we find that the deviations from the canonical ΛCDM model introduced by the presence of the "fifth" force attain a maximum value at redshifts around 0.5.
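
    For reference, the counts-in-cells estimate of the hierarchical amplitudes can be sketched as below, using the standard definitions S_3 = <δ³>/<δ²>² and S_4 = (<δ⁴> - 3<δ²>²)/<δ²>³. The Poisson cell counts are only a stand-in for counts measured in the simulated density field, and no shot-noise correction is applied.

```python
# Hedged sketch of hierarchical amplitudes from counts-in-cells, with
# S_n = <delta^n>_c / <delta^2>^(n-1). The random cell counts are a stand-in
# for counts measured on the N-body density field.
import numpy as np

rng = np.random.default_rng(3)
counts = rng.poisson(lam=20.0, size=100_000).astype(float)   # particles per cell

delta = counts / counts.mean() - 1.0
m2 = np.mean(delta ** 2)
m3 = np.mean(delta ** 3)                      # connected 3rd moment
m4 = np.mean(delta ** 4) - 3.0 * m2 ** 2      # connected 4th moment

S3 = m3 / m2 ** 2                             # skewness amplitude
S4 = m4 / m2 ** 3                             # kurtosis amplitude
print(f"S3 = {S3:.2f}, S4 = {S4:.2f}")
```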

  10. Modeling of Surface Geometric Structure State After Integrated Formed Milling and Finish Burnishing

    NASA Astrophysics Data System (ADS)

    Berczyński, Stefan; Grochała, Daniel; Grządziel, Zenon

    2017-06-01

    The article deals with computer-based modeling of burnishing a surface previously milled with a spherical cutter. This method of milling leaves traces, mainly asperities caused by the cutting crossfeed and the cutter diameter. The burnishing process - a surface plastic treatment - is accompanied by phenomena that take place in the contact zone between the burnishing ball and the milled surface. The authors present the method for preparing a finite element model and the methodology of tests for the assessment of height parameters of the surface geometrical structure (SGS). In the physical model the workpieces had a cuboidal shape with dimensions (width × height × length) of 2×1×4.5 mm. Because a cuboidal workpiece undergoes plastic deformation during burnishing, the nonlinearities of the milled item were taken into account. The physical model of the process assumed that the burnishing ball would be rolled perpendicularly to the linear milling traces. The model tests included the application of three different burnishing forces: 250 N, 500 N and 1000 N. The process modeling featured the contact and pressing of a ball into the workpiece surface until the desired force was attained; then the burnishing ball was rolled along a 2 mm section of the surface, and the burnishing force was gradually reduced until the ball left the contact zone. While rolling, the burnishing ball turned by a 23° angle. The cumulative diagrams depict plastic deformations of the modeled surfaces after milling and burnishing with the defined force values. The roughness of the idealized milled surface was calculated for the physical model under consideration, i.e. in an elementary section between profile peaks spaced at intervals of the crossfeed passes, where the milling feed fwm = 0.5 mm. Asperities after burnishing were also calculated for the same section. The differences between the obtained values and the mean values recorded during empirical experiments fall below 20%. The adopted simplification in after

  11. Rupture behaviors of the 2010 Jiashian and 2016 Meinong Earthquakes: Implication for interaction of two asperities on the Chishan Transfer Fault Zone in SW Taiwan.

    NASA Astrophysics Data System (ADS)

    Jian, P. R.; Hung, S. H.; Chen, Y. L.; Meng, L.; Tseng, T. L.

    2017-12-01

    After about 45 years of seismic quiescence, southwest Taiwan was struck in the last decade by two strong earthquakes, the 2010 Mw 6.2 Jiashian and the deadly 2016 Mw 6.4 Meinong earthquakes. The focal mechanisms and their aftershock distributions imply that both events occurred on NW-SE striking, shallow-dipping fault planes but at different depths of 21 and 16 km, respectively. Here we present MUSIC back-projection images using high-frequency P- and sP-waves recorded by the European and Australian seismic networks, a directivity analysis using global teleseismic P waves, and relocated aftershocks to characterize the rupture behaviors of the two mainshocks and explore the potential connection between them. The results for the Meinong event indicate a unilateral, subhorizontal rupture propagating NW-ward about 17 km and lasting 6-7 s [Jian et al., 2017]. For the Jiashian event, the rupture initiated at a greater depth of 21 km and then propagated both NW-ward and up-dip (~16°) on the fault plane, with a shorter rupture length of about 10 km and a duration of 4-5 s. The up-dip propagation is corroborated by the 3-D directivity analysis, in which the widths of P-wave pulses increase linearly with the directivity parameter. Moreover, relocation of aftershocks reveals that the Jiashian sequence is confined to a NW-SE elongated zone extending about 15 km and lying about 5 km shallower than the hypocenter. The Meinong aftershock sequence shows three clusters: one surrounding the mainshock hypocenter, another distributed northwestward and deeper (>20 km) off the rupture plane beneath Tainan, and a distant shallow-focus one (<10 km) beneath the southern Central Mountain Range. As evidenced by the similar focal mechanisms, rupture behaviors, and spatial configuration of the mainshock rupture zones and aftershock distributions, we attribute the Jiashian and Meinong earthquakes to two asperities on a buried oblique fault that has been reactivated recently, the NW-SE striking

  12. Elastic-plastic cube model for ultrasonic friction reduction via Poisson's effect.

    PubMed

    Dong, Sheng; Dapino, Marcelo J

    2014-01-01

    Ultrasonic friction reduction has been studied experimentally and theoretically. This paper presents a new elastic-plastic cube model which can be applied to various ultrasonic lubrication cases. A cube is used to represent all the contacting asperities of two surfaces. Friction force is considered as the product of the tangential contact stiffness and the deformation of the cube. Ultrasonic vibrations are projected onto three orthogonal directions, separately changing contact parameters and deformations, and hence the overall friction force. Experiments are conducted to examine ultrasonic friction reduction using different materials under normal loads that vary from 40 N to 240 N. Ultrasonic vibrations are generated both in the longitudinal and vertical (out-of-plane) directions by way of the Poisson effect. The tests show up to 60% friction reduction; model simulations describe the trends observed experimentally. Copyright © 2013 Elsevier B.V. All rights reserved.

  13. Laboratory studies of frictional sliding and the implications of precursory seismicity

    NASA Astrophysics Data System (ADS)

    Selvadurai, Paul A.

    scale roughness influenced the asperity-asperity interaction during the nucleation phase. Asperities in the seismogenic region were shown to exist close enough to each other that elastic communication (through the off-fault material) could not be neglected. Prior to gross fault rupture (i.e. the mainshock), we measured the propagation of a slow nucleating rupture into the relatively 'locked', seismogenic region of the fault. Slow slip dynamics were captured using slip sensors placed along the fault that measured a non-uniform slip profile leading up to failure. We found that the propagation of the slow rupture into the locked region was dependent on the normal force Fn. Higher Fn was found to slow the propagation of shear rupture into the locked region. Within the relatively 'locked' region, a noticeable increase in size and a more compact spatio-temporal distribution of foreshocks were measured when Fn was increased. In order to develop an understanding of the relationship between Fn and the resistance of the fault to slow rupture, a quasi-static finite element (FE) model was developed. The model used distributions of asperities measured directly from the pressure-sensitive film in a small section of the interface where foreshocks coalesced; specifically, the region where the slowly propagating slip front encountered the denser distribution of asperities. A single asperity was modeled and followed the Cattaneo partial slip asperity solution. As the shear force increased along the fault, the asperities in this model were able to accommodate tangential slip by entering a partial sliding regime; the central contact of each asperity remained adhered while sliding accumulated along its periphery. Partial slip on the asperity propagated inwards as the shear force was incrementally increased. A further increase in the shear force caused the asperity to enter a full sliding condition. Increasing confining loads caused increased stiffness and increased capacity to store
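
    For context, in the Cattaneo(-Mindlin) partial-slip solution referred to above, a sphere-on-flat contact of radius a under normal load P and tangential load Q < μP stays stuck only inside a central circle of radius c; the textbook result is quoted below and is not necessarily the exact formulation implemented in the FE model.

```latex
\frac{c}{a} \;=\; \left(1 - \frac{Q}{\mu P}\right)^{1/3}
```

    Slip therefore starts at the periphery of the contact (c → a as Q → 0) and full sliding is reached as Q → μP (c → 0), consistent with the partial-to-full sliding transition described for the modeled asperities.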

  14. A model for predicting wear rates in tooth enamel.

    PubMed

    Borrero-Lopez, Oscar; Pajares, Antonia; Constantino, Paul J; Lawn, Brian R

    2014-09-01

    It is hypothesized that wear of enamel is sensitive to the presence of sharp particulates in oral fluids and masticated foods. To this end, a generic model for predicting wear rates in brittle materials is developed, with specific application to tooth enamel. Wear is assumed to result from an accumulation of elastic-plastic micro-asperity events. Integration over all such events leads to a wear rate relation analogous to Archard's law, but with allowance for variation in asperity angle and compliance. The coefficient K in this relation quantifies the wear severity, with an arbitrary distinction between 'mild' wear (low K) and 'severe' wear (high K). Data from the literature and in-house wear-test experiments on enamel specimens in lubricant media (water, oil) with and without sharp third-body particulates (silica, diamond) are used to validate the model. Measured wear rates can vary over several orders of magnitude, depending on contact asperity conditions, accounting for the occurrence of severe enamel removal in some human patients (bruxing). Expressions for the depth removal rate and number of cycles to wear down occlusal enamel in the low-crowned tooth forms of some mammals are derived, with tooth size and enamel thickness as key variables. The role of 'hard' versus 'soft' food diets in determining evolutionary paths in different hominin species is briefly considered. A feature of the model is that it does not require recourse to specific material removal mechanisms, although processes involving microplastic extrusion and microcrack coalescence are indicated. Published by Elsevier Ltd.
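
    For reference, Archard's law in its usual form is quoted below; the relation derived in the paper is analogous but additionally allows for variation in asperity angle and compliance.

```latex
V \;=\; K\,\frac{W\,s}{H}
```

    Here V is the worn volume, W the normal load, s the sliding distance, H the hardness of the softer (enamel) surface, and K the dimensionless wear coefficient whose magnitude separates 'mild' from 'severe' wear.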

  15. Modeling hard clinical end-point data in economic analyses.

    PubMed

    Kansal, Anuraag R; Zheng, Ying; Palencia, Roberto; Ruffolo, Antonio; Hass, Bastian; Sorensen, Sonja V

    2013-11-01

    The availability of hard clinical end-point data, such as that on cardiovascular (CV) events among patients with type 2 diabetes mellitus, is increasing, and as a result there is growing interest in using hard end-point data of this type in economic analyses. This study investigated published approaches for modeling hard end-points from clinical trials and evaluated their applicability in health economic models with different disease features. A review of cost-effectiveness models of interventions in clinically significant therapeutic areas (CV diseases, cancer, and chronic lower respiratory diseases) was conducted in PubMed and Embase using a defined search strategy. Only studies integrating hard end-point data from randomized clinical trials were considered. For each study included, clinical input characteristics and modeling approach were summarized and evaluated. A total of 33 articles (23 CV, eight cancer, two respiratory) were accepted for detailed analysis. Decision trees, Markov models, discrete event simulations, and hybrids were used. Event rates were incorporated either as constant rates, time-dependent risks, or risk equations based on patient characteristics. Risks dependent on time and/or patient characteristics were used where major event rates were >1%/year in models with fewer health states (<7). Models of infrequent events or with numerous health states generally preferred constant event rates. The detailed modeling information and terminology varied, sometimes requiring interpretation. Key considerations for cost-effectiveness models incorporating hard end-point data include the frequency and characteristics of the relevant clinical events and how the trial data is reported. When event risk is low, simplification of both the model structure and event rate modeling is recommended. When event risk is common, such as in high risk populations, more detailed modeling approaches, including individual simulations or explicitly time-dependent event rates

  16. Geometrical and Structural Asperities on Fault Surfaces

    NASA Astrophysics Data System (ADS)

    Sagy, A.; Brodsky, E. E.; van der Elst, N.; Agosta, F.; di Toro, G.; Collettini, C.

    2007-12-01

    Earthquake dynamics are strongly affected by fault zone structure and geometry. Fault surface irregularities and the nearby structure control rupture nucleation and propagation, fault strength, near-field stress orientations and hydraulic properties. New field observations demonstrate the existence of asperities on faults, displayed as topographic bumps on the fault surface with hardening of the internal structure near them. Ground-based LiDAR measurements on more than 30 normal and strike-slip faults in different lithologies demonstrate that faults are not planar surfaces and that roughness is strongly dependent on fault displacement. In addition to the well-understood roughness exemplified by abrasive striations and fracture segmentation, we found semi-elliptical topographic bumps with wavelengths of a few meters. On many faults the bumps are not spread evenly over the surface, and some zones are bumpier than others. The bumps are most easily identified on faults with total displacement of dozens to hundreds of meters. Smaller-scale roughness on these faults is smoothed by abrasive processes. A key site in southern Oregon shows that the topographic bumps are closely tied to the internal structure of the fault zone. At this location, we combine LiDAR data with detailed structural analysis of the fault zone embedded in volcanic rocks. Here the bumps correlate with an abrupt change in the width of the cohesive cataclasite layer that is exposed under a thin ultracataclasite zone. In most of the exposures the cohesive layer is 10-20 cm thick. However, under protruding bumps the layer is always thickened, and its width can locally exceed one meter. Field and microscopic analyses show that the layer contains grains with dimensions ranging from less than 10 μm up to a few centimeters. There is clear evidence of internal flow, rotation and fracturing of the grains in the layer. X-ray diffraction measurements of samples from the layer show that the bulk

  17. Approximate Model for Turbulent Stagnation Point Flow.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dechant, Lawrence

    2016-01-01

    Here we derive an approximate turbulent self-similar model for a class of favorable pressure gradient wedge-like flows, focusing on the stagnation point limit. While the self-similar model provides a useful gross flow field estimate, this approach must be combined with a near-wall model to determine skin friction and, by Reynolds analogy, the heat transfer coefficient. The combined approach is developed in detail for the stagnation point flow problem, where turbulent skin friction and Nusselt number results are obtained. Comparison to the classical Van Driest (1958) result suggests overall reasonable agreement. Though the model is only valid near the stagnation region of cylinders and spheres, it nonetheless provides a reasonable model for overall cylinder and sphere heat transfer. The enhancement effect of free stream turbulence upon the laminar flow is used to derive a similar expression which is valid for turbulent flow. Examination of free-stream-enhanced laminar flow suggests that, rather than enhancing laminar flow behavior, free stream disturbances result in early transition to turbulent stagnation point behavior. Excellent agreement is shown between enhanced laminar flow and turbulent flow behavior for high levels of free stream turbulence, e.g. 5%. Finally, the blunt body turbulent stagnation results are shown to provide realistic heat transfer results for turbulent jet impingement problems.

  18. Statistical prescission point model of fission fragment angular distributions

    NASA Astrophysics Data System (ADS)

    John, Bency; Kataria, S. K.

    1998-03-01

    In light of recent developments in fission studies such as slow saddle to scission motion and spin equilibration near the scission point, the theory of fission fragment angular distribution is examined and a new statistical prescission point model is developed. The conditional equilibrium of the collective angular bearing modes at the prescission point, which is guided mainly by their relaxation times and population probabilities, is taken into account in the present model. The present model gives a consistent description of the fragment angular and spin distributions for a wide variety of heavy and light ion induced fission reactions.

  19. Multicritical points of the O(N) scalar theory in 2 < d < 4 for large N

    NASA Astrophysics Data System (ADS)

    Katsis, A.; Tetradis, N.

    2018-05-01

    We solve analytically the renormalization-group equation for the potential of the O(N)-symmetric scalar theory in the large-N limit and in dimensions 2 < d < 4, in order to look for nonperturbative fixed points that were found numerically in a recent study. We find new real solutions with singularities in the higher derivatives of the potential at its minimum, and complex solutions with branch cuts along the negative real axis.

  20. A point particle model of lightly bound skyrmions

    NASA Astrophysics Data System (ADS)

    Gillard, Mike; Harland, Derek; Kirk, Elliot; Maybee, Ben; Speight, Martin

    2017-04-01

    A simple model of the dynamics of lightly bound skyrmions is developed in which skyrmions are replaced by point particles, each carrying an internal orientation. The model accounts well for the static energy minimizers of baryon number 1 ≤ B ≤ 8 obtained by numerical simulation of the full field theory. For 9 ≤ B ≤ 23, a large number of static solutions of the point particle model are found, all closely resembling size B subsets of a face centred cubic lattice, with the particle orientations dictated by a simple colouring rule. Rigid body quantization of these solutions is performed, and the spin and isospin of the corresponding ground states extracted. As part of the quantization scheme, an algorithm to compute the symmetry group of an oriented point cloud, and to determine its corresponding Finkelstein-Rubinstein constraints, is devised.

  1. Earth observing system instrument pointing control modeling for polar orbiting platforms

    NASA Technical Reports Server (NTRS)

    Briggs, H. C.; Kia, T.; Mccabe, S. A.; Bell, C. E.

    1987-01-01

    An approach to instrument pointing control performance assessment for large multi-instrument platforms is described. First, instrument pointing requirements and reference platform control systems for the Eos Polar Platforms are reviewed. Performance modeling tools are then described, including NASTRAN models of two large platforms, a modal selection procedure utilizing a balanced realization method, and reduced-order platform models with core and instrument pointing control loops added. Time-history simulations of instrument pointing and stability performance in response to commanded slewing of adjacent instruments demonstrate the limits of tolerable slew activity. Simplified models of rigid body responses are also developed for comparison. Instrument pointing control methods required, in addition to the core platform control system, to meet instrument pointing requirements are considered.

  2. Development and evaluation of spatial point process models for epidermal nerve fibers.

    PubMed

    Olsbo, Viktor; Myllymäki, Mari; Waller, Lance A; Särkkä, Aila

    2013-06-01

    We propose two spatial point process models for the spatial structure of epidermal nerve fibers (ENFs) across human skin. The models derive from two point processes, Φb and Φe, describing the locations of the base and end points of the fibers. Each point of Φe (the end point process) is connected to a unique point in Φb (the base point process). In the first model, both Φe and Φb are Poisson processes, yielding a null model of uniform coverage of the skin by end points and general baseline results and reference values for moments of key physiologic indicators. The second model provides a mechanistic model to generate end points for each base, and we model the branching structure more directly by defining Φe as a cluster process conditioned on the realization of Φb as its parent points. In both cases, we derive distributional properties for observable quantities of direct interest to neurologists such as the number of fibers per base, and the direction and range of fibers on the skin. We contrast both models by fitting them to data from skin blister biopsy images of ENFs and provide inference regarding physiological properties of ENFs. Copyright © 2013 Elsevier Inc. All rights reserved.
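
    The second (cluster) model can be illustrated with the short simulation below: base points form a homogeneous Poisson process and each base receives a Poisson number of end points displaced by isotropic Gaussian offsets, i.e. a Thomas-type construction. The window size, intensity, mean number of fibers per base, and dispersion are illustrative values, not fitted ENF parameters.

```python
# Hedged sketch of a base/end point cluster construction: Poisson base points,
# each with a Poisson number of Gaussian-displaced end points. All parameter
# values are illustrative, not estimates from the biopsy data.
import numpy as np

rng = np.random.default_rng(7)
width, height = 1000.0, 1000.0          # observation window (e.g. micrometres)
lambda_base = 50.0 / (width * height)   # base-point intensity
mean_ends_per_base = 3.0
sigma = 30.0                            # spread of end points around their base

n_base = rng.poisson(lambda_base * width * height)
bases = rng.uniform((0.0, 0.0), (width, height), size=(n_base, 2))

end_points, parent = [], []
for i, b in enumerate(bases):
    k = rng.poisson(mean_ends_per_base)
    end_points.append(b + rng.normal(0.0, sigma, size=(k, 2)))
    parent.extend([i] * k)

ends = np.vstack(end_points) if end_points else np.empty((0, 2))
fibers_per_base = np.bincount(np.asarray(parent, dtype=int), minlength=n_base)
print(f"bases: {n_base}, end points: {len(ends)}, "
      f"mean fibers per base: {fibers_per_base.mean():.2f}")
```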

  3. Tunnel Point Cloud Filtering Method Based on Elliptic Cylindrical Model

    NASA Astrophysics Data System (ADS)

    Zhu, Ningning; Jia, Yonghong; Luo, Lun

    2016-06-01

    The large numbers of bolts and screws attached to the subway shield ring plates, along with the many metal stents and electrical equipment accessories mounted on the tunnel walls, cause the laser point cloud data to include many non-tunnel-section points (hereinafter referred to as non-points), which affects the accuracy of modeling and deformation monitoring. This paper proposes a filtering method for the point cloud based on an elliptic cylindrical model. The original laser point cloud data are first projected onto a horizontal plane, and a search algorithm is given to extract the edge points of both sides, which are then used to fit the tunnel central axis. Along the axis the point cloud is segmented regionally and then fitted iteratively as a smooth elliptic cylindrical surface. This processing enables the automatic filtering of the inner-wall non-points. Two groups of experiments showed consistent results: the method based on the elliptic cylindrical model can effectively filter out the non-points and meet the accuracy requirements for subway deformation monitoring. The method provides a new mode for the periodic monitoring of the all-around deformation of tunnel sections in routine subway operation and maintenance.

  4. Point Defects and p -Type Doping in ScN from First Principles

    NASA Astrophysics Data System (ADS)

    Kumagai, Yu; Tsunoda, Naoki; Oba, Fumiyasu

    2018-03-01

    Scandium nitride (ScN) has been intensively researched as a prototype of rocksalt nitrides and a potential counterpart of the wurtzite group IIIa nitrides. It also holds great promise for applications in various fields, including optoelectronics, thermoelectrics, spintronics, and piezoelectrics. We theoretically investigate the bulk properties, band-edge positions, chemical stability, and point defects, i.e., native defects, unintentionally doped impurities, and p -type dopants of ScN using the Heyd-Scuseria-Ernzerhof hybrid functional. We find several fascinating behaviors: (i) a high level for the valence-band maximum, (ii) the lowest formation energy among binary nitrides, (iii) high formation energies of native point defects, (iv) low formation energies of donor-type impurities, and (v) a p -type conversion by Mg doping. Furthermore, we uncover the origins of the Burstein-Moss shift commonly observed in ScN. Our work sheds light on a fundamental understanding of ScN in regard to its technological applications.

  5. Lung motion estimation using dynamic point shifting: An innovative model based on a robust point matching algorithm.

    PubMed

    Yi, Jianbing; Yang, Xuan; Chen, Guoliang; Li, Yan-Ran

    2015-10-01

    Image-guided radiotherapy is an advanced 4D radiotherapy technique that has been developed in recent years. However, respiratory motion causes significant uncertainties in image-guided radiotherapy procedures. To address these issues, an innovative lung motion estimation model based on robust point matching is proposed in this paper. An innovative robust point matching algorithm using dynamic point shifting is proposed to estimate patient-specific lung motion during free breathing from 4D computed tomography data. The correspondence of the landmark points is determined from the Euclidean distance between the landmark points and the similarity between the local images that are centered at points at the same time. To ensure that the points in the source image correspond to the points in the target image during other phases, the virtual target points are first created and shifted based on the similarity between the local image centered at the source point and the local image centered at the virtual target point. Second, the target points are shifted by the constrained inverse function mapping the target points to the virtual target points. The source point set and the shifted target point set are used to estimate the transformation function between the source image and the target image. The performance of the authors' method is evaluated on two publicly available DIR-lab and POPI-model lung datasets. For target registration errors computed on 750 landmark points in six phases of the DIR-lab dataset and 37 landmark points in ten phases of the POPI-model dataset, the mean and standard deviation obtained by the authors' method are 1.11 and 1.11 mm; they are 2.33 and 2.32 mm without considering image intensity, and 1.17 and 1.19 mm with sliding conditions. For the two phases of maximum inhalation and maximum exhalation in the DIR-lab dataset with 300 landmark points in each case, the mean and standard deviation of the target registration errors on the 3000 landmark points of ten

  6. The application of the pilot points in groundwater numerical inversion model

    NASA Astrophysics Data System (ADS)

    Hu, Bin; Teng, Yanguo; Cheng, Lirong

    2015-04-01

    Numerical inversion modeling of groundwater has been widely applied. Compared to traditional forward modeling, inversion modeling leaves more room for study. Zonation and cell-by-cell inversion are the conventional methods, and the pilot points method lies between them. The traditional inverse modeling method often uses software to divide the model into several zones, with only a few parameters to be inverted; however, this distribution is usually too simple, and the simulation results deviate. Cell-by-cell inversion would in theory recover the most realistic parameter distribution, but it requires great computational effort and a large quantity of survey data for geostatistical simulation of the area. Compared to these methods, the pilot points method distributes a set of points throughout the different model domains for parameter estimation. Property values are assigned to model cells by kriging to ensure heterogeneity of the parameters within geological units. This reduces the requirements on geostatistical characterization of the simulation area and bridges the gap between the above methods. Pilot points can save calculation time and increase the goodness of fit, and they also reduce the instability of the numerical model caused by large numbers of parameters, among other advantages. In this paper, we use pilot points in a field site whose structural formation heterogeneity and hydraulic parameters were unknown. We compare the inversion modeling results of the zonation and pilot point methods. Through comparative analysis, we explore the characteristics of pilot points in groundwater inversion modeling. First, the modeler generates an initial spatially correlated field given a geostatistical model based on the description of the case site, using the software Groundwater Vistas 6. Kriging is defined to obtain the values of the field functions (hydraulic conductivity) over the model domain on the basis of their values at measurement and pilot point locations, and the pilot points are then assigned to the interpolated field, which has been divided into 4
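
    The kriging step that spreads pilot-point values onto the model cells can be sketched as below. The exponential covariance, its range, and the toy pilot-point values are assumptions for illustration; in practice this interpolation is done by the kriging built into the groundwater modeling software.

```python
# Hedged sketch of ordinary kriging from pilot-point (plus hard-data) values
# onto a regular model grid. The exponential covariance and all numbers are
# illustrative assumptions.
import numpy as np

def ordinary_kriging(xy_known, v_known, xy_target, sill=1.0, corr_len=200.0):
    def cov(d):
        return sill * np.exp(-d / corr_len)          # exponential covariance

    n = len(xy_known)
    d_kk = np.linalg.norm(xy_known[:, None] - xy_known[None, :], axis=-1)
    A = np.ones((n + 1, n + 1))                      # kriging system with Lagrange multiplier
    A[:n, :n] = cov(d_kk)
    A[n, n] = 0.0

    out = np.empty(len(xy_target))
    for i, p in enumerate(xy_target):
        b = np.ones(n + 1)
        b[:n] = cov(np.linalg.norm(xy_known - p, axis=1))
        w = np.linalg.solve(A, b)
        out[i] = w[:n] @ v_known
    return out

rng = np.random.default_rng(1)
pilots = rng.uniform(0.0, 1000.0, size=(25, 2))      # pilot-point and hard-data locations
log_k = rng.normal(-4.0, 0.5, size=25)               # log10 hydraulic conductivity at those points
gx, gy = np.meshgrid(np.linspace(0, 1000, 40), np.linspace(0, 1000, 40))
cells = np.column_stack([gx.ravel(), gy.ravel()])
log_k_field = ordinary_kriging(pilots, log_k, cells).reshape(gx.shape)
print(log_k_field.shape)
```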

  7. Fixed points, stability, and intermittency in a shell model for advection of passive scalars

    PubMed

    Kockelkoren; Jensen

    2000-08-01

    We investigate the fixed points of a shell model for the turbulent advection of passive scalars introduced in Jensen, Paladin, and Vulpiani [Phys. Rev. A 45, 7214 (1992)]. The passive scalar field is driven by the velocity field of the popular Gledzer-Ohkitani-Yamada (GOY) shell model. The scaling behavior of the static solutions is found to differ significantly from Obukhov-Corrsin scaling θ_n ∼ k_n^(-1/3), which is only recovered in the limit where the diffusivity vanishes, D → 0. From the eigenvalue spectrum we show that any perturbation in the scalar will always damp out, i.e., the eigenvalues of the scalar are negative and are decoupled from the eigenvalues of the velocity. We estimate Lyapunov exponents and the intermittency parameters using a definition proposed by Benzi, Paladin, Parisi, and Vulpiani [J. Phys. A 18, 2157 (1985)]. The full model is found to be as chaotic as the GOY model, measured by the maximal Lyapunov exponent, but is more intermittent.

  8. Nonextensivity at the Circum-Pacific subduction zones-Preliminary studies

    NASA Astrophysics Data System (ADS)

    Scherrer, T. M.; França, G. S.; Silva, R.; de Freitas, D. B.; Vilar, C. S.

    2015-05-01

    Following the fragment-asperity interaction model introduced by Sotolongo-Costa and Posadas (2004) and revised by Silva et al. (2006), we try to explain the nonextensive effect in the context of the asperity model designed by Lay and Kanamori (1981). To address this issue, we used data from the NEIC catalog for the decade between 2001 and 2010 in order to investigate the so-called Circum-Pacific subduction zones. We propose a geophysical interpretation of the nonextensive parameter q. The results need further investigation; however, evidence of a correlation between the nonextensive parameter and the asperity model is shown, i.e., the q-value is higher for areas with larger asperities and stronger coupling.

  9. Pairwise-interaction extended point-particle model for particle-laden flows

    NASA Astrophysics Data System (ADS)

    Akiki, G.; Moore, W. C.; Balachandar, S.

    2017-12-01

    In this work we consider the pairwise interaction extended point-particle (PIEP) model for Euler-Lagrange simulations of particle-laden flows. By accounting for the precise location of neighbors the PIEP model goes beyond local particle volume fraction, and distinguishes the influence of upstream, downstream and laterally located neighbors. The two main ingredients of the PIEP model are (i) the undisturbed flow at any particle is evaluated as a superposition of the macroscale flow and a microscale flow that is approximated as a pairwise superposition of perturbation fields induced by each of the neighboring particles, and (ii) the forces and torque on the particle are then calculated from the undisturbed flow using the Faxén form of the force relation. The computational efficiency of the standard Euler-Lagrange approach is retained, since the microscale perturbation fields induced by a neighbor are pre-computed and stored as PIEP maps. Here we extend the PIEP force model of Akiki et al. [3] with a corresponding torque model to systematically include the effect of perturbation fields induced by the neighbors in evaluating the net torque. Also, we use DNS results from a uniform flow over two stationary spheres to further improve the PIEP force and torque models. We then test the PIEP model in three different sedimentation problems and compare the results against corresponding DNS to assess the accuracy of the PIEP model and improvement over the standard point-particle approach. In the case of two sedimenting spheres in a quiescent ambient the PIEP model is shown to capture the drafting-kissing-tumbling process. In cases of 5 and 80 sedimenting spheres a good agreement is obtained between the PIEP simulation and the DNS. For all three simulations, the DEM-PIEP was able to recreate, to a good extent, the results from the DNS, while requiring only a negligible fraction of the numerical resources required by the fully-resolved DNS.
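
    The two PIEP ingredients can be sketched as below: the undisturbed velocity at each particle is the macroscale flow plus a pairwise sum of neighbour-induced perturbations, and a force is then evaluated from that velocity. The Gaussian-decay "map" and the simple Stokes-drag closure are placeholders for the precomputed PIEP maps and the Faxén-form force relation of the actual model.

```python
# Hedged sketch of the pairwise superposition in PIEP-like models: each
# particle sees the macroscale flow plus neighbour-induced perturbations, and
# a force is computed from that undisturbed velocity. The perturbation "map"
# and the Stokes-drag closure are placeholders, not the published PIEP maps.
import numpy as np

mu, d_p = 1.8e-5, 1.0e-3                      # fluid viscosity (Pa s), particle diameter (m)
rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 0.02, size=(20, 3))    # particle positions (m)
u_macro = np.array([1.0, 0.0, 0.0])           # macroscale fluid velocity (m/s)

def neighbour_perturbation(r_vec):
    """Placeholder perturbation map: a deficit decaying with distance."""
    r = np.linalg.norm(r_vec)
    return -0.1 * u_macro * np.exp(-(r / (3.0 * d_p)) ** 2)

forces = np.zeros_like(pos)
for i in range(len(pos)):
    u_undisturbed = u_macro.copy()
    for j in range(len(pos)):
        if j != i:
            u_undisturbed += neighbour_perturbation(pos[i] - pos[j])
    forces[i] = 3.0 * np.pi * mu * d_p * u_undisturbed   # Stokes drag on a fixed particle

print(forces[:3])
```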

  10. Alfvén wave dynamics at the neighborhood of a 2.5D magnetic null-point

    NASA Astrophysics Data System (ADS)

    Sabri, S.; Vasheghani Farahani, S.; Ebadi, H.; Hosseinpour, M.; Fazel, Z.

    2018-05-01

    The aim of the present study is to highlight the energy transfer via the interaction of magnetohydrodynamic waves with a 2.5D magnetic null-point in a finite plasma-β regime of the solar corona. An initially symmetric Alfvén pulse at a specific distance from a magnetic null-point is kicked towards the isothermal null-point. A shock-capturing Godunov-type PLUTO code is used to solve the set of ideal magnetohydrodynamic equations in the context of wave-plasma energy transfer. As the Alfvén wave propagates towards the magnetic null-point it experiences a reduction in speed, which ends up releasing energy along the separatrices. In this way, owing to the Alfvén wave, a series of events takes place that contributes towards coronal heating. Nonlinearly induced waves are by-products of the torsional Alfvén interaction with magnetic null-points. The energy of these induced waves, which are fast magnetoacoustic (transverse) and slow magnetoacoustic (longitudinal) waves, is supplied by the Alfvén wave. The nonlinearly induced density perturbations are proportional to the Alfvén wave energy loss. This supplies energy for the propagation of fast and slow magnetoacoustic waves, where in contrast to the fast wave the slow wave experiences a continuous energy increase. As such, the slow wave may transfer its energy to the medium at later times, maintaining a continuous heating mechanism in the neighborhood of a magnetic null-point.

  11. Fast maximum likelihood estimation using continuous-time neural point process models.

    PubMed

    Lepage, Kyle Q; MacDonald, Christopher J

    2015-06-01

    A recent report estimates that the number of simultaneously recorded neurons is growing exponentially. A commonly employed statistical paradigm using discrete-time point process models of neural activity involves the computation of a maximum-likelihood estimate. The time to compute this estimate, per neuron, is proportional to the number of bins in a finely spaced discretization of time. By using continuous-time models of neural activity and optimally efficient Gaussian quadrature, memory requirements and computation times are dramatically decreased in the commonly encountered situation where the number of parameters p is much less than the number of time bins n. In this regime, with q equal to the quadrature order, memory requirements are decreased from O(np) to O(qp), and the number of floating-point operations is decreased from O(np^2) to O(qp^2). Accuracy of the proposed estimates is assessed based upon physiological considerations, error bounds, and mathematical results describing the relation between numerical integration error and the numerical error affecting both parameter estimates and the observed Fisher information. A check is provided which is used to adapt the order of numerical integration. The procedure is verified in simulation and for hippocampal recordings. It is found that in 95% of hippocampal recordings a q of 60 yields numerical error negligible with respect to the parameter estimate standard error. Statistical inference using the proposed methodology is a fast and convenient alternative to statistical inference performed using a discrete-time point process model of neural activity. It enables the employment of the statistical methodology available with discrete-time inference, but is faster, uses less memory, and avoids any error due to discretization.
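
    To illustrate the computational idea (a sketch, not the authors' code; the log-linear intensity, basis functions, and spike times below are invented for the example), the integral term of a continuous-time Poisson-process log-likelihood can be approximated with a q-point Gauss-Legendre rule instead of a fine time binning:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

def loglik_quadrature(beta, spike_times, basis, T, q=60):
    """Continuous-time Poisson-process log-likelihood
        sum_i log lambda(t_i) - int_0^T lambda(t) dt,
    with the integral evaluated by q-point Gauss-Legendre quadrature."""
    lam = lambda t: np.exp(basis(t) @ beta)        # log-linear intensity (assumed form)
    nodes, weights = leggauss(q)                   # quadrature rule on [-1, 1]
    t_q = 0.5 * T * (nodes + 1.0)                  # map nodes to [0, T]
    w_q = 0.5 * T * weights
    integral = np.sum(w_q * lam(t_q))              # q evaluations instead of n bins
    return np.sum(np.log(lam(np.asarray(spike_times)))) - integral

# toy example: p = 2 parameters, constant term plus a slow sinusoidal covariate
basis = lambda t: np.column_stack([np.ones(np.size(t)), np.sin(np.atleast_1d(t))])
print(loglik_quadrature(np.array([-1.0, 0.5]), spike_times=[0.3, 1.1, 2.7],
                        basis=basis, T=5.0))
```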

  12. A complete tank test of a flying-boat hull with a pointed step -N.A.C.A. Model No. 22

    NASA Technical Reports Server (NTRS)

    Shoemaker, James M

    1934-01-01

    The results of a complete tank test of a model of a flying-boat hull of unconventional form, having a deep pointed step, are presented in this note. The advantage of the pointed-step type over the usual forms of flying-boat hulls with respect to resistance at high speeds is pointed out. A take-off example using the data from these tests is worked out, and the results are compared with those of an example in which the test data for a hull of the type in general use in the United States are applied to a flying boat having the same design specifications. A definite saving in take-off run is shown by the pointed-step type.

  13. Lung motion estimation using dynamic point shifting: An innovative model based on a robust point matching algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yi, Jianbing, E-mail: yijianbing8@163.com; Yang, Xuan, E-mail: xyang0520@263.net; Li, Yan-Ran, E-mail: lyran@szu.edu.cn

    2015-10-15

    Purpose: Image-guided radiotherapy is an advanced 4D radiotherapy technique that has been developed in recent years. However, respiratory motion causes significant uncertainties in image-guided radiotherapy procedures. To address these issues, an innovative lung motion estimation model based on a robust point matching is proposed in this paper. Methods: An innovative robust point matching algorithm using dynamic point shifting is proposed to estimate patient-specific lung motion during free breathing from 4D computed tomography data. The correspondence of the landmark points is determined from the Euclidean distance between the landmark points and the similarity between the local images that are centered at points at the same time. To ensure that the points in the source image correspond to the points in the target image during other phases, the virtual target points are first created and shifted based on the similarity between the local image centered at the source point and the local image centered at the virtual target point. Second, the target points are shifted by the constrained inverse function mapping the target points to the virtual target points. The source point set and shifted target point set are used to estimate the transformation function between the source image and target image. Results: The performances of the authors’ method are evaluated on two publicly available DIR-lab and POPI-model lung datasets. For computing target registration errors on 750 landmark points in six phases of the DIR-lab dataset and 37 landmark points in ten phases of the POPI-model dataset, the mean and standard deviation by the authors’ method are 1.11 and 1.11 mm, but they are 2.33 and 2.32 mm without considering image intensity, and 1.17 and 1.19 mm with sliding conditions. For the two phases of maximum inhalation and maximum exhalation in the DIR-lab dataset with 300 landmark points of each case, the mean and standard deviation of target registration errors
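
    A minimal sketch of the kind of correspondence weighting described above, combining the Euclidean distance between landmarks with local-patch similarity; the Gaussian weighting, normalized cross-correlation, and parameter values are assumptions for illustration, not the authors' algorithm:

```python
import numpy as np

def correspondence_weights(src_pts, tgt_pts, src_patches, tgt_patches,
                           sigma_d=10.0, sigma_s=0.5):
    """Hypothetical soft-correspondence matrix combining (i) Euclidean distance
    between landmark points and (ii) similarity of the local image patches
    centered at those points (illustrative only)."""
    n, m = len(src_pts), len(tgt_pts)
    W = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            d2 = np.sum((src_pts[i] - tgt_pts[j]) ** 2)
            # normalized cross-correlation of the two local patches
            a = (src_patches[i] - src_patches[i].mean()).ravel()
            b = (tgt_patches[j] - tgt_patches[j].mean()).ravel()
            ncc = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
            W[i, j] = np.exp(-d2 / (2 * sigma_d**2)) * np.exp(-(1 - ncc) / sigma_s)
    return W / (W.sum(axis=1, keepdims=True) + 1e-12)   # row-normalize

# toy usage with two random 3-D landmarks per image and 5x5 local patches
rng = np.random.default_rng(0)
W = correspondence_weights(rng.normal(size=(2, 3)), rng.normal(size=(2, 3)),
                           rng.normal(size=(2, 5, 5)), rng.normal(size=(2, 5, 5)))
print(W)
```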

  14. The influence of point defects on the thermal conductivity of AlN crystals

    NASA Astrophysics Data System (ADS)

    Rounds, Robert; Sarkar, Biplab; Alden, Dorian; Guo, Qiang; Klump, Andrew; Hartmann, Carsten; Nagashima, Toru; Kirste, Ronny; Franke, Alexander; Bickermann, Matthias; Kumagai, Yoshinao; Sitar, Zlatko; Collazo, Ramón

    2018-05-01

    The average bulk thermal conductivity of free-standing physical vapor transport and hydride vapor phase epitaxy single crystal AlN samples with different impurity concentrations is analyzed using the 3ω method in the temperature range of 30-325 K. AlN wafers grown by physical vapor transport show significant variation in thermal conductivity at room temperature with values ranging between 268 W/m K and 339 W/m K. AlN crystals grown by hydride vapor phase epitaxy yield values between 298 W/m K and 341 W/m K at room temperature, suggesting that the same fundamental mechanisms limit the thermal conductivity of AlN grown by both techniques. All samples in this work show phonon resonance behavior resulting from incorporated point defects. Samples shown by optical analysis to contain carbon-silicon complexes exhibit higher thermal conductivity above 100 K. Phonon scattering by point defects is determined to be the main limiting factor for thermal conductivity of AlN within the investigated temperature range.

  15. HYDROLOGY AND SEDIMENT MODELING USING THE BASINS NON-POINT SOURCE MODEL

    EPA Science Inventory

    The Non-Point Source Model (Hydrologic Simulation Program-Fortran, or HSPF) within the EPA Office of Water's BASINS watershed modeling system was used to simulate streamflow and total suspended solids within Contentnea Creek, North Carolina, which is a tributary of the Neuse Rive...

  16. Exploring load, velocity, and surface disorder dependence of friction with one-dimensional and two-dimensional models.

    PubMed

    Dagdeviren, Omur E

    2018-08-03

    The effect of surface disorder, load, and velocity on friction between a single asperity contact and a model surface is explored with one-dimensional and two-dimensional Prandtl-Tomlinson (PT) models. We show that there are fundamental physical differences between the predictions of one-dimensional and two-dimensional models. The one-dimensional model estimates a monotonic increase in friction and energy dissipation with load, velocity, and surface disorder. However, a two-dimensional PT model, which is expected to approximate a tip-sample system more realistically, reveals a non-monotonic trend, i.e. friction is insensitive to surface disorder and roughness in the wearless friction regime. The two-dimensional model discloses that the surface disorder starts to dominate the friction and energy dissipation when the tip and the sample interact predominantly deep into the repulsive regime. Our numerical calculations indicate that tracking the minimum energy path and the slip-stick motion are two competing effects that determine the load, velocity, and surface disorder dependence of friction. In the two-dimensional model, the single asperity can follow the minimum energy path in the wearless regime; however, with increasing load and sliding velocity, the slip-stick movement dominates the dynamic motion and results in an increase in friction by impeding the tracing of the minimum energy path. Contrary to the two-dimensional model, when the one-dimensional PT model is employed, the single asperity cannot escape to the energy minimum due to its constrained motion and reveals only a trivial dependence of friction on load, velocity, and surface disorder. Our computational analyses clarify the physical differences between the predictions of the one-dimensional and two-dimensional models and open new avenues for disordered surfaces in low-energy-dissipation applications in the wearless friction regime.
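
    For readers unfamiliar with the Prandtl-Tomlinson picture, a minimal overdamped 1D PT sketch is given below; the corrugation, stiffness, damping, and drive values are illustrative choices, not the parameters used in the paper:

```python
import numpy as np

def pt_friction_1d(U0=1.0, a=0.25, k=10.0, v=1.0, gamma=5.0, dt=1e-4, t_max=10.0):
    """Overdamped 1D Prandtl-Tomlinson sketch: a tip of stiffness k is dragged at
    velocity v across a sinusoidal corrugation of amplitude U0. Returns the
    time-averaged lateral (friction) force."""
    x = 0.0
    forces = []
    for t in np.arange(0.0, t_max, dt):
        f_surf = -(2 * np.pi * U0 / a) * np.sin(2 * np.pi * x / a)  # -dU/dx
        f_spring = -k * (x - v * t)                                  # pulling spring
        x += dt * (f_surf + f_spring) / gamma                        # overdamped dynamics
        forces.append(k * (v * t - x))                               # instantaneous lateral force
    return np.mean(forces)

# with these (illustrative) parameters the tip is in the stick-slip regime
print(pt_friction_1d())
```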

  17. Point-to-point migration functions and gravity model renormalization: approaches to aggregation in spatial interaction modeling.

    PubMed

    Slater, P B

    1985-08-01

    Two distinct approaches to assessing the effect of geographic scale on spatial interactions are modeled. In the first, the question of whether a distance deterrence function, which explains interactions for one system of zones, can also succeed on a more aggregate scale is examined. Only the two-parameter function, for which distances between macrozones are found to be weighted averages of distances between component zones, is satisfactory in this regard. Estimation of continuous (point-to-point) functions--in the form of quadrivariate cubic polynomials--for US interstate migration streams is then undertaken. Upon numerical integration, these higher-order surfaces yield predictions of interzonal and intrazonal movements at any scale of interest. Tests of spatial stationarity, isotropy, and symmetry of interstate migration are conducted in this framework.

  18. Using Laser Scanners to Augment the Systematic Error Pointing Model

    NASA Astrophysics Data System (ADS)

    Wernicke, D. R.

    2016-08-01

    The antennas of the Deep Space Network (DSN) rely on precise pointing algorithms to communicate with spacecraft that are billions of miles away. Although the existing systematic error pointing model is effective at reducing blind pointing errors due to static misalignments, several of its terms have a strong dependence on seasonal and even daily thermal variation and are thus not easily modeled. Changes in the thermal state of the structure create a separation from the model and introduce a varying pointing offset. Compensating for this varying offset is possible by augmenting the pointing model with laser scanners. In this approach, laser scanners mounted to the alidade measure structural displacements while a series of transformations generate correction angles. Two sets of experiments were conducted in August 2015 using commercially available laser scanners. When compared with historical monopulse corrections under similar conditions, the computed corrections are within 3 mdeg of the mean. However, although the results show promise, several key challenges relating to the sensitivity of the optical equipment to sunlight render an implementation of this approach impractical. Other measurement devices such as inclinometers may be implementable at a significantly lower cost.

  19. Gas-phase Conformational Analysis of (R,R)-Tartaric Acid, its Diamide, N,N,N',N'- Tetramethyldiamide and Model Compounds

    NASA Astrophysics Data System (ADS)

    Hoffmann, Marcin; Szarecka, Agnieszka; Rychlewski, Jacek

    A review of the most recent ab initio studies carried out at both RHF and MP2 levels on (R,R)-tartaric acid (TA), its diamide (DA), tetramethyldiamide (TMDA) and on three prototypic model systems (each of them constitutes a half of the respective parental molecule), i.e. 2-hydroxyacetic acid (HA), 2-hydroxyacetamide (HD) and 2-hydroxy-N,N-dimethylacetamide (HMD), is presented. (R,R)-tartaric acid and the derivatives have been completely optimized at the RHF/6-31G* level and subsequently single-point energies of all conformers have been calculated with the use of second-order perturbation theory according to the scheme MP2/6-31G*//RHF/6-31G*. In the complete optimization of the model molecules at the RHF level we have employed relatively large basis sets, augmented with polarisation and diffuse functions, namely 3-21G, 6-31G*, 6-31++G** and 6-311++G**. Electronic correlation has been included with the largest basis set used in this study, i.e. MP2/6-311++G**//RHF/6-311++G** single-point energy calculations have been performed. General conformational preferences of tartaric acid derivatives have been analysed, and an attempt has been made to define the main factors affecting the conformational behaviour of these molecules in the isolated state, in particular the role and stability of intramolecular hydrogen bonding. In the case of the model compounds, our study principally concerned the conformational preferences and hydrogen-bonding structure within the α-hydroxy-X moiety, where X = COOH, CONH2, or CON(CH3)2.

  20. Point- and line-based transformation models for high resolution satellite image rectification

    NASA Astrophysics Data System (ADS)

    Abd Elrahman, Ahmed Mohamed Shaker

    Rigorous mathematical models with the aid of satellite ephemeris data can present the relationship between the satellite image space and the object space. With government funded satellites, access to calibration and ephemeris data has allowed the development and use of these models. However, for commercial high-resolution satellites, which have been recently launched, these data are withheld from users, and therefore alternative empirical models should be used. In general, the existing empirical models are based on the use of control points and involve linking points in the image space and the corresponding points in the object space. But the lack of control points in some remote areas and the questionable accuracy of the identified discrete conjugate points provide a catalyst for the development of algorithms based on features other than control points. This research, concerned with image rectification and 3D geo-positioning determination using High-Resolution Satellite Imagery (HRSI), has two major objectives. First, the effects of satellite sensor characteristics, number of ground control points (GCPs), and terrain elevation variations on the performance of several point based empirical models are studied. Second, a new mathematical model, using only linear features as control features, or linear features with a minimum number of GCPs, is developed. To meet the first objective, several experiments for different satellites such as Ikonos, QuickBird, and IRS-1D have been conducted using different point based empirical models. Various data sets covering different terrain types are presented and results from representative sets of the experiments are shown and analyzed. The results demonstrate the effectiveness and the superiority of these models under certain conditions. From the results obtained, several alternatives to circumvent the effects of the satellite sensor characteristics, the number of GCPs, and the terrain elevation variations are introduced. To meet

  1. Assessment of Response Surface Models using Independent Confirmation Point Analysis

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard

    2010-01-01

    This paper highlights various advantages that confirmation-point residuals have over conventional model design-point residuals in assessing the adequacy of a response surface model fitted by regression techniques to a sample of experimental data. Particular advantages are highlighted for the case of design matrices that may be ill-conditioned for a given sample of data. The impact of both aleatory and epistemological uncertainty in response model adequacy assessments is considered.

  2. A method for automatic feature points extraction of human vertebrae three-dimensional model

    NASA Astrophysics Data System (ADS)

    Wu, Zhen; Wu, Junsheng

    2017-05-01

    A method for automatic extraction of the feature points of the human vertebrae three-dimensional model is presented. Firstly, the statistical model of vertebrae feature points is established based on the results of manual vertebrae feature points extraction. Then anatomical axial analysis of the vertebrae model is performed according to the physiological and morphological characteristics of the vertebrae. Using the axial information obtained from the analysis, a projection relationship between the statistical model and the vertebrae model to be extracted is established. According to the projection relationship, the statistical model is matched with the vertebrae model to get the estimated position of the feature point. Finally, by analyzing the curvature in the spherical neighborhood with the estimated position of feature points, the final position of the feature points is obtained. According to the benchmark result on multiple test models, the mean relative errors of feature point positions are less than 5.98%. At more than half of the positions, the error rate is less than 3% and the minimum mean relative error is 0.19%, which verifies the effectiveness of the method.

  3. Prediction of Sliding Friction Coefficient Based on a Novel Hybrid Molecular-Mechanical Model.

    PubMed

    Zhang, Xiaogang; Zhang, Yali; Wang, Jianmei; Sheng, Chenxing; Li, Zhixiong

    2018-08-01

    Sliding friction is a complex phenomenon which arises from the mechanical and molecular interactions of asperities when examined at the microscale. To reveal and further understand the effects of the microscale mechanical and molecular components of the friction coefficient on overall frictional behavior, a hybrid molecular-mechanical model is developed to investigate the effects of main factors, including different loads and surface roughness values, on the sliding friction coefficient in a boundary lubrication condition. Numerical modelling was conducted using a deterministic contact model and based on the molecular-mechanical theory of friction. In the contact model, with given external loads and surface topographies, the pressure distribution, real contact area, and elastic/plastic deformation of each single asperity contact were calculated. The asperity friction coefficient was then predicted as the sum of the mechanical and molecular components of the friction coefficient. The mechanical component was mainly determined by the contact width and elastic/plastic deformation, and the molecular component was estimated as a function of the contact area and interfacial shear stress. Numerical results were compared with experimental results and a good agreement was obtained. The model was then used to predict friction coefficients in different operating and surface conditions. Numerical results explain why the applied load has a minimal effect on the friction coefficients. They also provide insight into the effect of surface roughness on the mechanical and molecular components of friction coefficients. It is revealed that the mechanical component dominates the friction coefficient when the surface roughness is large (Rq > 0.2 μm), while the friction coefficient is mainly determined by the molecular component when the surface is relatively smooth (Rq < 0.2 μm). Furthermore, optimal roughness values for minimizing the friction coefficient are recommended.
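
    The decomposition described above (friction coefficient = mechanical + molecular component, with the molecular part scaling with real contact area and interfacial shear stress) can be sketched generically; the functional forms and numbers below are placeholders, not the paper's deterministic contact model:

```python
def friction_coefficient(load, real_contact_area, shear_stress, ploughing_term):
    """Generic molecular-mechanical decomposition (illustrative form only):
    a molecular (adhesive) component proportional to the real contact area and
    interfacial shear stress, plus a mechanical (deformation/ploughing) component."""
    mu_molecular = shear_stress * real_contact_area / load
    mu_mechanical = ploughing_term
    return mu_mechanical + mu_molecular, mu_mechanical, mu_molecular

# toy numbers: 10 N load, 1e-8 m^2 real contact area, 25 MPa interfacial shear stress
mu, mu_mech, mu_mol = friction_coefficient(10.0, 1e-8, 25e6, 0.02)
print(mu, mu_mech, mu_mol)
```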

  4. Estimation of boiling points using density functional theory with polarized continuum model solvent corrections.

    PubMed

    Chan, Poh Yin; Tong, Chi Ming; Durrant, Marcus C

    2011-09-01

    An empirical method for estimation of the boiling points of organic molecules based on density functional theory (DFT) calculations with polarized continuum model (PCM) solvent corrections has been developed. The boiling points are calculated as the sum of three contributions. The first term is calculated directly from the structural formula of the molecule, and is related to its effective surface area. The second is a measure of the electronic interactions between molecules, based on the DFT-PCM solvation energy, and the third is employed only for planar aromatic molecules. The method is applicable to a very diverse range of organic molecules, with normal boiling points in the range of -50 to 500 °C, and includes ten different elements (C, H, Br, Cl, F, N, O, P, S and Si). Plots of observed versus calculated boiling points gave R²=0.980 for a training set of 317 molecules, and R²=0.979 for a test set of 74 molecules. The role of intramolecular hydrogen bonding in lowering the boiling points of certain molecules is quantitatively discussed. Crown Copyright © 2011. Published by Elsevier Inc. All rights reserved.
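
    A schematic of the three-term structure described above, assuming hypothetical coefficients (the fitted values and the exact functional forms are not given in the abstract):

```python
def estimate_boiling_point(area_term, solvation_energy_kjmol, is_planar_aromatic,
                           c_area=1.0, c_solv=1.0, c_arom=0.0):
    """Sketch of the three-contribution scheme: a surface-area term derived from
    the structural formula, a term based on the DFT-PCM solvation energy, and a
    correction applied only to planar aromatic molecules. Coefficients are
    placeholders, not the published fit."""
    tb = c_area * area_term + c_solv * abs(solvation_energy_kjmol)
    if is_planar_aromatic:
        tb += c_arom
    return tb

# purely illustrative call with made-up inputs
print(estimate_boiling_point(area_term=120.0, solvation_energy_kjmol=-35.0,
                             is_planar_aromatic=True, c_arom=15.0))
```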

  5. Optimization of Regression Models of Experimental Data Using Confirmation Points

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.

    2010-01-01

    A new search metric is discussed that may be used to better assess the predictive capability of different math term combinations during the optimization of a regression model of experimental data. The new search metric can be determined for each tested math term combination if the given experimental data set is split into two subsets. The first subset consists of data points that are only used to determine the coefficients of the regression model. The second subset consists of confirmation points that are exclusively used to test the regression model. The new search metric value is assigned after comparing two values that describe the quality of the fit of each subset. The first value is the standard deviation of the PRESS residuals of the data points. The second value is the standard deviation of the response residuals of the confirmation points. The greater of the two values is used as the new search metric value. This choice guarantees that both standard deviations are always less than or equal to the value that is used during the optimization. Experimental data from the calibration of a wind tunnel strain-gage balance is used to illustrate the application of the new search metric. The new search metric ultimately generates an optimized regression model that has already been tested at model-independent confirmation points before it is ever used to predict an unknown response from a set of regressors.
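
    The search metric itself is simple to express; a minimal sketch (assuming sample standard deviations) follows:

```python
import numpy as np

def search_metric(press_residuals, confirmation_residuals):
    """Sketch of the metric described in the abstract: the larger of
    (i) the standard deviation of the PRESS residuals of the fit points and
    (ii) the standard deviation of the response residuals of the confirmation points."""
    return max(np.std(press_residuals, ddof=1),
               np.std(confirmation_residuals, ddof=1))

# toy example: the confirmation points fit slightly worse, so they set the metric value
print(search_metric([0.10, -0.05, 0.07, -0.12], [0.20, -0.15, 0.18]))
```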

  6. Next-generation concurrent engineering: developing models to complement point designs

    NASA Technical Reports Server (NTRS)

    Morse, Elizabeth; Leavens, Tracy; Cohanim, Barbak; Harmon, Corey; Mahr, Eric; Lewis, Brian

    2006-01-01

    Concurrent Engineering Design teams have made routine the rapid development of point designs for space missions. The Jet Propulsion Laboratory's Team X is now evolving into a next-generation CED team; in addition to a point design, the team develops a model of the local trade space. The process is a balance between the power of model-developing tools and the creativity of human experts, enabling the development of a variety of trade models for any space mission.

  7. A new method of regional CBF measurement using one point arterial sampling based on microsphere model with I-123 IMP SPECT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Odano, I.; Takahashi, N.; Ohkubo, M.

    1994-05-01

    We developed a new method for quantitative measurement of rCBF with Iodine-123-IMP based on the microsphere model, which is accurate, simpler, and less invasive than the continuous withdrawal method. IMP is assumed to behave as a chemical microsphere in the brain. Regional CBF is then measured by continuous withdrawal of arterial blood and the microsphere model as follows: F = Cb(t) / (∫Ca(t) dt × N), where F is rCBF (ml/100 g/min), Cb(t) is the brain activity concentration, ∫Ca(t) dt is the total activity of the arterial whole blood withdrawn, and N is the fraction of ∫Ca(t) dt that is true tracer activity. We analyzed 14 patients. A dose of 222 MBq of IMP was injected i.v. over 1 min, and withdrawal of arterial blood was performed from 0 to 5 min (∫Ca(t) dt), after which arterial blood samples (one-point Ca(t)) were obtained at 5, 6, 7, 8, 9, and 10 min, respectively. The integral ∫Ca(t) dt was then mathematically inferred from the value of the one-point Ca(t). When we examined the correlation between ∫Ca(t) dt × N and the one-point Ca(t), and the % error of the one-point Ca(t) compared with ∫Ca(t) dt × N, the minimum % error was 8.1% and the maximum correlation coefficient was 0.943, both of which were obtained at 6 min. We concluded that 6 min is the best time to take the arterial blood sample in the one-point sampling method for estimating ∫Ca(t) dt × N. IMP SPECT studies were performed with a ring-type SPECT scanner. Compared with rCBF measured by the Xe-133 method, a significant correlation was observed for this method (r = 0.773). The one-point Ca(t) method is simple and quick for measuring rCBF without inserting catheters and without octanol treatment of the arterial blood.
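
    The microsphere relation and the one-point shortcut can be sketched as follows; the scale factor and all numbers are placeholders, not the published calibration:

```python
def rcbf_microsphere(c_brain, integral_ca, true_tracer_fraction):
    """Microsphere-model flow estimate, F = Cb(t) / (integral of Ca(t) * N),
    in ml/100 g/min when the inputs carry consistent units (illustrative)."""
    return c_brain / (integral_ca * true_tracer_fraction)

def infer_integral_from_one_point(ca_6min, scale_factor):
    """One-point variant (sketch): the 0-5 min arterial input integral is inferred
    from a single 6-min sample via a population-derived scale factor; the factor
    used here is a placeholder, not the published value."""
    return scale_factor * ca_6min

integral_ca = infer_integral_from_one_point(ca_6min=2.1, scale_factor=4.8)
print(rcbf_microsphere(c_brain=4.5, integral_ca=integral_ca, true_tracer_fraction=0.8))
```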

  8. 3D Finite Element Modeling of Sliding Wear

    DTIC Science & Technology

    2013-12-01

    Indexed excerpts: "... the high strain rate compression of three armor materials: Maraging steel 300, high hardness armor (HHA), and aluminum alloy 5083 ..."; "... bearings, gears, brakes, gun barrels, slippers, locomotive wheels, or even rocket test tracks. The 3D wear model presented in this dissertation allows ..."; figure captions "AISI-1080 Steel Distribution of Asperities" and "Micrograph of Worn ...".

  9. The Comparison of Point Data Models for the Output of WRF Hydro Model in the IDV

    NASA Astrophysics Data System (ADS)

    Ho, Y.; Weber, J.

    2017-12-01

    WRF Hydro netCDF output files contain streamflow, flow depth, longitude, latitude, altitude and stream order values for each forecast point. However, the data are not CF compliant. The total number of forecast points for the US CONUS is approximately 2.7 million, which is a big challenge for any visualization and analysis tool. The IDV point cloud display shows point data as a set of points colored by parameter. This display is very efficient compared to a standard point type display for rendering a large number of points. The one problem we have is that data I/O can become a bottleneck when dealing with a large collection of point input files. In this presentation, we will experiment with different point data models and their APIs to access the same WRF Hydro model output. The results will help us construct a CF compliant netCDF point data format for the community.

  10. Equivalence of MAXENT and Poisson point process models for species distribution modeling in ecology.

    PubMed

    Renner, Ian W; Warton, David I

    2013-03-01

    Modeling the spatial distribution of a species is a fundamental problem in ecology. A number of modeling methods have been developed, an extremely popular one being MAXENT, a maximum entropy modeling approach. In this article, we show that MAXENT is equivalent to a Poisson regression model and hence is related to a Poisson point process model, differing only in the intercept term, which is scale-dependent in MAXENT. We illustrate a number of improvements to MAXENT that follow from these relations. In particular, a point process model approach facilitates methods for choosing the appropriate spatial resolution, assessing model adequacy, and choosing the LASSO penalty parameter, all currently unavailable to MAXENT. The equivalence result represents a significant step in the unification of the species distribution modeling literature. Copyright © 2013, The International Biometric Society.
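
    A hedged sketch of the Poisson point process side of this equivalence, fitted through a Berman-Turner-style quadrature approximation of the likelihood (the covariate, quadrature weights, and data are invented; this is not the MAXENT implementation):

```python
import numpy as np
from scipy.optimize import minimize

def fit_ppm(presence_x, quad_x, quad_weights):
    """Fit a Poisson point process model with log-linear intensity
    lambda(s) = exp(b0 + b1 * x(s)); presence points contribute log-intensity
    terms and quadrature (background) points approximate the intensity integral.
    Illustrative only; MAXENT's equivalence concerns the slope term, with the
    intercept absorbing the scale."""
    def negloglik(beta):
        b0, b1 = beta
        lam_pres = np.exp(b0 + b1 * presence_x)
        lam_quad = np.exp(b0 + b1 * quad_x)
        # log-likelihood: sum of log(lambda) at presences minus the quadrature
        # approximation of the intensity integral over the region
        return -(np.sum(np.log(lam_pres)) - np.sum(quad_weights * lam_quad))
    return minimize(negloglik, x0=np.zeros(2), method="BFGS").x

rng = np.random.default_rng(1)
quad_x = rng.uniform(-1, 1, 500)          # background covariate values
presence_x = rng.uniform(0.2, 1.0, 40)    # presences biased toward high covariate
print(fit_ppm(presence_x, quad_x, np.full(500, 2.0 / 500)))  # region of length 2
```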

  11. On the Decay of Correlations in Non-Analytic SO(n)-Symmetric Models

    NASA Astrophysics Data System (ADS)

    Naddaf, Ali

    We extend the method of complex translations, which was originally employed by McBryan-Spencer [2], to obtain a decay rate for the two-point function in two-dimensional SO(n)-symmetric models with non-analytic Hamiltonians ...

  12. Bisous model-Detecting filamentary patterns in point processes

    NASA Astrophysics Data System (ADS)

    Tempel, E.; Stoica, R. S.; Kipper, R.; Saar, E.

    2016-07-01

    The cosmic web is a highly complex geometrical pattern, with galaxy clusters at the intersection of filaments and filaments at the intersection of walls. Identifying and describing the filamentary network is not a trivial task due to the overwhelming complexity of the structure, its connectivity and its intrinsic hierarchical nature. To detect and quantify galactic filaments we use the Bisous model, which is a marked point process built to model multi-dimensional patterns. The Bisous filament finder works directly with the galaxy distribution data and the model intrinsically takes into account the connectivity of the filamentary network. The Bisous model generates the visit map (the probability to find a filament at a given point) together with the filament orientation field. Using these two fields, we can extract filament spines from the data. Together with this paper we publish the computer code for the Bisous model, which is made available on GitHub. The Bisous filament finder has been successfully used in several cosmological applications and further development of the model will allow the filamentary network to be detected also in photometric redshift surveys, using the full redshift posterior. We also want to encourage the astro-statistical community to use the model and to connect it with all other existing methods for filamentary pattern detection and characterisation.

  13. Next-generation concurrent engineering: developing models to complement point designs

    NASA Technical Reports Server (NTRS)

    Morse, Elizabeth; Leavens, Tracy; Cohanim, Babak; Harmon, Corey; Mahr, Eric; Lewis, Brian

    2006-01-01

    Concurrent Engineering Design (CED) teams have made routine the rapid development of point designs for space missions. The Jet Propulsion Laboratory's Team X is now evolving into a next-generation CED; in addition to a point design, the Team develops a model of the local trade space. The process is a balance between the power of model-developing tools and the creativity of human experts, enabling the development of a variety of trade models for any space mission. This paper reviews the modeling method and its practical implementation in the CED environment. Example results illustrate the benefit of this approach.

  14. Foreshock and aftershocks in simple earthquake models.

    PubMed

    Kazemian, J; Tiampo, K F; Klein, W; Dominguez, R

    2015-02-27

    Many models of earthquake faults have been introduced that connect Gutenberg-Richter (GR) scaling to triggering processes. However, natural earthquake fault systems are composed of a variety of different geometries and materials and the associated heterogeneity in physical properties can cause a variety of spatial and temporal behaviors. This raises the question of how the triggering process and the structure interact to produce the observed phenomena. Here we present a simple earthquake fault model based on the Olami-Feder-Christensen and Rundle-Jackson-Brown cellular automata models with long-range interactions that incorporates a fixed percentage of stronger sites, or asperity cells, into the lattice. These asperity cells are significantly stronger than the surrounding lattice sites but eventually rupture when the applied stress reaches their higher threshold stress. The introduction of these spatial heterogeneities results in temporal clustering in the model that mimics that seen in natural fault systems along with GR scaling. In addition, we observe sequences of activity that start with a gradually accelerating number of larger events (foreshocks) prior to a main shock that is followed by a tail of decreasing activity (aftershocks). This work provides further evidence that the spatial and temporal patterns observed in natural seismicity are strongly influenced by the underlying physical properties and are not solely the result of a simple cascade mechanism.
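
    A toy nearest-neighbor variant of this construction is sketched below (the paper uses long-range interactions; the lattice size, dissipation, and asperity parameters here are arbitrary illustration values):

```python
import numpy as np

def ofc_with_asperities(L=32, alpha=0.2, asperity_fraction=0.05,
                        asperity_strength=3.0, n_events=200, seed=0):
    """Minimal nearest-neighbor OFC-style automaton in which a fraction of cells
    ("asperities") has a higher failure threshold; returns the avalanche sizes."""
    rng = np.random.default_rng(seed)
    stress = rng.uniform(0, 1, (L, L))
    thresh = np.ones((L, L))
    thresh[rng.uniform(size=(L, L)) < asperity_fraction] = asperity_strength
    sizes = []
    for _ in range(n_events):
        stress += (thresh - stress).min()          # uniform drive to next failure
        size = 0
        while True:
            failing = np.argwhere(stress >= thresh - 1e-12)
            if failing.size == 0:
                break
            for i, j in failing:                   # topple every over-threshold site
                size += 1
                s = stress[i, j]
                stress[i, j] = 0.0
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < L and 0 <= nj < L:
                        stress[ni, nj] += alpha * s   # partial (dissipative) transfer
        sizes.append(size)
    return np.array(sizes)

sizes = ofc_with_asperities()
print(sizes.max(), sizes.mean())
```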

  15. New Micro- and Macroscopic Models of Contact and Friction

    DTIC Science & Technology

    1993-11-29

    ... primarily due to the complex structure of engineering surfaces, the severe elasto-plastic deformation and heat generation, atomic ... reflect an urgent need for constructing new constitutive models of contact and friction and for estimating the necessary material ... frictional interface models. These are: 1. phenomenological models based primarily on experimental observations, and 2. asperity-based models

  16. Cooperative terrain model acquisition by a team of two or three point-robots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, N.S.V.; Protopopescu, V.; Manickam, N.

    1996-04-01

    We address the model acquisition problem for an unknown planar terrain by a team of two or three robots. The terrain is cluttered by a finite number of polygonal obstacles whose shapes and positions are unknown. The robots are point-sized and equipped with visual sensors which acquire all visible parts of the terrain by scan operations executed from their locations. The robots communicate with each other via wireless connection. The performance is measured by the number of the sensor (scan) operations which are assumed to be the most time-consuming of all the robot operations. We employ the restricted visibility graph methods in a hierarchical setup. For terrains with convex obstacles and for teams of n(= 2, 3) robots, we prove that the sensing time is reduced by a factor of 1/n. For terrains with concave corners, the performance of the algorithm depends on the number of concave regions and their depths. A hierarchical decomposition of the restricted visibility graph into n-connected and (n - 1)-or-less connected components is considered. The performance for the n(= 2, 3) robot team is expressed in terms of the sizes of n-connected components, and the sizes and diameters of (n - 1)-or-less connected components.

  17. Modeling spatially-varying landscape change points in species occurrence thresholds

    USGS Publications Warehouse

    Wagner, Tyler; Midway, Stephen R.

    2014-01-01

    Predicting species distributions at scales of regions to continents is often necessary, as large-scale phenomena influence the distributions of spatially structured populations. Land use and land cover are important large-scale drivers of species distributions, and landscapes are known to create species occurrence thresholds, where small changes in a landscape characteristic result in abrupt changes in occurrence. The value of the landscape characteristic at which this change occurs is referred to as a change point. We present a hierarchical Bayesian threshold model (HBTM) that allows for estimating spatially varying parameters, including change points. Our model also allows for modeling the estimated parameters in an effort to understand large-scale drivers of variability in land use and land cover effects on species occurrence thresholds. We use range-wide detection/nondetection data for the eastern brook trout (Salvelinus fontinalis), a stream-dwelling salmonid, to illustrate our HBTM for estimating and modeling spatially varying threshold parameters in species occurrence. We parameterized the model for investigating thresholds in landscape predictor variables that are measured as proportions, and which are therefore restricted to values between 0 and 1. Our HBTM estimated spatially varying thresholds in brook trout occurrence for both the proportions of agricultural and urban land uses. There was relatively little spatial variation in change point estimates, although there was spatial variability in the overall shape of the threshold response and associated uncertainty. In addition, regional mean stream water temperature was correlated with the change point parameters for the proportion of urban land use, with the change point value increasing with increasing mean stream water temperature. We present a framework for quantifying macrosystem variability in spatially varying threshold model parameters in relation to important large-scale drivers such as land use and land cover.

  18. Underwater 3d Modeling: Image Enhancement and Point Cloud Filtering

    NASA Astrophysics Data System (ADS)

    Sarakinou, I.; Papadimitriou, K.; Georgoula, O.; Patias, P.

    2016-06-01

    This paper examines the results of image enhancement and point cloud filtering on the visual and geometric quality of 3D models for the representation of underwater features. Specifically it evaluates the combination of effects from the manual editing of images' radiometry (captured at shallow depths) and the selection of parameters for point cloud definition and mesh building (processed in 3D modeling software). Such datasets are usually collected by divers, handled by scientists and used for geovisualization purposes. In the presented study, 3D models have been created from three sets of images (seafloor, part of a wreck and a small boat's wreck) captured at three different depths (3.5 m, 10 m and 14 m, respectively). Four models have been created from the first dataset (seafloor) in order to evaluate the results from the application of image enhancement techniques and point cloud filtering. The main process for this preliminary study included a) the definition of parameters for the point cloud filtering and the creation of a reference model, b) the radiometric editing of images, followed by the creation of three improved models and c) the assessment of results by comparing the visual and the geometric quality of the improved models versus the reference one. Finally, the selected technique is tested on two other data sets in order to examine its appropriateness for different depths (at 10 m and 14 m) and different objects (part of a wreck and a small boat's wreck) in the context of ongoing research in the Laboratory of Photogrammetry and Remote Sensing.

  19. Extracting valley-ridge lines from point-cloud-based 3D fingerprint models.

    PubMed

    Pang, Xufang; Song, Zhan; Xie, Wuyuan

    2013-01-01

    3D fingerprinting is an emerging technology with the distinct advantage of touchless operation. More important, 3D fingerprint models contain more biometric information than traditional 2D fingerprint images. However, current approaches to fingerprint feature detection usually must transform the 3D models to a 2D space through unwrapping or other methods, which might introduce distortions. A new approach directly extracts valley-ridge features from point-cloud-based 3D fingerprint models. It first applies the moving least-squares method to fit a local paraboloid surface and represent the local point cloud area. It then computes the local surface's curvatures and curvature tensors to facilitate detection of the potential valley and ridge points. The approach projects those points to the most likely valley-ridge lines, using statistical means such as covariance analysis and cross correlation. To finally extract the valley-ridge lines, it grows the polylines that approximate the projected feature points and removes the perturbations between the sampled points. Experiments with different 3D fingerprint models demonstrate this approach's feasibility and performance.
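
    The local-fit step can be illustrated with a weighted least-squares paraboloid and its principal curvatures; the weighting and frame assumptions below are simplifications, not the authors' moving least-squares implementation:

```python
import numpy as np

def local_paraboloid_curvatures(neighborhood, h=1.0):
    """Fit z = a*x^2 + b*x*y + c*y^2 + d*x + e*y + f to a local neighborhood
    (assumed expressed in a frame whose z-axis roughly follows the surface normal)
    with Gaussian moving-least-squares weights of width h, then return the two
    principal curvatures at the origin."""
    x, y, z = neighborhood[:, 0], neighborhood[:, 1], neighborhood[:, 2]
    A = np.column_stack([x**2, x*y, y**2, x, y, np.ones_like(x)])
    sw = np.sqrt(np.exp(-(x**2 + y**2) / (2.0 * h**2)))     # sqrt of the MLS weights
    a, b, c, d, e, _ = np.linalg.lstsq(A * sw[:, None], z * sw, rcond=None)[0]
    g = np.array([[1 + d**2, d * e], [d * e, 1 + e**2]])    # first fundamental form
    II = np.array([[2*a, b], [b, 2*c]]) / np.sqrt(1 + d**2 + e**2)  # second form
    return np.linalg.eigvals(np.linalg.solve(g, II))        # principal curvatures

# toy usage: points sampled from the ridge-like surface z = 0.5*x**2
xs, ys = np.meshgrid(np.linspace(-1, 1, 9), np.linspace(-1, 1, 9))
pts = np.column_stack([xs.ravel(), ys.ravel(), 0.5 * xs.ravel()**2])
print(local_paraboloid_curvatures(pts))   # approximately [1.0, 0.0]
```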

  20. A New Blind Pointing Model Improves Large Reflector Antennas Precision Pointing at Ka-Band (32 GHz)

    NASA Technical Reports Server (NTRS)

    Rochblatt, David J.

    2009-01-01

    The National Aeronautics and Space Administration (NASA), Jet Propulsion Laboratory (JPL)-Deep Space Network (DSN) subnet of 34-m Beam Waveguide (BWG) Antennas was recently upgraded with Ka-Band (32-GHz) frequency feeds for space research and communication. For normal telemetry tracking a Ka-Band monopulse system is used, which typically yields 1.6-mdeg mean radial error (MRE) pointing accuracy on the 34-m diameter antennas. However, for the monopulse to be able to acquire and lock, for special radio science applications where monopulse cannot be used, or as a back-up for the monopulse, high-precision open-loop blind pointing is required. This paper describes a new 4th order pointing model and calibration technique, which was developed and applied to the DSN 34-m BWG antennas yielding 1.8 to 3.0-mdeg MRE pointing accuracy and amplitude stability of 0.2 dB, at Ka-Band, and successfully used for the CASSINI spacecraft occultation experiment at Saturn and Titan. In addition, the new 4th order pointing model was used during a telemetry experiment at Ka-Band (32 GHz) utilizing the Mars Reconnaissance Orbiter (MRO) spacecraft while at a distance of 0.225 astronomical units (AU) from Earth and communicating with a DSN 34-m BWG antenna at a record high rate of 6-megabits per second (Mb/s).

  1. Infinite-disorder critical points of models with stretched exponential interactions

    NASA Astrophysics Data System (ADS)

    Juhász, Róbert

    2014-09-01

    We show that an interaction decaying as a stretched exponential function of distance, J(l) ∼ e^(-c l^a), is able to alter the universality class of short-range systems having an infinite-disorder critical point. To do so, we study the low-energy properties of the random transverse-field Ising chain with the above form of interaction by a strong-disorder renormalization group (SDRG) approach. We find that the critical behavior of the model is controlled by infinite-disorder fixed points different from those of the short-range model if 0 < a < 1/2. In this range, the critical exponents calculated analytically by a simplified SDRG scheme are found to vary with a, while, for a > 1/2, the model belongs to the same universality class as its short-range variant. The entanglement entropy of a block of size L increases logarithmically with L at the critical point but, unlike the short-range model, the prefactor is dependent on disorder in the range 0 < a < 1/2. Numerical results obtained by an improved SDRG scheme are found to be in agreement with the analytical predictions. The same fixed points are expected to describe the critical behavior of, among others, the random contact process with stretched exponentially decaying activation rates.

  2. Two-point spectral model for variable density homogeneous turbulence

    NASA Astrophysics Data System (ADS)

    Pal, Nairita; Kurien, Susan; Clark, Timothy; Aslangil, Denis; Livescu, Daniel

    2017-11-01

    We present a comparison between a two-point spectral closure model for buoyancy-driven variable density homogeneous turbulence and Direct Numerical Simulation (DNS) data of the same system. We wish to understand how well a suitable spectral model might capture variable density effects and the transition to turbulence from an initially quiescent state. Following the BHRZ model developed by Besnard et al. (1990), the spectral model calculation computes the time evolution of two-point correlations of the density fluctuations with the momentum and the specific volume. These spatial correlations are expressed as functions of wavenumber k and denoted by a(k) and b(k), quantifying mass flux and turbulent mixing respectively. We assess the accuracy of the model, relative to a full DNS of the complete hydrodynamical equations, using a and b as metrics. Work at LANL was performed under the auspices of the U.S. DOE Contract No. DE-AC52-06NA25396.

  3. Short-Time Dynamics of the Random n-Vector Model

    NASA Astrophysics Data System (ADS)

    Chen, Yuan; Li, Zhi-Bing; Fang, Hai; He, Shun-Shan; Situ, Shu-Ping

    2001-11-01

    Short-time critical behavior of the random n-vector model is studied by the theoretic renormalization-group approach. Asymptotic scaling laws are studied within the framework of an expansion in ε = 4 - d for n ≠ 1 and in √ε for n = 1, respectively. In d < 4, the initial slip exponents θ' for the order parameter and θ for the response function are calculated up to second order in ε = 4 - d for n ≠ 1 and in √ε for n = 1 at the random fixed point, respectively. Our results show that the random impurities exert a strong influence on the short-time dynamics for d < 4 and n

  4. Modeling abundance using multinomial N-mixture models

    USGS Publications Warehouse

    Royle, Andy

    2016-01-01

    Multinomial N-mixture models are a generalization of the binomial N-mixture models described in Chapter 6 to allow for more complex and informative sampling protocols beyond simple counts. Many commonly used protocols such as multiple observer sampling, removal sampling, and capture-recapture produce a multivariate count frequency that has a multinomial distribution and for which multinomial N-mixture models can be developed. Such protocols typically result in more precise estimates than binomial mixture models because they provide direct information about parameters of the observation process. We demonstrate the analysis of these models in BUGS using several distinct formulations that afford great flexibility in the types of models that can be developed, and we demonstrate likelihood analysis using the unmarked package. Spatially stratified capture-recapture models are one class of models that fall into the multinomial N-mixture framework, and we discuss analysis of stratified versions of classical models such as model Mb, Mh and other classes of models that are only possible to describe within the multinomial N-mixture framework.

  5. Modeling strong‐motion recordings of the 2010 Mw 8.8 Maule, Chile, earthquake with high stress‐drop subevents and background slip

    USGS Publications Warehouse

    Frankel, Arthur

    2017-01-01

    Strong‐motion recordings of the Mw 8.8 Maule earthquake were modeled using a compound rupture model consisting of (1) a background slip distribution with large correlation lengths, relatively low slip velocity, and long peak rise time of slip of about 10 s and (2) high stress‐drop subevents (asperities) on the deeper portion of the rupture with moment magnitudes 7.9–8.2, high slip velocity, and rise times of slip of about 2 s. In this model, the high‐frequency energy is not produced in the same location as the peak coseismic slip, but is generated in the deeper part of the rupture zone. Using synthetic seismograms generated for a plane‐layered velocity model, I find that the high stress‐drop subevents explain the observed Fourier spectral amplitude from about 0.1 to 1.0 Hz. Broadband synthetics (0–10 Hz) were calculated by combining deterministic synthetics derived from the background slip and asperities (≤1  Hz) with stochastic synthetics generated only at the asperities (≥1  Hz). The broadband synthetics produced response spectral accelerations with low bias compared to the data, for periods of 0.1–10 s. A subevent stress drop of 200–350 bars for the high‐frequency stochastic synthetics was found to bracket the observed spectral accelerations at frequencies greater than 1 Hz. For most of the stations, the synthetics had durations of the Arias intensity similar to the observed records.

  6. Set points, settling points and some alternative models: theoretical options to understand how genes and environments combine to regulate body adiposity

    PubMed Central

    Speakman, John R.; Levitsky, David A.; Allison, David B.; Bray, Molly S.; de Castro, John M.; Clegg, Deborah J.; Clapham, John C.; Dulloo, Abdul G.; Gruer, Laurence; Haw, Sally; Hebebrand, Johannes; Hetherington, Marion M.; Higgs, Susanne; Jebb, Susan A.; Loos, Ruth J. F.; Luckman, Simon; Luke, Amy; Mohammed-Ali, Vidya; O’Rahilly, Stephen; Pereira, Mark; Perusse, Louis; Robinson, Tom N.; Rolls, Barbara; Symonds, Michael E.; Westerterp-Plantenga, Margriet S.

    2011-01-01

    The close correspondence between energy intake and expenditure over prolonged time periods, coupled with an apparent protection of the level of body adiposity in the face of perturbations of energy balance, has led to the idea that body fatness is regulated via mechanisms that control intake and energy expenditure. Two models have dominated the discussion of how this regulation might take place. The set point model is rooted in physiology, genetics and molecular biology, and suggests that there is an active feedback mechanism linking adipose tissue (stored energy) to intake and expenditure via a set point, presumably encoded in the brain. This model is consistent with many of the biological aspects of energy balance, but struggles to explain the many significant environmental and social influences on obesity, food intake and physical activity. More importantly, the set point model does not effectively explain the ‘obesity epidemic’ – the large increase in body weight and adiposity of a large proportion of individuals in many countries since the 1980s. An alternative model, called the settling point model, is based on the idea that there is passive feedback between the size of the body stores and aspects of expenditure. This model accommodates many of the social and environmental characteristics of energy balance, but struggles to explain some of the biological and genetic aspects. The shortcomings of these two models reflect their failure to address the gene-by-environment interactions that dominate the regulation of body weight. We discuss two additional models – the general intake model and the dual intervention point model – that address this issue and might offer better ways to understand how body fatness is controlled. PMID:22065844

  7. New statistical scission-point model to predict fission fragment observables

    NASA Astrophysics Data System (ADS)

    Lemaître, Jean-François; Panebianco, Stefano; Sida, Jean-Luc; Hilaire, Stéphane; Heinrich, Sophie

    2015-09-01

    The development of high performance computing facilities makes possible a massive production of nuclear data in a full microscopic framework. Taking advantage of the individual potential calculations of more than 7000 nuclei, a new statistical scission-point model, called SPY, has been developed. It gives access to the absolute available energy at the scission point, which allows the use of a parameter-free microcanonical statistical description to calculate the distributions and the mean values of all fission observables. SPY uses the richness of microscopy in a rather simple theoretical framework, without any parameter except the scission-point definition, to draw clear answers based on perfect knowledge of the ingredients involved in the model, with very limited computing cost.

  8. Theory of friction based on brittle fracture

    USGS Publications Warehouse

    Byerlee, J.D.

    1967-01-01

    A theory of friction is presented that may be more applicable to geologic materials than the classic Bowden and Tabor theory. In the model, surfaces touch at the peaks of asperities and sliding occurs when the asperities fail by brittle fracture. The coefficient of friction, μ, was calculated from the strength of asperities of certain ideal shapes; for cone-shaped asperities, μ is about 0.1 and for wedge-shaped asperities, μ is about 0.15. For actual situations which seem close to the ideal model, observed μ was found to be very close to 0.1, even for materials such as quartz and calcite with widely differing strengths. If surface forces are present, the theory predicts that μ should decrease with load and that it should be higher in a vacuum than in air. In the presence of a fluid film between sliding surfaces, μ should depend on the area of the surfaces in contact. Both effects are observed. The character of wear particles produced during sliding and the way in which μ depends on normal load, roughness, and environment lend further support to the model of friction presented here. © 1967 The American Institute of Physics.

  9. Strong-coupling analysis of two-dimensional O({ital N}) {sigma} models with {ital N}{le}2 on square, triangular, and honeycomb lattices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campostrini, M.; Pelissetto, A.; Rossi, P.

    1996-09-01

    The critical behavior of two-dimensional (2D) O(N) σ models with N ≤ 2 on square, triangular, and honeycomb lattices is investigated by an analysis of the strong-coupling expansion of the two-point fundamental Green's function G(x), calculated up to 21st order on the square lattice, 15th order on the triangular lattice, and 30th order on the honeycomb lattice. For N < 2 the critical behavior is of power-law type, and the exponents γ and ν extracted from our strong-coupling analysis confirm exact results derived assuming universality with solvable solid-on-solid models. At N = 2, i.e., for the 2D XY model, the results from all lattices considered are consistent with the Kosterlitz-Thouless exponential approach to criticality, characterized by an exponent σ = 1/2, and with universality. The value σ = 1/2 is confirmed within an uncertainty of a few percent. The prediction η = 1/4 is also roughly verified. For various values of N ≤ 2, we determine some ratios of amplitudes concerning the two-point function G(x) in the critical limit of the symmetric phase. This analysis shows that the low-momentum behavior of G(x) in the critical region is essentially Gaussian at all values of N ≤ 2. Exact results for the long-distance behavior of G(x) when N = 1 (Ising model in the strong-coupling phase) confirm this statement. © 1996 The American Physical Society.

  10. Specifying initial stress for dynamic heterogeneous earthquake source models

    USGS Publications Warehouse

    Andrews, D.J.; Barall, M.

    2011-01-01

    Dynamic rupture calculations using heterogeneous stress drop that is random and self-similar with a power-law spatial spectrum have great promise of producing realistic ground-motion predictions. We present procedures to specify initial stress for random events with a target rupture length and target magnitude. The stress function is modified in the depth dimension to account for the brittle-ductile transition at the base of the seismogenic zone. Self-similar fluctuations in stress drop are tied in this work to the long-wavelength stress variation that determines rupture length. Heterogeneous stress is related to friction levels in order to relate the model to physical concepts. In a variant of the model, there are high-stress asperities with low background stress. This procedure has a number of advantages: (1) rupture stops naturally, not at artificial barriers; (2) the amplitude of short-wavelength fluctuations of stress drop is not arbitrary: the spectrum is fixed to the long-wavelength fluctuation that determines rupture length; and (3) large stress drop can be confined to asperities occupying a small fraction of the total rupture area, producing slip distributions with enhanced peaks.
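
    One common way to generate such a self-similar, power-law-spectrum stress perturbation is spectral filtering of white noise; the sketch below is illustrative only (the exponent, normalization, and depth-dependent modification used by the authors are not reproduced):

```python
import numpy as np

def selfsimilar_stress(n=256, spectral_slope=-2.0, seed=0):
    """Build a random, self-similar stress-drop perturbation by filtering white
    noise so its 2-D amplitude spectrum falls off as a power law of wavenumber."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((n, n))
    kx = np.fft.fftfreq(n)
    ky = np.fft.fftfreq(n)
    k = np.sqrt(kx[None, :]**2 + ky[:, None]**2)
    k[0, 0] = np.inf                      # suppress the zero-wavenumber (mean) component
    filt = k ** spectral_slope            # power-law spectral filter
    field = np.fft.ifft2(np.fft.fft2(noise) * filt).real
    return field / field.std()            # normalized heterogeneous stress perturbation

print(selfsimilar_stress().shape)
```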

  11. A Semiparametric Change-Point Regression Model for Longitudinal Observations.

    PubMed

    Xing, Haipeng; Ying, Zhiliang

    2012-12-01

    Many longitudinal studies involve relating an outcome process to a set of possibly time-varying covariates, giving rise to the usual regression models for longitudinal data. When the purpose of the study is to investigate the covariate effects when the experimental environment undergoes abrupt changes, or to locate the periods with different levels of covariate effects, a simple and easy-to-interpret approach is to introduce change-points in the regression coefficients. In this connection, we propose a semiparametric change-point regression model in which the error process (stochastic component) is nonparametric, the baseline mean function (functional part) is completely unspecified, the observation times are allowed to be subject-specific, and the number, locations and magnitudes of change-points are unknown and need to be estimated. We further develop an estimation procedure which combines recent advances in semiparametric analysis based on counting process arguments with multiple change-point inference, and discuss its large sample properties, including consistency and asymptotic normality, under suitable regularity conditions. Simulation results show that the proposed methods work well under a variety of scenarios. An application to a real data set is also given.

  12. Modeling Menstrual Cycle Length and Variability at the Approach of Menopause Using Hierarchical Change Point Models

    PubMed Central

    Huang, Xiaobi; Elliott, Michael R.; Harlow, Siobán D.

    2013-01-01

    As women approach menopause, the patterns of their menstrual cycle lengths change. To study these changes, we need to jointly model both the mean and the variability of cycle length. Our proposed model incorporates separate mean and variance change points for each woman and a hierarchical model to link them together, along with regression components to include predictors of menopausal onset such as age at menarche and parity. Additional complexity arises from the fact that the calendar data have substantial missingness due to hormone use, surgery, and failure to report. We integrate multiple imputation and time-to-event modeling in a Bayesian estimation framework to deal with the different forms of missingness. Posterior predictive model checks are applied to evaluate the model fit. Our method successfully models patterns of women's menstrual cycle trajectories throughout their late reproductive life and identifies change points for the mean and variability of segment length, providing insight into the menopausal process. More generally, our model points the way toward increasing use of joint mean-variance models to predict health outcomes and better understand disease processes. PMID:24729638

  13. Systematic survey of high-resolution b value imaging along Californian faults: Inference on asperities

    NASA Astrophysics Data System (ADS)

    Tormann, T.; Wiemer, S.; Mignan, A.

    2014-03-01

    Understanding and forecasting earthquake occurrences is presumably linked to understanding the stress distribution in the Earth's crust. This cannot be measured instrumentally with useful coverage. However, the size distribution of earthquakes, quantified by the Gutenberg-Richter b value, is possibly a proxy to differential stress conditions and could therewith act as a crude stress-meter wherever seismicity is observed. In this study, we improve the methodology of b value imaging for application to a high-resolution 3-D analysis of a complex fault network. In particular, we develop a distance-dependent sampling algorithm and introduce a linearity measure to restrict our output to those regions where the magnitude distribution strictly follows a power law. We assess the catalog completeness along the fault traces using the Bayesian Magnitude of Completeness method and systematically image b values for 243 major fault segments in California. We identify and report b value structures, revisiting previously published features, e.g., the Parkfield asperity, and documenting additional anomalies, e.g., along the San Andreas and Northridge faults. Combining local b values with local earthquake productivity rates, we derive probability maps for the annual potential of one or more M6 events as indicated by the microseismicity of the last three decades. We present a physical concept of how different stressing conditions along a fault surface may lead to b value variation and explain nonlinear frequency-magnitude distributions. Detailed spatial b value information and its physical interpretation can advance our understanding of earthquake occurrence and ideally lead to improved forecasting ability.
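
    The b-value mapping described above rests on the maximum-likelihood (Aki-Utsu) estimator applied to the local magnitude sample above the completeness magnitude. A minimal sketch of that estimator is shown below; the magnitudes, completeness level Mc, and binning width dM are hypothetical, and the distance-dependent sampling and linearity checks of the study are not reproduced.

```python
# Minimal sketch of the Aki-Utsu maximum-likelihood b-value estimator used in
# b-value imaging; mc is the completeness magnitude, dm the magnitude bin width.
import numpy as np

def b_value(mags, mc, dm=0.1):
    m = np.asarray(mags)
    m = m[m >= mc]                                  # keep only the complete part of the catalog
    b = np.log10(np.e) / (m.mean() - (mc - dm / 2.0))
    b_err = b / np.sqrt(len(m))                     # rough (Aki) uncertainty estimate
    return b, b_err

# Example with hypothetical magnitudes:
b, db = b_value([2.1, 2.4, 3.0, 2.2, 2.8, 3.5, 2.0, 2.6], mc=2.0)
```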

  14. Complementing the topsoil information of the Land Use/Land Cover Area Frame Survey (LUCAS) with modelled N2O emissions.

    PubMed

    Lugato, Emanuele; Paniagua, Lily; Jones, Arwyn; de Vries, Wim; Leip, Adrian

    2017-01-01

    Two objectives of the Common Agricultural Policy post-2013 (CAP, 2014-2020) in the European Union (EU) are the sustainable management of natural resources and climate smart agriculture. To understand the CAP impact on these priorities, the Land Use/Cover statistical Area frame Survey (LUCAS) employs direct field observations and soil sub-sampling across the EU. While a huge amount of information can be retrieved from LUCAS points for monitoring the environmental status of agroecosystems and assessing soil carbon sequestration, a fundamental aspect relating to climate change action is missing, namely nitrous oxide (N2O) soil emissions. To fill this gap, we ran the DayCent biogeochemistry model for more than 11,000 LUCAS sampling points under agricultural use, also assessing the model uncertainty. The results showed that current annual N2O emissions followed a skewed distribution with mean and median values of 2.27 and 1.71 kg N ha-1 yr-1, respectively. Using a Random Forest regression to upscale the modelled results to the EU level, we estimated direct soil emissions of N2O in the range of 171-195 Tg yr-1 of CO2eq. Moreover, the direct regional upscaling using modelled N2O emissions at LUCAS points averaged 0.95 Mg yr-1 of CO2eq per hectare, which was within the range of the meta-model upscaling (0.92-1.05 Mg ha-1 yr-1 of CO2eq). We concluded that, if information on management practices were made available and the model bias further reduced by N2O flux measurements at representative LUCAS points, the combination of the land use/soil survey with a well calibrated biogeochemistry model could become a reference tool to support agricultural, environmental and climate policies.

  15. Substrate specificities and intracellular distributions of three N-glycan processing enzymes functioning at a key branch point in the insect N-glycosylation pathway.

    PubMed

    Geisler, Christoph; Jarvis, Donald L

    2012-03-02

    Man(α1-6)[GlcNAc(β1-2)Man(α1-3)]ManGlcNAc(2) is a key branch point intermediate in the insect N-glycosylation pathway because it can be either trimmed by a processing β-N-acetylglucosaminidase (FDL) to produce paucimannosidic N-glycans or elongated by N-acetylglucosaminyltransferase II (GNT-II) to produce complex N-glycans. N-acetylglucosaminyltransferase I (GNT-I) contributes to branch point intermediate production and can potentially reverse the FDL trimming reaction. However, there has been no concerted effort to evaluate the relationships among these three enzymes in any single insect system. Hence, we extended our previous studies on Spodoptera frugiperda (Sf) FDL to include GNT-I and -II. Sf-GNT-I and -II cDNAs were isolated, the predicted protein sequences were analyzed, and both gene products were expressed and their acceptor substrate specificities and intracellular localizations were determined. Sf-GNT-I transferred N-acetylglucosamine to Man(5)GlcNAc(2), Man(3)GlcNAc(2), and GlcNAc(β1-2)Man(α1-6)[Man(α1-3)]ManGlcNAc(2), demonstrating its role in branch point intermediate production and its ability to reverse FDL trimming. Sf-GNT-II only transferred N-acetylglucosamine to Man(α1-6)[GlcNAc(β1-2)Man(α1-3)]ManGlcNAc(2), demonstrating that it initiates complex N-glycan production, but cannot use Man(3)GlcNAc(2) to produce hybrid or complex structures. Fluorescently tagged Sf-GNT-I and -II co-localized with an endogenous Sf Golgi marker and Sf-FDL co-localized with Sf-GNT-I and -II, indicating that all three enzymes are Golgi resident proteins. Unexpectedly, fluorescently tagged Drosophila melanogaster FDL also co-localized with Sf-GNT-I and an endogenous Drosophila Golgi marker, indicating that it is a Golgi resident enzyme in insect cells. Thus, the substrate specificities and physical juxtapositioning of GNT-I, GNT-II, and FDL support the idea that these enzymes function at the N-glycan processing branch point and are major factors determining the

  16. Modeling the contribution of point sources and non-point sources to Thachin River water pollution.

    PubMed

    Schaffner, Monika; Bader, Hans-Peter; Scheidegger, Ruth

    2009-08-15

    Major rivers in developing and emerging countries suffer increasingly of severe degradation of water quality. The current study uses a mathematical Material Flow Analysis (MMFA) as a complementary approach to address the degradation of river water quality due to nutrient pollution in the Thachin River Basin in Central Thailand. This paper gives an overview of the origins and flow paths of the various point- and non-point pollution sources in the Thachin River Basin (in terms of nitrogen and phosphorus) and quantifies their relative importance within the system. The key parameters influencing the main nutrient flows are determined and possible mitigation measures discussed. The results show that aquaculture (as a point source) and rice farming (as a non-point source) are the key nutrient sources in the Thachin River Basin. Other point sources such as pig farms, households and industries, which were previously cited as the most relevant pollution sources in terms of organic pollution, play less significant roles in comparison. This order of importance shifts when considering the model results for the provincial level. Crosschecks with secondary data and field studies confirm the plausibility of our simulations. Specific nutrient loads for the pollution sources are derived; these can be used for a first broad quantification of nutrient pollution in comparable river basins. Based on an identification of the sensitive model parameters, possible mitigation scenarios are determined and their potential to reduce the nutrient load evaluated. A comparison of simulated nutrient loads with measured nutrient concentrations shows that nutrient retention in the river system may be significant. Sedimentation in the slow flowing surface water network as well as nitrogen emission to the air from the warm oxygen deficient waters are certainly partly responsible, but also wetlands along the river banks could play an important role as nutrient sinks.

  17. Fermi Level Control of Point Defects During Growth of Mg-Doped GaN

    NASA Astrophysics Data System (ADS)

    Bryan, Zachary; Hoffmann, Marc; Tweedie, James; Kirste, Ronny; Callsen, Gordon; Bryan, Isaac; Rice, Anthony; Bobea, Milena; Mita, Seiji; Xie, Jinqiao; Sitar, Zlatko; Collazo, Ramón

    2013-05-01

    In this study, Fermi level control of point defects during metalorganic chemical vapor deposition (MOCVD) of Mg-doped GaN has been demonstrated by above-bandgap illumination. Resistivity and photoluminescence (PL) measurements are used to investigate the Mg dopant activation of samples with Mg concentration of 2 × 1019 cm-3 grown with and without exposure to ultraviolet (UV) illumination. Samples grown under UV illumination have five orders of magnitude lower resistivity values compared with typical unannealed GaN:Mg samples. The PL spectra of samples grown with UV exposure are similar to the spectra of those grown without UV exposure that were subsequently annealed, indicating a different incorporation of compensating defects during growth. Based on PL and resistivity measurements we show that Fermi level control of point defects during growth of III-nitrides is feasible.

  18. A travel time forecasting model based on change-point detection method

    NASA Astrophysics Data System (ADS)

    LI, Shupeng; GUANG, Xiaoping; QIAN, Yongsheng; ZENG, Junwei

    2017-06-01

    Travel time parameters obtained from road traffic sensor data play an important role in traffic management practice. In this paper, a travel time forecasting model is proposed for urban road traffic sensor data based on a change-point detection method. A first-order differencing operation is used to preprocess the actual loop-detector data; a change-point detection algorithm is designed to classify the long sequence of travel time data into several patterns; a travel time forecasting model is then established based on an autoregressive integrated moving average (ARIMA) model. In computer simulations, different control parameters are chosen for the adaptive change-point search, which divides the travel time series into several sections of similar state. A linear weight function is then used to fit the travel time sequence and forecast travel time. The results show that the model has high accuracy in travel time forecasting.
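
    A much-simplified sketch of the pipeline described above follows: the travel time series is differenced, change points are flagged where the differenced signal jumps, and a forecast is issued from the most recent homogeneous segment. The threshold rule and the segment-mean forecast are stand-ins for the paper's adaptive search and ARIMA step; the simulated series and all parameters are illustrative.

```python
# Simplified stand-in for the paper's pipeline: difference the travel-time
# series, flag change points where the differenced signal jumps, then forecast
# from the latest homogeneous segment (a segment mean replaces the ARIMA step).
import numpy as np

def detect_change_points(x, k=6.0):
    d = np.diff(x)                                      # first-order differencing
    med = np.median(d)
    thresh = k * np.median(np.abs(d - med))             # robust jump threshold
    return np.where(np.abs(d - med) > thresh)[0] + 1    # indices where a new regime starts

def forecast_last_segment(x, steps=5):
    cps = detect_change_points(x)
    start = cps[-1] if len(cps) else 0                  # beginning of the latest regime
    return np.full(steps, np.mean(x[start:]))           # naive per-segment forecast

rng = np.random.default_rng(0)
travel_time = np.r_[rng.normal(60, 2, 100), rng.normal(75, 2, 50)]   # synthetic series
print(detect_change_points(travel_time), forecast_last_segment(travel_time))
```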

  19. An interpretation model of GPR point data in tunnel geological prediction

    NASA Astrophysics Data System (ADS)

    He, Yu-yao; Li, Bao-qi; Guo, Yuan-shu; Wang, Teng-na; Zhu, Ya

    2017-02-01

    GPR (Ground Penetrating Radar) point data plays an indispensable role in tunnel geological prediction. However, research on GPR point data remains limited, and existing results do not meet the practical requirements of such projects. In this paper, a GPR point data interpretation model based on the WD (Wigner distribution) and a deep CNN (convolutional neural network) is proposed. Firstly, the GPR point data are transformed by the WD to obtain maps of the joint time-frequency distribution; secondly, the joint distribution maps are classified by the deep CNN, while the approximate location of the geological target is determined by inspecting the time-frequency map in parallel; finally, the GPR point data are interpreted according to the classification results and the position information from the map. The simulation results show that the classification accuracy on the test dataset (comprising 1200 GPR point data records) is 91.83% after 200 iterations. Our model has the advantages of high accuracy and fast training speed, and can provide a scientific basis for planning tunnel construction and excavation.

  20. Pairwise Interaction Extended Point-Particle (PIEP) model for multiphase jets and sedimenting particles

    NASA Astrophysics Data System (ADS)

    Liu, Kai; Balachandar, S.

    2017-11-01

    We perform a series of Euler-Lagrange direct numerical simulations (DNS) of multiphase jets and sedimenting particles. The forces the flow exerts on the particles in these two-way coupled simulations are computed using the Basset-Boussinesq-Oseen (BBO) equations. These forces do not explicitly account for particle-particle interactions, even though such pairwise interactions, induced by the perturbations from neighboring particles, may be important, especially when the particle volume fraction is high. Such effects have been largely unaddressed in the literature. Here, we implement the Pairwise Interaction Extended Point-Particle (PIEP) model to simulate the effect of neighboring particle pairs. A simple collision model is also applied to avoid unphysical overlapping of solid spherical particles. The simulation results indicate that the PIEP model captures richer, more detailed motion of the dispersed phase (droplets and particles). Office of Naval Research (ONR) Multidisciplinary University Research Initiative (MURI) project N00014-16-1-2617.

  1. Pseudo-critical point in anomalous phase diagrams of simple plasma models

    NASA Astrophysics Data System (ADS)

    Chigvintsev, A. Yu; Iosilevskiy, I. L.; Noginova, L. Yu

    2016-11-01

    Anomalous phase diagrams in a subclass of simplified ("non-associative") Coulomb models are discussed. The common feature of this subclass is the absence, by construction, of individual correlations between charges of opposite sign. Examples include a modified OCP of ions on a uniformly compressible background of an ideal Fermi gas of electrons, OCP(∼), or a superposition of two non-ideal OCP(∼) models of ions and electrons. In contrast to the ordinary OCP model on a non-compressible ("rigid") background, OCP(#), two new phase transitions with upper critical points, boiling and sublimation, appear in the OCP(∼) phase diagram in addition to the well-known Wigner crystallization. The point is that the topology of the OCP(∼) phase diagram becomes anomalous at sufficiently high ionic charge number Z. Namely, only one unified crystal-fluid phase transition without a critical point exists, as a continuous superposition of melting and sublimation, in OCP(∼) over the interval Z1 < Z < Z2. Most remarkable is the appearance of pseudo-critical points at both boundary values Z = Z1 ≈ 35.5 and Z = Z2 ≈ 40.0. It should be stressed that the critical isotherm is exactly cubic at both of these pseudo-critical points. In this study we have improved our previous calculations and utilized a more sophisticated equation of state for the model components, provided by Chabrier and Potekhin (1998 Phys. Rev. E 58 4941).

  2. Ground Motion Simulation for a Large Active Fault System using Empirical Green's Function Method and the Strong Motion Prediction Recipe - a Case Study of the Noubi Fault Zone -

    NASA Astrophysics Data System (ADS)

    Kuriyama, M.; Kumamoto, T.; Fujita, M.

    2005-12-01

    The 1995 Hyogo-ken Nambu Earthquake near Kobe, Japan, spurred research on strong motion prediction. To mitigate damage caused by large earthquakes, a highly precise method of predicting future strong motion waveforms is required. In this study, we applied the empirical Green's function method to forward modeling in order to simulate strong ground motion in the Noubi Fault zone and examine issues related to strong motion prediction for large faults. Source models for the scenario earthquakes were constructed using the recipe of strong motion prediction (Irikura and Miyake, 2001; Irikura et al., 2003). To calculate the asperity area ratio of a large fault zone, the results of a scaling model, a scaling model with 22% asperity by area, and a cascade model were compared, and several rupture starting points and segmentation parameters were examined for certain cases. A small earthquake (Mw 4.6) that occurred in northern Fukui Prefecture in 2004 was used as the empirical Green's function, and the source spectrum of this small event was found to agree with the omega-square scaling law. The Nukumi, Neodani, and Umehara segments of the 1891 Noubi Earthquake were targeted in the present study. The positions of the asperity areas and rupture starting points were based on the horizontal displacement distributions reported by Matsuda (1974) and the fault branching pattern and rupture direction model proposed by Nakata and Goto (1998). Asymmetry in the damage maps for the Noubi Earthquake was then examined. We compared the maximum horizontal velocities for cases with different rupture starting points. In one case, rupture started at the center of the Nukumi Fault, while in another, rupture started on the southeastern edge of the Umehara Fault; the scaling model showed an approximately 2.1-fold difference between these cases at observation point FKI005 of K-NET. This difference is considered to relate to the directivity effect associated with the direction of rupture

  3. Reconstruction of Consistent 3d CAD Models from Point Cloud Data Using a Priori CAD Models

    NASA Astrophysics Data System (ADS)

    Bey, A.; Chaine, R.; Marc, R.; Thibault, G.; Akkouche, S.

    2011-09-01

    We address the reconstruction of 3D CAD models from point cloud data acquired in industrial environments, using a pre-existing 3D model as an initial estimate of the scene to be processed. Indeed, this prior knowledge can be used to drive the reconstruction so as to generate an accurate 3D model matching the point cloud. In particular, we focus on the cylindrical parts of the 3D models. We propose to state the problem in a probabilistic framework: we search for the 3D model which maximizes a probability taking several constraints into account, such as the relevancy with respect to the point cloud and the a priori 3D model, and the consistency of the reconstructed model. The resulting optimization problem can then be handled using a stochastic exploration of the solution space, based on the random insertion of elements in the configuration under construction, coupled with a greedy management of the conflicts which efficiently improves the configuration at each step. We show that this approach provides reliable reconstructed 3D models by presenting some results on industrial data sets.

  4. Modelling of the frictional behaviour of the snake skin covered by anisotropic surface nanostructures.

    PubMed

    Filippov, Alexander E; Gorb, Stanislav N

    2016-03-23

    Previous experimental data clearly revealed anisotropic friction on the ventral scale surface of snakes. However, it is known that the frictional properties of the ventral surface of the snake skin vary over a very broad range, and the degree of anisotropy also varies quite strongly. This might be due to the variety of species studied, the diversity of approaches used for friction characterization, and/or the variety of substrates used as a counterpart in the experiments. In order to understand the interactions between the nanostructure arrays of the ventral surface of the snake skin and the substrate, this study was undertaken, aimed at numerical modeling of the frictional properties of structurally anisotropic surfaces in contact with asperities of various sizes. The model shows that frictional anisotropy appears on the snake skin only on substrates with a characteristic range of roughness that is smaller than or comparable to the dimensions of the skin microstructure. In other words, the scale of the skin relief should reflect an adaptation to the particular range of surface asperities of the substrate.

  5. Modelling of the frictional behaviour of the snake skin covered by anisotropic surface nanostructures

    PubMed Central

    Filippov, Alexander E.; Gorb, Stanislav N.

    2016-01-01

    Previous experimental data clearly revealed anisotropic friction on the ventral scale surface of snakes. However, it is known that the frictional properties of the ventral surface of the snake skin vary over a very broad range, and the degree of anisotropy also varies quite strongly. This might be due to the variety of species studied, the diversity of approaches used for friction characterization, and/or the variety of substrates used as a counterpart in the experiments. In order to understand the interactions between the nanostructure arrays of the ventral surface of the snake skin and the substrate, this study was undertaken, aimed at numerical modeling of the frictional properties of structurally anisotropic surfaces in contact with asperities of various sizes. The model shows that frictional anisotropy appears on the snake skin only on substrates with a characteristic range of roughness that is smaller than or comparable to the dimensions of the skin microstructure. In other words, the scale of the skin relief should reflect an adaptation to the particular range of surface asperities of the substrate. PMID:27005001

  6. Modelling of the frictional behaviour of the snake skin covered by anisotropic surface nanostructures

    NASA Astrophysics Data System (ADS)

    Filippov, Alexander E.; Gorb, Stanislav N.

    2016-03-01

    Previous experimental data clearly revealed anisotropic friction on the ventral scale surface of snakes. However, it is known that the frictional properties of the ventral surface of the snake skin vary over a very broad range, and the degree of anisotropy also varies quite strongly. This might be due to the variety of species studied, the diversity of approaches used for friction characterization, and/or the variety of substrates used as a counterpart in the experiments. In order to understand the interactions between the nanostructure arrays of the ventral surface of the snake skin and the substrate, this study was undertaken, aimed at numerical modeling of the frictional properties of structurally anisotropic surfaces in contact with asperities of various sizes. The model shows that frictional anisotropy appears on the snake skin only on substrates with a characteristic range of roughness that is smaller than or comparable to the dimensions of the skin microstructure. In other words, the scale of the skin relief should reflect an adaptation to the particular range of surface asperities of the substrate.

  7. 2. HORSESHOE CURVE IN GLACIER POINT ROAD NEAR GLACIER POINT. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    2. HORSESHOE CURVE IN GLACIER POINT ROAD NEAR GLACIER POINT. HALF DOME AT CENTER REAR. LOOKING NNE. GIS N-37 43 44.3 / W-119 34 14.1 - Glacier Point Road, Between Chinquapin Flat & Glacier Point, Yosemite Village, Mariposa County, CA

  8. Point cloud modeling using the homogeneous transformation for non-cooperative pose estimation

    NASA Astrophysics Data System (ADS)

    Lim, Tae W.

    2015-06-01

    A modeling process to simulate point cloud range data that a lidar (light detection and ranging) sensor produces is presented in this paper in order to support the development of non-cooperative pose (relative attitude and position) estimation approaches which will help improve proximity operation capabilities between two adjacent vehicles. The algorithms in the modeling process were based on the homogeneous transformation, which has been employed extensively in robotics and computer graphics, as well as in recently developed pose estimation algorithms. Using a flash lidar in a laboratory testing environment, point cloud data of a test article was simulated and compared against the measured point cloud data. The simulated and measured data sets match closely, validating the modeling process. The modeling capability enables close examination of the characteristics of point cloud images of an object as it undergoes various translational and rotational motions. Relevant characteristics that will be crucial in non-cooperative pose estimation were identified such as shift, shadowing, perspective projection, jagged edges, and differential point cloud density. These characteristics will have to be considered in developing effective non-cooperative pose estimation algorithms. The modeling capability will allow extensive non-cooperative pose estimation performance simulations prior to field testing, saving development cost and providing performance metrics of the pose estimation concepts and algorithms under evaluation. The modeling process also provides "truth" pose of the test objects with respect to the sensor frame so that the pose estimation error can be quantified.
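
    The core geometric step referred to above is the homogeneous transformation: a 4x4 matrix combining rotation and translation that maps simulated sensor-frame points to a new relative pose. The sketch below applies such a transform to a synthetic point cloud; the rotation angle, translation, and cloud are arbitrary examples, not parameters from the paper.

```python
# Minimal sketch of the homogeneous-transformation step: a 4x4 matrix built from
# a rotation and a translation maps simulated points to a new relative pose.
import numpy as np

def homogeneous(R, t):
    T = np.eye(4)
    T[:3, :3] = R                               # rotation block
    T[:3, 3] = t                                # translation block
    return T

def transform_points(T, pts):
    """pts: (N, 3) array of x, y, z points; returns the transformed (N, 3) array."""
    homog = np.c_[pts, np.ones(len(pts))]       # append w = 1 to each point
    return (T @ homog.T).T[:, :3]

theta = np.deg2rad(30.0)                        # example: 30 deg yaw plus an offset
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
cloud = np.random.default_rng(1).uniform(-1, 1, size=(1000, 3))
moved = transform_points(homogeneous(Rz, [0.5, 0.0, 2.0]), cloud)
```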

  9. Development and application of a coupled bio-geochemical and hydrological model for point and non-point source river water pollution

    NASA Astrophysics Data System (ADS)

    Pohlert, T.

    2007-12-01

    The aim of this paper is to present recent developments of an integrated water- and N-balance model for assessing the effects of land use change on water and N fluxes in meso-scale river catchments. The semi-distributed water-balance model SWAT was coupled with algorithms of the bio-geochemical model DNDC as well as the model CropSyst. The new model, hereafter denoted SWAT-N, was tested with leaching data from a long-term lysimeter experiment as well as results from a 5-year sampling campaign conducted at the outlet of the meso-scale catchment of the River Dill (Germany). The model efficiency for N load, as well as the spatial representation of N load along the river channel (tested against longitudinal profiles), shows that the accuracy of the model has improved owing to the integration of the aforementioned process-oriented models. After model development and testing, SWAT-N was used to assess the effects of EU agricultural policy (the CAP reform) on land use change and the consequent changes in N fluxes within the Dill catchment. giessen.de/geb/volltexte/2007/4531/

  10. On the Limitations of Taylor’s Hypothesis in Parker Solar Probe’s Measurements near the Alfvén Critical Point

    NASA Astrophysics Data System (ADS)

    Bourouaine, Sofiane; Perez, Jean C.

    2018-05-01

    In this Letter, we present an analysis of two-point, two-time correlation functions from high-resolution numerical simulations of reflection-driven Alfvén turbulence near the Alfvén critical point r_c. The simulations model the turbulence in a prescribed background solar wind model chosen to match observational constraints. This analysis allows us to investigate the temporal decorrelation of solar wind turbulence and the validity of Taylor's approximation near the heliocentric distance r_c, which Parker Solar Probe (PSP) is expected to explore in the coming years. The simulations show that the temporal decay of the Fourier-transformed turbulence decorrelation function is better described by a Gaussian model than by a pure exponential time decay, and that the decorrelation frequency is almost linear in the perpendicular wave number k_⊥ (perpendicular with respect to the background magnetic field B_0). Based on the simulations, we conclude that Taylor's approximation cannot be used in this instance to provide a connection between the frequency ω of the time signal (measured in the probe frame) and the wavevector k_⊥ of the fluctuations, because the frequency k_⊥ V_sc (where V_sc is the spacecraft speed) near r_c is comparable to the estimated decorrelation frequency. However, the use of Taylor's approximation still leads to the correct spectral indices of the power spectra measured in the spacecraft frame. In this Letter, based on a Gaussian model, we suggest a modified relationship between ω and k_⊥, which might be useful in the interpretation of future PSP measurements.

  11. Improved Modeling of Three-Point Estimates for Decision Making: Going Beyond the Triangle

    DTIC Science & Technology

    2016-03-01

    Master's thesis by Daniel W. Mulligan, March 2016. Thesis Advisor: Mark Rhoades. Approved for public release; distribution is unlimited.

  12. Marked point process for modelling seismic activity (case study in Sumatra and Java)

    NASA Astrophysics Data System (ADS)

    Pratiwi, Hasih; Sulistya Rini, Lia; Wayan Mangku, I.

    2018-05-01

    Earthquakes are natural phenomena that are random and irregular in space and time. Forecasting earthquake occurrence at a given location remains difficult, so the development of earthquake forecasting methodology continues from both the seismological and the stochastic points of view. To describe such random phenomena in space and time, a point process approach can be used. There are two types of point processes: temporal point processes and spatial point processes. A temporal point process describes events observed over time as a sequence in time, whereas a spatial point process describes the locations of objects in two- or three-dimensional space. The points of a point process can be labelled with additional information called marks. A marked point process can be considered as a pair (x, m), where x is the location of a point and m is the mark attached to it. This study aims to model a marked point process indexed by time using earthquake data from Sumatra Island and Java Island. This model can be used to analyse seismic activity through its intensity function by conditioning on the history of the process up to time t. Based on data obtained from the U.S. Geological Survey from 1973 to 2017 with a magnitude threshold of 5, we obtained maximum likelihood estimates for the parameters of the intensity function. The parameter estimates show that the seismic activity of Sumatra Island is greater than that of Java Island.
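
    As a much-simplified stand-in for the history-dependent intensity model described above, the sketch below estimates a constant (homogeneous Poisson) intensity per region by maximum likelihood, which is simply the event count divided by the observation window; comparing the two regions mirrors the Sumatra-versus-Java comparison. The event times are synthetic, not the USGS catalog.

```python
# Much-simplified stand-in for the history-dependent intensity model: for a
# homogeneous temporal Poisson process the maximum-likelihood intensity is
# (number of events) / (length of the observation window), computed per region.
import numpy as np

def poisson_mle_intensity(event_times, t_start, t_end):
    events = np.asarray(event_times)
    n = np.sum((events >= t_start) & (events <= t_end))
    return n / (t_end - t_start)                # events per unit time

# Hypothetical event times (years), not the actual catalog:
sumatra_times = np.sort(np.random.default_rng(0).uniform(1973, 2017, 900))
java_times = np.sort(np.random.default_rng(1).uniform(1973, 2017, 400))
lam_sumatra = poisson_mle_intensity(sumatra_times, 1973, 2017)
lam_java = poisson_mle_intensity(java_times, 1973, 2017)
print(f"Sumatra: {lam_sumatra:.1f} events/yr, Java: {lam_java:.1f} events/yr")
```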

  13. Stability of equilibrium points in intraguild predation model with disease with SI model

    NASA Astrophysics Data System (ADS)

    Hassan, Aimi Nuraida binti Ali; Bujang, Noriham binti; Mahdi, Ahmad Faisal Bin

    2017-04-01

    Intraguild predation (IGP) is killing and eating among potential competitors. It is a widespread interaction, distinct from pure competition or predation. A Lotka-Volterra competition model and an intraguild predation model have been analyzed. The model assumes that no immigration or emigration occurs. This paper considers only the IGP model with susceptible and infective (SI) classes. The stability analysis of the equilibrium points of the intraguild predation model with disease, using the Routh-Hurwitz criteria, is illustrated with numerical examples.
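
    The Routh-Hurwitz analysis mentioned above is equivalent, for local stability, to checking that all eigenvalues of the Jacobian evaluated at an equilibrium have negative real parts. The sketch below performs that check numerically; the Jacobian entries are hypothetical placeholders, not the coefficients of the paper's IGP-SI model.

```python
# Minimal sketch of the stability check: evaluate the Jacobian at an equilibrium
# and test whether all eigenvalues have negative real parts (equivalent to the
# Routh-Hurwitz conditions). The entries below are hypothetical placeholders.
import numpy as np

def is_locally_stable(jacobian):
    eigvals = np.linalg.eigvals(jacobian)
    return bool(np.all(eigvals.real < 0))

J_example = np.array([[-0.8,  0.1,  0.0],
                      [ 0.2, -0.5,  0.3],
                      [ 0.0,  0.1, -0.4]])
print(is_locally_stable(J_example), np.linalg.eigvals(J_example))
```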

  14. Image-Based Airborne LiDAR Point Cloud Encoding for 3d Building Model Retrieval

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Chen; Lin, Chao-Hung

    2016-06-01

    With the development of Web 2.0 and cyber city modeling, an increasing number of 3D models have become available on web-based model-sharing platforms, supporting applications such as navigation, urban planning, and virtual reality. Based on the concept of data reuse, a 3D model retrieval system is proposed to retrieve building models similar to a user-specified query. The basic idea behind this system is to reuse existing 3D building models instead of reconstructing them from point clouds. To retrieve models efficiently, the models in databases are generally encoded compactly by a shape descriptor. However, most geometric descriptors in related works are applied to polygonal models. In this study, the input query of the model retrieval system is a point cloud acquired by Light Detection and Ranging (LiDAR) systems because of their efficient scene scanning and spatial information collection. Using point clouds, with their sparse, noisy, and incomplete sampling, as input queries is more difficult than using 3D models. Because the building roof is more informative than other parts of an airborne LiDAR point cloud, an image-based approach is proposed to encode both the point clouds from input queries and the 3D models in databases. The main goal of the encoding is that models in the database and input point clouds are encoded consistently. Firstly, top-view depth images of buildings are generated to represent the geometry of a building roof. Secondly, geometric features are extracted from the depth images based on the height, edges and planes of the building. Finally, descriptors are extracted via spatial histograms and used in the 3D model retrieval system. For data retrieval, the models are retrieved by matching the encoding coefficients of point clouds and building models. In experiments, a database of about 900,000 3D models collected from the Internet is used to evaluate data retrieval. The results of the proposed method show a clear superiority
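
    The top-view depth-image encoding described above can be sketched as a simple rasterization: bin the x-y footprint of the point cloud into a grid and keep the maximum height per cell, so the roof surface dominates. The cell size and the synthetic cloud below are illustrative assumptions; the study's feature extraction and histogram descriptors are not reproduced.

```python
# Minimal sketch of the top-view depth-image step: bin x-y coordinates into a
# grid and keep the maximum height per cell (the roof surface). Illustrative only.
import numpy as np

def topview_depth_image(points, cell=0.5):
    """points: (N, 3) array of x, y, z; returns a 2-D array of per-cell max heights."""
    xy = points[:, :2]
    ij = np.floor((xy - xy.min(axis=0)) / cell).astype(int)
    ni, nj = ij.max(axis=0) + 1
    img = np.full((ni, nj), np.nan)             # NaN marks empty cells
    for (i, j), z in zip(ij, points[:, 2]):
        if np.isnan(img[i, j]) or z > img[i, j]:
            img[i, j] = z                       # keep the highest return per cell
    return img

cloud = np.random.default_rng(2).uniform([0, 0, 0], [20, 10, 8], size=(5000, 3))
depth = topview_depth_image(cloud)
```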

  15. A removal model for estimating detection probabilities from point-count surveys

    USGS Publications Warehouse

    Farnsworth, G.L.; Pollock, K.H.; Nichols, J.D.; Simons, T.R.; Hines, J.E.; Sauer, J.R.

    2002-01-01

    Use of point-count surveys is a popular method for collecting data on abundance and distribution of birds. However, analyses of such data often ignore potential differences in detection probability. We adapted a removal model to directly estimate detection probability during point-count surveys. The model assumes that singing frequency is a major factor influencing probability of detection when birds are surveyed using point counts. This may be appropriate for surveys in which most detections are by sound. The model requires counts to be divided into several time intervals. Point counts are often conducted for 10 min, where the number of birds recorded is divided into those first observed in the first 3 min, the subsequent 2 min, and the last 5 min. We developed a maximum-likelihood estimator for the detectability of birds recorded during counts divided into those intervals. This technique can easily be adapted to point counts divided into intervals of any length. We applied this method to unlimited-radius counts conducted in Great Smoky Mountains National Park. We used model selection criteria to identify whether detection probabilities varied among species, throughout the morning, throughout the season, and among different observers. We found differences in detection probability among species. Species that sing frequently such as Winter Wren (Troglodytes troglodytes) and Acadian Flycatcher (Empidonax virescens) had high detection probabilities (∼90%) and species that call infrequently such as Pileated Woodpecker (Dryocopus pileatus) had low detection probability (36%). We also found detection probabilities varied with the time of day for some species (e.g. thrushes) and between observers for other species. We used the same approach to estimate detection probability and density for a subset of the observations with limited-radius point counts.
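
    A minimal version of the removal-model idea follows: if birds are detected at a constant per-minute rate, the counts of first detections in the 3-, 2- and 5-minute intervals follow a multinomial whose cell probabilities depend on that rate, and overall detectability over the 10-minute count is 1 - exp(-10*lam). This is a simplified sketch, not the exact estimator or model-selection framework of the study; the counts are hypothetical.

```python
# Simplified removal-model sketch: constant per-minute detection rate lam, first
# detections binned into 0-3, 3-5 and 5-10 minute intervals, lam fitted by
# maximizing the multinomial likelihood conditional on detection.
import numpy as np
from scipy.optimize import minimize_scalar

BOUNDS = np.array([0.0, 3.0, 5.0, 10.0])        # interval edges in minutes

def neg_log_lik(lam, counts):
    surv = np.exp(-lam * BOUNDS)                # P(not yet detected) at each edge
    cell = (surv[:-1] - surv[1:]) / (1.0 - surv[-1])   # cell probs, conditional on detection
    return -np.sum(np.asarray(counts) * np.log(cell))

counts = (40, 12, 15)                           # hypothetical first-detection counts
fit = minimize_scalar(neg_log_lik, bounds=(1e-4, 5.0), args=(counts,), method="bounded")
lam_hat = fit.x
print("detection probability over 10 min:", 1.0 - np.exp(-10.0 * lam_hat))
```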

  16. HORSESHOE CURVE IN GLACIER POINT ROAD NEAR GLACIER POINT. HALF ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    HORSESHOE CURVE IN GLACIER POINT ROAD NEAR GLACIER POINT. HALF DOME AT CENTER REAR. SAME VIEW AT CA-157-2. LOOKING NNE. GIS: N-37' 43 44.3 / W-119 34 14.1 - Glacier Point Road, Between Chinquapin Flat & Glacier Point, Yosemite Village, Mariposa County, CA

  17. An infrared sky model based on the IRAS point source data

    NASA Technical Reports Server (NTRS)

    Cohen, Martin; Walker, Russell; Wainscoat, Richard; Volk, Kevin; Walker, Helen; Schwartz, Deborah

    1990-01-01

    A detailed model for the infrared point source sky is presented that comprises geometrically and physically realistic representations of the galactic disk, bulge, spheroid, spiral arms, molecular ring, and absolute magnitudes. The model was guided by a parallel Monte Carlo simulation of the Galaxy. The content of the galactic source table constitutes an excellent match to the 12 micrometer luminosity function in the simulation, as well as the luminosity functions at V and K. Models are given for predicting the density of asteroids to be observed, and the diffuse background radiance of the Zodiacal cloud. The model can be used to predict the character of the point source sky expected for observations from future infrared space experiments.

  18. Low Temperature Properties for Correlation Functions in Classical N-Vector Spin Models

    NASA Astrophysics Data System (ADS)

    Balaban, Tadeusz; O'Carroll, Michael

    We obtain convergent multi-scale expansions for the one- and two-point correlation functions of the low-temperature lattice classical N-vector spin model in d ≥ 3 dimensions, N ≥ 2, with large Gibbs-factor parameters and 0 < v ≤ 1. In the thermodynamic limit, with h = e_1 and Δ ≡ ∂*∂, the expansion gives the spontaneous magnetization, the Goldstone-boson behavior of the two-point function, and its long-distance asymptotics with decay rate ρ > 0 and a precisely determined constant c_0.

  19. Statistical properties of several models of fractional random point processes

    NASA Astrophysics Data System (ADS)

    Bendjaballah, C.

    2011-08-01

    Statistical properties of several models of fractional random point processes have been analyzed from the counting and time interval statistics points of view. Based on the criterion of the reduced variance, it is seen that such processes exhibit nonclassical properties. The conditions for these processes to be treated as conditional Poisson processes are examined. Numerical simulations illustrate part of the theoretical calculations.

  20. Multiscale contact mechanics model for RF-MEMS switches with quantified uncertainties

    NASA Astrophysics Data System (ADS)

    Kim, Hojin; Huda Shaik, Nurul; Xu, Xin; Raman, Arvind; Strachan, Alejandro

    2013-12-01

    We introduce a multiscale model for contact mechanics between rough surfaces and apply it to characterize the force-displacement relationship for a metal-dielectric contact relevant for radio frequency micro-electromechanical system (MEMS) switches. We propose a mesoscale model to describe the history-dependent force-displacement relationships in terms of the surface roughness, the long-range attractive interaction between the two surfaces, and the repulsive interaction between contacting asperities (including elastic and plastic deformation). The inputs to this model are the experimentally determined surface topography and the Hamaker constant as well as the mechanical response of individual asperities obtained from density functional theory calculations and large-scale molecular dynamics simulations. The model captures non-trivial processes including the hysteresis during loading and unloading due to plastic deformation, yet it is computationally efficient enough to enable extensive uncertainty quantification and sensitivity analysis. We quantify how uncertainties and variability in the input parameters, both experimental and theoretical, affect the force-displacement curves during approach and retraction. In addition, a sensitivity analysis quantifies the relative importance of the various input quantities for the prediction of force-displacement during contact closing and opening. The resulting force-displacement curves with quantified uncertainties can be directly used in device-level simulations of micro-switches and enable the incorporation of atomic and mesoscale phenomena in predictive device-scale simulations.
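
    The repulsive, multi-asperity part of such a mesoscale model can be illustrated with a Greenwood-Williamson-style sum of Hertzian asperity contacts, as sketched below. Plasticity, adhesion and the history dependence emphasized in the paper are omitted, and the summit radius, modulus and height distribution are illustrative assumptions.

```python
# Greenwood-Williamson-style sketch of the elastic part of a multi-asperity
# contact: spherical summits with random heights deform Hertzianly, and the
# total force is the sum over summits in contact. Parameters are illustrative.
import numpy as np

def rough_contact_force(separation, summit_heights, R=1e-6, E_star=100e9):
    """Total normal force at a given rigid-flat separation (all SI units)."""
    overlap = summit_heights - separation       # interference of each summit
    overlap = overlap[overlap > 0.0]            # only summits actually in contact
    return np.sum((4.0 / 3.0) * E_star * np.sqrt(R) * overlap**1.5)   # Hertz per summit

rng = np.random.default_rng(0)
heights = rng.normal(0.0, 20e-9, size=10_000)   # 20 nm rms summit-height spread
for d in (40e-9, 20e-9, 0.0):                   # decreasing separation = loading
    print(d, rough_contact_force(d, heights))
```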

  1. Laboratory experiment of seismic cycles using compliant viscoelastic materials

    NASA Astrophysics Data System (ADS)

    Yamaguchi, T.

    2016-12-01

    It is well known that surface asperities at fault interfaces play an essential role in stick-slip friction. Many laboratory experiments have been conducted using rocks and analogue materials to understand the effects of asperities and the underlying mechanisms. Among such materials, soft polymer gels have the great advantage of slowing down the propagating rupture front as well as the shear wave speed, which facilitates observation of the dynamic rupture behavior. However, most experiments have been done with bimaterial interfaces (a combination of soft and hard materials), and there are few experiments with an identical (gel-on-gel) setup. Furthermore, there have also been few studies addressing the link between local asperity contact and macroscopic dynamic rupture behavior. In this talk, we report our experimental studies on stick-slip friction between gels having controlled artificial asperities. We show that, depending on the number density and configurational randomness of the asperities, the rupture behavior changes greatly: when the asperities are located periodically at optimum number densities, fast rupture propagation occurs, while slow and heterogeneous slip behavior is observed for samples with randomly located asperities. We discuss the importance of low-frequency (long-wavelength) excitation of the normal displacement in weakening the fault interface. We also discuss the observed regular-to-slow slip transition with a simple model.

  2. Modeling of normal contact of elastic bodies with surface relief taken into account

    NASA Astrophysics Data System (ADS)

    Goryacheva, I. G.; Tsukanov, I. Yu

    2018-04-01

    An approach to accounting for surface relief in normal contact problems for rough bodies, based on an additional displacement function for asperities, is considered. The method and analytic expressions for calculating the additional displacement function for one-scale and two-scale wavy relief are presented. The influence of the geometric parameters of the microrelief, including the number of scales and the asperity density, on the additional displacements of the rough layer is analyzed.
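
    For the one-scale wavy relief mentioned above, a classical single-scale reference result (Westergaard's solution for a 1-D sinusoidal surface) relates the mean pressure to the contact-area fraction, as sketched below. This is a textbook result used here for orientation, not the authors' additional-displacement formulas; the numerical inputs are illustrative.

```python
# Westergaard's classical result for a 1-D sinusoidal surface of amplitude delta
# and wavelength lam: full-contact pressure p* = pi*E*delta/lam, and at a lower
# mean pressure p the contact half-width a satisfies p/p* = sin^2(pi*a/lam).
import numpy as np

def wavy_contact_fraction(p_mean, delta, lam, e_star):
    p_star = np.pi * e_star * delta / lam       # pressure needed for full contact
    if p_mean >= p_star:
        return 1.0
    a = (lam / np.pi) * np.arcsin(np.sqrt(p_mean / p_star))   # contact half-width
    return 2.0 * a / lam                        # fraction of each period in contact

print(wavy_contact_fraction(p_mean=50e6, delta=1e-6, lam=100e-6, e_star=100e9))
```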

  3. Isoelectric point and adsorption activity of porous g-C3N4

    NASA Astrophysics Data System (ADS)

    Zhu, Bicheng; Xia, Pengfei; Ho, Wingkei; Yu, Jiaguo

    2015-07-01

    The isoelectric point (IEP) is an important physicochemical parameter of many compounds, such as oxides, hydroxides, and nitrides, and can contribute to estimation of the surface charges of compound particles at various pH conditions. In this work, three types of graphitic carbon nitrides (g-C3N4) were synthesized by directly heating melamine, thiourea, and urea. The prepared samples showed different microstructures and IEPs that influenced their adsorption activity. Differences in microstructure resulted from the various precursors used during synthesis. The IEPs of the obtained g-C3N4 were measured to be approximately 4-5, which is due to the equilibrium of chemical reactions between hydrogen ions, hydroxyl ions, and amine groups on the g-C3N4 surface. The IEP of g-C3N4 prepared from thiourea was lower than those of the corresponding samples prepared from melamine and urea. The adsorption activity of methylene blue on g-C3N4 prepared from urea and thiourea was excellent, which indicates that g-C3N4 is a promising adsorbent. This work provides a useful reference for choosing precursors with which to prepare g-C3N4 and combining g-C3N4 with other compounds in solution.

  4. Stochastic point-source modeling of ground motions in the Cascadia region

    USGS Publications Warehouse

    Atkinson, G.M.; Boore, D.M.

    1997-01-01

    A stochastic model is used to develop preliminary ground motion relations for the Cascadia region for rock sites. The model parameters are derived from empirical analyses of seismographic data from the Cascadia region. The model is based on a Brune point-source characterized by a stress parameter of 50 bars. The model predictions are compared to ground-motion data from the Cascadia region and to data from large earthquakes in other subduction zones. The point-source simulations match the observations from moderate events (M 100 km). The discrepancy at large magnitudes suggests further work on modeling finite-fault effects and regional attenuation is warranted. In the meantime, the preliminary equations are satisfactory for predicting motions from events of M < 7 and provide conservative estimates of motions from larger events at distances less than 100 km.
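
    The point-source model referred to above is built on the Brune omega-squared source spectrum with a stress parameter (50 bars in the study). A minimal sketch of that spectral shape is given below; the path attenuation, site terms and scaling constants of the full stochastic method are omitted, and the example magnitude and shear-wave speed are illustrative.

```python
# Minimal sketch of the Brune omega-squared point-source spectrum underlying the
# stochastic model: displacement ~ M0 / (1 + (f/fc)^2) with corner frequency
# fc = 4.9e6 * beta * (stress_drop / M0)^(1/3) (beta in km/s, stress drop in
# bars, M0 in dyne-cm). Path, site and scaling constants are omitted.
import numpy as np

def brune_acceleration_spectrum(f, m0_dyne_cm, stress_bars=50.0, beta_km_s=3.7):
    fc = 4.9e6 * beta_km_s * (stress_bars / m0_dyne_cm) ** (1.0 / 3.0)
    displacement = m0_dyne_cm / (1.0 + (f / fc) ** 2)     # omega-squared source shape
    return (2.0 * np.pi * f) ** 2 * displacement, fc      # acceleration = (2*pi*f)^2 * D

f = np.logspace(-1, 1.5, 100)                              # 0.1 to ~30 Hz
m0 = 10 ** (1.5 * 6.5 + 16.05)                             # M 6.5 moment in dyne-cm
spec, fc = brune_acceleration_spectrum(f, m0)
```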

  5. Critical point of gas-liquid type phase transition and phase equilibrium functions in developed two-component plasma model.

    PubMed

    Butlitsky, M A; Zelener, B B; Zelener, B V

    2014-07-14

    A two-component plasma model, which we call a "shelf Coulomb" model, has been developed in this work. A Monte Carlo study has been undertaken to calculate equations of state, pair distribution functions, internal energies, and other thermodynamic properties. A canonical NVT ensemble with periodic boundary conditions was used. The motivation behind the model is also discussed in this work. The "shelf Coulomb" model can be compared to the classical two-component (electron-proton) model, in which point charges interact via the classical Coulomb law, with an important difference for the interaction of opposite charges: electrons and protons interact via the Coulomb law at large separations, while the interaction potential is cut off at small distances. The cutoff distance is defined by an arbitrary ε parameter, which depends on the system temperature. All the thermodynamic properties of the model depend only on the dimensionless parameters ε and γ = βe²n^(1/3) (where β = 1/kBT, n is the particle density, kB is the Boltzmann constant, and T is the temperature). In addition, it has been shown that the virial theorem holds in this model. All the calculations were carried out over a wide range of the dimensionless ε and γ parameters in order to find the phase transition region, critical point, spinodal, and binodal lines of the model system. The system is observed to undergo a first-order gas-liquid type phase transition with the critical point in the vicinity of ε_crit ≈ 13 (T*_crit ≈ 0.076), γ_crit ≈ 1.8 (v*_crit ≈ 0.17), P*_crit ≈ 0.39, where the specific volume is v* = 1/γ³ and the reduced temperature is T* = 1/ε.
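
    A minimal sketch of the pair interaction just described is given below, in reduced units: like charges keep the bare Coulomb form, while the attraction between opposite charges follows Coulomb's law beyond a cutoff radius and is held constant (a "shelf") inside it. The mapping between the cutoff radius and the paper's temperature-dependent ε parameter is not reproduced here; r_cut is treated as a free input.

```python
# Minimal sketch of a shelf-Coulomb-style pair interaction in reduced units:
# bare Coulomb for like charges and at large separation, a constant "shelf"
# for opposite charges inside the cutoff radius r_cut (an illustrative input).
def shelf_coulomb(r, q1, q2, r_cut):
    if q1 * q2 > 0 or r >= r_cut:       # like charges, or far apart: bare Coulomb
        return q1 * q2 / r
    return q1 * q2 / r_cut              # opposite charges inside r_cut: flat shelf

print(shelf_coulomb(0.1, +1, -1, r_cut=0.5), shelf_coulomb(2.0, +1, -1, r_cut=0.5))
```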

  6. Critical point of gas-liquid type phase transition and phase equilibrium functions in developed two-component plasma model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butlitsky, M. A.; Zelener, B. V.; Zelener, B. B.

    A two-component plasma model, which we call a "shelf Coulomb" model, has been developed in this work. A Monte Carlo study has been undertaken to calculate equations of state, pair distribution functions, internal energies, and other thermodynamic properties. A canonical NVT ensemble with periodic boundary conditions was used. The motivation behind the model is also discussed in this work. The "shelf Coulomb" model can be compared to the classical two-component (electron-proton) model, in which point charges interact via the classical Coulomb law, with an important difference for the interaction of opposite charges: electrons and protons interact via the Coulomb law at large separations, while the interaction potential is cut off at small distances. The cutoff distance is defined by an arbitrary ε parameter, which depends on the system temperature. All the thermodynamic properties of the model depend only on the dimensionless parameters ε and γ = βe²n^(1/3) (where β = 1/kBT, n is the particle density, kB is the Boltzmann constant, and T is the temperature). In addition, it has been shown that the virial theorem holds in this model. All the calculations were carried out over a wide range of the dimensionless ε and γ parameters in order to find the phase transition region, critical point, spinodal, and binodal lines of the model system. The system is observed to undergo a first-order gas-liquid type phase transition with the critical point in the vicinity of ε_crit ≈ 13 (T*_crit ≈ 0.076), γ_crit ≈ 1.8 (v*_crit ≈ 0.17), P*_crit ≈ 0.39, where the specific volume is v* = 1/γ³ and the reduced temperature is T* = 1/ε.

  7. Bayesian Multiscale Modeling of Closed Curves in Point Clouds

    PubMed Central

    Gu, Kelvin; Pati, Debdeep; Dunson, David B.

    2014-01-01

    Modeling object boundaries based on image or point cloud data is frequently necessary in medical and scientific applications ranging from detecting tumor contours for targeted radiation therapy, to the classification of organisms based on their structural information. In low-contrast images or sparse and noisy point clouds, there is often insufficient data to recover local segments of the boundary in isolation. Thus, it becomes critical to model the entire boundary in the form of a closed curve. To achieve this, we develop a Bayesian hierarchical model that expresses highly diverse 2D objects in the form of closed curves. The model is based on a novel multiscale deformation process. By relating multiple objects through a hierarchical formulation, we can successfully recover missing boundaries by borrowing structural information from similar objects at the appropriate scale. Furthermore, the model’s latent parameters help interpret the population, indicating dimensions of significant structural variability and also specifying a ‘central curve’ that summarizes the collection. Theoretical properties of our prior are studied in specific cases and efficient Markov chain Monte Carlo methods are developed, evaluated through simulation examples and applied to panorex teeth images for modeling teeth contours and also to a brain tumor contour detection problem. PMID:25544786

  8. 3D Modeling of Components of a Garden by Using Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Kumazaki, R.; Kunii, Y.

    2016-06-01

    Laser measurement is currently applied to several tasks such as plumbing management, road investigation through mobile mapping systems, and elevation model utilization through airborne LiDAR. Effective laser measurement methods have been well-documented in civil engineering, but few attempts have been made to establish equally effective methods in landscape engineering. By using point cloud data acquired through laser measurement, the aesthetic landscaping of Japanese gardens can be enhanced. This study focuses on simple landscape simulations for pruning and rearranging trees as well as rearranging rocks, lanterns, and other garden features by using point cloud data. However, such simulations lack concreteness. Therefore, this study considers the construction of a library of garden features extracted from point cloud data. The library would serve as a resource for creating new gardens and simulating gardens prior to conducting repairs. Extracted garden features are imported as 3ds Max objects, and realistic 3D models are generated by using a material editor system. As further work toward the publication of a 3D model library, file formats for tree crowns and trunks should be adjusted. Moreover, reducing the size of created models is necessary. Models created using point cloud data are informative because simply shaped garden features such as trees are often seen in the 3D industry.

  9. The importance of topographically corrected null models for analyzing ecological point processes.

    PubMed

    McDowall, Philip; Lynch, Heather J

    2017-07-01

    Analyses of point process patterns and related techniques (e.g., MaxEnt) make use of the expected number of occurrences per unit area and second-order statistics based on the distance between occurrences. Ecologists working with point process data often assume that points exist on a two-dimensional x-y plane or within a three-dimensional volume, when in fact many observed point patterns are generated on a two-dimensional surface existing within three-dimensional space. For many surfaces, however, such as the topography of landscapes, the projection from the surface to the x-y plane preserves neither area nor distance. As such, when these point patterns are implicitly projected to and analyzed in the x-y plane, our expectations of the point pattern's statistical properties may not be met. When used in hypothesis testing, we find that the failure to account for the topography of the generating surface may bias statistical tests that incorrectly identify clustering and, furthermore, may bias coefficients in inhomogeneous point process models that incorporate slope as a covariate. We demonstrate the circumstances under which this bias is significant, and present simple methods that allow point processes to be simulated with corrections for topography. These point patterns can then be used to generate "topographically corrected" null models against which observed point processes can be compared. © 2017 by the Ecological Society of America.
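
    One ingredient of such a topographic correction can be sketched directly: the true surface area of a DEM cell exceeds its projected x-y area by roughly a factor 1/cos(slope), so expected point densities per projected cell can be reweighted accordingly. The sketch below computes that factor from an elevation grid; it is a generic illustration (with a synthetic DEM and an assumed cell size), not the authors' full simulation procedure.

```python
# Generic sketch: per-cell surface/projected area ratio of a DEM, usable to
# reweight expected point densities on the projected x-y plane. Illustrative only.
import numpy as np

def area_correction(dem, cell_size=30.0):
    """dem: 2-D elevation grid; returns the per-cell surface-to-projected area ratio."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)  # elevation gradients in metres/metre
    slope = np.arctan(np.hypot(dz_dx, dz_dy))   # slope angle per cell
    return 1.0 / np.cos(slope)

# Synthetic rolling terrain: ridges along one axis, flat along the other.
dem = 200.0 * np.sin(np.linspace(0, 3 * np.pi, 100))[None, :] * np.ones((80, 1))
weights = area_correction(dem)                  # >1 on steep slopes, 1 on flat ground
```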

  10. H5N1 pathogenesis studies in mammalian models

    PubMed Central

    Belser, Jessica A.; Tumpey, Terrence M.

    2017-01-01

    H5N1 influenza viruses are capable of causing severe disease and death in humans, and represent a potential pandemic subtype should they acquire a transmissible phenotype. Due to the expanding host and geographic range of this virus subtype, there is an urgent need to better understand the contribution of both virus and host responses following H5N1 virus infection to prevent and control human disease. The use of mammalian models, notably the mouse and ferret, has enabled the detailed study of both complex virus–host interactions as well as the contribution of individual viral proteins and point mutations which influence virulence. In this review, we describe the behavior of H5N1 viruses which exhibit high and low virulence in numerous mammalian species, and highlight the contribution of inoculation route to virus pathogenicity. The involvement of host responses as studied in both inbred and outbred mammalian models is discussed. The roles of individual viral gene products and molecular determinants which modulate the severity of H5N1 disease in vivo are presented. This research contributes not only to our understanding of influenza virus pathogenesis, but also identifies novel preventative and therapeutic targets to mitigate the disease burden caused by avian influenza viruses. PMID:23458998

  11. Nonrelativistic approaches derived from point-coupling relativistic models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lourenco, O.; Dutra, M.; Delfino, A.

    2010-03-15

    We construct nonrelativistic versions of relativistic nonlinear hadronic point-coupling models, based on new normalized spinor wave functions after small component reduction. These expansions give us energy density functionals that can be compared to their relativistic counterparts. We show that the agreement between the nonrelativistic limit approach and the Skyrme parametrizations becomes strongly dependent on the incompressibility of each model. We also show that the particular case A = B = 0 (Walecka model) leads to the same energy density functional as the Skyrme parametrizations SV and ZR2, while the truncation scheme, up to order ρ³, leads to parametrizations for which σ = 1.

  12. Identifying influential data points in hydrological model calibration and their impact on streamflow predictions

    NASA Astrophysics Data System (ADS)

    Wright, David; Thyer, Mark; Westra, Seth

    2015-04-01

    Highly influential data points are those that have a disproportionately large impact on model performance, parameters and predictions. However, in current hydrological modelling practice the relative influence of individual data points on hydrological model calibration is not commonly evaluated. This presentation illustrates and evaluates several influence diagnostic tools that hydrological modellers can use to assess the relative influence of data. The feasibility and importance of including influence detection diagnostics as a standard tool in hydrological model calibration is discussed. Two classes of influence diagnostics are evaluated: (1) computationally demanding numerical "case deletion" diagnostics; and (2) computationally efficient analytical diagnostics, based on Cook's distance. These diagnostics are compared against hydrologically orientated diagnostics that describe changes in the model parameters (measured through the Mahalanobis distance), performance (objective function displacement) and predictions (mean and maximum streamflow). These influence diagnostics are applied to two case studies: a stage/discharge rating curve model, and a conceptual rainfall-runoff model (GR4J). Removing a single data point from the calibration resulted in differences to mean flow predictions of up to 6% for the rating curve model, and differences to mean and maximum flow predictions of up to 10% and 17%, respectively, for the hydrological model. When using the Nash-Sutcliffe efficiency in calibration, the computationally cheaper Cook's distance metrics produce similar results to the case-deletion metrics at a fraction of the computational cost. However, Cook's distance is adapted from linear regression, with inherent assumptions about the data, and is therefore less flexible than case deletion. Influential point detection diagnostics show great potential to improve current hydrological modelling practices by identifying highly influential data points. The findings of this
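
    For readers unfamiliar with the two diagnostic classes named above, the sketch below contrasts the analytical Cook's distance with brute-force case deletion on an ordinary linear regression toy problem (an assumption: the paper applies these ideas to rating-curve and GR4J calibrations, not to this regression).

```python
# Toy comparison of analytical Cook's distance against numerical case deletion.
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 2
X = np.column_stack([np.ones(n), rng.uniform(0, 10, n)])
y = X @ np.array([2.0, 0.7]) + rng.normal(0, 1, n)
y[10] += 8.0                               # plant one influential point

beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
H = X @ np.linalg.inv(X.T @ X) @ X.T       # hat matrix
h = np.diag(H)
s2 = resid @ resid / (n - p)

# Analytical Cook's distance: D_i = e_i^2 * h_ii / (p * s^2 * (1 - h_ii)^2)
cooks = resid**2 * h / (p * s2 * (1 - h)**2)

# Numerical case deletion: refit without point i, measure the parameter shift.
case_del = np.empty(n)
for i in range(n):
    keep = np.arange(n) != i
    beta_i = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
    d = beta - beta_i
    case_del[i] = d @ (X.T @ X) @ d / (p * s2)   # coincides with Cook's D for OLS

print("most influential point (Cook, case deletion):", cooks.argmax(), case_del.argmax())
```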

  13. Hybrid Scheme for Modeling Local Field Potentials from Point-Neuron Networks.

    PubMed

    Hagen, Espen; Dahmen, David; Stavrinou, Maria L; Lindén, Henrik; Tetzlaff, Tom; van Albada, Sacha J; Grün, Sonja; Diesmann, Markus; Einevoll, Gaute T

    2016-12-01

    With rapidly advancing multi-electrode recording technology, the local field potential (LFP) has again become a popular measure of neuronal activity in both research and clinical applications. Proper understanding of the LFP requires detailed mathematical modeling incorporating the anatomical and electrophysiological features of neurons near the recording electrode, as well as synaptic inputs from the entire network. Here we propose a hybrid modeling scheme combining efficient point-neuron network models with biophysical principles underlying LFP generation by real neurons. The LFP predictions rely on populations of network-equivalent multicompartment neuron models with layer-specific synaptic connectivity, can be used with an arbitrary number of point-neuron network populations, and allow for a full separation of simulated network dynamics and LFPs. We apply the scheme to a full-scale cortical network model for a ∼1 mm² patch of primary visual cortex, predict laminar LFPs for different network states, assess the relative LFP contribution from different laminar populations, and investigate effects of input correlations and neuron density on the LFP. The generic nature of the hybrid scheme and its public implementation in hybridLFPy form the basis for LFP predictions from other and larger point-neuron network models, as well as extensions of the current application with additional biological detail. © The Author 2016. Published by Oxford University Press.

  14. N =4 supergravity next-to-maximally-helicity-violating six-point one-loop amplitude

    NASA Astrophysics Data System (ADS)

    Dunbar, David C.; Perkins, Warren B.

    2016-12-01

    We construct the six-point, next-to-maximally-helicity-violating one-loop amplitude in N =4 supergravity using unitarity and recursion. The use of recursion requires the introduction of rational descendants of the cut-constructible pieces of the amplitude and the computation of the nonstandard factorization terms arising from the loop integrals.

  15. Source parameters of microearthquakes on an interplate asperity off Kamaishi, NE Japan over two earthquake cycles

    USGS Publications Warehouse

    Uchida, Naoki; Matsuzawa, Toru; Ellsworth, William L.; Imanishi, Kazutoshi; Shimamura, Kouhei; Hasegawa, Akira

    2012-01-01

    We have estimated the source parameters of interplate earthquakes in an earthquake cluster off Kamaishi, NE Japan, over two cycles of M~4.9 repeating earthquakes. The M~4.9 earthquake sequence is composed of nine events since 1957 that have a strong periodicity (5.5 ± 0.7 yr) and constant size (M4.9 ± 0.2), probably due to stable sliding around the source area (asperity). Using P- and S-wave traveltime differentials estimated from waveform cross-spectra, three M~4.9 main shocks and 50 accompanying microearthquakes (M1.5–3.6) from 1995 to 2008 were precisely relocated. The source sizes, stress drops and slip amounts for earthquakes of M2.4 or larger were also estimated from corner frequencies and seismic moments using simultaneous inversion of stacked spectral ratios. Relocation using the double-difference method shows that the slip area of the 2008 M~4.9 main shock is co-located with those of the 1995 and 2001 M~4.9 main shocks. Four groups of microearthquake clusters are located in and around the mainshock slip areas. Of these, two clusters are located at the deeper and shallower edges of the slip areas, and most of these microearthquakes occurred repeatedly in the interseismic period. Two other clusters located near the centre of the mainshock source areas are not as active as the clusters near the edges. The occurrence of these earthquakes is limited to the latter half of the earthquake cycles of the M~4.9 main shocks. Similar spatial and temporal features of microearthquake occurrence were seen for two other cycles before the 1995 M5.0 and 1990 M5.0 main shocks based on group identification by waveform similarities. Stress drops of microearthquakes are 3–11 MPa and are relatively constant within each group during the two earthquake cycles. The 2001 and 2008 M~4.9 earthquakes have larger stress drops of 41 and 27 MPa, respectively. These results show that the stress drop is probably determined by the fault properties and does not change

  16. a Fast Method for Measuring the Similarity Between 3d Model and 3d Point Cloud

    NASA Astrophysics Data System (ADS)

    Zhang, Zongliang; Li, Jonathan; Li, Xin; Lin, Yangbin; Zhang, Shanxin; Wang, Cheng

    2016-06-01

    This paper proposes a fast method for measuring the partial Similarity between a 3D Model and a 3D point Cloud (SimMC). Measuring SimMC is crucial for many point cloud-related applications such as 3D object retrieval and inverse procedural modelling. In the proposed method, the surface area of the model and the Distance from Model to point Cloud (DistMC) are exploited as measurements to calculate SimMC. Here, DistMC is defined as the weighted average of the distances between points sampled from the model and the point cloud. Similarly, the Distance from point Cloud to Model (DistCM) is defined as the average of the distances between points in the point cloud and the model. In order to avoid the heavy computational burden brought by calculating DistCM in some traditional methods, we define SimMC as the ratio of the weighted surface area of the model to DistMC. Compared to traditional SimMC measuring methods that are only able to measure global similarity, our method is capable of measuring partial similarity by employing a distance-weighted strategy. Moreover, our method is faster than other partial similarity assessment methods. We demonstrate the superiority of our method on both synthetic data and laser scanning data.

  17. The four fixed points of scale invariant single field cosmological models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xue, BingKan, E-mail: bxue@princeton.edu

    2012-10-01

    We introduce a new set of flow parameters to describe the time dependence of the equation of state and the speed of sound in single field cosmological models. A scale invariant power spectrum is produced if these flow parameters satisfy specific dynamical equations. We analyze the flow of these parameters and find four types of fixed points that encompass all known single field models. Moreover, near each fixed point we uncover new models where the scale invariance of the power spectrum relies on having simultaneously time varying speed of sound and equation of state. We describe several distinctive new models and discuss constraints from strong coupling and superluminality.

  18. General methodology for nonlinear modeling of neural systems with Poisson point-process inputs.

    PubMed

    Marmarelis, V Z; Berger, T W

    2005-07-01

    This paper presents a general methodological framework for the practical modeling of neural systems with point-process inputs (sequences of action potentials or, more broadly, identical events) based on the Volterra and Wiener theories of functional expansions and system identification. The paper clarifies the distinctions between Volterra and Wiener kernels obtained from Poisson point-process inputs. It shows that only the Wiener kernels can be estimated via cross-correlation, but must be defined as zero along the diagonals. The Volterra kernels can be estimated far more accurately (and from shorter data-records) by use of the Laguerre expansion technique adapted to point-process inputs, and they are independent of the mean rate of stimulation (unlike their P-W counterparts that depend on it). The Volterra kernels can also be estimated for broadband point-process inputs that are not Poisson. Useful applications of this modeling approach include cases where we seek to determine (model) the transfer characteristics between one neuronal axon (a point-process 'input') and another axon (a point-process 'output') or some other measure of neuronal activity (a continuous 'output', such as population activity) with which a causal link exists.
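
    A minimal sketch of the Laguerre-expansion idea for a point-process input is given below (toy first-order system only; the basis order, pole, spike rate and synthetic kernel are assumptions, and this is not the authors' implementation).

```python
# Sketch: estimate a first-order kernel of a system driven by a 0/1 spike train
# using a discrete Laguerre basis.  The basis is generated by passing an
# impulse through a first-order low-pass stage followed by all-pass stages.
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(2)
T, M, L, alpha = 4000, 60, 8, 0.7          # samples, memory, basis size, pole

# Discrete Laguerre basis functions b_j[m], j = 0..L-1
impulse = np.zeros(M); impulse[0] = 1.0
basis = []
stage = lfilter([np.sqrt(1 - alpha**2)], [1.0, -alpha], impulse)
for _ in range(L):
    basis.append(stage.copy())
    stage = lfilter([-alpha, 1.0], [1.0, -alpha], stage)   # all-pass section
B = np.array(basis)                         # shape (L, M)

# Synthetic system: true kernel k1, sparse point-process input, additive noise
k1_true = 0.8 * np.exp(-np.arange(M) / 10.0) * np.sin(np.arange(M) / 4.0)
x = (rng.uniform(size=T) < 0.05).astype(float)            # point-process input
y = np.convolve(x, k1_true)[:T] + rng.normal(0, 0.05, T)

# Convolve the input with each basis function, then least-squares coefficients
V = np.column_stack([np.convolve(x, b)[:T] for b in B])
c, *_ = np.linalg.lstsq(np.column_stack([np.ones(T), V]), y, rcond=None)
k1_est = c[1:] @ B                          # reconstructed first-order kernel

print("max abs error of reconstructed kernel:", np.abs(k1_est - k1_true).max())
```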

  19. Surface morphology and electrical properties of Au/Ni/⟨C⟩/n-Ga₂O₃/p-GaSe⟨KNO₃⟩ hybrid structures fabricated on the basis of a layered semiconductor with nanoscale ferroelectric inclusions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bakhtinov, A. P., E-mail: chimsp@ukrpost.ua; Vodopyanov, V. N.; Netyaga, V. V.

    2012-03-15

    Features of the formation of Au/Ni/⟨C⟩/n-Ga₂O₃ hybrid nanostructures on a van der Waals surface (0001) of 'layered semiconductor-ferroelectric' composite nanostructures (p-GaSe⟨KNO₃⟩) are studied using atomic-force microscopy. The room-temperature current-voltage characteristics and the dependence of the impedance spectrum of the hybrid structures on bias voltage are studied. The current-voltage characteristic includes a resonance peak and a portion with negative differential resistance. The current attains a maximum at a certain bias voltage, when electric polarization switching occurs in nanoscale three-dimensional inclusions in the layered GaSe matrix. In the high-frequency region (f > 10⁶ Hz), an inductive-type impedance (a large negative capacitance of the structures, ≈10⁶ F/mm²) is detected. This effect is due to spin-polarized electron transport in a series of interconnected semiconductor composite nanostructures with multiple p-GaSe⟨KNO₃⟩ quantum wells and a forward-biased 'ferromagnetic metal-semiconductor' polarizer (Au/Ni/⟨C⟩/n⁺-Ga₂O₃/n-Ga₂O₃). A shift of the maximum (current hysteresis) is detected in the current-voltage characteristics for different directions of the variation in bias voltage.

  20. Saddle point localization of molecular wavefunctions.

    PubMed

    Mellau, Georg Ch; Kyuberis, Alexandra A; Polyansky, Oleg L; Zobov, Nikolai; Field, Robert W

    2016-09-15

    The quantum mechanical description of isomerization is based on bound eigenstates of the molecular potential energy surface. For the near-minimum regions there is a textbook-based relationship between the potential and eigenenergies. Here we show how the saddle point region that connects the two minima is encoded in the eigenstates of the model quartic potential and in the energy levels of the [H, C, N] potential energy surface. We model the spacing of the eigenenergies with the energy dependent classical oscillation frequency decreasing to zero at the saddle point. The eigenstates with the smallest spacing are localized at the saddle point. The analysis of the HCN ↔ HNC isomerization states shows that the eigenstates with small energy spacing relative to the effective (v1, v3, ℓ) bending potentials are highly localized in the bending coordinate at the transition state. These spectroscopically detectable states represent a chemical marker of the transition state in the eigenenergy spectrum. The method developed here provides a basis for modeling characteristic patterns in the eigenenergy spectrum of bound states.

  1. Locating the quantum critical point of the Bose-Hubbard model through singularities of simple observables.

    PubMed

    Łącki, Mateusz; Damski, Bogdan; Zakrzewski, Jakub

    2016-12-02

    We show that the critical point of the two-dimensional Bose-Hubbard model can be easily found through studies of either on-site atom number fluctuations or the nearest-neighbor two-point correlation function (the expectation value of the tunnelling operator). Our strategy to locate the critical point is based on the observation that the derivatives of these observables with respect to the parameter that drives the superfluid-Mott insulator transition are singular at the critical point in the thermodynamic limit. Performing the quantum Monte Carlo simulations of the two-dimensional Bose-Hubbard model, we show that this technique leads to the accurate determination of the position of its critical point. Our results can be easily extended to the three-dimensional Bose-Hubbard model and different Hubbard-like models. They provide a simple experimentally-relevant way of locating critical points in various cold atomic lattice systems.

  2. Comparison of the adjuvant activity of aluminum hydroxide and calcium phosphate on the antibody response towards Bothrops asper snake venom.

    PubMed

    Olmedo, Hidekel; Herrera, María; Rojas, Leonardo; Villalta, Mauren; Vargas, Mariángela; Leiguez, Elbio; Teixeira, Catarina; Estrada, Ricardo; Gutiérrez, José María; León, Guillermo; Montero, Mavis L

    2014-01-01

    The adjuvanticity of aluminum hydroxide and calcium phosphate on the antibody response in mice towards the venom of the snake Bothrops asper was studied. It was found that, in vitro, most of the venom proteins are similarly adsorbed by both mineral salts, with the exception of some basic phospholipases A2, which are better adsorbed by calcium phosphate. After injection, the adjuvants promoted a slow release of the venom, as judged by the lack of acute toxicity when lethal doses of venom were administered to mice. Leukocyte recruitment induced by the venom was enhanced when it was adsorbed on either mineral salt; however, venom adsorbed on calcium phosphate induced a higher antibody response towards all tested HPLC fractions of the venom. On the other hand, co-precipitation of venom with calcium phosphate was the best strategy for increasing: (1) the capacity of the salt to couple venom proteins in vitro; (2) the ability of the venom to induce leukocyte recruitment; (3) phagocytosis by macrophages; and (4) the host antibody response. These findings suggest that chemical nature is not the only factor determining the adjuvant activity of mineral salts.

  3. Modeling deep brain stimulation: point source approximation versus realistic representation of the electrode

    NASA Astrophysics Data System (ADS)

    Zhang, Tianhe C.; Grill, Warren M.

    2010-12-01

    Deep brain stimulation (DBS) has emerged as an effective treatment for movement disorders; however, the fundamental mechanisms by which DBS works are not well understood. Computational models of DBS can provide insights into these fundamental mechanisms and typically require two steps: calculation of the electrical potentials generated by DBS and, subsequently, determination of the effects of the extracellular potentials on neurons. The objective of this study was to assess the validity of using a point source electrode to approximate the DBS electrode when calculating the thresholds and spatial distribution of activation of a surrounding population of model neurons in response to monopolar DBS. Extracellular potentials in a homogenous isotropic volume conductor were calculated using either a point current source or a geometrically accurate finite element model of the Medtronic DBS 3389 lead. These extracellular potentials were coupled to populations of model axons, and thresholds and spatial distributions were determined for different electrode geometries and axon orientations. Median threshold differences between DBS and point source electrodes for individual axons varied between -20.5% and 9.5% across all orientations, monopolar polarities and electrode geometries utilizing the DBS 3389 electrode. Differences in the percentage of axons activated at a given amplitude by the point source electrode and the DBS electrode were between -9.0% and 12.6% across all monopolar configurations tested. The differences in activation between the DBS and point source electrodes occurred primarily in regions close to conductor-insulator interfaces and around the insulating tip of the DBS electrode. The robustness of the point source approximation in modeling several special cases—tissue anisotropy, a long active electrode and bipolar stimulation—was also examined. Under the conditions considered, the point source was shown to be a valid approximation for predicting excitation
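
    The point-source side of the comparison rests on the textbook relation V(r) = I / (4πσr) for a homogeneous isotropic medium; below is a short sketch (assumed conductivity, stimulus amplitude and axon geometry) of that potential and the resulting activating function along a straight model axon.

```python
# Sketch using standard relations (not the authors' code): extracellular
# potential of a monopolar point current source in a homogeneous isotropic
# medium, sampled along a straight axon, and the discrete "activating
# function" (second spatial difference) that drives excitation.
import numpy as np

sigma = 0.2          # S/m, assumed bulk conductivity of brain tissue
I = -1e-3            # A, cathodic stimulus amplitude (assumption)
dx = 0.5e-3          # m, sampling step along the axon (assumption)

# Axon running parallel to the z-axis, 2 mm lateral to the source
z = np.arange(-20e-3, 20e-3, dx)
r = np.sqrt((2e-3)**2 + z**2)
V = I / (4 * np.pi * sigma * r)

# Discrete activating function: f_n ~ V[n-1] - 2 V[n] + V[n+1]
f = V[:-2] - 2 * V[1:-1] + V[2:]
print("peak depolarizing drive (V):", f.max())
```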

  4. The RTOG Outcomes Model: economic end points and measures.

    PubMed

    Konski, Andre; Watkins-Bruner, Deborah

    2004-03-01

    Recognising the value added by economic evaluations of clinical trials and the interaction of clinical, humanistic and economic end points, the Radiation Therapy Oncology Group (RTOG) has developed an Outcomes Model that guides the comprehensive assessment of this triad of end points. This paper will focus on the economic component of the model. The Economic Impact Committee was founded in 1994 to study the economic impact of clinical trials of cancer care. A steep learning curve ensued with considerable time initially spent understanding the methodology of economic analysis. Since then, economic analyses have been performed on RTOG clinical trials involving treatments for patients with non-small cell lung cancer, locally-advanced head and neck cancer and prostate cancer. As the care of cancer patients evolves with time, so has the economic analyses performed by the Economic Impact Committee. This paper documents the evolution of the cost-effectiveness analyses of RTOG from performing average cost-utility analysis to more technically sophisticated Monte Carlo simulation of Markov models, to incorporating prospective economic analyses as an initial end point. Briefly, results indicated that, accounting for quality-adjusted survival, concurrent chemotherapy and radiation for the treatment of non-small cell lung cancer, more aggressive radiation fractionation schedules for head and neck cancer and the addition of hormone therapy to radiation for prostate cancer are within the range of economically acceptable recommendations. The RTOG economic analyses have provided information that can further inform clinicians and policy makers of the value added of new or improved treatments.

  5. Electromagnetic braking revisited with a magnetic point dipole model

    NASA Astrophysics Data System (ADS)

    Land, Sara; McGuire, Patrick; Bumb, Nikhil; Mann, Brian P.; Yellen, Benjamin B.

    2016-04-01

    A theoretical model is developed to predict the trajectory of magnetized spheres falling through a copper pipe. The derived magnetic point dipole model agrees well with the experimental trajectories for NdFeB spherical magnets of varying diameter, which are embedded inside 3D printed shells with fixed outer dimensions. This demonstration of electrodynamic phenomena and Lenz's law serves as a good laboratory exercise for physics, electromagnetics, and dynamics classes at the undergraduate level.
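
    A hedged illustration of the resulting dynamics (not the paper's derivation): with a point-dipole description, the eddy-current drag on the magnet is proportional to its speed, so the fall obeys m dv/dt = mg − kv; the lumped constant k below is an assumed value, not one computed from the dipole model.

```python
# Illustrative sketch only: magnet falling through a conducting pipe with a
# velocity-proportional eddy-current drag F = -k*v.
import numpy as np

m, g, k = 5e-3, 9.81, 0.08        # kg, m/s^2, kg/s  (k is an assumed value)
dt, t_end = 1e-3, 1.0

v, z = 0.0, 0.0
for t in np.arange(0.0, t_end, dt):
    a = g - (k / m) * v           # Newton's second law with linear drag
    v += a * dt
    z += v * dt

print("terminal velocity (analytic):", m * g / k, "m/s")
print("velocity at t = 1 s (numeric):", v, "m/s")
```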

  6. Electrically active point defects in Mg implanted n-type GaN grown by metal-organic chemical vapor deposition

    NASA Astrophysics Data System (ADS)

    Alfieri, G.; Sundaramoorthy, V. K.; Micheletto, R.

    2018-05-01

    Magnesium (Mg) is the p-type dopant of choice for GaN, and selective area doping by ion implantation is a routine technique employed during device processing. While electrically active defects have been thoroughly studied in as-grown GaN, not much is known about the defects generated by ion implantation. This is especially true for the case of Mg. In this study, we carried out an electrical characterization of the point defects generated by Mg implantation in GaN. We found at least nine electrically active levels in the 0.2-1.2 eV energy range below the conduction band. The isochronal annealing behavior of these levels shows that most of them are thermally stable up to 1000 °C. The nature of the detected defects is then discussed in the light of results found in the literature.

  7. End-group-functionalized poly(N,N-diethylacrylamide) via free-radical chain transfer polymerization: Influence of sulfur oxidation and cyclodextrin on self-organization and cloud points in water

    PubMed Central

    Reinelt, Sebastian; Steinke, Daniel

    2014-01-01

    Summary In this work we report the synthesis of thermo-, oxidation- and cyclodextrin- (CD) responsive end-group-functionalized polymers, based on N,N-diethylacrylamide (DEAAm). In a classical free-radical chain transfer polymerization, using thiol-functionalized 4-alkylphenols, namely 3-(4-(1,1-dimethylethan-1-yl)phenoxy)propane-1-thiol and 3-(4-(2,4,4-trimethylpentan-2-yl)phenoxy)propane-1-thiol, poly(N,N-diethylacrylamide) (PDEAAm) with well-defined hydrophobic end-groups is obtained. These end-group-functionalized polymers show different cloud point values, depending on the degree of polymerization and the presence of randomly methylated β-cyclodextrin (RAMEB-CD). Additionally, the influence of the oxidation of the incorporated thioether linkages on the cloud point is investigated. The resulting hydrophilic sulfoxides show higher cloud point values for the lower critical solution temperature (LCST). A high degree of functionalization is supported by 1H NMR-, SEC-, FTIR- and MALDI–TOF measurements. PMID:24778720

  8. An automated model-based aim point distribution system for solar towers

    NASA Astrophysics Data System (ADS)

    Schwarzbözl, Peter; Rong, Amadeus; Macke, Ansgar; Säck, Jan-Peter; Ulmer, Steffen

    2016-05-01

    Distribution of heliostat aim points is a major task during central receiver operation, as the flux distribution produced by the heliostats varies continuously with time. Known methods for aim point distribution are mostly based on simple aim point patterns and focus on control strategies to meet local temperature and flux limits of the receiver. Lowering the peak flux on the receiver to avoid hot spots and maximizing thermal output are obviously competing targets that call for a comprehensive optimization process. This paper presents a model-based method for online aim point optimization that includes the current heliostat field mirror quality derived through an automated deflectometric measurement process.

  9. Joint Clustering and Component Analysis of Correspondenceless Point Sets: Application to Cardiac Statistical Modeling.

    PubMed

    Gooya, Ali; Lekadir, Karim; Alba, Xenia; Swift, Andrew J; Wild, Jim M; Frangi, Alejandro F

    2015-01-01

    Construction of Statistical Shape Models (SSMs) from arbitrary point sets is a challenging problem due to significant shape variation and lack of explicit point correspondence across the training data set. In medical imaging, point sets can generally represent different shape classes that span healthy and pathological exemplars. In such cases, the constructed SSM may not generalize well, largely because the probability density function (pdf) of the point sets deviates from the underlying assumption of Gaussian statistics. To this end, we propose a generative model for unsupervised learning of the pdf of point sets as a mixture of distinctive classes. A Variational Bayesian (VB) method is proposed for making joint inferences on the labels of point sets and the principal modes of variation in each cluster. The method provides a flexible framework to handle point sets with no explicit point-to-point correspondences. We also show that by maximizing the marginalized likelihood of the model, the optimal number of clusters of point sets can be determined. We illustrate this work in the context of understanding the anatomical phenotype of the left and right ventricles of the heart. For this purpose, we use a database containing hearts of healthy subjects, patients with Pulmonary Hypertension (PH), and patients with Hypertrophic Cardiomyopathy (HCM). We demonstrate that our method can outperform traditional PCA in both generalization and specificity measures.

  10. Pursuit Eye-Movements in Curve Driving Differentiate between Future Path and Tangent Point Models

    PubMed Central

    Lappi, Otto; Pekkanen, Jami; Itkonen, Teemu H.

    2013-01-01

    For nearly 20 years, looking at the tangent point on the road edge has been prominent in models of visual orientation in curve driving. It is the most common interpretation of the commonly observed pattern of car drivers looking through a bend, or at the apex of the curve. Indeed, in the visual science literature, visual orientation towards the inside of a bend has become known as “tangent point orientation”. Yet, it remains to be empirically established whether it is the tangent point the drivers are looking at, or whether some other reference point on the road surface, or several reference points, are being targeted in addition to, or instead of, the tangent point. Recently discovered optokinetic pursuit eye-movements during curve driving can provide complementary evidence over and above traditional gaze-position measures. This paper presents the first detailed quantitative analysis of pursuit eye movements elicited by curvilinear optic flow in real driving. The data implicates the far zone beyond the tangent point as an important gaze target area during steady-state cornering. This is in line with the future path steering models, but difficult to reconcile with any pure tangent point steering model. We conclude that the tangent point steering models do not provide a general explanation of eye movement and steering during a curve driving sequence and cannot be considered uncritically as the default interpretation when the gaze position distribution is observed to be situated in the region of the curve apex. PMID:23894300

  11. Polarizable six-point water models from computational and empirical optimization.

    PubMed

    Tröster, Philipp; Lorenzen, Konstantin; Tavan, Paul

    2014-02-13

    Tröster et al. (J. Phys. Chem B 2013, 117, 9486-9500) recently suggested a mixed computational and empirical approach to the optimization of polarizable molecular mechanics (PMM) water models. In the empirical part the parameters of Buckingham potentials are optimized by PMM molecular dynamics (MD) simulations. The computational part applies hybrid calculations, which combine the quantum mechanical description of a H2O molecule by density functional theory (DFT) with a PMM model of its liquid phase environment generated by MD. While the static dipole moments and polarizabilities of the PMM water models are fixed at the experimental gas phase values, the DFT/PMM calculations are employed to optimize the remaining electrostatic properties. These properties cover the width of a Gaussian inducible dipole positioned at the oxygen and the locations of massless negative charge points within the molecule (the positive charges are attached to the hydrogens). The authors considered the cases of one and two negative charges rendering the PMM four- and five-point models TL4P and TL5P. Here we extend their approach to three negative charges, thus suggesting the PMM six-point model TL6P. As compared to the predecessors and to other PMM models, which also exhibit partial charges at fixed positions, TL6P turned out to predict all studied properties of liquid water at p0 = 1 bar and T0 = 300 K with a remarkable accuracy. These properties cover, for instance, the diffusion constant, viscosity, isobaric heat capacity, isothermal compressibility, dielectric constant, density, and the isobaric thermal expansion coefficient. This success concurrently provides a microscopic physical explanation of corresponding shortcomings of previous models. It uniquely assigns the failures of previous models to substantial inaccuracies in the description of the higher electrostatic multipole moments of liquid phase water molecules. Resulting favorable properties concerning the transferability to

  12. Recent tests of the equilibrium-point hypothesis (lambda model).

    PubMed

    Feldman, A G; Ostry, D J; Levin, M F; Gribble, P L; Mitnitski, A B

    1998-07-01

    The lambda model of the equilibrium-point hypothesis (Feldman & Levin, 1995) is an approach to motor control which, like physics, is based on a logical system coordinating empirical data. The model has gone through an interesting period. On one hand, several nontrivial predictions of the model have been successfully verified in recent studies. In addition, the explanatory and predictive capacity of the model has been enhanced by its extension to multimuscle and multijoint systems. On the other hand, claims have recently appeared suggesting that the model should be abandoned. The present paper focuses on these claims and concludes that they are unfounded. Much of the experimental data that have been used to reject the model are actually consistent with it.

  13. Automatic pole-like object modeling via 3D part-based analysis of point cloud

    NASA Astrophysics Data System (ADS)

    He, Liu; Yang, Haoxiang; Huang, Yuchun

    2016-10-01

    Pole-like objects, including trees, lampposts and traffic signs, are an indispensable part of urban infrastructure. With the advance of vehicle-based laser scanning (VLS), massive point clouds of roadside urban areas are increasingly applied in 3D digital city modeling. Based on the property that different pole-like objects have various canopy parts but similar trunk parts, this paper proposes a 3D part-based shape analysis to robustly extract, identify and model pole-like objects. The proposed method includes 3D clustering and recognition of trunks, voxel growing, and part-based 3D modeling. After preprocessing, each trunk center is identified as a point that has a local density peak and the largest minimum inter-cluster distance. Starting from the trunk centers, the remaining points are iteratively assigned to the cluster of their nearest neighbor of higher density. To eliminate noisy points, cluster borders are refined by trimming boundary outliers. Candidate trunks are then extracted from the clustering results in three orthogonal planes by shape analysis. Voxel growing recovers the complete pole-like objects regardless of overlap. Finally, the entire trunk, branch and crown parts are analyzed to obtain seven feature parameters. These parameters are used to model the three parts respectively and obtain a single part-assembled 3D model. The proposed method is tested using the VLS-based point cloud of Wuhan University, China. The point cloud includes many kinds of trees, lampposts and other pole-like posts under different occlusions and overlaps. Experimental results show that the proposed method can extract the exact attributes and model the roadside pole-like objects efficiently.
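
    The trunk-center selection described above can be sketched as a density-peak clustering step; the toy below works on 2-D synthetic points (the radii, number of centers and data are all assumptions, and this is not the authors' implementation on VLS point clouds).

```python
# Hypothetical sketch: cluster centers are points with high local density AND a
# large minimum distance to any point of higher density; every other point is
# assigned to the cluster of its nearest higher-density neighbor.
import numpy as np

rng = np.random.default_rng(3)
pts = np.vstack([rng.normal(c, 0.3, (80, 2)) for c in [(0, 0), (4, 0), (2, 3)]])

d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
rho = (d < 0.6).sum(axis=1).astype(float)        # local density (cutoff kernel)
rho += 1e-6 * rng.uniform(size=len(pts))         # break ties: unique global peak

delta = np.empty(len(pts))
parent = np.full(len(pts), -1)
for i in range(len(pts)):
    higher = np.where(rho > rho[i])[0]
    if higher.size == 0:                          # the global density peak
        delta[i] = d[i].max()
    else:
        parent[i] = higher[np.argmin(d[i, higher])]
        delta[i] = d[i, parent[i]]

centers = np.argsort(rho * delta)[-3:]            # 3 assumed "trunk" centers
labels = np.full(len(pts), -1)
labels[centers] = np.arange(3)
for i in np.argsort(-rho):                        # assign in decreasing density
    if labels[i] == -1:
        labels[i] = labels[parent[i]]
print("cluster sizes:", np.bincount(labels))
```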

  14. N-mixture models for estimating population size from spatially replicated counts

    USGS Publications Warehouse

    Royle, J. Andrew

    2004-01-01

    Spatial replication is a common theme in count surveys of animals. Such surveys often generate sparse count data from which it is difficult to estimate population size while formally accounting for detection probability. In this article, I describe a class of models (N-mixture models) which allow estimation of population size from such data. The key idea is to view site-specific population sizes, N, as independent random variables distributed according to some mixing distribution (e.g., Poisson). Prior parameters are estimated from the marginal likelihood of the data, having integrated over the prior distribution for N. Carroll and Lombard (1985, Journal of the American Statistical Association 80, 423-426) proposed a class of estimators based on mixing over a prior distribution for detection probability. Their estimator can be applied in limited settings, but is sensitive to prior parameter values that are fixed a priori. Spatial replication provides additional information regarding the parameters of the prior distribution on N that is exploited by the N-mixture models and leads to reasonable estimates of abundance from sparse data. A simulation study demonstrates superior operating characteristics (bias, confidence interval coverage) of the N-mixture estimator compared to the Carroll and Lombard estimator. Both estimators are applied to point count data on six species of birds, illustrating the sensitivity to the choice of prior on p and substantially different estimates of abundance as a consequence.
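
    The integrated likelihood referred to above has a standard form; a sketch, with a finite truncation bound K introduced as a practical assumption, is:

```latex
% Marginal likelihood of an N-mixture model with Poisson(lambda) mixing,
% sites i = 1..R, replicate counts y_{it}, detection probability p, and a
% finite upper bound K used in practice to truncate the infinite sum.
L(\lambda, p \mid \{y_{it}\}) \;=\;
  \prod_{i=1}^{R} \sum_{N_i = \max_t y_{it}}^{K}
  \left[ \prod_{t=1}^{T} \binom{N_i}{y_{it}} \, p^{\,y_{it}} (1-p)^{N_i - y_{it}} \right]
  \frac{\lambda^{N_i} e^{-\lambda}}{N_i!}
```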

  15. N = 1 supersymmetric indices and the four-dimensional A-model

    NASA Astrophysics Data System (ADS)

    Closset, Cyril; Kim, Heeyeon; Willett, Brian

    2017-08-01

    We compute the supersymmetric partition function of N = 1 supersymmetric gauge theories with an R-symmetry on M₄ ≅ M_{g,p} × S¹, a principal elliptic fiber bundle of degree p over a genus-g Riemann surface, Σ_g. Equivalently, we compute the generalized supersymmetric index I_{M_{g,p}}, with the supersymmetric three-manifold M_{g,p} as the spatial slice. The ordinary N = 1 supersymmetric index on the round three-sphere is recovered as a special case. We approach this computation from the point of view of a topological A-model for the abelianized gauge fields on the base Σ_g. This A-model — or A-twisted two-dimensional N = (2,2) gauge theory — encodes all the information about the generalized indices, which are viewed as expectation values of some canonically-defined surface defects wrapped on T² inside Σ_g × T². Being defined by compactification on the torus, the A-model also enjoys natural modular properties, governed by the four-dimensional 't Hooft anomalies. As an application of our results, we provide new tests of Seiberg duality. We also present a new evaluation formula for the three-sphere index as a sum over two-dimensional vacua.

  16. Fitting IRT Models to Dichotomous and Polytomous Data: Assessing the Relative Model-Data Fit of Ideal Point and Dominance Models

    ERIC Educational Resources Information Center

    Tay, Louis; Ali, Usama S.; Drasgow, Fritz; Williams, Bruce

    2011-01-01

    This study investigated the relative model-data fit of an ideal point item response theory (IRT) model (the generalized graded unfolding model [GGUM]) and dominance IRT models (e.g., the two-parameter logistic model [2PLM] and Samejima's graded response model [GRM]) to simulated dichotomous and polytomous data generated from each of these models.…

  17. First Prismatic Building Model Reconstruction from Tomosar Point Clouds

    NASA Astrophysics Data System (ADS)

    Sun, Y.; Shahzad, M.; Zhu, X.

    2016-06-01

    This paper demonstrates for the first time the potential of explicitly modelling the individual roof surfaces to reconstruct 3-D prismatic building models using spaceborne tomographic synthetic aperture radar (TomoSAR) point clouds. The proposed approach is modular and works as follows: it first extracts the buildings via DSM generation and cutting-off the ground terrain. The DSM is smoothed using BM3D denoising method proposed in (Dabov et al., 2007) and a gradient map of the smoothed DSM is generated based on height jumps. Watershed segmentation is then adopted to oversegment the DSM into different regions. Subsequently, height and polygon complexity constrained merging is employed to refine (i.e., to reduce) the retrieved number of roof segments. Coarse outline of each roof segment is then reconstructed and later refined using quadtree based regularization plus zig-zag line simplification scheme. Finally, height is associated to each refined roof segment to obtain the 3-D prismatic model of the building. The proposed approach is illustrated and validated over a large building (convention center) in the city of Las Vegas using TomoSAR point clouds generated from a stack of 25 images using Tomo-GENESIS software developed at DLR.
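
    A rough sketch of the DSM-smoothing, gradient and watershed steps is given below on a synthetic DSM (Gaussian smoothing stands in for BM3D, the merging, regularization and prismatic-modelling steps are omitted, and all parameters are assumptions).

```python
# Sketch on a synthetic DSM: smooth, take a gradient (height-jump) map, run a
# marker-based watershed to (over)segment roof regions, then associate one
# height with each segment.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

rng = np.random.default_rng(4)
dsm = np.zeros((200, 200))
dsm[40:120, 50:150] = 20.0        # synthetic flat roof 20 m above ground
dsm[80:120, 50:100] = 28.0        # higher roof section
dsm += rng.normal(0, 0.3, dsm.shape)

smooth = ndi.gaussian_filter(dsm, sigma=2)        # stand-in for BM3D denoising
grad = sobel(smooth)                              # height-jump gradient map

# Markers from local elevation maxima above the ground, then watershed
peaks = peak_local_max(smooth, min_distance=15, threshold_abs=5.0)
markers = np.zeros(dsm.shape, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
segments = watershed(grad, markers, mask=smooth > 5.0)

# Associate one height per roof segment (median of the smoothed DSM)
for lab in np.unique(segments):
    if lab == 0:                                  # label 0 = ground / outside mask
        continue
    print(f"segment {lab}: median height {np.median(smooth[segments == lab]):.1f} m")
```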

  18. Self-healing slip pulses in dynamic rupture models due to velocity-dependent strength

    USGS Publications Warehouse

    Beeler, N.M.; Tullis, T.E.

    1996-01-01

    Seismological observations of short slip duration on faults (short rise time on seismograms) during earthquakes are not consistent with conventional crack models of dynamic rupture and fault slip. In these models, the leading edge of rupture stops only when a strong region is encountered, and slip at an interior point ceases only when waves from the stopped edge of slip propagate back to that point. In contrast, some seismological evidence suggests that the duration of slip is too short for waves to propagate from the nearest edge of the ruptured surface, perhaps even if the distance used is an asperity size instead of the entire rupture dimension. What controls slip duration, if not dimensions of the fault or of asperities? In this study, dynamic earthquake rupture and slip are represented by a propagating shear crack. For all propagating shear cracks, slip velocity is highest near the rupture front, and at a small distance behind the rupture front, the slip velocity decreases. As pointed out by Heaton (1990), if the crack obeys a negative slip-rate-dependent strength relation, the lower slip velocity behind the rupture front will lead to strengthening that further reduces the velocity, and under certain circumstances, healing of slip can occur. The boundary element method of Hamano (1974) is used in a program adapted from Andrews (1985) for numerical simulations of mode II rupture with two different velocity-dependent strength functions. For the first function, after a slip-weakening displacement, the crack follows an exponential velocity-weakening relation. The characteristic velocity V0 of the exponential determines the magnitude of the velocity-dependence at dynamic velocities. The velocity-dependence at high velocity is essentially zero when V0 is small and the resulting slip velocity distribution is similar to slip weakening. If V0 is larger, rupture propagation initially resembles slip-weakening, but spontaneous healing occurs behind the rupture front. The
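
    One generic exponential velocity-weakening law consistent with this description (an assumed form, not necessarily the exact function used in the simulations) is:

```latex
% Generic exponential velocity-weakening strength law (assumed form): after the
% slip-weakening phase, fault strength depends on slip rate V as
\tau(V) \;=\; \tau_d \;+\; (\tau_s - \tau_d)\, e^{-V/V_0},
% so for small V_0 the strength is essentially rate-independent at dynamic slip
% rates, while larger V_0 produces re-strengthening (healing) as V drops behind
% the rupture front.
```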

  19. A Gibbs point field model for the spatial pattern of coronary capillaries

    NASA Astrophysics Data System (ADS)

    Karch, R.; Neumann, M.; Neumann, F.; Ullrich, R.; Neumüller, J.; Schreiner, W.

    2006-09-01

    We propose a Gibbs point field model for the pattern of coronary capillaries in transverse histologic sections from human hearts, based on the physiology of oxygen supply from capillaries to tissue. To specify the potential energy function of the Gibbs point field, we draw on an analogy between the equation of steady-state oxygen diffusion from an array of parallel capillaries to the surrounding tissue and Poisson's equation for the electrostatic potential of a two-dimensional distribution of identical point charges. The influence of factors other than diffusion is treated as a thermal disturbance. On this basis, we arrive at the well-known two-dimensional one-component plasma, a system of identical point charges exhibiting a weak (logarithmic) repulsive interaction that is completely characterized by a single dimensionless parameter. By variation of this parameter, the model is able to reproduce many characteristics of real capillary patterns.
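
    Up to the neutralizing-background terms, the model invoked here is the two-dimensional one-component plasma; a hedged generic form of the corresponding Gibbs configuration density is:

```latex
% Generic form of the Gibbs point field invoked above, with the neutralizing
% background terms of the one-component plasma omitted: identical point
% "charges" at positions x_1,...,x_N with a weak logarithmic repulsion,
f(x_1,\dots,x_N) \;\propto\; \exp\!\Big( \Gamma \sum_{i<j} \ln\lvert x_i - x_j \rvert \Big),
% where the single dimensionless coupling Gamma absorbs the "thermal
% disturbance" representing non-diffusive influences.
```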

  20. Analysis of point-to-point lung motion with full inspiration and expiration CT data using non-linear optimization method: optimal geometric assumption model for the effective registration algorithm

    NASA Astrophysics Data System (ADS)

    Kim, Namkug; Seo, Joon Beom; Heo, Jeong Nam; Kang, Suk-Ho

    2007-03-01

    The study was conducted to develop a simple model for more robust lung registration of volumetric CT data, which is essential for various clinical lung analysis applications, including lung nodule matching in follow-up CT studies, semi-quantitative assessment of lung perfusion, and so on. The purpose of this study is to find the most effective reference point and geometric model based on lung motion analysis of CT data sets obtained at full inspiration (In.) and expiration (Ex.). Ten pairs of CT data sets from normal subjects obtained at full In. and Ex. were used. Two radiologists were asked to mark 20 points representing the subpleural point of the central axis in each segment. The apex, the hilar point, and the center of inertia (COI) of each unilateral lung were proposed as reference points. To evaluate the optimal expansion point, unconstrained non-linear optimization was employed. The objective function is the sum of the distances from the candidate point x to the lines connecting corresponding points between In. and Ex. Using this non-linear optimization, the optimal point was evaluated and compared among the reference points. The average distance between the optimal point and each line segment revealed that the balloon model was more suitable for explaining lung expansion. This lung motion analysis, based on vector analysis and non-linear optimization, shows that a balloon model centered on the center of inertia of the lung is the most effective geometric model for explaining lung expansion during breathing.
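
    The unconstrained objective described above can be sketched as follows (toy landmark pairs, not the CT data; the optimizer, starting point and data geometry are assumptions):

```python
# Sketch: find the point x minimizing the sum of distances to the lines through
# corresponding inspiration/expiration landmark pairs.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
center_true = np.array([10.0, -5.0, 30.0])                 # assumed "balloon" center
p_in = center_true + rng.normal(0, 40, (20, 3))            # inspiration landmarks
p_ex = center_true + 0.7 * (p_in - center_true) + rng.normal(0, 1, (20, 3))

def sum_line_distances(x):
    d = p_ex - p_in
    d = d / np.linalg.norm(d, axis=1, keepdims=True)        # unit line directions
    w = x - p_in
    perp = w - np.sum(w * d, axis=1, keepdims=True) * d      # component off each line
    return np.linalg.norm(perp, axis=1).sum()

res = minimize(sum_line_distances, x0=p_in.mean(axis=0), method="Nelder-Mead")
print("estimated expansion reference point:", res.x)
```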

  1. A model to estimate the size of nanoparticle agglomerates in gas-solid fluidized beds

    NASA Astrophysics Data System (ADS)

    de Martín, Lilian; van Ommen, J. Ruud

    2013-11-01

    The estimation of nanoparticle agglomerates' size in fluidized beds remains an open challenge, mainly due to the difficulty of characterizing the inter-agglomerate van der Waals force. The current approach is to describe micron-sized nanoparticle agglomerates as micron-sized particles with 0.1-0.2-μm asperities. This simplification does not capture the influence of the particle size on the van der Waals attraction between agglomerates. In this paper, we propose a new description where the agglomerates are micron-sized particles with nanoparticles on the surface, acting as asperities. As opposed to previous models, here the van der Waals force between agglomerates decreases with an increase in the particle size. We have also included an additional force due to the hydrogen bond formation between the surfaces of hydrophilic and dry nanoparticles. The average size of the fluidized agglomerates has been estimated equating the attractive force obtained from this method to the weight of the individual agglomerates. The results have been compared to 54 experimental values, most of them collected from the literature. Our model approximates without a systematic error the size of most of the nanopowders, both in conventional and centrifugal fluidized beds, outperforming current models. Although simple, the model is able to capture the influence of the nanoparticle size, particle density, and Hamaker coefficient on the inter-agglomerate forces.
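
    A heavily simplified sketch of the force-balance idea follows; the force expression (a sphere-sphere Hamaker attraction limited by a nanoparticle-scale contact radius) and every parameter value are assumptions, not the paper's model, and the hydrogen-bond contribution is omitted.

```python
# Toy force balance: find the agglomerate diameter d* at which an
# asperity-limited van der Waals attraction equals the agglomerate weight.
import numpy as np
from scipy.optimize import brentq

A_H   = 1e-19      # J, assumed Hamaker coefficient
r_np  = 10e-9      # m, primary nanoparticle radius (acts as surface asperity)
z0    = 4e-10      # m, assumed minimum separation
rho_a = 50.0       # kg/m^3, assumed agglomerate density (highly porous)
g     = 9.81

def attraction(d):
    # Sphere-sphere Hamaker force with the nanoparticle asperity as the
    # effective contact radius (r_np << d/2, so r_eff is close to r_np)
    r_eff = (r_np * d / 2) / (r_np + d / 2)
    return A_H * r_eff / (6 * z0**2)

def weight(d):
    return rho_a * (np.pi / 6) * d**3 * g

# Solve attraction(d) == weight(d) for the agglomerate diameter d*
d_star = brentq(lambda d: attraction(d) - weight(d), 1e-7, 1e-2)
print(f"estimated agglomerate size: {d_star * 1e6:.0f} micrometres")
```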

  2. Influences of point defects on electrical and optical properties of InGaN light-emitting diodes at cryogenic temperature

    NASA Astrophysics Data System (ADS)

    Tu, Yi; Ruan, Yujiao; Zhu, Lihong; Tu, Qingzhen; Wang, Hongwei; Chen, Jie; Lu, Yijun; Gao, Yulin; Shih, Tien-Mo; Chen, Zhong; Lin, Yue

    2018-04-01

    We investigate the cryogenic external quantum efficiency (EQE) for some InGaN light-emitting diodes with different indium contents. We observe a monotonic decrease in EQE with the increasing forward current before the "U-turn" point, beyond which the thermal effect increases the EQE. We discover positive dependences among the droop rate (χ), differential electrical resistance (Rd), and indium content. Also, χ and Rd of individual green samples shift correspondingly during the aging test, when the Mg ions are activated at high injection density and diffuse into the active region. Considering the fact that both In and Mg ions would introduce point defects (PDs), we proposed a model that reveals the mechanism of interplay between PDs and carriers. PDs serve as both energy traps and non-radiative recombination centers. They attract and confine carriers, leading to an increase in Rd and a decrease in EQE.

  3. Regular and reverse nanoscale stick-slip behavior: Modeling and experiments

    NASA Astrophysics Data System (ADS)

    Landolsi, Fakhreddine; Sun, Yuekai; Lu, Hao; Ghorbel, Fathi H.; Lou, Jun

    2010-02-01

    We recently proposed a new nanoscale friction model based on the bristle interpretation of single asperity contacts. The model is mathematically continuous and dynamic, which makes it suitable for implementation in nanomanipulation and nanorobotic modeling. In the present paper, friction force microscope (FFM) scans of muscovite mica samples and vertically aligned multi-wall carbon nanotube (VAMWCNT) arrays are conducted. The choice of these materials is motivated by the fact that they exhibit different stick-slip behaviors. The corresponding experimental and simulation results are compared. Our nanoscale friction model is shown to represent both the regular and reverse frictional sawtooth characteristics of the muscovite mica and the VAMWCNTs, respectively.

  4. Modelling of point and diffuse pollution: application of the Moneris model in the Ipojuca river basin, Pernambuco State, Brazil.

    PubMed

    de Lima Barros, Alessandra Maciel; do Carmo Sobral, Maria; Gunkel, Günter

    2013-01-01

    Emissions of pollutants and nutrients are causing several problems in aquatic ecosystems, and in general an excess of nutrients, specifically nitrogen and phosphorus, is responsible for the eutrophication process in water bodies. In most developed countries, more attention is given to diffuse pollution because problems with point pollution have already been solved. In many non-developed countries basic data for point and diffuse pollution are not available. The focus of the presented studies is to quantify nutrient emissions from point and diffuse sources in the Ipojuca river basin, Pernambuco State, Brazil, using the Moneris model (Modelling Nutrient Emissions in River Systems). This model has been developed in Germany and has already been implemented in more than 600 river basins. The model is mainly based on river flow, water quality and geographical information system data. According to the Moneris model results, untreated domestic sewage is the major source of nutrients in the Ipojuca river basin. The Moneris model has shown itself to be a useful tool that allows the identification and quantification of point and diffuse nutrient sources, thus enabling the adoption of measures to reduce them. The Moneris model, conducted for the first time in a tropical river basin with intermittent flow, can be used as a reference for implementation in other watersheds.

  5. A Point Kinetics Model for Estimating Neutron Multiplication of Bare Uranium Metal in Tagged Neutron Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tweardy, Matthew C.; McConchie, Seth; Hayward, Jason P.

    An extension of the point kinetics model is developed in this paper to describe the neutron multiplicity response of a bare uranium object under interrogation by an associated particle imaging deuterium-tritium (D-T) measurement system. This extended model is used to estimate the total neutron multiplication of the uranium. Both MCNPX-PoliMi simulations and data from active interrogation measurements of highly enriched and depleted uranium geometries are used to evaluate the potential of this method and to identify the sources of systematic error. The detection efficiency correction for measured coincidence response is identified as a large source of systematic error. If the detection process is not considered, results suggest that the method can estimate total multiplication to within 13% of the simulated value. Values for multiplicity constants in the point kinetics equations are sensitive to enrichment due to (n, xn) interactions by D-T neutrons and can introduce another significant source of systematic bias. This can theoretically be corrected if isotopic composition is known a priori. Finally, the spatial dependence of multiplication is also suspected of introducing further systematic bias for high multiplication uranium objects.

  6. A Point Kinetics Model for Estimating Neutron Multiplication of Bare Uranium Metal in Tagged Neutron Measurements

    DOE PAGES

    Tweardy, Matthew C.; McConchie, Seth; Hayward, Jason P.

    2017-06-13

    An extension of the point kinetics model is developed in this paper to describe the neutron multiplicity response of a bare uranium object under interrogation by an associated particle imaging deuterium-tritium (D-T) measurement system. This extended model is used to estimate the total neutron multiplication of the uranium. Both MCNPX-PoliMi simulations and data from active interrogation measurements of highly enriched and depleted uranium geometries are used to evaluate the potential of this method and to identify the sources of systematic error. The detection efficiency correction for measured coincidence response is identified as a large source of systematic error. If the detection process is not considered, results suggest that the method can estimate total multiplication to within 13% of the simulated value. Values for multiplicity constants in the point kinetics equations are sensitive to enrichment due to (n, xn) interactions by D-T neutrons and can introduce another significant source of systematic bias. This can theoretically be corrected if isotopic composition is known a priori. Finally, the spatial dependence of multiplication is also suspected of introducing further systematic bias for high multiplication uranium objects.

  7. PCTO-SIM: Multiple-point geostatistical modeling using parallel conditional texture optimization

    NASA Astrophysics Data System (ADS)

    Pourfard, Mohammadreza; Abdollahifard, Mohammad J.; Faez, Karim; Motamedi, Sayed Ahmad; Hosseinian, Tahmineh

    2017-05-01

    Multiple-point Geostatistics is a well-known general statistical framework with which complex geological phenomena have been modeled efficiently. Pixel-based and patch-based methods are its two major categories. In this paper, the optimization-based category is used, which has a dual concept in texture synthesis known as texture optimization. Our extended version of texture optimization uses an energy concept to model geological phenomena. While honoring hard data points, the minimization of our proposed cost function forces simulation grid pixels to be as similar as possible to the training images. Our algorithm has a self-enrichment capability and creates a richer training database from a sparser one by mixing the information of all patches surrounding the simulation nodes. Therefore, it preserves pattern continuity in both continuous and categorical variables very well. Each realization also shows a fuzzy result similar to the expected result of multiple realizations of other statistical models. While the main core of most previous Multiple-point Geostatistics methods is sequential, the parallel core of our algorithm enables it to use the GPU efficiently and reduce CPU time. A new validation method for MPS is also proposed in this paper.

  8. Capacity Estimation Model for Signalized Intersections under the Impact of Access Point

    PubMed Central

    Zhao, Jing; Li, Peng; Zhou, Xizhao

    2016-01-01

    Highway Capacity Manual 2010 provides various factors to adjust the base saturation flow rate for the capacity analysis of signalized intersections. No factor, however, is provided for the potential change in signalized intersection capacity caused by an access point close to the signalized intersection. This paper presents a theoretical model to estimate the lane group capacity at signalized intersections with consideration of the effects of access points. Two scenarios of access point location, upstream or downstream of the signalized intersection, and the impacts of six types of access traffic flow are taken into account. The proposed capacity model was validated against VISSIM simulation. Results of extensive numerical analysis reveal the substantial impact of an access point on capacity, which is inversely correlated with both the number of major street lanes and the distance between the intersection and the access point. Moreover, among the six types of access traffic flow, flow 1 (right-turning traffic from the major street), flow 4 (left-turning traffic from the access point), and flow 5 (left-turning traffic from the major street) cause a more significant effect on lane group capacity than the others. Some guidance on the mitigation of the negative effect is provided for practitioners. PMID:26726998

  9. Two-Point Turbulence Closure Applied to Variable Resolution Modeling

    NASA Technical Reports Server (NTRS)

    Girimaji, Sharath S.; Rubinstein, Robert

    2011-01-01

    Variable resolution methods have become frontline CFD tools, but in order to take full advantage of this promising new technology, more formal theoretical development is desirable. Two general classes of variable resolution methods can be identified: hybrid or zonal methods in which RANS and LES models are solved in different flow regions, and bridging or seamless models which interpolate smoothly between RANS and LES. This paper considers the formulation of bridging methods using methods of two-point closure theory. The fundamental problem is to derive a subgrid two-equation model. We compare and reconcile two different approaches to this goal: the Partially Integrated Transport Model, and the Partially Averaged Navier-Stokes method.

  10. Depinning of the Bragg glass in a point disordered model superconductor.

    PubMed

    Olsson, Peter

    2007-03-02

    We perform simulations of the three-dimensional frustrated anisotropic XY model with point disorder as a model of a type-II superconductor with quenched point pinning in a magnetic field and a weak applied current. Using resistively shunted junction dynamics, we find a critical current I_{c} that separates a creep region with immeasurably low voltage from a region with a voltage V ∝ (I-I_{c}), and also identify the mechanism behind this behavior. It also turns out that data at fixed disorder strength may be collapsed by plotting V versus TI, where T is the temperature, though the reason for this behavior is not yet fully understood.

  11. Effects of Reduced Terrestrial LiDAR Point Density on High-Resolution Grain Crop Surface Models in Precision Agriculture

    PubMed Central

    Hämmerle, Martin; Höfle, Bernhard

    2014-01-01

    3D geodata play an increasingly important role in precision agriculture, e.g., for modeling in-field variations of grain crop features such as height or biomass. A common data capturing method is LiDAR, which often requires expensive equipment and produces large datasets. This study contributes to the improvement of 3D geodata capturing efficiency by assessing the effect of reduced scanning resolution on crop surface models (CSMs). The analysis is based on high-end LiDAR point clouds of grain crop fields of different varieties (rye and wheat) and nitrogen fertilization stages (100%, 50%, 10%). Lower scanning resolutions are simulated by keeping every n-th laser beam with increasing step widths n. For each iteration step, high-resolution CSMs (0.01 m2 cells) are derived and assessed regarding their coverage relative to a seamless CSM derived from the original point cloud, standard deviation of elevation and mean elevation. Reducing the resolution to, e.g., 25% still leads to a coverage of >90% and a mean CSM elevation of >96% of measured crop height. CSM types (maximum elevation or 90th-percentile elevation) react differently to reduced scanning resolutions in different crops (variety, density). The results can help to assess the trade-off between CSM quality and minimum requirements regarding equipment and capturing set-up. PMID:25521383
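
    A minimal sketch, assuming an (x, y, z) point array in metres, of the thinning-and-rasterization workflow described above: keep every n-th point to simulate a lower scanning resolution, build a maximum-elevation CSM on 0.1 m cells (0.01 m2), and report coverage and mean elevation relative to the full-resolution CSM. Function names and the simple gridding are illustrative.

      import numpy as np

      def crop_surface_model(points, origin, shape, cell=0.1):
          """Maximum-elevation CSM on cell x cell cells; empty cells stay NaN."""
          ix = ((points[:, 0] - origin[0]) / cell).astype(int)
          iy = ((points[:, 1] - origin[1]) / cell).astype(int)
          csm = np.full(shape, np.nan)
          for i, j, h in zip(ix, iy, points[:, 2]):
              if np.isnan(csm[i, j]) or h > csm[i, j]:
                  csm[i, j] = h
          return csm

      def thinning_effect(points, n, cell=0.1):
          """Keep every n-th point (simulated lower scan resolution) and report
          coverage and mean-elevation ratio relative to the full-resolution CSM."""
          origin = points[:, :2].min(axis=0)
          shape = tuple(((points[:, :2].max(axis=0) - origin) / cell).astype(int) + 1)
          full = crop_surface_model(points, origin, shape, cell)
          red = crop_surface_model(points[::n], origin, shape, cell)
          coverage = np.isfinite(red).sum() / np.isfinite(full).sum()
          mean_ratio = np.nanmean(red) / np.nanmean(full)
          return coverage, mean_ratio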

  12. David Adler Lectureship Award: n-point Correlation Functions in Heterogeneous Materials.

    NASA Astrophysics Data System (ADS)

    Torquato, Salvatore

    2009-03-01

    The determination of the bulk transport, electromagnetic, mechanical, and optical properties of heterogeneous materials has a long and venerable history, attracting the attention of some of the luminaries of science, including Maxwell, Lord Rayleigh, and Einstein. The bulk properties can be shown to depend rigorously upon infinite sets of various n-point correlation functions. Many different types of correlation functions arise, depending on the physics of the problem. A unified approach to characterize the microstructure and bulk properties of a large class of disordered materials is developed [S. Torquato, Random Heterogeneous Materials: Microstructure and Macroscopic Properties (Springer-Verlag, New York, 2002)]. This is accomplished via a canonical n-point function Hn from which one can derive exact analytical expressions for any microstructural function of interest. This microstructural information can then be used to estimate accurately the bulk properties of the material. Unlike homogeneous materials, seemingly different bulk properties (e.g., transport and mechanical properties) of a heterogeneous material can be linked to one another because of the common microstructure that they share. Such cross-property relations can be used to estimate one property given a measurement of another. A recently identified decorrelation principle, roughly speaking, refers to the phenomenon that unconstrained correlations that exist in low-dimensional disordered materials vanish as the space dimension becomes large. Among other results, this implies that in sufficiently high dimensions the densest sphere packings may be disordered (rather than ordered) [S. Torquato and F. H. Stillinger, "New Conjectural Lower Bounds on the Optimal Density of Sphere Packings," Experimental Mathematics, 15, 307 (2006)].

  13. Diverse point mutations in the human gene for polymorphic N-acetyltransferase

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vatsis, K.P.; Martell, K.J.; Weber, W.W.

    1991-07-15

    Classification of humans as rapid or slow acetylators is based on hereditary differences in rates of N-acetylation of therapeutic and carcinogenic agents, but N-acetylation of certain arylamine drugs displays no genetic variation. Two highly homologous human genes for N-acetyltransferase, NAT1 and NAT2, presumably code for the genetically invariant and variant NAT proteins, respectively. In the present investigation, 1.9-kilobase human genomic EcoRI fragments encoding NAT2 were generated by the polymerase chain reaction with liver and leukocyte DNA from seven subjects phenotyped as homozygous and heterozygous acetylators. Direct sequencing revealed multiple point mutations in the coding region of two distinct NAT2 variants. One of these was derived from leukocytes of a slow acetylator and was distinguished by a silent mutation (codon 94) and a separate G → A transition (position 590) leading to replacement of Arg-197 by Gln; the mutated guanine was part of a CpG dinucleotide and a Taq I site. The second NAT2 variant originated from liver with low N-acetylation activity. It was characterized by three nucleotide transitions giving rise to a silent mutation (codon 161), accompanied by obliteration of the sole Kpn I site, and two amino acid substitutions. The results show conclusively that the genetically variant NAT is encoded by NAT2.

  14. Equilibrium points, stability and numerical solutions of fractional-order predator-prey and rabies models

    NASA Astrophysics Data System (ADS)

    Ahmed, E.; El-Sayed, A. M. A.; El-Saka, H. A. A.

    2007-01-01

    In this paper we are concerned with the fractional-order predator-prey model and the fractional-order rabies model. Existence and uniqueness of solutions are proved. The stability of equilibrium points is studied. Numerical solutions of these models are given. An example is given where the equilibrium point is a centre for the integer order system but locally asymptotically stable for its fractional-order counterpart.
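
    A minimal sketch of one common way to obtain such numerical solutions: an explicit Grünwald-Letnikov discretization of a fractional-order predator-prey system. The particular Lotka-Volterra right-hand side, the parameter values and the omission of the initial-condition correction term are illustrative simplifications, not necessarily the scheme used by the authors.

      import numpy as np

      def gl_weights(alpha, n):
          """Grunwald-Letnikov coefficients c_j = (-1)^j * binom(alpha, j)."""
          c = np.empty(n + 1)
          c[0] = 1.0
          for j in range(1, n + 1):
              c[j] = (1.0 - (1.0 + alpha) / j) * c[j - 1]
          return c

      def fractional_predator_prey(alpha, x0, y0, a=1.0, b=1.0, c=1.0, d=1.0,
                                   h=0.01, n_steps=5000):
          """Explicit Grunwald-Letnikov scheme for
             D^alpha x = x (a - b y),   D^alpha y = y (-c + d x)."""
          w = gl_weights(alpha, n_steps)
          x = np.empty(n_steps + 1); y = np.empty(n_steps + 1)
          x[0], y[0] = x0, y0
          for n in range(1, n_steps + 1):
              mem_x = np.dot(w[1:n + 1], x[n - 1::-1])   # fractional memory terms
              mem_y = np.dot(w[1:n + 1], y[n - 1::-1])
              fx = x[n - 1] * (a - b * y[n - 1])
              fy = y[n - 1] * (-c + d * x[n - 1])
              x[n] = h ** alpha * fx - mem_x
              y[n] = h ** alpha * fy - mem_y
          return x, y

      x, y = fractional_predator_prey(alpha=0.9, x0=1.0, y0=0.5)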

  15. A removal model for estimating detection probabilities from point-count surveys

    USGS Publications Warehouse

    Farnsworth, G.L.; Pollock, K.H.; Nichols, J.D.; Simons, T.R.; Hines, J.E.; Sauer, J.R.

    2000-01-01

    We adapted a removal model to estimate detection probability during point count surveys. The model assumes one factor influencing detection during point counts is the singing frequency of birds. This may be true for surveys recording forest songbirds when most detections are by sound. The model requires counts to be divided into several time intervals. We used time intervals of 2, 5, and 10 min to develop a maximum-likelihood estimator for the detectability of birds during such surveys. We applied this technique to data from bird surveys conducted in Great Smoky Mountains National Park. We used model selection criteria to identify whether detection probabilities varied among species, throughout the morning, throughout the season, and among different observers. The overall detection probability for all birds was 75%. We found differences in detection probability among species. Species that sing frequently such as Winter Wren and Acadian Flycatcher had high detection probabilities (about 90%) and species that call infrequently such as Pileated Woodpecker had low detection probability (36%). We also found detection probabilities varied with the time of day for some species (e.g. thrushes) and between observers for other species. This method of estimating detectability during point count surveys offers a promising new approach to using count data to address questions of bird abundance, density, and population trends.
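
    A minimal sketch of a removal-type estimator under simplifying assumptions that are not those of the paper (equal-length intervals and a single homogeneous per-interval detection probability p): counts of first detections in successive intervals follow a truncated geometric pattern, and p is found by maximizing the conditional multinomial likelihood. The count values are made up.

      import numpy as np
      from scipy.optimize import minimize_scalar

      def removal_mle(first_detections):
          """first_detections[k] = birds first detected in interval k (k = 0..K-1).
          Assumes one constant per-interval detection probability p; returns the
          MLE of p and the overall detectability over K intervals, 1 - (1-p)^K."""
          y = np.asarray(first_detections, dtype=float)
          K = len(y)

          def neg_log_lik(p):
              cell = p * (1.0 - p) ** np.arange(K)   # P(first detection in interval k)
              cell /= cell.sum()                     # condition on detection within K
              return -(y * np.log(cell)).sum()

          res = minimize_scalar(neg_log_lik, bounds=(1e-6, 1 - 1e-6), method="bounded")
          return res.x, 1.0 - (1.0 - res.x) ** K

      # made-up counts of first detections in three equal-length intervals
      p_hat, detectability = removal_mle([46, 21, 11])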

  16. 46 CFR 7.130 - Point Conception, CA to Point Sur, CA.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... LINES Pacific Coast § 7.130 Point Conception, CA to Point Sur, CA. (a) A line drawn from the... 46 Shipping 1 2014-10-01 2014-10-01 false Point Conception, CA to Point Sur, CA. 7.130 Section 7... Breakwater. (b) A line drawn from the outer end of Morro Bay Entrance East Breakwater to latitude 35°21.5′ N...

  17. 46 CFR 7.130 - Point Conception, CA to Point Sur, CA.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... LINES Pacific Coast § 7.130 Point Conception, CA to Point Sur, CA. (a) A line drawn from the... 46 Shipping 1 2011-10-01 2011-10-01 false Point Conception, CA to Point Sur, CA. 7.130 Section 7... Breakwater. (b) A line drawn from the outer end of Morro Bay Entrance East Breakwater to latitude 35°21.5′ N...

  18. 46 CFR 7.130 - Point Conception, CA to Point Sur, CA.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... LINES Pacific Coast § 7.130 Point Conception, CA to Point Sur, CA. (a) A line drawn from the... 46 Shipping 1 2013-10-01 2013-10-01 false Point Conception, CA to Point Sur, CA. 7.130 Section 7... Breakwater. (b) A line drawn from the outer end of Morro Bay Entrance East Breakwater to latitude 35°21.5′ N...

  19. 46 CFR 7.130 - Point Conception, CA to Point Sur, CA.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... LINES Pacific Coast § 7.130 Point Conception, CA to Point Sur, CA. (a) A line drawn from the... 46 Shipping 1 2012-10-01 2012-10-01 false Point Conception, CA to Point Sur, CA. 7.130 Section 7... Breakwater. (b) A line drawn from the outer end of Morro Bay Entrance East Breakwater to latitude 35°21.5′ N...

  20. 46 CFR 7.130 - Point Conception, CA to Point Sur, CA.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... LINES Pacific Coast § 7.130 Point Conception, CA to Point Sur, CA. (a) A line drawn from the... 46 Shipping 1 2010-10-01 2010-10-01 false Point Conception, CA to Point Sur, CA. 7.130 Section 7... Breakwater. (b) A line drawn from the outer end of Morro Bay Entrance East Breakwater to latitude 35°21.5′ N...

  1. Critical length scale controls adhesive wear mechanisms

    PubMed Central

    Aghababaei, Ramin; Warner, Derek H.; Molinari, Jean-Francois

    2016-01-01

    The adhesive wear process remains one of the least understood areas of mechanics. While it has long been established that adhesive wear is a direct result of contacting surface asperities, an agreed-upon understanding of how contacting asperities lead to wear debris particles has remained elusive. This has restricted adhesive wear prediction to empirical models with limited transferability. Here we show that discrepant observations and predictions of two distinct adhesive wear mechanisms can be reconciled into a unified framework. Using atomistic simulations with model interatomic potentials, we reveal a transition in the asperity wear mechanism when contact junctions fall below a critical length scale. A simple analytic model is formulated to predict the transition in both the simulation results and experiments. This new understanding may help expand use of computer modelling to explore adhesive wear processes and to advance physics-based wear laws without empirical coefficients. PMID:27264270

  2. Application of the nudged elastic band method to the point-to-point radio wave ray tracing in IRI modeled ionosphere

    NASA Astrophysics Data System (ADS)

    Nosikov, I. A.; Klimenko, M. V.; Bessarab, P. F.; Zhbankov, G. A.

    2017-07-01

    Point-to-point ray tracing is an important problem in many fields of science. While direct variational methods where some trajectory is transformed to an optimal one are routinely used in calculations of pathways of seismic waves, chemical reactions, diffusion processes, etc., this approach is not widely known in ionospheric point-to-point ray tracing. We apply the Nudged Elastic Band (NEB) method to a radio wave propagation problem. In the NEB method, a chain of points which gives a discrete representation of the radio wave ray is adjusted iteratively to an optimal configuration satisfying Fermat's principle, while the endpoints of the trajectory are kept fixed according to the boundary conditions. Transverse displacements define the radio ray trajectory, while springs between the points control their distribution along the ray. The method is applied to a study of point-to-point ionospheric ray tracing, where the propagation medium is obtained with the International Reference Ionosphere model taking into account traveling ionospheric disturbances. A 2-dimensional representation of the optical path functional is developed and used to gain insight into the fundamental difference between high and low rays. We conclude that high and low rays are minima and saddle points of the optical path functional, respectively.
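
    A minimal sketch of the NEB iteration described above, relaxing a chain of points over a generic 2D cost field rather than an IRI-derived optical path functional; the Gaussian "obstacle", spring constant and step sizes are illustrative.

      import numpy as np

      def bump_grad(p, c=np.array([0.5, 0.15]), s=0.02):
          """Gradient of cost(p) = exp(-|p - c|^2 / s): a Gaussian 'obstacle'."""
          return -2.0 / s * (p - c) * np.exp(-np.dot(p - c, p - c) / s)

      def neb_relax(cost_grad, start, end, n_images=20, k_spring=1.0,
                    step=0.005, n_iter=2000):
          """Relax a chain of images between two fixed endpoints: the true force is
          kept only transverse to the local tangent, spring forces act only along
          the tangent, and the endpoints never move."""
          path = np.linspace(start, end, n_images)          # straight initial chain
          for _ in range(n_iter):
              new = path.copy()
              for i in range(1, n_images - 1):
                  tau = path[i + 1] - path[i - 1]
                  tau = tau / np.linalg.norm(tau)           # local tangent estimate
                  g = cost_grad(path[i])
                  g_perp = g - np.dot(g, tau) * tau         # transverse component
                  spring = k_spring * (np.linalg.norm(path[i + 1] - path[i])
                                       - np.linalg.norm(path[i] - path[i - 1]))
                  new[i] = path[i] + step * (-g_perp + spring * tau)
              path = new
          return path

      ray = neb_relax(bump_grad, np.array([0.0, 0.0]), np.array([1.0, 0.0]))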

  3. Point-based and model-based geolocation analysis of airborne laser scanning data

    NASA Astrophysics Data System (ADS)

    Sefercik, Umut Gunes; Buyuksalih, Gurcan; Jacobsen, Karsten; Alkan, Mehmet

    2017-01-01

    Airborne laser scanning (ALS) is one of the most effective remote sensing technologies providing precise three-dimensional (3-D) dense point clouds. A large-size ALS digital surface model (DSM) covering the whole Istanbul province was analyzed by point-based and model-based comprehensive statistical approaches. Point-based analysis was performed using checkpoints on flat areas. Model-based approaches were implemented in two steps: strip-to-strip comparison of overlapping ALS DSMs individually in three subareas, and comparison of the merged ALS DSMs with terrestrial laser scanning (TLS) DSMs in four other subareas. In the model-based approach, the standard deviation of height and the normalized median absolute deviation were used as accuracy indicators, combined with the dependency on terrain inclination. The results demonstrate that terrain roughness has a strong impact on the vertical accuracy of ALS DSMs. The relative horizontal shifts determined, and partially reduced, by merging the overlapping strips and by comparison of the ALS and TLS data were found not to be negligible. The analysis of the ALS DSM in relation to the TLS DSM allowed us to determine the characteristics of the DSM in detail.
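
    The two model-based accuracy indicators named above can be computed directly from co-registered DSM height differences; a small sketch (variable names illustrative):

      import numpy as np

      def dsm_accuracy(dsm_a, dsm_b):
          """Standard deviation and normalized median absolute deviation (NMAD)
          of the height differences between two co-registered DSM rasters."""
          dh = (dsm_a - dsm_b).ravel()
          dh = dh[np.isfinite(dh)]
          std = dh.std(ddof=1)
          nmad = 1.4826 * np.median(np.abs(dh - np.median(dh)))
          return std, nmad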

  4. Box-modeling of 15N/14N in mammals.

    PubMed

    Balter, Vincent; Simon, Laurent; Fouillet, Hélène; Lécuyer, Christophe

    2006-03-01

    The 15N/14N signature of animal proteins is now commonly used to understand their physiology and quantify the flows of nutrients in trophic webs. These studies assume that animals are predictably 15N-enriched relative to their food, but the isotopic mechanism which accounts for this enrichment remains unknown. We developed a box model of the nitrogen isotope cycle in mammals in order to predict the 15N/14N ratios of body reservoirs as a function of time, N intake and body mass. Results of modeling show that a combination of kinetic isotope fractionation during the N transfer between amines and equilibrium fractionation related to the reversible conversion of N-amine into ammonia is required to account for the well-established approximately 4 per thousand 15N-enrichment of body proteins relative to the diet. This isotopic enrichment observed in proteins is due to the partial recycling of 15N-enriched urea and the urinary excretion of a fraction of the strongly 15N-depleted ammonia reservoir. For a given body mass and diet delta15N, the isotopic compositions are mainly controlled by the N intake. An increase in urea turnover combined with a decrease in N intake leads to a calculated delta15N increase of the proteins, in agreement with the observed increase in collagen delta15N of herbivorous animals with aridity. We further show that the low delta15N collagen values of cave bears cannot be attributed to dormancy periods, as is commonly thought, but rather to hyperphagia behavior. This model highlights the need for experimental investigations performed with large mammals in order to improve our understanding of natural variations of delta15N collagen.

  5. 3D Point Cloud Model Colorization by Dense Registration of Digital Images

    NASA Astrophysics Data System (ADS)

    Crombez, N.; Caron, G.; Mouaddib, E.

    2015-02-01

    Architectural heritage is a historic and artistic property which has to be protected, preserved, restored and must be shown to the public. Modern tools like 3D laser scanners are more and more used in heritage documentation. Most of the time, the 3D laser scanner is completed by a digital camera which is used to enrich the accurate geometric information with the scanned objects' colors. However, the photometric quality of the acquired point clouds is generally rather low because of several problems presented below. We propose an accurate method for registering digital images acquired from any viewpoint on point clouds, which is a crucial step for a good colorization by color projection. We express this image-to-geometry registration as a pose estimation problem. The camera pose is computed using the entire image intensities under a photometric visual and virtual servoing (VVS) framework. The camera extrinsic and intrinsic parameters are automatically estimated. Because we estimate the intrinsic parameters, we do not need any information about the camera which took the digital image. Finally, when the point cloud model and the digital image are correctly registered, we project the 3D model into the digital image frame and assign new colors to the visible points. The performance of the approach is demonstrated in simulations and real experiments on indoor and outdoor datasets of the cathedral of Amiens, which highlight the success of our method, leading to point clouds with better photometric quality and resolution.

  6. Topological characterization of antireflective and hydrophobic rough surfaces: are random process theory and fractal modeling applicable?

    NASA Astrophysics Data System (ADS)

    Borri, Claudia; Paggi, Marco

    2015-02-01

    The random process theory (RPT) has been widely applied to predict the joint probability distribution functions (PDFs) of asperity heights and curvatures of rough surfaces. A check of the predictions of RPT against the actual statistics of numerically generated random fractal surfaces and of real rough surfaces has been only partially undertaken. The present experimental and numerical study provides a deep critical comparison on this matter, providing some insight into the capabilities and limitations in applying RPT and fractal modeling to antireflective and hydrophobic rough surfaces, two important types of textured surfaces. A multi-resolution experimental campaign using a confocal profilometer with different lenses is carried out and a comprehensive software for the statistical description of rough surfaces is developed. It is found that the topology of the analyzed textured surfaces cannot be fully described according to RPT and fractal modeling. The following complexities emerge: (i) the presence of cut-offs or bi-fractality in the power-law power-spectral density (PSD) functions; (ii) a more pronounced shift of the PSD by changing resolution as compared to what was expected from fractal modeling; (iii) inaccuracy of the RPT in describing the joint PDFs of asperity heights and curvatures of textured surfaces; (iv) lack of resolution-invariance of joint PDFs of textured surfaces in case of special surface treatments, not accounted for by fractal modeling.

  7. The Interface Influence in TiN/SiNx Multilayer Nanocomposite Under Irradiation

    NASA Astrophysics Data System (ADS)

    Uglov, V. V.; Safronov, I. V.; Kvasov, N. T.; Remnev, G. E.; Shimanski, V. I.

    2018-01-01

    The paper focuses on studying the kinetics of radiation-induced point defects formed in TiN/SiNx multilayer nanocomposites, taking into account their generation, diffusion, recombination, and the influence of interfaces functioning as sinks. In order to describe the kinetics in the nanocrystalline TiN and amorphous SiNx phases, a finite-difference method is used to solve the system of balance kinetic equations for absolute defect concentrations depending on the spatiotemporal variables. A model of the disclination-dislocation interface structure is used to study the absorption of radiation-induced point defects at the boundaries in the created stress fields. It is shown that the interface effectively absorbs point defects in these phases of the TiN/SiNx multilayer nanocomposite, thereby reducing their amount within the space between phases. This behavior of point defects partially explains a mechanism of the radiation resistance in this type of nanocomposites.

  8. Development of atmospheric N2O isotopomers model based on a chemistry-coupled atmospheric general circulation model

    NASA Astrophysics Data System (ADS)

    Ishijima, K.; Toyoda, S.; Sudo, K.; Yoshikawa, C.; Nanbu, S.; Aoki, S.; Nakazawa, T.; Yoshida, N.

    2009-12-01

    almost underestimated, relative to the balloon observations, although the concentration is well simulated. The tendency has been somewhat improved by incorporating another photolysis scheme with slightly higher wavelength resolution into the model. From another point of view, these facts indicate that N2O isotopomers can be used to validate the stratospheric photochemical calculations in the model, because the isotopomer ratio values are very sensitive to settings such as the wavelength resolution of the photochemical scheme. Therefore, N2O isotopomer modeling appears to be useful not only for validating the fractionation coefficients and the isotopic characterization of sources, but also as an index of the precision of the stratospheric photolysis in the model.

  9. Survey of fishes and environmental conditions in Abbotts Lagoon, Point Reyes National Seashore, California

    USGS Publications Warehouse

    Saiki, M.K.; Martin, B.A.

    2001-01-01

    This study was conducted to gain a better understanding of fishery resources in Abbotts Lagoon, Point Reyes National Seashore. During February/March, May, August, and November 1999, fish were sampled with floating variable-mesh gill nets and small minnow traps from as many as 14 sites in the lagoon. Water temperature, dissolved oxygen, pH, total ammonia (NH3 + NH4+), salinity, turbidity, water depth, and bottom substrate composition were also measured at each site. A total of 2,656 fish represented by eight species was captured during the study. Gill nets captured Sacramento perch, Archoplites interruptus; largemouth bass, Micropterus salmoides; Pacific herring, Clupea pallasi; prickly sculpin, Cottus asper; silver surfperch, Hyperprosopon ellipticum; longfin smelt, Spirinchus thaleichthys; and striped bass, Morone saxatilis; whereas minnow traps captured Sacramento perch; prickly sculpin; and threespine stickleback, Gasterosteus aculeatus. Cluster analysis (Ward's minimum variance method) of fish catch statistics identified two major species assemblages - the first dominated by Sacramento perch and, to a lesser extent, by largemouth bass, and the second dominated by Pacific herring and threespine stickleback. Simple discriminant analysis of environmental variables indicated that salinity contributed the most towards separating the two assemblages.

  10. Defect production in nonlinear quench across a quantum critical point.

    PubMed

    Sen, Diptiman; Sengupta, K; Mondal, Shreyoshi

    2008-07-04

    We show that the defect density n, for a slow nonlinear power-law quench with a rate tau^(-1) and an exponent alpha > 0, which takes the system through a critical point characterized by correlation length and dynamical critical exponents nu and z, scales as n ~ tau^(-alpha nu d/(alpha z nu + 1)) [n ~ (alpha g^((alpha-1)/alpha)/tau)^(nu d/(z nu + 1))] if the quench takes the system across the critical point at time t = 0 [t = t_0 ≠ 0], where g is a nonuniversal constant and d is the system dimension. These scaling laws constitute the first theoretical results for defect production in nonlinear quenches across quantum critical points and reproduce their well-known counterpart for a linear quench (alpha = 1) as a special case. We supplement our results with numerical studies of well-known models and suggest experiments to test our theory.
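
    The quoted scaling laws written out in standard notation (a small LaTeX block; only the alpha = 1 substitution in the last line is added here as a check of the stated linear-quench limit):

      \[
        n \;\sim\; \tau^{-\alpha\nu d/(\alpha z\nu + 1)}
        \quad\text{(quench crossing the critical point at } t = 0\text{)},
      \]
      \[
        n \;\sim\; \bigl(\alpha\, g^{(\alpha-1)/\alpha}/\tau\bigr)^{\nu d/(z\nu + 1)}
        \quad\text{(quench crossing the critical point at } t = t_0 \neq 0\text{)},
      \]
      \[
        \alpha = 1:\qquad n \;\sim\; \tau^{-\nu d/(z\nu + 1)}
        \quad\text{(linear-quench limit, as stated in the abstract).}
      \]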

  11. Complete N-point superstring disk amplitude II. Amplitude and hypergeometric function structure

    NASA Astrophysics Data System (ADS)

    Mafra, Carlos R.; Schlotterer, Oliver; Stieberger, Stephan

    2013-08-01

    Using the pure spinor formalism in part I (Mafra et al., preprint [1]) we compute the complete tree-level amplitude of N massless open strings and find a strikingly simple and compact form in terms of minimal building blocks: the full N-point amplitude is expressed by a sum over (N-3)! Yang-Mills partial subamplitudes each multiplying a multiple Gaussian hypergeometric function. While the former capture the space-time kinematics of the amplitude, the latter encode the string effects. This result disguises a lot of structure linking aspects of gauge amplitudes such as color and kinematics with properties of generalized Euler integrals. In this part II the structure of the multiple hypergeometric functions is analyzed in detail: their relations to monodromy equations, their minimal basis structure, and methods to determine their poles and transcendentality properties are proposed. Finally, a Gröbner basis analysis provides independent sets of rational functions in the Euler integrals. In contrast to [1], here we use momenta redefined by a factor of i. As a consequence, the signs of the kinematic invariants are flipped.

  12. The environmental zero-point problem in evolutionary reaction norm modeling.

    PubMed

    Ergon, Rolf

    2018-04-01

    There is a potential problem in present quantitative genetics evolutionary modeling based on reaction norms. Such models are state-space models, where the multivariate breeder's equation in some form is used as the state equation that propagates the population state forward in time. These models use the implicit assumption of a constant reference environment, in many cases set to zero. This zero-point is often the environment a population is adapted to, that is, where the expected geometric mean fitness is maximized. Such environmental reference values follow from the state of the population system, and they are thus population properties. The environment the population is adapted to, is, in other words, an internal population property, independent of the external environment. It is only when the external environment coincides with the internal reference environment, or vice versa, that the population is adapted to the current environment. This is formally a result of state-space modeling theory, which is an important theoretical basis for evolutionary modeling. The potential zero-point problem is present in all types of reaction norm models, parametrized as well as function-valued, and the problem does not disappear when the reference environment is set to zero. As the environmental reference values are population characteristics, they ought to be modeled as such. Whether such characteristics are evolvable is an open question, but considering the complexity of evolutionary processes, such evolvability cannot be excluded without good arguments. As a straightforward solution, I propose to model the reference values as evolvable mean traits in their own right, in addition to other reaction norm traits. However, solutions based on an evolvable G matrix are also possible.

  13. Point Processes.

    DTIC Science & Technology

    1987-05-01

    (i) N(∅) = 0 and N(B) < ∞ a.s. for each B ∈ ... (ii) N(∪ Bn) = Σ N(Bn) a.s. for any disjoint B1, B2, ... The random variable N(A) represents the number ... P(N'(B) = 0), B ∈ ... (c) If (ν, X1, X2, ...) =d (ν, X'1, X'2, ...), then N =d N'. The converse is true when E = R+ or R and the Xn's are the ordered Tn's. ... functions f: E → R+ that are continuous and such that {x: f(x) > 0} is a bounded set. Theorem 1.4. Suppose N and N' are point processes on E. The following statements

  14. Factors influencing superimposition error of 3D cephalometric landmarks by plane orientation method using 4 reference points: 4 point superimposition error regression model.

    PubMed

    Hwang, Jae Joon; Kim, Kee-Deog; Park, Hyok; Park, Chang Seo; Jeong, Ho-Gul

    2014-01-01

    Superimposition has been used as a method to evaluate the changes resulting from orthodontic or orthopedic treatment in the dental field. With the introduction of cone beam CT (CBCT), evaluating 3-dimensional changes after treatment by superimposition became possible. 4 point plane orientation is one of the simplest ways to achieve superimposition of 3-dimensional images. To find the factors influencing the superimposition error of cephalometric landmarks under the 4 point plane orientation method, and to evaluate the reproducibility of cephalometric landmarks for analyzing superimposition error, 20 patients were analyzed who had a normal skeletal and occlusal relationship and had undergone CBCT for the diagnosis of temporomandibular disorder. The nasion, sella turcica, basion and the midpoint between the left and the right most posterior points of the lesser wing of the sphenoidal bone were used to define a three-dimensional (3D) anatomical reference co-ordinate system. Another 15 reference cephalometric points were also determined three times in the same image. The reorientation error of each landmark could be explained substantially (23%) by a linear regression model consisting of 3 factors describing the position of each landmark relative to the reference axes and the locating error. The 4 point plane orientation system may produce a reorientation error that varies with the perpendicular distance between the landmark and the x-axis; the reorientation error also increases as the locating error and the shift of the reference axes viewed from each landmark increase. Therefore, in order to reduce the reorientation error, the accuracy of all landmarks, including the reference points, is important. Construction of the regression model using reference points of greater precision is required for the clinical application of this model.

  15. Discrimination between diffuse and point sources of arsenic at Zimapán, Hidalgo state, Mexico.

    PubMed

    Sracek, Ondra; Armienta, María Aurora; Rodríguez, Ramiro; Villaseñor, Guadalupe

    2010-01-01

    There are two principal sources of arsenic in Zimapán. Point sources are linked to mining and smelting activities and especially to mine tailings. Diffuse sources are not well defined and are linked to regional flow systems in carbonate rocks. Both sources are caused by the oxidation of arsenic-rich sulfidic mineralization. Point sources are characterized by Ca-SO(4)-HCO(3) ground water type and relatively enriched values of deltaD, delta(18)O, and delta(34)S(SO(4)). Diffuse sources are characterized by Ca-Na-HCO(3) type of ground water and more depleted values of deltaD, delta(18)O, and delta(34)S(SO(4)). Values of deltaD and delta(18)O indicate similar altitude of recharge for both arsenic sources and stronger impact of evaporation for point sources in mine tailings. There are also different values of delta(34)S(SO(4)) for both sources, presumably due to different types of mineralization or isotopic zonality in deposits. In Principal Component Analysis (PCA), the principal component 1 (PC1), which describes the impact of sulfide oxidation and neutralization by the dissolution of carbonates, has higher values in samples from point sources. In spite of similar concentrations of As in ground water affected by diffuse sources and point sources (mean values 0.21 mg L(-1) and 0.31 mg L(-1), respectively, in the years from 2003 to 2008), the diffuse sources have more impact on the health of the population in Zimapán. This is caused by the extraction of ground water from wells tapping the regional flow system. In contrast, wells located in the proximity of mine tailings are not generally used for water supply.

  16. Boiling points of halogenated ethanes: an explanatory model implicating weak intermolecular hydrogen-halogen bonding.

    PubMed

    Beauchamp, Guy

    2008-10-23

    This study explores via structural clues the influence of weak intermolecular hydrogen-halogen bonds on the boiling point of halogenated ethanes. The plot of boiling points of 86 halogenated ethanes versus the molar refraction (linked to polarizability) reveals a series of straight lines, each corresponding to one of nine possible arrangements of hydrogen and halogen atoms on the two-carbon skeleton. A multiple linear regression model of the boiling points could be designed based on molar refraction and subgroup structure as independent variables (R(2) = 0.995, standard error of boiling point 4.2 degrees C). The model is discussed in view of the fact that molar refraction can account for approximately 83.0% of the observed variation in boiling point, while 16.5% could be ascribed to weak C-X...H-C intermolecular interactions. The difference in the observed boiling point of molecules having similar molar refraction values but differing in hydrogen-halogen intermolecular bonds can reach as much as 90 degrees C.
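
    A minimal sketch of the kind of two-variable model described above (molar refraction plus a categorical hydrogen/halogen-arrangement subgroup), fitted by ordinary least squares; the molecules, column values and subgroup labels below are made up for illustration, not the published data set or regression.

      import numpy as np

      # hypothetical molecules: boiling point (deg C), molar refraction, H/X subgroup
      bp = np.array([3.8, 27.1, 47.2, 61.5, 76.7, 91.0, 131.7, 146.0])
      mr = np.array([11.1, 16.0, 21.1, 23.5, 26.1, 28.0, 31.0, 33.2])
      group = np.array(["CH3-CX3", "CH3-CX3", "CH2X-CX3", "CH2X-CX3",
                        "CHX2-CX3", "CHX2-CX3", "CX3-CX3", "CX3-CX3"])

      # design matrix: intercept, molar refraction, and dummy columns for all but
      # the first subgroup (to avoid collinearity with the intercept)
      levels = sorted(set(group))
      dummies = np.column_stack([(group == g).astype(float) for g in levels[1:]])
      X = np.column_stack([np.ones_like(mr), mr, dummies])

      beta, *_ = np.linalg.lstsq(X, bp, rcond=None)
      pred = X @ beta
      r2 = 1.0 - ((bp - pred) ** 2).sum() / ((bp - bp.mean()) ** 2).sum()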

  17. Multiplicative point process as a model of trading activity

    NASA Astrophysics Data System (ADS)

    Gontis, V.; Kaulakys, B.

    2004-11-01

    Signals consisting of a sequence of pulses show that the inherent origin of the 1/f noise is a Brownian fluctuation of the average interevent time between subsequent pulses of the pulse sequence. In this paper, we generalize the model of interevent time to reproduce a variety of self-affine time series exhibiting power spectral density S(f) scaling as a power of the frequency f. Furthermore, we analyze the relation between the power-law correlations and the origin of the power-law probability distribution of the signal intensity. We introduce a stochastic multiplicative model for the time intervals between point events and analyze the statistical properties of the signal analytically and numerically. Such a model system exhibits power-law spectral density S(f) ∼ 1/f^β for various values of β, including β = 1/2, 1 and 3/2. Explicit expressions for the power spectra in the low-frequency limit and for the distribution density of the interevent time are obtained. The counting statistics of the events is analyzed analytically and numerically, as well. The specific interest of our analysis is related to the financial markets, where long-range correlations of price fluctuations largely depend on the number of transactions. We analyze the spectral density and counting statistics of the number of transactions. The model reproduces spectral properties of the real markets and explains the mechanism of power-law distribution of trading activity. The study provides evidence that the statistical properties of the financial markets are enclosed in the statistics of the time interval between trades. A multiplicative point process serves as a consistent model generating these statistics.
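
    A minimal sketch in the spirit of the model described above: interevent times follow a generic multiplicative stochastic recursion, and the power spectrum of the resulting event-count signal is inspected numerically. The exact recursion and exponents of the published model may differ; this form, its parameters and the clipping to a bounded interval are illustrative assumptions.

      import numpy as np

      def multiplicative_interevent_times(n_events, mu=0.5, gamma=1e-4, sigma=0.02,
                                          tau0=1.0, tau_min=1e-3, tau_max=10.0, seed=0):
          """Generic multiplicative stochastic recursion for interevent times tau_k
          (clipped to [tau_min, tau_max] to keep them positive and bounded)."""
          rng = np.random.default_rng(seed)
          tau = np.empty(n_events)
          tau[0] = tau0
          for k in range(1, n_events):
              t = tau[k - 1]
              t = t + gamma * t ** (2.0 * mu - 1.0) + sigma * t ** mu * rng.normal()
              tau[k] = np.clip(t, tau_min, tau_max)
          return tau

      def counting_signal_psd(tau, dt=1.0):
          """Events per time bin of width dt and the periodogram of that signal."""
          times = np.cumsum(tau)
          counts, _ = np.histogram(times, bins=np.arange(0.0, times[-1] + dt, dt))
          counts = counts - counts.mean()
          psd = np.abs(np.fft.rfft(counts)) ** 2 / len(counts)
          freqs = np.fft.rfftfreq(len(counts), d=dt)
          return freqs[1:], psd[1:]

      tau = multiplicative_interevent_times(100000)
      f, S = counting_signal_psd(tau)    # inspect S(f) for power-law behaviour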

  18. Effect of water on slip weakening of cohesive rocks during earthquakes (EMRP Division Outstanding ECS Award Lecture)

    NASA Astrophysics Data System (ADS)

    Violay, Marie; Alejandro Acosta, Mateo; Passelegue, François; Schubnel, Alexandre

    2017-04-01

    Fluids play an important role in fault zones and in earthquake generation. Experimental studies of fault frictional properties in the presence of fluids can provide unique insights into this phenomenon. Here we compare rotary shear experiments and tri-axial stick slip tests performed on cohesive silicate-bearing rocks (gabbro and granite) in the presence of fluids. Surprisingly, for both types of tests, the weakening mechanism (melting of the asperities) is hindered in the presence of water. Indeed, in rotary shear experiments, at a given effective normal stress (σn-pf), the decay in friction is more gradual and longer in the presence of pore water (32% of friction drop after 20 mm of slip) than under room humidity (41% after 20 mm of slip) and vacuum conditions (60% after 20 mm of slip). During stick slip tests, at a given effective confining pressure (Pc-pf), the dynamic shear stress drops were lower (about 30%) and slip distances were shorter (about 30 to 40%) in the presence of high pressure pore water (Pc=95 MPa; Pf=25 MPa) than under room humidity conditions (Pc=70 MPa; Pf=0 MPa). Thermal modeling of the asperity contacts under load shows that the presence of fluids cools the asperities and delays the formation of melt patches, increasing the weakening duration.

  19. Robust group-wise rigid registration of point sets using t-mixture model

    NASA Astrophysics Data System (ADS)

    Ravikumar, Nishant; Gooya, Ali; Frangi, Alejandro F.; Taylor, Zeike A.

    2016-03-01

    We present a probabilistic framework for robust, group-wise rigid alignment of point sets using a mixture of Student's t-distributions, aimed especially at cases where the point sets are of varying lengths, are corrupted by an unknown degree of outliers, or contain missing data. Medical images (in particular magnetic resonance (MR) images), their segmentations and consequently point sets generated from these are highly susceptible to corruption by outliers. This poses a problem for robust correspondence estimation and accurate alignment of shapes, necessary for training statistical shape models (SSMs). To address these issues, this study proposes to use a t-mixture model (TMM) to approximate the underlying joint probability density of a group of similar shapes and align them to a common reference frame. The heavy-tailed nature of t-distributions provides a more robust registration framework in comparison to state-of-the-art algorithms. Significant reduction in alignment errors is achieved in the presence of outliers, using the proposed TMM-based group-wise rigid registration method, in comparison to its Gaussian mixture model (GMM) counterparts. The proposed TMM framework is compared with a group-wise variant of the well-known Coherent Point Drift (CPD) algorithm and two other group-wise methods using GMMs, using both synthetic and real data sets. Rigid alignment errors for groups of shapes are quantified using the Hausdorff distance (HD) and quadratic surface distance (QSD) metrics.
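
    The robustness of a t-mixture comes from the latent scale weights in the EM updates, which automatically downweight points with large Mahalanobis distance. A minimal sketch of that single E-step ingredient for one t component (not the authors' full group-wise registration pipeline):

      import numpy as np

      def student_t_em_weights(X, mu, Sigma, nu):
          """E-step scale weights E[u_i] = (nu + D) / (nu + delta_i^2) of a
          D-variate Student's t component, where delta_i^2 is the squared
          Mahalanobis distance of point i; small weights flag likely outliers."""
          D = X.shape[1]
          diff = X - mu
          delta2 = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(Sigma), diff)
          return (nu + D) / (nu + delta2)

      # toy example: the far-away point receives a much smaller weight
      X = np.array([[0.1, 0.0], [0.0, 0.2], [5.0, 5.0]])
      w = student_t_em_weights(X, mu=np.zeros(2), Sigma=np.eye(2), nu=3.0)
      # as nu -> infinity all weights tend to 1 and the Gaussian case is recovered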

  20. An analysis of the least-squares problem for the DSN systematic pointing error model

    NASA Technical Reports Server (NTRS)

    Alvarez, L. S.

    1991-01-01

    A systematic pointing error model is used to calibrate antennas in the Deep Space Network. The least squares problem is described and analyzed along with the solution methods used to determine the model's parameters. Specifically studied are the rank degeneracy problems resulting from beam pointing error measurement sets that incorporate inadequate sky coverage. A least squares parameter subset selection method is described and its applicability to the systematic error modeling process is demonstrated on a Voyager 2 measurement distribution.
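
    A minimal sketch of the rank-degeneracy issue described above: when measurements cover only a narrow patch of sky, some model columns become nearly collinear, and a pivoted-QR column subset can be solved instead of the full parameter set. The pointing-model basis terms used here are generic geometric examples, not the actual DSN systematic error model.

      import numpy as np
      from scipy.linalg import qr

      def design_matrix(az, el):
          """Generic geometric pointing-model terms (illustrative, not the DSN model)."""
          return np.column_stack([
              np.ones_like(az),           # constant offset
              np.cos(el), np.sin(el),     # elevation-dependent terms
              np.sin(az) * np.sin(el),    # axis-tilt style terms
              np.cos(az) * np.sin(el),
          ])

      def fit_subset(A, y, tol=1e-2):
          """Pick a well-conditioned column subset with pivoted QR, then solve the
          least squares problem only for those parameters (others stay at zero)."""
          _, R, piv = qr(A, mode="economic", pivoting=True)
          diag = np.abs(np.diag(R))
          keep = piv[diag / diag[0] > tol]          # drop near-degenerate columns
          x = np.zeros(A.shape[1])
          x[keep], *_ = np.linalg.lstsq(A[:, keep], y, rcond=None)
          return x, keep

      # measurements confined to a narrow elevation band -> near rank degeneracy
      rng = np.random.default_rng(0)
      az = np.linspace(0.0, 2.0 * np.pi, 40)
      el = np.deg2rad(45.0) + 0.001 * rng.normal(size=40)
      A = design_matrix(az, el)
      y = A @ np.array([0.01, 0.02, -0.01, 0.005, 0.0])   # synthetic "offsets"
      x_hat, kept_columns = fit_subset(A, y)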

  1. The multichannel n-propyl + O2 reaction surface: Definitive theory on a model hydrocarbon oxidation mechanism

    NASA Astrophysics Data System (ADS)

    Bartlett, Marcus A.; Liang, Tao; Pu, Liang; Schaefer, Henry F.; Allen, Wesley D.

    2018-03-01

    The n-propyl + O2 reaction is an important model of chain branching reactions in larger combustion systems. In this work, focal point analyses (FPAs) extrapolating to the ab initio limit were performed on the n-propyl + O2 system based on explicit quantum chemical computations with electron correlation treatments through coupled cluster single, double, triple, and perturbative quadruple excitations [CCSDT(Q)] and basis sets up to cc-pV5Z. All reaction species and transition states were fully optimized at the rigorous CCSD(T)/cc-pVTZ level of theory, revealing some substantial differences in comparison to the density functional theory geometries existing in the literature. A mixed Hessian methodology was implemented and benchmarked that essentially makes the computations of CCSD(T)/cc-pVTZ vibrational frequencies feasible and thus provides critical improvements to zero-point vibrational energies for the n-propyl + O2 system. Two key stationary points, n-propylperoxy radical (MIN1) and its concerted elimination transition state (TS1), were located 32.7 kcal mol-1 and 2.4 kcal mol-1 below the reactants, respectively. Two competitive β-hydrogen transfer transition states (TS2 and TS2') were found separated by only 0.16 kcal mol-1, a fact unrecognized in the current combustion literature. Incorporating TS2' in master equation (ME) kinetic models might reduce the large discrepancy of 2.5 kcal mol-1 between FPA and ME barrier heights for TS2. TS2 exhibits an anomalously large diagonal Born-Oppenheimer correction (ΔDBOC = 1.71 kcal mol-1), which is indicative of a nearby surface crossing and possible nonadiabatic reaction dynamics. The first systematic conformational search of three hydroperoxypropyl (QOOH) intermediates was completed, uncovering a total of 32 rotamers lying within 1.6 kcal mol-1 of their respective lowest-energy minima. Our definitive energetics for stationary points on the n-propyl + O2 potential energy surface provide key benchmarks for future studies

  2. Multiple-point principle with a scalar singlet extension of the standard model

    DOE PAGES

    Haba, Naoyuki; Ishida, Hiroyuki; Okada, Nobuchika; ...

    2017-01-21

    Here, we suggest a scalar singlet extension of the standard model in which the multiple-point principle (MPP) condition of a vanishing Higgs potential at the Planck scale is realized. Although there have been many attempts to realize the MPP at the Planck scale, realizing it while preserving naturalness is quite difficult. This model can easily achieve the MPP at the Planck scale without large Higgs mass corrections. It is worth noting that the electroweak symmetry can be radiatively broken in our model. From the naturalness point of view, the singlet scalar mass should be of O(1 TeV) or less. We also consider a right-handed neutrino extension of the model for neutrino mass generation. The extension does not affect the MPP scenario, and might preserve naturalness with the new particle mass scale beyond a TeV, thanks to an accidental cancellation of Higgs mass corrections.

  3. Fermion-induced quantum critical points.

    PubMed

    Li, Zi-Xiang; Jiang, Yi-Fan; Jian, Shao-Kai; Yao, Hong

    2017-08-22

    A unified theory of quantum critical points beyond the conventional Landau-Ginzburg-Wilson paradigm remains unknown. According to the Landau cubic criterion, phase transitions should be first-order when cubic terms of order parameters are allowed by symmetry in the Landau-Ginzburg free energy. Here, from renormalization group analysis, we show that second-order quantum phase transitions can occur at such putatively first-order transitions in interacting two-dimensional Dirac semimetals. As this type of Landau-forbidden quantum critical point is induced by gapless fermions, we call them fermion-induced quantum critical points. We further introduce a microscopic model of SU(N) fermions on the honeycomb lattice featuring a transition between Dirac semimetals and Kekule valence bond solids. Remarkably, our large-scale sign-problem-free Majorana quantum Monte Carlo simulations show convincing evidence of fermion-induced quantum critical points for N = 2, 3, 4, 5 and 6, consistent with the renormalization group analysis. We finally discuss possible experimental realizations of the fermion-induced quantum critical points in graphene and graphene-like materials. Quantum phase transitions are governed by Landau-Ginzburg theory and the exceptions are rare. Here, Li et al. propose a type of Landau-forbidden quantum critical points induced by gapless fermions in two-dimensional Dirac semimetals.

  4. Post-processing of global model output to forecast point rainfall

    NASA Astrophysics Data System (ADS)

    Hewson, Tim; Pillosu, Fatima

    2016-04-01

    ECMWF (the European Centre for Medium range Weather Forecasts) has recently embarked upon a new project to post-process gridbox rainfall forecasts from its ensemble prediction system, to provide probabilistic forecasts of point rainfall. The new post-processing strategy relies on understanding how different rainfall generation mechanisms lead to different degrees of sub-grid variability in rainfall totals. We use a number of simple global model parameters, such as the convective rainfall fraction, to anticipate the sub-grid variability, and then post-process each ensemble forecast into a pdf (probability density function) for a point-rainfall total. The final forecast will comprise the sum of the different pdfs from all ensemble members. The post-processing is essentially a re-calibration exercise, which needs only rainfall totals from standard global reporting stations (and forecasts) to train it. High density observations are not needed. This presentation will describe results from the initial 'proof of concept' study, which has been remarkably successful. Reference will also be made to other useful outcomes of the work, such as gaining insights into systematic model biases in different synoptic settings. The special case of orographic rainfall will also be discussed. Work ongoing this year will also be described. This involves further investigations of which model parameters can provide predictive skill, and will then move on to development of an operational system for predicting point rainfall across the globe. The main practical benefit of this system will be a greatly improved capacity to predict extreme point rainfall, and thereby provide early warnings, for the whole world, of flash flood potential for lead times that extend beyond day 5. This will be incorporated into the suite of products output by GLOFAS (the GLObal Flood Awareness System) which is hosted at ECMWF. As such this work offers a very cost-effective approach to satisfying user needs right

  5. Filling-driven Mott transition in SU(N) Hubbard models

    NASA Astrophysics Data System (ADS)

    Lee, Seung-Sup B.; von Delft, Jan; Weichselbaum, Andreas

    2018-04-01

    We study the filling-driven Mott transition involving the metallic and paramagnetic insulating phases in SU(N) Fermi-Hubbard models, using the dynamical mean-field theory and the numerical renormalization group as its impurity solver. The compressibility shows a striking temperature dependence: near the critical end-point temperature, it is strongly enhanced in the metallic phase close to the insulating phase. We demonstrate that this compressibility enhancement is associated with the thermal suppression of the quasiparticle peak in the local spectral functions. We also explain that the asymmetric shape of the quasiparticle peak originates from the asymmetry in the dynamics of the generalized doublons and holons.

  6. A Multidimensional Ideal Point Item Response Theory Model for Binary Data

    ERIC Educational Resources Information Center

    Maydeu-Olivares, Albert; Hernandez, Adolfo; McDonald, Roderick P.

    2006-01-01

    We introduce a multidimensional item response theory (IRT) model for binary data based on a proximity response mechanism. Under the model, a respondent at the mode of the item response function (IRF) endorses the item with probability one. The mode of the IRF is the ideal point, or in the multidimensional case, an ideal hyperplane. The model…

  7. The Nearctic oak-feeding sawflies of Periclista subgenus Neocharactus (Hymenoptera: Tenthredinidae)

    USDA-ARS?s Scientific Manuscript database

    Eleven species of Periclista subg. Neocharactus MacGillivray are recognized in North America. Six species occur in eastern North America: P. (N.) absens, n. sp., P. (N.) asper, n. sp., P. (N.) inaequidens (Norton), P. (N.) major, n. sp., P. (N.) subtruncata Dyar, and P. (N.) varia, n. sp. Five sp...

  8. Material point method modeling in oil and gas reservoirs

    DOEpatents

    Vanderheyden, William Brian; Zhang, Duan

    2016-06-28

    A computer system and method of simulating the behavior of an oil and gas reservoir including changes in the margins of frangible solids. A system of equations including state equations such as momentum, and conservation laws such as mass conservation and volume fraction continuity, is defined and discretized for at least two phases in a modeled volume, one of which corresponds to frangible material. A material point method technique is used for numerically solving the system of discretized equations, to derive the fluid flow at each of a plurality of mesh nodes in the modeled volume, and the velocity at each of a plurality of particles representing the frangible material in the modeled volume. A time-splitting technique improves the computational efficiency of the simulation while maintaining accuracy on the deformation scale. The method can be applied to derive accurate upscaled model equations for larger volume scale simulations.

  9. Indoor Navigation from Point Clouds: 3d Modelling and Obstacle Detection

    NASA Astrophysics Data System (ADS)

    Díaz-Vilariño, L.; Boguslawski, P.; Khoshelham, K.; Lorenzo, H.; Mahdjoubi, L.

    2016-06-01

    In recent years, indoor modelling and navigation has become a research topic of interest because many stakeholders require navigation assistance in various application scenarios. Navigational assistance for blind or wheelchair-bound people, building crisis management such as fire protection, augmented reality for gaming, tourism or the training of emergency assistance units are just some of the direct applications of indoor modelling and navigation. Navigational information is traditionally extracted from 2D drawings or layouts. The real state of the indoor space, including the position and geometry of openings (both windows and doors) and the presence of obstacles, is commonly ignored. In this work, a real indoor path-planning methodology based on 3D point clouds is developed. The value and originality of the approach consist in considering point clouds not only for reconstructing semantically-rich 3D indoor models, but also for detecting potential obstacles in the route planning and using these for readapting the routes according to the real state of the indoor space depicted by the laser scanner.

  10. The Pliocene Horcón Formation, Central Chile: a case study of earthquake-induced landslide susceptibility

    NASA Astrophysics Data System (ADS)

    Valdivia, D.; Elgueta, S.; Hodgkin, A.; Marquardt, C.; del Valle, F.; Yáñez Morroni, G.

    2017-12-01

    Slope stability analysis typically focuses on modeling with cohesion and friction angle parameters, but for earthquake-induced landslides, susceptibility is correlated more with lithological and stratigraphic parameters. In sedimentary deposits whose cohesion and diagenesis are very low, the risk of landslides increases. The Horcón Formation, which crops out continuously along cliffs in Central Chile between 32.5° and 33°S, is a well-preserved, horizontally stratified Miocene-Pliocene unit composed of marine strata which overlies the Paleozoic-Mesozoic igneous basement. During the Quaternary, the sequence was tectonically uplifted 80 meters and covered by unconsolidated eolian deposits. Given that Seismotectonic and Barrier-Asperity models suggest the occurrence of a forthcoming megathrust earthquake in a segment which includes this area, the Horcón Formation constitutes a good case study to characterize the susceptibility of this type of sediment to mass movements triggered by earthquakes. Field mapping, stratigraphic and sedimentological studies, including petrographic analyses to determine lithological composition and the paragenesis of diagenetic events, have been carried out along with limited gravimetric profiling and CPTU drill tests. High resolution digital elevation modeling has also been applied. This work has led to the recognition of a shallow marine lithofacies association composed of weakly lithified fossiliferous and bioturbated medium to fine grained litharenite, mudstone, and fine conglomerate. The low grade of diagenesis in the sedimentary deposits was in response to a short period of burial and a subsequent accelerated uplift evidenced along the coast of Chile during the Quaternary. We have generated a predictive model of landslide susceptibility for the Horcón Formation and for the overlying Quaternary eolian deposits incorporating variables such as composition and diagenesis of lithofacies, slope, structures, weathering and landcover. The model

  11. Applicability of the single equivalent point dipole model to represent a spatially distributed bio-electrical source

    NASA Technical Reports Server (NTRS)

    Armoundas, A. A.; Feldman, A. B.; Sherman, D. A.; Cohen, R. J.

    2001-01-01

    Although the single equivalent point dipole model has been used to represent well-localised bio-electrical sources, in realistic situations the source is distributed. Consequently, position estimates of point dipoles determined by inverse algorithms suffer from systematic error due to the non-exact applicability of the inverse model. In realistic situations, this systematic error cannot be avoided, a limitation that is independent of the complexity of the torso model used. This study quantitatively investigates the intrinsic limitations in the assignment of a location to the equivalent dipole due to the distributed nature of the electrical source. To simulate arrhythmic activity in the heart, a model of a wave of depolarisation spreading from a focal source over the surface of a spherical shell is used. The activity is represented by a sequence of concentric belt sources (obtained by slicing the shell with a sequence of parallel plane pairs), with constant dipole moment per unit length (circumferentially) directed parallel to the propagation direction. The distributed source is represented by N dipoles at equal arc lengths along the belt. The sum of the dipole potentials is calculated at predefined electrode locations. The inverse problem involves finding a single equivalent point dipole that best reproduces the electrode potentials due to the distributed source. The inverse problem is implemented by minimising the chi-squared per degree of freedom. It is found that the trajectory traced by the equivalent dipole is sensitive to the location of the spherical shell relative to the fixed electrodes. It is shown that this trajectory does not coincide with the sequence of geometrical centres of the consecutive belt sources. For distributed sources within a bounded spherical medium, displaced from the sphere's centre by 40% of the sphere's radius, it is found that the error in the equivalent dipole location varies from 3 to 20% for sources with size between 5 and 50% of the sphere's radius
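
    A minimal sketch of the inverse step described above, assuming an infinite homogeneous conductor (so the dipole potential is the simple analytic kernel rather than a bounded-torso model); the electrode layout, conductivity, source values and starting guess are illustrative.

      import numpy as np
      from scipy.optimize import least_squares

      def dipole_potential(r_elec, r_dip, p, sigma=0.2):
          """Potential of a point current dipole p (A*m) at r_dip in an infinite
          homogeneous medium of conductivity sigma (S/m), at electrode positions."""
          d = r_elec - r_dip
          return (d @ p) / (4.0 * np.pi * sigma * np.linalg.norm(d, axis=1) ** 3)

      def fit_equivalent_dipole(r_elec, v_meas, x0):
          """Single equivalent dipole (location + moment) minimizing the squared
          misfit to the measured electrode potentials."""
          residual = lambda x: dipole_potential(r_elec, x[:3], x[3:]) - v_meas
          return least_squares(residual, x0).x

      # toy usage: 32 electrodes on a 0.1 m sphere, synthetic "measurements"
      rng = np.random.default_rng(1)
      e = rng.normal(size=(32, 3))
      e = 0.1 * e / np.linalg.norm(e, axis=1, keepdims=True)
      v = dipole_potential(e, np.array([0.02, 0.0, 0.01]), np.array([0.0, 0.0, 1e-6]))
      x0 = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 1e-7])      # start near the centre
      x_hat = fit_equivalent_dipole(e, v, x0)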

  12. Using change-point models to estimate empirical critical loads for nitrogen in mountain ecosystems.

    PubMed

    Roth, Tobias; Kohli, Lukas; Rihm, Beat; Meier, Reto; Achermann, Beat

    2017-01-01

    To protect ecosystems and their services, the critical load concept has been implemented under the framework of the Convention on Long-range Transboundary Air Pollution (UNECE) to develop effects-oriented air pollution abatement strategies. Critical loads are thresholds below which damaging effects on sensitive habitats do not occur according to current knowledge. Here we use change-point models applied in a Bayesian context to overcome some of the difficulties when estimating empirical critical loads for nitrogen (N) from empirical data. We tested the method using simulated data with varying sample sizes, varying effects of confounding variables, and with varying negative effects of N deposition on species richness. The method was applied to the national-scale plant species richness data from mountain hay meadows and (sub)alpine scrub sites in Switzerland. Seven confounding factors (elevation, inclination, precipitation, calcareous content, aspect as well as indicator values for humidity and light) were selected based on earlier studies examining numerous environmental factors to explain Swiss vascular plant diversity. The estimated critical load confirmed the existing empirical critical load of 5-15 kg N ha-1 yr-1 for (sub)alpine scrubs, while for mountain hay meadows the estimated critical load was at the lower end of the current empirical critical load range. Based on these results, we suggest narrowing down the critical load range for mountain hay meadows to 10-15 kg N ha-1 yr-1. Copyright © 2016 Elsevier Ltd. All rights reserved.
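
    A minimal (non-Bayesian) sketch of the change-point idea used above: species richness stays flat below an unknown N-deposition threshold and declines linearly above it, and the threshold is found by profiling the residual sum of squares over candidate change points. The confounding covariates used in the study are omitted here, and the data and variable names are made up.

      import numpy as np

      def fit_changepoint(ndep, richness, candidates):
          """Piecewise model richness = a + b * max(ndep - cp, 0) + error; return
          the change point (critical-load estimate) minimizing the residual sum
          of squares, together with the fitted coefficients."""
          best_rss, best_cp, best_beta = np.inf, None, None
          for cp in candidates:
              X = np.column_stack([np.ones_like(ndep), np.maximum(ndep - cp, 0.0)])
              beta, *_ = np.linalg.lstsq(X, richness, rcond=None)
              rss = ((richness - X @ beta) ** 2).sum()
              if rss < best_rss:
                  best_rss, best_cp, best_beta = rss, cp, beta
          return best_cp, best_beta

      # hypothetical survey data: deposition in kg N ha-1 yr-1, plot species richness
      rng = np.random.default_rng(42)
      ndep = rng.uniform(2.0, 30.0, size=200)
      richness = 35.0 - 1.2 * np.maximum(ndep - 12.0, 0.0) + rng.normal(0.0, 3.0, 200)
      cp_hat, coefs = fit_changepoint(ndep, richness, np.arange(4.0, 25.0, 0.5))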

  13. Defining the end-point of mastication: A conceptual model.

    PubMed

    Gray-Stuart, Eli M; Jones, Jim R; Bronlund, John E

    2017-10-01

    The great risks of swallowing are choking and aspiration of food into the lungs. Both are rare in normal functioning humans, which is remarkable given the diversity of foods and the estimated 10 million swallows performed in a lifetime. Nevertheless, it remains a major challenge to define the food properties that are necessary to ensure a safe swallow. Here, the mouth is viewed as a well-controlled processor where mechanical sensory assessment occurs throughout the occlusion-circulation cycle of mastication. Swallowing is a subsequent action. It is proposed here that, during mastication, temporal maps of interfacial property data are generated, which the central nervous system compares against a series of criteria in order to be sure that the bolus is safe to swallow. To determine these criteria, an engineering hazard analysis tool, alongside an understanding of fluid and particle mechanics, is used to deduce the mechanisms by which food may deposit or become stranded during swallowing. These mechanisms define the food properties that must be avoided. By inverting the thinking, from hazards to ensuring safety, six criteria arise which are necessary for a safe-to-swallow bolus. A new conceptual model is proposed to define when food is safe to swallow during mastication. This significantly advances earlier mouth models. The conceptual model proposed in this work provides a framework of decision-making to define when food is safe to swallow. This will be of interest to designers of dietary foods, foods for dysphagia sufferers and will aid the further development of mastication robots for preparation of artificial boluses for digestion research. It enables food designers to influence the swallow-point properties of their products. For example, a product may be designed to satisfy five of the criteria for a safe-to-swallow bolus, which means the sixth criterion and its attendant food properties define the swallow-point. Alongside other organoleptic factors, these

  14. Asymptotic behaviour of two-point functions in multi-species models

    NASA Astrophysics Data System (ADS)

    Kozlowski, Karol K.; Ragoucy, Eric

    2016-05-01

    We extract the long-distance asymptotic behaviour of two-point correlation functions in massless quantum integrable models containing multi-species excitations. For such a purpose, we extend to these models the method of a large-distance regime re-summation of the form factor expansion of correlation functions. The key feature of our analysis is a technical hypothesis on the large-volume behaviour of the form factors of local operators in such models. We check the validity of this hypothesis on the example of the SU (3)-invariant XXX magnet by means of the determinant representations for the form factors of local operators in this model. Our approach confirms the structure of the critical exponents obtained previously for numerous models solvable by the nested Bethe Ansatz.

  15. Nonequilibrium dynamics of the O(N) model on dS3 and AdS crunches

    NASA Astrophysics Data System (ADS)

    Kumar, S. Prem; Vaganov, Vladislav

    2018-03-01

    We study the nonperturbative quantum evolution of the interacting O(N) vector model at large N, formulated on a spatial two-sphere, with time-dependent couplings which diverge at finite time. This model, the so-called "E-frame" theory, is related via a conformal transformation to the interacting O(N) model in three-dimensional global de Sitter spacetime with time-independent couplings. We show that with a purely quartic, relevant deformation the quantum evolution of the E-frame model is regular even when the classical theory is rendered singular at the end of time by the diverging coupling. Time evolution drives the E-frame theory to the large-N Wilson-Fisher fixed point when the classical coupling diverges. We study the quantum evolution numerically for a variety of initial conditions and demonstrate the finiteness of the energy at the classical "end of time". With an additional (time-dependent) mass deformation, quantum backreaction lowers the mass, with a putative smooth time evolution only possible in the limit of infinite quartic coupling. We discuss the relevance of these results for the resolution of crunch singularities in AdS geometries dual to E-frame theories with a classical gravity dual.

  16. A Point-process Response Model for Spike Trains from Single Neurons in Neural Circuits under Optogenetic Stimulation

    PubMed Central

    Luo, X.; Gee, S.; Sohal, V.; Small, D.

    2015-01-01

    Optogenetics is a new tool to study neuronal circuits that have been genetically modified to allow stimulation by flashes of light. We study recordings from single neurons within neural circuits under optogenetic stimulation. The data from these experiments present a statistical challenge of modeling a high frequency point process (neuronal spikes) while the input is another high frequency point process (light flashes). We further develop a generalized linear model approach to model the relationships between two point processes, employing additive point-process response functions. The resulting model, Point-process Responses for Optogenetics (PRO), provides explicit nonlinear transformations to link the input point process with the output one. Such response functions may provide important and interpretable scientific insights into the properties of the biophysical process that governs neural spiking in response to optogenetic stimulation. We validate and compare the PRO model using a real dataset and simulations, and our model yields a superior area-under-the-curve value as high as 93% for predicting every future spike. For our experiment on the recurrent layer V circuit in the prefrontal cortex, the PRO model provides evidence that neurons integrate their inputs in a sophisticated manner. Another use of the model is that it enables understanding how neural circuits are altered under various disease conditions and/or experimental conditions by comparing the PRO parameters. PMID:26411923
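
    A minimal sketch of the kind of point-process GLM this builds on is given below: spike counts in small time bins are regressed, through a Poisson generalized linear model, on lagged copies of the light-flash process. This is only a simplified stand-in for the PRO model (no additive response basis, simulated data, arbitrary parameters), using statsmodels for the fit.

```python
# Minimal sketch of a point-process (Poisson) GLM linking an input point process
# (light flashes) to an output one (spikes) via lagged stimulus covariates.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
T, n_lags = 20_000, 10                  # 1 ms bins, 10 ms of stimulus history
flashes = rng.binomial(1, 0.02, T)      # input point process (binned)

# Simulate spikes whose rate depends on recent flashes (ground-truth filter)
true_filter = 0.8 * np.exp(-np.arange(n_lags) / 3.0)
drive = np.convolve(flashes, true_filter, mode="full")[:T]
spikes = rng.poisson(np.exp(-3.0 + drive))

# Design matrix: spike count in bin t regressed on flashes at lags 0..n_lags-1
X = np.column_stack([np.roll(flashes, k) for k in range(n_lags)])
X[:n_lags] = 0                           # drop wrap-around introduced by np.roll
X = sm.add_constant(X)

fit = sm.GLM(spikes, X, family=sm.families.Poisson()).fit()
print("estimated response function (lags 0..9):")
print(np.round(fit.params[1:], 2))       # should roughly recover true_filter
```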

  17. A point cloud modeling method based on geometric constraints mixing the robust least squares method

    NASA Astrophysics Data System (ADS)

    Yue, JIanping; Pan, Yi; Yue, Shun; Liu, Dapeng; Liu, Bin; Huang, Nan

    2016-10-01

    The advent of 3D laser scanning technology has provided a new, automatic and high-precision method for acquiring spatial 3D information, and it is widely used in surveying and mapping engineering. Processing of 3D laser scanning data mainly includes field data acquisition, in-office registration (splicing) of the laser data, subsequent 3D modeling, and data integration. Point cloud modeling has been studied extensively. Surface reconstruction techniques mainly include point-based representations, triangle meshes, triangular Bézier surface models, rectangular surface models and so on; neural networks and alpha shapes have also been used for curved surface reconstruction. These methods, however, usually focus on fitting single surfaces, automatically or by manual blocking, and ignore the integrity of the model. This leads to serious problems in the stitched model: surfaces fitted separately often fail to satisfy well-known geometric constraints, such as parallelism, perpendicularity, a fixed angle, or a fixed distance. Modeling theory that incorporates dimensional and positional constraints is still not widely used. One traditional modeling method that adds geometric constraints combines the penalty function method with the Levenberg-Marquardt algorithm (L-M algorithm); its stability is good, but in practice the method is strongly influenced by the initial values. In this paper, we propose an improved point cloud modeling method that takes geometric constraints into account. We first apply robust least squares to improve the accuracy of the initial values, then use the penalty function method to transform the constrained optimization problem into an unconstrained one, and finally solve it with the L-M algorithm. The experimental results
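
    The sketch below illustrates the general idea rather than the paper's algorithm: two planar patches are fitted simultaneously, a parallelism constraint is imposed through a penalty term added to the residual vector, plain least squares stands in for the robust initialisation, and scipy's Levenberg-Marquardt solver performs the refinement. The point sets, weights and tolerances are invented for the example.

```python
# Hedged sketch: fit two planes to segmented point clouds while penalising violation of a
# parallelism constraint, then refine with Levenberg-Marquardt (scipy's "lm" method).
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)

def make_plane(normal, d, n=200, noise=0.01):
    normal = np.asarray(normal, float); normal /= np.linalg.norm(normal)
    u = np.cross(normal, [1.0, 0.0, 0.0]); u /= np.linalg.norm(u)   # in-plane directions
    v = np.cross(normal, u)
    st = rng.uniform(-1, 1, (n, 2))
    pts = st[:, :1] * u + st[:, 1:] * v + d * normal
    return pts + noise * rng.normal(size=pts.shape)

P1 = make_plane([0, 0.02, 1], 0.0)          # nominally horizontal patch
P2 = make_plane([0.02, 0, 1], 0.5)          # should be parallel to P1

def plane_init(P):
    # ordinary least squares for z = a*x + b*y + c (stand-in for robust initialisation)
    A = np.c_[P[:, :2], np.ones(len(P))]
    return np.linalg.lstsq(A, P[:, 2], rcond=None)[0]

def residual(x, w_penalty=10.0):
    a1, b1, c1, a2, b2, c2 = x
    r1 = P1[:, 0] * a1 + P1[:, 1] * b1 + c1 - P1[:, 2]
    r2 = P2[:, 0] * a2 + P2[:, 1] * b2 + c2 - P2[:, 2]
    penalty = w_penalty * np.array([a1 - a2, b1 - b2])   # parallelism constraint as penalty
    return np.concatenate([r1, r2, penalty])

x0 = np.r_[plane_init(P1), plane_init(P2)]
fit = least_squares(residual, x0, method="lm")
print("plane slopes (a, b) after constrained refinement:", fit.x[[0, 1]], fit.x[[3, 4]])
```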

  18. A N-S fossil transform fault reactivated by the March 2, 2016 Mw7.8 southwest of Sumatra, Indonesia earthquake

    NASA Astrophysics Data System (ADS)

    Zhang, H.; van der Lee, S.

    2016-12-01

    The Wharton Basin (WB) is characterized by N-S striking fossil transform faults and E-W trending extinct ridges. The 2016 Mw7.8 southwest of Sumatra earthquake, near the WB's center, was first imaged by back-projecting P-waves from three regional seismic networks in Europe, Japan, and Australia. Next, the rupture direction of the earthquake was further constrained by applying rupture directivity analysis to P-waves from the global seismic network (GSN). Finally, we inverted these GSN waveforms on a defined N-S striking vertical fault for a kinematic source model. The results show that the earthquake reactivated a 190-degree-striking, near N-S vertical fossil transform fault and asymmetrically, bilaterally ruptured a 65 km by 30 km asperity over 35 s. Specifically, the earthquake first ruptured bilaterally northward and southward at a speed of 1.0 km/s over the first 12 s, and then mainly ruptured northward at a speed of 1.6 km/s. Compared with two previous M≥7.8 WB earthquakes, the 2000 southern WB earthquake and the 2012 Mw8.6 Sumatra earthquake, the lower seismic energy radiation efficiency and slower rupture velocity of the 2016 earthquake indicate that its rupture was probably controlled by the warmer ambient slab and the tectonic stress regime.

  19. Poisson point process modeling for polyphonic music transcription.

    PubMed

    Peeling, Paul; Li, Chung-fai; Godsill, Simon

    2007-04-01

    Peaks detected in the frequency domain spectrum of a musical chord are modeled as realizations of a nonhomogeneous Poisson point process. When several notes are superimposed to make a chord, the processes for individual notes combine to give another Poisson process, whose likelihood is easily computable. This avoids a data association step linking individual harmonics explicitly with detected peaks in the spectrum. The likelihood function is ideal for Bayesian inference about the unknown note frequencies in a chord. Here, maximum likelihood estimation of fundamental frequencies shows very promising performance on real polyphonic piano music recordings.
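
    A toy version of this likelihood construction is sketched below for a single note: detected peak frequencies are scored against an intensity function with Gaussian bumps at the candidate fundamental's harmonics plus a uniform clutter term, and the fundamental is chosen by maximising the Poisson point-process log-likelihood on a grid. The bump shape, clutter level and single-note restriction are simplifying assumptions, not the paper's model.

```python
# Hedged sketch: spectral peaks as an inhomogeneous Poisson process whose intensity has
# bumps at the harmonics of a candidate fundamental frequency.
import numpy as np

rng = np.random.default_rng(4)
f0_true, n_harm, sigma = 220.0, 8, 2.0
peaks = np.concatenate([f0_true * np.arange(1, n_harm + 1) + rng.normal(0, sigma, n_harm),
                        rng.uniform(100, 2000, 3)])      # spurious peaks (clutter)

F_MAX, CLUTTER = 2000.0, 3.0 / 2000.0                     # clutter intensity (peaks/Hz)

def intensity(f, f0, per_harm=1.0):
    harm = f0 * np.arange(1, n_harm + 1)
    bumps = per_harm * np.exp(-0.5 * ((f[:, None] - harm) / sigma) ** 2) \
            / (sigma * np.sqrt(2 * np.pi))
    return bumps.sum(axis=1) + CLUTTER

def log_likelihood(f0):
    # Poisson process log-likelihood: sum of log-intensities at the peaks minus the integral
    lam = intensity(peaks, f0)
    harmonics_in_range = (f0 * np.arange(1, n_harm + 1) <= F_MAX).sum()
    integral = harmonics_in_range * 1.0 + CLUTTER * F_MAX  # each bump integrates to ~1
    return np.log(lam).sum() - integral

grid = np.arange(80.0, 500.0, 0.25)
f0_hat = grid[np.argmax([log_likelihood(f0) for f0 in grid])]
print(f"estimated fundamental: {f0_hat:.2f} Hz (true {f0_true} Hz)")
```

    For a chord, the intensities of the individual notes would simply be added before evaluating the same likelihood, which is the property the abstract highlights.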

  20. Civil tiltrotor transport point design: Model 940A

    NASA Technical Reports Server (NTRS)

    Rogers, Charles; Reisdorfer, Dale

    1993-01-01

    The objective of this effort is to produce a vehicle layout for the civil tiltrotor wing and center fuselage in sufficient detail to obtain aerodynamic and inertia loads for determining member sizing. This report addresses the parametric configuration and loads definition for a 40 passenger civil tilt rotor transport. A preliminary (point) design is developed for the tiltrotor wing box and center fuselage. This summary report provides all design details used in the pre-design; provides adequate detail to allow a preliminary design finite element model to be developed; and contains guidelines for dynamic constraints.

  1. Normalized Implicit Radial Models for Scattered Point Cloud Data without Normal Vectors

    DTIC Science & Technology

    2009-03-23

    points by shrinking a discrete membrane, Computer Graphics Forum, Vol. 24-4, 2005, pp. 791-808 [8] Floater, M. S., Reimers, M.: Meshless...Parameterization and Surface Reconstruction, Computer Aided Geometric Design 18, 2001, pp. 77-92 [9] Floater, M. S.: Parameterization of Triangulations and...Unorganized Points, In: Tutorials on Multiresolution in Geometric Modelling, A. Iske, E. Quak and M. S. Floater (eds.), Springer, 2002, pp. 287-316 [10

  2. Optimal time points sampling in pathway modelling.

    PubMed

    Hu, Shiyan

    2004-01-01

    Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling as well as in the related parameter estimation. However, few studies give consideration to the issue of optimal sampling-time selection for parameter estimation. Time-course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time consuming and expensive. Therefore, approximating parameters for models from only a few available sampling points is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the selection of time points in an optimal way so as to minimize the variance of parameter estimates. In the method, we first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the difficulties of selecting good initial values and getting stuck in local optima that usually accompany conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.
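
    The underlying design question can be made concrete with a small example. The sketch below chooses sampling times for a toy exponential-decay model by maximising the determinant of the Fisher information (D-optimal design), one classical way of minimising the variance of parameter estimates; it uses exhaustive search over a small grid instead of the quantum-inspired evolutionary algorithm described in the paper, and the model, noise level and grid are invented.

```python
# Hedged sketch: D-optimal selection of sampling times for y(t) = A * exp(-k t) + noise.
import itertools
import numpy as np

A, k, sigma = 2.0, 0.35, 0.05            # toy model parameters and Gaussian noise level

def fisher_information(times):
    t = np.asarray(times, float)
    # sensitivities dy/dA and dy/dk at the chosen time points
    J = np.column_stack([np.exp(-k * t), -A * t * np.exp(-k * t)])
    return J.T @ J / sigma**2

candidates = np.arange(0.5, 12.5, 0.5)   # feasible sampling times
best = max(itertools.combinations(candidates, 4),
           key=lambda ts: np.linalg.det(fisher_information(ts)))
print("D-optimal sampling times:", [round(float(t), 1) for t in best])
```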

  3. Ocean Turbulence I: One-Point Closure Model Momentum and Heat Vertical Diffusivities

    NASA Technical Reports Server (NTRS)

    Canuto, V. M.; Howard, A.; Cheng, Y.; Dubovikov, M. S.

    1999-01-01

    Since the early forties, one-point turbulence closure models have been the canonical tools used to describe turbulent flows in many fields. In geophysics, Mellor and Yamada applied such models using the 1980 state of the art. Since then, no improvements were introduced to alleviate two major difficulties: 1) closure of the pressure correlations, which affects the correct determination of the critical Richardson number Ri_cr above which turbulent mixing is no longer possible, and 2) the need to express the non-local third-order moments (TOM) in terms of lower-order moments rather than via the down-gradient approximation as done thus far, since the latter seriously underestimates the TOMs. Since 1) and 2) are still being dealt with using adjustable parameters, which weaken the credibility of the models, alternative models, not based on turbulence modeling, have been suggested. The aim of this paper is to show that new information, partly derived from the newest 2-point closure model discussed, can be used to solve these shortcomings. The new one-point closure model, which in its simplest form is algebraic and thus simple to implement, is first shown to reproduce a variety of data. Then, it is used in an Ocean General Circulation Model (O-GCM), where it reproduces a large variety of ocean data well. While phenomenological models are specifically tuned to ocean turbulence, the present model is not. It is first tested against laboratory data on stably stratified flows and then used in an O-GCM. It is more general, more predictive and more resilient, e.g., it can incorporate phenomena like wave-breaking at the surface, salinity diffusivity, non-locality, etc. One important feature that naturally comes out of the new model is that the predicted critical Richardson value is Ri_cr ≈ 1, in agreement with both Large Eddy Simulations (LES) and empirical evidence, while all previous models predicted Ri_cr ≈ 0.2, which led to a considerable

  4. s-Process Branch Point (n,γ) Measurements using NIF^*

    NASA Astrophysics Data System (ADS)

    Bernstein, Lee; Bleuel, D. L.; Cerjan, C.; Greife, U.; Hoffman, R. D.; Phair, L.; McEvoy, A.; Moody, K. J.; Schneider, D. H. G.; Shaughnessy, D.; Stoyer, M. A.

    2008-10-01

    The National Ignition Facility (NIF) at LLNL is a laser-driven inertial confinement fusion laboratory designed to compress pellets containing small (<10^20 atoms) samples of material to densities in excess of 100 g/cm^3 and temperatures up to k_BT ≈ 10 keV. Early NIF shots will feature a proton-tritium (HT) fuel mix that creates a neutron spectrum similar to that found in AGB main sequence stars. In this talk I will discuss nuclear physics experiments using NIF and present a plan to measure the ^171Tm(n,γ) s-process branch point cross section in a NIF plasma environment which will include the plasma-induced population of the first excited state at Ex=5.0 keV. *This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract W-7405-Eng-48 and under Contract DE-AC52-07NA27344. For LBNL this work was supported by the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.

  5. Using Model Point Spread Functions to Identify Binary Brown Dwarf Systems

    NASA Astrophysics Data System (ADS)

    Matt, Kyle; Stephens, Denise C.; Lunsford, Leanne T.

    2017-01-01

    A Brown Dwarf (BD) is a celestial object that is not massive enough to undergo hydrogen fusion in its core. BDs can form in pairs called binaries. Due to the great distances between Earth and these BDs, they act as point sources of light, and the angular separation between binary BDs can be small enough that they appear as a single, unresolved object in images, according to the Rayleigh criterion. It is not currently possible to resolve some of these objects into separate light sources. Stephens and Noll (2006) developed a method that used model point spread functions (PSFs) to identify binary Trans-Neptunian Objects; we use this method to identify binary BD systems in the Hubble Space Telescope archive. This method works by comparing model PSFs of single and binary sources to the observed PSFs. We also use a method to compare model spectral data for single and binary fits to determine the best parameter values for each component of the system. We describe these methods, their challenges and other possible uses in this poster.

  6. Quasisaddles as relevant points of the potential energy surface in the dynamics of supercooled liquids

    NASA Astrophysics Data System (ADS)

    Angelani, L.; Di Leonardo, R.; Ruocco, G.; Scala, A.; Sciortino, F.

    2002-06-01

    The supercooled dynamics of a Lennard-Jones model liquid is numerically investigated studying relevant points of the potential energy surface, i.e., the minima of the square gradient of total potential energy V. The main findings are (i) the number of negative curvatures n of these sampled points appears to extrapolate to zero at the mode coupling critical temperature Tc; (ii) the temperature behavior of n(T) has a close relationship with the temperature behavior of the diffusivity; (iii) the potential energy landscape shows a high regularity in the distances among the relevant points and in their energy location. Finally we discuss a model of the landscape, previously introduced by Madan and Keyes [J. Chem. Phys. 98, 3342 (1993)], able to reproduce the previous findings.

  7. LIDAR Point Cloud Data Extraction and Establishment of 3D Modeling of Buildings

    NASA Astrophysics Data System (ADS)

    Zhang, Yujuan; Li, Xiuhai; Wang, Qiang; Liu, Jiang; Liang, Xin; Li, Dan; Ni, Chundi; Liu, Yan

    2018-01-01

    This paper applies Shepard's method to the original LIDAR point cloud data to generate a regular-grid DSM, separates the ground and non-ground point clouds with a double least-squares filter, and obtains a regularised DSM. A region-growing method is then used to segment the regularised DSM and remove non-building points, yielding the building point cloud. The Canny operator is used to extract the building edges from the segmented image, and Hough-transform line detection is used to regularise the extracted edges into smooth, uniform outlines. Finally, E3De3 software is used to establish the 3D model of the buildings.
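
    A condensed sketch of such a chain is given below, with scikit-image standing in for the edge and line extraction steps; a simple height threshold replaces the ground filtering and region-growing stages, and the point cloud, cell size and thresholds are synthetic. It is meant only to illustrate the DSM-rasterisation, Canny and Hough steps named in the abstract, not to reproduce the paper's workflow or the E3De3 modeling step.

```python
# Hedged sketch of a building-extraction chain: point cloud -> DSM grid -> mask -> Canny -> Hough.
import numpy as np
from skimage.feature import canny
from skimage.transform import hough_line, hough_line_peaks

rng = np.random.default_rng(5)

# Synthetic point cloud: flat ground plus one rectangular building, in metres
ground = np.c_[rng.uniform(0, 50, (4000, 2)), rng.normal(0.0, 0.05, 4000)]
bx, by = rng.uniform(15, 35, 1500), rng.uniform(20, 30, 1500)
building = np.c_[bx, by, 8.0 + rng.normal(0, 0.05, 1500)]
points = np.vstack([ground, building])

# Rasterise to a DSM (maximum height per 0.5 m cell)
cell = 0.5
ix = (points[:, 0] / cell).astype(int)
iy = (points[:, 1] / cell).astype(int)
dsm = np.zeros((100, 100))
np.maximum.at(dsm, (ix, iy), points[:, 2])

mask = dsm > 2.0                       # stand-in for ground filtering + region growing
edges = canny(mask.astype(float), sigma=1.0)

h, angles, dists = hough_line(edges)
_, best_angles, best_dists = hough_line_peaks(h, angles, dists, num_peaks=4)
print("detected roof edge lines (angle [rad], distance [cells]):")
for a, d in zip(best_angles, best_dists):
    print(f"  angle={a:+.2f}, dist={d:.1f}")
```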

  8. Large-N solution of the heterotic CP(N-1) model with twisted masses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bolokhov, Pavel A.; Department of Physics and Astronomy, University of Victoria, Victoria, BC, V8P 1A1; Shifman, Mikhail

    2010-07-15

    We address a number of unanswered questions in the N=(0,2)-deformed CP(N-1) model with twisted masses. In particular, we complete the program of solving the CP(N-1) model with twisted masses in the large-N limit. In A. Gorsky, M. Shifman, and A. Yung, Phys. Rev. D 73, 065011 (2006), a nonsupersymmetric version of the model with the Z_N symmetric twisted masses was analyzed in the framework of Witten's method. In M. Shifman and A. Yung, Phys. Rev. D 77, 125017 (2008), this analysis was extended: the large-N solution of the heterotic N=(0,2) CP(N-1) model with no twisted masses was found. Here we solve this model with the twisted masses switched on. Dynamical scenarios at large and small m are studied (m is the twisted-mass scale). We found three distinct phases and two phase transitions on the m plane. Two phases with the spontaneously broken Z_N symmetry are separated by a phase with unbroken Z_N. This latter phase is characterized by a unique vacuum and confinement of all U(1) charged fields ("quarks"). In the broken phases (one of them is at strong coupling) there are N degenerate vacua and no confinement, similarly to the situation in the N=(2,2) model. Supersymmetry is spontaneously broken everywhere except a circle |m| = Λ in the Z_N-unbroken phase. Related issues are considered. In particular, we discuss the mirror representation for the heterotic model in a certain limiting case.

  9. Multiple-Point statistics for stochastic modeling of aquifers, where do we stand?

    NASA Astrophysics Data System (ADS)

    Renard, P.; Julien, S.

    2017-12-01

    In the last 20 years, multiple-point statistics have been a focus of much research, successes and disappointments. The aim of this geostatistical approach was to integrate geological information into stochastic models of aquifer heterogeneity to better represent the connectivity of high or low permeability structures in the underground. Many different algorithms (ENESIM, SNESIM, SIMPAT, CCSIM, QUILTING, IMPALA, DEESSE, FILTERSIM, HYPPS, etc.) have been and are still proposed. They are all based on the concept of a training data set from which spatial statistics are derived and used in a further step to generate conditional realizations. Some of these algorithms evaluate the statistics of the spatial patterns for every pixel, other techniques consider the statistics at the scale of a patch or a tile. While the method clearly succeeded in enabling modelers to generate realistic models, several issues are still the topic of debate both from a practical and theoretical point of view, and some issues such as training data set availability are often hindering the application of the method in practical situations. In this talk, the aim is to present a review of the status of these approaches both from a theoretical and practical point of view using several examples at different scales (from pore network to regional aquifer).

  10. Models for mean bonding length, melting point and lattice thermal expansion of nanoparticle materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Omar, M.S., E-mail: dr_m_s_omar@yahoo.com

    2012-11-15

    Graphical abstract: Three models are derived to explain the nanoparticle-size dependence of mean bonding length, melting temperature and lattice thermal expansion, applied to Sn, Si and Au; the example figures for Sn nanoparticles indicate that the models are highly applicable for nanoparticle radii larger than 3 nm. Highlights: ► A model for a size-dependent mean bonding length is derived. ► The size-dependent melting point of nanoparticles is modified. ► The bulk model for lattice thermal expansion is successfully used on nanoparticles. -- Abstract: A model, based on the ratio of the number of surface atoms to that of the interior, is derived to calculate the size dependence of the lattice volume of nanoscaled materials. The model is applied to Si, Sn and Au nanoparticles. For Si, the lattice volume increases from 20 Å³ for the bulk to 57 Å³ for 2 nm nanocrystals. A model for calculating the melting point of nanoscaled materials is modified by considering the effect of lattice volume. The approach calculates the size-dependent melting point well from the bulk state down to nanoparticles of about 2 nm diameter. Both the lattice volume and melting point obtained for nanosized materials are used to calculate the lattice thermal expansion using a formula applicable to tetrahedral semiconductors. Results for Si change from 3.7 × 10⁻⁶ K⁻¹ for a bulk crystal down to a minimum value of 0.1 × 10⁻⁶ K⁻¹ for a 6 nm diameter nanoparticle.

  11. Tight-binding modeling and low-energy behavior of the semi-Dirac point.

    PubMed

    Banerjee, S; Singh, R R P; Pardo, V; Pickett, W E

    2009-07-03

    We develop a tight-binding model description of semi-Dirac electronic spectra, with highly anisotropic dispersion around point Fermi surfaces, recently discovered in electronic structure calculations of VO2-TiO2 nanoheterostructures. We contrast their spectral properties with the well-known Dirac points on the honeycomb lattice relevant to graphene layers and the spectra of bands touching each other in zero-gap semiconductors. We also consider the lowest order dispersion around one of the semi-Dirac points and calculate the resulting electronic energy levels in an external magnetic field. In spite of apparently similar electronic structures, Dirac and semi-Dirac systems support diverse low-energy physics.

  12. Uncertainty in measurements of the photorespiratory CO2 compensation point and its impact on models of leaf photosynthesis.

    PubMed

    Walker, Berkley J; Orr, Douglas J; Carmo-Silva, Elizabete; Parry, Martin A J; Bernacchi, Carl J; Ort, Donald R

    2017-06-01

    Rates of carbon dioxide assimilation through photosynthesis are readily modeled using the Farquhar, von Caemmerer, and Berry (FvCB) model based on the biochemistry of the initial Rubisco-catalyzed reaction of net C3 photosynthesis. As models of CO2 assimilation rate are used more broadly for simulating photosynthesis among species and across scales, it is increasingly important that their temperature dependencies are accurately parameterized. A vital component of the FvCB model, the photorespiratory CO2 compensation point (Γ*), combines the biochemistry of Rubisco with the stoichiometry of photorespiratory release of CO2. This report details a comparison of the temperature response of Γ* measured using different techniques in three important model and crop species (Nicotiana tabacum, Triticum aestivum, and Glycine max). We determined that the different Γ* determination methods produce different temperature responses in the same species that are large enough to impact higher-scale leaf models of CO2 assimilation rate. These differences are largest in N. tabacum and could be the result of temperature-dependent increases in the amount of CO2 lost from photorespiration per Rubisco oxygenation reaction.

  13. Applying n-bit floating point numbers and integers, and the n-bit filter of HDF5 to reduce file sizes of remote sensing products in memory-sensitive environments

    NASA Astrophysics Data System (ADS)

    Zinke, Stephan

    2017-02-01

    Memory sensitive applications for remote sensing data require memory-optimized data types in remote sensing products. Hierarchical Data Format version 5 (HDF5) offers user defined floating point numbers and integers and the n-bit filter to create data types optimized for memory consumption. The European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) applies a compaction scheme to the disseminated products of the Day and Night Band (DNB) data of Suomi National Polar-orbiting Partnership (S-NPP) satellite's instrument Visible Infrared Imager Radiometer Suite (VIIRS) through the EUMETSAT Advanced Retransmission Service, converting the original 32 bits floating point numbers to user defined floating point numbers in combination with the n-bit filter for the radiance dataset of the product. The radiance dataset requires a floating point representation due to the high dynamic range of the DNB. A compression factor of 1.96 is reached by using an automatically determined exponent size and an 8 bits trailing significand and thus reducing the bandwidth requirements for dissemination. It is shown how the parameters needed for user defined floating point numbers are derived or determined automatically based on the data present in a product.
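
    The effect of keeping only an 8-bit trailing significand can be mimicked with a few lines of numpy, shown below. This is an illustration of the precision/size trade-off only, not the HDF5 n-bit filter itself: in the real product the reduced type is defined with HDF5 user-defined floats and the n-bit filter, which also packs the retained bits on disk, and the exponent size is determined automatically rather than kept at the full 8 bits assumed here.

```python
# Illustrative sketch (not the HDF5 n-bit filter): truncate IEEE 754 single-precision values
# to an 8-bit trailing significand and estimate the resulting error and storage ratio.
import numpy as np

radiance = np.random.default_rng(6).uniform(1e-9, 1e-2, 100_000).astype(np.float32)

KEEP = 8                                    # trailing significand bits to keep (of 23)
bits = radiance.view(np.uint32)
truncated = (bits & ~np.uint32((1 << (23 - KEEP)) - 1)).view(np.float32)

rel_err = np.abs(truncated - radiance) / radiance
stored_bits = 1 + 8 + KEEP                  # sign + full exponent + trailing significand
print(f"max relative error: {rel_err.max():.2e}")
print(f"bits stored per value: {stored_bits} (packing ratio ≈ {32 / stored_bits:.2f})")
```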

  14. On the two-loop divergences of the 2-point hypermultiplet supergraphs for 6D, N = (1 , 1) SYM theory

    NASA Astrophysics Data System (ADS)

    Buchbinder, I. L.; Ivanov, E. A.; Merzlikin, B. S.; Stepanyantz, K. V.

    2018-03-01

    We consider 6D, N = (1 , 1) supersymmetric Yang-Mills theory formulated in N = (1 , 0) harmonic superspace and analyze the structure of the two-loop divergences in the hypermultiplet sector. Using the N = (1 , 0) superfield background field method we study the two-point supergraphs with the hypermultiplet legs and prove that their total contribution to the divergent part of effective action vanishes off shell.

  15. Self-Exciting Point Process Modeling of Conversation Event Sequences

    NASA Astrophysics Data System (ADS)

    Masuda, Naoki; Takaguchi, Taro; Sato, Nobuo; Yano, Kazuo

    Self-exciting processes of Hawkes type have been used to model various phenomena including earthquakes, neural activities, and views of online videos. Studies of temporal networks have revealed that sequences of social interevent times for individuals are highly bursty. We examine some basic properties of event sequences generated by the Hawkes self-exciting process to show that it generates bursty interevent times for a wide parameter range. Then, we fit the model to the data of conversation sequences recorded in company offices in Japan. In this way, we can estimate relative magnitudes of the self excitement, its temporal decay, and the base event rate independent of the self excitation. These variables highly depend on individuals. We also point out that the Hawkes model has an important limitation that the correlation in the interevent times and the burstiness cannot be independently modulated.
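
    A compact illustration of the process in question is given below: a univariate Hawkes process with an exponential kernel is simulated by Ogata's thinning algorithm and the burstiness of its interevent times is summarised with the usual coefficient B = (σ − μ)/(σ + μ). Parameter values are arbitrary; fitting the model to real conversation data, as in the study, would instead maximise the Hawkes likelihood over the base rate, excitation strength and decay.

```python
# Hedged sketch: simulate a univariate Hawkes process (exponential kernel) by Ogata thinning
# and measure the burstiness of its interevent times.
import numpy as np

rng = np.random.default_rng(7)
mu, alpha, beta, T = 0.1, 0.8, 1.0, 2_000.0    # base rate, excitation, decay, time horizon

def intensity(t, events):
    past = events[events < t]
    return mu + alpha * np.exp(-beta * (t - past)).sum()

events, t = [], 0.0
while t < T:
    lam_bar = intensity(t, np.array(events)) + alpha      # valid upper bound just after t
    t += rng.exponential(1.0 / lam_bar)
    if rng.uniform() < intensity(t, np.array(events)) / lam_bar:
        events.append(t)

iet = np.diff(events)
burstiness = (iet.std() - iet.mean()) / (iet.std() + iet.mean())
print(f"{len(events)} events, burstiness B = {burstiness:.2f} (0 for a Poisson process)")
```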

  16. QSAR modeling of cumulative environmental end-points for the prioritization of hazardous chemicals.

    PubMed

    Gramatica, Paola; Papa, Ester; Sangion, Alessandro

    2018-01-24

    The hazard of chemicals in the environment is inherently related to the molecular structure and derives simultaneously from various chemical properties/activities/reactivities. Models based on Quantitative Structure Activity Relationships (QSARs) are useful to screen, rank and prioritize chemicals that may have an adverse impact on humans and the environment. This paper reviews a selection of QSAR models (based on theoretical molecular descriptors) developed for cumulative multivariate endpoints, which were derived by mathematical combination of multiple effects and properties. The cumulative end-points provide an integrated holistic point of view to address environmentally relevant properties of chemicals.

  17. Analysis of 3d Building Models Accuracy Based on the Airborne Laser Scanning Point Clouds

    NASA Astrophysics Data System (ADS)

    Ostrowski, W.; Pilarska, M.; Charyton, J.; Bakuła, K.

    2018-05-01

    Creating 3D building models at large scale is becoming more popular and finds many applications. Nowadays, the broad term "3D building models" can be applied to several types of products: the well-known CityGML solid models (available at a few Levels of Detail), which are mainly generated from Airborne Laser Scanning (ALS) data, as well as 3D mesh models that can be created from both nadir and oblique aerial images. City authorities and national mapping agencies are interested in obtaining such 3D building models. Apart from the completeness of the models, the accuracy aspect is also important. The final accuracy of a building model depends on various factors (accuracy of the source data, complexity of the roof shapes, etc.). In this paper a methodology for inspecting a dataset containing 3D models is presented. The proposed approach checks every building in the dataset against ALS point clouds, testing both accuracy and level of detail. Analysing statistical parameters of the normal heights of the reference point cloud with respect to the tested planes, together with segmentation of the point cloud, provides a tool that can indicate which buildings and which roof planes do not fulfill the requirements of model accuracy and detail correctness. The proposed method was tested on two datasets: a solid and a mesh model.
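
    For a single roof plane, the statistical check described above amounts to summarising point-to-plane residuals. The snippet below is a minimal, hypothetical version of that step with synthetic ALS points and an invented 0.10 m acceptance threshold; the paper's full procedure additionally segments the cloud and aggregates the per-plane results per building.

```python
# Minimal sketch: signed distances of ALS points from one modelled roof plane, plus summary stats.
import numpy as np

def plane_residuals(points, plane_point, plane_normal):
    """Signed point-to-plane distances of ALS points from a modelled roof plane."""
    n = np.asarray(plane_normal, float)
    n /= np.linalg.norm(n)
    return (points - plane_point) @ n

rng = np.random.default_rng(8)
als_points = np.c_[rng.uniform(0, 10, (500, 2)), 5.0 + rng.normal(0, 0.04, 500)]
model_plane_point = np.array([0.0, 0.0, 5.05])   # modelled plane sits 5 cm too high here
model_plane_normal = np.array([0.0, 0.0, 1.0])

r = plane_residuals(als_points, model_plane_point, model_plane_normal)
print(f"bias {r.mean():+.3f} m, std {r.std():.3f} m, RMS {np.sqrt((r**2).mean()):.3f} m")
print("plane fails 0.10 m accuracy requirement" if abs(r.mean()) > 0.10 else "plane passes")
```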

  18. A Hybrid Vortex Sheet / Point Vortex Model for Unsteady Separated Flows

    NASA Astrophysics Data System (ADS)

    Darakananda, Darwin; Eldredge, Jeff D.; Colonius, Tim; Williams, David R.

    2015-11-01

    The control of separated flow over an airfoil is essential for obtaining lift enhancement, drag reduction, and the overall ability to perform high agility maneuvers. In order to develop reliable flight control systems capable of realizing agile maneuvers, we need a low-order aerodynamics model that can accurately predict the force response of an airfoil to arbitrary disturbances and/or actuation. In the present work, we integrate vortex sheets and variable strength point vortices into a method that is able to capture the formation of coherent vortex structures while remaining computationally tractable for control purposes. The role of the vortex sheet is limited to tracking the dynamics of the shear layer immediately behind the airfoil. When parts of the sheet develop into large scale structures, those sections are replaced by variable strength point vortices. We prevent the vortex sheets from growing indefinitely by truncating the tips of the sheets and transferring their circulation into nearby point vortices whenever the length of the sheet exceeds a threshold. We demonstrate the model on a variety of canonical problems, including pitch-up and impulse translation of an airfoil at various angles of attack. Support by the U.S. Air Force Office of Scientific Research (FA9550-14-1-0328) with program manager Dr. Douglas Smith is gratefully acknowledged.

  19. Modeling elephant-mediated cascading effects of water point closure.

    PubMed

    Hilbers, Jelle P; Van Langevelde, Frank; Prins, Herbert H T; Grant, C C; Peel, Mike J S; Coughenour, Michael B; De Knegt, Henrik J; Slotow, Rob; Smit, Izak P J; Kiker, Greg A; De Boer, Willem F

    2015-03-01

    Wildlife management to reduce the impact of wildlife on their habitat can be done in several ways, among which removing animals (by either culling or translocation) is most often used. There are, however, alternative ways to control wildlife densities, such as opening or closing water points. The effects of these alternatives are poorly studied. In this paper, we focus on manipulating large herbivores through the closure of water points (WPs). Removal of artificial WPs has been suggested in order to change the distribution of African elephants, which occur in high densities in national parks in Southern Africa and are thought to have a destructive effect on the vegetation. Here, we modeled the long-term effects of different scenarios of WP closure on the spatial distribution of elephants, and consequential effects on the vegetation and other herbivores in Kruger National Park, South Africa. Using a dynamic ecosystem model, SAVANNA, scenarios were evaluated that varied in availability of artificial WPs; levels of natural water; and elephant densities. Our modeling results showed that elephants can indirectly negatively affect the distributions of meso-mixed feeders, meso-browsers, and some meso-grazers under wet conditions. The closure of artificial WPs hardly had any effect during these natural wet conditions. Under dry conditions, the spatial distribution of both elephant bulls and cows changed when the availability of artificial water was severely reduced in the model. These changes in spatial distribution triggered changes in the spatial availability of woody biomass over the simulation period of 80 years, and this led to changes in the rest of the herbivore community, resulting in increased densities of all herbivores, except for giraffe and steenbok, in areas close to rivers. The spatial distributions of elephant bulls and cows showed to be less affected by the closure of WPs than most of the other herbivore species. Our study contributes to ecologically

  20. Multistate Landau-Zener models with all levels crossing at one point

    DOE PAGES

    Li, Fuxiang; Sun, Chen; Chernyak, Vladimir Y.; ...

    2017-08-04

    Within this paper, we discuss common properties and reasons for integrability in the class of multistate Landau-Zener models with all diabatic levels crossing at one point. Exploring the Stokes phenomenon, we show that each previously solved model has a dual one, whose scattering matrix can be also obtained analytically. For applications, we demonstrate how our results can be used to study conversion of molecular into atomic Bose condensates during passage through the Feshbach resonance, and provide purely algebraic solutions of the bowtie and special cases of the driven Tavis-Cummings model.

  1. 33 CFR 167.452 - In the Santa Barbara Channel: Between Point Conception and Point Arguello.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ....452 In the Santa Barbara Channel: Between Point Conception and Point Arguello. (a) A separation zone... 120°30.16′ W. 34°18.90′ N 120°30.96′ W. 34°25.70′ N 120°51.81′ W. 34°23.75′ N 120°52.51′ W. (b) A traffic lane for westbound traffic is established between the separation zone and a line connecting the...

  2. 33 CFR 167.452 - In the Santa Barbara Channel: Between Point Conception and Point Arguello.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ....452 In the Santa Barbara Channel: Between Point Conception and Point Arguello. (a) A separation zone... 120°30.16′ W. 34°18.90′ N 120°30.96′ W. 34°25.70′ N 120°51.81′ W. 34°23.75′ N 120°52.51′ W. (b) A traffic lane for westbound traffic is established between the separation zone and a line connecting the...

  3. 33 CFR 167.452 - In the Santa Barbara Channel: Between Point Conception and Point Arguello.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ....452 In the Santa Barbara Channel: Between Point Conception and Point Arguello. (a) A separation zone... 120°30.16′ W. 34°18.90′ N 120°30.96′ W. 34°25.70′ N 120°51.81′ W. 34°23.75′ N 120°52.51′ W. (b) A traffic lane for westbound traffic is established between the separation zone and a line connecting the...

  4. 33 CFR 167.452 - In the Santa Barbara Channel: Between Point Conception and Point Arguello.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ....452 In the Santa Barbara Channel: Between Point Conception and Point Arguello. (a) A separation zone... 120°30.16′ W. 34°18.90′ N 120°30.96′ W. 34°25.70′ N 120°51.81′ W. 34°23.75′ N 120°52.51′ W. (b) A traffic lane for westbound traffic is established between the separation zone and a line connecting the...

  5. 33 CFR 167.452 - In the Santa Barbara Channel: Between Point Conception and Point Arguello.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ....452 In the Santa Barbara Channel: Between Point Conception and Point Arguello. (a) A separation zone... 120°30.16′ W. 34°18.90′ N 120°30.96′ W. 34°25.70′ N 120°51.81′ W. 34°23.75′ N 120°52.51′ W. (b) A traffic lane for westbound traffic is established between the separation zone and a line connecting the...

  6. Rupture process of large earthquakes in the northern Mexico subduction zone

    NASA Astrophysics Data System (ADS)

    Ruff, Larry J.; Miller, Angus D.

    1994-03-01

    The Cocos plate subducts beneath North America at the Mexico trench. The northernmost segment of this trench, between the Orozco and Rivera fracture zones, has ruptured in a sequence of five large earthquakes from 1973 to 1985; the Jan. 30, 1973 Colima event ( M s 7.5) at the northern end of the segment near Rivera fracture zone; the Mar. 14, 1979 Petatlan event ( M s 7.6) at the southern end of the segment on the Orozco fracture zone; the Oct. 25, 1981 Playa Azul event ( M s 7.3) in the middle of the Michoacan “gap”; the Sept. 19, 1985 Michoacan mainshock ( M s 8.1); and the Sept. 21, 1985 Michoacan aftershock ( M s 7.6) that reruptured part of the Petatlan zone. Body wave inversion for the rupture process of these earthquakes finds the best: earthquake depth; focal mechanism; overall source time function; and seismic moment, for each earthquake. In addition, we have determined spatial concentrations of seismic moment release for the Colima earthquake, and the Michoacan mainshock and aftershock. These spatial concentrations of slip are interpreted as asperities; and the resultant asperity distribution for Mexico is compared to other subduction zones. The body wave inversion technique also determines the Moment Tensor Rate Functions; but there is no evidence for statistically significant changes in the moment tensor during rupture for any of the five earthquakes. An appendix describes the Moment Tensor Rate Functions methodology in detail. The systematic bias between global and regional determinations of epicentral locations in Mexico must be resolved to enable plotting of asperities with aftershocks and geographic features. We have spatially “shifted” all of our results to regional determinations of epicenters. The best point source depths for the five earthquakes are all above 30 km, consistent with the idea that the down-dip edge of the seismogenic plate interface in Mexico is shallow compared to other subduction zones. Consideration of uncertainties in

  7. Creative use of pilot points to address site and regional scale heterogeneity in a variable-density model

    USGS Publications Warehouse

    Dausman, Alyssa M.; Doherty, John; Langevin, Christian D.

    2010-01-01

    Pilot points for parameter estimation were creatively used to address heterogeneity at both the well field and regional scales in a variable-density groundwater flow and solute transport model designed to test multiple hypotheses for upward migration of fresh effluent injected into a highly transmissive saline carbonate aquifer. Two sets of pilot points were used within multiple model layers, with one set of inner pilot points (totaling 158) having high spatial density to represent hydraulic conductivity at the site, while a second set of outer points (totaling 36) of lower spatial density was used to represent hydraulic conductivity further from the site. Use of a lower spatial density outside the site allowed (1) the total number of pilot points to be reduced while maintaining flexibility to accommodate heterogeneity at different scales, and (2) development of a model with greater areal extent in order to simulate proper boundary conditions that have a limited effect on the area of interest. The parameters associated with the inner pilot points were log transformed hydraulic conductivity multipliers of the conductivity field obtained by interpolation from outer pilot points. The use of this dual inner-outer scale parameterization (with inner parameters constituting multipliers for outer parameters) allowed smooth transition of hydraulic conductivity from the site scale, where greater spatial variability of hydraulic properties exists, to the regional scale where less spatial variability was necessary for model calibration. While the model is highly parameterized to accommodate potential aquifer heterogeneity, the total number of pilot points is kept at a minimum to enable reasonable calibration run times.
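
    The dual inner-outer parameterization can be sketched in a few lines: a coarse conductivity field is interpolated from the outer pilot points, and inside the site it is scaled by 10 raised to log-multipliers interpolated from the dense inner points. The sketch below uses scipy interpolation with invented coordinates and values, matching the 36/158 split only for flavour; it is not the calibrated model described in the paper.

```python
# Hedged sketch of a dual-scale pilot-point parameterisation (illustrative values only).
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(9)

# Outer (regional) pilot points: coarse log10-K field over a 10 km x 10 km domain
outer_xy = rng.uniform(0, 10_000, (36, 2))
outer_log10_k = rng.normal(2.0, 0.3, 36)              # log10 of K (highly transmissive aquifer)

# Inner (site) pilot points: dense log10 multipliers near the injection site
site_centre = np.array([5_000.0, 5_000.0])
inner_xy = site_centre + rng.uniform(-500, 500, (158, 2))
inner_log10_mult = rng.normal(0.0, 0.2, 158)          # would be estimated during calibration

# Model grid
gx, gy = np.meshgrid(np.linspace(0, 10_000, 200), np.linspace(0, 10_000, 200))
grid = np.c_[gx.ravel(), gy.ravel()]

log10_k = griddata(outer_xy, outer_log10_k, grid, method="linear")
log10_k = np.where(np.isnan(log10_k),
                   griddata(outer_xy, outer_log10_k, grid, method="nearest"), log10_k)

# Multipliers: interpolated inside the site, 0 (i.e. factor 1) away from it
mult = griddata(inner_xy, inner_log10_mult, grid, method="linear")
mult = np.nan_to_num(mult, nan=0.0)

k_field = 10 ** (log10_k + mult)
print("K range over grid: %.1f to %.1f" % (k_field.min(), k_field.max()))
```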

  8. Emergence of gravity, fermion, gauge and Chern-Simons fields during formation of N-dimensional manifolds from joining point-like ones

    NASA Astrophysics Data System (ADS)

    Sepehri, Alireza; Shoorvazi, Somayyeh

    In this paper, we consider the birth and evolution of fields during the formation of N-dimensional manifolds from joining point-like ones. We show that at the beginning there are only point-like manifolds to which some strings are attached. By joining these manifolds, one-dimensional manifolds appear and gravity, fermion, and gauge fields emerge. By coupling these manifolds, higher-dimensional manifolds are produced and higher orders of the fermion, gauge and gravity fields emerge. When an N-dimensional manifold decays, two child manifolds and a Chern-Simons one are born and an anomaly emerges. The Chern-Simons manifold connects the two child manifolds and leads to energy transmission from the bulk to the manifolds and to their expansion. We show that F-gravity can emerge during the formation of an N-dimensional manifold from point-like manifolds. This type of F-gravity includes both fermionic and bosonic gravity. G-fields, and also C-fields produced by fermionic strings, produce extra energy and change the gravity.

  9. Virtual and Printed 3D Models for Teaching Crystal Symmetry and Point Groups

    ERIC Educational Resources Information Center

    Casas, Lluís; Estop, Eugènia

    2015-01-01

    Both, virtual and printed 3D crystal models can help students and teachers deal with chemical education topics such as symmetry and point groups. In the present paper, two freely downloadable tools (interactive PDF files and a mobile app) are presented as examples of the application of 3D design to study point-symmetry. The use of 3D printing to…

  10. Is Slow Slip a Cause or a Result of Tremor?

    NASA Astrophysics Data System (ADS)

    Luo, Y.; Ampuero, J. P.

    2017-12-01

    While various modeling efforts have been conducted to reproduce subsets of observations of tremor and slow-slip events (SSE), a fundamental but yet unanswered question is whether slow slip is a cause or a result of tremor. Tremor is commonly regarded as driven by SSE. This view is mainly based on observations of SSE without detected tremors and on (frequency-limited) estimates of total tremor seismic moment being lower than 1% of their concomitant SSE moment. In previous studies we showed that models of heterogeneous faults, composed of seismic asperities embedded in an aseismic fault zone matrix, reproduce quantitatively the hierarchical patterns of tremor migration observed in Cascadia and Shikoku. To address the title question, we design two end-member models of a heterogeneous fault. In the SSE-driven-tremor model, slow slip events are spontaneously generated by the matrix (even in the absence of seismic asperities) and drive tremor. In the Tremor-driven-SSE model the matrix is stable (it slips steadily in the absence of asperities) and slow slip events result from the collective behavior of tremor asperities interacting via transient creep (local afterslip fronts). We study these two end-member models through 2D quasi-dynamic multi-cycle simulations of faults governed by rate-and-state friction with heterogeneous frictional properties and effective normal stress, using the earthquake simulation software QDYN (https://zenodo.org/record/322459). We find that both models reproduce first-order observations of SSE and tremor and have very low seismic to aseismic moment ratio. However, the Tremor-driven-SSE model assumes a simpler rheology than the SSE-driven-tremor model and matches key observations better and without fine tuning, including the ratio of propagation speeds of forward SSE and rapid tremor reversals and the decay of inter-event times of Low Frequency Earthquakes. These modeling results indicate that, in contrast to a common view, SSE could be a result

  11. Hole-ness of point clouds

    NASA Astrophysics Data System (ADS)

    Gronz, Oliver; Seeger, Manuel; Klaes, Björn; Casper, Markus C.; Ries, Johannes B.

    2015-04-01

    Accurate and dense 3D models of soil surfaces can be used in various ways: They can be used as initial shapes for erosion models. They can be used as benchmark shapes for erosion model outputs. They can be used to derive metrics, such as random roughness... One easy and low-cost method to produce these models is structure from motion (SfM). Using this method, two questions arise: Does the soil moisture, which changes the colour, albedo and reflectivity of the soil, influence the model quality? How can the model quality be evaluated? To answer these questions, a suitable data set has been produced: soil has been placed on a tray and areas with different roughness structures have been formed. For different moisture states - dry, medium, saturated - and two different lighting conditions - direct and indirect - sets of high-resolution images at the same camera positions have been taken. From the six image sets, 3D point clouds have been produced using VisualSfM. The visual inspection of the 3D models showed that all models have different areas, where holes of different sizes occur. But it is obviously a subjective task to determine the model's quality by visual inspection. One typical approach to evaluate model quality objectively is to estimate the point density on a regular, two-dimensional grid: the number of 3D points in each grid cell projected on a plane is calculated. This works well for surfaces that do not show vertical structures. Along vertical structures, many points will be projected on the same grid cell and thus the point density rather depends on the shape of the surface but less on the quality of the model. Another approach has been applied by using the points resulting from Poisson Surface Reconstructions. One of this algorithm's properties is the filling of holes: new points are interpolated inside the holes. Using the original 3D point cloud and the interpolated Poisson point set, two analyses have been performed: For all Poisson points, the
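
    The grid-based density check described here is easy to sketch; the snippet below projects a synthetic point cloud onto a regular 2D grid and reports the mean cell density together with the fraction of empty cells as a crude "hole-ness" score. Following the abstract's caveat, this projection is only meaningful where the surface has no strong vertical structure, and all sizes and thresholds here are invented.

```python
# Sketch of a grid-based "hole-ness" metric for a (projected) point cloud.
import numpy as np

def holeness(points_xy, extent, n_cells=100):
    (xmin, xmax), (ymin, ymax) = extent
    counts, _, _ = np.histogram2d(points_xy[:, 0], points_xy[:, 1],
                                  bins=n_cells, range=[[xmin, xmax], [ymin, ymax]])
    return counts, (counts == 0).mean()

rng = np.random.default_rng(10)
cloud = rng.uniform(0, 1, (50_000, 2))
cloud = cloud[~((cloud[:, 0] > 0.4) & (cloud[:, 0] < 0.5) & (cloud[:, 1] > 0.4))]  # cut a hole

counts, frac_empty = holeness(cloud, extent=((0, 1), (0, 1)))
print(f"mean density {counts.mean():.1f} pts/cell, empty cells: {100 * frac_empty:.1f}%")
```

    Comparing such scores between the original cloud and the points produced by a Poisson surface reconstruction, as the abstract suggests, would then quantify how much of the surface the reconstruction had to interpolate.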

  12. Equilibrium points of the tilted perfect fluid Bianchi VIh state space

    NASA Astrophysics Data System (ADS)

    Apostolopoulos, Pantelis S.

    2005-05-01

    We present the full set of evolution equations for the spatially homogeneous cosmologies of type VIh filled with a tilted perfect fluid and we provide the corresponding equilibrium points of the resulting dynamical state space. It is found that a self-similar solution exists only when the group parameter satisfies h > -1. In particular we show that for h > -1/9 there exists a self-similar equilibrium point provided that γ ∈ (2(3+√(-h))/(5+3√(-h)), 3/2), whereas for h < -1/9 the state parameter belongs to the interval γ ∈ (1, 2(3+√(-h))/(5+3√(-h))). This family of new exact self-similar solutions belongs to the subclass n^α_α = 0 having non-zero vorticity. In both cases the equilibrium points have a six-dimensional stable manifold and may act as future attractors, at least for the models satisfying n^α_α = 0. Also we give the exact form of the self-similar metrics in terms of the state and group parameter. As an illustrative example we provide the explicit form of the corresponding self-similar radiation model (γ = 4/3), parametrised by the group parameter h. Finally we show that there are no tilted self-similar models of type III and no irrotational models of type VIh.

  13. Fractal modeling of fluidic leakage through metal sealing surfaces

    NASA Astrophysics Data System (ADS)

    Zhang, Qiang; Chen, Xiaoqian; Huang, Yiyong; Chen, Yong

    2018-04-01

    This paper investigates the fluidic leak rate through metal sealing surfaces by developing fractal models for the contact process and the leakage process. An improved model is established to describe the seal-contact interface of two rough metal surfaces. The contact model divides the deformed regions by classifying the asperities of different characteristic lengths into the elastic, elastic-plastic and plastic regimes. Using the improved contact model, the leakage channel under the contact surface is mathematically modeled based on fractal theory. The leakage model obtains the leak rate using fluid transport theory in porous media, considering that the pore-forming percolation channels can be treated as a combination of filled tortuous capillaries. The effects of fractal structure, surface material and gasket size on the contact process and leakage process are analyzed through numerical simulations for sealed ring gaskets.

  14. Cubature/ Unscented/ Sigma Point Kalman Filtering with Angular Measurement Models

    DTIC Science & Technology

    2015-07-06

    Cubature/ Unscented/ Sigma Point Kalman Filtering with Angular Measurement Models David Frederic Crouse Naval Research Laboratory 4555 Overlook Ave...measurement and process nonlinearities, such as the cubature Kalman filter, can perform extremely poorly in many applications involving angular... Kalman filtering is a realization of the best linear unbiased estimator (BLUE) that evaluates certain integrals for expected values using different forms
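
    The core difficulty hinted at in this fragment, and one common remedy, can be shown in a few lines: sigma points propagated through a bearing measurement wrap around at ±π, so averaging the transformed points naively biases the predicted measurement, while averaging them as directions (via sines and cosines) does not. The sketch below is a generic unscented-transform illustration with invented numbers, not the report's filter.

```python
# Hedged sketch: unscented-transform sigma points pushed through a bearing measurement,
# averaged naively vs. as directions to avoid the +/- pi wrap-around problem.
import numpy as np

def sigma_points(x, P, kappa=1.0):
    n = len(x)
    S = np.linalg.cholesky((n + kappa) * P)
    pts = np.vstack([x, x + S.T, x - S.T])                # 2n + 1 sigma points
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return pts, w

def bearing(xy):
    return np.arctan2(xy[..., 1], xy[..., 0])

x = np.array([-10.0, 0.2])                # target position near the +/- pi discontinuity
P = np.diag([4.0, 4.0])

pts, w = sigma_points(x, P)
z = bearing(pts)

naive_mean = w @ z                                         # wraps badly near +/- pi
circular_mean = np.arctan2(w @ np.sin(z), w @ np.cos(z))   # direction-aware average
print(f"naive mean bearing:    {naive_mean:+.3f} rad")
print(f"circular mean bearing: {circular_mean:+.3f} rad (true ~ {bearing(x):+.3f})")
```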

  15. Modeling and Visualization Process of the Curve of Pen Point by GeoGebra

    ERIC Educational Resources Information Center

    Aktümen, Muharem; Horzum, Tugba; Ceylan, Tuba

    2013-01-01

    This study describes the mathematical construction of a real-life model by means of parametric equations, as well as the two- and three-dimensional visualization of the model using the software GeoGebra. The model was initially considered as "determining the parametric equation of the curve formed on a plane by the point of a pen, positioned…

  16. Accuracy analysis of point cloud modeling for evaluating concrete specimens

    NASA Astrophysics Data System (ADS)

    D'Amico, Nicolas; Yu, Tzuyang

    2017-04-01

    Photogrammetric methods such as structure from motion (SFM) have the capability to acquire accurate information about geometric features, surface cracks, and mechanical properties of specimens and structures in civil engineering. Conventional approaches to verifying the accuracy of photogrammetric models usually require the use of other optical techniques such as LiDAR. In this paper, the geometric accuracy of photogrammetric modeling is investigated by studying the effects of the number of photos, radius of curvature, and point cloud density (PCD) on estimated lengths, areas, volumes, and different stress states of concrete cylinders and panels. Four plain concrete cylinders and two plain mortar panels were used for the study. A commercially available mobile phone camera was used in collecting all photographs. Agisoft PhotoScan software was applied in photogrammetric modeling of all concrete specimens. From our results, it was found that increasing the number of photos does not necessarily improve the geometric accuracy of point cloud models (PCM). It was also found that the effect of radius of curvature is not significant compared with those of the number of photos and the PCD. A PCD threshold of 15.7194 pts/cm3 is proposed for constructing reliable and accurate PCM for condition assessment. At this PCD threshold, all errors in estimating lengths, areas, and volumes were less than 5%. Finally, from the study of the mechanical response of a plain concrete cylinder, we found that an increase of the stress level inside the concrete cylinder can be captured by the increase of radial strain in its PCM.

  17. A model of the 8-25 micron point source infrared sky

    NASA Technical Reports Server (NTRS)

    Wainscoat, Richard J.; Cohen, Martin; Volk, Kevin; Walker, Helen J.; Schwartz, Deborah E.

    1992-01-01

    We present a detailed model for the IR point-source sky that comprises geometrically and physically realistic representations of the Galactic disk, bulge, stellar halo, spiral arms (including the 'local arm'), molecular ring, and the extragalactic sky. We represent each of the distinct Galactic components by up to 87 types of Galactic source, each fully characterized by scale heights, space densities, and absolute magnitudes at BVJHK, 12, and 25 microns. The model is guided by a parallel Monte Carlo simulation of the Galaxy at 12 microns. The content of our Galactic source table constitutes a good match to the 12 micron luminosity function in the simulation, as well as to the luminosity functions at V and K. We are able to produce differential and cumulative IR source counts for any bandpass lying fully within the IRAS Low-Resolution Spectrometer's range (7.7-22.7 microns as well as for the IRAS 12 and 25 micron bands. These source counts match the IRAS observations well. The model can be used to predict the character of the point source sky expected for observations from IR space experiments.

  18. Novel Monitoring Techniques for Characterizing Frictional Interfaces in the Laboratory

    PubMed Central

    Selvadurai, Paul A.; Glaser, Steven D.

    2015-01-01

    A pressure-sensitive film was used to characterize the asperity contacts along a polymethyl methacrylate (PMMA) interface in the laboratory. The film has structural health monitoring (SHM) applications for flanges and other precision fittings, and for train rail condition monitoring. To calibrate the film, simple spherical indentation tests were performed and validated against a finite element model (FEM) to compare normal stress profiles. Experimental measurements of the normal stress profiles were within −7.7% to 6.6% of the numerical calculations between 12 and 50 MPa asperity normal stress. The film also possessed the capability of quantifying surface roughness, an important parameter when examining wear and attrition in SHM applications. A high definition video camera supplied data for photometric analysis (i.e., the measure of visible light) of asperities along the PMMA-PMMA interface in a direct shear configuration, taking advantage of the transparent nature of the sample material. Normal stress over individual asperities, calculated with the pressure-sensitive film, was compared to the light intensity transmitted through the interface. We found that the luminous intensity transmitted through individual asperities increased linearly by 0.05643 ± 0.0012 candelas per 1 MPa increase in normal stress, over normal stresses ranging from 23 to 33 MPa. PMID:25923930
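
    A minimal sketch of the kind of linear calibration implied above, assuming synthetic intensity/stress pairs and reusing the reported 0.05643 cd/MPa sensitivity purely for illustration (this is not the authors' photometric processing):

        import numpy as np

        # Hypothetical calibration data: normal stress (MPa) applied to an asperity
        # and the luminous intensity (cd) transmitted through it.
        stress_mpa = np.array([23, 25, 27, 29, 31, 33], dtype=float)
        intensity_cd = 0.05643 * stress_mpa + 0.10  # synthetic, slope from the abstract

        # Fit intensity = slope * stress + offset by ordinary least squares.
        slope, offset = np.polyfit(stress_mpa, intensity_cd, deg=1)

        def stress_from_intensity(i_cd: float) -> float:
            """Invert the calibration to estimate asperity normal stress from light intensity."""
            return (i_cd - offset) / slope

        print(f"fitted sensitivity: {slope:.5f} cd/MPa")
        print(f"estimated stress at 1.80 cd: {stress_from_intensity(1.80):.1f} MPa")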

  19. Finite element based N-Port model for preliminary design of multibody systems

    NASA Astrophysics Data System (ADS)

    Sanfedino, Francesco; Alazard, Daniel; Pommier-Budinger, Valérie; Falcoz, Alexandre; Boquet, Fabrice

    2018-02-01

    This article presents and validates a general framework to build a linear dynamic Finite Element-based model of large flexible structures for integrated Control/Structure design. An extension of the Two-Input Two-Output Port (TITOP) approach is developed here. The authors had already proposed such a framework for simple beam-like structures: each beam was considered as a TITOP sub-system that could be interconnected to another beam thanks to the ports. The present work studies bodies with multiple attachment points by allowing complex interconnections among several sub-structures in a tree-like assembly. The TITOP approach is extended to generate NINOP (N-Input N-Output Port) models. A Matlab toolbox is developed integrating beam and bending plate elements. In particular, a NINOP formulation of bending plates is proposed to solve analytic two-dimensional problems. The computation of NINOP models using the outputs of an MSC/Nastran modal analysis is also investigated in order to directly use the results provided by commercial finite element software. The main advantage of this tool is to provide a model of a multibody system in the form of a block diagram with a minimal number of states. This model is easy to use for preliminary design and control. An illustrative example highlights the potential of the proposed approach: the synthesis of the dynamical model of a spacecraft with two deployable and flexible solar arrays.

  20. A maximally particle-hole asymmetric spectrum emanating from a semi-Dirac point

    NASA Astrophysics Data System (ADS)

    Quan, Yundi; Pickett, Warren E.

    2018-02-01

    Tight binding models have proven an effective means of revealing Dirac (massless) dispersion, flat bands (infinite mass), and intermediate cases such as the semi-Dirac (sD) dispersion. This approach is extended to a three band model that yields, with chosen parameters in a two-band limit, a closed line with maximally asymmetric particle-hole dispersion: infinite mass holes, zero mass particles. The model retains the sD points for a general set of parameters. Adjacent to this limiting case, hole Fermi surfaces are tiny and needle-like. A pair of large electron Fermi surfaces at low doping merge and collapse at half filling to a flat (zero energy) closed contour with infinite mass along the contour and enclosing no carriers on either side, while the hole Fermi surface has shrunk to a point at zero energy, also containing no carriers. The tight binding model is used to study several characteristics of the dispersion and density of states. The model inspired a generalization of the sD dispersion to a general ±\sqrt{k_x^{2n} + k_y^{2m}} form, for which analysis reveals that both n and m must be odd to provide a diabolical point with topological character. Evolution of the Hofstadter spectrum of this three band system with interband coupling strength is presented and discussed.
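
    A small numerical sketch of the generalized dispersion quoted above, ±\sqrt{k_x^{2n} + k_y^{2m}}; this is only the limiting two-band form evaluated on a grid, not the authors' three-band tight-binding model:

        import numpy as np

        def generalized_sd_dispersion(kx, ky, n=1, m=1):
            """Generalized semi-Dirac-like dispersion E_± = ±sqrt(kx**(2n) + ky**(2m)).

            n = m = 1 recovers an isotropic Dirac cone; n = 1, m = 2 (or vice versa)
            gives the semi-Dirac case: linear in one direction, quadratic in the other.
            """
            radicand = kx ** (2 * n) + ky ** (2 * m)
            e_plus = np.sqrt(radicand)
            return e_plus, -e_plus

        # Evaluate on a small k-grid around the band-touching point.
        k = np.linspace(-1.0, 1.0, 201)
        KX, KY = np.meshgrid(k, k)
        E_up, E_dn = generalized_sd_dispersion(KX, KY, n=1, m=2)
        print("gap at k = 0:", E_up[100, 100] - E_dn[100, 100])  # 0: the bands touch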

  1. van der Waals model for the surface tension of liquid 4He near the λ point

    NASA Astrophysics Data System (ADS)

    Tavan, Paul; Widom, B.

    1983-01-01

    We develop a phenomenological model of the 4He liquid-vapor interface. With it we calculate the surface tension of liquid helium near the λ point and compare with the experimental measurements by Magerlein and Sanders. The model is a form of the van der Waals surface-tension theory, extended to apply to a phase equilibrium in which the simultaneous variation of two order parameters (here the superfluid order parameter and the total density) is essential. The properties of the model are derived analytically above the λ point and numerically below it. Just below the λ point the superfluid order parameter is found to approach its bulk-superfluid-phase value very slowly with distance on the liquid side of the interface (the characteristic distance being the superfluid coherence length), and to vanish rapidly with distance on the vapor side, while the total density approaches its bulk-phase values rapidly and nearly symmetrically on the two sides. Below the λ point the surface tension has a |ɛ|^{3/2} singularity (ɛ ~ T-Tλ) arising from the temperature dependence of the spatially varying superfluid order parameter. This is the mean-field form of the more general |ɛ|^μ singularity predicted by Sobyanin and by Hohenberg, in which μ (which is in reality close to 1.35 at the λ point of helium) is the exponent with which the interfacial tension between two critical phases vanishes. Above the λ point the surface tension in this model is analytic in ɛ. A singular term |ɛ|^μ may in reality be present in the surface tension above as well as below the λ point, although there should still be a pronounced asymmetry. The variation with temperature of the model surface tension is overall much like that in experiment.

  2. Estimating animal resource selection from telemetry data using point process models

    USGS Publications Warehouse

    Johnson, Devin S.; Hooten, Mevin B.; Kuhn, Carey E.

    2013-01-01

    To demonstrate the analysis of telemetry data with the point process approach, we analysed a data set of telemetry locations from northern fur seals (Callorhinus ursinus) in the Pribilof Islands, Alaska. Both a space–time and an aggregated space-only model were fitted. At the individual level, the space–time analysis showed little selection relative to the habitat covariates. However, at the study area level, the space-only model showed strong selection relative to the covariates.

  3. Point model equations for neutron correlation counting: Extension of Böhnel's equations to any order

    DOE PAGES

    Favalli, Andrea; Croft, Stephen; Santi, Peter

    2015-06-15

    Various methods of autocorrelation neutron analysis may be used to extract information about a measurement item containing spontaneously fissioning material. The two predominant approaches are the time correlation analysis methods (which make use of a coincidence gate) of multiplicity shift register logic and Feynman sampling. The common feature is that the correlated nature of the pulse train can be described by a vector of reduced factorial multiplet rates; we call these singlets, doublets, triplets, etc. Within the point reactor model the multiplet rates may be related to the properties of the item, the parameters of the detector, and basic nuclear data constants by a series of coupled algebraic equations, the so-called point model equations. Solving, or inverting, the point model equations using experimental calibration model parameters is how assays of unknown items are performed. Currently only the first three multiplets are routinely used. In this work we develop the point model equations to higher-order multiplets using the probability generating function approach combined with the general derivative chain rule, the so-called Faà di Bruno formula. Explicit expressions up to 5th order are provided, as well as the general iterative formula to calculate any order. This study represents the first necessary step towards determining if higher-order multiplets can add value to nondestructive measurement practice for nuclear materials control and accountancy.
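
    To illustrate the generating-function machinery (not the point model equations themselves), the sketch below extracts reduced factorial moments, the singlet, doublet, triplet, ... rates, by repeated differentiation of a stand-in Poisson probability generating function in sympy:

        import sympy as sp

        z = sp.symbols("z")
        mu = sp.symbols("mu", positive=True)

        # Illustrative probability generating function of counts in a gate:
        # a Poisson PGF with mean mu (a stand-in, not the point-model PGF itself).
        G = sp.exp(mu * (z - 1))

        def reduced_factorial_moment(pgf, order):
            """k-th reduced factorial moment: (1/k!) d^k G/dz^k evaluated at z = 1."""
            return sp.simplify(sp.diff(pgf, z, order).subs(z, 1) / sp.factorial(order))

        # Singlet, doublet, triplet, ... rates follow from the first few derivatives.
        for k in range(1, 6):
            print(k, reduced_factorial_moment(G, k))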

  4. SPY: A new scission point model based on microscopic ingredients to predict fission fragments properties

    NASA Astrophysics Data System (ADS)

    Lemaître, J.-F.; Dubray, N.; Hilaire, S.; Panebianco, S.; Sida, J.-L.

    2013-12-01

    Our purpose is to determine fission fragment characteristics in the framework of a scission point model named SPY, for Scission Point Yields. This approach can be considered as a theoretical laboratory to study the fission mechanism, since it gives access to the correlation between the fragment properties and their nuclear structure, such as shell corrections, pairing, collective degrees of freedom and odd-even effects. Which of these are dominant in the final state? What is the impact of the compound nucleus structure? The SPY model consists of a statistical description of the fission process at the scission point, where the fragments are completely formed and well separated with fixed properties. The most important feature of the model is that the nuclear structure of the fragments is derived from full quantum microscopic calculations. This approach allows computing the fission final state of extremely exotic nuclei which are inaccessible to most of the fission models currently available.

  5. Modeling fixation locations using spatial point processes.

    PubMed

    Barthelmé, Simon; Trukenbrod, Hans; Engbert, Ralf; Wichmann, Felix

    2013-10-01

    Whenever eye movements are measured, a central part of the analysis has to do with where subjects fixate and why they fixated where they did. To a first approximation, a set of fixations can be viewed as a set of points in space; this implies that fixations are spatial data and that the analysis of fixation locations can be beneficially thought of as a spatial statistics problem. We argue that thinking of fixation locations as arising from point processes is a very fruitful framework for eye-movement data, helping turn qualitative questions into quantitative ones. We provide a tutorial introduction to some of the main ideas of the field of spatial statistics, focusing especially on spatial Poisson processes. We show how point processes help relate image properties to fixation locations. In particular, we show how point processes naturally express the idea that image features' predictability for fixations may vary from one image to another. We review other methods of analysis used in the literature, show how they relate to point process theory, and argue that thinking in terms of point processes substantially extends the range of analyses that can be performed and clarifies their interpretation.
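
    A toy sketch of the central idea: fixations treated as a realization of an inhomogeneous spatial Poisson process whose intensity follows an image feature map. The saliency function and all numbers are invented for illustration, and the simulation uses standard thinning:

        import numpy as np

        rng = np.random.default_rng(1)

        def saliency(x, y):
            """Toy 'image feature' map on the unit square (purely illustrative)."""
            return np.exp(-((x - 0.6) ** 2 + (y - 0.4) ** 2) / 0.05)

        def simulate_fixations(lambda_max=400.0):
            """Simulate fixation locations from an inhomogeneous Poisson process
            with intensity lambda(x, y) = lambda_max * saliency(x, y), by thinning."""
            n_candidates = rng.poisson(lambda_max)          # homogeneous upper bound
            xy = rng.uniform(0, 1, size=(n_candidates, 2))  # candidate points
            keep = rng.uniform(0, 1, n_candidates) < saliency(xy[:, 0], xy[:, 1])
            return xy[keep]

        fixations = simulate_fixations()
        print(f"{len(fixations)} fixations; mean saliency at fixations = "
              f"{saliency(fixations[:, 0], fixations[:, 1]).mean():.2f}")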

  6. Improving Gastric Cancer Outcome Prediction Using Single Time-Point Artificial Neural Network Models

    PubMed Central

    Nilsaz-Dezfouli, Hamid; Abu-Bakar, Mohd Rizam; Arasan, Jayanthi; Adam, Mohd Bakri; Pourhoseingholi, Mohamad Amin

    2017-01-01

    In cancer studies, the prediction of cancer outcome based on a set of prognostic variables has been a long-standing topic of interest. Current statistical methods for survival analysis offer the possibility of modelling cancer survivability but require unrealistic assumptions about the survival time distribution or proportionality of hazard. Therefore, attention must be paid to developing nonlinear models with less restrictive assumptions. Artificial neural network (ANN) models are primarily useful in prediction when nonlinear approaches are required to sift through the plethora of available information. The applications of ANN models for prognostic and diagnostic classification in medicine have attracted a lot of interest. The applications of ANN models in modelling the survival of patients with gastric cancer have been discussed in some studies without completely considering the censored data. This study proposes an ANN model for predicting gastric cancer survivability, considering the censored data. Five separate single time-point ANN models were developed to predict the outcome of patients after 1, 2, 3, 4, and 5 years. The performance of the ANN models in predicting the probabilities of death is consistently high for all time points according to the accuracy and the area under the receiver operating characteristic curve. PMID:28469384
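
    A minimal sketch of the single time-point setup, one binary classifier per follow-up year, using scikit-learn on synthetic data; it ignores the censoring issue the study is specifically designed to handle, and none of the variables correspond to actual prognostic factors:

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(7)

        # Synthetic stand-in data: rows are patients, columns are prognostic variables.
        X = rng.normal(size=(600, 10))
        risk = X[:, 0] + 0.5 * X[:, 1]                       # invented risk score
        survival_years = rng.exponential(scale=3.0 * np.exp(-risk))

        for horizon in range(1, 6):  # one single time-point model per year, as in the study
            y = (survival_years <= horizon).astype(int)      # 1 = death by this time point
            X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
            clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
            clf.fit(X_tr, y_tr)
            auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
            print(f"{horizon}-year model, held-out AUC = {auc:.2f}")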

  7. Hepatoprotective activity of Sonchus asper against carbon tetrachloride-induced injuries in male rats: a randomized controlled trial

    PubMed Central

    2012-01-01

    Abstract Background Sonchus asper (SAME) is used as a folk medicine in hepatic disorders. In this study, the hepatoprotective effects of the methanol extract of SAME were evaluated against carbon tetrachloride (CCl4)-induced liver injuries in rats. Methods To evaluate the hepatoprotective effects of SAME, 36 male Sprague–Dawley rats were equally divided into 6 groups. Rats of Group I (control) were given free access to approved feed and water. Rats of Group II were injected intraperitoneally with CCl4 (3 ml/kg) as a 30% solution in olive oil (v/v) twice a week for 4 weeks. Animals of Groups III (100 mg/kg) and IV (200 mg/kg) received SAME, whereas those of Group V were given silymarin via gavage (100 mg/kg) after 48 h of CCl4 treatment. Group VI received SAME (200 mg/kg) twice a week for 4 weeks without CCl4 treatment. Various parameters, such as the serum enzyme levels, serum biochemical marker levels, antioxidant enzyme activities, and liver histopathology, were used to estimate the hepatoprotective efficacy of SAME. Results The administration of SAME and silymarin significantly lowered the CCl4-induced serum levels of hepatic marker enzymes (aspartate aminotransferase, alanine aminotransferase, and lactate dehydrogenase), cholesterol, low-density lipoprotein, and triglycerides while elevating high-density lipoprotein levels. The hepatic contents of glutathione and activities of catalase, superoxide dismutase, glutathione peroxidase, glutathione S-transferase, and glutathione reductase were reduced. The levels of thiobarbituric acid-reactive substances that were increased by CCl4 were brought back to control levels by the administration of SAME and silymarin. Liver histopathology showed that SAME reduced the incidence of hepatic lesions induced by CCl4 in rats. Conclusion SAME may protect the liver against CCl4-induced oxidative damage in rats. PMID:22776436

  8. Modeling of N2 and O optical emissions for ionosphere HF powerful heating experiments

    NASA Astrophysics Data System (ADS)

    Sergienko, T.; Gustavsson, B.

    Analyses of experiments on F-region ionosphere modification by HF powerful radio waves show that optical observations are very useful tools for diagnosing the interaction of the probing radio wave with the ionospheric plasma. Hitherto, the emissions usually measured in heating experiments have been the 630.0 nm and 557.7 nm lines of atomic oxygen. Other emissions, for instance O 844.8 nm and N2 427.8 nm, have been measured episodically in only a few experiments, although the very rich optical spectrum of molecular nitrogen potentially carries important information about the ionospheric plasma in the heated region. This study addresses the modeling of optical emissions from O and from the N2 triplet states (first positive, second positive, Vegard-Kaplan, infrared afterglow and Wu-Benesch band systems) excited under the conditions of an ionosphere heating experiment. The auroral triplet-state population distribution model was modified for ionosphere heating conditions by using the different electron distribution functions suggested by Mishin et al. (2000, 2003) and Gustavsson et al. (2004, 2005). Modeling results are discussed from the point of view of the efficiency of measurements of the N2 emissions in future experiments.

  9. A comparative analysis of speed profile models for wrist pointing movements.

    PubMed

    Vaisman, Lev; Dipietro, Laura; Krebs, Hermano Igo

    2013-09-01

    Following two decades of design and clinical research on robot-mediated therapy for the shoulder and elbow, therapeutic robotic devices for other joints are being proposed: several research groups including ours have designed robots for the wrist, either to be used as stand-alone devices or in conjunction with shoulder and elbow devices. However, in contrast with robots for the shoulder and elbow, which were able to take advantage of descriptive kinematic models developed in neuroscience over the past 30 years, design of wrist robot controllers cannot rely on similar prior art: wrist movement kinematics has been largely unexplored. This study aimed at examining speed profiles of fast, visually evoked, visually guided, target-directed human wrist pointing movements. One thousand three-hundred ninety-eight (1398) trials were recorded from seven unimpaired subjects who performed center-out flexion/extension and abduction/adduction wrist movements, and were fitted with 19 models previously proposed for describing reaching speed profiles. A nonlinear, least squares optimization procedure extracted parameter sets that minimized the error between experimental and reconstructed data. Models' performances were compared based on their ability to reconstruct experimental data. Results suggest that the support-bounded lognormal is the best model for speed profiles of fast, wrist pointing movements. Applications include design of control algorithms for therapeutic wrist robots and quantitative metrics of motor recovery.
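
    A minimal sketch of the fitting procedure described above, using scipy's nonlinear least squares on a synthetic speed profile; a plain lognormal profile is used as a simplified stand-in for the support-bounded lognormal model favoured by the study:

        import numpy as np
        from scipy.optimize import curve_fit

        def lognormal_speed(t, amp, t0, mu, sigma):
            """Lognormal speed profile v(t) for t > t0 (zero before movement onset)."""
            dt = np.clip(t - t0, 1e-9, None)
            profile = amp / (sigma * dt * np.sqrt(2 * np.pi)) * np.exp(
                -((np.log(dt) - mu) ** 2) / (2 * sigma ** 2)
            )
            return np.where(t > t0, profile, 0.0)

        # Synthetic "measured" wrist speed profile with noise (stand-in for real trials).
        t = np.linspace(0, 1.0, 200)
        true_params = (2.0, 0.05, -1.2, 0.45)
        rng = np.random.default_rng(3)
        v_meas = lognormal_speed(t, *true_params) + rng.normal(0, 0.05, t.size)

        # Nonlinear least squares, as in the comparative fitting procedure described above.
        popt, _ = curve_fit(lognormal_speed, t, v_meas, p0=(1.0, 0.01, -1.0, 0.5))
        rmse = np.sqrt(np.mean((lognormal_speed(t, *popt) - v_meas) ** 2))
        print("fitted parameters:", np.round(popt, 3), "RMSE:", round(rmse, 4))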

  10. Optimal Number and Allocation of Data Collection Points for Linear Spline Growth Curve Modeling: A Search for Efficient Designs

    ERIC Educational Resources Information Center

    Wu, Wei; Jia, Fan; Kinai, Richard; Little, Todd D.

    2017-01-01

    Spline growth modelling is a popular tool to model change processes with distinct phases and change points in longitudinal studies. Focusing on linear spline growth models with two phases and a fixed change point (the transition point from one phase to the other), we detail how to find optimal data collection designs that maximize the efficiency…

  11. Model reference adaptive control for the azimuth-pointing system of a balloon-borne stabilized platform

    NASA Technical Reports Server (NTRS)

    Lubin, Philip M.; Tomizuka, Masayoshi; Chingcuanco, Alfredo O.; Meinhold, Peter R.

    1991-01-01

    A balloon-borne stabilized platform has been developed for the remotely operated altitude-azimuth pointing of a millimeter wave telescope system. This paper presents the development and implementation of model reference adaptive control (MRAC) for the azimuth-pointing system of the stabilized platform. The primary goal of the controller is to achieve pointing rms better than 0.1 deg. Simulation results indicate that MRAC can achieve pointing rms better than 0.1 deg. Ground test results show pointing rms better than 0.03 deg. Data from the first flight at the National Scientific Balloon Facility (NSBF), Palestine, Texas, show pointing rms better than 0.02 deg.
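
    For illustration of the MRAC idea only (a textbook first-order example, not the balloon platform's azimuth controller), the sketch below adapts feedforward and feedback gains with a Lyapunov-based law so that the plant tracks a reference model; all numerical values are assumptions:

        import numpy as np

        # Lyapunov-based MRAC for a first-order plant  dy/dt = ap*y + bp*u,
        # tracking the reference model  dym/dt = am*ym + bm*r.
        ap, bp = -1.0, 0.5       # "true" plant parameters (unknown to the controller)
        am, bm = -2.0, 2.0       # desired reference-model dynamics
        gamma = 10.0             # adaptation gain
        dt, T = 0.001, 30.0

        y = ym = 0.0
        theta_r = theta_y = 0.0  # adaptive feedforward / feedback gains
        for t in np.arange(0.0, T, dt):
            r = 1.0 if (t % 10.0) < 5.0 else -1.0      # square-wave pointing command
            u = theta_r * r + theta_y * y              # control law
            e = y - ym                                 # tracking error
            # Euler integration of plant, reference model and adaptation laws.
            y += dt * (ap * y + bp * u)
            ym += dt * (am * ym + bm * r)
            theta_r += dt * (-gamma * e * r * np.sign(bp))
            theta_y += dt * (-gamma * e * y * np.sign(bp))

        print(f"theta_r -> {theta_r:.2f} (ideal {bm / bp:.2f}), "
              f"theta_y -> {theta_y:.2f} (ideal {(am - ap) / bp:.2f}), |e| = {abs(e):.4f}")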

  12. A Corner-Point-Grid-Based Voxelization Method for Complex Geological Structure Model with Folds

    NASA Astrophysics Data System (ADS)

    Chen, Qiyu; Mariethoz, Gregoire; Liu, Gang

    2017-04-01

    3D voxelization is the foundation of geological property modeling and an effective approach to realizing 3D visualization of the heterogeneous attributes in geological structures. The corner-point grid is a representative voxel data model and a structured grid type that is widely applied at present. When subdividing a complex geological structure model with folds, its structural morphology and bedding features must be fully considered so that the generated voxels keep the original morphology; on that basis, they can depict the detailed bedding features and the spatial heterogeneity of the internal attributes. To address the shortcomings of existing technologies, this work puts forward a corner-point-grid-based voxelization method for complex geological structure models with folds. We have realized a fast conversion from the 3D geological structure model to a fine voxel model according to the isocline rule in Ramsay's fold classification. In addition, the voxel model conforms to the spatial features of folds, pinch-outs and other complex geological structures, and the voxels of the laminae inside a fold accord with the result of geological sedimentation and tectonic movement. This provides a carrier and a model foundation for subsequent attribute assignment as well as for quantitative analysis and evaluation based on the spatial voxels. Finally, we use examples, and a comparison between these examples and Ramsay's description of isoclines, to discuss the effectiveness and advantages of the proposed method for the voxelization of 3D geological structure models with folds based on corner-point grids.

  13. A Stochastic Point Cloud Sampling Method for Multi-Template Protein Comparative Modeling.

    PubMed

    Li, Jilong; Cheng, Jianlin

    2016-05-10

    Generating tertiary structural models for a target protein from the known structure of its homologous template proteins and their pairwise sequence alignment is a key step in protein comparative modeling. Here, we developed a new stochastic point cloud sampling method, called MTMG, for multi-template protein model generation. The method first superposes the backbones of template structures, and the Cα atoms of the superposed templates form a point cloud for each position of a target protein, which are represented by a three-dimensional multivariate normal distribution. MTMG stochastically resamples the positions for Cα atoms of the residues whose positions are uncertain from the distribution, and accepts or rejects new position according to a simulated annealing protocol, which effectively removes atomic clashes commonly encountered in multi-template comparative modeling. We benchmarked MTMG on 1,033 sequence alignments generated for CASP9, CASP10 and CASP11 targets, respectively. Using multiple templates with MTMG improves the GDT-TS score and TM-score of structural models by 2.96-6.37% and 2.42-5.19% on the three datasets over using single templates. MTMG's performance was comparable to Modeller in terms of GDT-TS score, TM-score, and GDT-HA score, while the average RMSD was improved by a new sampling approach. The MTMG software is freely available at: http://sysbio.rnet.missouri.edu/multicom_toolbox/mtmg.html.

  14. A Stochastic Point Cloud Sampling Method for Multi-Template Protein Comparative Modeling

    PubMed Central

    Li, Jilong; Cheng, Jianlin

    2016-01-01

    Generating tertiary structural models for a target protein from the known structure of its homologous template proteins and their pairwise sequence alignment is a key step in protein comparative modeling. Here, we developed a new stochastic point cloud sampling method, called MTMG, for multi-template protein model generation. The method first superposes the backbones of template structures, and the Cα atoms of the superposed templates form a point cloud for each position of a target protein, which are represented by a three-dimensional multivariate normal distribution. MTMG stochastically resamples the positions for Cα atoms of the residues whose positions are uncertain from the distribution, and accepts or rejects new position according to a simulated annealing protocol, which effectively removes atomic clashes commonly encountered in multi-template comparative modeling. We benchmarked MTMG on 1,033 sequence alignments generated for CASP9, CASP10 and CASP11 targets, respectively. Using multiple templates with MTMG improves the GDT-TS score and TM-score of structural models by 2.96–6.37% and 2.42–5.19% on the three datasets over using single templates. MTMG’s performance was comparable to Modeller in terms of GDT-TS score, TM-score, and GDT-HA score, while the average RMSD was improved by a new sampling approach. The MTMG software is freely available at: http://sysbio.rnet.missouri.edu/multicom_toolbox/mtmg.html. PMID:27161489
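
    A toy sketch of the core sampling step: propose a Cα position from a template-derived 3D Gaussian and accept or reject it with a simulated-annealing criterion against a crude clash penalty. The means, covariance and energy function are invented, and this is in no way the MTMG implementation:

        import numpy as np

        rng = np.random.default_rng(42)

        def clash_energy(pos, other_positions, min_dist=3.8):
            """Simple clash penalty: count of neighbours closer than min_dist (in Å)."""
            if len(other_positions) == 0:
                return 0.0
            d = np.linalg.norm(np.asarray(other_positions) - pos, axis=1)
            return float(np.sum(d < min_dist))

        def resample_position(mean, cov, others, temperature):
            """Propose a CA position from the template-derived Gaussian and accept or
            reject it against the current mean with a simulated-annealing criterion."""
            candidate = rng.multivariate_normal(mean, cov)
            delta = clash_energy(candidate, others) - clash_energy(mean, others)
            if delta <= 0 or rng.random() < np.exp(-delta / temperature):
                return candidate
            return mean

        # Toy "point cloud" statistics for three uncertain residues (purely synthetic).
        means = [np.array([0.0, 0.0, 0.0]), np.array([3.8, 0.0, 0.0]), np.array([7.6, 0.0, 0.0])]
        cov = np.eye(3) * 1.5
        placed = []
        for temperature in (2.0, 1.0, 0.5, 0.1):     # annealing schedule
            placed = []
            for mu in means:
                placed.append(resample_position(mu, cov, placed, temperature))
        print(np.round(placed, 2))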

  15. Evaluating Change in Behavioral Preferences: Multidimensional Scaling Single-Ideal Point Model

    ERIC Educational Resources Information Center

    Ding, Cody

    2016-01-01

    The purpose of the article is to propose a multidimensional scaling single-ideal point model as a method to evaluate changes in individuals' preferences under the explicit methodological framework of behavioral preference assessment. One example is used to illustrate the approach for a clear idea of what this approach can accomplish.

  16. A covariant model for the gamma N -> N(1535) transition at high momentum transfer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    G. Ramalho, M.T. Pena

    2011-08-01

    A relativistic constituent quark model is applied to the gamma N -> N(1535) transition. The N(1535) wave function is determined by extending the covariant spectator quark model, previously developed for the nucleon, to the S11 resonance. The model allows us to calculate the valence quark contributions to the gamma N -> N(1535) transition form factors. Because of the nucleon and N(1535) structure the model is valid only for Q^2> 2.3 GeV^2. The results are compared with the experimental data for the electromagnetic form factors F1* and F2* and the helicity amplitudes A_1/2 and S_1/2, at high Q^2.

  17. Bootstrapping the O(N) archipelago

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kos, Filip; Poland, David; Simmons-Duffin, David

    2015-11-17

    We study 3d CFTs with an O(N) global symmetry using the conformal bootstrap for a system of mixed correlators. Specifically, we consider all nonvanishing scalar four-point functions containing the lowest dimension O(N) vector Φ_i and the lowest dimension O(N) singlet s, assumed to be the only relevant operators in their symmetry representations. The constraints of crossing symmetry and unitarity for these four-point functions force the scaling dimensions (Δ_Φ, Δ_s) to lie inside small islands. Here, we also make rigorous determinations of current two-point functions in the O(2) and O(3) models, with applications to transport in condensed matter systems.

  18. Higher-order triangular spectral element method with optimized cubature points for seismic wavefield modeling

    NASA Astrophysics Data System (ADS)

    Liu, Youshan; Teng, Jiwen; Xu, Tao; Badal, José

    2017-05-01

    The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates its surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant-Friedrichs-Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. In terms of computational

  19. Stochastic modelling for biodosimetry: Predicting the chromosomal response to radiation at different time points after exposure

    NASA Astrophysics Data System (ADS)

    Deperas-Standylo, Joanna; Gudowska-Nowak, Ewa; Ritter, Sylvia

    2014-07-01

    Cytogenetic data accumulated from experiments with peripheral blood lymphocytes exposed to densely ionizing radiation clearly demonstrate that for particles with linear energy transfer (LET) > 100 keV/μm the derived relative biological effectiveness (RBE) will strongly depend on the time point chosen for the analysis. A reasonable prediction of radiation-induced chromosome damage and its distribution among cells can be achieved by exploiting Monte Carlo methodology along with information about the radius of the penetrating ion track and the LET of the ion beam. In order to examine the relationship between the track structure and the distribution of aberrations induced in human lymphocytes, and to clarify the correlation between delays in cell cycle progression and the aberration burden visible at the first post-irradiation mitosis, we have analyzed chromosome aberrations in lymphocytes exposed to Fe-ions with LET values of 335 keV/μm and formulated a Monte Carlo model which reflects the time-delay in mitosis of aberrant cells. Within the model the frequency distributions of aberrations among cells follow the pattern of the local energy distribution and are well approximated by time-dependent compound Poisson statistics. The cell-division cycle of undamaged and aberrant cells and chromosome aberrations are modelled as a renewal process represented by a random sum of (independent and identically distributed) random elements S_N = Σ_{i=0}^{N} X_i. Here N stands for the number of particle traversals of the cell nucleus, each leading to a statistically independent formation of X_i aberrations. The parameter N is itself a random variable and reflects the cell cycle delay of heavily damaged cells. The probability distribution of S_N follows a general law for which the moment generating function satisfies the relation Φ_{S_N} = Φ_N(Φ_{X_i}). Formulation of the Monte Carlo model which allows one to predict expected fluxes of aberrant and non-aberrant cells has been based
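
    A toy simulation of the compound Poisson random-sum structure S_N = Σ X_i described above (illustrative parameters only; the time dependence and mitotic-delay component of the actual model are omitted):

        import numpy as np

        rng = np.random.default_rng(5)

        def simulate_aberrations(n_cells, mean_traversals, mean_aberr_per_traversal):
            """Compound Poisson sketch: each cell nucleus receives N ~ Poisson traversals,
            and each traversal independently produces X_i ~ Poisson aberrations, so the
            per-cell count is the random sum S_N = sum over i of X_i."""
            n_traversals = rng.poisson(mean_traversals, size=n_cells)          # N per cell
            return np.array([rng.poisson(mean_aberr_per_traversal, size=n).sum()
                             for n in n_traversals])

        counts = simulate_aberrations(n_cells=10_000, mean_traversals=1.5,
                                      mean_aberr_per_traversal=0.8)
        # Overdispersion relative to a simple Poisson is the signature of the compound model.
        print(f"mean = {counts.mean():.2f}, variance = {counts.var():.2f}")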

  20. Hydrodynamic Modeling for Channel and Shoreline Stabilization at Rhodes Point, Smith Island, MD

    DTIC Science & Technology

    2016-11-01

    shorelines. Both Alternatives included the same revetment structure for protecting the south shoreline. The Coastal Modeling System (CMS, including CMS… Coastal Inlets Research Program, ERDC/CHL TR-16-17, November 2016.

  1. a Modeling Method of Fluttering Leaves Based on Point Cloud

    NASA Astrophysics Data System (ADS)

    Tang, J.; Wang, Y.; Zhao, Y.; Hao, W.; Ning, X.; Lv, K.; Shi, Z.; Zhao, M.

    2017-09-01

    Leaves falling gently or fluttering are a common phenomenon in natural scenes. The authenticity of falling leaves plays an important part in the dynamic modeling of natural scenes, and falling-leaf models have wide applications in the fields of animation and virtual reality. In this paper we propose a novel modeling method for fluttering leaves based on point clouds. According to the shape and weight of the leaves and the wind speed, three basic falling trajectories are defined: rotation falling, roll falling and screw roll falling. At the same time, a parallel algorithm based on OpenMP is implemented to satisfy real-time requirements in practical applications. Experimental results demonstrate that the proposed method is amenable to the incorporation of a variety of desirable effects.

  2. Modeling an enhanced ridesharing system with meet points and time windows

    PubMed Central

    Li, Xin; Hu, Sangen; Deng, Kai

    2018-01-01

    With the rise of e-hailing services in urban areas, ride sharing is becoming a common mode of transportation. This paper presents a mathematical model to design an enhanced ridesharing system with meet points and users' preferred time windows. The introduction of meet points allows ridesharing operators to trade off the benefit of saving en-route delays against the cost of additional walking for some passengers to be collectively picked up or dropped off. This extension to the traditional door-to-door ridesharing problem brings more operational flexibility in urban areas (where potential requests may be densely distributed within a neighborhood), and thus could achieve better system performance in terms of reducing the total travel time and increasing the number of served passengers. We design and implement a Tabu-based meta-heuristic algorithm to solve the proposed mixed integer linear program (MILP). To evaluate the validity and effectiveness of the proposed model and solution algorithm, several scenarios are designed and also solved to optimality by CPLEX. Results demonstrate that (i) a detailed route plan associated with passenger assignment to meet points can be obtained with en-route delay savings; (ii) as compared to CPLEX, the meta-heuristic algorithm has the advantage of higher computational efficiency and produces good-quality solutions within 8%~15% of the global optima; and (iii) introducing meet points to a ridesharing system saves the total travel time by 2.7%-3.8% for small-scale ridesharing systems. More benefits are expected for ridesharing systems with larger fleets. This study provides a new tool to operate ridesharing systems efficiently, particularly when ridesharing vehicles are in short supply during peak hours. Traffic congestion mitigation is also expected. PMID:29715302

  3. A “re-vitalized” Greenwood and Williamson model of elastic contact between fractal surfaces

    NASA Astrophysics Data System (ADS)

    Ciavarella, M.; Delfine, V.; Demelio, G.

    2006-12-01

    Greenwood and Williamson in 1966 (GW) proposed a theory of elastic contact mechanics of rough surfaces which is today the foundation of many theories in tribology (friction, adhesion, thermal and electrical conductance, wear, etc.). However, the theory has periodically received criticisms for the "resolution-dependence" of the asperity features. Greenwood himself has recently concluded that: "The introduction by Greenwood and Williamson in 1966 of the definition of a 'peak' as a point higher than its neighbours on a profile sampled at a finite sampling interval was, in retrospect, a mistake, although it is possible that it was a necessary mistake" [Greenwood and Wu, 2001. Surface roughness and contact: an apology. Meccanica 36 (6), 617-630]. We propose a "discrete" version of the GW model, keeping the approximation of a surface by quadratic functions near summits, where the summit arrangement is found from numerical realizations or real surface scans. The contact is then solved either by summing the Hertzian relationships, or by considering interaction effects to first order in a very efficient algorithm. We conduct experiments on Weierstrass-Mandelbrot fractal surfaces, concluding that: the real contact area-load relationship is well captured by the original GW theoretical model, once the correct mean radius is used, and the relationship is robust and shows relatively little scatter; the conductance-load relationship is, vice versa, only approximately given by the original GW theoretical model, with significant deviations from linearity and significant scatter, particularly at low fractal dimensions; the load, area and conductance dependences on separation show significant dependence on the actual phase arrangements, and hence significant scatter at large separations. The effect of interaction is seen strongly at low separations, where scatter is minimal. The discrete GW model permits the inclusion of these effects, except when the asperity description breaks down
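
    For orientation, a minimal sketch of the non-interacting baseline: summing independent Hertzian contributions over summits above the separation plane. It omits the elastic interaction and the discrete summit extraction that are the point of the paper, and all parameters are illustrative:

        import numpy as np

        rng = np.random.default_rng(2)

        # Illustrative parameters (not taken from the paper).
        E_star = 1.0e9        # effective elastic modulus, Pa
        R = 10e-6             # common summit radius, m
        summit_heights = rng.normal(0.0, 1.0e-6, size=5000)   # m, Gaussian summits

        def gw_contact(separation):
            """Sum independent Hertzian contributions of all summits higher than the
            separation plane (the non-interacting, 'original GW' picture)."""
            w = summit_heights - separation          # summit compressions
            w = w[w > 0.0]
            load = np.sum(4.0 / 3.0 * E_star * np.sqrt(R) * w ** 1.5)   # N
            area = np.sum(np.pi * R * w)                                # m^2
            return load, area

        for d in (2.0e-6, 1.0e-6, 0.0):
            load, area = gw_contact(d)
            print(f"d = {d:.1e} m: load = {load:.3e} N, real contact area = {area:.3e} m^2")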

  4. A Multidimensional Ideal Point Item Response Theory Model for Binary Data.

    PubMed

    Maydeu-Olivares, Albert; Hernández, Adolfo; McDonald, Roderick P

    2006-12-01

    We introduce a multidimensional item response theory (IRT) model for binary data based on a proximity response mechanism. Under the model, a respondent at the mode of the item response function (IRF) endorses the item with probability one. The mode of the IRF is the ideal point, or in the multidimensional case, an ideal hyperplane. The model yields closed form expressions for the cell probabilities. We estimate and test the goodness of fit of the model using only information contained in the univariate and bivariate moments of the data. Also, we pit the new model against the multidimensional normal ogive model estimated using NOHARM in four applications involving (a) attitudes toward censorship, (b) satisfaction with life, (c) attitudes of morality and equality, and (d) political efficacy. The normal PDF model is not invariant to simple operations such as reverse scoring. Thus, when there is no natural category to be modeled, as in many personality applications, it should be fit separately with and without reverse scoring for comparisons.

  5. Universality away from critical points in a thermostatistical model

    NASA Astrophysics Data System (ADS)

    Lapilli, C. M.; Wexler, C.; Pfeifer, P.

    Nature uses phase transitions as powerful regulators of processes ranging from climate to the alteration of phase behavior of cell membranes to protect cells from cold, building on the fact that thermodynamic properties of a solid, liquid, or gas are sensitive fingerprints of intermolecular interactions. The only known exceptions from this sensitivity are critical points. At a critical point, two phases become indistinguishable and thermodynamic properties exhibit universal behavior: systems with widely different intermolecular interactions behave identically. Here we report a major counterexample. We show that different members of a family of two-dimensional systems —the discrete p-state clock model— with different Hamiltonians describing different microscopic interactions between molecules or spins, may exhibit identical thermodynamic behavior over a wide range of temperatures. The results generate a comprehensive map of the phase diagram of the model and, by virtue of the discrete rotors behaving like continuous rotors, an emergent symmetry, not present in the Hamiltonian. This symmetry, or many-to-one map of intermolecular interactions onto thermodynamic states, demonstrates previously unknown limits for macroscopic distinguishability of different microscopic interactions.

  6. Point-Defect Nature of the Ultraviolet Absorption Band in AlN

    NASA Astrophysics Data System (ADS)

    Alden, D.; Harris, J. S.; Bryan, Z.; Baker, J. N.; Reddy, P.; Mita, S.; Callsen, G.; Hoffmann, A.; Irving, D. L.; Collazo, R.; Sitar, Z.

    2018-05-01

    We present an approach where point defects and defect complexes are identified using power-dependent photoluminescence excitation spectroscopy, impurity data from SIMS, and density-functional-theory (DFT)-based calculations accounting for the total charge balance in the crystal. Employing the capabilities of such an experimental computational approach, in this work, the ultraviolet-C absorption band at 4.7 eV, as well as the 2.7- and 3.9-eV luminescence bands in AlN single crystals grown via physical vapor transport (PVT), are studied in detail. Photoluminescence excitation spectroscopy measurements demonstrate the relationship between the defect luminescence bands centered at 3.9 and 2.7 eV and the commonly observed absorption band centered at 4.7 eV. Accordingly, the thermodynamic transition energy for the absorption band at 4.7 eV and the luminescence band at 3.9 eV is estimated at 4.2 eV, in agreement with the thermodynamic transition energy for the CN- point defect. Finally, the 2.7-eV PL band is the result of a donor-acceptor pair transition between the VN and CN point defects, since nitrogen vacancies are predicted to be present in the crystal in concentrations similar to carbon, employing charge-balance-constrained DFT calculations. Power-dependent photoluminescence measurements reveal the presence of the deep donor state with a thermodynamic transition energy of 5.0 eV, which we hypothesize to be nitrogen vacancies, in agreement with predictions based on theory. The charge state, concentration, and type of impurities in the crystal are calculated considering a fixed amount of impurities and using a DFT-based defect solver, which considers their respective formation energies and the total charge balance in the crystal. The presented results show that nitrogen vacancies are the most likely candidate for the deep donor state involved in the donor-acceptor pair transition with peak emission at 2.7 eV for the conditions relevant to PVT growth.

  7. Comparison and validation of point spread models for imaging in natural waters.

    PubMed

    Hou, Weilin; Gray, Deric J; Weidemann, Alan D; Arnone, Robert A

    2008-06-23

    It is known that scattering by particulates within natural waters is the main cause of the blur in underwater images. Underwater images can be better restored or enhanced with knowledge of the point spread function (PSF) of the water. This will extend the performance range as well as the information retrieval from underwater electro-optical systems, which is critical in many civilian and military applications, including target and especially mine detection, search and rescue, and diver visibility. A better understanding of the physical process involved also helps to predict system performance and simulate it accurately on demand. The presented effort first reviews several PSF models, including the introduction of a semi-analytical PSF given optical properties of the medium, including scattering albedo, mean scattering angles and the optical range. The models under comparison include the empirical model of Duntley, a modified PSF model by Dolin et al., as well as the numerical integration of analytical forms from Wells, as a benchmark of theoretical results. For experimental results, in addition to that of Duntley, we validate the above models with measured point spread functions by applying field-measured scattering properties with Monte Carlo simulations. Results from these comparisons suggest that the three parameters listed above are both necessary and sufficient to model PSFs. The simplified approach introduced also provides adequate accuracy and flexibility for imaging applications, as shown by examples of restored underwater images.

  8. Transition point prediction in a multicomponent lattice Boltzmann model: Forcing scheme dependencies

    NASA Astrophysics Data System (ADS)

    Küllmer, Knut; Krämer, Andreas; Joppich, Wolfgang; Reith, Dirk; Foysi, Holger

    2018-02-01

    Pseudopotential-based lattice Boltzmann models are widely used for numerical simulations of multiphase flows. In the special case of multicomponent systems, the overall dynamics are characterized by the conservation equations for mass and momentum as well as an additional advection diffusion equation for each component. In the present study, we investigate how the latter is affected by the forcing scheme, i.e., by the way the underlying interparticle forces are incorporated into the lattice Boltzmann equation. By comparing two model formulations for pure multicomponent systems, namely the standard model [X. Shan and G. D. Doolen, J. Stat. Phys. 81, 379 (1995), 10.1007/BF02179985] and the explicit forcing model [M. L. Porter et al., Phys. Rev. E 86, 036701 (2012), 10.1103/PhysRevE.86.036701], we reveal that the diffusion characteristics drastically change. We derive a generalized, potential function-dependent expression for the transition point from the miscible to the immiscible regime and demonstrate that it is shifted between the models. The theoretical predictions for both the transition point and the mutual diffusion coefficient are validated in simulations of static droplets and decaying sinusoidal concentration waves, respectively. To show the universality of our analysis, two common and one new potential function are investigated. As the shift in the diffusion characteristics directly affects the interfacial properties, we additionally show that phenomena related to the interfacial tension such as the modeling of contact angles are influenced as well.

  9. Reducing fertilizer-derived N2O emission: Point injection vs. surface application of ammonium-N fertilizer at a loamy sand site

    NASA Astrophysics Data System (ADS)

    Deppe, Marianna; Well, Reinhard; Giesemann, Anette; Kücke, Martin; Flessa, Heinz

    2013-04-01

    N2O emitted from soil originates from denitrification of nitrate and/or nitrification of ammonium. N fertilization can have an important impact on N2O emission rates. Injection of nitrate-free ammonium-N fertilizer, in Germany also known as CULTAN (Controlled Uptake Long-Term Ammonium Nutrition), results in fertilizer depots with ammonium concentrations of up to 10 mg N (g soil)-1. High concentrations of ammonium are known to inhibit nitrification. However, it has not yet been clarified how N2O fluxes are affected by CULTAN. In a field experiment, two application methods of nitrogen fertilizer were used at a loamy sand site: ammonium sulphate was applied either by point injection or by surface application. 15N-ammonium sulphate was used to distinguish between N2O originating from either fertilizer-N or soil-N. Unfertilized plots and plots fertilized with unlabeled ammonium sulphate served as controls. N2O emissions were measured using static chambers, and nitrate and ammonium concentrations were determined in soil extracts. Stable isotope analysis of 15N in N2O, nitrate and ammonium was used to calculate the contribution of fertilizer N to N2O emissions and the fertilizer turnover in soil. 15N analysis clearly indicated that fertilizer-derived N2O fluxes were higher from surface application plots. For the period of the growing season, about 24% of the flux measured in the surface application treatment and less than 10% in the injection treatment originated from the fertilizer. In addition, a lab experiment was conducted to gain insight into processes leading to N2O emission from fertilizer depots. One aim was to examine whether the ratio of N2O to nitrate formation differs depending on the ammonium concentration. Loamy sand soil was incubated in microcosms continuously flushed with air under conditions favouring nitrification. 15N-labeled nitrate was used to differentiate between nitrification and denitrification. Stable isotope analyses of 15N were performed on

  10. On the hydrophilicity of polyzwitterion poly (N,N-dimethyl-N-(3-(methacrylamido)propyl)ammoniopropane sulfonate) in water, deuterated water, and aqueous salt solutions.

    PubMed

    Hildebrand, Viet; Laschewsky, André; Zehm, Daniel

    2014-01-01

    A series of zwitterionic model polymers with defined molar masses up to 150,000 Da and defined end groups are prepared from sulfobetaine monomer N,N-dimethyl-N-(3-(methacrylamido)propyl)ammoniopropanesulfonate (SPP). Polymers are synthesized by reversible addition-fragmentation chain transfer polymerization (RAFT) using a functional chain transfer agent labeled with a fluorescent probe. Their upper critical solution temperature-type coil-to-globule phase transition in water, deuterated water, and various salt solutions is studied by turbidimetry. Cloud points increase with polyzwitterion concentration and molar mass, being considerably higher in D2O than in H2O. Moreover, cloud points are strongly affected by the amount and nature of added salts. Typically, they increase with increasing salt concentration up to a maximum value, whereas further addition of salt lowers the cloud points again, mostly down to below freezing point. The different salting-in and salting-out effects of the studied anions can be correlated with the Hofmeister series. In physiological sodium chloride solution and in phosphate buffered saline (PBS), the cloud point is suppressed even for high molar mass samples. Accordingly, SPP-polymers behave strongly hydrophilic under most conditions encountered in biomedical applications. However, the direct transfer of results from model studies in D2O, using, e.g. (1)H NMR or neutron scattering techniques, to 'normal' systems in H2O is not obvious.

  11. A model for pion-pion scattering in large- N QCD

    NASA Astrophysics Data System (ADS)

    Veneziano, G.; Yankielowicz, S.; Onofri, E.

    2017-04-01

    Following up on recent work by Caron-Huot et al., we consider a generalization of the old Lovelace-Shapiro model as a toy model for ππ scattering satisfying (most of) the properties expected to hold in ('t Hooft's) large-N limit of massless QCD. In particular, the model has asymptotically linear and parallel Regge trajectories at positive t, a positive leading Regge intercept α_0 < 1, and an effective bending of the trajectories in the negative-t region producing a fixed branch point at J = 0 for t < t_0 < 0. Fixed (physical) angle scattering can be tuned to match the power-like behavior (including logarithmic corrections) predicted by perturbative QCD: A(s, t) ∼ s^{-β} log(s)^{-γ} F(θ). Tree-level unitarity (i.e. positivity of residues for all values of s and J) imposes strong constraints on the allowed region in the α_0-β-γ parameter space, which nicely includes a physically interesting region around α_0 = 0.5, β = 2 and γ = 3. The full consistency of the model would require an extension to multi-pion processes, a program we do not undertake in this paper.

  12. Assessing accuracy of point fire intervals across landscapes with simulation modelling

    Treesearch

    Russell A. Parsons; Emily K. Heyerdahl; Robert E. Keane; Brigitte Dorner; Joseph Fall

    2007-01-01

    We assessed accuracy in point fire intervals using a simulation model that sampled four spatially explicit simulated fire histories. These histories varied in fire frequency and size and were simulated on a flat landscape with two forest types (dry versus mesic). We used three sampling designs (random, systematic grids, and stratified). We assessed the sensitivity of...

  13. Lieb-Thirring inequality for a model of particles with point interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frank, Rupert L.; Seiringer, Robert

    2012-09-15

    We consider a model of quantum-mechanical particles interacting via point interactions of infinite scattering length. In the case of fermions we prove a Lieb-Thirring inequality for the energy, i.e., we show that the energy is bounded from below by a constant times the integral of the particle density to the power (5/3).

  14. Sample size and classification error for Bayesian change-point models with unlabelled sub-groups and incomplete follow-up.

    PubMed

    White, Simon R; Muniz-Terrera, Graciela; Matthews, Fiona E

    2018-05-01

    Many medical (and ecological) processes involve a change of shape, whereby one trajectory changes into another at a specific time point. There has been little investigation into the study design needed to investigate these models. We consider the class of fixed-effect change-point models with an underlying shape comprising two joined linear segments, also known as broken-stick models. We extend this model to include two sub-groups with different trajectories at the change-point, a change and a no-change class, and also include a missingness model to account for individuals with incomplete follow-up. Through a simulation study, we consider the relationship of sample size to the estimates of the underlying shape, the existence of a change-point, and the classification error of sub-group labels. We use a Bayesian framework to account for the missing labels, and the analysis of each simulation is performed using standard Markov chain Monte Carlo techniques. Our simulation study is inspired by cognitive decline as measured by the Mini-Mental State Examination, where our extended model is appropriate due to the commonly observed mixture of individuals within studies who do or do not exhibit accelerated decline. We find that even for studies of modest size (n = 500, with 50 individuals observed past the change-point) in the fixed-effect setting, a change-point can be detected and reliably estimated across a range of observation errors.
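
    A small simulation sketch of the broken-stick mean structure with a change and a no-change class and random drop-out; all values are illustrative, and the Bayesian MCMC estimation used in the paper is not shown:

        import numpy as np

        def broken_stick(t, intercept, slope1, slope2, change_point):
            """Fixed-effect broken-stick mean: one slope before the change-point,
            another after it, joined continuously."""
            return intercept + slope1 * np.minimum(t, change_point) \
                             + slope2 * np.maximum(t - change_point, 0.0)

        rng = np.random.default_rng(11)
        t = np.tile(np.arange(0, 10), 500).reshape(500, 10)      # 10 visits per person

        # Mixture of a 'no change' class (keeps the pre-change slope) and a 'change'
        # class (accelerated decline after the change-point); values are illustrative.
        is_change_class = rng.random(500) < 0.4
        slope2 = np.where(is_change_class, -1.5, -0.1)[:, None]
        scores = broken_stick(t, 28.0, -0.1, slope2, change_point=6.0) \
                 + rng.normal(0, 1.0, size=t.shape)

        # Incomplete follow-up: drop observations after a random drop-out visit.
        dropout = rng.integers(5, 11, size=500)[:, None]
        observed = t < dropout
        print("observed measurements:", int(observed.sum()), "of", t.size)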

  15. Singularity free N-body simulations called 'Dynamic Universe Model' don't require dark matter

    NASA Astrophysics Data System (ADS)

    Naga Parameswara Gupta, Satyavarapu

    The calculations of the Dynamic Universe Model can be applied successfully to finding the trajectories of the Pioneer spacecraft (the Pioneer anomaly) and of the New Horizons spacecraft on its way to Pluto. No dark matter is assumed within the solar system radius; the effect on the masses around the Sun appears as though there were an extra gravitational pull toward the Sun. The model solves the dynamics of extra-solar planets such as Planet X and of spacecraft such as Pioneer and New Horizons, giving the three position, three velocity and three acceleration components for their masses, while considering the complex situation of multiple planets, stars, parts of the Galaxy, the Galactic centre and other galaxies, using simple Newtonian physics. It has already been applied successfully to the missing-mass problem in galaxies observed through galaxy circular velocity curves. Singularity-free Newtonian N-body simulations: historically, King Oscar II of Sweden announced a prize for a solution of the N-body problem in 1887, on the advice of Gösta Mittag-Leffler: 'Given a system of arbitrarily many mass points that attract each according to Newton's law, under the assumption that no two points ever collide, try to find a representation of the coordinates of each point as a series in a variable that is some known function of time and for all of whose values the series converges uniformly.' [taken from Wikipedia]. The announced deadline was 1 June 1888. After that deadline, on 21 January 1889, the great mathematician Poincaré claimed the prize. Later he himself sent a telegram to the journal Acta Mathematica to stop the printing of the special issue after finding an error in his solution; for such a man of science, reputation was more important than money [ref.: 'Celestial mechanics: the waltz of the planets' by Alessandra Celletti and Ettore Perozzi, page 27]. He had realized that he had been wrong in his general stability result. To this day nobody has solved that problem or claimed that prize; the solutions later given by many people all resulted in singularities and collisions of masses.
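
    As a generic illustration of the kind of computation involved (not the Dynamic Universe Model's own iterative scheme), a softened Newtonian N-body integrator with a kick-drift-kick leapfrog step; the softening length is an assumption that keeps the force finite at close approaches:

        import numpy as np

        G = 6.674e-11  # gravitational constant, SI units

        def accelerations(pos, mass, softening=1e7):
            """Pairwise Newtonian accelerations with a softening length (in metres)."""
            acc = np.zeros_like(pos)
            for i in range(len(mass)):
                d = pos - pos[i]                                    # vectors to other bodies
                r2 = np.einsum("ij,ij->i", d, d) + softening ** 2
                r2[i] = np.inf                                      # no self-force
                acc[i] = np.sum((G * mass / r2 ** 1.5)[:, None] * d, axis=0)
            return acc

        def leapfrog(pos, vel, mass, dt, n_steps):
            """Kick-drift-kick leapfrog integration of the N-body system."""
            acc = accelerations(pos, mass)
            for _ in range(n_steps):
                vel += 0.5 * dt * acc
                pos += dt * vel
                acc = accelerations(pos, mass)
                vel += 0.5 * dt * acc
            return pos, vel

        # Toy Sun-Earth setup (two bodies) integrated for one year.
        mass = np.array([1.989e30, 5.972e24])
        pos = np.array([[0.0, 0.0, 0.0], [1.496e11, 0.0, 0.0]])
        vel = np.array([[0.0, 0.0, 0.0], [0.0, 2.978e4, 0.0]])
        pos, vel = leapfrog(pos, vel, mass, dt=3600.0, n_steps=24 * 365)
        print("Earth distance from Sun after one year:", np.linalg.norm(pos[1] - pos[0]))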

  16. Material point method of modelling and simulation of reacting flow of oxygen

    NASA Astrophysics Data System (ADS)

    Mason, Matthew; Chen, Kuan; Hu, Patrick G.

    2014-07-01

    Aerospace vehicles are continually being designed to sustain flight at higher speeds and higher altitudes than previously attainable. At hypersonic speeds, gases within a flow begin to chemically react and the fluid's physical properties are modified. It is desirable to model these effects within the Material Point Method (MPM). The MPM is a combined Eulerian-Lagrangian particle-based solver that calculates the physical properties of individual particles and uses a background grid for information storage and exchange. This study introduces chemically reacting flow modelling within the MPM numerical algorithm and illustrates a simple application using the AeroElastic Material Point Method (AEMPM) code. The governing equations of reacting flows are introduced and their direct application within an MPM code is discussed. A flow of 100% oxygen is illustrated and the results are compared with independently developed computational non-equilibrium algorithms. Observed trends agree well with results from an independently developed source.
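
    A bare-bones 1D sketch of the particle-to-grid and grid-to-particle transfers that give the MPM its combined Eulerian-Lagrangian character; the constitutive update, reacting-gas chemistry and aeroelastic coupling of the AEMPM code are far beyond this illustration, and all values are invented:

        import numpy as np

        # Minimal 1D particle-to-grid (P2G) and grid-to-particle (G2P) transfer with
        # linear shape functions -- the information-exchange step at the heart of MPM.
        n_nodes, dx = 11, 0.1
        grid_x = np.arange(n_nodes) * dx

        rng = np.random.default_rng(4)
        xp = rng.uniform(0.0, 1.0, 50)          # particle positions
        mp = np.full(50, 0.02)                  # particle masses
        vp = np.sin(2 * np.pi * xp)             # particle velocities (illustrative field)

        def shape_weights(x):
            """Return the supporting left node indices and linear hat-function weights."""
            i = np.clip((x / dx).astype(int), 0, n_nodes - 2)
            w_right = x / dx - i
            return i, 1.0 - w_right, w_right

        # P2G: scatter particle mass and momentum to the background grid.
        i, w0, w1 = shape_weights(xp)
        grid_mass = np.bincount(i, w0 * mp, n_nodes) + np.bincount(i + 1, w1 * mp, n_nodes)
        grid_mom = np.bincount(i, w0 * mp * vp, n_nodes) + np.bincount(i + 1, w1 * mp * vp, n_nodes)
        grid_vel = np.divide(grid_mom, grid_mass, out=np.zeros(n_nodes), where=grid_mass > 0)

        # (Grid forces and time integration would be applied here in a full MPM step.)

        # G2P: gather the updated grid velocity back to the particles.
        vp_new = w0 * grid_vel[i] + w1 * grid_vel[i + 1]
        print("max velocity change after a pure transfer:", np.abs(vp_new - vp).max())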

  17. Once more on the equilibrium-point hypothesis (lambda model) for motor control.

    PubMed

    Feldman, A G

    1986-03-01

    The equilibrium control hypothesis (lambda model) is considered with special reference to the following concepts: (a) the length-force invariant characteristic (IC) of the muscle together with central and reflex systems subserving its activity; (b) the tonic stretch reflex threshold (lambda) as an independent measure of central commands descending to alpha and gamma motoneurons; (c) the equilibrium point, defined in terms of lambda, IC and static load characteristics, which is associated with the notion that posture and movement are controlled by a single mechanism; and (d) the muscle activation area (a reformulation of the "size principle")--the area of kinematic and command variables in which a rank-ordered recruitment of motor units takes place. The model is used for the interpretation of various motor phenomena, particularly electromyographic patterns. The stretch reflex in the lambda model has no mechanism to follow up a certain muscle length prescribed by central commands. Rather, its task is to bring the system to an equilibrium, load-dependent position. Another currently popular version defines the equilibrium point concept in terms of alpha motoneuron activity alone (the alpha model). Although the model imitates (as does the lambda model) spring-like properties of motor performance, it nevertheless is inconsistent with a substantial database on intact motor control. An analysis of alpha models, including their treatment of motor performance in deafferented animals, reveals that they suffer from grave shortcomings. It is concluded that parameterization of the stretch reflex is a basis for intact motor control. Muscle deafferentation impairs this graceful mechanism though it does not remove the possibility of movement.

  18. Inflection-point inflation in a hyper-charge oriented U(1)_X model

    DOE PAGES

    Okada, Nobuchika; Okada, Satomi; Raut, Digesh

    2017-03-31

    Inflection-point inflation is an interesting possibility to realize a successful slow-roll inflation when inflation is driven by a single scalar field with its value during inflation below the Planck mass (Φ_I ≲ M_Pl). In order for a renormalization group (RG) improved effective λΦ^4 potential to develop an inflection point, the running quartic coupling λ(Φ) must exhibit a minimum with an almost vanishing value in its RG evolution, namely λ(Φ_I) ≃ 0 and β_λ(Φ_I) ≃ 0, where β_λ is the beta function of the quartic coupling. Here in this paper, we consider the inflection-point inflation in the context of the minimal gauged U(1)_X extended Standard Model (SM), which is a generalization of the minimal U(1)_{B-L} model, and is constructed as a linear combination of the SM U(1)_Y and U(1)_{B-L} gauge symmetries. We identify the U(1)_X Higgs field with the inflaton field. For a successful inflection-point inflation to be consistent with the current cosmological observations, the mass ratios among the U(1)_X gauge boson, the right-handed neutrinos and the U(1)_X Higgs boson are fixed. Focusing on the case that the extra U(1)_X gauge symmetry is mostly aligned along the SM U(1)_Y direction, we investigate a consistency between the inflationary predictions and the latest LHC Run-2 results on the search for a narrow resonance with the di-lepton final state.
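
    For quick reference, the inflection-point conditions quoted above can be written compactly in standard notation (this block restates the abstract's conditions together with the commonly used RG-improved potential; it adds no results beyond the abstract):

        % RG-improved quartic potential and the inflection-point conditions (standard notation)
        V(\Phi) = \frac{\lambda(\Phi)}{4}\,\Phi^{4}, \qquad
        \lambda(\Phi_I) \simeq 0, \qquad
        \beta_\lambda(\Phi_I) \equiv \mu \frac{d\lambda}{d\mu}\Big|_{\mu=\Phi_I} \simeq 0, \qquad
        \Phi_I \lesssim M_{\mathrm{Pl}}.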

  19. Inflection-point inflation in a hyper-charge oriented U(1)_X model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Okada, Nobuchika; Okada, Satomi; Raut, Digesh

    Inflection-point inflation is an interesting possibility to realize a successful slow-roll inflation when inflation is driven by a single scalar field with its value during inflation below the Planck mass (Φ_I ≲ M_Pl). In order for a renormalization group (RG) improved effective λΦ^4 potential to develop an inflection point, the running quartic coupling λ(Φ) must exhibit a minimum with an almost vanishing value in its RG evolution, namely λ(Φ_I) ≃ 0 and β_λ(Φ_I) ≃ 0, where β_λ is the beta function of the quartic coupling. Here in this paper, we consider the inflection-point inflation in the context of the minimal gauged U(1)_X extended Standard Model (SM), which is a generalization of the minimal U(1)_{B-L} model, and is constructed as a linear combination of the SM U(1)_Y and U(1)_{B-L} gauge symmetries. We identify the U(1)_X Higgs field with the inflaton field. For a successful inflection-point inflation to be consistent with the current cosmological observations, the mass ratios among the U(1)_X gauge boson, the right-handed neutrinos and the U(1)_X Higgs boson are fixed. Focusing on the case that the extra U(1)_X gauge symmetry is mostly aligned along the SM U(1)_Y direction, we investigate a consistency between the inflationary predictions and the latest LHC Run-2 results on the search for a narrow resonance with the di-lepton final state.

  20. Modeling and Assessment of GPS/BDS Combined Precise Point Positioning.

    PubMed

    Chen, Junping; Wang, Jungang; Zhang, Yize; Yang, Sainan; Chen, Qian; Gong, Xiuqiang

    2016-07-22

    The Precise Point Positioning (PPP) technique enables stand-alone receivers to obtain cm-level positioning accuracy. Observations from multi-GNSS systems can provide users with improved positioning accuracy, reliability and availability. In this paper, we present and evaluate GPS/BDS combined PPP models, including the traditional model and a simplified model, in which the inter-system bias (ISB) is treated in different ways. To evaluate the performance of combined GPS/BDS PPP, kinematic and static PPP positions are compared to the IGS daily estimates, where one month of GPS/BDS data from 11 IGS Multi-GNSS Experiment (MGEX) stations is used. The results indicate an apparent improvement of GPS/BDS combined PPP solutions in both the static and kinematic cases, with much smaller standard deviations in the distribution of coordinate RMS statistics. Comparisons between the traditional and simplified combined PPP models show no difference in the coordinate estimates, and the inter-system biases between the GPS and BDS systems are assimilated into the receiver clock, ambiguities and pseudo-range residuals accordingly.
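
    For readers unfamiliar with the ISB term, a generic ionosphere-free pseudorange observation pair for a GPS (G) and a BDS (C) satellite can be sketched as below; this is the textbook-style parameterization, not necessarily the exact traditional or simplified model evaluated by the authors.

        % Ionosphere-free pseudoranges with a receiver inter-system bias on BDS
        P^{G}_{\mathrm{IF}} = \rho^{G} + c\,(dt_r - dt^{G}) + T^{G} + \varepsilon^{G},
        \qquad
        P^{C}_{\mathrm{IF}} = \rho^{C} + c\,(dt_r - dt^{C}) + T^{C} + \mathrm{ISB} + \varepsilon^{C}.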

  1. Development of an Open Rotor Cycle Model in NPSS Using a Multi-Design Point Approach

    NASA Technical Reports Server (NTRS)

    Hendricks, Eric S.

    2011-01-01

    NASA's Environmentally Responsible Aviation Project and Subsonic Fixed Wing Project are focused on developing concepts and technologies which may enable dramatic reductions to the environmental impact of future generation subsonic aircraft (Refs. 1 and 2). The open rotor concept (also referred to as the Unducted Fan or advanced turboprop) may allow the achievement of this objective by reducing engine emissions and fuel consumption. To evaluate its potential impact, an open rotor cycle modeling capability is needed. This paper presents the initial development of an open rotor cycle model in the Numerical Propulsion System Simulation (NPSS) computer program which can then be used to evaluate the potential benefit of this engine. The development of this open rotor model necessitated addressing two modeling needs within NPSS. First, a method for evaluating the performance of counter-rotating propellers was needed. Therefore, a new counter-rotating propeller NPSS component was created. This component uses propeller performance maps developed from historic counter-rotating propeller experiments to determine the thrust delivered and power required. Second, several methods for modeling a counter-rotating power turbine within NPSS were explored. These techniques used several combinations of turbine components within NPSS to provide the necessary power to the propellers. Ultimately, a single turbine component with a conventional turbine map was selected. Using these modeling enhancements, an open rotor cycle model was developed in NPSS using a multi-design point approach. The multi-design point (MDP) approach improves the engine cycle analysis process by making it easier to properly size the engine to meet a variety of thrust targets throughout the flight envelope. A number of design points are considered including an aerodynamic design point, sea-level static, takeoff and top of climb. The development of this MDP model was also enabled by the selection of a simple power

  2. Adaptive genomic divergence under high gene flow between freshwater and brackish-water ecotypes of prickly sculpin (Cottus asper) revealed by Pool-Seq.

    PubMed

    Dennenmoser, Stefan; Vamosi, Steven M; Nolte, Arne W; Rogers, Sean M

    2017-01-01

    Understanding the genomic basis of adaptive divergence in the presence of gene flow remains a major challenge in evolutionary biology. In prickly sculpin (Cottus asper), an abundant euryhaline fish in northwestern North America, high genetic connectivity among brackish-water (estuarine) and freshwater (tributary) habitats of coastal rivers does not preclude the build-up of neutral genetic differentiation and emergence of different life history strategies. Because these two habitats present different osmotic niches, we predicted high genetic differentiation at known teleost candidate genes underlying salinity tolerance and osmoregulation. We applied whole-genome sequencing of pooled DNA samples (Pool-Seq) to explore adaptive divergence between two estuarine and two tributary habitats. Paired-end sequence reads were mapped against genomic contigs of European Cottus, and the gene content of candidate regions was explored based on comparisons with the threespine stickleback genome. Genes showing signals of repeated differentiation among brackish-water and freshwater habitats included functions such as ion transport and structural permeability in freshwater gills, which suggests that local adaptation to different osmotic niches might contribute to genomic divergence among habitats. Overall, the presence of both repeated and unique signatures of differentiation across many loci scattered throughout the genome is consistent with polygenic adaptation from standing genetic variation and locally variable selection pressures in the early stages of life history divergence. © 2016 John Wiley & Sons Ltd.

  3. Some Applications of the Model of the Partition Points on a One-Dimensional Lattice

    NASA Astrophysics Data System (ADS)

    Mejdani, R.; Huseini, H.

    1996-02-01

    We have shown that, by using a model of a gas of partition points on a one-dimensional lattice, we can find some results about the saturation curves for enzyme kinetics or the average domain size, which we had obtained before using correlated-walk theory or a probabilistic (combinatorial) approach. We have studied, using the same model and the same technique, the denaturation process, i.e., the breaking of the hydrogen bonds connecting the two strands, under treatment by heat. Also, we have discussed, without entering into details, the problem related to the spread of an infectious disease and the stochastic model of partition points. We think that this model, being simple and mathematically transparent, can be advantageous for other theoretical investigations in chemistry or modern biology. PACS Nos.: 05.50.+q; 05.70.Ce; 64.10.+h; 87.10.+e; 87.15.Rn

  4. Direct Numerical Simulations of Reflection-Driven, Reduced MHD Turbulence from the Sun to the Alfvén Critical Point

    NASA Astrophysics Data System (ADS)

    Perez, J. C.; Chandran, B. D.

    2013-12-01

    We present direct numerical simulations of inhomogeneous reduced magnetohydrodynamic (RMHD) turbulence between the Sun and the Alfvén critical point. These are the first such simulations that take into account the solar-wind outflow velocity and the radial inhomogeneity of the background solar wind without approximating the nonlinear terms in the governing equations. Our simulation domain is a narrow magnetic flux tube with a square cross section centered on a radial magnetic field line. We impose periodic boundary conditions in the plane perpendicular to the background magnetic field B0. RMHD turbulence is driven by outward-propagating Alfvén waves (z+ fluctuations) launched from the Sun, which undergo partial non-WKB reflection to produce sunward-propagating Alfvén waves (z- fluctuations). Nonlinear interactions between z+ and z- then cause fluctuation energy to cascade from large scales to small scales and dissipate. We present ten simulations with different values of the correlation time τ+c⊙ and perpendicular correlation length L⊥⊙ of outward-propagating Alfvén waves (AWs) at the coronal base. We find that between 15% and 33% of the z+ energy launched into the corona dissipates between the coronal base and Alfvén critical point, which is at rA = 11.1R⊙ in our model solar wind. Between 33% and 40% of this input energy goes into work on the solar-wind outflow, and between 22% and 36% escapes as z+ fluctuations through the simulation boundary at r=rA. Except in the immediate vicinity of r=R⊙, the z± power spectra scale like k⊥^(−α±), where k⊥ is the wavenumber in the plane perpendicular to B0. In our simulation with the smallest value of τ+c⊙ (~2 min) and largest value of L⊥⊙ (~2×10⁴ km), we find that α+ decreases approximately linearly with increasing ln(r), reaching a value of ~1.3 at r=11.1R⊙. Our simulations with larger values of τ+c⊙ exhibit alignment between the contours of constant Φ± and Ω±, where Φ± are the Els

  5. Modeling a distribution of point defects as misfitting inclusions in stressed solids

    NASA Astrophysics Data System (ADS)

    Cai, W.; Sills, R. B.; Barnett, D. M.; Nix, W. D.

    2014-05-01

    The chemical equilibrium distribution of point defects modeled as non-overlapping, spherical inclusions with purely positive dilatational eigenstrain in an isotropically elastic solid is derived. The compressive self-stress inside existing inclusions must be excluded from the stress dependence of the equilibrium concentration of the point defects, because it does no work when a new inclusion is introduced. On the other hand, a tensile image stress field must be included to satisfy the boundary conditions in a finite solid. Through the image stress, existing inclusions promote the introduction of additional inclusions. This is contrary to the prevailing approach in the literature in which the equilibrium point defect concentration depends on a homogenized stress field that includes the compressive self-stress. The shear stress field generated by the equilibrium distribution of such inclusions is proved to be proportional to the pre-existing stress field in the solid, provided that the magnitude of the latter is small, so that a solid containing an equilibrium concentration of point defects can be described by a set of effective elastic constants in the small-stress limit.

  6. One-dimensional gravity in infinite point distributions.

    PubMed

    Gabrielli, A; Joyce, M; Sicard, F

    2009-10-01

    The dynamics of infinite asymptotically uniform distributions of purely self-gravitating particles in one spatial dimension provides a simple and interesting toy model for the analogous three dimensional problem treated in cosmology. In this paper we focus on a limitation of such models as they have been treated so far in the literature: the force, as it has been specified, is well defined in infinite point distributions only if there is a centre of symmetry (i.e., the definition requires explicitly the breaking of statistical translational invariance). The problem arises because naive background subtraction (due to expansion, or by "Jeans swindle" for the static case), applied as in three dimensions, leaves an unregulated contribution to the force due to surface mass fluctuations. Following a discussion by Kiessling of the Jeans swindle in three dimensions, we show that the problem may be resolved by defining the force in infinite point distributions as the limit of an exponentially screened pair interaction. We show explicitly that this prescription gives a well defined (finite) force acting on particles in a class of perturbed infinite lattices, which are the point processes relevant to cosmological N-body simulations. For identical particles the dynamics of the simplest toy model (without expansion) is equivalent to that of an infinite set of points with inverted harmonic oscillator potentials which bounce elastically when they collide. We discuss and compare with previous results in the literature and present new results for the specific case of this simplest (static) model starting from "shuffled lattice" initial conditions. These show qualitative properties of the evolution (notably its "self-similarity") like those in the analogous simulations in three dimensions, which in turn resemble those in the expanding universe.

  7. Kuramoto model with uniformly spaced frequencies: Finite-N asymptotics of the locking threshold.

    PubMed

    Ottino-Löffler, Bertrand; Strogatz, Steven H

    2016-06-01

    We study phase locking in the Kuramoto model of coupled oscillators in the special case where the number of oscillators, N, is large but finite, and the oscillators' natural frequencies are evenly spaced on a given interval. In this case, stable phase-locked solutions are known to exist if and only if the frequency interval is narrower than a certain critical width, called the locking threshold. For infinite N, the exact value of the locking threshold was calculated 30 years ago; however, the leading corrections to it for finite N have remained unsolved analytically. Here we derive an asymptotic formula for the locking threshold when N≫1. The leading correction to the infinite-N result scales like either N^{-3/2} or N^{-1}, depending on whether the frequencies are evenly spaced according to a midpoint rule or an end-point rule. These scaling laws agree with numerical results obtained by Pazó [D. Pazó, Phys. Rev. E 72, 046211 (2005)]. Moreover, our analysis yields the exact prefactors in the scaling laws, which also match the numerics.
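
    As a small illustration of the setting (not of the asymptotic analysis itself), the Python sketch below integrates the Kuramoto model with N evenly spaced natural frequencies (midpoint rule) and tests whether the phases lock for a given frequency-interval width; the coupling, widths and tolerances are arbitrary choices.

        # Kuramoto model with evenly spaced natural frequencies (midpoint rule):
        # integrate the phases and test for locking (all oscillators sharing one
        # drift rate). Parameters are arbitrary; for K = 1 the infinite-N locking
        # threshold for a uniform spread is roughly pi/2 in interval width.
        import numpy as np

        def kuramoto_locked(N=51, width=1.0, K=1.0, dt=0.01, steps=20000):
            omega = width * ((np.arange(N) + 0.5) / N - 0.5)   # midpoint-rule spacing
            theta = np.zeros(N)

            def dtheta(th):
                return omega + (K / N) * np.sum(np.sin(th[None, :] - th[:, None]), axis=1)

            for _ in range(steps):
                theta += dt * dtheta(theta)
            # In a locked state every oscillator drifts at the same rate.
            return float(np.ptp(dtheta(theta))) < 1e-2

        for width in (1.2, 1.8):
            print(f"width {width}: locked = {kuramoto_locked(width=width)}")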

  8. Higher-order triangular spectral element method with optimized cubature points for seismic wavefield modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Youshan, E-mail: ysliu@mail.iggcas.ac.cn; Teng, Jiwen, E-mail: jwteng@mail.iggcas.ac.cn; Xu, Tao, E-mail: xutao@mail.iggcas.ac.cn

    2017-05-01

    The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates its surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant–Friedrichs–Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. In terms of

  9. In silico modeling and molecular dynamic simulation of claudin-1 point mutations in HCV infection.

    PubMed

    Vipperla, Bhavaniprasad; Dass, J Febin Prabhu; Jayanthi, S

    2014-01-01

    Claudin-1 (CLDN1) in association with envelope glycoprotein (CD81) mediates the fusion of HCV into the cytosol. Recent studies have indicated that point mutations in CLDN1 are important for the entry of hepatitis C virus (HCV). To validate these findings, we employed a computational platform to investigate the structural effect of two point mutations (I32M and E48K). Initially, three-dimensional co-ordinates for the CLDN1 receptor sequence were generated. Then, three mutant models were built from the native model structure using the point mutations, including a double-mutant (I32M/E48K) model. Finally, all four model structures, including the native and the three mutant models, were subjected to molecular dynamics (MD) simulation for a period of 25 ns to assess their dynamic behavior. The MD trajectory files were analyzed using cluster and principal component methods. The analysis suggested that either of the single mutations has a negligible effect on the overall structure of CLDN1 compared to the double-mutant form. However, the double-mutant model of CLDN1 shows a significant negative impact through the impairment of H-bonds and a simultaneous increase in solvent-accessible surface area. Our simulation results are clearly consistent with the experimental report suggesting that the CLDN1 receptor distortion is prominent due to the double mutation, with large surface accessibility. This increase in accessible surface area due to the coexistence of the double mutation may be presumed to be one of the key factors that permit HCV attachment and infection.

  10. A 3D Ginibre Point Field

    NASA Astrophysics Data System (ADS)

    Kargin, Vladislav

    2018-06-01

    We introduce a family of three-dimensional random point fields using the concept of the quaternion determinant. The kernel of each field is an n-dimensional orthogonal projection on a linear space of quaternionic polynomials. We find explicit formulas for the basis of the orthogonal quaternion polynomials and for the kernel of the projection. As the number of particles n → ∞, we calculate the scaling limits of the point field in the bulk and at the center of coordinates. We compare our construction with the previously introduced Fermi-sphere point field process.

  11. Supervised Outlier Detection in Large-Scale MVS Point Clouds for 3D City Modeling Applications

    NASA Astrophysics Data System (ADS)

    Stucker, C.; Richard, A.; Wegner, J. D.; Schindler, K.

    2018-05-01

    We propose to use a discriminative classifier for outlier detection in large-scale point clouds of cities generated via multi-view stereo (MVS) from densely acquired images. What makes outlier removal hard are the varying distributions of inliers and outliers across a scene. Heuristic outlier removal using a specific feature that encodes point distribution often delivers unsatisfying results. Although most outliers can be identified correctly (high recall), many inliers are erroneously removed (low precision), too. This aggravates object 3D reconstruction due to missing data. We thus propose to discriminatively learn class-specific distributions directly from the data to achieve high precision. We apply a standard Random Forest classifier that infers a binary label (inlier or outlier) for each 3D point in the raw, unfiltered point cloud and test two approaches for training. In the first, non-semantic approach, features are extracted without considering the semantic interpretation of the 3D points. The trained model approximates the average distribution of inliers and outliers across all semantic classes. Second, semantic interpretation is incorporated into the learning process, i.e. we train separate inlier-outlier classifiers per semantic class (building facades, roof, ground, vegetation, fields, and water). The performance of learned filtering is evaluated on several large SfM point clouds of cities. We find that the results confirm our underlying assumption that discriminatively learning inlier-outlier distributions does improve precision over global heuristics by up to ≈ 12 percentage points. Moreover, semantically informed filtering that models class-specific distributions further improves precision by up to ≈ 10 percentage points, being able to remove very isolated building, roof, and water points while preserving inliers on building facades and vegetation.
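
    A minimal sketch of the per-point binary classification step described above, using scikit-learn's RandomForestClassifier; the geometric features and labels here are synthetic placeholders rather than the authors' feature set or data.

        # Minimal per-point inlier/outlier classification with a Random Forest.
        # Features and labels below are synthetic placeholders, not the paper's data.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)
        n = 5000
        # Hypothetical per-point features, e.g. local density, roughness, height.
        X = rng.normal(size=(n, 3))
        # Synthetic rule standing in for hand-labelled inlier(0)/outlier(1) tags.
        y = ((X[:, 0] < -1.0) | (rng.random(n) < 0.05)).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(X_tr, y_tr)
        print("held-out accuracy:", clf.score(X_te, y_te))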

  12. GPU-accelerated Modeling and Element-free Reverse-time Migration with Gauss Points Partition

    NASA Astrophysics Data System (ADS)

    Zhen, Z.; Jia, X.

    2014-12-01

    The element-free method (EFM) has been applied to seismic modeling and migration. Compared with the finite element method (FEM) and the finite difference method (FDM), it is much cheaper and more flexible because only the information on the nodes and the boundary of the study area is required in the computation. In the EFM, the number of Gauss points should be consistent with the number of model nodes; otherwise the accuracy of the intermediate coefficient matrices would be harmed. Thus, when we increase the nodes of the velocity model in order to obtain higher resolution, we find that the size of the computer's memory becomes a bottleneck. The original EFM can deal with at most 81 × 81 nodes in the case of 2 GB of memory, as tested by Jia and Hu (2006). In order to solve the problem of storage and computation efficiency, we propose a concept of Gauss points partition (GPP), and utilize GPUs to improve the computation efficiency. Considering the characteristics of the Gauss points, the GPP method doesn't influence the propagation of the seismic wave in the velocity model. To overcome the time-consuming computation of the stiffness matrix (K) and the mass matrix (M), we also use the GPUs in our computation program. We employ the compressed sparse row (CSR) format to compress the intermediate sparse matrices and try to simplify the operations by solving the linear equations with the CULA Sparse Conjugate Gradient (CG) solver instead of the linear sparse solver 'PARDISO'. It is observed that our strategy can significantly reduce the computational time of K and M compared with the CPU-based algorithm. The model tested is the Marmousi model. The length of the model is 7425 m and the depth is 2990 m. We discretize the model with 595 × 298 nodes, 300 × 300 Gauss cells and 3 × 3 Gauss points in each cell. In contrast to the computational time of the conventional EFM, the GPU-GPP approach can substantially improve the efficiency. The speedup ratio of the time consumption of computing K, M is 120 and the
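
    The storage-and-solve strategy mentioned (compressed sparse row matrices plus a Conjugate Gradient solve) can be illustrated with SciPy on a small symmetric positive-definite system; this CPU sketch is a stand-in for, not a reproduction of, the GPU/CULA Sparse implementation used in the study.

        # Assemble a sparse SPD matrix in CSR format and solve it with the
        # Conjugate Gradient method -- a CPU stand-in for the CSR + CG strategy.
        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import cg

        n = 1000
        # 1D Laplacian-like stiffness matrix (tridiagonal, SPD), stored as CSR.
        main = 2.0 * np.ones(n)
        off = -1.0 * np.ones(n - 1)
        K = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csr")

        b = np.ones(n)
        x, info = cg(K, b, maxiter=5000)   # info == 0 signals convergence
        print("converged:", info == 0, " residual norm:", np.linalg.norm(K @ x - b))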

  13. Point Cloud and Digital Surface Model Generation from High Resolution Multiple View Stereo Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Gong, K.; Fritsch, D.

    2018-05-01

    Nowadays, multiple-view stereo satellite imagery has become a valuable data source for digital surface model generation and 3D reconstruction. In 2016, a well-organized, publicly available multiple-view stereo benchmark for commercial satellite imagery was released by the Johns Hopkins University Applied Physics Laboratory, USA. This benchmark motivates us to explore methods that can generate accurate digital surface models from a large number of high-resolution satellite images. In this paper, we propose a pipeline for processing the benchmark data into digital surface models. As a pre-processing step, we filter all the possible image pairs according to the incidence angle and capture date. With the selected image pairs, the relative bias-compensated model is applied for relative orientation. After epipolar image pair generation, dense image matching and triangulation, the 3D point clouds and DSMs are acquired. The DSMs are aligned to a quasi-ground plane by the relative bias-compensated model. We apply a median filter to generate the fused point cloud and DSM. By comparing with the reference LiDAR DSM, the accuracy, completeness and robustness are evaluated. The results show that the point cloud reconstructs the surface, including small structures, and that the fused DSM generated by our pipeline is accurate and robust.

  14. Integrating UHPLC-MS/MS quantification and DAS analysis to investigate the effects of wine-processing on the tissue distributions of bioactive constituents of herbs in rats: Exemplarily shown for Dipsacus asper.

    PubMed

    Tao, Yi; Du, Yingshan; Li, Weidong; Cai, Baochang; Di, Liuqing; Shi, Liyun; Hu, Lihong

    2017-06-15

    Wine-processing, i.e. sautéing with rice wine, changes the inclination and direction of herbs' actions. After wine-processing, the liver- and kidney-nourishing effects of Dipsacus asper are strengthened. However, the underlying mechanism remains elusive. The present study establishes and validates a UHPLC-MS/MS approach to determine six bioactive constituents in tissue samples, including loganin, loganic acid, chlorogenic acid, 3,5-dicaffeoylquinic acid, 4-caffeoylquinic acid and asperosaponin VI, and applies the approach to a comparative tissue distribution study of raw and wine-processed Dipsacus asper in rats. A Shimadzu UHPLC system coupled with a triple quadrupole mass spectrometer was employed for analysis of the six analytes using multiple reaction monitoring (MRM) mode. A one-step protein precipitation with methanol was employed to extract the six analytes from tissues. Chloramphenicol and glycyrrhetinic acid were selected as internal standards. The proposed approach was fully validated in terms of linearity, sensitivity, precision, repeatability and recovery. Our results revealed that all of the calibration curves displayed good linear regression (r² > 0.9991). Intra- and inter-assay variability for all analytes ranged from -4.62 to 4.93% and from -4.98 to 4.92%, respectively. The recovery rates for the analytes were determined to be 88.3-100.1%. All the samples showed satisfactory precision and accuracy after various stability tests, including storage at 25°C for 4 h, -80°C for 30 days, three freeze-thaw cycles, and 4°C for 24 h. Tissue pharmacokinetic parameters including AUC(0-t), t1/2, Tmax and Cmax were calculated. Collectively, the Cmax and AUC(0-t) of the six analytes in the wine-processed group were remarkably elevated (p < 0.05) in the rat liver and kidney as compared with those of the raw group. But in the rat heart and spleen, the Cmax and AUC(0-t) of asperosaponin VI were decreased as compared with those of

  15. Modelling field scale spatial variation in water run-off, soil moisture, N2O emissions and herbage biomass of a grazed pasture using the SPACSYS model.

    PubMed

    Liu, Yi; Li, Yuefen; Harris, Paul; Cardenas, Laura M; Dunn, Robert M; Sint, Hadewij; Murray, Phil J; Lee, Michael R F; Wu, Lianhai

    2018-04-01

    In this study, we evaluated the ability of the SPACSYS model to simulate water run-off, soil moisture, N2O fluxes and grass growth using data generated from a field of the North Wyke Farm Platform. The field-scale model is adapted via a linked and grid-based approach (grid-to-grid) to account for not only temporal dynamics but also the within-field spatial variation in these key ecosystem indicators. Spatial variability in nutrient and water presence at the field-scale is a key source of uncertainty when quantifying nutrient cycling and water movement in an agricultural system. Results demonstrated that the new spatially distributed version of SPACSYS provided a worthy improvement in accuracy over the standard (single-point) version for biomass productivity. No difference in model prediction performance was observed for water run-off, reflecting the closed-system nature of this variable. Similarly, no difference in model prediction performance was found for N2O fluxes, but here the N2O predictions were noticeably poor in both cases. Further developmental work, informed by this study's findings, is proposed to improve model predictions for N2O. Soil moisture results with the spatially distributed version appeared promising but this promise could not be objectively verified.

  16. Self-Exciting Point Process Models of Civilian Deaths in Iraq

    DTIC Science & Technology

    2010-01-01

    ... Tita, 2009), we propose that violence in Iraq arises from a combination of exogenous and endogenous effects. Spatial heterogeneity in background ... Schoenberg, and Tita (2010), where they analyze burglary and robbery data in Los Angeles. Related work has also been done in Short et al. (2009), where ... Control, 4, 215–240. Mohler, G. O., Short, M. B., Brantingham, P. J., Schoenberg, F. P., & Tita, G. E. (2010). Self-exciting point process modeling of ...
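
    For context on what a self-exciting point process is, the sketch below simulates a temporal Hawkes process with an exponential kernel via thinning; the parameters are arbitrary and the example is purely illustrative, unrelated to the Iraq data or the cited studies.

        # Simulate a temporal self-exciting (Hawkes) point process with intensity
        # lambda(t) = mu + alpha * beta * sum_{t_i < t} exp(-beta * (t - t_i)),
        # using a thinning algorithm. Parameters are illustrative only.
        import numpy as np

        def intensity(t, events, mu, alpha, beta):
            if not events:
                return mu
            return mu + alpha * beta * np.sum(np.exp(-beta * (t - np.array(events))))

        def simulate_hawkes(mu=0.5, alpha=0.6, beta=1.2, t_max=200.0, seed=0):
            rng = np.random.default_rng(seed)
            events, t = [], 0.0
            while t < t_max:
                # Intensity decays between events, so its current value is an upper bound.
                lam_bar = intensity(t, events, mu, alpha, beta)
                t += rng.exponential(1.0 / lam_bar)
                if t >= t_max:
                    break
                if rng.random() * lam_bar <= intensity(t, events, mu, alpha, beta):
                    events.append(t)
            return np.array(events)

        ev = simulate_hawkes()
        print(len(ev), "events; empirical rate ~", round(len(ev) / 200.0, 2))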

  17. Role of point defects and HfO2/TiN interface stoichiometry on effective work function modulation in ultra-scaled complementary metal-oxide-semiconductor devices

    NASA Astrophysics Data System (ADS)

    Pandey, R. K.; Sathiyanarayanan, Rajesh; Kwon, Unoh; Narayanan, Vijay; Murali, K. V. R. M.

    2013-07-01

    We investigate the physical properties of a portion of the gate stack of an ultra-scaled complementary metal-oxide-semiconductor (CMOS) device. The effects of point defects, such as oxygen vacancy, oxygen, and aluminum interstitials at the HfO2/TiN interface, on the effective work function of TiN are explored using density functional theory. We compute the diffusion barriers of such point defects in the bulk TiN and across the HfO2/TiN interface. Diffusion of these point defects across the HfO2/TiN interface occurs during the device integration process. This results in variation of the effective work function and hence in the threshold voltage variation in the devices. Further, we simulate the effects of varying the HfO2/TiN interface stoichiometry on the effective work function modulation in these extremely-scaled CMOS devices. Our results show that the interface rich in nitrogen gives higher effective work function, whereas the interface rich in titanium gives lower effective work function, compared to a stoichiometric HfO2/TiN interface. This theoretical prediction is confirmed by the experiment, demonstrating over 700 meV modulation in the effective work function.

  18. Experimental and computational models of neurite extension at a choice point in response to controlled diffusive gradients

    NASA Astrophysics Data System (ADS)

    Catig, G. C.; Figueroa, S.; Moore, M. J.

    2015-08-01

    Objective. Axons are guided toward desired targets through a series of choice points that they navigate by sensing cues in the cellular environment. A better understanding of how microenvironmental factors influence neurite growth during development can inform strategies to address nerve injury. Therefore, there is a need for biomimetic models to systematically investigate the influence of guidance cues at such choice points. Approach. We ran an adapted in silico biased turning axon growth model under the influence of nerve growth factor (NGF) and compared the results to corresponding in vitro experiments. We examined whether growth simulations were predictive of neurite population behavior at a choice point. We used a biphasic micropatterned hydrogel system consisting of an outer cell-restrictive mold that enclosed a bifurcated cell-permissive region and placed a well near a bifurcating end to allow proteins to diffuse and form a gradient. Experimental diffusion profiles in these constructs were used to validate a diffusion computational model that utilized experimentally measured diffusion coefficients in hydrogels. The computational diffusion model was then used to establish defined soluble gradients within the permissive region of the hydrogels and maintain the profiles in physiological ranges for an extended period of time. Computational diffusion profiles informed the neurite growth model, which was compared with neurite growth experiments in the bifurcating hydrogel constructs. Main results. Results indicated that when applied to the constrained choice point geometry, the biased turning model predicted experimental behavior closely. Results for both simulated and in vitro neurite growth studies showed a significant chemoattractive response toward the bifurcated end containing an NGF gradient compared to the control, though some neurites were found in the end with no NGF gradient. Significance. The integrated model of neurite growth we describe will allow

  19. Reconstruction of forest geometries from terrestrial laser scanning point clouds for canopy radiative transfer modelling

    NASA Astrophysics Data System (ADS)

    Bremer, Magnus; Schmidtner, Korbinian; Rutzinger, Martin

    2015-04-01

    The architecture of forest canopies is a key parameter for forest ecological issues, helping to model the variability of wood biomass and foliage in space and time. In order to understand the nature of subpixel effects of optical space-borne sensors with coarse spatial resolution, hypothetical 3D canopy models are widely used for the simulation of radiative transfer in forests. Thereby, radiation is traced through the atmosphere and canopy geometries until it reaches the optical sensor. For a realistic simulation scene, we decompose terrestrial laser scanning point cloud data of leaf-off larch forest plots in the Austrian Alps and reconstruct detailed, model-ready input data for radiative transfer simulations. The point clouds are pre-classified into primitive classes using Principal Component Analysis (PCA) with scale-adapted radius neighbourhoods. Elongated point structures are extracted as tree trunks. The tree trunks are used as seeds for a Dijkstra-growing procedure, in order to obtain single-tree segmentation in the interlinked canopies. For the optimized reconstruction of branching architectures as vector models, point cloud skeletonisation is used in combination with an iterative Dijkstra-growing and by applying distance constraints. This allows conducting a hierarchical reconstruction preferring the tree trunk and higher-order branches and avoiding over-skeletonization effects. Based on the reconstructed branching architectures, larch needles are modelled based on the hierarchical level of branches and the geometrical openness of the canopy. For radiative transfer simulations, branch architectures are used as mesh geometries representing branches as cylindrical pipes. Needles are either used as meshes or as voxel-turbids. The presented workflow allows automatic classification and single-tree segmentation in interlinked canopies. The iterative Dijkstra-growing using distance constraints generated realistic reconstruction results. As the mesh representation

  20. Micromechanics of sea ice frictional slip from test basin scale experiments

    NASA Astrophysics Data System (ADS)

    Sammonds, Peter R.; Hatton, Daniel C.; Feltham, Daniel L.

    2017-02-01

    We have conducted a series of high-resolution friction experiments on large floating saline ice floes in an environmental test basin. In these experiments, a central ice floe was pushed between two other floes, sliding along two interfacial faults. The frictional motion was predominantly stick-slip. Shear stresses, normal stresses, local strains and slip displacement were measured along the sliding faults, and acoustic emissions were monitored. High-resolution measurements during a single stick-slip cycle at several positions along the fault allowed us to identify two phases of frictional slip: a nucleation phase, where a nucleation zone begins to slip before the rest of the fault, and a propagation phase when the entire fault is slipping. This is slip-weakening behaviour. We have therefore characterized what we consider to be a key deformation mechanism in Arctic Ocean dynamics. In order to understand the micromechanics of sea ice friction, we have employed a theoretical constitutive relation (i.e. an equation for shear stress in terms of temperature, normal load, acceleration, velocity and slip displacement) derived from the physics of asperity-asperity contact and sliding (Hatton et al. 2009 Phil. Mag. 89, 2771-2799 (doi:10.1080/14786430903113769)). We find that our experimental data conform reasonably with this frictional law once slip weakening is introduced. We find that the constitutive relation follows Archard's law rather than Amontons' law, with τ ∝ σn^n (where τ is the shear stress and σn is the normal stress) and n = 26/27, with a fractal asperity distribution, where the frictional shear stress τ = f_fractal T_ml w_s, where f_fractal is the fractal asperity height distribution, T_ml is the shear strength for frictional melting and lubrication and w_s is the slip weakening. We can therefore deduce that the interfacial faults failed in shear for these experimental conditions through processes of brittle failure of asperities in shear, and, at higher velocities

  1. Instantaneous nonlinear assessment of complex cardiovascular dynamics by Laguerre-Volterra point process models.

    PubMed

    Valenza, Gaetano; Citi, Luca; Barbieri, Riccardo

    2013-01-01

    We report an exemplary study of instantaneous assessment of cardiovascular dynamics performed using point-process nonlinear models based on Laguerre expansion of the linear and nonlinear Wiener-Volterra kernels. As quantifiers, instantaneous measures such as high-order spectral features and Lyapunov exponents can be estimated from a quadratic and cubic autoregressive formulation of the model's first-order moment, respectively. Here, these measures are evaluated on heartbeat series coming from 16 healthy subjects and 14 patients with Congestive Heart Failure (CHF). Data were gathered from the on-line repository PhysioBank, which has been taken as a landmark for testing nonlinear indices. Results show that the proposed nonlinear Laguerre-Volterra point-process methods are able to track the nonlinear and complex cardiovascular dynamics, distinguishing significantly between CHF and healthy heartbeat series.

  2. Modeling of Aerobrake Ballute Stagnation Point Temperature and Heat Transfer to Inflation Gas

    NASA Technical Reports Server (NTRS)

    Bahrami, Parviz A.

    2012-01-01

    A trailing Ballute drag device concept for spacecraft aerocapture is considered. A thermal model for calculation of the Ballute membrane temperature and the inflation gas temperature is developed. An algorithm capturing the most salient features of the concept is implemented. In conjunction with the thermal model, trajectory calculations for two candidate missions, the Titan Explorer and Neptune Orbiter missions, are used to estimate the stagnation point temperature and the inflation gas temperature. Radiation from both sides of the membrane at the stagnation point and conduction to the inflating gas are included. The results showed that radiation from the membrane and, to a much lesser extent, conduction to the inflating gas are likely to be the controlling heat transfer mechanisms and that the increase in gas temperature due to aerodynamic heating is of secondary importance.

  3. A comparison of neural network-based predictions of foF2 with the IRI-2012 model at conjugate points in Southeast Asia

    NASA Astrophysics Data System (ADS)

    Wichaipanich, Noraset; Hozumi, Kornyanat; Supnithi, Pornchai; Tsugawa, Takuya

    2017-06-01

    This paper presents the development of a Neural Network (NN) model for the prediction of the F2 layer critical frequency (foF2) at three ionosonde stations near the magnetic equator of Southeast Asia. Two of these stations, Chiang Mai (18.76°N, 98.93°E, dip angle 12.7°N) and Kototabang (0.2°S, 100.3°E, dip angle 10.1°S), are at the conjugate points, while the Chumphon (10.72°N, 99.37°E, dip angle 3.0°N) station is near the equator. To produce the model, a feed-forward network with the backpropagation algorithm is applied. The NN is trained with the daily hourly values of foF2 during 2004-2012, except 2009, and the selected input parameters, which affect the foF2 variability, include day number (DN), hour number (HR), solar zenith angle (C), geographic latitude (θ), magnetic inclination (I), magnetic declination (D) and angle of meridian (M) relative to the sub-solar point, the 7-day mean of F10.7 (F10.7_7), the 81-day mean of SSN (SSN_81) and the 2-day mean of Ap (Ap_2). The foF2 data of 2009 and 2013 are then used for testing the NN model during the foF2 interpolation and extrapolation, respectively. To examine the performance of the proposed NN, the root mean square errors (RMSE) of the proposed NN model and the IRI-2012 (CCIR and URSI options) model with respect to the observed foF2 are compared. In general, the results show the same trends in foF2 variation between the models (NN and IRI-2012) and the observations, in that they are higher during the day and lower at night. In addition, the results demonstrate that the proposed NN model can predict the foF2 values more closely during daytime than during nighttime, as supported by the lower RMSE values during daytime (0.5 ≤ RMSE ≤ 1.0 for Chumphon and Kototabang, 0.7 ≤ RMSE ≤ 1.2 at Chiang Mai) and the higher levels during nighttime (0.8 ≤ RMSE ≤ 1.5 for Chumphon and Kototabang, 1.2 ≤ RMSE ≤ 2.0 at Chiang Mai). Furthermore, the NN model predicts the foF2 values more accurately than the IRI model at the
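
    A compact stand-in for the kind of feed-forward regression described above (not the authors' network, inputs or data): scikit-learn's MLPRegressor trained on synthetic predictors with a diurnal, seasonal and solar-activity flavour.

        # Feed-forward neural-network regression sketch in the spirit of the foF2
        # model above. Inputs and target are synthetic placeholders, not ionosonde data.
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(2)
        n = 4000
        hour = rng.uniform(0, 24, n)
        doy = rng.uniform(1, 366, n)
        f107 = rng.uniform(65, 200, n)
        # Synthetic diurnal/seasonal/solar-activity dependence standing in for foF2.
        foF2 = (6 + 3 * np.sin(2 * np.pi * hour / 24) + 0.02 * f107
                + 0.5 * np.cos(2 * np.pi * doy / 365) + rng.normal(0, 0.5, n))

        X = np.column_stack([np.sin(2 * np.pi * hour / 24), np.cos(2 * np.pi * hour / 24),
                             np.sin(2 * np.pi * doy / 365), np.cos(2 * np.pi * doy / 365), f107])
        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                                           random_state=0))
        model.fit(X[:3000], foF2[:3000])
        pred = model.predict(X[3000:])
        print("held-out RMSE:", float(np.sqrt(np.mean((pred - foF2[3000:]) ** 2))))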

  4. Dynamic Effective Mass of Granular Media

    NASA Astrophysics Data System (ADS)

    Hsu, Chaur-Jian; Johnson, David L.; Ingale, Rohit A.; Valenza, John J.; Gland, Nicolas; Makse, Hernán A.

    2009-02-01

    We develop the concept of a frequency-dependent effective mass, M̃(ω), of jammed granular materials which occupy a rigid cavity to a filling fraction of 48%, the remaining volume being air at normal room conditions or controlled humidity. The dominant features of M̃(ω) provide signatures of the dissipation of acoustic modes, elasticity, and aging effects in the granular medium. We perform humidity-controlled experiments and interpret the data in terms of a continuum model and a “trap” model of thermally activated capillary bridges at the contact points. The results suggest that attenuation of acoustic waves in granular materials can be influenced significantly by the kinetics of capillary condensation between the asperities at the contacts.

  5. Coronary risk assessment by point-based vs. equation-based Framingham models: significant implications for clinical care.

    PubMed

    Gordon, William J; Polansky, Jesse M; Boscardin, W John; Fung, Kathy Z; Steinman, Michael A

    2010-11-01

    US cholesterol guidelines use original and simplified versions of the Framingham model to estimate future coronary risk and thereby classify patients into risk groups with different treatment strategies. We sought to compare risk estimates and risk group classification generated by the original, complex Framingham model and the simplified, point-based version. We assessed 2,543 subjects age 20-79 from the 2001-2006 National Health and Nutrition Examination Surveys (NHANES) for whom Adult Treatment Panel III (ATP-III) guidelines recommend formal risk stratification. For each subject, we calculated the 10-year risk of major coronary events using the original and point-based Framingham models, and then compared differences in these risk estimates and whether these differences would place subjects into different ATP-III risk groups (<10% risk, 10-20% risk, or >20% risk). Using standard procedures, all analyses were adjusted for survey weights, clustering, and stratification to make our results nationally representative. Among 39 million eligible adults, the original Framingham model categorized 71% of subjects as having "moderate" risk (<10% risk of a major coronary event in the next 10 years), 22% as having "moderately high" (10-20%) risk, and 7% as having "high" (>20%) risk. Estimates of coronary risk by the original and point-based models often differed substantially. The point-based system classified 15% of adults (5.7 million) into different risk groups than the original model, with 10% (3.9 million) misclassified into higher risk groups and 5% (1.8 million) into lower risk groups, for a net impact of classifying 2.1 million adults into higher risk groups. These risk group misclassifications would impact guideline-recommended drug treatment strategies for 25-46% of affected subjects. Patterns of misclassifications varied significantly by gender, age, and underlying CHD risk. Compared to the original Framingham model, the point-based version misclassifies millions

  6. Extended Fitts' model of pointing time in eye-gaze input system - Incorporating effects of target shape and movement direction into modeling.

    PubMed

    Murata, Atsuo; Fukunaga, Daichi

    2018-04-01

    This study attempted to investigate the effects of the target shape and the movement direction on the pointing time using an eye-gaze input system, and to extend Fitts' model so that these factors are incorporated into the model and the predictive power of Fitts' model is enhanced. The target shape, the target size, the movement distance, and the direction of target presentation were set as within-subject experimental variables. The target shapes included a circle and rectangles with aspect ratios of 1:1, 1:2, 1:3, and 1:4. The movement direction included eight directions: upper, lower, left, right, upper left, upper right, lower left, and lower right. On the basis of the data for identifying the effects of the target shape and the movement direction on the pointing time, an attempt was made to develop a generalized and extended Fitts' model that took into account the movement direction and the target shape. As a result, the generalized and extended model was found to fit better to the experimental data, and to be more effective for predicting the pointing time for a variety of human-computer interaction (HCI) tasks using an eye-gaze input system. Copyright © 2017. Published by Elsevier Ltd.
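
    For reference, the baseline Fitts' law underlying the extension, together with one generic way of letting the fit depend on target shape and movement direction, can be sketched as follows (the extended form here is a schematic regression, not the authors' exact model):

        % Baseline Fitts' law (Shannon formulation) for pointing time MT,
        % with target distance D and width W:
        MT = a + b\,\log_2\!\left(\frac{D}{W} + 1\right)
        % A schematic extension with categorical shape (S) and direction (\theta) terms:
        MT = a + b\,\log_2\!\left(\frac{D}{W} + 1\right)
             + \sum_k c_k\,\mathrm{I}[S = s_k] + \sum_m d_m\,\mathrm{I}[\theta = \theta_m]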

  7. A study of the wear behaviour of ion implanted pure iron

    NASA Astrophysics Data System (ADS)

    Goode, P. D.; Peacock, A. T.; Asher, J.

    1983-05-01

    The technique of Thin Layer Activation (TLA) has been used to monitor disc wear in pin-on-disc wear tests. By simultaneously monitoring the pin wear, the relationship between the wear rates of the two components of the wear couple has been studied. Tests were carried out using untreated pins wearing against ion-implanted and untreated pure iron discs. The ratio of pin/disc volumetric wear rates was found to be constant in tests with unimplanted discs. In the implanted case the ratio was 8 initially, rising to the unimplanted value of 24 by a sliding distance of 25 km. The relationship between pin and disc wear after nitrogen implantation of the disc was approximately independent of dose between values of 7×10¹⁶ and 1.2×10¹⁸ N atoms cm⁻². The actual wear rates of both pin and disc were significantly lower after implantation, with the greater effects being observed on the unimplanted pin. The effects are explained in terms of the model of oxidative wear. In the unimplanted case the high pin wear relative to disc wear is considered to result from the higher mean temperature of pin asperities. Implantation appears to alter the mean asperity temperatures in such a way as to reduce the oxidation rate of the pin preferentially. Alternatively, the effect of the implantation could be to reduce the critical thickness for removal of oxide formed on disc asperities.

  8. Relativistic Hamiltonian dynamics for N point particles

    NASA Astrophysics Data System (ADS)

    King, M. J.

    1980-08-01

    The theory is quantized canonically to give a relativistic quantum mechanics for N particles. The existence of such a theory has been in doubt since the proof of the No-interaction theorem. However, such a theory does exist and was generalized. This dynamics is expressed in terms of N + 1 pairs of canonical four-vectors (center-of-momentum variables or CMV). A gauge-independent reduction due to N + 3 first-class kinematic constraints leads to a 6N + 2 dimensional minimum kinematic phase space, K. The kinematics and dynamics of particles with intrinsic spin were also considered. To this end, known constraint techniques were generalized to make use of graded Lie algebras. The (Poincaré) invariant Hamiltonian is specified in terms of the gauge-invariant variables of K. The covariant worldline variables of each particle were found to be gauge dependent. As such, they will usually not satisfy a canonical algebra. An exception exists for free particles. The No-interaction theorem therefore is not violated.

  9. Low-temperature wafer-level gold thermocompression bonding: modeling of flatness deviations and associated process optimization for high yield and tough bonds

    NASA Astrophysics Data System (ADS)

    Stamoulis, Konstantinos; Tsau, Christine H.; Spearing, S. Mark

    2005-01-01

    Wafer-level, thermocompression bonding is a promising technique for MEMS packaging. The quality of the bond is critically dependent on the interaction between flatness deviations, the gold film properties and the process parameters and tooling used to achieve the bonds. The effect of flatness deviations on the resulting bond is investigated in the current work. The strain energy release rate associated with the elastic deformation required to overcome wafer bow is calculated. A contact yield criterion is used to examine the pressure and temperature conditions required to flatten surface roughness asperities in order to achieve bonding over the full apparent area. The results are compared to experimental data of bond yield and toughness obtained from four-point bend delamination testing and microscopic observations of the fractured surfaces. Conclusions from the modeling and experiments indicate that wafer bow has negligible effect on determining the variability of bond quality and that the well-bonded area is increased with increasing bonding pressure. The enhanced understanding of the underlying deformation mechanisms allows for a better controlled trade-off between the bonding pressure and temperature.

  10. Low-temperature wafer-level gold thermocompression bonding: modeling of flatness deviations and associated process optimization for high yield and tough bonds

    NASA Astrophysics Data System (ADS)

    Stamoulis, Konstantinos; Tsau, Christine H.; Spearing, S. Mark

    2004-12-01

    Wafer-level, thermocompression bonding is a promising technique for MEMS packaging. The quality of the bond is critically dependent on the interaction between flatness deviations, the gold film properties and the process parameters and tooling used to achieve the bonds. The effect of flatness deviations on the resulting bond is investigated in the current work. The strain energy release rate associated with the elastic deformation required to overcome wafer bow is calculated. A contact yield criterion is used to examine the pressure and temperature conditions required to flatten surface roughness asperities in order to achieve bonding over the full apparent area. The results are compared to experimental data of bond yield and toughness obtained from four-point bend delamination testing and microscopic observations of the fractured surfaces. Conclusions from the modeling and experiments indicate that wafer bow has negligible effect on determining the variability of bond quality and that the well-bonded area is increased with increasing bonding pressure. The enhanced understanding of the underlying deformation mechanisms allows for a better controlled trade-off between the bonding pressure and temperature.

  11. Two-point modeling of SOL losses of HHFW power in NSTX

    NASA Astrophysics Data System (ADS)

    Kish, Ayden; Perkins, Rory; Ahn, Joon-Wook; Diallo, Ahmed; Gray, Travis; Hosea, Joel; Jaworski, Michael; Kramer, Gerrit; Leblanc, Benoit; Sabbagh, Steve

    2017-10-01

    High-harmonic fast-wave (HHFW) heating is a heating and current-drive scheme on the National Spherical Torus eXperiment (NSTX) complementary to neutral beam injection. Previous experiments suggest that a significant fraction, up to 50%, of the HHFW power is promptly lost to the scrape-off layer (SOL). Research indicates that the lost power reaches the divertor via wave propagation and is converted to a heat flux at the divertor through RF rectification, rather than heating the SOL plasma at the midplane. This counter-intuitive hypothesis is investigated using a simplified two-point model relating plasma parameters at the divertor to those at the midplane. Taking measurements at the divertor region of NSTX as input, this two-point model is used to predict midplane parameters, using the predicted heat flux as an indicator of power input to the SOL. These predictions are compared to measurements at the midplane to evaluate the extent to which they are consistent with experiment. This work was made possible by funding from the Department of Energy for the Summer Undergraduate Laboratory Internship (SULI) program. This work is supported by the US DOE Contract No. DE-AC02-09CH11466.
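
    For readers unfamiliar with it, the textbook two-point scrape-off-layer model links upstream (midplane, subscript u) and target (divertor, subscript t) quantities via parallel heat conduction, pressure balance and sheath-limited power flow; the relations below indicate the generic structure only and are not necessarily the exact form applied to the NSTX data.

        % Standard two-point SOL model (textbook form):
        T_u^{7/2} = T_t^{7/2} + \frac{7}{2}\,\frac{q_\parallel L_\parallel}{\kappa_{0e}}   % parallel electron conduction
        \qquad
        n_u T_u = 2\,n_t T_t                                                               % total pressure balance
        \qquad
        q_\parallel = \gamma\, n_t\, k_B T_t\, c_{s,t}                                     % sheath-limited power flux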

  12. Augmenting epidemiological models with point-of-care diagnostics data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pullum, Laura L.; Ramanathan, Arvind; Nutaro, James J.

    Although adoption of newer Point-of-Care (POC) diagnostics is increasing, there is a significant challenge using POC diagnostics data to improve epidemiological models. In this work, we propose a method to process zip-code level POC datasets and apply these processed data to calibrate an epidemiological model. We specifically develop a calibration algorithm using simulated annealing and calibrate a parsimonious equation-based model of modified Susceptible-Infected-Recovered (SIR) dynamics. The results show that parsimonious models are remarkably effective in predicting the dynamics observed in the number of infected patients and our calibration algorithm is sufficiently capable of predicting peak loads observed in POC diagnostics data while staying within reasonable and empirical parameter ranges reported in the literature. Additionally, we explore the future use of the calibrated values by testing the correlation between peak load and population density from Census data. Our results show that linearity assumptions for the relationships among various factors can be misleading, therefore further data sources and analysis are needed to identify relationships between additional parameters and existing calibrated ones. As a result, calibration approaches such as ours can determine the values of newly added parameters along with existing ones and enable policy-makers to make better multi-scale decisions.

  13. Augmenting epidemiological models with point-of-care diagnostics data

    DOE PAGES

    Pullum, Laura L.; Ramanathan, Arvind; Nutaro, James J.; ...

    2016-04-20

    Although adoption of newer Point-of-Care (POC) diagnostics is increasing, there is a significant challenge using POC diagnostics data to improve epidemiological models. In this work, we propose a method to process zip-code level POC datasets and apply these processed data to calibrate an epidemiological model. We specifically develop a calibration algorithm using simulated annealing and calibrate a parsimonious equation-based model of modified Susceptible-Infected-Recovered (SIR) dynamics. The results show that parsimonious models are remarkably effective in predicting the dynamics observed in the number of infected patients and our calibration algorithm is sufficiently capable of predicting peak loads observed in POC diagnostics data while staying within reasonable and empirical parameter ranges reported in the literature. Additionally, we explore the future use of the calibrated values by testing the correlation between peak load and population density from Census data. Our results show that linearity assumptions for the relationships among various factors can be misleading, therefore further data sources and analysis are needed to identify relationships between additional parameters and existing calibrated ones. As a result, calibration approaches such as ours can determine the values of newly added parameters along with existing ones and enable policy-makers to make better multi-scale decisions.
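
    As a rough illustration of the workflow described in the two records above (not the authors' code), the sketch below calibrates the transmission and recovery rates of a simple SIR model against a synthetic weekly case series using SciPy's dual-annealing optimizer; the case counts, population size, and parameter bounds are invented for the example.

    ```python
    import numpy as np
    from scipy.integrate import odeint
    from scipy.optimize import dual_annealing

    # Hypothetical weekly counts of infected patients (e.g. aggregated POC reports)
    observed = np.array([3, 7, 15, 32, 60, 95, 120, 110, 80, 50, 28, 14])
    weeks = np.arange(len(observed))
    N = 10_000  # assumed population of the zip-code area

    def sir(y, t, beta, gamma):
        S, I, R = y
        dS = -beta * S * I / N
        dI = beta * S * I / N - gamma * I
        dR = gamma * I
        return dS, dI, dR

    def simulate_infected(beta, gamma):
        y0 = (N - observed[0], observed[0], 0.0)
        sol = odeint(sir, y0, weeks, args=(beta, gamma))
        return sol[:, 1]  # infected compartment

    def loss(params):
        beta, gamma = params
        return np.sum((simulate_infected(beta, gamma) - observed) ** 2)

    # Simulated-annealing style global search over plausible parameter ranges
    result = dual_annealing(loss, bounds=[(0.1, 5.0), (0.05, 2.0)], seed=1)
    beta_hat, gamma_hat = result.x
    print(f"calibrated beta={beta_hat:.2f}, gamma={gamma_hat:.2f}, "
          f"predicted peak={simulate_infected(beta_hat, gamma_hat).max():.0f}")
    ```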

  14. Reconstruction of 3d Models from Point Clouds with Hybrid Representation

    NASA Astrophysics Data System (ADS)

    Hu, P.; Dong, Z.; Yuan, P.; Liang, F.; Yang, B.

    2018-05-01

    The three-dimensional (3D) reconstruction of urban buildings from point clouds has long been an active topic in applications related to human activities. However, because the structures differ significantly in complexity, 3D reconstruction remains a challenging task, especially for freeform surfaces. In this paper, we present a new reconstruction algorithm that represents the 3D model of a building as a combination of regular structures and irregular surfaces, where the regular structures are parameterized plane primitives and the irregular surfaces are expressed as meshes. The extraction of irregular surfaces starts with an over-segmentation of the unstructured point data; a region-growing approach based on the adjacency graph of super-voxels is then applied to merge these super-voxels, and the freeform surfaces are clustered from the voxels filtered by a thickness threshold. To obtain the regular planar primitives, the remaining voxels with larger flatness are further divided into multiscale super-voxels as basic units, and the final segmented planes are enriched and refined in a mutually reinforcing manner under the framework of a global energy optimization. We implemented the proposed algorithms and tested them mainly on two point clouds that differ in point density and urban character; experimental results on complex building structures illustrate the efficacy of the proposed framework.

  15. Mathematical model for calculation of the heat-hydraulic modes of heating points of heat-supplying systems

    NASA Astrophysics Data System (ADS)

    Shalaginova, Z. I.

    2016-03-01

    The mathematical model and calculation method for the thermal-hydraulic modes of heat points, based on the theory of hydraulic circuits and being developed at the Melentiev Energy Systems Institute, are presented. The redundant circuit of the heat point was developed, in which all possible connecting circuits (CC) of the heat engineering equipment and the places of possible installation of control valves were inserted. It allows simulating the operating modes both at central heat points (CHP) and individual heat points (IHP). The configuration of the desired circuit is carried out automatically by removing the unnecessary links. The following circuits connecting the heating systems (HS) are considered: the dependent circuit (direct and through a mixing elevator) and the independent one (through a heater). The following connecting circuits of the hot water supply (HWS) load were considered: an open CC (direct water pumping from the pipelines of heat networks) and a closed CC with the HWS heaters connected in single-level (serial and parallel) and two-level (sequential and combined) circuits. The following connecting circuits of the ventilation systems (VS) were also considered: a dependent circuit and an independent one through a common heat exchanger with the HS load. In the heat points, water temperature regulators for the hot water supply and ventilation and flow regulators for the heating system, as well as for the inlet as a whole, are possible. According to the accepted decomposition, the model of the heat point is an integral part of the overall heat-hydraulic model of the heat-supplying system having intermediate control stages (CHP and IHP), which makes it possible to consider the operating modes of heat networks of different levels connected with each other through CHP, as well as connected through IHP of consumers with various connecting circuits of local heat-consumption systems: heating, ventilation and hot water supply. The model is implemented in the Angara data

  16. Bayesian Modeling for Identification and Estimation of the Learning Effects of Pointing Tasks

    NASA Astrophysics Data System (ADS)

    Kyo, Koki

    Recently, in the field of human-computer interaction, a model containing the systematic factor and human factor has been proposed to evaluate the performance of the input devices of a computer. This is called the SH-model. In this paper, in order to extend the range of application of the SH-model, we propose some new models based on the Box-Cox transformation and apply a Bayesian modeling method for identification and estimation of the learning effects of pointing tasks. We consider the parameters describing the learning effect as random variables and introduce smoothness priors for them. Illustrative results show that the newly-proposed models work well.

  17. Three-particle N π π state contribution to the nucleon two-point function in lattice QCD

    NASA Astrophysics Data System (ADS)

    Bär, Oliver

    2018-05-01

    The three-particle N π π state contribution to the QCD two-point function of standard nucleon interpolating fields is computed to leading order in chiral perturbation theory. Using the experimental values for two low-energy coefficients, the impact of this contribution on lattice QCD calculations of the nucleon mass is estimated. The impact is found to be at the per mille level at most and negligible in practice.

  18. Modelling of thermal field and point defect dynamics during silicon single crystal growth using CZ technique

    NASA Astrophysics Data System (ADS)

    Sabanskis, A.; Virbulis, J.

    2018-05-01

    Mathematical modelling is employed to numerically analyse the dynamics of Czochralski (CZ) silicon single crystal growth. The model is axisymmetric; its thermal part describes heat transfer by conduction and thermal radiation and allows prediction of the time-dependent shape of the crystal-melt interface. Besides the thermal field, the point defect dynamics is modelled using the finite element method. The considered process consists of cone-growth and cylindrical phases, including a short period of reduced crystal pull rate and a power jump to avoid large diameter changes. The influence of the thermal stresses on the point defects is also investigated.

  19. Nonextensive models for earthquakes.

    PubMed

    Silva, R; França, G S; Vilar, C S; Alcaniz, J S

    2006-02-01

    We have revisited the fragment-asperity interaction model recently introduced by Sotolongo-Costa and Posadas [Phys. Rev. Lett. 92, 048501 (2004)] by considering a different definition for mean values in the context of Tsallis nonextensive statistics and introducing a scale between the earthquake energy and the size of the fragments, ε ∝ r³. The energy-distribution function (EDF) deduced in our approach is considerably different from the one obtained in the above reference. We have also tested the viability of this EDF with data from two different catalogs (in three different areas), namely, the NEIC and the Seismic Bulletin of the Revista Brasileira de Geofísica. Although both approaches provide very similar values for the nonextensive parameter q, other physical quantities, e.g., energy density, differ considerably by several orders of magnitude.

  20. Study of λ parameters and crossover phenomena in SU(N) x SU(N) sigma models in two dimensions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shigemitsu, J; Kogut, J B

    1981-01-01

    The spin system analogues of recent studies of the string tension and λ parameters of SU(N) gauge theories in 4 dimensions are carried out for the SU(N) x SU(N) and O(N) models in 2 dimensions. The relations between the λ parameters of both the Euclidean and Hamiltonian formulations of the lattice models and the λ parameter of the continuum models are obtained. The one-loop finite renormalization of the speed of light in the lattice Hamiltonian formulations of the O(N) and SU(N) x SU(N) models is calculated. Strong coupling calculations of the mass gaps of these spin models are done for all N, and the constants of proportionality between the gap and the λ parameter of the continuum models are obtained. These results are contrasted with similar calculations for the SU(N) gauge models in 3+1 dimensions. Identifying suitable coupling constants for discussing the N → ∞ limits, the numerical results suggest that the crossover from weak to strong coupling in the lattice O(N) models becomes less abrupt as N increases, while the crossover for the SU(N) x SU(N) models becomes more abrupt. The crossover in SU(N) gauge theories also becomes more abrupt with increasing N, however at an even greater rate than in the SU(N) x SU(N) spin models.

  1. An application of change-point recursive models to the relationship between litter size and number of stillborns in pigs.

    PubMed

    Ibáñez-Escriche, N; López de Maturana, E; Noguera, J L; Varona, L

    2010-11-01

    We developed and implemented change-point recursive models and compared them with a linear recursive model and a standard mixed model (SMM), in the scope of the relationship between litter size (LS) and number of stillborns (NSB) in pigs. The proposed approach allows us to estimate the point of change in multiple-segment modeling of a nonlinear relationship between phenotypes. We applied the procedure to a data set provided by a commercial Large White selection nucleus. The data file consisted of LS and NSB records of 4,462 parities. The results of the analysis clearly identified the location of the change points between different structural regression coefficients. The magnitude of these coefficients increased with LS, indicating an increasing incidence of LS on the NSB ratio. However, posterior distributions of correlations were similar across subpopulations (defined by the change points on LS), except for those between residuals. The heritability estimates of NSB did not present differences between recursive models. Nevertheless, these heritabilities were greater than those obtained for SMM (0.05) with a posterior probability of 85%. These results suggest a nonlinear relationship between LS and NSB, which supports the adequacy of a change-point recursive model for its analysis. Furthermore, the results from model comparisons support the use of recursive models. However, the adequacy of the different recursive models depended on the criteria used: the linear recursive model was preferred on account of its smallest deviance value, whereas nonlinear recursive models provided a better fit and predictive ability based on the cross-validation approach.
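
    The change-point idea in this record can be illustrated with a much simpler frequentist sketch: a continuous piecewise-linear regression of NSB on LS in which a single break point is chosen by grid search over candidate values. The Bayesian recursive machinery of the paper is not reproduced, and the data below are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic litter size (LS) and number of stillborns (NSB) with a change at LS = 12
    ls = rng.integers(6, 20, size=500)
    nsb = np.where(ls <= 12, 0.05 * ls, 0.05 * 12 + 0.35 * (ls - 12)) + rng.normal(0, 0.4, 500)

    def fit_piecewise(ls, nsb, tau):
        """Continuous piecewise-linear fit with a single break point at tau."""
        X = np.column_stack([np.ones_like(ls), ls, np.clip(ls - tau, 0, None)])
        beta = np.linalg.lstsq(X, nsb, rcond=None)[0]
        rss = np.sum((nsb - X @ beta) ** 2)
        return beta, rss

    # Grid search over candidate change points
    candidates = np.arange(8, 18)
    rss_values = [fit_piecewise(ls, nsb, t)[1] for t in candidates]
    tau_hat = candidates[int(np.argmin(rss_values))]
    beta_hat, _ = fit_piecewise(ls, nsb, tau_hat)
    print(f"estimated change point: LS = {tau_hat}, slopes: "
          f"{beta_hat[1]:.3f} below, {beta_hat[1] + beta_hat[2]:.3f} above")
    ```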

  2. CADASTER QSPR Models for Predictions of Melting and Boiling Points of Perfluorinated Chemicals.

    PubMed

    Bhhatarai, Barun; Teetz, Wolfram; Liu, Tao; Öberg, Tomas; Jeliazkova, Nina; Kochev, Nikolay; Pukalov, Ognyan; Tetko, Igor V; Kovarich, Simona; Papa, Ester; Gramatica, Paola

    2011-03-14

    Quantitative structure property relationship (QSPR) studies for predicting the melting point (MP) and boiling point (BP) of per- and polyfluorinated chemicals (PFCs) are presented. The training and prediction chemicals used for developing and validating the models were selected from the Syracuse PhysProp database and the literature. The available experimental data sets were split in two different ways: a) random selection on response value, and b) structural similarity verified by self-organizing map (SOM), in order to propose reliable predictive models, developed only on the training sets and externally verified on the prediction sets. Individual models based on linear and non-linear approaches, developed by different CADASTER partners using 0D-2D Dragon descriptors, E-state descriptors and fragment-based descriptors, as well as a consensus model and their predictions, are presented. In addition, the predictive performance of the developed models was verified on a blind external validation set (EV-set) prepared using the PERFORCE database, containing 15 MP and 25 BP data points, respectively. This database contains only long-chain perfluoro-alkylated chemicals, particularly monitored by regulatory agencies such as US-EPA and EU-REACH. QSPR models with internal and external validation on two different external prediction/validation sets, together with a study of the applicability domain highlighting the robustness and high accuracy of the models, are discussed. Finally, MPs for an additional 303 PFCs and BPs for 271 PFCs, for which experimental measurements are unknown, were predicted. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
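
    A bare-bones illustration of the QSPR workflow in this record (descriptor matrix, training/prediction split, externally validated regression) is sketched below with scikit-learn. The descriptor matrix and property values are random placeholders rather than the CADASTER data, and the real study used Dragon/E-state/fragment descriptors and consensus modelling rather than a single ridge regression.

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import Ridge
    from sklearn.metrics import r2_score, mean_absolute_error

    rng = np.random.default_rng(42)

    # Placeholder descriptor matrix and melting points for 300 hypothetical PFCs
    X = rng.normal(size=(300, 20))
    true_coef = rng.normal(size=20)
    mp = X @ true_coef + rng.normal(scale=5.0, size=300)  # synthetic melting points [deg C]

    # a) random split into a training set and an external prediction set
    X_train, X_pred, y_train, y_pred_true = train_test_split(X, mp, test_size=0.3,
                                                             random_state=0)

    model = Ridge(alpha=1.0).fit(X_train, y_train)
    y_hat = model.predict(X_pred)

    print(f"external R2 = {r2_score(y_pred_true, y_hat):.3f}, "
          f"MAE = {mean_absolute_error(y_pred_true, y_hat):.2f} deg C")
    ```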

  3. Improved equivalent magnetic network modeling for analyzing working points of PMs in interior permanent magnet machine

    NASA Astrophysics Data System (ADS)

    Guo, Liyan; Xia, Changliang; Wang, Huimin; Wang, Zhiqiang; Shi, Tingna

    2018-05-01

    As is well known, the armature current will be ahead of the back electromotive force (back-EMF) under the load condition of an interior permanent magnet (PM) machine. This advanced armature current produces a demagnetizing field, which may easily cause irreversible demagnetization of the PMs. To estimate the working points of the PMs more accurately and take demagnetization into consideration in the early design stage of a machine, an improved equivalent magnetic network model is established in this paper. Each PM under each magnetic pole is segmented, and the networks in the rotor pole shoe are refined, which makes a more precise model of the flux path in the rotor pole shoe possible. The working point of each PM under each magnetic pole can be calculated accurately by the established improved equivalent magnetic network model. Meanwhile, the calculated results are compared with those calculated by FEM. The effects of the d-axis and q-axis components of the armature current, the air-gap length and the flux barrier size on the working points of the PMs are analyzed by the improved equivalent magnetic network model.

  4. Development of discrete choice model considering internal reference points and their effects in travel mode choice context

    NASA Astrophysics Data System (ADS)

    Sarif; Kurauchi, Shinya; Yoshii, Toshio

    2017-06-01

    In conventional travel behavior models such as logit and probit, decision makers are assumed to conduct absolute evaluations of the attributes of the choice alternatives. On the other hand, many researchers in cognitive psychology and marketing science have suggested that the perceptions of attributes are characterized by benchmarks called “reference points” and that relative evaluations based on them are often employed in various choice situations. Therefore, this study developed a travel behavior model based on the mental accounting theory in which internal reference points are explicitly considered. A questionnaire survey about shopping trips to the CBD in Matsuyama city was conducted, and the roles of reference points in travel mode choice contexts were investigated. The results showed that the goodness-of-fit of the developed model was higher than that of the conventional model, indicating that internal reference points may play a major role in the choice of travel mode. It was also shown that respondents seem to utilize various reference points: some tend to adopt the lowest fuel price they have experienced, while others employ the fare price they perceive for the travel cost.

  5. Performance Analysis of Several GPS/Galileo Precise Point Positioning Models

    PubMed Central

    Afifi, Akram; El-Rabbany, Ahmed

    2015-01-01

    This paper examines the performance of several precise point positioning (PPP) models, which combine dual-frequency GPS/Galileo observations in the un-differenced and between-satellite single-difference (BSSD) modes. These include the traditional un-differenced model, the decoupled clock model, the semi-decoupled clock model, and the between-satellite single-difference model. We take advantage of the IGS-MGEX network products to correct for the satellite differential code biases and the orbital and satellite clock errors. Natural Resources Canada’s GPSPace PPP software is modified to handle the various GPS/Galileo PPP models. A total of six data sets of GPS and Galileo observations at six IGS stations are processed to examine the performance of the various PPP models. It is shown that the traditional un-differenced GPS/Galileo PPP model, the GPS decoupled clock model, and the semi-decoupled clock GPS/Galileo PPP model improve the convergence time by about 25% in comparison with the un-differenced GPS-only model. In addition, the semi-decoupled GPS/Galileo PPP model improves the solution precision by about 25% compared to the traditional un-differenced GPS/Galileo PPP model. Moreover, the BSSD GPS/Galileo PPP model improves the solution convergence time by about 50%, in comparison with the un-differenced GPS PPP model, regardless of the type of BSSD combination used. As well, the BSSD model improves the precision of the estimated parameters by about 50% and 25% when the loose and the tight combinations are used, respectively, in comparison with the un-differenced GPS-only model. Comparable results are obtained through the tight combination when either a GPS or a Galileo satellite is selected as a reference. PMID:26102495

  6. Performance Analysis of Several GPS/Galileo Precise Point Positioning Models.

    PubMed

    Afifi, Akram; El-Rabbany, Ahmed

    2015-06-19

    This paper examines the performance of several precise point positioning (PPP) models, which combine dual-frequency GPS/Galileo observations in the un-differenced and between-satellite single-difference (BSSD) modes. These include the traditional un-differenced model, the decoupled clock model, the semi-decoupled clock model, and the between-satellite single-difference model. We take advantage of the IGS-MGEX network products to correct for the satellite differential code biases and the orbital and satellite clock errors. Natural Resources Canada's GPSPace PPP software is modified to handle the various GPS/Galileo PPP models. A total of six data sets of GPS and Galileo observations at six IGS stations are processed to examine the performance of the various PPP models. It is shown that the traditional un-differenced GPS/Galileo PPP model, the GPS decoupled clock model, and the semi-decoupled clock GPS/Galileo PPP model improve the convergence time by about 25% in comparison with the un-differenced GPS-only model. In addition, the semi-decoupled GPS/Galileo PPP model improves the solution precision by about 25% compared to the traditional un-differenced GPS/Galileo PPP model. Moreover, the BSSD GPS/Galileo PPP model improves the solution convergence time by about 50%, in comparison with the un-differenced GPS PPP model, regardless of the type of BSSD combination used. As well, the BSSD model improves the precision of the estimated parameters by about 50% and 25% when the loose and the tight combinations are used, respectively, in comparison with the un-differenced GPS-only model. Comparable results are obtained through the tight combination when either a GPS or a Galileo satellite is selected as a reference.

  7. PIV study of the wake of a model wind turbine transitioning between operating set points

    NASA Astrophysics Data System (ADS)

    Houck, Dan; Cowen, Edwin (Todd)

    2016-11-01

    Wind turbines are ideally operated at their most efficient tip speed ratio for a given wind speed. There is increasing interest, however, in operating turbines at other set points to increase the overall power production of a wind farm. Specifically, Goit and Meyers (2015) used LES to examine a wind farm optimized by unsteady operation of its turbines. In this study, the wake of a model wind turbine is measured in a water channel using PIV. We measure the wake response to a change in operational set point of the model turbine, e.g., from low to high tip speed ratio or vice versa, to examine how it might influence a downwind turbine. A modified torque transducer after Kang et al. (2010) is used to calibrate in situ voltage measurements of the model turbine's generator operating across a resistance to the torque on the generator. Changes in operational set point are made by changing the resistance or the flow speed, which change the rotation rate measured by an encoder. Single camera PIV on vertical planes reveals statistics of the wake at various distances downstream as the turbine transitions from one set point to another. From these measurements, we infer how the unsteady operation of a turbine may affect the performance of a downwind turbine as its incoming flow. National Science Foundation and the Atkinson Center for a Sustainable Future.

  8. Modelling the association of dengue fever cases with temperature and relative humidity in Jeddah, Saudi Arabia-A generalised linear model with break-point analysis.

    PubMed

    Alkhaldy, Ibrahim

    2017-04-01

    The aim of this study was to examine the role of environmental factors in the temporal distribution of dengue fever in Jeddah, Saudi Arabia. The relationship between dengue fever cases and climatic factors such as relative humidity and temperature was investigated during 2006-2009 to determine whether there is any relationship between dengue fever cases and climatic parameters in Jeddah City, Saudi Arabia. A generalised linear model (GLM) with a break-point was used to determine how different levels of temperature and relative humidity affected the distribution of the number of cases of dengue fever. Break-point analysis was performed to model the effect before and after a break-point (change point) in the explanatory parameters under various scenarios. The Akaike information criterion (AIC) and cross validation (CV) were used to assess the performance of the models. The results showed that maximum temperature and mean relative humidity are most probably the better predictors of the number of dengue fever cases in Jeddah. In this study three scenarios were modelled: no time lag, 1-week lag and 2-week lag. Among these scenarios, the 1-week lag model using mean relative humidity as an explanatory variable showed better performance. This study showed a clear relationship between the meteorological variables and the number of dengue fever cases in Jeddah. The results also demonstrated that meteorological variables can be successfully used to estimate the number of dengue fever cases for a given period of time. Break-point analysis provides further insight into the association between meteorological parameters and dengue fever cases by dividing the meteorological parameters at certain break-points. Copyright © 2016 Elsevier B.V. All rights reserved.
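
    The break-point GLM described here can be sketched, under assumptions, as a Poisson regression of weekly case counts on lagged mean relative humidity with a hinge term at a candidate break point, the break point being selected by AIC. The data below are synthetic, and the Poisson family with log link is an assumption for illustration, not necessarily the specification used in the paper.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)

    # Synthetic weekly mean relative humidity (%) and dengue case counts
    humidity = rng.uniform(40, 90, size=200)
    rate = np.exp(0.5 + np.where(humidity > 65, 0.04 * (humidity - 65), 0.0))
    cases = rng.poisson(rate)

    # 1-week lag between exposure and case counts
    h_lag = humidity[:-1]
    y = cases[1:]

    def fit_breakpoint_glm(bp):
        # Hinge (broken-stick) term: zero below the break point, linear above it
        X = sm.add_constant(np.column_stack([h_lag, np.clip(h_lag - bp, 0, None)]))
        return sm.GLM(y, X, family=sm.families.Poisson()).fit()

    # Choose the break point that minimises AIC
    candidates = np.arange(50, 81, 1.0)
    fits = {bp: fit_breakpoint_glm(bp) for bp in candidates}
    best_bp = min(fits, key=lambda bp: fits[bp].aic)
    print(f"selected break point: {best_bp:.0f}% RH, AIC = {fits[best_bp].aic:.1f}")
    ```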

  9. Azimuthal Dependence of the Ground Motion Variability from Scenario Modeling of the 2014 Mw6.0 South Napa, California, Earthquake Using an Advanced Kinematic Source Model

    NASA Astrophysics Data System (ADS)

    Gallovič, F.

    2017-09-01

    Strong ground motion simulations require a physically plausible earthquake source model. Here, I present the application of such a kinematic model introduced originally by Ruiz et al. (Geophys J Int 186:226-244, 2011). The model is constructed to inherently provide synthetics with the desired omega-squared spectral decay in the full frequency range. The source is composed of randomly distributed overlapping subsources with a fractal number-size distribution. The position of the subsources can be constrained by prior knowledge of major asperities (stemming, e.g., from slip inversions), or can be completely random. From the earthquake physics point of view, the model includes a positive correlation between slip and rise time, as found in dynamic source simulations. Rupture velocity and rise time follow the local S-wave velocity profile, so that the rupture slows down and rise times increase close to the surface, avoiding unrealistically strong ground motions. Rupture velocity can also have random variations, which result in an irregular rupture front while satisfying the causality principle. This advanced kinematic broadband source model is freely available and can be easily incorporated into any numerical wave propagation code, as the source is described by spatially distributed slip rate functions, not requiring any stochastic Green's functions. The source model has been previously validated against data observed for the very shallow unilateral 2014 Mw6 South Napa, California, earthquake; the model reproduces the observed data well, including the near-fault directivity (Seism Res Lett 87:2-14, 2016). The performance of the source model is shown here on scenario simulations for the same event. In particular, synthetics are compared with existing ground motion prediction equations (GMPEs), emphasizing the azimuthal dependence of the between-event ground motion variability. I propose a simple model reproducing the azimuthal variations of the between-event ground motion

  10. Self-consistent large-N analytical solutions of inhomogeneous condensates in quantum ℂP^{N-1} model

    NASA Astrophysics Data System (ADS)

    Nitta, Muneto; Yoshii, Ryosuke

    2017-12-01

    We give, for the first time, self-consistent large-N analytical solutions of inhomogeneous condensates in the quantum ℂP^{N-1} model in the large-N limit. We find a map from a set of gap equations of the ℂP^{N-1} model to those of the Gross-Neveu (GN) model (or the gap equation and the Bogoliubov-de Gennes equation), which enables us to find the self-consistent solutions. We find that the Higgs field of the ℂP^{N-1} model is given as a zero mode of solutions of the GN model, and consequently only topologically non-trivial solutions of the GN model yield nontrivial solutions of the ℂP^{N-1} model. A stable single soliton is constructed from an anti-kink of the GN model and has a broken (Higgs) phase inside its core, in which ℂP^{N-1} modes are localized, with a symmetric (confining) phase outside. We further find a stable periodic soliton lattice constructed from a real kink crystal in the GN model, while the Ablowitz-Kaup-Newell-Segur hierarchy yields multiple solitons at arbitrary separations.

  11. Two-point correlation functions in inhomogeneous and anisotropic cosmologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marcori, Oton H.; Pereira, Thiago S., E-mail: otonhm@hotmail.com, E-mail: tspereira@uel.br

    Two-point correlation functions are ubiquitous tools of modern cosmology, appearing in disparate topics ranging from cosmological inflation to late-time astrophysics. When the background spacetime is maximally symmetric, invariance arguments can be used to fix the functional dependence of this function as the invariant distance between any two points. In this paper we introduce a novel formalism which fixes this functional dependence directly from the isometries of the background metric, thus allowing one to quickly assess the overall features of Gaussian correlators without resorting to the full machinery of perturbation theory. As an application we construct the CMB temperature correlation function in one inhomogeneous (namely, an off-center LTB model) and two spatially flat and anisotropic (Bianchi) universes, and derive their covariance matrices in the limit of almost Friedmannian symmetry. We show how the method can be extended to arbitrary N-point correlation functions and illustrate its use by constructing three-point correlation functions in some simple geometries.

  12. Spatially explicit dynamic N-mixture models

    USGS Publications Warehouse

    Zhao, Qing; Royle, Andy; Boomer, G. Scott

    2017-01-01

    Knowledge of demographic parameters such as survival, reproduction, emigration, and immigration is essential to understand metapopulation dynamics. Traditionally the estimation of these demographic parameters requires intensive data from marked animals. The development of dynamic N-mixture models makes it possible to estimate demographic parameters from count data of unmarked animals, but the original dynamic N-mixture model does not distinguish emigration and immigration from survival and reproduction, limiting its ability to explain important metapopulation processes such as movement among local populations. In this study we developed a spatially explicit dynamic N-mixture model that estimates survival, reproduction, emigration, local population size, and detection probability from count data under the assumption that movement only occurs among adjacent habitat patches. Simulation studies showed that the inference of our model depends on detection probability, local population size, and the implementation of robust sampling design. Our model provides reliable estimates of survival, reproduction, and emigration when detection probability is high, regardless of local population size or the type of sampling design. When detection probability is low, however, our model only provides reliable estimates of survival, reproduction, and emigration when local population size is moderate to high and robust sampling design is used. A sensitivity analysis showed that our model is robust against the violation of the assumption that movement only occurs among adjacent habitat patches, suggesting wide applications of this model. Our model can be used to improve our understanding of metapopulation dynamics based on count data that are relatively easy to collect in many systems.
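
    To make the data structure behind a dynamic N-mixture model concrete, the sketch below simulates repeated counts of unmarked animals at several sites: a latent abundance that evolves through survival, reproduction and neighbor-to-neighbor movement, observed through binomial detection. It is a data simulator only, with arbitrary parameter values, not the estimation model developed in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    n_sites, n_years, n_visits = 10, 8, 3          # sites on a line (adjacent = neighbors)
    phi, rho, eps, p = 0.6, 0.4, 0.15, 0.5          # survival, reproduction, emigration, detection

    N = rng.poisson(20, size=n_sites)               # initial latent abundance
    counts = np.zeros((n_sites, n_years, n_visits), dtype=int)

    for t in range(n_years):
        # Repeated within-year surveys: binomial detection of the latent abundance
        counts[:, t, :] = rng.binomial(N[:, None], p, size=(n_sites, n_visits))
        # Demographic transition to the next year
        survivors = rng.binomial(N, phi)
        recruits = rng.poisson(rho * N)
        emigrants = rng.binomial(survivors, eps)
        stayers = survivors - emigrants
        # Emigrants move only to adjacent sites (split left/right, reflected at the ends)
        immigrants = np.zeros(n_sites, dtype=int)
        for s in range(n_sites):
            left = rng.binomial(emigrants[s], 0.5)
            immigrants[max(s - 1, 0)] += left
            immigrants[min(s + 1, n_sites - 1)] += emigrants[s] - left
        N = stayers + recruits + immigrants

    print("observed counts in year 0:\n", counts[:, 0, :])
    print("latent abundance after final year:", N)
    ```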

  13. Limited Sampling Strategy for Accurate Prediction of Pharmacokinetics of Saroglitazar: A 3-point Linear Regression Model Development and Successful Prediction of Human Exposure.

    PubMed

    Joshi, Shuchi N; Srinivas, Nuggehally R; Parmar, Deven V

    2018-03-01

    Our aim was to develop and validate the extrapolative performance of a regression model using a limited sampling strategy for accurate estimation of the area under the plasma concentration versus time curve for saroglitazar. Healthy subject pharmacokinetic data from a well-powered food-effect study (fasted vs fed treatments; n = 50) was used in this work. The first 25 subjects' serial plasma concentration data up to 72 hours and corresponding AUC(0-t) (ie, 72 hours) from the fasting group comprised a training dataset to develop the limited sampling model. The internal datasets for prediction included the remaining 25 subjects from the fasting group and all 50 subjects from the fed condition of the same study. The external datasets included pharmacokinetic data for saroglitazar from previous single-dose clinical studies. Limited sampling models were composed of 1-, 2-, and 3-concentration-time points' correlation with AUC(0-t) of saroglitazar. Only models with regression coefficients (R²) >0.90 were screened for further evaluation. The best R² model was validated for its utility based on mean prediction error, mean absolute prediction error, and root mean square error. Both correlations between predicted and observed AUC(0-t) of saroglitazar and verification of precision and bias using a Bland-Altman plot were carried out. None of the evaluated 1- and 2-concentration-time points models achieved R² > 0.90. Among the various 3-concentration-time points models, only 4 equations passed the predefined criterion of R² > 0.90. Limited sampling models with time points 0.5, 2, and 8 hours (R² = 0.9323) and 0.75, 2, and 8 hours (R² = 0.9375) were validated. Mean prediction error, mean absolute prediction error, and root mean square error were <30% (predefined criterion) and correlation (r) was at least 0.7950 for the consolidated internal and external datasets of 102 healthy subjects for the AUC(0-t) prediction of saroglitazar. The same models, when applied to the AUC(0-t)
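
    As an illustrative sketch of the limited-sampling idea (not the published model or its coefficients), the code below fits a 3-time-point linear regression of AUC on plasma concentrations and reports the error metrics named in the abstract; the concentration-time profiles and AUC values are simulated from a hypothetical one-compartment model.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(7)
    t = np.array([0.5, 2.0, 8.0])  # sampling times [h], as in one of the validated models

    # Simulate one-compartment concentration-time profiles for 50 hypothetical subjects
    ka, ke = 1.2, 0.15
    dose_over_v = rng.lognormal(mean=np.log(10), sigma=0.3, size=50)
    conc = dose_over_v[:, None] * ka / (ka - ke) * (np.exp(-ke * t) - np.exp(-ka * t))
    conc *= rng.lognormal(sigma=0.1, size=conc.shape)           # assay/biological noise
    auc = dose_over_v * ka / (ka - ke) * (1 / ke - 1 / ka)      # "true" AUC as a stand-in

    # Train on 25 subjects, predict the remaining 25
    model = LinearRegression().fit(conc[:25], auc[:25])
    pred = model.predict(conc[25:])

    pe = (pred - auc[25:]) / auc[25:] * 100
    print(f"R2 = {r2_score(auc[25:], pred):.3f}, "
          f"MPE = {pe.mean():.1f}%, MAPE = {np.abs(pe).mean():.1f}%, "
          f"RMSE = {np.sqrt(np.mean((pred - auc[25:])**2)):.2f}")
    ```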

  14. Pilot points method for conditioning multiple-point statistical facies simulation on flow data

    NASA Astrophysics Data System (ADS)

    Ma, Wei; Jafarpour, Behnam

    2018-05-01

    We propose a new pilot points method for conditioning discrete multiple-point statistical (MPS) facies simulation on dynamic flow data. While conditioning MPS simulation on static hard data is straightforward, calibration against nonlinear flow data is nontrivial. The proposed method generates conditional models from a conceptual model of geologic connectivity, known as a training image (TI), by strategically placing and estimating pilot points. To place pilot points, a score map is generated based on three sources of information: (i) the uncertainty in facies distribution, (ii) the model response sensitivity information, and (iii) the observed flow data. Once the pilot points are placed, the facies values at these points are inferred from production data and then are used, along with available hard data at well locations, to simulate a new set of conditional facies realizations. While facies estimation at the pilot points can be performed using different inversion algorithms, in this study the ensemble smoother (ES) is adopted to update permeability maps from production data, which are then used to statistically infer facies types at the pilot point locations. The developed method combines the information in the flow data and the TI by using the former to infer facies values at selected locations away from the wells and the latter to ensure consistent facies structure and connectivity away from measurement locations. Several numerical experiments are used to evaluate the performance of the developed method and to discuss its important properties.

  15. Effect of Finite Particle Size on Convergence of Point Particle Models in Euler-Lagrange Multiphase Dispersed Flow

    NASA Astrophysics Data System (ADS)

    Nili, Samaun; Park, Chanyoung; Haftka, Raphael T.; Kim, Nam H.; Balachandar, S.

    2017-11-01

    Point particle methods are extensively used in simulating Euler-Lagrange multiphase dispersed flow. When particles are much smaller than the Eulerian grid the point particle model is on firm theoretical ground. However, this standard approach of evaluating the gas-particle coupling at the particle center fails to converge as the Eulerian grid is reduced below particle size. We present an approach to model the interaction between particles and fluid for finite size particles that permits convergence. We use the generalized Faxen form to compute the force on a particle and compare the results against traditional point particle method. We apportion the different force components on the particle to fluid cells based on the fraction of particle volume or surface in the cell. The application is to a one-dimensional model of shock propagation through a particle-laden field at moderate volume fraction, where the convergence is achieved for a well-formulated force model and back coupling for finite size particles. Comparison with 3D direct fully resolved numerical simulations will be used to check if the approach also improves accuracy compared to the point particle model. Work supported by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program, as a Cooperative Agreement under the Predictive Science Academic Alliance Program, under Contract No. DE-NA0002378.

  16. Modeling of the influence of humidity on H1N1 flu in China

    NASA Astrophysics Data System (ADS)

    PEI, Y.; Tian, H.; Xu, B.

    2015-12-01

    In 2009, a severe influenza pandemic caused by the H1N1 virus spread across the world. The influenza first broke out in Mexico in March and in the United States in April, 2009. The World Health Organization (WHO) declared the H1N1 influenza a pandemic, raising its alert to phase six. By the end of 2011, 181302 H1N1 cases had been reported in mainland China. To improve our understanding of the impact of environmental factors on the disease transmission, we constructed an SIR (Susceptible - Infectious - Recovered) model incorporating environmental factors. It was found that absolute humidity was a dominant environmental factor. The study interpolated humidity data monitored at 340 weather stations in mainland China from 1951 to 2011. First, the break point of the trend in absolute humidity was detected by the BFAST (Break For Additive Season and Trend) method. Then, the SIR model with and without absolute humidity incorporated was built and tested. Finally, the results for the two scenarios were compared. Results indicate that lower absolute humidity may promote the transmission of H1N1. The calculated basic reproductive number ranges from 1.65 to 3.66 with a changing absolute humidity. This is consistent with a former study reporting a basic reproductive number ranging from 2.03 to 4.18. The average recovery duration was estimated to be 5.7 days. The average duration of immunity gained from the influenza is 399.02 days. A risk map is also produced to illustrate the model results.

  17. Implicit Shape Models for Object Detection in 3d Point Clouds

    NASA Astrophysics Data System (ADS)

    Velizhev, A.; Shapovalov, R.; Schindler, K.

    2012-07-01

    We present a method for automatic object localization and recognition in 3D point clouds representing outdoor urban scenes. The method is based on the implicit shape models (ISM) framework, which recognizes objects by voting for their center locations. It requires only a few training examples per class, which is an important property for practical use. We also introduce and evaluate an improved version of the spin image descriptor, more robust to point density variation and uncertainty in normal direction estimation. Our experiments reveal a significant impact of these modifications on the recognition performance. We compare our results against the state-of-the-art method and obtain significant improvement in both precision and recall on the Ohio dataset, consisting of combined aerial and terrestrial LiDAR scans of 150,000 m² of urban area in total.
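
    The center-voting idea behind implicit shape models can be captured in a few lines: each matched local feature casts a vote for the object center using an offset learned from training examples, votes are accumulated on a grid over the voting space, and the maxima of the accumulator are the detections. The sketch below is a toy 2D version with synthetic votes, not the authors' 3D pipeline or their spin-image descriptors; the grid size, vote counts and threshold are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    # Two hypothetical object instances whose centers are to be recovered
    true_centers = np.array([[10.4, 5.3], [30.2, 22.6]])

    # Each detected local feature votes for a center = feature position + learned offset;
    # here we simply scatter noisy votes around the true centers plus background clutter.
    votes = np.vstack([c + rng.normal(scale=0.5, size=(80, 2)) for c in true_centers] +
                      [rng.uniform(0, 40, size=(80, 2))])

    # Accumulate votes on a coarse grid (the voting space) and keep cells above a threshold
    cell = 1.0
    edges = np.arange(0, 41, cell)
    H, xedges, yedges = np.histogram2d(votes[:, 0], votes[:, 1], bins=[edges, edges])
    threshold = 20  # minimum number of votes to accept a detection (arbitrary)
    ix, iy = np.where(H >= threshold)
    detections = np.column_stack([(xedges[ix] + xedges[ix + 1]) / 2,
                                  (yedges[iy] + yedges[iy + 1]) / 2])
    print("detected centers:\n", np.round(detections, 1))
    ```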

  18. Z_n Baxter model: Critical behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tracy, C.A.

    1986-07-01

    The Z_n Baxter model is an exactly solvable lattice model in the special case of the Belavin parametrization. We calculate the critical behavior of Prob_n(q = w^k) using techniques developed in number theory in the study of the congruence properties of p(m), the number of unrestricted partitions of an integer m.

  19. Measuring 3D point configurations in pictorial space

    PubMed Central

    Wagemans, Johan; van Doorn, Andrea J; Koenderink, Jan J

    2011-01-01

    We propose a novel method to probe the depth structure of the pictorial space evoked by paintings. The method involves an exocentric pointing paradigm that allows one to find the slope of the geodesic connection between any pair of points in pictorial space. Since the locations of the points in the picture plane are known, this immediately yields the depth difference between the points. A set of depth differences between all pairs of points from an N-point (N > 2) configuration then yields the configuration in depth up to an arbitrary depth offset. Since an N-point configuration implies N(N−1) (ordered) pairs, the number of observations typically far exceeds the number of inferred depths. This yields a powerful check on the geometrical consistency of the results. We report that the remaining inconsistencies are fully accounted for by the spread encountered in repeated observations. This implies that the concept of ‘pictorial space’ indeed has an empirical significance. The method is analyzed and empirically verified in considerable detail. We report large quantitative interobserver differences, though the results of all observers agree modulo a certain affine transformation that describes the basic cue ambiguities. This is expected on the basis of a formal analysis of monocular optical structure. The method will prove useful in a variety of potential applications. PMID:23145227
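
    The reconstruction step described above, recovering N depths (up to an arbitrary offset) from the N(N−1) pairwise depth differences, is an overdetermined linear problem. A minimal least-squares sketch with synthetic pointing data is given below; the first depth is pinned to zero to remove the arbitrary offset, and the noise level and configuration are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical "true" depths of a 6-point configuration (arbitrary units)
    true_depth = np.array([0.0, 1.3, -0.4, 2.1, 0.7, -1.0])
    n = len(true_depth)

    # Observed pairwise depth differences d_ij = z_j - z_i, plus observer noise
    pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
    d_obs = np.array([true_depth[j] - true_depth[i] for i, j in pairs])
    d_obs += rng.normal(scale=0.1, size=d_obs.size)

    # Design matrix: each row encodes one ordered pair; fix z_0 = 0 (offset is arbitrary)
    A = np.zeros((len(pairs), n))
    for row, (i, j) in enumerate(pairs):
        A[row, i] -= 1.0
        A[row, j] += 1.0
    z_hat, *_ = np.linalg.lstsq(A[:, 1:], d_obs, rcond=None)
    z_hat = np.concatenate([[0.0], z_hat])

    print("recovered depths:", np.round(z_hat, 2))
    print("residual RMS    :", np.round(np.sqrt(np.mean((A @ z_hat - d_obs) ** 2)), 3))
    ```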

  20. Scan-To Output Validation: Towards a Standardized Geometric Quality Assessment of Building Information Models Based on Point Clouds

    NASA Astrophysics Data System (ADS)

    Bonduel, M.; Bassier, M.; Vergauwen, M.; Pauwels, P.; Klein, R.

    2017-11-01

    The use of Building Information Modeling (BIM) for existing buildings based on point clouds is increasing. Standardized geometric quality assessment of the BIMs is needed to make them more reliable and thus reusable for future users. First, the available literature on the subject is studied. Next, an initial proposal for a standardized geometric quality assessment is presented. Finally, this method is tested and evaluated with a case study. The number of specifications on BIM relating to existing buildings is limited. The Levels of Accuracy (LOA) specification of the USIBD provides definitions and suggestions regarding geometric model accuracy, but lacks a standardized assessment method. A deviation analysis is found to be dependent on (1) the mathematical model used, (2) the density of the point clouds and (3) the order of comparison. Results of the analysis can be graphical and numerical. An analysis on the macro (building) and micro (BIM object) scale is necessary. On the macro scale, the complete model is compared to the original point cloud and vice versa to get an overview of the general model quality. The graphical results show occluded zones and non-modeled objects, respectively. Colored point clouds are derived from this analysis and integrated in the BIM. On the micro scale, the relevant surface parts are extracted per BIM object and compared to the complete point cloud. Occluded zones are extracted based on a maximum deviation. What remains is classified according to the LOA specification. The numerical results are integrated in the BIM with the use of object parameters.
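
    At the micro scale, the deviation analysis described here reduces to measuring point-to-surface distances between the point cloud and each BIM object and flagging deviations beyond a tolerance. A simplified sketch for a single planar BIM face is shown below; the plane parameters, noise level and tolerance band are illustrative assumptions and are not taken from the LOA specification.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Modeled BIM face: a plane through point p0 with unit normal n_hat (assumed known)
    p0 = np.array([0.0, 0.0, 0.0])
    n_hat = np.array([0.0, 0.0, 1.0])

    # Scan points of that face: mostly on-plane with noise, plus a locally deviating region
    pts = rng.uniform(-1, 1, size=(2000, 3))
    pts[:, 2] = rng.normal(scale=0.003, size=2000)            # 3 mm scanner noise
    pts[:100, 2] += 0.02                                      # 2 cm local deviation (e.g. bowing)

    # Signed point-to-plane deviations
    dev = (pts - p0) @ n_hat

    max_allowed = 0.015  # 15 mm tolerance band, an illustrative LOA-style threshold
    outliers = np.abs(dev) > max_allowed
    print(f"RMS deviation: {np.sqrt(np.mean(dev**2))*1000:.1f} mm, "
          f"points outside tolerance: {outliers.mean()*100:.1f}%")
    ```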

  1. SPS antenna pointing control

    NASA Technical Reports Server (NTRS)

    Hung, J. C.

    1980-01-01

    The pointing control of a microwave antenna of the Satellite Power System was investigated, emphasizing: (1) the SPS antenna pointing error sensing method; (2) a rigid body pointing control design; and (3) approaches for modeling the flexible body characteristics of the solar collector. Accuracy requirements for the antenna pointing control consist of a mechanical pointing control accuracy of three arc-minutes and an electronic phased array pointing accuracy of three arc-seconds. Results, based on the factors considered in the current analysis, show that the three arc-minute overall pointing control accuracy can be achieved in practice.

  2. Adding-point strategy for reduced-order hypersonic aerothermodynamics modeling based on fuzzy clustering

    NASA Astrophysics Data System (ADS)

    Chen, Xin; Liu, Li; Zhou, Sida; Yue, Zhenjiang

    2016-09-01

    Reduced order models (ROMs) based on snapshots from high-fidelity CFD simulations have received considerable attention recently due to their capability of capturing the features of complex geometries and flow configurations. To improve the efficiency and precision of ROMs, it is indispensable to add extra sampling points to the initial snapshots, since the number of sampling points needed to achieve an adequately accurate ROM is generally unknown a priori, while a large number of initial sampling points reduces the parsimony of the ROMs. A fuzzy-clustering-based adding-point strategy is proposed, in which the fuzzy clustering acts as an indicator of the regions where the precision of the ROM is relatively low. The proposed method is applied to construct ROMs for benchmark mathematical examples and a numerical example of hypersonic aerothermodynamics prediction for a typical control surface. The proposed method achieves a 34.5% improvement in efficiency over the estimated mean squared error prediction algorithm while showing the same level of prediction accuracy.

  3. Ocean Turbulence. Paper 3; Two-Point Closure Model Momentum, Heat and Salt Vertical Diffusivities in the Presence of Shear

    NASA Technical Reports Server (NTRS)

    Canuto, V. M.; Dubovikov, M. S.; Howard, A.; Cheng, Y.

    1999-01-01

    In papers 1 and 2 we presented the results of the most updated 1-point closure model for the turbulent vertical diffusivities of momentum, heat and salt, K(sub m,h,s). In this paper, we derive the analytic expressions for K(sub m,h,s) using a new 2-point closure model that has recently been developed and successfully tested against some 80 turbulence statistics for different flows. The new model has no free parameters. The expressions for K(sub m,h,s) are analytical functions of two stability parameters: the Turner number R(sub rho) (salinity gradient/temperature gradient) and the Richardson number R(sub i) (temperature gradient/shear). The turbulent kinetic energy K and its rate of dissipation may be taken as local or non-local (K-epsilon model). Contrary to all previous models, which describe turbulent mixing below the mixed layer (ML) by adopting three adjustable "background diffusivities" for momentum, heat and salt, we propose a model that avoids such adjustable diffusivities. We assume that below the ML, K(sub m,h,s) have the same functional dependence on R(sub i) and R(sub rho) derived from the turbulence model. However, in order to compute R(sub i) below the ML, we use data of vertical shear due to wave-breaking measured by Gargett et al. (1981). The procedure frees the model from adjustable background diffusivities and indeed we use the same model throughout the entire vertical extent of the ocean. Using the new K(sub m,h,s), we run an O-GCM and present a variety of results that we compare with Levitus and the KPP model. Since the traditional 1-point (used in papers 1 and 2) and the new 2-point closure models used here represent different modeling philosophies and procedures, testing them in an O-GCM is indispensable. The basic motivation is to show that the new 2-point closure model gives results that are overall superior to the 1-point closure in spite of the fact that the latter relies on several adjustable parameters while the new 2-point

  4. Photoluminescence as a tool for characterizing point defects in semiconductors

    NASA Astrophysics Data System (ADS)

    Reshchikov, Michael

    2012-02-01

    Photoluminescence is one of the most powerful tools used to study optically-active point defects in semiconductors, especially in wide-bandgap materials. Gallium nitride (GaN) and zinc oxide (ZnO) have attracted considerable attention in the last two decades due to their prospects in optoelectronics applications, including blue and ultraviolet light-emitting devices. However, in spite of many years of extensive studies and a great number of publications on photoluminescence from GaN and ZnO, only a few defect-related luminescence bands are reliably identified. Among them are the Zn-related blue band in GaN, the Cu-related green band and the Li-related orange band in ZnO. Numerous suggestions for the identification of other luminescence bands, such as the yellow band in GaN, or the green and yellow bands in ZnO, do not stand up under scrutiny. Under these conditions, it is important to classify the defect-related luminescence bands and find their unique characteristics. In this presentation, we will review the origin of the major luminescence bands in GaN and ZnO. Through simulations of the temperature and excitation intensity dependences of photoluminescence and by employing phenomenological models, we are able to obtain important characteristics of point defects such as carrier capture cross-sections, defect concentrations, and their charge states. These models are also used to find the absolute internal quantum efficiency of photoluminescence and obtain information about nonradiative defects. Results from photoluminescence measurements will be compared with results of first-principles calculations, as well as with experimental data obtained by other techniques such as positron annihilation spectroscopy, deep-level transient spectroscopy, and secondary ion mass spectrometry.
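
    One common phenomenological model of the kind alluded to here is Arrhenius-type thermal quenching of a defect-related PL band, I(T) = I(0) / (1 + C exp(-E_A / k_B T)), from which an activation energy can be extracted. The sketch below fits synthetic temperature-dependent intensities with SciPy; all parameter values and the noise model are invented, and this is only one of several possible quenching models.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    k_B = 8.617e-5  # Boltzmann constant [eV/K]

    def quenching(T, I0, C, Ea):
        """Arrhenius-type thermal quenching of a PL band intensity."""
        return I0 / (1.0 + C * np.exp(-Ea / (k_B * T)))

    # Synthetic temperature-dependent intensity of a defect band (arbitrary units)
    rng = np.random.default_rng(6)
    T = np.linspace(20, 320, 40)
    I_obs = quenching(T, I0=1.0, C=500.0, Ea=0.15) * (1 + rng.normal(scale=0.02, size=T.size))

    popt, _ = curve_fit(quenching, T, I_obs, p0=(1.0, 100.0, 0.1))
    print(f"fitted activation energy Ea = {popt[2]*1000:.0f} meV (true value 150 meV)")
    ```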

  5. General model for the pointing error analysis of Risley-prism system based on ray direction deviation in light refraction

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Yuan, Yan; Su, Lijuan; Huang, Fengzhen; Bai, Qing

    2016-09-01

    The Risley-prism-based light beam steering apparatus delivers superior pointing accuracy and is used in imaging LIDAR and imaging microscopes. A general model for pointing error analysis of Risley prisms is proposed in this paper, based on ray direction deviation in light refraction. This model captures incident beam deviation, assembly deflections, and prism rotational error. We first derive the transmission matrices of the model. Then, the independent and cumulative effects of the different errors are analyzed through this model. An accuracy study of the model shows that the predicted deviation of the pointing error for each error source is less than 4.1×10⁻⁵° when the error amplitude is 0.1°. Detailed analyses indicate that different error sources affect the pointing accuracy to varying degrees, and that the major error source is the incident beam deviation. Prism tilt has a relatively large effect on the pointing accuracy when the prism tilts in the principal section. The analyses of the cumulative effect of multiple errors show that the pointing error can be reduced by tuning the bearing tilts in the same direction. The cumulative effect of rotational error is relatively large when the difference between the two prism rotation angles equals 0 or π, and relatively small when the difference equals π/2. These results suggest that our analysis can help uncover the error distribution and aid in the measurement calibration of Risley-prism systems.
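
    The core ingredient of the pointing-error model above, ray direction deviation in refraction, can be written compactly in vector form. The sketch below implements the standard vector form of Snell's law and traces a ray through one tilted prism surface; the wedge angle and refractive index are illustrative, and this is not the paper's full two-prism transmission-matrix model.

    ```python
    import numpy as np

    def refract(d, n_surf, n1, n2):
        """Vector form of Snell's law: refract unit ray d at a surface with unit
        normal n_surf (pointing toward the incident side), from index n1 into n2."""
        d = d / np.linalg.norm(d)
        n_surf = n_surf / np.linalg.norm(n_surf)
        cos_i = -np.dot(n_surf, d)
        r = n1 / n2
        k = 1.0 - r**2 * (1.0 - cos_i**2)
        if k < 0.0:
            raise ValueError("total internal reflection")
        return r * d + (r * cos_i - np.sqrt(k)) * n_surf

    # Ray along +z hitting the wedged face of a prism (normal tilted by the wedge angle)
    wedge = np.deg2rad(10.0)                                  # illustrative prism wedge angle
    normal = np.array([np.sin(wedge), 0.0, -np.cos(wedge)])   # points back toward the ray
    d_in = np.array([0.0, 0.0, 1.0])
    d_out = refract(d_in, normal, n1=1.0, n2=1.517)           # air into BK7-like glass

    deviation = np.degrees(np.arccos(np.clip(np.dot(d_in, d_out), -1.0, 1.0)))
    print(f"ray deviation at the first surface: {deviation:.3f} deg")
    ```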

  6. The evolving interaction of low-frequency earthquakes during transient slip.

    PubMed

    Frank, William B; Shapiro, Nikolaï M; Husker, Allen L; Kostoglodov, Vladimir; Gusev, Alexander A; Campillo, Michel

    2016-04-01

    Observed along the roots of seismogenic faults where the locked interface transitions to a stably sliding one, low-frequency earthquakes (LFEs) primarily occur as event bursts during slow slip. Using an event catalog from Guerrero, Mexico, we employ a statistical analysis to consider the sequence of LFEs at a single asperity as a point process, and deduce the level of time clustering from the shape of its autocorrelation function. We show that while the plate interface remains locked, LFEs behave as a simple Poisson process, whereas they become strongly clustered in time during even the smallest slow slip, consistent with interaction between different LFE sources. Our results demonstrate that bursts of LFEs can result from the collective behavior of asperities whose interaction depends on the state of the fault interface.
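
    A minimal version of the statistical test described above, checking whether a catalog of LFE occurrence times behaves as a Poisson process or is clustered, can be built from the autocorrelation of the binned event-count series. The sketch below applies it to two synthetic catalogs; the bin width, rates and burst parameters are arbitrary choices for illustration, not the Guerrero catalog.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    def autocorr(counts, max_lag=50):
        """Autocorrelation of a binned event-count series at lags 1..max_lag."""
        x = counts - counts.mean()
        var = np.dot(x, x)
        return np.array([np.dot(x[:-k], x[k:]) / var for k in range(1, max_lag + 1)])

    # Synthetic "locked interface" catalog: stationary Poisson occurrence times [s]
    t_poisson = np.cumsum(rng.exponential(scale=60.0, size=5000))
    # Synthetic "slow slip" catalog: the same number of events drawn in clustered bursts
    burst_starts = np.cumsum(rng.exponential(scale=5000.0, size=80))
    t_clustered = np.sort(np.concatenate(
        [s + rng.exponential(scale=120.0, size=60) for s in burst_starts]))

    for name, times in [("Poisson", t_poisson), ("clustered", t_clustered)]:
        counts, _ = np.histogram(times, bins=np.arange(0, times[-1], 60.0))
        ac = autocorr(counts)
        print(f"{name:9s}: mean autocorrelation over first 10 lags = {ac[:10].mean():+.3f}")
    ```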

  7. Methanol clusters (CH3OH)n: putative global minimum-energy structures from model potentials and dispersion-corrected density functional theory.

    PubMed

    Kazachenko, Sergey; Bulusu, Satya; Thakkar, Ajit J

    2013-06-14

    Putative global minima are reported for methanol clusters (CH3OH)n with n ≤ 15. The predictions are based on global optimization of three intermolecular potential energy models followed by local optimization and single-point energy calculations using two variants of dispersion-corrected density functional theory. Recurring structural motifs include folded and/or twisted rings, folded rings with a short branch, and stacked rings. Many of the larger structures are stabilized by weak C-H···O bonds.

  8. 78 FR 55629 - Special Conditions: Cirrus Design Corporation, Model SF50; Inflatable Three-Point Restraint...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-11

    ...-0781; Special Conditions No. 23-261-SC] Special Conditions: Cirrus Design Corporation, Model SF50... conditions are issued for the Cirrus Design Corporation (Cirrus), model SF50. This airplane will have novel and unusual design features associated with installation of an inflatable three-point restraint safety...

  9. Spacing distribution functions for the one-dimensional point-island model with irreversible attachment

    NASA Astrophysics Data System (ADS)

    González, Diego Luis; Pimpinelli, Alberto; Einstein, T. L.

    2011-07-01

    We study the configurational structure of the point-island model for epitaxial growth in one dimension. In particular, we calculate the island gap and capture zone distributions. Our model is based on an approximate description of nucleation inside the gaps. Nucleation is described by the joint probability density p_n^{XY}(x,y), which represents the probability density of nucleation at position x within a gap of size y. Our proposed functional form for p_n^{XY}(x,y) describes the statistical behavior of the system excellently. We compare our analytical model with extensive numerical simulations. Our model retains the most relevant physical properties of the system.
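
    For readers who want to reproduce the kind of statistics discussed here, a bare-bones kinetic Monte Carlo sketch of a 1D point-island model with irreversible attachment is given below. It simplifies the physics by depositing all monomers up-front instead of using a continuous deposition flux, records only the island positions, and reports summary statistics of the gap distribution; lattice size and monomer number are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    L = 20_000         # lattice sites (periodic)
    n_monomers = 1000  # monomers deposited up-front (simplified: no ongoing deposition flux)

    # Monomers perform random walks; two monomers meeting nucleate a point island,
    # and a monomer reaching an existing island attaches irreversibly.
    monomers = list(rng.choice(L, size=n_monomers, replace=False))
    occupied = set(monomers)   # current monomer positions
    islands = set()            # island (nucleation-site) positions

    while monomers:
        idx = rng.integers(len(monomers))
        old = monomers[idx]
        new = (old + rng.choice([-1, 1])) % L              # one diffusion hop
        occupied.discard(old)
        if new in islands:                                  # irreversible attachment
            monomers.pop(idx)
        elif new in occupied:                               # meets another monomer: nucleation
            islands.add(new)
            occupied.discard(new)
            monomers.pop(idx)
            monomers.remove(new)
        else:                                               # ordinary hop
            monomers[idx] = new
            occupied.add(new)

    gaps = np.diff(np.sort(np.fromiter(islands, dtype=int)))
    print(f"{len(islands)} islands, mean gap = {gaps.mean():.1f} sites, "
          f"normalized gap std = {gaps.std() / gaps.mean():.2f}")
    ```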

  10. Numerical simulation of a lattice polymer model at its integrable point

    NASA Astrophysics Data System (ADS)

    Bedini, A.; Owczarek, A. L.; Prellberg, T.

    2013-07-01

    We revisit an integrable lattice model of polymer collapse using numerical simulations. This model was first studied by Blöte and Nienhuis (1989 J. Phys. A: Math. Gen. 22 1415) and it describes polymers with some attraction, providing thus a model for the polymer collapse transition. At a particular set of Boltzmann weights the model is integrable and the exponents ν = 12/23 ≈ 0.522 and γ = 53/46 ≈ 1.152 have been computed via identification of the scaling dimensions xt = 1/12 and xh = -5/48. We directly investigate the polymer scaling exponents via Monte Carlo simulations using the pruned-enriched Rosenbluth method algorithm. By simulating this polymer model for walks up to length 4096 we find ν = 0.576(6) and γ = 1.045(5), which are clearly different from the predicted values. Our estimate for the exponent ν is compatible with the known θ-point value of 4/7 and in agreement with very recent numerical evaluation by Foster and Pinettes (2012 J. Phys. A: Math. Theor. 45 505003).

  11. A real-time ionospheric model based on GNSS Precise Point Positioning

    NASA Astrophysics Data System (ADS)

    Tu, Rui; Zhang, Hongping; Ge, Maorong; Huang, Guanwen

    2013-09-01

    This paper proposes a method for real-time monitoring and modeling of the ionospheric Total Electron Content (TEC) by Precise Point Positioning (PPP). Firstly, the ionospheric TEC and the receiver's Differential Code Biases (DCB) are estimated from undifferenced raw observations in real time; the ionospheric TEC model is then established based on the Single Layer Model (SLM) assumption and the recovered ionospheric TEC. In this study, high-precision phase observations are used directly instead of phase-smoothed code observations. In addition, the DCB estimation is separated from the establishment of the ionospheric model, which limits the impact of the SLM assumption. The ionospheric model is established at every epoch for real-time application. The method is validated with three different GNSS networks on a local, regional, and global basis. The results show that the method is feasible and effective; the real-time ionosphere and DCB results are very consistent with the IGS final products, with biases of 1-2 TECU and 0.4 ns, respectively.
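
    Two ingredients of such a procedure, forming the geometry-free combination of dual-frequency observations to obtain slant TEC plus a lumped code-bias term, and mapping slant TEC to vertical TEC under the single-layer-model assumption, can be sketched as follows. The GPS L1/L2 frequencies are standard, but the observation values, the bias, and the shell height are made-up numbers, and the epoch-wise phase-based estimation described in the paper is not reproduced here.

```python
import numpy as np

C = 299792458.0                  # speed of light, m/s
F1, F2 = 1575.42e6, 1227.60e6    # GPS L1/L2 frequencies, Hz
K = 40.3                         # ionospheric delay constant
R_E = 6371e3                     # mean Earth radius, m
H_SHELL = 450e3                  # assumed single-layer shell height, m

def slant_tec(p1, p2, dcb_sum_ns=0.0):
    """Slant TEC (TECU) from dual-frequency code observations (m).
    dcb_sum_ns: combined receiver + satellite differential code bias, ns (assumed known)."""
    dcb_m = C * dcb_sum_ns * 1e-9
    stec_el_m2 = (p2 - p1 - dcb_m) * F1**2 * F2**2 / (K * (F1**2 - F2**2))
    return stec_el_m2 / 1e16     # 1 TECU = 1e16 electrons/m^2

def vertical_tec(stec, zenith_deg):
    """Map slant TEC to vertical TEC with the single-layer-model mapping function."""
    z = np.radians(zenith_deg)
    sin_zp = R_E / (R_E + H_SHELL) * np.sin(z)
    return stec * np.sqrt(1.0 - sin_zp**2)   # = stec * cos(z')

# Made-up example: a 6.5 m P2-P1 difference at 60 deg zenith, 5 ns total DCB.
stec = slant_tec(p1=22000000.0, p2=22000006.5, dcb_sum_ns=5.0)
print(round(stec, 1), "TECU slant,", round(vertical_tec(stec, 60.0), 1), "TECU vertical")
```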

  12. Detailed kinetic modeling study of n-pentanol oxidation

    DOE PAGES

    Heufer, K. Alexander; Sarathy, S. Mani; Curran, Henry J.; ...

    2012-09-28

    To help overcome the world’s dependence upon fossil fuels, suitable biofuels are promising alternatives that can be used in the transportation sector. Recent research on internal combustion engines shows that short alcoholic fuels (e.g., ethanol or n-butanol) have reduced pollutant emissions and increased knock resistance compared to fossil fuels. Although higher molecular weight alcohols (e.g., n-pentanol and n-hexanol) exhibit higher reactivity that lowers their knock resistance, they are suitable for diesel engines or advanced engine concepts, such as homogeneous charge compression ignition (HCCI), where higher reactivity at lower temperatures is necessary for engine operation. The present study presents a detailed kinetic model for n-pentanol based on modeling rules previously presented for n-butanol. This approach was initially validated using quantum chemistry calculations to verify the most stable n-pentanol conformation and to obtain C–H and C–C bond dissociation energies. In addition, the proposed model has been validated against ignition delay time data, speciation data from a jet-stirred reactor, and laminar flame velocity measurements. Overall, the model shows good agreement with the experiments and permits a detailed discussion of the differences between alcohols and alkanes.

  13. Point Defect Identification and Management for Sub-300 nm Light Emitting Diodes and Laser Diodes Grown on Bulk AlN Substrates

    NASA Astrophysics Data System (ADS)

    Bryan, Zachary A.

    The identification and role of point defects in AlN thin films and bulk crystals are studied. High-resolution photoluminescence studies on doped and undoped c-plane and m-plane homoepitaxial films reveal several sharp donor-bound exciton (DBX) peaks with a full width at half maximum (FWHM) as narrow as 500 microeV. Power-dependent photoluminescence measurements distinguish DBXs tied to the Gamma5 free exciton (FX) from those tied to the Gamma1 FX. The DBX transitions at 6.012 and 6.006 eV are identified as originating from the neutral silicon donor (Si0X) and the neutral oxygen donor (O0X), respectively. With multiple DBXs and their respective two-electron-satellite peaks identified, a Haynes rule plot is developed for the first time for AlN. While high-quality AlN homoepitaxy is achievable by metalorganic chemical vapor deposition (MOCVD) growth, currently available commercial AlN wafers are typically hindered by the presence of a broad below-bandgap optical absorption band centered at 4.7 eV (about 265 nm) with an absorption coefficient of well over 1000 cm-1. Through density functional theory calculations, it is determined that substitutional carbon on the nitrogen site causes this absorption. Further studies reveal a donor-acceptor pair (DAP) recombination between substitutional carbon on the nitrogen site and a nitrogen vacancy with an emission energy of 2.8 eV. Lastly, co-doping bulk AlN with Si or O is explored and found to suppress the unwanted 4.7 eV absorption band. A novel Fermi-level control scheme for point defect management during MOCVD growth of III-nitride materials by above-bandgap illumination is proposed and implemented for Mg-doped GaN and Si-doped AlGaN materials as a proof of concept. The point defect control scheme uses photo-generated minority charge carriers to control the electro-chemical potential of the system and increase the formation energies of electrically charged compensating point defects. The result is a lower incorporation of compensating point

  14. A hierarchical model combining distance sampling and time removal to estimate detection probability during avian point counts

    USGS Publications Warehouse

    Amundson, Courtney L.; Royle, J. Andrew; Handel, Colleen M.

    2014-01-01

    Imperfect detection during animal surveys biases estimates of abundance and can lead to improper conclusions regarding distribution and population trends. Farnsworth et al. (2005) developed a combined distance-sampling and time-removal model for point-transect surveys that addresses both availability (the probability that an animal is available for detection; e.g., that a bird sings) and perceptibility (the probability that an observer detects an animal, given that it is available for detection). We developed a hierarchical extension of the combined model that provides an integrated analysis framework for a collection of survey points at which both distance from the observer and time of initial detection are recorded. Implemented in a Bayesian framework, this extension facilitates evaluating covariates on abundance and detection probability, incorporating excess zero counts (i.e. zero-inflation), accounting for spatial autocorrelation, and estimating population density. Species-specific characteristics, such as behavioral displays and territorial dispersion, may lead to different patterns of availability and perceptibility, which may, in turn, influence the performance of such hierarchical models. Therefore, we first test our proposed model using simulated data under different scenarios of availability and perceptibility. We then illustrate its performance with empirical point-transect data for a songbird that consistently produces loud, frequent, primarily auditory signals, the Golden-crowned Sparrow (Zonotrichia atricapilla); and for 2 ptarmigan species (Lagopus spp.) that produce more intermittent, subtle, and primarily visual cues. Data were collected by multiple observers along point transects across a broad landscape in southwest Alaska, so we evaluated point-level covariates on perceptibility (observer and habitat), availability (date within season and time of day), and abundance (habitat, elevation, and slope), and included a nested point
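
    The two detection components can be illustrated with a small generative simulation: availability is modelled as a Bernoulli outcome in each of several time intervals (the time-removal part), and perceptibility as a half-normal function of distance from the point. This is only a sketch of the kind of data such a hierarchical model is fit to; the parameter values, the half-normal form, and the truncation radius are illustrative assumptions, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_point_count(n_birds=20, radius=250.0, sigma=100.0,
                         p_avail_per_interval=0.3, n_intervals=3):
    """Simulate one point count with availability (time removal) and
    distance-dependent perceptibility (half-normal detection function)."""
    # Birds placed uniformly over the survey circle -> triangular distance density.
    r = radius * np.sqrt(rng.uniform(size=n_birds))
    # Interval of first availability (cue production); -1 if never available.
    avail = rng.uniform(size=(n_birds, n_intervals)) < p_avail_per_interval
    first_interval = np.where(avail.any(axis=1), avail.argmax(axis=1), -1)
    # Perceptibility given availability: half-normal function of distance.
    p_detect = np.exp(-r**2 / (2.0 * sigma**2))
    detected = (first_interval >= 0) & (rng.uniform(size=n_birds) < p_detect)
    return r[detected], first_interval[detected]

distances, intervals = simulate_point_count()
print(f"{len(distances)} of 20 birds detected")
print("distances (m):", np.round(distances, 0))
print("first-detection interval:", intervals)
```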

  15. The scalar-scalar-tensor inflationary three-point function in the axion monodromy model

    NASA Astrophysics Data System (ADS)

    Chowdhury, Debika; Sreenath, V.; Sriramkumar, L.

    2016-11-01

    The axion monodromy model involves a canonical scalar field that is governed by a linear potential with superimposed modulations. The modulations in the potential are responsible for a resonant behavior which gives rise to persisting oscillations in the scalar and, to a smaller extent, in the tensor power spectra. Interestingly, such spectra have been shown to provide a better fit to the cosmological data than the more conventional, nearly scale-invariant primordial power spectra. The scalar bi-spectrum in the model also exhibits continued modulations, and the resonance is known to boost the amplitude of the scalar non-Gaussianity parameter to rather large values. An analytical expression for the scalar bi-spectrum had been arrived at earlier and has, in fact, been used to compare the model with the cosmic microwave background anisotropies at the level of three-point functions involving scalars. In this work, with future applications in mind, we arrive at a similar analytical template for the scalar-scalar-tensor cross-correlation. We also analytically establish the consistency relation (in the squeezed limit) for this three-point function. We conclude with a summary of the main results obtained.

  16. From containment to community: Trigger points from the London pandemic (H1N1) 2009 influenza incident response.

    PubMed

    Balasegaram, S; Glasswell, A; Cleary, V; Turbitt, D; McCloskey, B

    2011-02-01

    In the UK, during the first wave of pandemic (H1N1) 2009 influenza, a national 'containment' strategy was employed from 25 April to 2 July 2009, with case finding, treatment of cases, contact tracing and prophylaxis of close contacts. The aim of the strategy was to delay the introduction and spread of pandemic flu in the UK, provide a better understanding of the course of the novel disease, and thereby allow more time for the development of treatment and vaccination options. Descriptive study of the management of the containment phase of pandemic (H1N1) 2009 influenza. Analysis of data reported to the London Flu Response Centre (LFRC). The average number of telephone calls and faxes per day from health professionals before 15 June 2009 was 188, but this started to rise from 363 on 12 June, to 674 on 15 June, and peaked on 22 June at 2206 calls. The number of cases confirmed [by pandemic (H1N1) 2009 influenza specific H1 and N1 polymerase chain reaction] in London rose to a peak of 200 cases per day. There were widespread school outbreaks reporting large numbers of absences with influenza-like illnesses. Activity in the LFRC intensified to a point where London was declared a 'hot spot' for pandemic (H1N1) 2009 influenza on 19 June 2009 because of sustained community transmission. The local incident response was modified to the 'outbreak management phase' of the containment phase. The sharp rise in the number of telephone calls and the rise in school outbreaks appeared to be trigger points for community transmission. These indicators should inform decisions on modifying public health strategy in pandemic situations.

  17. The OPALS Plan for Operations: Use of ISS Trajectory and Attitude Models in the OPALS Pointing Strategy

    NASA Technical Reports Server (NTRS)

    Abrahamson, Matthew J.; Oaida, Bogdan; Erkmen, Baris

    2013-01-01

    This paper will discuss the OPALS pointing strategy, focusing on incorporation of ISS trajectory and attitude models to build pointing predictions. Methods to extrapolate an ISS prediction based on past data will be discussed and will be compared to periodically published ISS predictions and Two-Line Element (TLE) predictions. The prediction performance will also be measured against GPS states available in telemetry. The performance of the pointing products will be compared to the allocated values in the OPALS pointing budget to assess compliance with requirements.

  18. Nonextensive models for earthquakes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Silva, R.; Franca, G.S.; Vilar, C.S.

    2006-02-15

    We have revisited the fragment-asperity interaction model recently introduced by Sotolongo-Costa and Posadas [Phys. Rev. Lett. 92, 048501 (2004)] by considering a different definition for mean values in the context of Tsallis nonextensive statistics and by introducing a scaling between the earthquake energy and the size of the fragments, ε ∝ r³. The energy-distribution function (EDF) deduced in our approach is considerably different from the one obtained in the above reference. We have also tested the viability of this EDF with data from two different catalogs (in three different areas), namely the NEIC and the seismic bulletin of the Revista Brasileira de Geofísica. Although both approaches provide very similar values for the nonextensive parameter q, other physical quantities, e.g., the energy density, differ considerably, by several orders of magnitude.
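
    A generic way to explore this kind of nonextensive description numerically is to fit a Tsallis q-exponential to the empirical survival function of event energies. The sketch below uses the standard form exp_q(-E/E0) = [1 + (q-1)E/E0]^(1/(1-q)); this is not necessarily the exact EDF derived in the paper, and the synthetic catalog is generated from that same form purely so the fit has a known answer.

```python
import numpy as np
from scipy.optimize import curve_fit

def q_exp(x, q, e0):
    """Tsallis q-exponential survival function exp_q(-x/e0) for q > 1."""
    arg = 1.0 + (q - 1.0) * x / e0
    return np.where(arg > 0.0, arg ** (1.0 / (1.0 - q)), 0.0)

# Synthetic "catalog" drawn from a q-exponential itself (q_true = 1.6, e0_true = 2.0),
# so the fit should recover the input parameters; real catalogs would replace this.
rng = np.random.default_rng(2)
q_true, e0_true = 1.6, 2.0
u = rng.uniform(size=4000)
energies = e0_true / (q_true - 1.0) * (u ** (-(q_true - 1.0)) - 1.0)

# Empirical survival function N(>E)/N.
e_sorted = np.sort(energies)
survival = 1.0 - np.arange(1, len(e_sorted) + 1) / len(e_sorted)

# Fit q and the characteristic energy e0 (drop the last point where survival == 0).
(q_fit, e0_fit), _ = curve_fit(q_exp, e_sorted[:-1], survival[:-1],
                               p0=(1.5, 1.0), bounds=([1.01, 1e-3], [3.0, 100.0]))
print(f"fitted q = {q_fit:.3f}, e0 = {e0_fit:.3f}")
```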

  19. Relating Alfvén Wave Heating Model to Observations of a Solar Active Region

    NASA Astrophysics Data System (ADS)

    Yoritomo, J. Y.; Van Ballegooijen, A. A.

    2012-12-01

    We compared images from the Solar Dynamics Observatory's (SDO) Atmospheric Imaging Assembly (AIA) with simulations of propagating and dissipating Alfvén waves from a three-dimensional magnetohydrodynamic (MHD) model (van Ballegooijen et al. 2011; Asgari-Targhi & van Ballegooijen 2012). The goal was to search for observational evidence of Alfvén waves in the solar corona and understand their role in coronal heating. We looked at one particular active region on 5 May 2012. Certain distinct loops in the SDO/AIA observations were selected and expanded, and movies were created from these selections in an attempt to discover transverse motions that may be Alfvén waves. Using a magnetogram of that day and the corresponding synoptic map, a potential field model was created for the active region. Three-dimensional MHD models for several loops at different locations in the active region were created. Each model specifies the temperature, pressure, magnetic field strength, average heating rate, and other parameters along the loop. We find that the heating is intermittent in the loops and that reflection occurs at the transition region. As loop height increases, a point is reached where thermal non-equilibrium occurs; in the center of the active region this critical height is much higher than in the periphery. Lastly, we find that the average heating rate and coronal pressure decrease with increasing height in the corona. This research was supported by an NSF grant for the Smithsonian Astrophysical Observatory (SAO) Solar REU program and a SDO/AIA grant for the Smithsonian Astrophysical Observatory.

  20. Solid Lubrication Fundamentals and Applications. Chapter 5; Abrasion: Plowing and Cutting

    NASA Technical Reports Server (NTRS)

    Miyoshi, Kazuhisa

    2001-01-01

    Chapter 5 discusses abrasion, a common wear phenomenon of great economic importance. It has been estimated that 50% of the wear encountered in industry is due to abrasion. Also, it is the mechanism involved in the finishing of many surfaces. Experiments are described to help in understanding the complex abrasion process and in predicting friction and wear behavior in plowing and/or cutting. These experimental modelings and measurements used a single spherical pin (asperity) and a single wedge pin (asperity). Other two-body and three-body abrasion studies used hard abrasive particles.

  1. Viviani Polytopes and Fermat Points

    ERIC Educational Resources Information Center

    Zhou, Li

    2012-01-01

    Given a set of oriented hyperplanes P = {p_1, . . . , p_k} in R^n, define v : R^n → R by v(X) = the sum of the signed distances from X to p_1, . . . , p_k, for any point X ∈ R^n. We give a simple geometric characterization of P for which v is constant, leading to…
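
    The constant-sum property is easy to verify numerically in the classic special case of Viviani's theorem: for an equilateral triangle with inward-oriented side lines, the sum of signed distances from any point equals the triangle's height. The triangle and test points below are arbitrary.

```python
import numpy as np

# Equilateral triangle with unit side, vertices listed counter-clockwise.
vertices = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])

def signed_distance_sum(point, verts):
    """Sum of signed distances from `point` to the side lines, oriented inward."""
    total = 0.0
    n = len(verts)
    for i in range(n):
        a, b = verts[i], verts[(i + 1) % n]
        edge = b - a
        inward_normal = np.array([-edge[1], edge[0]]) / np.linalg.norm(edge)  # CCW -> inward
        total += np.dot(point - a, inward_normal)
    return total

rng = np.random.default_rng(3)
for p in rng.uniform(0.0, 1.0, size=(5, 2)):
    print(np.round(signed_distance_sum(p, vertices), 12))
# All values equal the triangle's height sqrt(3)/2 ~ 0.866, independent of the point.
```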

  2. DNA denaturation through a model of the partition points on a one-dimensional lattice

    NASA Astrophysics Data System (ADS)

    Mejdani, R.; Huseini, H.

    1994-08-01

    We have shown that, by using a model of a partition-point gas on a one-dimensional lattice, we can study not only the saturation curves obtained previously for enzyme kinetics, but also the denaturation process, i.e. the breaking of the hydrogen bonds connecting the two strands of DNA under heat treatment. We think that this model, being very simple and mathematically transparent, can be useful for pedagogical purposes and for other theoretical investigations in chemistry or modern biology.

  3. The modeling and design of the Annular Suspension and Pointing System /ASPS/. [for Space Shuttle

    NASA Technical Reports Server (NTRS)

    Kuo, B. C.; Lin, W. C. W.

    1979-01-01

    The Annular Suspension and Pointing System (ASPS) is a payload auxiliary pointing device of the Space Shuttle. The ASPS is comprised of two major subassemblies, a vernier and a coarse pointing subsystem. The three functions provided by the ASPS are related to the pointing of the payload, centering the payload in the magnetic actuator assembly, and tracking the payload mounting plate and shuttle motions by the coarse gimbals. The equations of motion of a simplified planar model of the ASPS are derived. Attention is given to a state diagram of the dynamics of the ASPS with position-plus-rate controller, the nonlinear spring characteristic for the wire-cable torque of the ASPS, the design of the analog ASPS through decoupling and pole placement, and the time response of different components of the continuous control system.
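
    The pole-placement step can be illustrated, in a much simplified setting, with a double-integrator model of a single pointing axis and a state-feedback gain chosen to place the closed-loop poles. The toy inertia and pole locations below are arbitrary choices; this is not the ASPS planar model derived in the paper.

```python
import numpy as np
from scipy.signal import place_poles

# Single-axis rigid-body pointing model: theta_ddot = u / J  (state x = [theta, theta_dot]).
J = 50.0                                   # assumed payload inertia, kg*m^2
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0 / J]])

# Place closed-loop poles for a well-damped ~0.5 rad/s response (illustrative choice).
desired_poles = np.array([-0.35 + 0.35j, -0.35 - 0.35j])
K = place_poles(A, B, desired_poles).gain_matrix

# Closed-loop dynamics x_dot = (A - B K) x; check the achieved eigenvalues.
print("gain K =", np.round(K, 3))
print("closed-loop poles:", np.round(np.linalg.eigvals(A - B @ K), 3))
```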

  4. Single Molecule Junctions: A Laboratory for Chemistry, Mechanics and Bond Rupture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hybertsen M. S.

    Simultaneous measurement [1] of junction conductance and sustained force in single-molecule junctions bridging metal electrodes provides a powerful tool for the quantitative study of the character of molecule-metal bonds. In this talk I will discuss three topics. First, I will describe chemical trends in link bond strength based on experiments and Density Functional Theory based calculations. Second, I will focus on the specific case of pyridine-linked junctions. Bond rupture from the high-conductance junction structure requires a force that exceeds the rupture force of gold point contacts, which clearly indicates the role of additional forces beyond the specific N-Au donor-acceptor bond. DFT-D2 calculations with empirical addition of dispersion interactions illustrate the interplay between the donor-acceptor bonding and the non-specific van der Waals interactions between the pyridine rings and Au asperities. Third, I will describe recent efforts to characterize the diversity of junction structures realized in break-junction experiments with suitable models for the potential surfaces that are observed. [1] Venkataraman Group, Columbia University.

  5. Point-in-convex polygon and point-in-convex polyhedron algorithms with O(1) complexity using space subdivision

    NASA Astrophysics Data System (ADS)

    Skala, Vaclav

    2016-06-01

    There are many space subdivision and space partitioning techniques used to speed up computations. They mostly rely on orthogonal space subdivision or on hierarchical data structures, e.g. BSP trees, quadtrees, octrees, kd-trees, bounding volume hierarchies, etc. However, in some applications a non-orthogonal space subdivision can offer new ways to speed up computation. In the case of a convex polygon in E2, a simple point-in-polygon test has O(N) complexity, and the optimal algorithm has O(log N) computational complexity. In the E3 case the complexity is O(N) even for a convex polyhedron, as no ordering is defined. New point-in-convex-polygon and point-in-convex-polyhedron algorithms are presented, based on space subdivision in a preprocessing stage and resulting in O(1) run-time complexity. The presented approach is simple to implement. Due to the principle of duality, dual problems, e.g. line-convex polygon intersection and line clipping, can be solved in a similar way.
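
    A simplified variant of the O(1) idea can be sketched as follows: an O(N) preprocessing step subdivides the plane into angular sectors about an interior reference point and records which polygon edges can face each sector, so a query reduces to one atan2, one table lookup, and one or two half-plane tests. Uniform angular bins are used here instead of the paper's non-orthogonal subdivision, so this is illustrative only.

```python
import numpy as np

class ConvexPolygonLocator:
    """Point-in-convex-polygon test with O(1) expected query time via angular bins."""

    def __init__(self, vertices, bins_per_edge=4):
        self.v = np.asarray(vertices, dtype=float)   # CCW-ordered convex polygon
        self.c = self.v.mean(axis=0)                 # interior reference point
        n = len(self.v)
        self.n_bins = bins_per_edge * n
        # Angle of each vertex around the reference point.
        ang = np.arctan2(self.v[:, 1] - self.c[1], self.v[:, 0] - self.c[0])
        self.bins = [[] for _ in range(self.n_bins)]
        for i in range(n):                           # edge i: v[i] -> v[(i+1) % n]
            a0, a1 = ang[i], ang[(i + 1) % n]
            if a1 < a0:                              # edge wraps around -pi/+pi
                a1 += 2.0 * np.pi
            b0 = int((a0 + np.pi) / (2.0 * np.pi) * self.n_bins)
            b1 = int((a1 + np.pi) / (2.0 * np.pi) * self.n_bins)
            for b in range(b0, b1 + 1):
                self.bins[b % self.n_bins].append(i)

    def contains(self, p):
        p = np.asarray(p, dtype=float)
        a = np.arctan2(p[1] - self.c[1], p[0] - self.c[0])
        b = int((a + np.pi) / (2.0 * np.pi) * self.n_bins) % self.n_bins
        for i in self.bins[b]:                       # usually one or two candidate edges
            v0 = self.v[i]
            e = self.v[(i + 1) % len(self.v)] - v0
            cross = e[0] * (p[1] - v0[1]) - e[1] * (p[0] - v0[0])
            if cross < 0.0:                          # point lies to the right of a CCW edge
                return False
        return True

# Regular hexagon, two test points.
hexagon = [(np.cos(t), np.sin(t)) for t in np.linspace(0, 2 * np.pi, 7)[:-1]]
loc = ConvexPolygonLocator(hexagon)
print(loc.contains((0.2, 0.3)), loc.contains((1.5, 0.0)))   # True, False
```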

  6. Hourly predictive Levenberg-Marquardt ANN and multi linear regression models for predicting of dew point temperature

    NASA Astrophysics Data System (ADS)

    Zounemat-Kermani, Mohammad

    2012-08-01

    In this study, the ability of two models, multiple linear regression (MLR) and a Levenberg-Marquardt (LM) feed-forward neural network, to estimate the hourly dew point temperature was examined. Dew point temperature is the temperature at which water vapor in the air condenses into liquid. This temperature can be useful in estimating meteorological variables such as fog, rain, snow, dew, and evapotranspiration and in investigating agronomical issues such as stomatal closure in plants. The availability of hourly records of climatic data (air temperature, relative humidity and pressure) which could be used to predict dew point temperature initiated the practice of modeling. Additionally, the wind vector (wind speed magnitude and direction) and a conceptual input of weather condition were employed as further input variables. Three quantitative standard statistical performance evaluation measures, i.e. the root mean squared error, the mean absolute error, and the absolute logarithmic Nash-Sutcliffe efficiency coefficient (|log(NS)|), were employed to evaluate the performance of the developed models. The results showed that applying the wind vector and weather condition as input vectors along with the meteorological variables could slightly increase the ANN and MLR predictive accuracy. The results also revealed that LM-NN was superior to the MLR model, and the best performance was obtained by considering all potential input variables, in terms of the different evaluation criteria.
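
    For the MLR baseline, a least-squares fit together with the three evaluation measures can be sketched as below. The synthetic predictor matrix stands in for the hourly air temperature, relative humidity, and pressure records, the "true" dew point relation is a made-up rule of thumb, and |log(NS)| is assumed to use base-10 logarithms.

```python
import numpy as np

def fit_mlr(X, y):
    """Ordinary least-squares multiple linear regression with an intercept."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict_mlr(coef, X):
    return np.column_stack([np.ones(len(X)), X]) @ coef

def scores(obs, sim):
    rmse = np.sqrt(np.mean((obs - sim) ** 2))
    mae = np.mean(np.abs(obs - sim))
    ns = 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
    return rmse, mae, abs(np.log10(ns))          # |log(NS)|, defined for NS > 0

# Synthetic stand-in for hourly records: [air temp, relative humidity, pressure].
rng = np.random.default_rng(4)
X = np.column_stack([rng.normal(20, 8, 2000), rng.uniform(20, 100, 2000),
                     rng.normal(1013, 8, 2000)])
# A made-up "true" dew point relation plus noise, for demonstration only.
y = X[:, 0] - (100 - X[:, 1]) / 5.0 + rng.normal(0, 0.8, 2000)

coef = fit_mlr(X[:1500], y[:1500])               # train on the first 1500 hours
rmse, mae, abs_log_ns = scores(y[1500:], predict_mlr(coef, X[1500:]))
print(f"RMSE={rmse:.2f} degC  MAE={mae:.2f} degC  |log(NS)|={abs_log_ns:.3f}")
```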

  7. Conformal Field Theories in the Epsilon and 1/N Expansions

    NASA Astrophysics Data System (ADS)

    Fei, Lin

    In this thesis, we study various conformal field theories in two different approximation schemes - the epsilon-expansion in dimensional continuation, and the large-N expansion. We first propose a cubic theory in d = 6 - epsilon as the UV completion of the quartic scalar O(N) theory in d > 4. We study this theory to three-loop order and show that various operator dimensions are consistent with large-N results. This theory possesses an IR stable fixed point at real couplings for N > 1038, suggesting the existence of a perturbatively unitary interacting O(N) symmetric CFT in d = 5. Extending this model to Sp(N) symmetric theories, we find an interacting non-unitary CFT in d = 5. For the special case of Sp(2), the IR fixed point possesses an enhanced symmetry given by the supergroup OSp(1|2). We also observe that various operator dimensions of the Sp(2) theory match those of the 0-state Potts model. We provide a graph theoretic proof showing that the zero-, two-, and three-point functions in the Sp(2) model and the 0-state Potts model indeed match to all orders in perturbation theory, strongly suggesting their equivalence. We then study two fermionic theories in d = 2 + epsilon - the Gross-Neveu model and the Nambu-Jona-Lasinio model, together with their UV completions in d = 4 - epsilon given by the Gross-Neveu-Yukawa and the Nambu-Jona-Lasinio-Yukawa theories. We compute their sphere free energy and certain operator dimensions, passing all checks against large-N results. We use two-sided Padé approximations with our epsilon-expansion results to obtain estimates of various quantities in the physical dimension d = 3. Finally, we provide evidence that the N = 1 Gross-Neveu-Yukawa model, which contains a 2-component Majorana fermion, and the N = 2 Nambu-Jona-Lasinio-Yukawa model, which contains a 2-component Dirac fermion, both have emergent supersymmetry.

  8. Localized saddle-point search and application to temperature-accelerated dynamics

    NASA Astrophysics Data System (ADS)

    Shim, Yunsic; Callahan, Nathan B.; Amar, Jacques G.

    2013-03-01

    We present a method for speeding up temperature-accelerated dynamics (TAD) simulations by carrying out a localized saddle-point (LSAD) search. In this method, instead of using the entire system to determine the energy barriers of activated processes, the calculation is localized by only including a small chunk of atoms around the atoms directly involved in the transition. Using this method, we have obtained N-independent scaling for the computational cost of the saddle-point search as a function of system size N. The error arising from localization is analyzed using a variety of model systems, including a variety of activated processes on Ag(100) and Cu(100) surfaces, as well as multiatom moves in Cu radiation damage and metal heteroepitaxial growth. Our results show significantly improved performance of TAD with the LSAD method, for the case of Ag/Ag(100) annealing and Cu/Cu(100) growth, while maintaining a negligibly small error in energy barriers.

  9. Models Robustness for Simulating Drainage and NO3-N Fluxes

    NASA Astrophysics Data System (ADS)

    Jabro, Jay; Jabro, Ann

    2013-04-01

    Computer models simulate and forecast appropriate agricultural practices to reduce environmental impact. The objectives of this study were to assess and compare the robustness and performance of three models -- LEACHM, NCSWAP, and SOIL-SOILN -- for simulating drainage and NO3-N leaching fluxes in an intense pasture system without recalibration. A 3-yr study was conducted on a Hagerstown silt loam to measure drainage and NO3-N fluxes below 1 m depth from N-fertilized orchardgrass using intact core lysimeters. Five N-fertilizer treatments were replicated five times in a randomized complete block experimental design. The models were validated under orchardgrass using soil, water and N transformation rate parameters and C pool fractionation derived from a previous study conducted on similar soils under corn. The model efficiencies (MEF) for drainage and NO3-N fluxes were 0.53 and 0.69 for LEACHM; 0.75 and 0.39 for NCSWAP; and 0.94 and 0.91 for SOIL-SOILN. The models failed to produce reasonable simulations of drainage and NO3-N fluxes in January, February and March due to limited water movement associated with frozen soil and snow accumulation and melt. The differences between simulated and measured NO3-N leaching and among the models' performances may also be related to the soil N and C transformation processes embedded in the models. These results are a monumental progression in the validation of computer models which will lead to continued diffusion across diverse stakeholders.

  10. A maximally particle-hole asymmetric spectrum emanating from a semi-Dirac point.

    PubMed

    Quan, Yundi; Pickett, Warren E

    2018-02-21

    Tight-binding models have proven an effective means of revealing Dirac (massless) dispersion, flat bands (infinite mass), and intermediate cases such as the semi-Dirac (sD) dispersion. This approach is extended to a three-band model that yields, with chosen parameters in a two-band limit, a closed line with maximally asymmetric particle-hole dispersion: infinite-mass holes, zero-mass particles. The model retains the sD points for a general set of parameters. Adjacent to this limiting case, hole Fermi surfaces are tiny and needle-like. A pair of large electron Fermi surfaces at low doping merge and collapse at half filling to a flat (zero energy) closed contour with infinite mass along the contour and enclosing no carriers on either side, while the hole Fermi surface has shrunk to a point at zero energy, also containing no carriers. The tight-binding model is used to study several characteristics of the dispersion and density of states. The model inspired a generalization of the sD dispersion to a general ±[Formula: see text] form, for which analysis reveals that both n and m must be odd to provide a diabolical point with topological character. The evolution of the Hofstadter spectrum of this three-band system with interband coupling strength is presented and discussed.
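
    For background, a minimal two-band semi-Dirac dispersion, quadratic (massive) along one momentum direction and linear (massless) along the other, can be sampled as below. The effective mass and velocity are arbitrary reduced units; this is not the three-band tight-binding model of the paper.

```python
import numpy as np

def semi_dirac_bands(kx, ky, m_eff=1.0, v=1.0):
    """Two-band semi-Dirac dispersion: quadratic along kx, linear along ky.
    E(k) = +/- sqrt[(kx^2 / 2 m_eff)^2 + (v ky)^2]  (hbar = 1, reduced units)."""
    e = np.sqrt((kx**2 / (2.0 * m_eff))**2 + (v * ky)**2)
    return +e, -e

# Sample the dispersion on a small grid around the semi-Dirac point.
k = np.linspace(-1.0, 1.0, 201)
KX, KY = np.meshgrid(k, k)
upper, lower = semi_dirac_bands(KX, KY)

# Along ky = 0 the bands touch quadratically; along kx = 0 they touch linearly.
i0 = len(k) // 2
print("E(kx, ky=0):", np.round(upper[i0, ::50], 4))   # ~kx^2/2 behaviour
print("E(kx=0, ky):", np.round(upper[::50, i0], 4))   # ~|ky| behaviour
```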

  11. A Hybrid Physics-Based Data-Driven Approach for Point-Particle Force Modeling

    NASA Astrophysics Data System (ADS)

    Moore, Chandler; Akiki, Georges; Balachandar, S.

    2017-11-01

    This study improves upon the physics-based pairwise interaction extended point-particle (PIEP) model. The PIEP model leverages a physical framework to predict fluid-mediated interactions between solid particles. While the PIEP model is a powerful tool, its pairwise assumption leads to increased error in flows with high particle volume fractions. To reduce this error, a regression algorithm is used to model the differences between the current PIEP model's predictions and the results of direct numerical simulations (DNS) for an array of monodisperse solid particles subjected to various flow conditions. The resulting statistical model and the physical PIEP model are superimposed to construct a hybrid, physics-based data-driven PIEP model. It must be noted that the performance of a pure data-driven approach without the model form provided by the physical PIEP model is substantially inferior. The hybrid model's predictive capabilities are analyzed using additional DNS. In every case tested, the hybrid PIEP model's predictions are more accurate than those of the physical PIEP model. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-1315138 and the U.S. DOE, NNSA, ASC Program, as a Cooperative Agreement under Contract No. DE-NA0002378.
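
    The superposition idea, fitting a regression model to the residual between a physics-based prediction and reference data and adding it back at prediction time, can be sketched generically. The "physics model", the synthetic reference (DNS-like) data, and the random-forest regressor below are stand-ins, not the actual PIEP formulation or its DNS data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)

# Features per particle: (local volume fraction, Reynolds number) -- illustrative only.
X = np.column_stack([rng.uniform(0.05, 0.4, 3000), rng.uniform(10, 200, 3000)])

def physics_model(X):
    """Stand-in for a physics-based force prediction (e.g., a pairwise-type model)."""
    phi, re = X[:, 0], X[:, 1]
    return 1.0 + 0.15 * re**0.687 * (1.0 - phi)

# Synthetic "DNS" reference: physics model plus a volume-fraction-dependent error.
y_dns = physics_model(X) + 2.0 * X[:, 0]**2 * np.sqrt(X[:, 1]) + rng.normal(0, 0.05, 3000)

# Hybrid model: learn the residual, then superimpose it on the physics prediction.
train, test = slice(0, 2400), slice(2400, None)
residual_model = RandomForestRegressor(n_estimators=200, random_state=0)
residual_model.fit(X[train], y_dns[train] - physics_model(X[train]))

hybrid_pred = physics_model(X[test]) + residual_model.predict(X[test])
physics_err = np.mean(np.abs(physics_model(X[test]) - y_dns[test]))
hybrid_err = np.mean(np.abs(hybrid_pred - y_dns[test]))
print(f"mean abs error: physics-only {physics_err:.3f}, hybrid {hybrid_err:.3f}")
```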

  12. A non-ideal model for predicting the effect of dissolved salt on the flash point of solvent mixtures.

    PubMed

    Liaw, Horng-Jang; Wang, Tzu-Ai

    2007-03-06

    Flash point is one of the major quantities used to characterize the fire and explosion hazard of liquids. Liquids with dissolved salts arise, for example, in salt-distillation processes for separating close-boiling or azeotropic systems, and the addition of salts to a liquid may reduce its fire and explosion hazard. In this study, we have modified a previously proposed model for predicting the flash point of miscible mixtures to extend its application to solvent/salt mixtures. This modified model was verified by comparison with experimental data for organic solvent/salt and aqueous-organic solvent/salt mixtures to confirm its efficacy in predicting the flash points of these mixtures. The experimental results confirm markedly greater increases in liquid flash point upon addition of inorganic salts than upon supplementation with equivalent quantities of water. Based on this evidence, it appears reasonable to suggest potential application of the model in assessing the fire and explosion hazard of solvent/salt mixtures and, further, that the addition of inorganic salts may prove useful for hazard reduction in flammable liquids.
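
    Mixture flash-point models of this family typically start from a Le Chatelier-type closure, sum_i x_i*gamma_i*Psat_i(T)/Psat_i(T_fp,i) = 1, solved for the mixture flash point T. The sketch below solves the ideal-solution case (gamma_i = 1) for a hypothetical binary mixture with Antoine vapor pressures; the Antoine constants and pure-component flash points are rough, made-up values, and the salt effect studied in the paper would enter through the activity coefficients.

```python
import numpy as np
from scipy.optimize import brentq

def p_sat(T_c, antoine):
    """Antoine equation: log10(P/mmHg) = A - B / (T_c + C), T in deg C."""
    A, B, C = antoine
    return 10.0 ** (A - B / (T_c + C))

# Illustrative Antoine constants and pure-component flash points (deg C); these are
# rough, made-up values in the style of light alcohols, not vetted property data.
components = {
    "solvent_1": {"antoine": (8.08, 1582.0, 239.7), "t_fp": 11.0},
    "solvent_2": {"antoine": (8.20, 1643.0, 230.3), "t_fp": 13.0},
}

def flash_point(x, gamma=None):
    """Le Chatelier-type mixture flash point: sum_i x_i*g_i*Psat_i(T)/Psat_i(Tfp_i) = 1."""
    names = list(components)
    if gamma is None:
        gamma = np.ones(len(names))          # ideal solution; salts would modify this
    def residual(T):
        total = 0.0
        for xi, gi, name in zip(x, gamma, names):
            c = components[name]
            total += xi * gi * p_sat(T, c["antoine"]) / p_sat(c["t_fp"], c["antoine"])
        return total - 1.0
    return brentq(residual, -50.0, 150.0)

print(f"estimated flash point of a 50/50 mixture: {flash_point([0.5, 0.5]):.1f} deg C")
```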

  13. A Lidar Point Cloud Based Procedure for Vertical Canopy Structure Analysis And 3D Single Tree Modelling in Forest

    PubMed Central

    Wang, Yunsheng; Weinacker, Holger; Koch, Barbara

    2008-01-01

    A procedure for both vertical canopy structure analysis and 3D single tree modelling based on Lidar point clouds is presented in this paper. The whole research area is segmented into small study cells by a raster net. For each cell, a normalized point cloud whose point heights represent the absolute heights of the ground objects is generated from the original Lidar raw point cloud. The main tree canopy layers and their height ranges are detected according to a statistical analysis of the height distribution probability of the normalized raw points. For 3D modelling, individual trees are detected and delineated not only from the top canopy layer but also from the sub-canopy layer. The normalized points are resampled into a local voxel space, and a series of horizontal 2D projection images at different height levels is then generated with respect to the voxel space. Tree crown regions are detected from the projection images. Individual trees are then extracted by means of a pre-order forest traversal process through all the tree crown regions at the different height levels. Finally, 3D tree crown models of the extracted individual trees are reconstructed. With further analyses of the 3D models of individual tree crowns, important parameters such as crown height range, crown volume and crown contours at different height levels can be derived. PMID:27879916
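
    The first two steps, normalizing point heights against a ground model and reading canopy layers off the vertical height distribution, can be sketched as follows. The synthetic point cloud, the grid-minimum ground model, and the simple peak-picking rule are illustrative simplifications of the cell-wise statistical analysis described in the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic Lidar returns over a 50 m x 50 m cell: ground + two canopy layers.
n = 6000
xy = rng.uniform(0, 50, size=(n, 2))
ground = 0.2 * xy[:, 0]                                   # gently sloping terrain
layer = rng.choice([0, 1, 2], size=n, p=[0.3, 0.4, 0.3])  # ground / sub-canopy / top canopy
z = ground + np.where(layer == 0, rng.normal(0.1, 0.1, n),
            np.where(layer == 1, rng.normal(8.0, 1.5, n), rng.normal(22.0, 2.5, n)))

# 1) Normalize heights: subtract a grid-minimum ground surface (5 m raster).
cell = (xy // 5).astype(int)
ground_grid = np.full((10, 10), np.inf)
np.minimum.at(ground_grid, (cell[:, 0], cell[:, 1]), z)
z_norm = z - ground_grid[cell[:, 0], cell[:, 1]]

# 2) Vertical structure: histogram of normalized heights, then pick local maxima as layers.
counts, edges = np.histogram(z_norm, bins=np.arange(0, 35, 1.0))
peaks = [0.5 * (edges[i] + edges[i + 1])
         for i in range(1, len(counts) - 1)
         if counts[i] > counts[i - 1] and counts[i] >= counts[i + 1] and counts[i] > 0.02 * n]
print("detected canopy layer heights (m):", np.round(peaks, 1))
```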

  14. Backward bifurcations, turning points and rich dynamics in simple disease models.

    PubMed

    Zhang, Wenjing; Wahl, Lindi M; Yu, Pei

    2016-10-01

    In this paper, dynamical systems theory and bifurcation theory are applied to investigate the rich dynamical behaviours observed in three simple disease models. The 2- and 3-dimensional models we investigate have arisen in previous investigations of epidemiology, in-host disease, and autoimmunity. These closely related models display interesting dynamical behaviors including bistability, recurrence, and regular oscillations, each of which has possible clinical or public health implications. In this contribution we elucidate the key role of backward bifurcations in the parameter regimes leading to the behaviors of interest. We demonstrate that backward bifurcations with varied positions of turning points facilitate the appearance of Hopf bifurcations, and the varied dynamical behaviors are then determined by the properties of the Hopf bifurcation(s), including their location and direction. A Maple program developed earlier is implemented to determine the stability of limit cycles bifurcating from the Hopf bifurcation. Numerical simulations are presented to illustrate phenomena of interest such as bistability, recurrence and oscillation. We also discuss the physical motivations for the models and the clinical implications of the resulting dynamics.

  15. Finite Source Inversion for Laboratory Earthquakes

    NASA Astrophysics Data System (ADS)

    Parker, J. M.; Glaser, S. D.

    2017-12-01

    We produce finite source inversion results for laboratory earthquakes (LEQ) in PMMA, confirmed by video recording of the fault contact. The LEQs are generated under highly controlled laboratory conditions and recorded by an array of absolutely calibrated acoustic emission (AE) sensors. Following the method of Hartzell and Heaton (1983), we develop a solution using only the single-component AE sensors common in laboratory experiments. A set of calibration tests using glass capillary sources of varying size resolves the material characteristics and synthetic Green's functions such that the uncertainty in source location is reduced to 3σ < 1 mm; typical source radii are 1 mm. Well-isolated events with corner frequencies on the order of 0.1 MHz (Mw -6) are recorded at 20 MHz and initially band-pass filtered from 0.1 to 1.0 MHz; in comparison, large earthquakes with corner frequencies around 0.1 Hz are commonly filtered from 0.1 to 1.0 Hz. We compare results of the inversion and video recording to the slip distribution predicted by the Cattaneo partial-slip asperity model and by numerical modeling. Not all asperities are large enough to resolve individually, so some results must be interpreted as the smoothed effects of clusters of tiny contacts. For large asperities, partial slip is observed originating at the asperity edges and moving inward, as predicted by the theory. Furthermore, expanding shear rupture fronts are observed as they reach resistive patches of asperities and halt or continue, depending on the relative energies of rupture and resistance.

  16. Liquid-liquid critical point in a simple analytical model of water.

    PubMed

    Urbic, Tomaz

    2016-10-01

    A statistical model for a simple three-dimensional Mercedes-Benz model of water was used to study phase diagrams. This model describes, on a simple level, the thermal and volumetric properties of waterlike molecules. A molecule is presented as a soft sphere with four directions in which hydrogen bonds can be formed. Two neighboring waters can interact through a van der Waals interaction or an orientation-dependent hydrogen-bonding interaction. For pure water, we explored properties such as molar volume, density, heat capacity, thermal expansion coefficient, and isothermal compressibility and found that the volumetric and thermal properties follow the same trends with temperature as in real water and are in good general agreement with Monte Carlo simulations. The model also exhibits two critical points, one for the liquid-gas transition and one for the transition between a low-density and a high-density fluid. Coexistence curves and a Widom line for the maximum and minimum in the thermal expansion coefficient divide the phase space of the model into three parts: one part contains the gas region, the second a high-density liquid, and the third a low-density liquid.

  18. Earthquake source parameters of repeating microearthquakes at Parkfield, CA, determined using the SAFOD Pilot Hole seismic array

    NASA Astrophysics Data System (ADS)

    Imanishi, K.; Ellsworth, W. L.

    2005-12-01

    , where the average difference between the observed and the calculated omega-square model was assumed to represent the path and site effects. The estimated static stress drops are high, mostly in excess of 10 MPa, with some above 50 MPa. It should be noted that the highest value is near the strength of the rock. Apparent stresses range from 0.4 to 20 MPa, at the high end of the range reported by other studies. According to an asperity model [e.g., McGarr, 1981; Johnson & Nadeau, 2002], a small, strong asperity patch is surrounded by a much weaker fault that creeps under the influence of tectonic stress. When the asperity patch ruptures, the surrounding area slips as it is dynamically loaded by the stress release of the asperity patch. If so, our estimated source dimensions seem to correspond to the size of the area surrounding the asperity patch, and the stress drops of the asperities themselves might be much higher than our estimates. Although this is consistent with the hypothesis of Nadeau and Johnson [1998], it is unlikely that the stress drops exceed the strength of the rock. We should re-examine the asperity model based on the results obtained in this study.
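
    For reference, the standard conversions from corner frequency and seismic moment to source radius, static stress drop, and apparent stress used in studies of this kind are easy to script. The Brune-type constant, the Eshelby 7/16 factor, and the example numbers below are conventional or invented values, not the parameters estimated in this study.

```python
import numpy as np

def source_radius(f_c, beta, k=2.34 / (2.0 * np.pi)):
    """Circular source radius from corner frequency (Brune-type model): r = k * beta / f_c."""
    return k * beta / f_c

def static_stress_drop(m0, r):
    """Eshelby circular-crack estimate: delta_sigma = 7 * M0 / (16 * r^3)."""
    return 7.0 * m0 / (16.0 * r**3)

def apparent_stress(m0, e_s, mu=3.0e10):
    """Apparent stress: sigma_a = mu * Es / M0 (mu = shear modulus, Pa)."""
    return mu * e_s / m0

# Illustrative microearthquake: M0 = 1e11 N*m (~Mw 1.3), fc = 30 Hz, beta = 3.5 km/s.
m0, f_c, beta = 1.0e11, 30.0, 3500.0
r = source_radius(f_c, beta)
print(f"radius = {r:.1f} m")
print(f"stress drop = {static_stress_drop(m0, r) / 1e6:.1f} MPa")
print(f"apparent stress (Es = 1e7 J) = {apparent_stress(m0, 1.0e7) / 1e6:.1f} MPa")
```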

  19. Asymptotic freedom in certain SO(N) and SU(N) models

    NASA Astrophysics Data System (ADS)

    Einhorn, Martin B.; Jones, D. R. Timothy

    2017-09-01

    We calculate the β-functions for SO(N) and SU(N) gauge theories coupled to adjoint and fundamental scalar representations, correcting longstanding previous results. We explore the constraints on N resulting from requiring asymptotic freedom for all couplings. When we take into account the actual allowed behavior of the gauge coupling, the minimum value of N in both cases turns out to be larger than realized in earlier treatments. We also show that in the large-N limit, both models have large regions of parameter space corresponding to total asymptotic freedom.

  20. Adaptive surrogate model based multi-objective transfer trajectory optimization between different libration points

    NASA Astrophysics Data System (ADS)

    Peng, Haijun; Wang, Wei

    2016-10-01

    An adaptive surrogate-model-based multi-objective optimization strategy that combines the benefits of invariant manifolds and low-thrust control, aimed at developing a low-computational-cost transfer trajectory between libration orbits around the L1 and L2 libration points in the Sun-Earth system, is proposed in this paper. A new structure for the multi-objective transfer trajectory optimization model is established that divides the transfer trajectory into several segments and specifies whether invariant manifolds or low-thrust control dominates in each segment. To reduce the computational cost of the multi-objective transfer trajectory optimization, an adaptive surrogate model based on a mixed sampling strategy is proposed. Numerical simulations show that the results obtained from the adaptive surrogate-based multi-objective optimization are in agreement with those obtained using direct multi-objective optimization methods, while the computational workload of the adaptive surrogate-based multi-objective optimization is only approximately 10% of that of direct multi-objective optimization. Furthermore, the generating efficiency of the Pareto points of the adaptive surrogate-based multi-objective optimization is approximately 8 times that of the direct multi-objective optimization. Therefore, the proposed adaptive surrogate-based multi-objective optimization provides obvious advantages over direct multi-objective optimization methods.
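
    The adaptive-surrogate loop itself, fitting cheap surrogates to a handful of expensive evaluations, screening many candidates on the surrogates, and then evaluating only the most promising points, can be sketched generically. The Gaussian-process surrogate, the two analytic test objectives, and the simple non-dominated screening below are stand-ins for the trajectory-optimization problem of the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(7)

def expensive_objectives(x):
    """Stand-in for costly trajectory evaluations: returns (fuel-like, time-like) objectives."""
    f1 = np.sum((x - 0.3) ** 2, axis=1)
    f2 = np.sum((x - 0.7) ** 2, axis=1)
    return np.column_stack([f1, f2])

def non_dominated(F):
    """Boolean mask of Pareto-optimal rows (minimization of both objectives)."""
    keep = np.ones(len(F), dtype=bool)
    for i, fi in enumerate(F):
        if keep[i]:
            dominated = np.all(F >= fi, axis=1) & np.any(F > fi, axis=1)
            keep &= ~dominated
    return keep

# Initial design: a few expensive evaluations in a 4-D design space.
X = rng.uniform(0, 1, size=(20, 4))
F = expensive_objectives(X)

for _ in range(5):                                   # adaptive refinement iterations
    surrogates = [GaussianProcessRegressor(normalize_y=True).fit(X, F[:, j]) for j in range(2)]
    cand = rng.uniform(0, 1, size=(2000, 4))         # cheap: screened on the surrogate only
    F_hat = np.column_stack([gp.predict(cand) for gp in surrogates])
    best = cand[non_dominated(F_hat)][:5]            # promising candidates evaluated for real
    X = np.vstack([X, best])
    F = np.vstack([F, expensive_objectives(best)])

print(f"{non_dominated(F).sum()} non-dominated points after {len(X)} expensive evaluations")
```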