Comment on ''Equivalence between the Thirring model and a derivative-coupling model''
DOE Office of Scientific and Technical Information (OSTI.GOV)
Banerjee, R.
1988-06-15
An operator equivalence between the Thirring model and the fermionic sector of a Dirac field interacting via derivative coupling with two scalar fields is established in the path-integral framework. Relations between the coupling parameters of the two models, as found by Gomes and da Silva, can be reproduced.
Lieb-Thirring inequality for a model of particles with point interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frank, Rupert L.; Seiringer, Robert
2012-09-15
We consider a model of quantum-mechanical particles interacting via point interactions of infinite scattering length. In the case of fermions we prove a Lieb-Thirring inequality for the energy, i.e., we show that the energy is bounded from below by a constant times the integral of the particle density to the power (5/3).
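Stripped of constants, the bound proved here has the familiar Lieb-Thirring form; the following schematic statement (the value of the constant C and the normalization conventions are left unspecified and are not taken from the paper) summarizes it:

```latex
% Schematic Lieb-Thirring-type bound for N fermions with point
% interactions of infinite scattering length; C > 0 is a universal
% constant whose value is not reproduced here.
E_N \;\ge\; C \int_{\mathbb{R}^3} \rho_\psi(x)^{5/3}\,\mathrm{d}x ,
\qquad
\rho_\psi(x) = N \int |\psi(x, x_2, \ldots, x_N)|^2 \,
\mathrm{d}x_2 \cdots \mathrm{d}x_N .
```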
Year-long upregulation of connexin43 in rabbit hearts by heavy ion irradiation.
Amino, Mari; Yoshioka, Koichiro; Fujibayashi, Daisuke; Hashida, Tadashi; Furusawa, Yoshiya; Zareba, Wojciech; Ikari, Yuji; Tanaka, Etsuro; Mori, Hidezo; Inokuchi, Sadaki; Kodama, Itsuo; Tanabe, Teruhisa
2010-03-01
A previous study from our laboratory has shown that a single targeted heavy ion irradiation (THIR; 15 Gy) to rabbit hearts increases connexin43 (Cx43) expression for 2 wk in association with an improvement of conduction, a decrease of the spatial inhomogeneity of repolarization, and a reduction of vulnerability to ventricular arrhythmias after myocardial infarction. This study investigated the time- and dose-dependent effects of THIR (5-15 Gy) on Cx43 expression in normal rabbit hearts (n = 45). Five rabbits without THIR were used as controls. A significant upregulation of Cx43 protein and mRNA in the ventricular myocardium was recognized by immunohistochemistry, Western blotting, and real-time PCR from 2 wk up to 1 yr after a single THIR at 15 Gy. THIR ≥10 Gy caused a significant dose-dependent increase of Cx43 protein and mRNA 2 wk after THIR. The anterior, lateral, and posterior free wall of the left ventricle, the interventricular septum, and the right ventricular free wall were affected similarly by THIR in terms of Cx43 upregulation. The radiation-induced increase of immunolabeled Cx43 was observed not only at the intercalated disk region but also at the lateral surface of ventricular myocytes. The increase of immunoreactive Cx43 protein was predominant in the membrane fraction insoluble in Triton X-100, that is, the Cx43 in the sarcolemma. In vivo examinations of the rabbits 1 yr after THIR (15 Gy) revealed no significant changes in ECGs and echocardiograms (left ventricular dimensions, contractility, and diastolic function), indicating no apparent late radiation injury. A single application of THIR causes upregulation and altered cellular distribution of Cx43 in the ventricles lasting for at least 1 yr. This long-lasting remodeling effect on gap junctions may open the pathway to novel therapy against life-threatening ventricular arrhythmias in structural heart disease.
New excitations in the Thirring model
NASA Astrophysics Data System (ADS)
Cortés, J. L.; Gamboa, J.; Schmidt, I.; Zanelli, J.
1998-12-01
The quantization of the massless Thirring model in the light-cone using functional methods is considered. The need to compactify the coordinate x⁻ in the light-cone spacetime implies that the quantum effective action for left-handed fermions contains excitations similar to abelian instantons, produced by composites of left-handed fermions. Right-handed fermions do not have a similar effective action. Thus, quantum mechanically, chiral symmetry must be broken as a result of the topological excitations. The conserved charge associated with the topological states is quantized. Different cases, with only fermionic excitations, only bosonic excitations, or both, can occur depending on the boundary conditions and the value of the coupling.
Glutamatergic and Dopaminergic Neurons in the Mouse Ventral Tegmental Area
Yamaguchi, Tsuyoshi; Qi, Jia; Wang, Hui-Ling; Zhang, Shiliang; Morales, Marisela
2014-01-01
The ventral tegmental area (VTA) comprises dopamine (DA), GABA and glutamate (Glu) neurons. Some rat VTA Glu neurons, expressing vesicular glutamate transporter 2 (VGluT2), co-express tyrosine hydroxylase (TH). While transgenic mice are now being used in attempts to determine the role of VGluT2/TH neurons in reward and neuronal signaling, such neurons have not been characterized in mouse tissue. By cellular detection of VGluT2-mRNA and TH-immunoreactivity (TH-IR), we determined the cellular expression of VGluT2-mRNA within VTA TH-IR neurons in the mouse. We found that some mouse VGluT2 neurons co-expressed TH-IR, but their frequency was lower than in the rat. To determine whether low expression of TH mRNA or TH-IR accounts for this low frequency, we evaluated VTA cellular co-expression of TH transcripts and TH protein. Within the medial aspects of the VTA, some neurons expressed TH mRNA but lacked TH-IR; among them a subset co-expressed VGluT2 mRNA. To determine whether the lack of VTA TH-IR was due to TH trafficking, we tagged VTA TH neurons by cre-inducible expression of mCherry in TH::Cre mice. By dual immunofluorescence, we detected axons containing mCherry, but lacking TH-IR, in the lateral habenula, indicating that the low frequency of VGluT2 mRNA (+)/TH-IR (+) neurons in the mouse is due to a lack of TH protein synthesis rather than to TH-protein trafficking. In conclusion, VGluT2 neurons are present in the rat and mouse VTA, but the two species differ in the populations of VGluT2/TH and TH neurons. We reveal that under normal conditions, the translation of TH protein is suppressed in the mouse mesohabenular TH neurons. PMID:25572002
Benchmark results in the 2D lattice Thirring model with a chemical potential
NASA Astrophysics Data System (ADS)
Ayyar, Venkitesh; Chandrasekharan, Shailesh; Rantaharju, Jarno
2018-03-01
We study the two-dimensional lattice Thirring model in the presence of a fermion chemical potential. Our model is asymptotically free and contains massive fermions that mimic a baryon and light bosons that mimic pions. Hence, it is a useful toy model for QCD, especially since it, too, suffers from a sign problem in the auxiliary field formulation in the presence of a fermion chemical potential. In this work, we formulate the model in both the world-line and fermion-bag representations and show that the sign problem can be completely eliminated with open boundary conditions when the fermions are massless. Hence, we are able to accurately compute a variety of interesting quantities in the model, and these results could provide benchmarks for other methods that are being developed to solve the sign problem in QCD.
Validation of Nimbus-7 temperature-humidity infrared radiometer estimates of cloud type and amount
NASA Technical Reports Server (NTRS)
Stowe, L. L.
1982-01-01
Estimates of clear and low, middle and high cloud amount in fixed geographical regions of approximately (160 km)² are being made routinely from 11.5 micron radiance measurements of the Nimbus-7 Temperature-Humidity Infrared Radiometer (THIR). The purpose of validation is to determine the accuracy of the THIR cloud estimates. Validation requires that a comparison be made between the THIR estimates of cloudiness and the 'true' cloudiness. The validation results reported in this paper use human analysis of concurrent but independent satellite images together with surface meteorological and radiosonde observations to approximate the 'true' cloudiness. Regression and error analyses are used to estimate the systematic and random errors of THIR-derived clear amount.
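As a rough illustration of the regression-based error decomposition described above (all numbers below are synthetic and purely hypothetical; this is not the report's actual data or procedure), systematic and random errors can be separated as follows:

```python
import numpy as np

# Hypothetical sketch of a validation regression: THIR-derived clear amounts
# versus a "true" cloudiness reference. All numbers are synthetic.
rng = np.random.default_rng(0)
true_clear = rng.uniform(0.0, 100.0, 500)     # % clear sky, per (160 km)^2 box
thir_clear = 0.9 * true_clear + 5.0 + rng.normal(0.0, 8.0, true_clear.size)

# Systematic error: slope/offset of the regression line.
slope, intercept = np.polyfit(true_clear, thir_clear, 1)
# Random error: scatter of the residuals about that line.
residuals = thir_clear - (slope * true_clear + intercept)

print(f"systematic: slope = {slope:.2f}, offset = {intercept:.1f}%")
print(f"random (1-sigma) = {residuals.std():.1f}% clear amount")
```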
Hayakawa, Tetsu; Takanaga, Akinori; Tanaka, Koichi; Maeda, Seishi; Seki, Makoto
2004-04-23
Almost all parasympathetic preganglionic motor neurons contain acetylcholine, whereas quite a few motor neurons in the dorsal motor nucleus of the vagus (DMV) contain dopamine. We determined the distribution and ultrastructure of these dopaminergic neurons with double-labeling immunohistochemistry for tyrosine hydroxylase (TH) and the retrograde tracer cholera toxin subunit b (CTb) following its injection into the stomach. A few TH-immunoreactive (TH-ir) neurons were found in the rostral half of the DMV, while a moderate number of these neurons were found in the caudal half. Most of the TH-ir neurons (78.4%) were double-labeled for CTb in the half of the DMV caudal to the area postrema, but only a few TH-ir neurons (5.5%) were double-labeled in the rostral half. About 20% of gastric motor neurons showed TH immunoreactivity in the caudal half of the DMV, but only 0.3% were TH-ir in the rostral half. Of all gastric motor neurons, 8.1% were double-labeled for TH. The ultrastructure of the TH-ir neurons in the caudal DMV was determined with immuno-gold-silver labeling. The TH-ir neurons were small (20.4 × 12.4 μm), round or oval, and contained numerous mitochondria, many free ribosomes, several Golgi apparatuses, a round nucleus and a few Nissl bodies. The average number of axosomatic terminals per section was 4.0. More than half of them contained round synaptic vesicles and made asymmetric synaptic contacts (Gray's type I). Most of the axodendritic terminals contacting TH-ir dendrites were Gray's type I (90%), but a few contained pleomorphic vesicles and made symmetric synaptic contacts (Gray's type II).
Critical flavor number of the Thirring model in three dimensions
NASA Astrophysics Data System (ADS)
Wellegehausen, Björn H.; Schmidt, Daniel; Wipf, Andreas
2017-11-01
The Thirring model is a four-fermion theory with a current-current interaction and U(2N) chiral symmetry. It is closely related to three-dimensional QED and other models used to describe properties of graphene. In addition, it serves as a toy model to study chiral symmetry breaking. In the limit of flavor number N → 1/2 it is equivalent to the Gross-Neveu model, which shows a parity-breaking discrete phase transition. The model has already been studied with different methods, including Dyson-Schwinger equations, functional renormalization group methods, and lattice simulations. Most studies agree that there is a phase transition from a symmetric phase to a spontaneously broken phase for a small number of fermion flavors, but no symmetry breaking for large N. But there is no consensus on the critical flavor number N_cr above which there is no phase transition anymore, nor on further details of the critical behavior. Values of N_cr found in the literature vary between 2 and 7. All earlier lattice studies were performed with staggered fermions, so it is questionable whether the lattice model recovers the internal symmetries of the continuum model in the continuum limit. We present new results from lattice Monte Carlo simulations of the Thirring model with SLAC fermions, which exactly implement all internal symmetries of the continuum model even at finite lattice spacing. If we reformulate the model in an irreducible representation of the Clifford algebra, we find, in contradiction to earlier results, that the behavior for even and odd flavor numbers is very different: for even flavor numbers, chiral and parity symmetry are always unbroken; for odd flavor numbers, parity symmetry is spontaneously broken below the critical flavor number N_cr^ir = 9, while chiral symmetry is still unbroken.
Bogus-Nowakowska, Krystyna; Równiak, Maciej; Hermanowicz-Sobieraj, Beata; Wasilewska, Barbara; Najdzion, Janusz; Robak, Anna
2016-12-01
The present study examines the distribution of tyrosine hydroxylase (TH) immunoreactivity and its morphological relationships with neuropeptide Y (NPY)- and gonadoliberin (GnRH)-immunoreactive (IR) structures in the preoptic area (POA) of the male guinea pig. Tyrosine hydroxylase was expressed in a relatively small population of perikarya, which were mostly observed in the periventricular preoptic nucleus and medial preoptic area. The tyrosine hydroxylase-immunoreactive (TH-IR) fibers were dispersed throughout the whole POA. The highest density of these fibers was observed in the median preoptic nucleus; however, in the periventricular preoptic nucleus and medial preoptic area they were only slightly less numerous. In the lateral preoptic area, the density of TH-IR fibers was moderate. Two morphological types of TH-IR fibers were distinguished: smooth and varicose. Double immunofluorescence staining showed that TH and GnRH overlapped in the guinea pig POA but never coexisted in the same structures. TH-IR fibers often intersected with GnRH-IR structures, and many of them touched GnRH-IR perikarya or dendrites. NPY, which was abundantly present in the POA only in fibers, showed topographical proximity to TH-IR structures. Although TH-IR perikarya and fibers were often touched by NPY-IR fibers, colocalization of TH and NPY in the same structures was very rare; there was only a small population of fibers which contained both NPY and TH. In conclusion, the morphological evidence of contacts between TH- and GnRH-IR nerve structures may be the basis of catecholaminergic control of GnRH release in the preoptic area of the male guinea pig. Moreover, TH-IR neurons were contacted by NPY-IR fibers, and TH and NPY colocalized in some fibers; thus NPY may regulate catecholaminergic neurons in the POA. Copyright © 2016 Elsevier B.V. All rights reserved.
High-order rogue wave solutions of the classical massive Thirring model equations
NASA Astrophysics Data System (ADS)
Guo, Lijuan; Wang, Lihong; Cheng, Yi; He, Jingsong
2017-11-01
The nth-order solutions of the classical massive Thirring model (MTM) equations are derived by using the n-fold Darboux transformation. These solutions are expressed as ratios of two determinants consisting of 2n eigenfunctions under the reduction conditions. Using this method, rogue waves are constructed explicitly up to the third order. Three patterns of the rogue waves, i.e., fundamental, triangular and circular patterns, are discussed. The parameter μ in the MTM model plays the role of the mass in the relativistic field theory, while in optics it is related to the medium periodic constant; it also produces a significant rotation and a remarkable lengthening of the first-order rogue wave. These results provide new opportunities to observe rogue waves by using a combination of electromagnetically induced transparency and Bragg-scattering four-wave mixing, because of the large amplitudes of these waves.
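For reference, the classical MTM equations to which the n-fold Darboux transformation is applied can be written, in one common normalization (the signs and the placement of the mass parameter μ vary between papers and are an assumption here, not quoted from the abstract), as:

```latex
% Classical massive Thirring model, laboratory coordinates, one common
% normalization; u and v are the two spinor components, mu the mass.
\begin{aligned}
  \mathrm{i}\,(u_t + u_x) + \mu\, v + |v|^{2}\, u &= 0 ,\\
  \mathrm{i}\,(v_t - v_x) + \mu\, u + |u|^{2}\, v &= 0 .
\end{aligned}
```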
NASA Astrophysics Data System (ADS)
You, Bei; Bursa, Michal; Życki, Piotr T.
2018-05-01
We develop a Monte Carlo code to compute the Compton-scattered X-ray flux arising from a hot inner flow that undergoes Lense–Thirring precession. The hot flow intercepts seed photons from an outer truncated thin disk. A fraction of the Comptonized photons will illuminate the disk, and the reflected/reprocessed photons will contribute to the observed spectrum. The total spectrum, including disk thermal emission, hot flow Comptonization, and disk reflection, is modeled within the framework of general relativity, taking light bending and gravitational redshift into account. The simulations are performed in the context of the Lense–Thirring precession model for the low-frequency quasi-periodic oscillations, so the inner flow is assumed to precess, leading to periodic modulation of the emitted radiation. In this work, we concentrate on the energy-dependent X-ray variability of the model and, in particular, on the evolution of the variability during the spectral transition from hard to soft state, which is implemented by the decrease of the truncation radius of the outer disk toward the innermost stable circular orbit. In the hard state, where the Comptonizing flow is geometrically thick, the Comptonization is weakly variable, with a fractional variability amplitude of ≤10%. In the soft state, where the Comptonizing flow is cooled down and thus becomes geometrically thin, the Comptonization is highly variable, with a fractional variability that increases with photon energy. The fractional variability of the reflection also increases with energy, and the reflection emission for low spin is counterintuitively more variable than that for high spin.
NASA Astrophysics Data System (ADS)
Iorio, L.
2007-03-01
In this paper we reply to recent claims by Ciufolini and Pavlis about certain aspects of the measurement of the general relativistic Lense-Thirring effect in the gravitational field of the Earth. (I) The proposal by these authors of using the existing satellites endowed with some active mechanism of compensation of the non-gravitational perturbations as an alternative strategy to improve the currently ongoing Lense-Thirring tests is unfeasible because of the impact of the uncancelled even zonal harmonics of the geopotential and of some time-dependent tidal perturbations. (II) It is shown that their criticisms of the possibility of using the existing altimeter Jason-1 and laser-ranged Ajisai satellites are groundless. (III) Ciufolini and Pavlis also claimed that we explicitly proposed to use the mean anomaly of the LAGEOS satellites in order to improve the accuracy of the Lense-Thirring tests. We prove that this claim is false; in regard to the mean anomaly of the LAGEOS satellites, Ciufolini himself used such an orbital element in some previously published tests. About the latest test performed with the LAGEOS satellites, (IV) we discuss the cross-coupling between the inclination errors and the first even zonal harmonic as another possible source of systematic error affecting it with an additional 9% bias. (V) Finally, we stress the weak points of the claims about the origin of the two-node LAGEOS-LAGEOS II combination used in that test.
NASA Astrophysics Data System (ADS)
Hegedűs, Árpád
2018-03-01
In this paper, using the light-cone lattice regularization, we compute the finite volume expectation values of the composite operator Ψ̄Ψ between pure fermion states in the Massive Thirring Model. In the light-cone regularized picture, this expectation value is related to 2-point functions of lattice spin operators located at neighboring sites of the lattice. The operator Ψ̄Ψ is proportional to the trace of the stress-energy tensor. This is why the continuum finite volume expectation values can also be computed from the set of non-linear integral equations (NLIE) governing the finite volume spectrum of the theory. Our results for the expectation values coming from the computation of lattice correlators agree with those of the NLIE computations. Previous conjectures for the LeClair-Mussardo-type series representation of the expectation values are also checked.
Forlano, Paul M.; Kim, Spencer D.; Krzyminska, Zuzanna M.; Sisneros, Joseph A.
2014-01-01
Although the neuroanatomical distribution of catecholaminergic (CA) neurons has been well documented across all vertebrate classes, few studies have examined CA connectivity to physiologically and anatomically identified neural circuitry that controls behavior. The goal of this study was to characterize CA distribution in the brain and inner ear of the plainfin midshipman fish (Porichthys notatus) with particular emphasis on their relationship with anatomically labeled circuitry that both produces and encodes social acoustic signals in this species. Neurobiotin labeling of the main auditory endorgan, the saccule, combined with tyrosine hydroxylase immunofluorescence (TH-ir) revealed a strong CA innervation of both the peripheral and central auditory system. Diencephalic TH-ir neurons in the periventricular posterior tuberculum, known to be dopaminergic, send ascending projections to the ventral telencephalon and prominent descending projections to vocal-acoustic integration sites, notably the hindbrain octavolateralis efferent nucleus, as well as onto the base of hair cells in the saccule via nerve VIII. Neurobiotin backfills of the vocal nerve in combination with TH-ir revealed CA terminals on all components of the vocal pattern generator which appears to largely originate from local TH-ir neurons but may include diencephalic projections as well. This study provides strong evidence for catecholamines as important neuromodulators of both auditory and vocal circuitry and acoustic-driven social behavior in midshipman fish. This first demonstration of TH-ir terminals in the main endorgan of hearing in a non-mammalian vertebrate suggests a conserved and important anatomical and functional role for dopamine in normal audition. PMID:24715479
Radad, Khaled; Scheller, Dieter; Rausch, Wolf-Dieter; Reichmann, Heinz; Gille, Gabrielle
2014-01-01
Dopamine agonists are suggested to be more efficacious in treating Parkinson's disease (PD) because they have neuroprotective properties in addition to their receptor-related actions. The present study was designed to investigate the neuroprotective effects of rotigotine, a D3/D2/D1 dopamine receptor agonist, against the two powerful complex I inhibitors, 1-methyl-4-phenylpyridinium (MPP⁺) and rotenone, in primary mesencephalic cell culture relevant to PD. Primary mesencephalic cell cultures were prepared from embryonic mouse mesencephala at gestation day 14. Three sets of cultures were treated with rotigotine alone, rotigotine and MPP⁺, and rotigotine and rotenone to investigate the effect of rotigotine on the survival of dopaminergic neurons against age-, MPP⁺- and rotenone-induced cell death. At the end of each treatment, cultures were fixed and stained immunohistochemically against tyrosine hydroxylase (TH). The effect of rotigotine against rotenone-induced reactive oxygen species (ROS) production was measured using the CH-H2DCFDA fluorescence dye. Rotigotine alone did not influence the survival of tyrosine hydroxylase-immunoreactive (THir) neurons except at 10 µM, at which it significantly decreased the number of THir neurons by 40% compared to untreated controls. Treatment of cultures with 0.01 µM rotigotine rescued 10% of THir neurons against MPP⁺-induced cell death. Rotigotine at 0.01 µM was also found to significantly rescue 20% of THir neurons in rotenone-treated cultures. Using the CH-H2DCFDA fluorescence dye, it was found that rotigotine significantly attenuated ROS production compared to rotenone-treated cultures. Rotigotine provides minor protection against MPP⁺ and rescues a significant number of THir neurons against rotenone in primary mesencephalic cell cultures relevant to PD.
Measuring the Lense-Thirring precession using a second Lageos satellite
NASA Technical Reports Server (NTRS)
Tapley, B. D.; Ciufolini, I.
1989-01-01
A complete numerical simulation and error analysis were performed for the proposed experiment, with the objective of establishing an accurate assessment of the feasibility and the potential accuracy of the measurement of the Lense-Thirring precession. Consideration was given to identifying the error sources which limit the accuracy of the experiment and to proposing procedures for eliminating or reducing the effect of these errors. Analytic investigations were conducted to study the effects of major error sources, with the objective of providing error bounds on the experiment. The analysis of realistic simulated data is used to demonstrate that satellite laser ranging of two Lageos satellites, orbiting with supplementary inclinations and tracked for a period of 3 years or more, can be used to verify the Lense-Thirring precession. A comprehensive covariance analysis for the solution was also developed.
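For scale, the signal targeted by such experiments is the secular gravitomagnetic drift of the node, Ω̇ = 2GS/[c²a³(1−e²)^(3/2)]; a minimal numerical sketch with approximate LAGEOS elements (the values below are textbook approximations, not taken from this abstract):

```python
import math

# Leading-order Lense-Thirring precession of a satellite's ascending node:
#   dOmega/dt = 2*G*S / (c^2 * a^3 * (1 - e^2)^(3/2))
# Constants and orbital elements are approximate LAGEOS values.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s
S = 5.86e33     # Earth's spin angular momentum, kg m^2/s (approximate)
a = 1.227e7     # LAGEOS semimajor axis, m (~12270 km)
e = 0.0045      # LAGEOS eccentricity

node_rate = 2 * G * S / (c**2 * a**3 * (1 - e**2) ** 1.5)     # rad/s

# Convert rad/s -> milliarcseconds per year.
mas_per_yr = node_rate * 3.156e7 * math.degrees(1.0) * 3.6e6
print(f"Lense-Thirring node drift: {mas_per_yr:.0f} mas/yr")  # ~31 mas/yr
```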
Broken Scale Invariance and Anomalous Dimensions
DOE R&D Accomplishments Database
Wilson, K. G.
1970-05-01
Mack and Kastrup have proposed that broken scale invariance is a symmetry of strong interactions. There is evidence from the Thirring model and perturbation theory that the dimensions of fields defined by scale transformations will be changed by the interaction from their canonical values. We review these ideas and their consequences for strong interactions.
Li, Rui; Peng, Ning; Du, Fang; Li, Xu-ping; Le, Wei-dong
2006-04-01
To observe whether the dopaminergic neuroprotective effect of (-)-epigallocatechin gallate (EGCG) is associated with its inhibition of microglial cell activation in vivo. The effects of EGCG at different doses on dopaminergic neuronal survival were tested in a 1-methyl-4-phenylpyridinium (MPP+)-induced dopaminergic neuronal injury model in primary mesencephalic cell cultures. With an unbiased stereological method, tyrosine hydroxylase-immunoreactive (TH-ir) cells were counted in the A8, A9 and A10 regions of the substantia nigra (SN) in 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP)-treated C57BL/6 mice. The effect of EGCG on microglial activation in the SN was also investigated. Pretreatment with EGCG (1 to 100 micromol/L) significantly attenuated MPP+-induced TH-ir cell loss by 22.2% to 80.5% in the mesencephalic cell cultures. In MPTP-treated C57BL/6 mice, EGCG at a low concentration (1 mg/kg) provided significant protection against MPTP-induced TH-ir cell loss, by 50.9% in the whole nigral area and by 71.7% in the A9 region. EGCG at 5 mg/kg showed a more prominent protective effect than at 1 or 10 mg/kg. EGCG pretreatment significantly inhibited microglial activation and CD11b expression induced by MPTP. EGCG exerts potent dopaminergic neuroprotective activity by means of microglial inhibition, which sheds light on the potential use of EGCG in the treatment of Parkinson's disease.
NASA Astrophysics Data System (ADS)
Iorio, Lorenzo
2018-05-01
We analytically calculate the time series for the perturbations Δρ(t), Δρ̇(t) induced by a general disturbing acceleration A on the mutual range ρ and range-rate ρ̇ of two test particles A, B orbiting the same spinning body. We apply it to the general relativistic Lense-Thirring effect, due to the primary's spin S, and to the classical perturbation arising from its quadrupole mass moment J₂, for arbitrary orbital geometries and orientation of the source's symmetry axis Ŝ. The Earth-Mercury range and range-rate are nominally affected by the Sun's gravitomagnetic field at the 10 m and 10⁻³ cm s⁻¹ level, respectively, during the extended phase (2026-2028) of the forthcoming BepiColombo mission to Mercury, whose expected tracking accuracy is of the order of ≃0.1 m and 2 × 10⁻⁴ cm s⁻¹. The competing signatures due to the solar quadrupole J₂⊙, if modelled at the σ_{J₂⊙} ≃ 10⁻⁹ level of the latest planetary ephemerides INPOP17a, are nearly 10 times smaller than the relativistic gravitomagnetic effects. The position and velocity vectors r, v of Mercury and Earth are changed by the solar Lense-Thirring effect by about 10 m, 1.5 m and 10⁻³ cm s⁻¹, 10⁻⁵ cm s⁻¹, respectively, over 2 yr; neglecting such shifts may have an impact on long-term integrations of the inner Solar system dynamics over ∼Gyr time-scales.
Polynomial approximation of the Lense-Thirring rigid precession frequency
NASA Astrophysics Data System (ADS)
De Falco, Vittorio; Motta, Sara
2018-05-01
We propose a polynomial approximation of the global Lense-Thirring rigid precession frequency to study low-frequency quasi-periodic oscillations around spinning black holes. This high-performing approximation allows one to determine the expected frequencies of a precessing thick accretion disc with fixed inner radius and variable outer radius around a black hole with given mass and spin. We discuss the accuracy and the applicability regions of our polynomial approximation, showing that the computational times are reduced by a factor of ≈70, down to the range of minutes.
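The paper's actual polynomial is not reproduced in the abstract; the sketch below only illustrates the general strategy of replacing a per-call numerical average with a one-off polynomial fit, using a Newtonian-order point-particle precession law and a hypothetical disc weighting profile:

```python
import numpy as np

# Illustrative strategy only: approximate the rigid precession frequency of a
# thick disc (a weighted average of local precession rates, here taken at
# Newtonian order, Omega_LT ~ r^-3, with an assumed Sigma ~ r^-0.5 profile)
# by a polynomial in log10(r_out), so repeated evaluations become cheap.

def rigid_precession(r_out, r_in=5.0, n=2000):
    r = np.linspace(r_in, r_out, n)
    omega_lt = r ** -3.0            # local precession rate (arbitrary units)
    weight = r ** -0.5 * r ** 2     # assumed Sigma(r) * r^2 weighting
    return np.trapz(omega_lt * weight, r) / np.trapz(weight, r)

# Fit the polynomial once on a grid of outer radii...
r_grid = np.linspace(6.0, 100.0, 200)
exact = np.array([rigid_precession(r) for r in r_grid])
coeffs = np.polyfit(np.log10(r_grid), np.log10(exact), deg=6)

# ...then evaluate it instead of redoing the integral on every call.
approx = 10 ** np.polyval(coeffs, np.log10(r_grid))
print("max relative error:", np.max(np.abs(approx / exact - 1.0)))
```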
Nimbus earth resources observations
NASA Technical Reports Server (NTRS)
Sabatini, R. R.; Rabchevsky, G. A.; Sissala, J. E.
1971-01-01
The potential for utilizing data gathered by Nimbus satellites to study the earth surface and its physical properties is illustrated. The Nimbus data applicable to investigations of the earth and its resources, and to the problems of resolution and cloud cover are described. Geological, hydrological, and oceanographic applications are discussed. Applications of the data to other fields, such as cartography, agriculture, forestry, and urban analysis are presented. Relevant information is also given on the Nimbus orbit and experiments; surface and atmospheric effects on HRIR and THIR radiation measurements; and noise problems in the AVCS, IDCS, HRIR, and THIR data.
POLARIZATION MODULATION FROM LENSE–THIRRING PRECESSION IN X-RAY BINARIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ingram, Adam; Maccarone, Thomas J.; Poutanen, Juri
2015-07-01
It has long been recognized that quasi-periodic oscillations (QPOs) in the X-ray light curves of accreting black hole and neutron star binaries have the potential to be powerful diagnostics of strong field gravity. However, this potential cannot be fulfilled without a working theoretical model, which has remained elusive. Perhaps the most promising model associates the QPO with Lense–Thirring precession of the inner accretion flow, with the changes in viewing angle and Doppler boosting modulating the flux over the course of a precession cycle. Here, we consider the polarization signature of a precessing inner accretion flow. We use simple assumptions about the Comptonization process generating the emitted spectrum and take all relativistic effects into account, parallel transporting polarization vectors toward the observer along null geodesics in the Kerr metric. We find that both the degree of linear polarization and the polarization angle should be modulated on the QPO frequency. We calculate the predicted absolute rms variability amplitude of the polarization degree and angle for a specific model geometry. We find that it should be possible to detect these modulations for a reasonable fraction of parameter space with a future X-ray polarimeter such as NASA's Polarization Spectroscopic Telescope Array (the satellite incarnation of the balloon experiment X-Calibur).
Sullivan, Alana W; Beach, Elsworth C; Stetzik, Lucas A; Perry, Amy; D'Addezio, Alyssa S; Cushing, Bruce S; Patisaul, Heather B
2014-10-01
Impacts on brain and behavior have been reported in laboratory rodents after developmental exposure to bisphenol A (BPA), raising concerns about possible human effects. Epidemiological data suggest links between prenatal BPA exposure and altered affective behaviors in children, but potential mechanisms are unclear. Disruption of mesolimbic oxytocin (OT)/vasopressin (AVP) pathways has been proposed, but supporting evidence is minimal. To address these data gaps, we employed a novel animal model for neuroendocrine toxicology: the prairie vole (Microtus ochrogaster), which is more prosocial than lab rats or mice. Male and female prairie vole pups were orally exposed to 5-μg/kg body weight (bw)/d, 50-μg/kg bw/d, or 50-mg/kg bw/d BPA or vehicle over postnatal days 8-14. Subjects were tested as juveniles in open field and novel social tests and for partner preference as adults. Brains were then collected and assessed for immunoreactive (ir) tyrosine hydroxylase (TH) (a dopamine marker) neurons in the principal bed nucleus of the stria terminalis (pBNST) and for TH-ir, OT-ir, and AVP-ir neurons in the paraventricular nucleus of the hypothalamus (PVN). Female open field activity indicated hyperactivity at the lowest dose and anxiety at the highest dose. Effects on social interactions were also observed, and partner preference formation was mildly inhibited at all dose levels. BPA masculinized pBNST TH-ir neuron numbers in females. Additionally, 50-mg/kg bw BPA-exposed females had more AVP-ir neurons in the anterior PVN and fewer OT-ir neurons in the posterior PVN. At the 2 lowest doses, BPA eliminated sex differences in PVN TH-ir neuron numbers and reversed this sex difference at the highest dose. Minimal behavioral effects were observed in BPA-exposed males. These data support the hypothesis that BPA alters affective behaviors, potentially via disruption of OT/AVP pathways.
Park, Eun-Bee; Jeon, Joo-Yeong; Jeon, Chang-Jin
2018-05-09
A growing number of studies have revealed the functional neuroarchitecture of the microbat retina and suggested that microbats can see using their eyes. To better understand the organization of the microbat retina, quantitative analysis of protein kinase C alpha (PKCα)- and tyrosine hydroxylase (TH)-immunoreactive (IR) cells was conducted on the greater horseshoe bat (Rhinolophus ferrumequinum) retina. As a result, PKCα immunoreactivity was observed in rod bipolar cells, consistent with previous studies on other mammalian retinas. PKCα-IR cell distribution in the inner nuclear layer showed regional differences in density, with the highest density found in the nasal retina. The average density of PKCα-IR cells was 10,487±441 cells/mm2 (mean ± S.D.; n=4), with a total of 43,077±1,843 cells/retina. TH-IR cells in the Rhinolophus ferrumequinum retina could be classified into four types based on soma location and ramification in the inner plexiform layer: conventional amacrine, displaced amacrine, interplexiform, and intercalated cells. The majority of TH-IR cells were conventional amacrine cells. TH-IR cells were nonrandomly distributed at low density over the retina. The average density was 29.7±3.1 cells/mm2 (mean ± S.D.; n=3), with a total of 124.0±11.3 cells/retina. TH-IR processes showed varicosities and formed ring-like structures encircling AII amacrine cells. Our study provides the foundation for understanding the neurochemical architecture of the microbat retina and supports the notion that the eyes do play a role in the visual system of microbats.
Goodson, James L; Kabelik, David; Kelly, Aubrey M; Rinaldi, Jacob; Klatt, James D
2009-05-26
Mesolimbic dopamine (DA) circuits mediate a wide range of goal-oriented behavioral processes, and DA strongly influences appetitive and consummatory aspects of male sexual behavior. In both birds and mammals, mesolimbic projections arise primarily from the ventral tegmental area (VTA), with a smaller contribution from the midbrain central gray (CG). Despite the well known importance of the VTA cell group for incentive motivation functions, relationships of VTA subpopulations to specific aspects of social phenotype remain wholly undescribed. We now show that in male zebra finches (Estrildidae: Taeniopygia guttata), Fos activity within a subpopulation of tyrosine hydroxylase-immunoreactive (TH-ir; presumably dopaminergic) neurons in the caudal VTA is significantly correlated with courtship singing and coupled to gonadal state. In addition, the number of TH-ir neurons in this caudal subpopulation dichotomously differentiates courting from non-courting male phenotypes, and evolves in relation to sociality (flocking vs. territorial) across several related finch species. Combined, these findings for the VTA suggest that divergent social phenotypes may arise due to the differential assignment of "incentive value" to conspecific stimuli. TH-ir neurons of the CG (a population of unknown function in mammals) exhibit properties that are even more selectively and tightly coupled to the expression of courtship phenotypes (and appetitive courtship singing), both in terms of TH-ir cell number, which correlates significantly with constitutive levels of courtship motivation, and in terms of TH-Fos colocalization, which increases in direct proportion to the phasic expression of song. We propose that these neurons may be core components of social communication circuits across diverse vertebrate taxa.
A confirmation of the general relativistic prediction of the Lense-Thirring effect.
Ciufolini, I; Pavlis, E C
2004-10-21
An important early prediction of Einstein's general relativity was the advance of the perihelion of Mercury's orbit, whose measurement provided one of the classical tests of Einstein's theory. The advance of the orbital point-of-closest-approach also applies to a binary pulsar system and to an Earth-orbiting satellite. General relativity also predicts that the rotation of a body like Earth will drag the local inertial frames of reference around it, which will affect the orbit of a satellite. This Lense-Thirring effect has hitherto not been detected with high accuracy, but its detection with an error of about 1 per cent is the main goal of Gravity Probe B--an ongoing space mission using orbiting gyroscopes. Here we report a measurement of the Lense-Thirring effect on two Earth satellites: it is 99 ± 5 per cent of the value predicted by general relativity; the uncertainty of this measurement includes all known random and systematic errors, but we allow for a total ±10 per cent uncertainty to include underestimated and unknown sources of error.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Homan, Jeroen, E-mail: jeroen@space.mit.edu
2012-12-01
Relativistic Lense-Thirring precession of a tilted inner accretion disk around a compact object has been proposed as a mechanism for low-frequency (∼0.01-70 Hz) quasi-periodic oscillations (QPOs) in the light curves of X-ray binaries. A substantial misalignment angle (∼15°-20°) between the inner-disk rotation axis and the compact-object spin axis is required for the effects of this precession to produce observable modulations in the X-ray light curve. A consequence of this misalignment is that in high-inclination X-ray binaries the precessing inner disk will quasi-periodically intercept our line of sight to the compact object. In the case of neutron-star systems, this should have a significant observational effect, since a large fraction of the accretion energy is released on or near the neutron-star surface. In this Letter, I suggest that this specific effect of Lense-Thirring precession may already have been observed as ∼1 Hz QPOs in several dipping/eclipsing neutron-star X-ray binaries.
NASA Astrophysics Data System (ADS)
di Virgilio, Angela D. V.
Gyroscopes IN General Relativity (GINGER) is a proposal for an Earth-based experiment to measure the Lense-Thirring effect. GINGER uses an array of ring lasers, which are the most sensitive inertial sensors for measuring the rotation rate of the Earth. GINGER is based on a three-dimensional array of large-size ring lasers, able to measure the de Sitter and Lense-Thirring effects. The instrument will be located in the INFN Gran Sasso underground laboratory, in Italy. We describe preliminary developments and measurements. Earlier prototypes based in Italy, GP2, GINGERino, and G-LAS, are also described and their preliminary results reported.
Oliveira, Francisco Gilberto; Nascimento-Júnior, Expedito Silva do; Cavalcante, Judney Cley; Guzen, Fausto Pierdoná; Cavalcante, Jeferson de Souza; Soares, Joacil Germano; Cavalcanti, José Rodolfo Lopes de Paiva; Freitas, Leandro Moura de; Costa, Miriam Stela Maris de Oliveira; Andrade-da-Costa, Belmira Lara da Silveira
2018-07-01
The rock cavy (Kerodon rupestris) is a crepuscular Hystricomorpha rodent that has been used in comparative analyses of retinal targets, but its retinal organization remains to be investigated. In order to better characterize its visual system, the present study analyzed neurochemical features related to the topographic organization of catecholaminergic cells and ganglion cells, as well as the distribution of calcium-binding proteins in the outer and inner retina. Retinal sections and/or wholemounts were processed using tyrosine hydroxylase (TH), GABA, calbindin, parvalbumin and calretinin immunohistochemistry or Nissl staining. Two types of TH-immunoreactive (TH-IR) cells were found, which differ in soma size, dendritic arborization, intensity of TH immunoreactivity and stratification pattern in the inner plexiform layer. The topographic distribution of all TH-IR cells defines a visual streak along the horizontal meridian in the superior retina. The ganglion cells are also distributed in a visual streak, and the visual acuity estimated from their peak density is 4.13 cycles/degree. A subset of TH-IR cells express GABA or calbindin. Calretinin is abundant in most retinal layers and coexists with calbindin in horizontal cells. Parvalbumin is less abundant and is expressed by presumed amacrine cells in the INL and some ganglion cells in the GCL. The topographic distribution of TH-IR cells and ganglion cells in the rock cavy retina indicates a suitable adaptation for using a broad extension of its inferior visual field in aspects that involve resolution, adjustment to ambient light intensity and movement detection without specialized eye movements. Copyright © 2017 Elsevier B.V. All rights reserved.
Quantum Field Theory in Two Dimensions: Light-front Versus Space-like Solutions
NASA Astrophysics Data System (ADS)
Martinovič, Ľubomír
2017-07-01
A few non-perturbative topics of quantum field theory in D = 1+1 are studied in both the conventional space-like (SL) and light-front (LF) versions. First, we give a concise review of the recently proposed quantization of the two-dimensional massless LF fields. The LF version of bosonization follows in a simple and natural way, including the bosonized form of the Thirring model. As a further application, we demonstrate the closeness of the 2D massless LF quantum fields to conformal field theory (CFT). We calculate several correlation functions, including those between the components of the LF energy-momentum tensor, and derive the LF version of the Virasoro algebra. Using the Euclidean time variable, we can immediately transform calculated quantities to the (anti)holomorphic form. The results found are in agreement with those from CFT. Finally, we show that the proposed framework provides us with the elements needed for an independent LF study of exactly solvable models. We compute the non-perturbative correlation functions from the exact operator solution of the LF Thirring model and compare them to the analogous results in the SL theory. While the vacuum effects are automatically taken into account in the LF case, the non-trivial vacuum structure has to be incorporated by an explicit diagonalization of the SL Hamiltonians to obtain an equivalently complete solution.
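For context, the standard space-like bosonization dictionary against which the LF construction can be compared is Coleman's equivalence of the massive Thirring model with the sine-Gordon model, whose couplings are related by:

```latex
% Coleman's relation between the sine-Gordon coupling beta and the
% Thirring current-current coupling g (standard space-like result);
% the free-fermion point g = 0 corresponds to beta^2 = 4*pi.
\frac{\beta^{2}}{4\pi} \;=\; \frac{1}{1 + g/\pi} .
```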
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veledina, Alexandra; Poutanen, Juri; Ingram, Adam, E-mail: alexandra.veledina@oulu.fi, E-mail: juri.poutanen@oulu.fi
2013-12-01
Recent observations of accreting black holes reveal the presence of quasi-periodic oscillations (QPO) in the optical power density spectra. The corresponding oscillation periods match those found in X-rays, implying a common origin. Among the numerous suggested X-ray QPO mechanisms, some may also work in the optical. However, their relevance to the broadband—optical through X-ray—spectral properties has not been investigated. For the first time, we discuss the QPO mechanism in the context of a self-consistent spectral model. We propose that the QPOs are produced by Lense-Thirring precession of the hot accretion flow, whose outer parts radiate in optical wavelengths. At the same time, its innermost parts are emitting X-rays, which explains the observed connection of QPO periods. We predict that the X-ray and optical QPOs should be either in phase or shifted by half a period, depending on the observer position. We investigate the QPO harmonic content and find that the variability amplitudes at the fundamental frequency are larger in the optical, while the X-rays are expected to have strong harmonics. We then discuss the QPO spectral dependence and compare the expectations to the existing data.
NASA Astrophysics Data System (ADS)
Stedman, G. E.; Schreiber, K. U.; Bilger, H. R.
2003-07-01
The possibility of detecting the Lense-Thirring field generated by the rotating earth (also rotating laboratory masses) is reassessed in view of recent dramatic advances in the technology of ring laser gyroscopes. This possibility is very much less remote than it was a decade ago. The effect may contribute significantly to the Sagnac frequency of planned instruments. Its discrimination and detection will require an improved metrology, linking the ring to the celestial reference frame, and a fuller study of dispersion- and backscatter-induced frequency pulling. Both these requirements have been the subject of recent major progress, and our goal looks feasible.
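To see the scale of the challenge, the Sagnac beat frequency of a ring laser is δf = 4A Ω·n̂/(λP), while the general relativistic (de Sitter plus Lense-Thirring) corrections to the sensed rotation rate are of order one part in 10⁹; a rough sketch (the ring dimensions and latitude below are illustrative, not those of any specific instrument):

```python
import math

# Sagnac beat frequency of a ring laser: df = 4*A*Omega_eff / (lambda*P),
# with Omega_eff the rotation rate projected onto the ring normal.
# Illustrative numbers: a 4 m x 4 m square HeNe ring at 45 deg latitude.
A = 16.0                # enclosed area, m^2
P = 16.0                # perimeter, m
lam = 633e-9            # HeNe wavelength, m
omega_earth = 7.292e-5  # Earth rotation rate, rad/s
latitude = math.radians(45.0)

f_sagnac = 4 * A * omega_earth * math.sin(latitude) / (lam * P)
print(f"Sagnac frequency: {f_sagnac:.1f} Hz")        # a few hundred Hz

# The GR terms enter at roughly 1e-9 of the Earth rate, so the beat
# frequency must be resolved at the sub-microhertz level.
print(f"GR signal: ~{f_sagnac * 1e-9 * 1e9:.0f} nHz")
```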
Coupled tapering/uptapering of Thirring type soliton pair in nonlinear media
NASA Astrophysics Data System (ADS)
Prasad, Shraddha; Dutta, Manoj Kumar; Sarkar, Ram Krishna
2018-03-01
The paper investigates coupled tapering/uptapering of a Thirring-type soliton pair, employing the Beam Propagation Method. It is seen that the pair uptapers in the presence of losses and tapers in the presence of gain. When the first beam has gain and the second one has losses in the nonlinear medium, the second beam induces uptapering in the first beam, while the first beam induces tapering in the second beam. When the medium provides gain/losses to only one of the two beams, that beam undergoes tapering/uptapering and also induces tapering/uptapering in the other, lossless beam; however, the magnitudes of tapering/uptapering are different.
NASA Astrophysics Data System (ADS)
Iorio, Lorenzo
2009-12-01
We deal with the attempts to measure the Lense-Thirring effect with the Satellite Laser Ranging (SLR) technique applied to the existing LAGEOS and LAGEOS II terrestrial satellites and to the recently approved LARES spacecraft. According to general relativity, a central spinning body of mass M and angular momentum S like the Earth generates a gravitomagnetic field which induces small secular precessions of the orbit of a test particle geodesically moving around it. Extracting this signature from the data is a demanding task because of many classical orbital perturbations having the same pattern as the gravitomagnetic one, like those due to the centrifugal oblateness of the Earth, which represents a major source of systematic bias. The first issue addressed here is: are the so far published evaluations of the systematic uncertainty induced by the bad knowledge of the even zonal harmonic coefficients J_ℓ of the multipolar expansion of the Earth's geopotential reliable and realistic? Our answer is negative. Indeed, if the differences ΔJ_ℓ among the even zonals estimated in different Earth gravity field global solutions from the dedicated GRACE mission are assumed for the uncertainties δJ_ℓ, instead of using their covariance sigmas σ_{J_ℓ}, it turns out that the systematic uncertainty δμ in the Lense-Thirring test with the nodes Ω of LAGEOS and LAGEOS II may be up to 3 to 4 times larger than in the evaluations so far published (5-10%), which are based on the use of the sigmas of one model at a time separately. The second issue consists of the possibility of using a different approach in extracting the relativistic signature of interest from the LAGEOS-type data. The third issue is the possibility of reaching a realistic total accuracy of 1% with LAGEOS, LAGEOS II and LARES, which should be launched in November 2009 with a VEGA rocket. While LAGEOS and LAGEOS II fly at altitudes of about 6000 km, LARES will likely be placed at an altitude of 1450 km. Thus, it will be sensitive to many more even zonals than LAGEOS and LAGEOS II. Their corrupting impact has been evaluated with the standard Kaula approach up to degree ℓ = 60 by using ΔJ_ℓ and σ_{J_ℓ}; it turns out that it may be as large as some tens of percent. The different orbit of LARES may also have some consequences on the non-gravitational orbital perturbations affecting it, which might further degrade the obtainable accuracy in the Lense-Thirring test.
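The dominance of the even zonals in this error budget follows from the classical node rate Ω̇_J2 = −(3/2) n J₂ (R/p)² cos i, which for LAGEOS-type orbits exceeds the gravitomagnetic rate by roughly seven orders of magnitude; a back-of-the-envelope comparison with approximate element values (not taken from the abstract):

```python
import math

# Compare LAGEOS's classical J2-driven node precession with its
# Lense-Thirring rate; all element values are approximate.
GM = 3.986e14   # Earth's gravitational parameter, m^3 s^-2
R = 6.378e6     # Earth equatorial radius, m
J2 = 1.0826e-3  # first even zonal harmonic
a, e, inc = 1.227e7, 0.0045, math.radians(109.84)   # LAGEOS elements

n = math.sqrt(GM / a**3)           # mean motion, rad/s
p = a * (1 - e**2)                 # semilatus rectum, m
node_J2 = -1.5 * n * J2 * (R / p) ** 2 * math.cos(inc)       # rad/s

G, c, S = 6.674e-11, 2.998e8, 5.86e33
node_LT = 2 * G * S / (c**2 * a**3 * (1 - e**2) ** 1.5)      # rad/s

print(f"J2 node rate: {math.degrees(node_J2) * 86400:+.3f} deg/day")
print(f"LT/J2 ratio : {node_LT / node_J2:+.1e}")
# A fractional J2 mismodelling of ~1e-7 already rivals the whole
# relativistic signal, hence the combination of two nodes to cancel J2.
```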
1991-07-23
[OCR largely illegible] ... Final Report for Period 1 July 1987 - 30 April 1991 ... OF RESEARCH PAPERS AND ABSTRACTS, Appendix K. I. OVERALL OBJECTIVE AND STATEMENT OF WORK: The overall objective of the proposed project is to investigate ... determine what adjustments in administered dose are necessary to achieve equal brain levels of test compounds in each species. An inhalation and oral
Robinson, Bonnie; Dumas, Melanie; Gu, Qiang; Kanungo, Jyotshna
2018-06-08
N-acetylcysteine, a precursor molecule of glutathione, is an antioxidant. Ketamine, a pediatric anesthetic, has been implicated in cardiotoxicity and neurotoxicity, including modulation of monoaminergic systems, in mammals and zebrafish. Here, we show that N-acetylcysteine prevents ketamine's adverse effects on development and on monoaminergic neurons in zebrafish embryos. The effects of ketamine and N-acetylcysteine, alone or in combination, were measured on the heart rate, body length, brain serotonergic neurons and tyrosine hydroxylase-immunoreactive (TH-IR) neurons. In the absence of N-acetylcysteine, a concentration of ketamine that produces an internal embryo exposure level comparable to human anesthetic plasma concentrations significantly reduced heart rate and body length, and those effects were prevented by N-acetylcysteine co-treatment. Ketamine also reduced the areas occupied by serotonergic neurons in the brain, whereas N-acetylcysteine co-exposure counteracted this effect. TH-IR neurons in the embryo brain and TH-IR cells in the trunk were significantly reduced with ketamine treatment, but not in the presence of N-acetylcysteine. In our continued search for compounds that can prevent ketamine toxicity, this study, using specific endpoints of developmental toxicity, cardiotoxicity and neurotoxicity, demonstrates the protective effects of N-acetylcysteine against ketamine's adverse effects. This is the first study to show the protective effects of N-acetylcysteine on ketamine-induced developmental defects of monoaminergic neurons as observed in a whole organism. Published by Elsevier B.V.
NASA Technical Reports Server (NTRS)
Arnold, J. E.; Scoggins, J. R.; Fuelberg, H. E.
1976-01-01
During the period of May 11 and 12, 1974, NASA conducted its second Atmospheric Variability Experiment (AVE II) over the eastern United States. In this time interval, two Nimbus 5 orbits crossed the AVE II area, providing a series of ITPR soundings as well as THIR data. Horizontal temperature mapping of the AVE II cloud field is examined using two grid print map scales. Implied cloud top heights are compared with maximum radar-echo top reports. In addition, shelter temperatures in areas of clear sky are compared with the surface temperatures as determined from 11.5 micrometer radiometer data of the THIR experiment. The ITPR sounding accuracy is evaluated using interpolated radiosonde temperatures at times nearly coincident with the ITPR soundings. It was found that mean differences between the two data sets were as small as 1.3 C near 500 mb and as large as 2.9 C near the tropopause. The differences between ITPR and radiosonde temperatures at constant pressure levels were sufficient to induce significant differences in the horizontal temperature gradient. Cross sections of geostrophic wind along the orbital tracks were developed using a thermal wind buildup based on the ITPR temperature data and the radiosonde temperature data. Differences between the radiosonde and ITPR geostrophic winds could be explained on the basis of differences in the ITPR and radiosonde temperature gradients.
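The geostrophic cross sections mentioned here rest on the thermal wind relation, which in isobaric coordinates takes the standard form (quoted from textbook meteorology, not from the report):

```latex
% Thermal wind relation in pressure coordinates: vertical shear of the
% geostrophic wind V_g is set by the horizontal temperature gradient.
\frac{\partial \mathbf{V}_g}{\partial \ln p}
  \;=\; -\,\frac{R_d}{f}\, \hat{\mathbf{k}} \times \nabla_p T ,
```

where R_d is the gas constant for dry air, f the Coriolis parameter, and ∇_p T the temperature gradient on an isobaric surface.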
Gombash, Sara E; Lipton, Jack W; Collier, Timothy J; Madhavan, Lalitha; Steece-Collier, Kathy; Cole-Strauss, Allyson; Terpstra, Brian T; Spieles-Engemann, Anne L; Daley, Brian F; Wohlgenant, Susan L; Thompson, Valerie B; Manfredsson, Fredric P; Mandel, Ronald J; Sortwell, Caryl E
2012-01-01
Neurotrophic factors are integrally involved in the development of the nigrostriatal system and, in combination with gene therapy, possess great therapeutic potential for Parkinson's disease (PD). Pleiotrophin (PTN) is involved in the development, maintenance, and repair of the nigrostriatal dopamine (DA) system. The present study examined the ability of striatal PTN overexpression, delivered via pseudotyped recombinant adeno-associated virus type 2/1 (rAAV2/1), to provide neuroprotection and functional restoration from 6-hydroxydopamine (6-OHDA). Striatal PTN overexpression led to significant neuroprotection of tyrosine hydroxylase-immunoreactive (THir) neurons in the substantia nigra pars compacta (SNpc) and of THir neurite density in the striatum, with long-term PTN overexpression producing recovery from 6-OHDA-induced deficits in contralateral forelimb use. Transduced striatal PTN levels were increased threefold compared to adult striatal PTN expression and approximated peak endogenous developmental levels (P1). The rAAV2/1 vector exclusively transduced neurons within the striatum and SNpc, with approximately half the total striatal volume routinely transduced using our injection parameters. Our results indicate that striatal PTN overexpression can provide neuroprotection for the 6-OHDA-lesioned nigrostriatal system based upon morphological and functional measures and that striatal PTN levels similar in magnitude to those expressed in the striatum during development are sufficient to provide neuroprotection from Parkinsonian insult. PMID:22008908
Krabbe, Christina; Bak, Sara Thornby; Jensen, Pia; von Linstow, Christian; Martínez Serrano, Alberto; Hansen, Claus; Meyer, Morten
2014-01-01
Neural stem cells (NSCs) constitute a promising source of cells for transplantation in Parkinson's disease (PD), but protocols for controlled dopaminergic differentiation are not yet available. Here we investigated the influence of oxygen on dopaminergic differentiation of human fetal NSCs derived from the midbrain and forebrain. Cells were differentiated for 10 days in vitro at low, physiological (3%) versus high, atmospheric (20%) oxygen tension. Low oxygen resulted in upregulation of vascular endothelial growth factor and increased the proportion of tyrosine hydroxylase-immunoreactive (TH-ir) cells in both types of cultures (midbrain: 9.1±0.5 and 17.1±0.4 (P<0.001); forebrain: 1.9±0.4 and 3.9±0.6 (P<0.01) percent of total cells). Regardless of oxygen levels, the content of TH-ir cells with mature neuronal morphologies was higher for midbrain as compared to forebrain cultures. Proliferative Ki67-ir cells were found in both types of cultures, but the relative proportion of these cells was significantly higher for forebrain NSCs cultured at low, as compared to high, oxygen tension. No such difference was detected for midbrain-derived cells. Western blot analysis revealed that low oxygen enhanced β-tubulin III and GFAP expression in both cultures. Up-regulation of β-tubulin III was most pronounced for midbrain cells, whereas GFAP expression was higher in forebrain as compared to midbrain cells. NSCs from both brain regions displayed less cell death when cultured at low oxygen tension. Following microtransplantation into mouse striatal slice cultures, predifferentiated midbrain NSCs were found to proliferate and differentiate into substantial numbers of TH-ir neurons with mature neuronal morphologies, particularly at low oxygen. In contrast, predifferentiated forebrain NSCs microtransplanted under identical conditions displayed little proliferation and contained few TH-ir cells, all of which had an immature appearance. Our data may reflect differences in dopaminergic differentiation capacity and region-specific requirements of NSCs, with the dopamine-depleted striatum cultured at low oxygen offering an attractive micro-environment for midbrain NSCs. PMID:24788190
Park, Gunhyuk; Kim, Hyo Geun; Ju, Mi Sun; Ha, Sang Keun; Park, Yongkon; Kim, Sun Yeou; Oh, Myung Sook
2013-01-01
Aim: 6-Shogaol [1-(4-hydroxy-3-methoxyphenyl)-4-decen-3-one], a pungent compound isolated from ginger, has shown various neurobiological and anti-inflammatory effects. The aim of this study was to examine the effects of 6-shogaol on neuroinflammation-induced damage of dopaminergic (DA) neurons in Parkinson's disease (PD) models. Methods: Cultured rat mesencephalic cells were treated with 6-shogaol (0.001 and 0.01 μmol/L) for 1 h, then with MPP+ (10 μmol/L) for another 23 h. The levels of TNF-α and NO in the medium were analyzed spectrophotometrically. C57/BL mice were administered 6-shogaol (10 mg·kg−1·d−1, po) for 3 d, and then MPTP (30 mg/kg, ip) for 5 d. Seven days after the last MPTP injection, behavioral testing was performed. The levels of tyrosine hydroxylase (TH) and macrophage antigen (MAC)-1 were determined with immunohistochemistry. The expression of iNOS and COX-2 was measured using RT-PCR. Results: In MPP+-treated rat mesencephalic cultures, 6-shogaol significantly increased the number of TH-IR neurons and suppressed TNF-α and NO levels. In C57/BL mice, treatment with 6-shogaol reversed MPTP-induced changes in motor coordination and bradykinesia. Furthermore, 6-shogaol reversed MPTP-induced reductions in TH-positive cell number in the substantia nigra pars compacta (SNpc) and in TH-IR fiber intensity in the striatum (ST). Moreover, 6-shogaol significantly inhibited the MPTP-induced microglial activation and the increases in the levels of TNF-α, NO, iNOS, and COX-2 in both SNpc and ST. Conclusion: 6-Shogaol exerts neuroprotective effects on DA neurons in in vitro and in vivo PD models. PMID:23811724
Sinpru, Panpradap; Sartsoongnoen, Natagarn; Rozenboim, Israel; Porter, Tom E; El Halawani, Mohamed E; Chaiseha, Yupaporn
2018-07-01
The mesotocinergic (MTergic) and dopaminergic (DAergic) systems have been documented to play pivotal roles in maternal behaviors in native Thai chickens. In native Thai chickens, plasma prolactin (PRL) concentrations are associated with maternal behaviors, which are also controlled by the DAergic system. However, the role of MT in conjunction with the roles of DA and PRL on the neuroendocrine regulation of the transition from incubating to rearing behavior has never been studied. Therefore, the aim of this study was to investigate the association of MT, DA, and PRL during the transition from incubating to rearing behavior in native Thai hens. Using an immunohistochemistry technique, the numbers of MT-immunoreactive (-ir) and tyrosine hydroxylase-ir (TH-ir, a DA marker) neurons were compared between incubating hens (INC; n = 6) and hens for which the incubated eggs were replaced with 3 newly hatched chicks for 3 days after 6, 10, and 14 days of incubation (REC; n = 6). Plasma PRL concentrations were determined by enzyme-linked immunosorbent assay. The results revealed that the numbers of MT-ir neurons within the nucleus supraopticus, pars ventralis (SOv), nucleus preopticus medialis (POM), and nucleus paraventricularis magnocellularis (PVN) increased in the REC hens when compared with those of the INC hens at 3 different time points (at days 9, 13, and 17). On the other hand, the number of TH-ir neurons in the nucleus intramedialis (nI) decreased in the REC13 and REC17 hens when compared with those of the INC hens. However, the number of TH-ir neurons in the nucleus mamillaris lateralis (ML) only decreased in the REC13 hens when compared with the INC13 hens. The decrease in the numbers of TH-ir neurons within the nI and ML is associated with the decrease in the levels of plasma PRL. This study suggests that the presence of either eggs or chicks is the key factor regulating the MTergic system within the SOv, POM, and PVN and the DAergic system within the nI and ML during the transition from incubating to rearing behavior in native Thai chickens. The results further indicate that these two systems play pivotal roles in the transition from incubating to rearing behavior in this equatorial species. Copyright © 2018 Elsevier Inc. All rights reserved.
Interpolation Inequalities and Spectral Estimates for Magnetic Operators
NASA Astrophysics Data System (ADS)
Dolbeault, Jean; Esteban, Maria J.; Laptev, Ari; Loss, Michael
2018-05-01
We prove magnetic interpolation inequalities and Keller-Lieb-Thirring estimates for the principal eigenvalue of magnetic Schrödinger operators. We establish explicit upper and lower bounds for the best constants and show by numerical methods that our theoretical estimates are accurate.
NASA Astrophysics Data System (ADS)
Iorio, Lorenzo; Lucchesi, David
2003-07-01
In this paper we analyse quantitatively the concept of LAGEOS-type satellites in critical supplementary orbit configuration (CSOC), which has proved capable of yielding various observables for many tests of general relativity in the terrestrial gravitational field, with particular emphasis on the measurement of the Lense-Thirring effect. By using an entirely new pair of LAGEOS-type satellites in identical, supplementary orbits with, e.g., semimajor axes a = 12 000 km, eccentricity e = 0.05 and inclinations i_S1 = 63.4° and i_S2 = 116.6°, it would be possible to cancel out the impact of the mismodelling of the static part of the gravitational field of the Earth to a very high level of accuracy. The departures from the ideal supplementary orbital configuration due to the orbital injection errors would yield systematic gravitational errors of the order of a few per cent, according to the covariance matrix of the EGM96 gravity model up to degree l = 20. However, the forthcoming, new gravity models from the CHAMP and GRACE missions should greatly improve the situation. So, it should be possible to measure the gravitomagnetic shifts of the sum of their node rates ΣΩ̇_LT with an accuracy level of perhaps less than 1%, of the difference of their perigee rates Δω̇_LT with an accuracy level of 5%, and of the combination ΣΩ̇_LT − Δω̇_LT with an accuracy level of 2.8%. Such results, which are due to the mismodelling of the non-gravitational perturbations, have been obtained for an observational time span of about 6 years and could be further improved by fitting and removing from the analysed time series the major time-varying perturbations which have known periodicities.
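As an order-of-magnitude cross-check of the node rates at stake here, the leading-order Lense-Thirring nodal precession is Ω̇_LT = 2GJ/[c²a³(1−e²)^(3/2)], independent of inclination. The following is a minimal illustrative sketch, not the paper's analysis; Earth's angular momentum J is an assumed round value:

```python
# Leading-order Lense-Thirring node rate for an Earth satellite, in mas/yr.
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
J_earth = 5.86e33    # Earth spin angular momentum, kg m^2/s (assumed round value)

def lt_node_rate_mas_yr(a_m, e):
    rate = 2.0 * G * J_earth / (c**2 * a_m**3 * (1.0 - e**2)**1.5)  # rad/s
    return rate * 3.156e7 * (180.0 / math.pi) * 3.6e6               # -> mas/yr

# The proposed supplementary pair: a = 12 000 km, e = 0.05 (both satellites).
print(f"{lt_node_rate_mas_yr(1.2e7, 0.05):.1f} mas/yr")  # ~33 mas/yr per node
```

Since the rate does not depend on the inclination, the two supplementary nodes drift in the same direction and their sum doubles the relativistic signal, while the classical J2 precessions, proportional to cos I, cancel between i = 63.4° and i = 116.6°.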
Systematic effects in LOD from SLR observations
NASA Astrophysics Data System (ADS)
Bloßfeld, Mathis; Gerstl, Michael; Hugentobler, Urs; Angermann, Detlef; Müller, Horst
2014-09-01
Beside the estimation of station coordinates and the Earth's gravity field, laser ranging observations to near-Earth satellites can be used to determine the rotation of the Earth. One parameter of this rotation is ΔLOD (excess Length Of Day), which describes the excess revolution time of the Earth w.r.t. 86,400 s. Due to correlations among the different parameter groups, it is difficult to obtain reliable estimates for all parameters. In the official ΔLOD products of the International Earth Rotation and Reference Systems Service (IERS), the ΔLOD information determined from laser ranging observations is excluded from the processing. In this paper, we study in detail the correlations between ΔLOD, the orbital node Ω, the even zonal gravity field coefficients, cross-track empirical accelerations, and the relativistic accelerations caused by the Lense-Thirring and de Sitter effects, using first-order Gaussian perturbation equations. We found discrepancies due to different a priori gravity field models of up to 1.0 ms for polar orbits at an altitude of 500 km, and of up to 40.0 ms if the gravity field coefficients are estimated using only observations to LAGEOS 1. If observations to LAGEOS 2 are included, reliable ΔLOD estimates can be achieved. Nevertheless, an impact of the a priori gravity field even on the multi-satellite ΔLOD estimates can be clearly identified. Furthermore, we investigate the effect of empirical cross-track accelerations and of the relativistic accelerations of near-Earth satellites on ΔLOD. A total effect of 0.0088 ms is caused by unmodeled Lense-Thirring and de Sitter terms. The partial derivatives of these accelerations w.r.t. the position and velocity of the satellite cause only very small variations (0.1 μs) in ΔLOD.
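For scale, a ΔLOD bias maps into a fractional error in the Earth's rotation rate via Δω/ω ≈ −ΔLOD/86 400 s, since ω = 2π/LOD. A one-line illustrative check of the numbers quoted above:

```python
# Fractional Earth-rotation-rate error implied by a given Delta(LOD) bias.
LOD = 86400.0  # s
for dlod_ms in (0.0088, 1.0, 40.0):  # values quoted in the abstract
    print(f"dLOD = {dlod_ms:7.4f} ms -> |d(omega)/omega| ~ {dlod_ms*1e-3/LOD:.1e}")
```

So the unmodeled relativistic terms correspond to a relative rotation-rate error of order 10^-10, while the gravity-field-dependent discrepancies reach a few times 10^-7.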
NASA Technical Reports Server (NTRS)
Ahn, C.; Ziemke, J. R.; Chandra, S.; Bhartia, P. K.
2002-01-01
A recently developed technique called cloud slicing, used for deriving upper tropospheric ozone from the Nimbus 7 Total Ozone Mapping Spectrometer (TOMS) instrument combined with the Temperature-Humidity Infrared Radiometer (THIR), is no longer applicable to the Earth Probe TOMS (EPTOMS) because EPTOMS does not have an instrument to measure cloud top temperatures. To continue monitoring tropospheric ozone between 200 and 500 hPa and to test the feasibility of this technique across spacecraft, EPTOMS data are co-located in time and space with the Geostationary Operational Environmental Satellite (GOES)-8 infrared data for 2001 and early 2002, covering most of North and South America (45S-45N and 120W-30W). The maximum column amounts for the mid-latitudinal sites of the northern hemisphere are found in the March-May season. For the mid-latitudinal sites of the southern hemisphere, the highest column amounts are found in the September-November season, although overall seasonal variability is smaller than in the northern hemisphere. The tropical sites show the weakest seasonal variability compared to higher latitudes. The derived results for selected sites are cross-validated qualitatively against the seasonality of ozonesonde observations and against the results from THIR analyses over the 1979-1984 time period, owing to the lack of ozonesonde measurements at the study sites for 2001. These comparisons show a reasonably good agreement among THIR, ozonesonde observations, and cloud slicing-derived column ozone. Even with very limited co-located EPTOMS/GOES data sets, the cloud slicing technique remains viable for deriving upper tropospheric column ozone. Two new variant approaches, High-Low (HL) cloud slicing and ozone profile derivation from cloud slicing, are introduced to estimate column ozone amounts using the entire cloud information in the troposphere.
MAGNETOHYDRODYNAMIC SIMULATION OF A DISK SUBJECTED TO LENSE-THIRRING PRECESSION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sorathia, Kareem A.; Krolik, Julian H.; Hawley, John F.
2013-11-01
When matter orbits around a central mass obliquely with respect to the mass's spin axis, the Lense-Thirring effect causes it to precess at a rate declining sharply with radius. Ever since the work of Bardeen and Petterson, it has been expected that when a fluid fills an orbiting disk, the orbital angular momentum at small radii should then align with the mass's spin. Nearly all previous work has studied this alignment under the assumption that a phenomenological 'viscosity' isotropically degrades fluid shears in accretion disks, even though it is now understood that internal stress in flat disks is due to anisotropic MHD turbulence. In this paper we report a pair of matched simulations, one in MHD and one in pure (non-viscous) HD, in order to clarify the specific mechanisms of alignment. As in the previous work, we find that disk warps induce radial flows that mix angular momentum of different orientation; however, we also show that the speeds of these flows are generically transonic and are only very weakly influenced by internal stresses other than pressure. In particular, MHD turbulence does not act in a manner consistent with an isotropic viscosity. When MHD effects are present, the disk aligns, first at small radii and then at large; alignment is only partial in the HD case. We identify the specific angular momentum transport mechanisms causing alignment and show how MHD effects permit them to operate more efficiently. Last, we relate the speed at which an alignment front propagates outward (in the MHD case) to the rate at which Lense-Thirring torques deliver angular momentum at smaller radii.
Tilted Thick-Disk Accretion onto a Kerr Black Hole
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fragile, P C; Anninos, P
2003-12-12
We present the first results from fully general relativistic numerical studies of thick-disk accretion onto a rapidly-rotating (Kerr) black hole with a spin axis that is tilted (not aligned) with the angular momentum vector of the disk. We initialize the problem with the solution for an aligned, constant angular momentum, accreting thick disk around a black hole with spin a/M = J/M² = +0.9 (prograde disk). The black hole is then instantaneously tilted, through a change in the metric, by an angle β₀. In this Letter we report results with β₀ = 0°, 15°, and 30°. The disk is allowed to respond to the Lense-Thirring precession of the tilted black hole. We find that the disk settles into a quasi-static, twisted, warped configuration with Lense-Thirring precession dominating out to a radius analogous to the Bardeen-Petterson transition in tilted Keplerian disks.
DOT National Transportation Integrated Search
2007-01-03
This report is the third in a series describing the development of performance measures pertaining to the security of the maritime transportation network (port security metrics). The development of measures to guide improvements in maritime security ...
Small City Transit : Chapel Hill, North Carolina : Public Transit Serving a University and Town
DOT National Transportation Integrated Search
1976-03-01
Chapel Hill, North Carolina, is an illustration of a public transit service providing a high level of service for a town its size and a good example of a cooperative arrangement between a town and a resident university. This case study is one of thir...
Rotating metric in nonsingular infinite derivative theories of gravity
NASA Astrophysics Data System (ADS)
Cornell, Alan S.; Harmsen, Gerhard; Lambiase, Gaetano; Mazumdar, Anupam
2018-05-01
In this paper, we will provide a nonsingular rotating spacetime metric for a ghost-free infinite derivative theory of gravity in a linearized limit. We will provide the predictions for the Lense-Thirring effect for a slowly rotating system, and how it is compared with that from general relativity.
Light-cone quantization of two dimensional field theory in the path integral approach
NASA Astrophysics Data System (ADS)
Cortés, J. L.; Gamboa, J.
1999-05-01
A quantization condition due to the boundary conditions and the compactification of the light cone space-time coordinate x- is identified at the level of the classical equations for the right-handed fermionic field in two dimensions. A detailed analysis of the implications of the implementation of this quantization condition at the quantum level is presented. In the case of the Thirring model one has selection rules on the excitations as a function of the coupling and in the case of the Schwinger model a double integer structure of the vacuum is derived in the light-cone frame. Two different quantized chiral Schwinger models are found, one of them without a θ-vacuum structure. A generalization of the quantization condition to theories with several fermionic fields and to higher dimensions is presented.
Battle Experience; Solomon Islands Actions Information. Bulletin Number 4
1942-11-01
BE COAST WATCHER AND RADAR. Planes were picked up by binoculars, radar and naked eye as they approached over Florida Island. Type radar "FD" and... 2070-2080 kcs. earlier at about 2130, became excited and very numerous. Some thirteen different stations were on this frequency at one time. Firing
Measuring general relativity effects in a terrestrial lab by means of laser gyroscopes
NASA Astrophysics Data System (ADS)
Beverini, N.; Allegrini, M.; Beghi, A.; Belfi, J.; Bouhadef, B.; Calamai, M.; Carelli, G.; Cuccato, D.; Di Virgilio, A.; Maccioni, E.; Ortolan, A.; Porzio, A.; Santagata, R.; Solimeno, S.; Tartaglia, A.
2014-07-01
GINGER is a proposed tridimensional array of laser gyroscopes with the aim of measuring the Lense-Thirring effect, predicted by the general relativity theory, in a terrestrial laboratory environment. We discuss the required accuracy, the methods to achieve it, and the preliminary experimental work in this direction.
Deep learning beyond Lefschetz thimbles
NASA Astrophysics Data System (ADS)
Alexandru, Andrei; Bedaque, Paulo F.; Lamm, Henry; Lawrence, Scott
2017-11-01
The generalized thimble method to treat field theories with sign problems requires repeatedly solving the computationally expensive holomorphic flow equations. We present a machine learning technique to bypass this problem. The central idea is to obtain a few field configurations via the flow equations to train a feed-forward neural network. The trained network defines a new manifold of integration which reduces the sign problem and can be rapidly sampled. We present results for the 1+1 dimensional Thirring model with Wilson fermions on sizable lattices. In addition to the gain in speed, the parametrization of the integration manifold we use avoids the "trapping" of Monte Carlo chains which plagues large-flow calculations, a considerable shortcoming of the previous attempts.
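The training strategy described here can be caricatured in a few lines: flow a handful of configurations (expensive), fit a feed-forward network that maps a real field configuration to its imaginary deformation, then sample on the learned manifold instead. Below is a minimal, hypothetical PyTorch sketch; the lattice size, network shape, and the stand-in "flowed" data are all invented for illustration and are not the authors' setup:

```python
# Caricature of "learning the thimble": regress the imaginary deformation of the
# field from its real part, using a few (expensive) flowed configurations.
import torch
import torch.nn as nn

V = 16  # flattened lattice volume (toy value)
net = nn.Sequential(nn.Linear(V, 64), nn.ReLU(), nn.Linear(64, V))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Stand-ins for configurations obtained from the holomorphic flow equations.
phi_re = torch.randn(128, V)
phi_im = 0.1 * torch.tanh(phi_re)  # placeholder for the true flow output

for step in range(500):
    opt.zero_grad()
    loss = ((net(phi_re) - phi_im) ** 2).mean()
    loss.backward()
    opt.step()

# The trained net defines the manifold phi -> phi + i*net(phi), which can then
# be sampled rapidly with ordinary Monte Carlo updates on the real coordinates.
```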
Schotzinger, R J; Landis, S C
1990-05-01
Histochemical, immunocytochemical, and radioenzymatic techniques were used to examine the neurotransmitter-related properties of the innervation of thoracic hairy skin in rats during adulthood and postnatal development. In the adult, catecholamine-containing fibers were associated with blood vessels and piloerector muscles, and ran in nerve bundles throughout the dermis. The distribution of tyrosine hydroxylase (TH)-immunoreactive (IR) fibers was identical. Neuronal fibers displaying neuropeptide Y (NPY) immunoreactivity were seen in association with blood vessels. Double-labeling studies suggested that most, if not all, NPY-IR fibers were also TH-IR and likewise most, if not all, vessel-associated TH-IR fibers were also NPY-IR. Calcitonin gene-related peptide (CGRP)-IR fibers were observed near and penetrating into the epidermis, in close association with hair follicles and blood vessels, and in nerve bundles. A similar distribution of substance P (SP)-IR fibers was evident. In adult animals treated as neonates with the sympathetic neurotoxin 6-hydroxydopamine, a virtual absence of TH-IR and NPY-IR fibers was observed, whereas the distribution of CGRP-IR and SP-IR fibers appeared unaltered. During postnatal development, a generalized increase in the number, fluorescence intensity, and varicose morphology of neuronal fibers displaying catecholamine fluorescence, NPY-IR, CGRP-IR, and SP-IR was observed. By postnatal day 21, the distribution of the above fibers had reached essentially adult levels, although the density of epidermal-associated CGRP-IR and SP-IR fibers was significantly greater than in the adult. The following were not evident in thoracic hairy skin at any timepoint examined: choline acetyltransferase activity, acetylcholinesterase histochemical staining or immunoreactivity, fibers displaying immunoreactivity to vasoactive intestinal peptide, cholecystokinin, or leucine-enkephalin. The present study demonstrates that the thoracic hairy skin in developing and adult rats receives an abundant sympathetic catecholaminergic and sensory innervation, but not a cholinergic innervation.
Lafuente, Jose V; Requejo, Catalina; Carrasco, Alejandro; Bengoetxea, Harkaitz
2017-01-01
Parkinson's disease (PD) is the second most frequent neurodegenerative disorder, but current therapies are only symptomatic. Experimental models are necessary to deepen our understanding of the pathophysiological mechanisms and to assess new therapeutic strategies. The unilateral 6-hydroxydopamine (6-OHDA) lesion, either in the medial forebrain bundle (MFB) or in the striatum of rats, makes it possible to study various stages of PD depending on the time elapsed. A promising alternative for addressing the neurodegenerative process is the use of neurotrophic factors; their clinical use has been limited, however, by their short half-life and rapid degradation after in vivo administration, along with difficulties in crossing the blood-brain barrier (BBB). Tyrosine hydroxylase (TH) immunostaining revealed a significant decrease of the TH-immunopositive striatal volume in the 6-OHDA group from rostral to caudal. The loss of TH-ir neurons and axodendritic network (ADN) was higher in caudal sections, showing a selective vulnerability of the topologically distributed dopaminergic system. In addition to a remarkable depletion of dopamine in the nigrostriatal system, the administration of 6-OHDA into the MFB induces other neuropathological changes such as an increase of glial fibrillary acidic protein (GFAP)-positive cells in the substantia nigra (SN) as well as in the striatum. Intrastriatal implantation of micro- or nanosystems delivering neurotrophic factor in parkinsonized rats, to bypass the BBB, leads to a significant functional and morphological recovery. Neurorestorative morphological changes (preservation of the TH-ir cells and ADN) along the rostrocaudal axis of the caudoputamen complex and SN have been demonstrated, supporting a selective recovery after the treatment as well. Other innovative therapeutic strategies, such as intranasal delivery, have recently been assessed in order to explore neurotrophic factor effects. In addition, some other key methodological points are reviewed. © 2017 Elsevier Inc. All rights reserved.
Rapid Jet Precession During the 2015 Outburst of the Black Hole X-ray Binary V404 Cygni
NASA Astrophysics Data System (ADS)
Sivakoff, Gregory R.; Miller-Jones, James; Tetarenko, Alex J.
2017-08-01
In stellar-mass black holes that are orbited by lower-mass companions (black hole low-mass X-ray binaries), the accretion process can undergo dramatic outbursts that can be accompanied by the launching of powerful relativistic jets. We still do not know the exact mechanism responsible for launching these jets, despite decades of research and the importance of determining this mechanism given the clear analogue of accreting super-massive black holes and their jets. The two main models for launching jets involve the extraction of the rotational energy of a spinning black hole (Blandford-Znajek) and the centrifugal acceleration of particles by open magnetic field lines rotating with the accretion flow (Blandford-Payne). Since some relativistic jets are not fully aligned with the angular momentum of the binary's orbit, the inner accretion flow of some black hole X-ray binaries may precess due to frame-dragging by a spinning black hole (Lense-Thirring precession). This precession has been previously observed close to the black hole as second-timescale quasi-periodic (X-ray) variability. In this talk we will present radio-through-sub-mm timing and high-angular resolution radio imaging (including a high-timing resolution movie) of the black hole X-ray binary V404 Cygni during its 2015 outburst. These data show that at the peak of the outburst the relativistic jets in this system were precessing on timescales of hours. We will discuss how rapid precession can be explained by Lense-Thirring precession of a vertically-extended slim disc that is maintained out to a radius of 6 × 10^10 cm by a highly super-Eddington accretion rate. This would imply that the jet axis of V404 Cyg is not aligned with the black hole spin. More importantly, this places a key requirement on any model for launching jets, and may favour launching the jet from the rotating magnetic fields threading the disc.
The M198 Howitzer as a Direct Support Weapon during Amphibious Operations.
1980-06-06
critical to the success of future amphibious operations. Purpose of the Study. The purpose of this study is to determine the impact of the M198's... principal amphibious ships' lift capabilities and physical characteristics indicates their flexibility and speed, or lack thereof, in debarking large
He, Huan; Guo, Wei-Wei; Xu, Rong-Rong; Chen, Xiao-Qing; Zhang, Nan; Wu, Xia; Wang, Xiao-Min
2016-10-24
Alkaloids from Piper longum (PLA), extracted from P. longum, have potent anti-inflammatory effects. The aim of this study was to investigate whether PLA could protect dopaminergic neurons against inflammation-mediated damage by inhibiting microglial activation, using a lipopolysaccharide (LPS)-induced dopaminergic neuronal damage rat model. Animal behavior was evaluated with rotational behavior, rotarod, and open-field tests. The survival ratio of dopaminergic neurons and microglial activation were examined. Dopamine (DA) and its metabolite were detected by high performance liquid chromatography (HPLC). The effects of PLA on the expression of interleukin (IL)-6, interleukin (IL)-1β and tumor necrosis factor (TNF)-α were detected by enzyme-linked immunosorbent assay (ELISA). Reactive oxygen species (ROS) and nitric oxide (NO) were also estimated. We showed that the survival ratio of tyrosine hydroxylase-immunoreactive (TH-ir) neurons in the substantia nigra pars compacta (SNpc) and the DA content in the striatum were reduced after a single intranigral dose of LPS (10 μg). The survival rate of TH-ir neurons in the SNpc and DA levels in the striatum were significantly improved after treatment with PLA for 6 weeks. The over-activated microglial cells were suppressed by PLA treatment. We also observed that the levels of inflammatory cytokines, including TNF-α, IL-6 and IL-1β, were decreased and the excessive production of ROS and NO was abolished after PLA treatment. Consequently, the behavioral dysfunctions induced by LPS were improved after PLA treatment. This study suggests that PLA plays a significant role in protecting dopaminergic neurons against inflammation-induced damage.
UV Detector Materials Development Program
1981-12-01
...collection efficiency within the detector (internal quantum efficiency). As mentioned previously, it was found that reverse biasing the Schottky diodes... the ratio of the number of carriers collected in the detector versus the number of photons entering into the absorbing region. It is, therefore
1985-05-30
...remained amorphous after bombardment, as evidenced by X-ray diffraction, and showed no other changes. (2) For Sb2O3, the crystallite size was reduced... The main effect on MgF2 was the reduction in crystallite size. The films were too thin for meaningful X-ray diffraction analysis. Durability and
The GINGER project and status of the GINGERino prototype at LNGS
NASA Astrophysics Data System (ADS)
Ortolan, A.; Belfi, J.; Bosi, F.; Di Virgilio, A.; Beverini, N.; Carelli, G.; Maccioni, E.; Santagata, R.; Simonelli, A.; Beghi, A.; Cuccato, D.; Donazzan, A.; Naletto, G.
2016-05-01
GINGER (Gyroscopes IN GEneral Relativity) is a proposal for measuring, in a ground-based laboratory, the Lense-Thirring effect, known also as inertial frame dragging, which is predicted by General Relativity and is induced by the rotation of a massive source. GINGER will consist of an array of at least three square ring lasers, mutually orthogonal, with about 6-10 m sides, located in a deep underground site, possibly the INFN National Laboratories of Gran Sasso. The tri-axial design will provide a complete estimation of the laboratory-frame angular velocity, to be compared with the Earth rotation estimate provided by IERS with respect to the fixed-star frame. Large-size ring lasers have already reached a very high sensitivity, allowing for relevant geodetic measurements. The accuracy required for the Lense-Thirring effect measurement is better than 10^-14 rad/s, and therefore the Earth's angular velocity must be measured to within one part in 10^9. A 3.6 m side square ring laser, called GINGERino, has recently been installed inside the Gran Sasso underground laboratories in order to qualify the site for a future installation of GINGER. We discuss the current status of the experimental work, and in particular of the GINGERino prototype.
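For context, a ring laser senses rotation through the Sagnac beat frequency f = [4A/(λP)] Ω·n̂, where A is the enclosed area, P the perimeter and n̂ the area normal. A rough sketch for a GINGERino-sized square ring (the side length, wavelength, and latitude below are assumed round numbers, not project specifications):

```python
# Sagnac beat frequency of a horizontal square ring laser sensing Earth rotation.
import math

side = 3.6                 # ring side, m (GINGERino-scale, assumed)
lam  = 632.8e-9            # He-Ne laser wavelength, m (assumed)
lat  = math.radians(42.4)  # approximate Gran Sasso latitude (assumed)
A, P = side**2, 4.0*side
omega_earth = 7.292115e-5  # rad/s

# A horizontal ring's sensitive axis is the local vertical: projection sin(lat).
f = (4.0*A/(lam*P)) * omega_earth * math.sin(lat)
print(f"Sagnac frequency ~ {f:.0f} Hz")  # ~280 Hz

# Resolving a Lense-Thirring-sized rate of ~1e-14 rad/s on top of Earth rotation:
print(f"required relative accuracy ~ {1e-14/omega_earth:.1e}")
```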
The presence and nature of ellipticity in Appalachian hardwood logs
R. Edward Thomas; John S. Stanovick; Deborah Conner
2017-01-01
The ellipticity of hardwood logs is most often observed and measured from either end of a log. However, due to the nature of hardwood tree growth and bucking practices, the assessment of ellipticity in this manner may not be accurate. Trees grown on hillsides often develop supporting wood that gives the first few feet of the log butt a significant degree of...
NASA Astrophysics Data System (ADS)
Dolbeault, Jean; Esteban, Maria J.; Laptev, Ari; Loss, Michael
2018-05-01
We study functional and spectral properties of perturbations of the operator −(∂_s + i a)² in L²(S¹). This operator appears when considering the restriction to the unit circle of a two-dimensional Schrödinger operator with the Bohm-Aharonov vector potential. We prove a Hardy-type inequality on R² and, on S¹, a sharp interpolation inequality and a sharp Keller-Lieb-Thirring inequality.
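The unperturbed operator is diagonal in the Fourier basis e^{ins}, with eigenvalues (n + a)² for integer n, so that a² is the principal eigenvalue for |a| ≤ 1/2. A quick numerical cross-check (illustrative only, not from the paper), comparing a periodic finite-difference discretization against the exact spectrum:

```python
# Spectrum of -(d/ds + i*a)^2 on the unit circle: discrete check vs exact (n+a)^2.
import numpy as np

a, N = 0.3, 200                      # flux parameter and grid size (assumed)
h = 2.0*np.pi/N
idx = np.arange(N)

# Magnetic forward difference D ~ d/ds + i*a on a periodic grid.
D = -np.eye(N, dtype=complex)/h
D[idx, (idx + 1) % N] += 1.0/h
D += 1j*a*np.eye(N)

L = D.conj().T @ D                   # discrete -(d/ds + i*a)^2, Hermitian PSD
numeric = np.linalg.eigvalsh(L)[:4]

exact = np.sort((np.array([0, -1, 1, -2]) + a)**2)
print(numeric)  # ~ [0.09, 0.49, 1.69, 2.89] up to O(h^2) discretization error
print(exact)    #   a^2, (1-a)^2, (1+a)^2, (2-a)^2
```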
A cluster version of the GGT sum rule
NASA Astrophysics Data System (ADS)
Hencken, Kai; Baur, Gerhard; Trautmann, Dirk
2004-03-01
We discuss the derivation of a "cluster sum rule" from the Gell-Mann-Goldberger-Thirring (GGT) sum rule as an alternative to the Thomas-Reiche-Kuhn (TRK) sum rule, which was used as the basis up to now. We compare differences in the assumptions and approximations. Some applications of the sum rule for halo nuclei, as well as nuclei with a pronounced cluster structure, are discussed.
Geodetic precession or dragging of inertial frames
NASA Technical Reports Server (NTRS)
Ashby, Neil; Shahid-Saless, Bahman
1989-01-01
In General Relativity, the Principle of General Covariance allows one to describe phenomena by means of any convenient choice of coordinate system. Here, it is shown that the geodetic precession of a gyroscope orbiting a spherically symmetric, nonrotating mass can be recast as a Lense-Thirring frame-dragging effect, in an appropriately chosen coordinate frame whose origin falls freely along with the gyroscope and whose spatial coordinate axes point in fixed directions.
The impact of the orbital decay of the LAGEOS satellites on the frame-dragging tests
NASA Astrophysics Data System (ADS)
Iorio, Lorenzo
2016-01-01
The laser-tracked geodetic satellites LAGEOS, LAGEOS II and LARES are currently employed, among other things, to measure the general relativistic Lense-Thirring effect in the gravitomagnetic field of the spinning Earth, with the hope of providing a more accurate test of this prediction of Einstein's theory of gravitation than the existing ones. The secular decay ȧ of the semimajor axes a of such spacecraft, recently measured in an independent way to an accuracy level of σ_ȧ ≈ 0.1-0.01 m yr⁻¹, may indirectly impact the proposed relativistic experiment through its connection with the classical orbital precessions induced by the Earth's oblateness J₂. Indeed, the systematic bias due to the current measurement errors σ_ȧ is of the same order of magnitude as, or even larger than, the expected relativistic signal itself; moreover, it grows linearly with the time span T of the analysis. Therefore, the parameter-fitting algorithms must be properly updated in order to suitably cope with such a new source of systematic uncertainty. Otherwise, an improvement of one to two orders of magnitude in measuring the orbital decay of the satellites of the LAGEOS family would be required to reduce this source of systematic uncertainty to a percent fraction of the Lense-Thirring signature.
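The mechanism is easy to quantify at leading order: the node precession driven by the Earth's oblateness is Ω̇_J₂ = −(3/2) n J₂ (R/a)² cos I (1−e²)⁻², so ∂Ω̇_J₂/∂a = −(7/2) Ω̇_J₂/a, and a decay ȧ known only to σ_ȧ produces a node-rate bias growing linearly with T. A rough sketch with LAGEOS-like nominal elements (the σ_ȧ and T values are picked from the ranges quoted above, not from the paper):

```python
# Node-rate bias from a mismodeled semimajor-axis decay, vs the LT signal.
import math

GM, R, J2 = 3.986004e14, 6.378137e6, 1.0826e-3     # Earth constants
a, e, inc = 1.2270e7, 0.0045, math.radians(109.84) # LAGEOS-like (nominal)

n = math.sqrt(GM / a**3)                           # mean motion, rad/s
node_J2 = -1.5 * n * J2 * (R/a)**2 * math.cos(inc) / (1.0 - e**2)**2

sigma_adot = 0.03   # m/yr, within the quoted 0.1-0.01 range (assumed)
T = 10.0            # analysis time span, yr (assumed)
bias = abs(3.5 * node_J2 / a) * sigma_adot * T     # rad/s

LT = 31.0 / 2.06265e8 / 3.156e7  # ~31 mas/yr LAGEOS LT node rate, in rad/s
print(f"J2 bias ~ {bias:.1e} rad/s  vs  LT signal ~ {LT:.1e} rad/s")
```

With these numbers the two come out of the same order (~5e-15 rad/s), which is precisely the concern raised in the abstract.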
A quasi-periodic modulation of the iron line centroid energy in the black hole binary H1743-322
NASA Astrophysics Data System (ADS)
Ingram, Adam; van der Klis, Michiel; Middleton, Matthew; Done, Chris; Altamirano, Diego; Heil, Lucy; Uttley, Phil; Axelsson, Magnus
2016-09-01
Accreting stellar-mass black holes often show a `Type-C' quasi-periodic oscillation (QPO) in their X-ray flux and an iron emission line in their X-ray spectrum. The iron line is generated through continuum photons reflecting off the accretion disc, and its shape is distorted by relativistic motion of the orbiting plasma and the gravitational pull of the black hole. The physical origin of the QPO has long been debated, but is often attributed to Lense-Thirring precession, a General Relativistic effect causing the inner flow to precess as the spinning black hole twists up the surrounding space-time. This predicts a characteristic rocking of the iron line between red- and blueshift as the receding and approaching sides of the disc are respectively illuminated. Here we report on XMM-Newton and NuSTAR observations of the black hole binary H1743-322 in which the line energy varies systematically over the ˜4 s QPO cycle (3.70σ significance), as predicted. This provides strong evidence that the QPO is produced by Lense-Thirring precession, constituting the first detection of this effect in the strong gravitation regime. There are however elements of our results harder to explain, with one section of data behaving differently than all the others. Our result enables the future application of tomographic techniques to map the inner regions of black hole accretion discs.
Structure and chemical organization of the accessory olfactory bulb in the goat.
Mogi, Kazutaka; Sakurai, Katsuyasu; Ichimaru, Toru; Ohkura, Satoshi; Mori, Yuji; Okamura, Hiroaki
2007-03-01
The structure and chemical composition of the accessory olfactory bulb (AOB) were examined in male and female goats. Sections were subjected to either Nissl staining, Klüver-Barrera staining, lectin histochemistry, or immunohistochemistry for nitric oxide synthase (NOS), neuropeptide Y (NPY), tyrosine hydroxylase (TH), dopamine beta-hydroxylase (DBH), and glutamic acid decarboxylase (GAD). The goat AOB was divided into four layers: the vomeronasal nerve layer (VNL), glomerular layer (GL), mitral/tufted (M/T) cell layer (MTL), and granule cell layer (GRL). Quantitative and morphometric analyses indicated that a single AOB contained 5,000-8,000 putative M/T cells with no sex differences, whereas the AOB was slightly larger in males. Of the 21 lectins examined, 7 specifically bound to the VNL and GL, and 1 bound not only to the VNL, but also to the MTL and GRL. In either of these cases, no heterogeneity of lectin staining was observed in the rostrocaudal direction. NOS-, TH-, DBH-, and GAD-immunoreactivity (ir) were observed in the MTL and GRL, whereas NPY-ir was present only in the GRL. In the GL, periglomerular cells with GAD-ir were found in abundance, and a subset of periglomerular cells containing TH-ir was also found. Double-labeling immunohistochemistry revealed that virtually all periglomerular cells containing TH-ir were colocalized with GAD-ir.
Physics of Gravitational Interaction: Geometry of Space or Quantum Field in Space
NASA Astrophysics Data System (ADS)
Baryshev, Yurij
2006-03-01
The Thirring-Feynman tensor-field approach to gravitation opens a new understanding of the physics of gravitational interaction and stimulates novel experiments on the nature of gravity. According to Field Gravity (FG), the universal gravity force is caused by the exchange of gravitons - the quanta of the gravity field. The energy of this field is well defined and excludes the singularity. All classical relativistic effects are the same as in General Relativity. The intrinsic scalar (spin 0) part of the gravity field corresponds to "antigravity" and only together with the pure tensor (spin 2) part gives the usual Newtonian force. Laboratory and astrophysical experiments which may test the predictions of FG will be performed in the near future. In particular, observations at gravity observatories with bar and interferometric detectors, like Explorer, Nautilus, LIGO and VIRGO, will check the predicted scalar gravitational waves from supernova explosions. New types of cosmological models in Minkowski space are possible too.
The Nimbus 6 data catalog. Volume 1: 12 June 1975 through 31 August 1975. Data orbits 1 through 1082
NASA Technical Reports Server (NTRS)
1975-01-01
Subsections 1.2 through 1.10 of this catalog summarize the operational highlights of the individual experiments, present preliminary experiment results, and call attention to known data anomalies. Section 2 lists the on-off times for each experiment and provides a method for determining the geographical coverage of each experiment. Section 3 shows selected HIRS, SCAMS and ESMR images, and Section 4 presents THIR montages. Section 5 presents corrections to The Nimbus 6 User's Guide.
The Consortium Cooperation Versus Competition
1990-02-01
Approved for public release; distribution unlimited. Report IR004R. Prepared pursuant to Department of Defense Contract MDA903-85-C-0139. The views expressed... presented. Government's role in encouraging and supporting consortia is discussed in the context of domestic competition and the international... ACKNOWLEDGMENTS: The author wishes to thank Dr. Korhan Berzeg and Mr. Benjamin R
Soviet Perspectives on Current Sino-Soviet Relations.
1984-06-01
global power capable of power projection to the Third World, the growth of the strategic importance of South-East Asia, the Second Indochina War, the... "Chinese Ground Forces in 'Peoples War Under Modern Conditions'" (Monograph, February 1983), p. 6. 26. Jencks, "Peoples War", op. cit., p. 5. 27... cessation of trade. This continued until 1969, when actual conflict along their mutual border broke out and both nations uttered war threats
Tests of general relativity in earth orbit using a superconducting gravity gradiometer
NASA Technical Reports Server (NTRS)
Paik, H. J.
1989-01-01
Interesting new tests of general relativity could be performed in earth orbit using a sensitive superconducting gravity gradiometer under development. Two such experiments are discussed here: a null test of the tracelessness of the Riemann tensor and detection of the Lense-Thirring term in the earth's gravity field. The gravity gradient signals in various spacecraft orientations are derived, and dominant error sources in each experimental setting are discussed. The instrument, spacecraft, and orbit requirements imposed by the experiments are derived.
1980-04-01
difficult to detect erosion problems such as animal holes, sloughing, and erosion channels at an early stage. 4.3 Evaluation. No significant... Other: As-Built Plans, Soil Conservation Service, 1 Burlington Square, Suite 205, Burlington, Vermont 05...
1977-09-30
When bonds to mature--Callable before maturity 48; Commission to determine interest rate, form, denomination, and execution of bonds 49; Officers whose... are limited in their activity, however, since they are without taxing, bonding, or assessment powers (DeKrey, 1977). Soil conservation districts are... Qualification 04; Appointive members to qualify--Terms of office--Filling vacancy 05; Officers--Office 06; Secretary-treasurer bond 07; Compensation and
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iorio, Lorenzo; Zhang, Fupeng, E-mail: lorenzo.iorio@libero.it, E-mail: zhangfp7@mail.sysu.edu.cn
We perform detailed numerical analyses of the orbital motion of a test particle around a spinning primary, with the aim of investigating the possibility of using the post-Keplerian (pK) corrections to the orbiter's periods (draconitic, anomalistic, and sidereal) as a further opportunity to perform new tests of post-Newtonian gravity. As a specific scenario, the S-stars orbiting the massive black hole (MBH) supposedly lurking in Sgr A* at the center of the Galaxy are adopted. We first study the effects of the pK Schwarzschild, Lense-Thirring, and quadrupole moment accelerations experienced by a target star for various possible initial orbital configurations. It turns out that the results of the numerical simulations are consistent with the analytical ones in the small eccentricity approximation for which almost all the latter ones were derived. For highly elliptical orbits, the sizes of the three pK corrections considered turn out to increase remarkably. The periods of the observed S2 and S0-102 stars as functions of the MBH's spin axis orientation are considered as well. The pK accelerations lead to corrections of the orbital periods of the order of 1-100 days (Schwarzschild), 0.1-10 hr (Lense-Thirring), and 1-10³ s (quadrupole) for a target star with a = 300-800 au and e ≈ 0.8, which could be measurable with future facilities.
Upper Tropospheric Ozone Between Latitudes 60S and 60N Derived from Nimbus 7 TOMS/THIR Cloud Slicing
NASA Technical Reports Server (NTRS)
Ziemke, Jerald R.; Chandra, Sushil; Bhartia, P. K.
2002-01-01
This study evaluates the spatial distributions and seasonal cycles in upper tropospheric ozone (pressure range 200-500 hPa) from low to high latitudes (60S to 60N) derived from the satellite retrieval method called "Cloud Slicing." Cloud Slicing is a unique technique for determining ozone profile information in the troposphere by combining co-located measurements of cloud-top pressure and above-cloud column ozone. For upper tropospheric ozone, co-located measurements of Nimbus 7 Total Ozone Mapping Spectrometer (TOMS) above-cloud column ozone and Nimbus 7 Temperature Humidity Infrared Radiometer (THIR) cloud-top pressure during 1979-1984 were incorporated. In the tropics, upper tropospheric ozone shows year-round enhancement in the Atlantic region and evidence of a possible semiannual variability. Upper tropospheric ozone outside the tropics shows greatest abundance in winter and spring seasons in both hemispheres, with largest seasonal variability and largest amounts in the NH. These characteristics are similar to lower stratospheric ozone. Comparisons of upper tropospheric column ozone with both stratospheric ozone and a proxy of lower stratospheric air mass (i.e., tropopause pressure) from the National Centers for Environmental Prediction (NCEP) suggest that stratosphere-troposphere exchange (STE) may be a significant source for the seasonal variability of upper tropospheric ozone almost everywhere between 60S and 60N, except at low latitudes around 10S to 25N where other sources (e.g., tropospheric transport, biomass burning, aerosol effects, lightning, etc.) may have a greater role.
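The essence of cloud slicing is a linear regression: if Ω is the above-cloud column ozone (DU) and p the cloud-top pressure (hPa), the slope dΩ/dp is proportional to the mean ozone volume mixing ratio in the sampled layer, X [ppmv] ≈ 1.27 dΩ/dp [DU/hPa] (the constant follows from 1 DU = 2.69e16 molecules cm⁻² and the air column per hPa). A minimal sketch with invented sample numbers:

```python
# Cloud slicing in miniature: regress above-cloud column ozone on cloud-top pressure.
import numpy as np

# Invented co-located samples: cloud-top pressure (hPa), above-cloud column (DU).
p   = np.array([200.0, 300.0, 400.0, 500.0])
col = np.array([250.0, 255.5, 261.0, 266.5])

slope = np.polyfit(p, col, 1)[0]  # DU per hPa

# 1 DU = 2.69e16 molecules/cm^2; ~2.12e22 air molecules/cm^2 per hPa,
# giving X [ppmv] ~ 1.27 * dOmega/dp [DU/hPa].
x_ppbv = slope * (2.69e16 / 2.12e22) * 1e9
print(f"mean 200-500 hPa mixing ratio ~ {x_ppbv:.0f} ppbv")  # ~70 ppbv here
```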
GINGER (Gyroscopes IN General Relativity), a ring lasers array to measure the Lense-Thirring effect
NASA Astrophysics Data System (ADS)
Di Virgilio, Angela D. V.
The purpose of GINGER is to perform the first test of general relativity (not considering the gravitational redshift measurements) in a terrestrial laboratory, using light as a probe. The experiment will complement the ones in space, performed or under way, with an entirely different technique and at a far lower cost. The methodology is based on ring lasers, which are extremely accurate rotation sensors and can sense not only purely kinematical rotations (the Sagnac effect accounting for the Earth's rotation, polar motion of the terrestrial axis, local rotational movements of the laboratory due to Earth crust dynamics...), but also general relativistic contributions such as the de Sitter effect (coupling between the gravito-electric field of the Earth and the kinematical rotation) and the Lense-Thirring effect (inertial frame dragging due to the angular momentum of the Earth). In order to reveal the latter effects, the ring-laser response must be improved to be able to measure the effective rotation vector (kinematic plus GR terms) with an accuracy of 1 part in 10^9 or better. This is a challenging technological aspect, which however has been accurately taken into account by designing a system of ring lasers that will be implemented in this project. A ring laser has been installed inside the underground laboratory of Gran Sasso, with the purpose of seeing whether an underground location is the right choice for GINGER. The apparatus and the preliminary results will be discussed.
Petraitiene, Viktorija; Pauza, Dainius H; Benetis, Rimantas
2014-06-01
The imbalance between adrenergic (sympathetic) and cholinergic (parasympathetic) cardiac inputs facilitates cardiac arrhythmias, including lethal ones. Although the morphological pattern of the epicardiac ganglionated subplexuses (ENsubP) has been described previously in detail, the distribution of functionally distinct axons in human intrinsic nerves had not been investigated thus far. Therefore, the aim of the present study was to quantitatively evaluate the distribution of tyrosine hydroxylase (TH)- and choline acetyltransferase (ChAT)-positive axons within intrinsic nerves at the level of the human heart hilum (HH), since they are of pivotal importance for determining proper treatment options for different arrhythmias. Tissue samples containing the intrinsic nerves from seven epicardiac subplexuses were obtained from nine human hearts without cardiac pathology and processed for immunofluorescent detection of TH and ChAT. The nerve area was measured and the numbers of axons were counted using microphotographs of nerve profiles. The densities of fibres were extrapolated and compared between subplexuses. ChAT-immunoreactive (IR) fibres were evidently predominant (>56%) in nerves of the dorsal (DRA) and ventral right atrial (VRA) ENsubP. Within both the left (LC) and right coronary ENsubP, TH-IR axons were the most abundant (70.9 and 83.0%, respectively). Despite subplexal dependence, ChAT-IR fibres prevailed in comparatively thinner nerves, whereas TH-IR fibres prevailed in thicker ones. Morphometry showed that at the level of the HH: (i) LC subplexal nerves were found to be the thickest (25 737 ± 4131 μm²) ones, whereas the thinnest (2604 ± 213 μm²) nerves concentrated in the DRA ENsubP; (ii) the density of ChAT-IR axons was highest (6.8 ± 0.6/100 μm²) in the ventral left atrial nerves and lowest (3.2 ± 0.1/100 μm²) in the left dorsal ENsubP and (iii) the density of TH-IR fibres was highest (15.9 ± 2.1/100 μm²) in LC subplexal nerves and lowest (4.4 ± 0.3/100 μm²) in VRA nerves. (i) The principal intrinsic adrenergic neural pathways in the human heart proceed via both coronary ENsubP that supply the cardiac ventricles and (ii) the majority of cholinergic nerve fibres access the human heart through the DRA and VRA ENsubP and extend towards the right atrium, including the region of the sinuatrial node. © The Author 2013. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
Prosomeric organization of the hypothalamus in an elasmobranch, the catshark Scyliorhinus canicula
Santos-Durán, Gabriel N.; Menuet, Arnaud; Lagadec, Ronan; Mayeur, Hélène; Ferreiro-Galve, Susana; Mazan, Sylvie; Rodríguez-Moldes, Isabel; Candal, Eva
2015-01-01
The hypothalamus has been a central topic in neuroanatomy because of its important physiological functions, but its mature organization remains elusive. Deciphering its embryonic and adult organization is crucial in an evolutionary approach to the organization of the vertebrate forebrain. Here we studied the molecular organization of the hypothalamus and neighboring telencephalic domains in a cartilaginous fish, the catshark Scyliorhinus canicula, focusing on the ScFoxg1a, ScShh, ScNkx2.1, ScDlx2/5, ScOtp, and ScTbr1 expression profiles and on the identification of α-acetylated-tubulin-immunoreactive (-ir), TH-ir, 5-HT-ir, and GFAP-ir structures by means of immunohistochemistry. Analysis of the results within the updated prosomeric model framework supports the existence of alar and basal histogenetic compartments in the hypothalamus similar to those described in the mouse, suggesting the ancestrality of these subdivisions in jawed vertebrates. These data provide new insights into hypothalamic organization in cartilaginous fishes and highlight the generality of key features of the prosomeric model in jawed vertebrates. PMID:25904850
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Patten, R.A.; Everitt, C.W.F.
1976-03-22
In 1918, Lense and Thirring calculated that a moon orbiting a rotating planet would experience a nodal dragging effect due to general relativity. We describe an experiment to measure this effect to 1% with two counter-orbiting drag-free satellites in polar earth orbit. In addition to tracking data from existing ground stations, satellite-to-satellite Doppler ranging data are taken near the poles. New geophysical information is inherent in the polar data. (AIP)
Superconducting fluctuation current caused by gravitational drag
NASA Astrophysics Data System (ADS)
Tsuchida, Satoshi; Kuratsuji, Hiroshi
2017-12-01
We examine a possible effect of the Lense-Thirring field, or gravitational drag, by calculating the fluctuation current through a superconducting ring. The gravitational drag is induced by a rotating sphere, on top of which the superconducting ring is placed. The formulation is based on a Landau-Ginzburg free-energy functional of linear form. The resultant fluctuation current is shown to be greatly enhanced in the vicinity of the transition temperature, and the current also increases on increasing the winding number of the ring. These effects would provide a modest step towards the magnification of tiny gravitational effects.
Wang, Jianli; Liu, Chaobao; Ma, Yongping
2017-01-15
Parent-offspring bonding is critical for the development of offspring in mammals. While it is known that pup stimuli provide rewarding effects on their parents, few studies have assessed whether parental stimuli serve as a reinforcing agent to the pups, and what the neural mechanisms underlying this reward process may be. In addition to maternal care, male ICR mice display pairmate-dependent parental behavior. Using the conditioned place preference (CPP) paradigm, we examined the effects of maternal and paternal conditioning on postnatal day 17-21 female ICR mouse pups, and compared the expression of oxytocin (OT)- and tyrosine hydroxylase (TH)-immunoreactive (IR) neurons. We found that the pups established dam- or sire-induced CPP when using mother conditioning (MC) or father conditioning (FC) alone. However, the pups failed to show any preference when using mother versus father conditioning (MFC). Compared to the control group, the MC and MFC groups displayed more OT-IR neurons in the supraoptic nucleus and more TH-IR neurons in the ventral tegmental area (VTA). The FC group showed more TH-IR neurons in the VTA compared to the control group, but there were no significant differences in OT-IR neurons. These findings indicate that female ICR mouse pups may establish mother- or father-induced CPP. The underpinnings of preference for parents are associated with the activity of VTA dopaminergic neurons, and the preference of pups for the mother in particular appears to be associated with OT levels. Copyright © 2016 Elsevier B.V. All rights reserved.
Lense-Thirring Precession of Accretion Disks and Quasi-Periodic Oscillations in X-Ray Binaries
NASA Astrophysics Data System (ADS)
Markovic, D.; Lamb, F. K.
2003-05-01
It has recently been suggested that gravitomagnetic precession of the inner part of the accretion disk, possibly driven by radiation torques, may be responsible for some of the 20-300 Hz quasi-periodic X-ray brightness oscillations (QPOs) observed in some low-mass binary systems containing accreting neutron stars and black hole candidates. We have explored warping modes of geometrically thin disks in the presence of gravitomagnetic and radiation torques. We have found a family of overdamped, low-frequency gravitomagnetic (LFGM) modes all of which have precession frequencies lower than a certain critical frequency ω_crit, which is 1 Hz for a compact object of solar mass. A radiation warping torque can cause a few of the lowest-frequency LFGM modes to grow with time, but even a strong radiation warping torque has essentially no effect on the LFGM modes with frequencies ≳10⁻⁴ Hz. We have also discovered a second family of high-frequency gravitomagnetic (HFGM) modes with precession frequencies that range from ω_crit up to slightly less than the gravitomagnetic precession frequency of a particle at the inner edge of the disk, which is 30 Hz if the disk extends inward to the innermost stable circular orbit around a 2M⊙ compact object with dimensionless angular momentum cJ/GM² = 0.2. The highest-frequency HFGM modes are very localized spiral corrugations of the inner disk and are weakly damped, with Q values as large as 50. We discuss the implications of our results for the observability of Lense-Thirring precession in X-ray binaries.
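For orientation, the weak-field gravitomagnetic precession frequency of a test orbit is ν_LT = (1/2π)·2GJ/(c²r³) with J = a* GM²/c; this leading-order estimate reproduces the ~30 Hz scale quoted above near the inner disk edge (the exact Kerr nodal frequency is somewhat lower close to the innermost stable orbit). A short illustrative sketch using the abstract's parameters:

```python
# Weak-field Lense-Thirring precession frequency around a spinning compact object.
import math

G, c, Msun = 6.674e-11, 2.998e8, 1.989e30
M, a_star = 2.0*Msun, 0.2     # values quoted in the abstract
J = a_star * G * M**2 / c     # spin angular momentum
r_g = G*M/c**2

for r_over_rg in (6.0, 10.0, 100.0):
    r = r_over_rg * r_g
    nu = (2.0*G*J/(c**2 * r**3)) / (2.0*math.pi)  # Hz; falls off as r^-3
    print(f"r = {r_over_rg:5.0f} r_g : nu_LT ~ {nu:.3g} Hz")
```

The steep r⁻³ falloff is why the precession frequency ranges over several decades depending on which disk radii dominate a given mode.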
Precession of orbits around the stellar-mass black hole in H 1743-322
NASA Astrophysics Data System (ADS)
Ingram, Adam
2016-07-01
Accreting stellar-mass black holes often show a quasi-periodic oscillation (QPO) in their X-ray flux with a period that slowly drifts from ~10s to ~0.05s, and an iron emission line in their X-ray spectrum. The iron line is generated by fluorescent re-emission, by the accretion disk, of X-ray photons originating in the innermost hot flow. The line shape is distorted by relativistic motion of the orbiting plasma and the gravitational pull of the black hole. The QPO arises from the immediate vicinity of the black hole, but its physical origin has long been debated. It has been suggested that the QPO originates via Lense-Thirring precession, a General Relativistic effect causing the inner flow to precess as the spinning black hole twists up the surrounding space-time. This predicts a characteristic rocking of the iron line between red and blue shift as the receding and approaching sides of the disk are respectively illuminated. I will talk about our observations of the black hole binary H 1743-322 in which the line energy varies in step with the ~4.5s QPO cycle, providing strong evidence that such QPOs originate via Lense-Thirring precession. This effect has previously been measured in our Solar System but our detection is in the strong field regime of General Relativity, at a precession rate 14 orders of magnitude faster than possible in the Earth's gravitational field. Our result enables the application of tomographic techniques to map the motion of matter in the strong gravity near black hole event horizons.
Electromagnetic versus Lense-Thirring alignment of black hole accretion discs
NASA Astrophysics Data System (ADS)
Polko, Peter; McKinney, Jonathan C.
2017-01-01
Accretion discs and black holes (BHs) have angular momenta that are generally misaligned, which can lead to warped discs and bends in any jets produced. We examine whether a disc that is misaligned at large radii can be aligned more efficiently by the torque of a Blandford-Znajek (BZ) jet than by Lense-Thirring (LT) precession. To obtain a strong result, we will assume that these torques maximally align the disc, rather than cause precession, or disc tearing. We consider several disc states that include radiatively inefficient thick discs, radiatively efficient thin discs, and super-Eddington accretion discs. The magnetic field strength of the BZ jet is chosen as either from standard equipartition arguments or from magnetically arrested disc (MAD) simulations. We show that standard thin accretion discs can reach spin-disc alignment out to large radii long before LT would play a role, due to the slow infall time that gives even a weak BZ jet time to align the disc. We show that geometrically thick radiatively inefficient discs and super-Eddington discs in the MAD state reach spin-disc alignment near the BH when density profiles are shallow as in magnetohydrodynamical simulations, while the BZ jet aligns discs with steep density profiles (as in advection-dominated accretion flows) out to larger radii. Our results imply that the BZ jet torque should affect the cosmological evolution of BH spin magnitude and direction, spin measurements in active galactic nuclei and X-ray binaries, and the interpretations for Event Horizon Telescope observations of discs or jets in strong-field gravity regimes.
NASA Astrophysics Data System (ADS)
Iorio, L.
2014-01-01
It has recently been proposed to combine the node drifts of the future constellation of 27 Galileo spacecraft together with those of the existing Laser Geodynamics Satellites (LAGEOS)-type satellites to improve the accuracy of the past and ongoing tests of the Lense-Thirring (LT) effect by removing the bias of a larger number of even zonal harmonics Jℓ than either done or planned so far. In practice, this goal seems difficult to achieve, for a number of reasons. First, the LT range signature of a Galileo-type satellite is as small as 0.5 mm over three-day arcs, corresponding to a node rate of just Ω̇_LT = 2 milliarcseconds per year (mas yr^-1). Some tesseral and sectorial ocean tides, such as K1 and K2, induce long-period harmonic node perturbations with frequencies that are integer multiples of the extremely slow Galileo node rate Ω̇, which completes a full cycle in about 40 yr. Thus, over time spans T of some years, they would act as superimposed semisecular aliasing trends. Since the coefficients of the Jℓ-free multisatellite linear combinations are determined only by the semimajor axis a, the eccentricity e and the inclination I, which are nominally equal for all the Galileo satellites, it is not possible to include all of them. Even using only one Galileo spacecraft together with the LAGEOS family would be unfeasible, because the resulting Galileo coefficient would be ≳ 1, thus enhancing the aliasing impact of the uncancelled nonconservative and tidal perturbations.
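To make the last point concrete, here is a minimal sketch (not the author's computation) of where such a combination coefficient comes from, using the standard first-order J2 node-rate formula and nominal orbital elements for a LAGEOS-type and a Galileo-type satellite:

from math import sqrt, cos, radians

GM = 3.986e14   # Earth's gravitational parameter, m^3/s^2
R = 6.378e6     # Earth's equatorial radius, m

def dnode_dJ2(a, i_deg):
    """Partial of the J2-driven node rate, -(3/2) n (R/a)^2 cos(i), for e ~ 0."""
    n = sqrt(GM / a**3)    # mean motion
    return -1.5 * n * (R / a)**2 * cos(radians(i_deg))

lageos = dnode_dJ2(12.27e6, 109.9)   # LAGEOS: a ~ 12270 km, i ~ 109.9 deg
galileo = dnode_dJ2(29.6e6, 56.0)    # Galileo: a ~ 29600 km, i ~ 56 deg

# Coefficient weighting the Galileo node in a two-satellite combination
# designed to cancel J2:
k = -lageos / galileo
print(f"Galileo coefficient |k| ~ {abs(k):.0f}")   # well above 1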
Misaligned Accretion and Jet Production
NASA Astrophysics Data System (ADS)
King, Andrew; Nixon, Chris
2018-04-01
Disk accretion onto a black hole is often misaligned from its spin axis. If the disk maintains a significant magnetic field normal to its local plane, we show that dipole radiation from Lense-Thirring precessing disk annuli can extract a significant fraction of the accretion energy, sharply peaked toward small disk radii R (as R^(-17/2) for fields with constant equipartition ratio). This low-frequency emission is immediately absorbed by surrounding matter or refracted toward the regions of lowest density. The resultant mechanical pressure, dipole angular pattern, and much lower matter density toward the rotational poles create a strong tendency to drive jets along the black hole spin axis, similar to the spin-axis jets of radio pulsars, also strong dipole emitters. The coherent primary emission may explain the high brightness temperatures seen in jets. The intrinsic disk emission is modulated at Lense-Thirring frequencies near the inner edge, providing a physical mechanism for low-frequency quasi-periodic oscillations (QPOs). Dipole emission requires nonzero hole spin, but uses only disk accretion energy. No spin energy is extracted, unlike the Blandford-Znajek process. Magnetohydrodynamic/general-relativistic magnetohydrodynamic (MHD/GRMHD) formulations do not directly give radiation fields, but can be checked post-process for dipole emission and therefore self-consistency, given sufficient resolution. Jets driven by dipole radiation should be more common in active galactic nuclei (AGN) than in X-ray binaries, and in low accretion-rate states than high, agreeing with observation. In non-black hole accretion, misaligned disk annuli precess because of the accretor's mass quadrupole moment, similarly producing jets and QPOs.
Overview of the LARES Mission: orbit, error analysis and technological aspects
NASA Astrophysics Data System (ADS)
Ciufolini, Ignazio; Paolozzi, Antonio; Paris, Claudio
2012-03-01
LARES (LAser RElativity Satellite) is an Italian Space Agency (ASI) mission to be launched at the beginning of 2012 with the new European launch vehicle, VEGA; the launch opportunity was provided by the European Space Agency (ESA). LARES is a laser-ranged satellite; it will be launched into a nearly circular orbit, with an altitude of 1450 km and an inclination of 69.5 degrees. The goal of the mission is the measurement of the Lense-Thirring effect with an uncertainty of a few percent; such a small uncertainty will be achieved by using LARES data together with data from the LAGEOS I (NASA) and LAGEOS II (NASA and ASI) satellites, and because the GRACE mission (NASA-CSR and DLR-GFZ) is improving Earth's gravity field models. This paper describes the LARES experiment along with the principal error sources affecting the measurement. Furthermore, some engineering aspects of the mission, in particular the structure and materials of the satellite (designed to minimize the non-gravitational perturbations), are described.
According to QFT there is likely no Lense-Thirring effect
NASA Astrophysics Data System (ADS)
Chen, Shao-Guang
According to QFT it is deduced that gravitation likely originates from the polarization effect of Dirac vacuum fluctuations (Chen Shao-Guang, Nuovo Cimento B 104, 611, 1989). In the Dirac vacuum the lowest-energy virtual neutrinos v0 are the most numerous; they exert an isotropic colliding pressure on an isolated mass-point A (mass m), so the net force on A is zero. When another mass-point B (mass M) near A obstructs the v0 flux shooting toward A, the v0 number along the line connecting A and B decreases and the isotropic distribution of v0 is destroyed. This leads not only to a change in momentum P (producing a net v0 flux and a net force F_p) but also to a change in energy E or rest mass m (producing a net force F_m), because in QFT the rest mass is not the bare mass but the renormalized physical mass, which contains v0 with energy. From the definition of force, F = F_p + F_m, with F_p = m (dv/dt) and F_m = v (dm/dt) (1), the net force F_Q on A (or B) (a quasi-Casimir pressure of the weak interaction) is F_Q = F_p + F_m = -K (mM/r^2) (r̂ + v/c) (2). K, calculated from the weak-electromagnetism unified theory (W-EUT), has the same order of magnitude as the experimental gravitational constant G. Letting a photon enter the neighborhood of mass-point B and return, we calculate the change in the photon's momentum-energy with Eq. (2) and transform it into a change in the space-time metric through the commutation relations between conjugate momenta and conjugate coordinates in quantum theory. Again using the standard procedures of calibrating clocks and rulers, we obtain the Schwarzschild metric with constant K (Chen Shao-Guang, Origin of gravitation and gravitational redshift, pp. 41-48, Chinese Szechwan Science-Technique Press, Chengtu, 2004). Thus F_Q exhibits the geodetic effect. From the change in masses caused by Bondi's inductive transfer of energy in GR (H. Bondi, Proc. R. Soc. London A 427, 249, 1990) and Eq. (1), a new gravitational formula is deduced: F_G = F_p + F_m = -G (mM/r^2) (r̂ + v/c) (3). F_G is equivalent to Einstein's equation, and multi-body gravitational problems can be solved with F_G. F_G and F_Q serve as a bridge joining QFT and GR. If K ≡ G, gravitational theory would be merged into W-EUT. The gravitational laws predicted by F_G and F_Q are identical except for quantum effects and the Lense-Thirring effect (the dragging of inertial frames). F_Q has quantum effects but F_G has not. Quantum effects of gravity have been verified by Nesvizhevsky et al. with ultracold neutrons falling in the Earth's gravitational field (V. V. Nesvizhevsky et al., Nature 415, 297, 2002), which shows that F_Q is essential while F_G is phenomenological. F_G has a Lense-Thirring effect but F_Q has not, because the gravitational field of F_G surrounds B, whereas the net v0 flux (the gravitational field of F_Q) appears only along the line connecting A and B. When mass-point A moves to a new place, the net v0 flux immediately appears on a new line connecting A and B. When the position of mass-point A does not change but B rotates, the net v0 flux stays on the original line and does not rotate with B. Therefore, in 2004 I predicted that GP-B would not find the Lense-Thirring precession and would find only the geodetic precession, a 'negative result' for the main mission of GP-B. Whether the prediction from QFT or from GR is more correct will be judged by GP-B.
Schrödinger Operator with Non-Zero Accumulation Points of Complex Eigenvalues
NASA Astrophysics Data System (ADS)
Bögli, Sabine
2017-06-01
We study Schrödinger operators H = -Δ + V in L^2(Ω), where Ω is R^d or the half-space R^d_+, subject to (real) Robin boundary conditions in the latter case. For p > d we construct a non-real potential V ∈ L^p(Ω) ∩ L^∞(Ω) that decays at infinity so that H has infinitely many non-real eigenvalues accumulating at every point of the essential spectrum σ_ess(H) = [0,∞). This demonstrates that the Lieb-Thirring inequalities for selfadjoint Schrödinger operators are no longer true in the non-selfadjoint case.
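For context, the selfadjoint inequality referred to here is the classical Lieb-Thirring bound on the negative eigenvalues E_j of H = -Δ + V; the following standard form (with V_- the negative part of V) is stated for the reader's convenience rather than quoted from the paper:

\sum_j |E_j|^{\gamma} \;\le\; L_{\gamma,d} \int_{\mathbb{R}^d} V_-(x)^{\gamma+d/2}\,dx,
\qquad \gamma \ge \tfrac{1}{2}\ (d=1), \quad \gamma > 0\ (d=2), \quad \gamma \ge 0\ (d \ge 3).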
A ring lasers array for fundamental physics
NASA Astrophysics Data System (ADS)
Di Virgilio, Angela; Allegrini, Maria; Beghi, Alessandro; Belfi, Jacopo; Beverini, Nicolò; Bosi, Filippo; Bouhadef, Bachir; Calamai, Massimo; Carelli, Giorgio; Cuccato, Davide; Maccioni, Enrico; Ortolan, Antonello; Passeggio, Giuseppe; Porzio, Alberto; Ruggiero, Matteo Luca; Santagata, Rosa; Tartaglia, Angelo
2014-12-01
After reviewing the importance of light as a probe for testing the structure of space-time, we describe the GINGER project. GINGER will be a three-dimensional array of large-size ring lasers able to measure the de Sitter and Lense-Thirring effects. The instrument will be located at the underground laboratory of Gran Sasso, in Italy. We describe the preliminary actions and measurements already under way and present the full road map to GINGER. The intermediate apparatuses GP2 and GINGERino are described. GINGER is expected to be fully operating in a few years.
McCaffrey, Katherine A; Jones, Brian; Mabrey, Natalie; Weiss, Bernard; Swan, Shanna H; Patisaul, Heather B
2013-05-01
Bisphenol A (BPA) is a high volume production chemical used in polycarbonate plastics, epoxy resins, thermal paper receipts, and other household products. The neural effects of early life BPA exposure, particularly to low doses administered orally, remain unclear. Thus, to better characterize the dose range over which BPA alters sex specific neuroanatomy, we examined the impact of perinatal BPA exposure on two sexually dimorphic regions in the anterior hypothalamus, the sexually dimorphic nucleus of the preoptic area (SDN-POA) and the anteroventral periventricular (AVPV) nucleus. Both are sexually differentiated by estradiol and play a role in sex specific reproductive physiology and behavior. Long Evans rats were prenatally exposed to 10, 100, 1000, or 10,000 μg/kg bw/day BPA through daily, non-invasive oral administration of dosed cookies to the dams. Offspring were reared to adulthood. Their brains were collected and immunolabeled for tyrosine hydroxylase (TH) in the AVPV and calbindin (CALB) in the SDN-POA. We observed decreased TH-ir cell numbers in the female AVPV across all exposure groups, an effect indicative of masculinization. In males, AVPV TH-ir cell numbers were significantly reduced in only the BPA 10 and BPA 10,000 groups. SDN-POA endpoints were unaltered in females, but in males SDN-POA volume was significantly lower in all BPA exposure groups, and CALB-ir was significantly lower in all but the BPA 1000 group. These effects are consistent with demasculinization. Collectively these data demonstrate that early life oral exposure to BPA at levels well below the current No Observed Adverse Effect Level (NOAEL) of 50 mg/kg/day can alter sex specific hypothalamic morphology in the rat.
Boussouar, A; Araneda, S; Hamelin, C; Soulage, C; Kitahama, K; Dalmaz, Y
2009-03-06
Ozone (O3) is widely distributed in the environment and reaches high levels in air pollution. However, very few studies have documented the effects of O3 exposure during pregnancy on postnatal development. The long-term effects of prenatal O3 exposure in rats (0.5 ppm, 12 h/day, from embryonic day E5 to E20) were evaluated in the adult nucleus tractus solitarius (NTS), which contributes to respiratory control. The neuronal response was assessed by Fos protein immunolabeling (Fos-IR), and catecholaminergic neuron involvement by tyrosine hydroxylase labeling (TH-IR). Adult offspring were analyzed at baseline and following immobilization stress (one hour, plus two hours' recovery); immunolabeling was observed by confocal microscopy. Prenatal O3 increased the baseline TH gray level per cell (p < 0.001). In contrast, the number of Fos-IR cells, the number of Fos-IR/TH-IR colabeled cells, and the proportion of TH cells double-labeled with Fos remained unchanged. After stress, the TH gray level (p < 0.001), the number of Fos-IR cells (p < 0.001) and of colabeled Fos-IR/TH-IR cells (p < 0.05), and the percentage of colabeled Fos-IR/TH-IR neurons relative to TH-IR cells (p < 0.05) increased in the control group. In prenatally O3-exposed rats, immobilization stress abolished these increases and reduced the TH gray level (p < 0.05), indicating that prenatal O3 led to a loss of adult NTS reactivity to stress. We conclude that long-lasting sequelae persist in the offspring well beyond the period of prenatal O3 exposure: prenatal O3 leaves a print on the NTS that is revealed by stress, suggesting disrupted neuronal plasticity in response to new challenges.
Sasahara, Tais Harumi de Castro; Leal, Leonardo Martins; Spillantini, Maria Grazia; Machado, Márcia Rita Fernandes
2015-04-01
The majority of neuroanatomical and chemical studies of the olfactory bulb have been performed in small rodents, such as rats and mice. Thus, this study aimed to describe the organisation and the chemical neuroanatomy of the main olfactory bulb (MOB) in paca, a large rodent belonging to the Hystricomorpha suborder and Caviomorpha infraorder. For this purpose, histological and immunohistochemical procedures were used to characterise the tyrosine hydroxylase (TH) and calretinin (CR) neuronal populations and their distribution. The paca MOB has eight layers: the olfactory nerve layer (ONL), the glomerular layer (GL), the external plexiform layer (EPL; subdivided into the inner and outer sublayers), the mitral cell layer (MCL), the internal plexiform layer (IPL), the granule cell layer (GCL), the periventricular layer and the ependymal layer. TH-ir neurons were found mostly in the GL, and moderate numbers of TH-ir neurons were scattered in the EPL. Numerous varicose fibres were distributed in the IPL and in the GCL. CR-ir neurons concentrated in the GL, around the base of the olfactory glomeruli. Most of the CR-ir neurons were located in the MCL, IPL and GCL. Some of the granule cells had an apical dendrite with a growth cone. The CR immunoreactivity was also observed in the ONL, with the olfactory nerves strongly immunostained. This study has shown that the MOB organisation in paca is consistent with descriptions in other mammals. The characterisation and distribution of the TH and CR populations in the MOB are not exclusive to this species: this large rodent shares common patterns with another caviomorph rodent, the guinea pig, and with myomorph rodents such as mice, rats and hamsters.
Study to determine cloud motion from meteorological satellite data
NASA Technical Reports Server (NTRS)
Clark, B. B.
1972-01-01
Processing techniques were tested for deducing cloud motion vectors from overlapped portions of pairs of pictures made from meteorological satellites. This was accomplished by programming and testing techniques for estimating pattern motion by means of cross correlation analysis with emphasis placed upon identifying and reducing errors resulting from various factors. Techniques were then selected and incorporated into a cloud motion determination program which included a routine which would select and prepare sample array pairs from the preprocessed test data. The program was then subjected to limited testing with data samples selected from the Nimbus 4 THIR data provided by the 11.5 micron channel.
NASA Technical Reports Server (NTRS)
Van Patten, R. A.; Everitt, C. W. F.
1975-01-01
In 1918, J. Lense and H. Thirring calculated that a moon in orbit around a massive rotating planet would experience a nodal dragging effect due to general relativity. We describe an experiment to measure this effect with two counter-orbiting drag-free satellites in polar earth orbit. For a 2 1/2 year experiment, the measurement accuracy should approach 1%. In addition to precision tracking data from existing ground stations, satellite-to-satellite Doppler ranging data are taken at points of passing near the poles. New geophysical information on both earth harmonics and tidal effects is inherent in the polar ranging data.
Gravitomagnetism: From Einstein's 1912 Paper to the Satellites LAGEOS and Gravity Probe B
NASA Astrophysics Data System (ADS)
Pfister, Herbert
The first concrete calculations of (linear) gravitomagnetic effects were performed by Einstein in 1912-1913. Einstein also directly and decisively contributed to the "famous" papers by Thirring (and Lense) from 1918. Generalizations to strong fields were not performed until 1966, by Brill and Cohen. Extensions to higher orders of the angular velocity ω by Pfister and Braun (1985-1989) led to a solution of the centrifugal force problem and to a quasiglobal principle of equivalence. The difficulties, but also the recent successes, in measuring gravitomagnetic effects are reviewed, and cosmological and Machian aspects of gravitomagnetism are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Török, Gabriel; Goluchová, Katerina; Urbanec, Martin, E-mail: gabriel.torok@gmail.com, E-mail: katka.g@seznam.cz, E-mail: martin.urbanec@physics.cz
2016-12-20
Twin-peak quasi-periodic oscillations (QPOs) are observed in the X-ray power-density spectra of several accreting low-mass neutron star (NS) binaries. In our previous work we have considered several QPO models. We have identified and explored the mass-angular-momentum relations implied by individual QPO models for the atoll source 4U 1636-53. In this paper we extend our study and confront QPO models with various NS equations of state (EoS). We start with simplified calculations assuming Kerr background geometry and then present results of detailed calculations considering the influence of the NS quadrupole moment (related to rotationally induced NS oblateness) assuming Hartle-Thorne spacetimes. We show that the application of a concrete EoS together with a particular QPO model yields a specific mass-angular-momentum relation. However, we demonstrate that the degeneracy in mass and angular momentum can be removed when the NS spin frequency inferred from the X-ray burst observations is considered. We inspect a large set of EoS and discuss their compatibility with the considered QPO models. We conclude that when the NS spin frequency in 4U 1636-53 is close to 580 Hz, we can exclude 51 of the 90 considered combinations of EoS and QPO models. We also discuss additional restrictions that may exclude even more combinations. Namely, 13 EoS are compatible with the observed twin-peak QPOs and the relativistic precession model. However, when considering the low-frequency QPOs and Lense-Thirring precession, only 5 EoS are compatible with the model.
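To illustrate the frequency identifications at stake (the relativistic precession model identifies the twin peaks with ν_φ and ν_φ - ν_r, and the low-frequency QPO with the Lense-Thirring precession ν_φ - ν_θ), here is a minimal sketch using the standard Kerr test-particle formulas; the mass, spin and radius below are illustrative placeholders, not values fitted to 4U 1636-53:

from math import pi, sqrt

G, c, Msun = 6.674e-11, 2.998e8, 1.989e30

def kerr_frequencies(M, a, x):
    """Orbital and epicyclic frequencies (Hz) at x = r c^2 / (G M),
    for dimensionless spin a (prograde circular geodesics)."""
    nu_phi = c**3 / (2 * pi * G * M) / (x**1.5 + a)               # orbital
    nu_r = nu_phi * sqrt(1 - 6/x + 8*a*x**-1.5 - 3*a**2/x**2)     # radial
    nu_th = nu_phi * sqrt(1 - 4*a*x**-1.5 + 3*a**2/x**2)          # vertical
    return nu_phi, nu_r, nu_th

nu_phi, nu_r, nu_th = kerr_frequencies(M=1.7 * Msun, a=0.3, x=6.5)
print(f"upper QPO (nu_phi):             {nu_phi:6.0f} Hz")
print(f"lower QPO (nu_phi - nu_r):      {nu_phi - nu_r:6.0f} Hz")
print(f"LT precession (nu_phi - nu_th): {nu_phi - nu_th:6.0f} Hz")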
Long-range interacting systems in the unconstrained ensemble.
Latella, Ivan; Pérez-Madrid, Agustín; Campa, Alessandro; Casetti, Lapo; Ruffo, Stefano
2017-01-01
Completely open systems can exchange heat, work, and matter with the environment. While energy, volume, and number of particles fluctuate under completely open conditions, the equilibrium states of the system, if they exist, can be specified using the temperature, pressure, and chemical potential as control parameters. The unconstrained ensemble is the statistical ensemble describing completely open systems and the replica energy is the appropriate free energy for these control parameters from which the thermodynamics must be derived. It turns out that macroscopic systems with short-range interactions cannot attain equilibrium configurations in the unconstrained ensemble, since temperature, pressure, and chemical potential cannot be taken as a set of independent variables in this case. In contrast, we show that systems with long-range interactions can reach states of thermodynamic equilibrium in the unconstrained ensemble. To illustrate this fact, we consider a modification of the Thirring model and compare the unconstrained ensemble with the canonical and grand-canonical ones: The more the ensemble is constrained by fixing the volume or number of particles, the larger the space of parameters defining the equilibrium configurations.
The Stochastic X-Ray Variability of the Accreting Millisecond Pulsar MAXI J0911-655
NASA Technical Reports Server (NTRS)
Bult, Peter
2017-01-01
In this work, I report on the stochastic X-ray variability of the 340 Hz accreting millisecond pulsar MAXI J0911-655. Analyzing pointed observations of the XMM-Newton and NuSTAR observatories, I find that the source shows broad band-limited stochastic variability in the 0.01-10 Hz range, with a total fractional variability of approximately 24 percent (rms) in the 0.4-3 keV energy band that increases to approximately 40 percent (rms) in the 3-10 keV band. Additionally, a pair of harmonically related quasi-periodic oscillations (QPOs) is discovered. The fundamental frequency of this harmonic pair is observed between 62 and 146 mHz. Like the band-limited noise, the amplitudes of the QPOs show a steep increase as a function of energy; this suggests that they share a similar origin, likely the inner accretion flow. Based on their energy dependence and frequency relation with respect to the noise terms, the QPOs are identified as low-frequency oscillations and discussed in terms of the Lense-Thirring precession model.
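As a sketch of how a fractional rms amplitude like the ~24 percent above is obtained in practice, one computes an rms-normalized (Miyamoto) power spectrum of the light curve and integrates it over the frequency band; the light curve below is a synthetic Poisson stand-in, not the MAXI J0911-655 data:

import numpy as np

rng = np.random.default_rng(0)
dt, n = 0.01, 2**16                   # 10 ms bins
rate = 100.0                          # mean count rate, counts/s (assumed)
x = rng.poisson(rate * dt, n)         # toy light curve, counts per bin

mu = x.mean()
X = np.fft.rfft(x - mu)
freq = np.fft.rfftfreq(n, dt)
# rms-normalized periodogram: its integral over frequency equals the
# fractional variance of the light curve
power = 2 * dt * np.abs(X)**2 / (n * mu**2)

band = (freq >= 0.01) & (freq <= 10.0)
frac_rms = np.sqrt(np.sum(power[band]) * (freq[1] - freq[0]))
print(f"fractional rms, 0.01-10 Hz: {100 * frac_rms:.1f}%")   # Poisson-only toy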
NASA Astrophysics Data System (ADS)
Sobacchi, Emanuele; Sormani, Mattia C.; Stamerra, Antonio
2017-02-01
We describe a scenario to explain blazar periodicities with time-scales of ~ a few years. The scenario is based on a binary supermassive black hole (SMBH) system in which one of the two SMBHs carries a jet. We discuss the various mechanisms that can cause the jet to precess and produce corkscrew patterns through space on a scale of ~ a few pc. It turns out that the dominant mechanism responsible for the precession is simply the imprint of the orbital speed of the jet-carrying SMBH on the jet. Gravitational deflection and Lense-Thirring precession (due to the gravitational field of the other SMBH) are second-order effects. We complement the scenario with a kinematical jet model inspired by the spine-sheath structure observed in M87. One of the main advantages of such a structure is that it allows the peak of the synchrotron emission to scale with frequency according to νF_ν ∝ ν^ξ as the viewing angle is changed, where ξ is not necessarily 3 or 4 as in the case of jets with uniform velocity, but can be ξ ~ 1. Finally, we apply the model to the source PG1553+113, which has recently been claimed to show a Tobs = (2.18 ± 0.08) yr periodicity. We are able to reproduce the optical and gamma-ray light curves and multiple synchrotron spectra simultaneously. We also give estimates of the source mass and size.
SELF-TRAPPING OF DISKOSEISMIC CORRUGATION MODES IN NEUTRON STAR SPACETIMES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsang, David; Pappas, George
2016-02-10
We examine the effects of higher-order multipole contributions of rotating neutron star (NS) spacetimes on the propagation of corrugation (c-)modes within a thin accretion disk. We find that the Lense–Thirring precession frequency, which determines the propagation region of the low-frequency fundamental corrugation modes, can experience a turnover allowing for c-modes to become self-trapped for sufficiently high dimensionless spin j and quadrupole rotational deformability α. If such self-trapping c-modes can be detected, e.g., through phase-resolved spectroscopy of the iron line for a high-spin low-mass accreting neutron star, this could potentially constrain the spin-induced NS quadrupole and the NS equation of state.
NASA Technical Reports Server (NTRS)
Oakes, Arnold G.; Han, Daesoo; Kyle, H. Lee; Feldman, Gene Carl; Fleig, Albert J.; Hurley, Edward J.; Kaufman, Barbara A.
1989-01-01
Data sets resulting from the first nine years of operations of the Nimbus-7 satellite are briefly described. After a brief description of the Nimbus-7 mission, each of the eight experiments on board the satellite (Coastal Zone Color Scanner (CZCS), Earth Radiation Budget (ERB), Limb Infrared Monitor of the Stratosphere (LIMS), Stratospheric Aerosol Measurement II (SAM II), Stratospheric and Mesospheric Sounder (SAMS), Solar Backscatter Ultraviolet/Total Ozone Mapping Spectrometer (SBUV/TOMS), Scanning Multichannel Microwave Radiometer (SMMR) and the Temperature Humidity Infrared Radiometer (THIR)) is introduced, and their respective data products are described in terms of media, general format, and suggested applications. Extensive references are provided. Instructions for obtaining further information and for ordering data products are given.
Astrophysical observations: lensing and eclipsing Einstein's theories.
Bennett, Charles L
2005-02-11
Albert Einstein postulated the equivalence of energy and mass, developed the theory of special relativity, explained the photoelectric effect, and described Brownian motion in five papers, all published in 1905, 100 years ago. With these papers, Einstein provided the framework for understanding modern astrophysical phenomena. Conversely, astrophysical observations provide one of the most effective means for testing Einstein's theories. Here, I review astrophysical advances precipitated by Einstein's insights, including gravitational redshifts, gravitational lensing, gravitational waves, the Lense-Thirring effect, and modern cosmology. A complete understanding of cosmology, from the earliest moments to the ultimate fate of the universe, will require developments in physics beyond Einstein, to a unified theory of gravity and quantum physics.
Empirical Foundations of the Relativistic Gravity
NASA Astrophysics Data System (ADS)
Ni, Wei-Tou
In 1859, Le Verrier discovered the anomalous perihelion advance of Mercury. This anomaly turned out to be the first relativistic-gravity effect observed. Over the 141 years to 2000, the precision of laboratory and space experiments, and of astrophysical and cosmological observations of relativistic gravity, improved by 3 orders of magnitude. In 1999, we envisaged a 3-6 order improvement in the next 30 years in all directions of tests of relativistic gravity. In 2000, the interferometric gravitational wave detectors began their runs to accumulate data. In 2003, the measurement of the relativistic Shapiro time delay of the Cassini spacecraft determined the relativistic-gravity parameter γ to be 1.000021 ± 0.000023 of general relativity, a 1.5-order improvement. In October 2004, Ciufolini and Pavlis reported a measurement of the Lense-Thirring effect on the LAGEOS and LAGEOS2 satellites to be 0.99 ± 0.10 of the value predicted by general relativity. In April 2004, Gravity Probe B (the Stanford relativity gyroscope experiment to measure the Lense-Thirring effect to 1%) was launched and has been accumulating science data for more than 170 days now. μSCOPE (MICROSCOPE: MICRO-Satellite à traînée Compensée pour l'Observation du Principe d'Équivalence) is on its way for a 2008 launch to test the Galileo equivalence principle to 10^-15. LISA Pathfinder (SMART2), the technological demonstrator for the LISA (Laser Interferometer Space Antenna) mission, is well on its way for a 2009 launch. STEP (Satellite Test of the Equivalence Principle) and ASTROD (Astrodynamical Space Test of Relativity using Optical Devices) are in good planning stages. Various astrophysical and cosmological tests of relativistic gravity will reach precision and ultra-precision stages. Clock tests and atomic interferometry tests of relativistic gravity will reach ever-increasing precision. These will revive interest and development in both the experimental and theoretical aspects of gravity, and may lead to answers to some profound questions of gravity and the cosmos.
NASA Astrophysics Data System (ADS)
Iorio, Lorenzo
2017-07-01
We develop a general approach to analytically calculate the perturbations Δδτ_p of the orbital component of the change δτ_p of the times of arrival of the pulses emitted by a binary pulsar p induced by the post-Keplerian accelerations due to the mass quadrupole Q_2 and the post-Newtonian gravitoelectric (GE) and Lense-Thirring (LT) fields. We apply our results to the so-far still hypothetical scenario involving a pulsar orbiting the supermassive black hole at Sgr A* in the Galactic Center. We also evaluate the gravitomagnetic and quadrupolar Shapiro-like propagation delays δτ_prop. By assuming the orbit of the existing main-sequence star S2 and a time span as long as its orbital period P_b, we obtain |Δδτ_p^GE| ≲ 10^3 s, |Δδτ_p^LT| ≲ 0.6 s, |Δδτ_p^{Q_2}| ≲ 0.04 s. Faster (P_b = 5 yr) and more eccentric (e = 0.97) orbits would imply net shifts per revolution as large as |⟨Δδτ_p^GE⟩| ≲ 10 Ms, |⟨Δδτ_p^LT⟩| ≲ 400 s, |⟨Δδτ_p^{Q_2}⟩| ≲ 10^3 s, depending on the other orbital parameters and the initial epoch. For the propagation delays, we have |δτ_prop^LT| ≲ 0.02 s, |δτ_prop^{Q_2}| ≲ 1 μs. The results for the mass quadrupole and the Lense-Thirring field depend, among other things, on the spatial orientation of the spin axis of the black hole. The expected precision of pulsar timing in Sgr A* is of the order of 100 μs, or perhaps even 1-10 μs. Our method is, in principle, limited neither to some particular orbital configuration nor to the dynamical effects considered in the present study.
Experimental determination of gravitomagnetic effects by means of ring lasers
NASA Astrophysics Data System (ADS)
Tartaglia, Angelo
2013-08-01
A new experiment aimed at the detection of the gravito-magnetic Lense-Thirring effect at the surface of the Earth will be presented; the name of the experiment is GINGER. The proposed technique is based on the behavior of light beams in ring lasers, also known as gyrolasers. A three-dimensional array of ring lasers will be attached to a rigid "monument"; each ring will have a different orientation in space. Within the space-time of a rotating mass the propagation of light is indeed anisotropic; part of the anisotropy is purely kinematical (Sagnac effect), part is due to the interaction between the gravito-electric field of the source and the kinematical motion of the observer (de Sitter effect), and finally there is a contribution from the gravito-magnetic component of the Earth (gravito-magnetic frame dragging, or Lense-Thirring effect). In a ring laser a light beam traveling counterclockwise is superposed on another beam traveling in the opposite sense. The anisotropy in the propagation leads to standing waves with slightly different frequencies in the two directions; the final effect is a beat frequency proportional to the size of the instrument and its effective rotation rate in space, including the gravito-magnetic drag. Current laser techniques and the performances of the best existing ring lasers are at the moment within one order of magnitude of the sensitivity required for the detection of gravito-magnetic effects, so the objective of GINGER, which aims to improve the sensitivity by a couple of orders of magnitude with respect to the present, is within the range of feasibility. The experiment will be underground, probably in the Gran Sasso National Laboratories in Italy, and is based on an international collaboration among four Italian groups, the Technische Universität München and the University of Canterbury in Christchurch (NZ).
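For scale, the beat frequency described above follows the Sagnac formula f = 4A (n̂·Ω)/(λL) for a ring of area A, perimeter L and wavelength λ; the sketch below uses nominal values (a 4 m x 4 m horizontal HeNe ring at roughly the Gran Sasso latitude), so the numbers are indicative only:

from math import sin, radians

A, L, lam = 16.0, 16.0, 632.8e-9   # area m^2, perimeter m, HeNe wavelength m
omega_earth = 7.292e-5             # Earth rotation rate, rad/s
lat = radians(42.4)                # approximate Gran Sasso latitude

# Horizontal ring: the projection of Earth's rotation on the ring normal
# is omega_earth * sin(latitude).
f_sagnac = 4 * A * omega_earth * sin(lat) / (lam * L)

# The gravitomagnetic drag at Earth's surface is of order 1e-14 rad/s,
# i.e. roughly one part in 1e9-1e10 of the Sagnac signal:
omega_lt = 1.7e-14                 # rad/s, order-of-magnitude estimate
print(f"Sagnac beat ~ {f_sagnac:.0f} Hz; "
      f"LT contribution ~ {f_sagnac * omega_lt / omega_earth:.1e} Hz")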
Tomographic reflection modelling of quasi-periodic oscillations in the black hole binary H 1743-322
NASA Astrophysics Data System (ADS)
Ingram, Adam; van der Klis, Michiel; Middleton, Matthew; Altamirano, Diego; Uttley, Phil
2017-01-01
Accreting stellar mass black holes (BHs) routinely exhibit Type-C quasi-periodic oscillations (QPOs). These are often interpreted as Lense-Thirring precession of the inner accretion flow, a relativistic effect whereby the spin of the BH distorts the surrounding space-time, inducing nodal precession. The best evidence for the precession model is the recent discovery, using a long joint XMM-Newton and NuSTAR observation of H 1743-322, that the centroid energy of the iron fluorescence line changes systematically with QPO phase. This was interpreted as the inner flow illuminating different azimuths of the accretion disc as it precesses, giving rise to a blueshifted/redshifted iron line when the approaching/receding disc material is illuminated. Here, we develop a physical model for this interpretation, including a self-consistent reflection continuum, and fit this to the same H 1743-322 data. We use an analytic function to parametrize the asymmetric illumination pattern on the disc surface that would result from inner flow precession, and find that the data are well described if two bright patches rotate about the disc surface. This model is preferred to alternatives considering an oscillating disc ionization parameter, disc inner radius and radial emissivity profile. We find that the reflection fraction varies with QPO phase (3.5σ), adding to the now formidable body of evidence that Type-C QPOs are a geometric effect. This is the first example of tomographic QPO modelling, initiating a powerful new technique that utilizes QPOs in order to map the dynamics of accreting material close to the BH.
On the anomalous secular increase of the eccentricity of the orbit of the Moon
NASA Astrophysics Data System (ADS)
Iorio, L.
2011-08-01
A recent analysis of a Lunar Laser Ranging (LLR) data record spanning 38.7 yr revealed an anomalous increase of the eccentricity e of the lunar orbit amounting to de/dt = (9 ± 3) × 10^-12 yr^-1. The present-day models of the dissipative phenomena occurring in the interiors of both the Earth and the Moon are not able to explain it. In this paper, we examine several dynamical effects, not modelled in the data analysis, in the framework of long-range modified models of gravity and of the standard Newtonian/Einsteinian paradigm. It turns out that none of them can accommodate the measured rate. Many of them do not even induce long-term changes in e; other models do yield such an effect, but the resulting magnitudes are in disagreement with the measured value. In particular, the general relativistic gravitomagnetic acceleration of the Moon due to the Earth's angular momentum has the right order of magnitude, but the resulting Lense-Thirring secular effect for the eccentricity vanishes. A potentially viable Newtonian candidate would be a trans-Plutonian massive object (Planet X/Nemesis/Tyche), since it would actually affect e with a non-vanishing long-term variation. On the other hand, the values of the physical and orbital parameters of such a hypothetical body required to obtain at least the right order of magnitude are completely unrealistic: suffice it to say that an Earth-sized planet would have to be at 30 au, while a jovian mass would have to be at 200 au. Thus, the issue of finding a satisfactory explanation for the anomalous behaviour of the Moon's eccentricity remains open.
A Disk Origin for S-Stars in the Galactic Center?
NASA Astrophysics Data System (ADS)
Haislip, G.; Youdin, A. N.
2005-12-01
Young massive stars in the central 0.5" of our Galaxy probe dynamics around supermassive black holes, and challenge our understanding of star formation in extreme environments. Recent observations (Ghez et al. 2005, Eisenhauer et al. 2005) show large eccentricities and a seemingly random distribution of inclinations, which seems to contradict formation in a disk. We investigate scenarios in which the massive S-stars are born with circular, coplanar orbits and perturbed to their current relaxed state. John Chambers' MERCURY code is modified to include post-Newtonian corrections to the gravitational central force of a Schwarzschild hole and Lense-Thirring precession about a Kerr black hole. The role of resonant relaxation (Rauch & Tremaine, 1996) of angular momentum between S-stars and a background stellar halo is studied in this context.
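As a sketch of the kind of modification described above (a generic textbook 1PN term, not the actual MERCURY patch), the first post-Newtonian correction for a test particle around a central mass M can be added to the Newtonian force like this:

import numpy as np

G, c = 6.674e-11, 2.998e8

def pn1_acceleration(r_vec, v_vec, M):
    """1PN Schwarzschild correction for a test particle (harmonic gauge):
    a = (GM/(c^2 r^3)) [ (4GM/r - v^2) r_vec + 4 (r_vec . v_vec) v_vec ]."""
    r = np.linalg.norm(r_vec)
    pref = G * M / (c**2 * r**3)
    return pref * ((4 * G * M / r - v_vec @ v_vec) * r_vec
                   + 4 * (r_vec @ v_vec) * v_vec)

# Size of the correction at an S2-like pericentre (values illustrative):
M_bh = 4.0e6 * 1.989e30
r = np.array([1.8e13, 0.0, 0.0])    # ~120 au, assumed
v = np.array([0.0, 7.6e6, 0.0])     # ~7600 km/s, assumed
a1 = pn1_acceleration(r, v, M_bh)
aN = -G * M_bh * r / np.linalg.norm(r)**3
print(f"|a_1PN| / |a_N| ~ {np.linalg.norm(a1) / np.linalg.norm(aN):.0e}")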
Atmospheric Transmittance from 0.25 to 28.5 Microns: Computer Code LOWTRAN 3
1975-05-07
Ginger: Measuring Gravitomagnetic Effects by Means of Light
NASA Astrophysics Data System (ADS)
Tartaglia, Angelo
2015-01-01
GINGER is a proposal for a new experiment aimed at the detection of the gravito-magnetic Lense-Thirring effect at the surface of the Earth. A three-dimensional set of ring lasers will be mounted on a rigid "monument". In a ring laser a light beam traveling counterclockwise is superposed on another beam traveling in the opposite sense. The anisotropy in the propagation leads to standing waves with slightly different frequencies in the two directions; the resulting beat frequency is proportional to the absolute rotation rate in space, including the gravito-magnetic drag. The experiment is planned to be built in the Gran Sasso National Laboratories in Italy and is based on an international collaboration among four Italian groups, the Technische Universität München and the University of Canterbury in Christchurch (NZ).
Science Applications of the RULLI Camera: Photon Thrust, General Relativity and the Crab Nebula
NASA Astrophysics Data System (ADS)
Currie, D.; Thompson, D.; Buck, S.; Des Georges, R.; Ho, C.; Remelius, D.; Shirey, B.; Gabriele, T.; Gamiz, V.; Ulibarri, L.; Hallada, M.; Szymanski, P.
RULLI (Remote Ultra-Low Light Imager) is a unique single-photon imager with very high (microsecond) time resolution and continuous sensitivity, developed at Los Alamos National Laboratory. This technology allows a family of astrophysical and satellite observations that were not feasible in the past. We will describe the results of the analysis of recent observations of the LAGEOS II satellite and the opportunities expected for future observations of the Crab nebula. The LAGEOS/LARES experiments have measured the dynamical General Relativistic effects of the rotation of the Earth, the Lense-Thirring effect. The major error source is photon thrust, whose modeling requires knowledge of the orientation of the spin axis of LAGEOS. This information is required for the analysis of the observations to date, and for future observations to obtain more accurate measurements of the Lense-Thirring effect, of deviations from the inverse square law, and of other General Relativistic effects. The rotation of LAGEOS I is already too slow for traditional measurement methods, and LAGEOS II will soon suffer a similar fate. The RULLI camera can provide new information and an extension of the lifetime of these measurements. We will discuss the 2004 LANL observations of LAGEOS at the Starfire Optical Range, the unique software processing methods that allow high-accuracy analysis of the data (the FROID algorithm), and the transformation that allows the use of such data to obtain the orientation of the spin axis of the satellite. We are also planning future observations, including observations of the nebula surrounding the Crab Pulsar. The rapidly rotating pulsar generates enormous magnetic fields, a synchrotron plasma, and stellar winds moving at nearly the velocity of light. Since the useful observations to date rely only on the beamed emission when it points toward the Earth, most descriptions of the details of the processes have been largely theoretical. The RULLI camera's continuous sensitivity and high time resolution should enable better signal-to-noise ratios for observations that may reveal properties like the orientation of the rotational and magnetic axes of the pulsar, the temperature, composition and electrical state of the plasma, and effects of the magnetic field.
NASA Astrophysics Data System (ADS)
Ingram, A.
2017-10-01
Accreting stellar-mass black holes often show a quasi-periodic oscillation (QPO) in their X-ray flux, and an iron emission line in their X-ray spectrum. The iron line is generated through disc reflection, and its shape is distorted by rapid orbital motion and gravitational redshift. The physical origin of the QPO has long been debated, but is often attributed to Lense-Thirring precession, a General Relativistic effect causing the inner flow to precess as the spinning black hole twists up the surrounding space-time. This predicts a characteristic rocking of the iron line between red- and blueshift as the receding and approaching sides of the disc are respectively illuminated. I will first talk about our XMM-Newton and NuSTAR observations of the black hole binary H 1743-322, in which the line energy varies systematically over the ~4 s QPO cycle, as predicted. This result has enabled us to map the inner accretion disc using tomographic techniques for the first time. I will then talk about the quasi-periodic swings in X-ray polarisation angle predicted by the precession model, and show how we can go about measuring such swings with the recently selected NASA Small Explorer mission IXPE and proposed missions such as XIPE and eXTP.
Metric Properties of Relativistic Rotating Frames with Axial Symmetry
NASA Astrophysics Data System (ADS)
Torres, S. A.; Arenas, J. R.
2017-07-01
This abstract summarizes our poster contribution to the conference. We study the properties of an axially symmetric stationary gravitational field by considering the spacetime properties of a uniformly rotating frame and Einstein's Equivalence Principle (EEP). To this end, the weak-field and slow-rotation limit of the Kerr metric is determined by making a first-order perturbation to the metric of a rotating frame. We also show a local connection between the effects of centrifugal and Coriolis forces and the effects of an axially symmetric stationary weak gravitational field, by calculating the geodesic equations of a free particle. It is observed that these geodesics, on applying the EEP, are locally equivalent to the geodesic equations of a free particle in a rotating frame. Furthermore, some additional properties, such as the Lense-Thirring effect and the Sagnac effect, among others, are studied.
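For reference, the rotating-frame metric underlying this comparison is the standard Langevin form, obtained from Minkowski space by the substitution φ → φ + ωt in cylindrical coordinates (a textbook expression, not taken from the poster):

ds^2 = -\left(1 - \frac{\omega^2 \rho^2}{c^2}\right) c^2\,dt^2
       + 2\,\omega \rho^2\, d\phi\, dt + d\rho^2 + \rho^2\, d\phi^2 + dz^2 .

Its geodesics reproduce the centrifugal (ω^2 ρ) and Coriolis (2ω × v) terms in the slow-motion limit, which is the local correspondence invoked above.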
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helled, R., E-mail: rhelled@ucla.edu
Knowledge of Saturn's axial moment of inertia can provide valuable information on its internal structure. We suggest that Saturn's angular momentum may be determined by the Solstice Mission (Cassini XXM) by measuring Saturn's pole precession rate and the Lense-Thirring acceleration on the spacecraft, thereby putting constraints on Saturn's moment of inertia. It is shown that Saturn's moment of inertia can change by up to ~2% due to different core properties. However, a determination of Saturn's rotation rate is required to constrain its axial moment of inertia. A change of about seven minutes in rotation period leads to an uncertainty in the moment of inertia value similar to that from different core properties (mass, radius). A determination of Saturn's angular momentum and rotation period by the Solstice Mission could reveal important information on Saturn's internal structure, in particular its core properties.
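A quick check of that sensitivity argument: if the spacecraft measurement pins down the angular momentum L, the inferred axial moment of inertia I = L P / (2π) scales linearly with the assumed rotation period P. With a nominal Voyager-era period (an assumption, for illustration):

P = 10.0 + 39.0 / 60.0   # hours, nominal rotation period (assumed)
dP = 7.0 / 60.0          # hours, the period change quoted above

print(f"delta I / I ~ {100 * dP / P:.1f}%")   # ~1%, same order as core effects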
Gravity Probe B: Testing Einstein with Gyroscopes
NASA Technical Reports Server (NTRS)
Geveden, Rex D.; May, Todd
2003-01-01
Some 40 years in the making, NASA's historic Gravity Probe B (GP-B) mission is scheduled to launch aboard a Delta II in 2003. GP-B will test two extraordinary predictions from Einstein's General Relativity: geodetic precession and the Lense-Thirring effect (frame-dragging). Employing tiny, ultra-precise gyroscopes, GP-B features a measurement accuracy of 0.5 milli-arc-seconds per year. The extraordinary measurement precision is made possible by a host of breakthrough technologies, including electro-statically suspended, super-conducting quartz gyroscopes; virtual elimination of magnetic flux; a solid quartz star-tracking telescope; helium microthrusters for drag-free control of the spacecraft; and a 2400 liter superfluid helium dewar. This paper will provide an overview of the science, key technologies, flight hardware, integration and test, and flight operations of the GP-B space vehicle. It will also examine some of the technical management challenges of a large-scale, technology-driven, Principal Investigator-led mission.
A Monte Carlo Analysis for Collision Risk Assessment on Vega Launcher Payloads and LARES Satellite
NASA Astrophysics Data System (ADS)
Sindoni, G.; Ciufolini, I.; Battie, F.
2016-03-01
This work has been developed in the framework of the LARES mission of the Italian Space Agency (ASI). The LARES satellite has been built to test, with high accuracy, the frame-dragging effect predicted by the theory of General Relativity, specifically the Lense-Thirring drag of its node. LARES was the main payload on the qualification flight of the European Space Agency launcher VEGA. A concern arose about the possibility of an impact of the eight secondary payloads among themselves, with LARES, and with the last stage of the launcher (AVUM). An impact would have caused failure of the payloads and the production of debris, in violation of the internationally established space debris mitigation measures. As an additional contribution, this study allowed the effect of the payload release on the final manoeuvres of the AVUM to be understood.
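A generic sketch of this kind of Monte Carlo analysis (not the authors' code): sample dispersions on the release velocities, propagate simple two-body orbits, and record the minimum inter-object distance once the objects can re-approach; all dispersion values and windows below are assumptions for illustration:

import numpy as np

rng = np.random.default_rng(1)
mu = 3.986e14   # Earth GM, m^3/s^2

def min_separation(dv1, dv2, t_end=7000.0, t_excl=1000.0, steps=4000):
    """Propagate two objects released together with slightly different
    velocities (crude semi-implicit Euler) and return their minimum
    distance after an initial exclusion window (re-approach is the concern)."""
    r1 = np.array([7.828e6, 0.0, 0.0]); r2 = r1.copy()   # ~1450 km altitude
    vc = np.sqrt(mu / np.linalg.norm(r1))
    v1 = np.array([0.0, vc, 0.0]) + dv1
    v2 = np.array([0.0, vc, 0.0]) + dv2
    dt, dmin = t_end / steps, np.inf
    for k in range(steps):
        for r, v in ((r1, v1), (r2, v2)):
            v += -mu * r / np.linalg.norm(r)**3 * dt
            r += v * dt
        if (k + 1) * dt > t_excl:
            dmin = min(dmin, np.linalg.norm(r1 - r2))
    return dmin

seps = [min_separation(rng.normal(0, 0.05, 3), rng.normal(0, 0.05, 3))
        for _ in range(100)]             # 5 cm/s release noise (assumed)
print(f"median minimum separation: {np.median(seps):.0f} m")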
LARES Laser Relativity Satellite
NASA Astrophysics Data System (ADS)
Ciufolini, Ignazio; et al.
2011-05-01
After almost three decades since the first idea of launching a passive satellite to measure gravitomagnetism, the launch of the LARES satellite is approaching. The newly developed VEGA launcher will carry LARES into a nominally circular orbit at 1450 km altitude. This satellite, along with the two LAGEOS satellites, will make it possible to improve a previous measurement of the Lense-Thirring effect by a factor of 10. This important achievement will result from the idea of combining the orbital parameters of a constellation of laser-ranging satellites with a specific design of the LARES satellite. Other key points of the experiment are the ever-improving knowledge of the gravitational field of the Earth, in particular of the lower-degree even zonal harmonics, thanks to the GRACE satellites, and an accurate estimate of all the classical perturbations such as atmospheric drag and solar radiation pressure. In the paper both the scientific aspects and the design considerations are described.
NASA Astrophysics Data System (ADS)
Pavlis, E. C.; Ciufolini, I.; Paolozzi, A.
2012-12-01
LARES, the Laser Relativity Satellite, is a spherical laser-ranged satellite, passive and covered with retroreflectors. It will be launched with ESA's new launch vehicle VEGA (ESA-ELV-ASI-AVIO) in early 2012. Its orbital elements will be: inclination 70° ± 1°, semi-major axis 7830 km and near-zero eccentricity. Its weight is about 387 kg and its radius 18.2 cm. It will be the densest known body orbiting in the solar system, and the non-gravitational perturbations will be minimized by its very small cross-section-to-mass ratio. The main objective of the LARES satellite is a test of the frame-dragging effect, a consequence of the gravitomagnetic field predicted by Einstein's theory of General Relativity. Together with the orbital data from LAGEOS and LAGEOS 2, it will allow a measurement of frame-dragging with an accuracy of a few percent.
Ultra-sensitive inertial sensors via neutral-atom interferometry
NASA Technical Reports Server (NTRS)
Clauser, John F.
1989-01-01
Upon looking at the various colossal interferometers, etc., discussed at this conference to test gravitational theory, one cannot avoid feeling that easier approaches exist. The use of low-velocity, neutral-atom matter waves in place of electromagnetic waves in sensitive inertial interferometer configurations is proposed. As applications, spacecraft experiments to sense a drag-free condition, to measure the Lense-Thirring precession, to measure the gravitomagnetic effect and/or the Earth's geopotential (depending on altitude), and to detect long-period gravitational waves are considered. Also considered is a terrestrial precision test of the equivalence principle on spin-polarized atoms, capable of detecting effects of the 5th force. While the ideas described herein are preliminary, the orders of magnitude are sufficiently tantalizing to warrant further study. Although existing proposed designs may be adequate for some of these experiments, the use of matter-wave interferometry offers reduced complexity and cost, and an absence of cryogenics.
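The gain from matter waves can be put in one number: for the same enclosed area and rotation rate, the atom-interferometer Sagnac phase exceeds the optical one by roughly m c^2 / (ħ ω_photon). The species and wavelength below are assumptions chosen only to set the scale:

from math import pi

hbar, c, u = 1.055e-34, 2.998e8, 1.661e-27

m_atom = 87 * u            # Rb-87 mass (assumed species)
lam = 780e-9               # probe-light wavelength (assumed)
omega_ph = 2 * pi * c / lam

gain = m_atom * c**2 / (hbar * omega_ph)
print(f"Sagnac phase gain, atoms vs light: {gain:.1e}")   # ~5e10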
Rapid variability as a probe of warped space-time around accreting black holes
NASA Astrophysics Data System (ADS)
Axelsson, Magnus
2016-07-01
The geometry of the inner accretion flow of X-ray binaries is complex, with multiple regions contributing to the observed emission. Frequency-resolved spectroscopy is a powerful tool in breaking this spectral degeneracy. We have extracted the spectra of the strong low-frequency quasi-periodic oscillation (QPO) and its harmonic in GX339-4 and XTE J1550-564, and compare these to the time-averaged spectrum and the spectrum of the rapid (<0.1 s) variability. Our results support the picture where the QPO arises from vertical (Lense-Thirring) precession of an inhomogeneous hot flow, softer at larger radii closer to the truncated disc and harder in the innermost parts where the rapid variability is produced. This coupling between variability and spectra allows us to constrain the soft Comptonization component, breaking the degeneracy plaguing the time-averaged spectrum and revealing the geometry of the accretion flow close to the black hole.
OJ287: Deciphering the "Rosetta stone of blazars"
NASA Astrophysics Data System (ADS)
Britzen, S.; Fendt, C.; Witzel, G.; Qian, S.-J.; Pashchenko, I. N.; Kurtanidze, O.; Zajacek, M.; Martinez, G.; Karas, V.; Aller, M.; Aller, H.; Eckart, A.; Nilsson, K.; Arévalo, P.; Cuadra, J.; Subroweit, M.; Witzel, A.
2018-04-01
OJ287 is the best candidate Active Galactic Nucleus (AGN) for hosting a supermassive binary black hole (SMBBH) at very close separation. We present 120 Very Long Baseline Array (VLBA) observations (at 15 GHz) covering the time between Apr. 1995 and Apr. 2017. We find that the OJ287 radio jet is precessing on a timescale of ~22 yr. In addition, our data are consistent with a jet-axis rotation on a yearly timescale. We model the precession (24±2 yr) and the combined motion of jet precession and jet-axis rotation. The jet motion explains the variability of the total radio flux density via viewing-angle changes and Doppler beaming. Half of the jet-precession timescale is of the order of the dominant optical periodicity timescale. We suggest that the optical emission is synchrotron emission related to the jet radiation. The jet dynamics and flux-density light curves can be understood in terms of geometrical effects. Disturbances of an accretion disc caused by a plunging black hole do not seem necessary to explain the observed variability. Although the SMBBH model does not seem necessary to explain the observed variability, a SMBBH or Lense-Thirring precession (disc around a single black hole) seems to be required to explain the timescale of the precessing motion. Besides jet rotation, nutation of the jet axis could also explain the observed motion of the jet axis. We find a strikingly similar scaling of the timescales for precession and nutation to that indicated for SS433, with timescales roughly 50 times longer in OJ287.
NASA Technical Reports Server (NTRS)
Ziemke, J. R.; Chandra, S.; Bhartia, P. K.; Einaudi, Franco (Technical Monitor)
2000-01-01
A new technique denoted cloud slicing has been developed for estimating tropospheric ozone profile information. All previous methods using satellite data were capable only of estimating the total column of ozone in the troposphere. Cloud slicing takes advantage of the opacity of water vapor clouds to ultraviolet wavelength radiation. Measurements of above-cloud column ozone from the Nimbus 7 total ozone mapping spectrometer (TOMS) instrument are combined with Nimbus 7 temperature humidity infrared radiometer (THIR) cloud-top pressure data to derive ozone column amounts in the upper troposphere. In this study tropical TOMS and THIR data for the period 1979-1984 are analyzed. By combining total tropospheric column ozone (denoted TCO) measurements from the convective cloud differential (CCD) method with 100-400 hPa upper tropospheric column ozone amounts from cloud slicing, it is possible to estimate 400-1000 hPa lower tropospheric column ozone and evaluate its spatial and temporal variability. Results for both the upper and lower tropical troposphere show a year-round zonal wavenumber 1 pattern in column ozone, with largest amounts in the Atlantic region (up to approx. 15 DU in the 100-400 hPa pressure band and approx. 25-30 DU in the 400-1000 hPa pressure band). Upper tropospheric ozone derived from cloud slicing shows maximum column amounts in the Atlantic region in the June-August and September-November seasons, similar to the seasonal variability of CCD-derived TCO in the region. For the lower troposphere, the largest column amounts occur in the September-November season over Brazil in South America and also over southern Africa. Localized increases in lower tropospheric ozone in the tropics are found over the northern region of South America around August and off the west coast of equatorial Africa in the March-May season. Time series analyses for several regions in South America and Africa show an anomalous increase in ozone in the lower troposphere around the month of March which is not observed in the upper troposphere. The eastern Pacific shows weak seasonal variability of upper, lower, and total tropospheric ozone compared with the western Pacific, which shows the largest TCO amounts in both hemispheres around the spring months. Ozone in the western Pacific is expected to show greater variability because of strong convection, pollution and biomass burning, land/sea contrast, and monsoon development.
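The column bookkeeping behind cloud slicing is simple enough to state as code. A minimal sketch follows; the function names and the example Dobson-unit values are ours, not taken from the study:

```python
def band_column(above_cloud_o3_low_tops, above_cloud_o3_high_tops):
    """Ozone column (DU) between two cloud-top pressure levels, from
    the difference of above-cloud columns measured over low and high
    cloud tops (e.g. near 400 hPa and near 100 hPa)."""
    return above_cloud_o3_low_tops - above_cloud_o3_high_tops

def lower_troposphere(tco, upper_band):
    """400-1000 hPa column: CCD total tropospheric column minus the
    100-400 hPa cloud-slicing column."""
    return tco - upper_band

upper = band_column(24.0, 10.0)                # hypothetical DU values
print(upper, lower_troposphere(38.0, upper))   # -> 14.0, 24.0 DU
```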
Lense-Thirring Precession and Quasi-periodic Oscillations in X-Ray Binaries
NASA Astrophysics Data System (ADS)
Marković, Dragoljub; Lamb, Frederick K.
1998-11-01
It has recently been suggested that gravitomagnetic precession of the inner part of the accretion disk, possibly driven by radiation torques, may be responsible for some of the quasi-periodic X-ray brightness oscillations (QPOs) and other spectral features with frequencies between 20 and 300 Hz observed in the power spectra of some low-mass binary systems containing accreting neutron stars and black hole candidates. We have explored the free and driven normal modes of geometrically thin disks in the presence of gravitomagnetic and radiation warping torques. We have found a family of low-frequency gravitomagnetic (LFGM) modes with precession frequencies that range from the lowest frequency allowed by the size of the disk up to a certain critical frequency ω_crit, which is ~1 Hz for a compact object of solar mass. The lowest frequency (lowest order) LFGM modes are similar to the previously known radiation warping modes, extend over much of the disk, and have damping rates ≳10 times their precession frequencies. The highest frequency LFGM modes are tightly wound spiral corrugations of the disk that extend to ~10 times its inner radius and have damping rates ≳10³ times their precession frequencies. A radiation warping torque can cause a few of the lowest frequency LFGM modes to grow with time, but even a strong radiation warping torque has essentially no effect on the LFGM modes with frequencies ≳10⁻⁴ Hz. We have also discovered a second family of high-frequency gravitomagnetic (HFGM) modes with precession frequencies that range from ω_crit up to slightly less than the gravitomagnetic precession frequency ω_gm,i of a particle at the inner edge of the disk, which is 30 Hz if the disk extends inward to the innermost stable circular orbit around a 2 M⊙ compact object with dimensionless angular momentum cJ/GM² = 0.2. The lowest frequency HFGM modes are very strongly damped and have warp functions and precession frequencies very similar to those of the highest frequency LFGM modes. In contrast, the highest frequency (lowest order) HFGM modes are very localized spiral corrugations of the inner disk and are weakly damped, with Q-values of ~2-50. We discuss the implications of our results for the observability of Lense-Thirring precession in X-ray binaries.
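The ~30 Hz scale quoted for ω_gm,i is easy to reproduce from the weak-field Lense-Thirring rate Ω_LT = 2GJ/(c²r³). A minimal numerical check (our own sketch; the ISCO radius 6GM/c² used here is the non-rotating value, adequate for j = 0.2):

```python
import numpy as np

G, c, Msun = 6.674e-11, 2.998e8, 1.989e30

def nu_gm(M, j, r_over_rg):
    """Gravitomagnetic (Lense-Thirring) precession frequency (Hz) of a
    test particle at r = r_over_rg * GM/c^2 around a compact object of
    mass M (kg) and dimensionless spin j = cJ/GM^2, using the
    weak-field rate Omega_LT = 2GJ/(c^2 r^3)."""
    J = j * G * M**2 / c
    r = r_over_rg * G * M / c**2
    return 2 * G * J / (c**2 * r**3) / (2 * np.pi)

# Particle at the ISCO (~6 GM/c^2 for slow spin) of a 2 Msun object
# with j = 0.2: reproduces the ~30 Hz quoted above.
print(nu_gm(2 * Msun, 0.2, 6.0))
```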
El Allali, Khalid; Achaâban, Mohamed R.; Piro, Mohammed; Ouassat, Mohammed; Challet, Etienne; Errami, Mohammed; Lakhdar-Ghazal, Nouria; Calas, André; Pévet, Paul
2017-01-01
In mammals, biological rhythms are driven by a master circadian clock located in the suprachiasmatic nucleus (SCN) of the hypothalamus. Recently, we have demonstrated that in the camel, the daily cycle of environmental temperature is able to entrain the master clock. This raises several questions about the structure and function of the SCN in this species. The current work is the first neuroanatomical investigation of the camel SCN. We carried out a cartography and cytoarchitectural study of the nucleus and then studied its cell types and chemical neuroanatomy. Relevant neuropeptides involved in the circadian system were investigated, including arginine-vasopressin (AVP), vasoactive intestinal polypeptide (VIP), met-enkephalin (Met-Enk), neuropeptide Y (NPY), as well as oxytocin (OT). The neurotransmitter serotonin (5-HT) and the enzymes tyrosine hydroxylase (TH) and aromatic L-amino acid decarboxylase (AADC) were also studied. The camel SCN is a large and elongated nucleus, extending rostrocaudally for 9.55 ± 0.10 mm. Based on histological and immunofluorescence findings, we subdivided the camel SCN into rostral/preoptic (rSCN), middle/main body (mSCN) and caudal/retrochiasmatic (cSCN) divisions. Among mammals, the rSCN is unusual and appears as an assembly of neurons that protrudes from the main mass of the hypothalamus. The mSCN exhibits the triangular shape described in rodents, while the cSCN is located in the retrochiasmatic area. As expected, VIP-immunoreactive (ir) neurons were observed in the ventral part of the mSCN. AVP-ir neurons were located in the rSCN and mSCN. Results also showed the presence of OT-ir and TH-ir neurons, which seem to be a peculiarity of the camel SCN. OT-ir neurons were either scattered or gathered in one isolated cluster, while TH-ir neurons constituted two distinct populations, dorsal parvicellular and ventral magnocellular neurons. TH colocalized with VIP in some rSCN neurons. Moreover, a high density of Met-Enk-ir, 5-HT-ir and NPY-ir fibers was observed within the SCN. Both the cytoarchitecture and the distribution of neuropeptides are unusual in the camel SCN as compared to other mammals. The presence of OT and TH in the camel SCN suggests their role in the modulation of circadian rhythms and the adaptation to photic and non-photic cues under desert conditions. PMID:29249943
On some Aspects of Gravitomagnetism and Correction for Perihelion Advance
NASA Astrophysics Data System (ADS)
Rocha, F.; Malheiro, M.; Marinho, R., Jr.
2016-04-01
In 1918 Joseph Lense and Hans Thirring discovered the gravitomagnetic effect while studying solutions to the Einstein field equations in the weak-field and slow-motion approximation for rotating systems. They noted that when a body falls towards a massive rotating object it feels a force perpendicular to its motion. The equations they obtained were similar to Maxwell's equations of electromagnetism, and are now known as Maxwell's equations for gravitomagnetism. Some authors affirm that the gravitomagnetic effect can cause precession, so in this paper we calculate the contribution of the gravitomagnetic effect to Mercury's perihelion advance. To do this we calculate the field between dipoles to measure the influence that the Sun has on Mercury, taking into account the gravitomagnetic fields that the Sun and Mercury produce as they rotate about their own axes. In addition, we calculate the ratio of the dipole force (for all the solar system planets) to Newton's gravitational force to see how much smaller it is.
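For scale, the Lense-Thirring contribution to Mercury's perihelion advance can be estimated from the standard weak-field secular rates for the node and pericenter. The sketch below is our own, with an assumed solar angular momentum S ≈ 1.9×10⁴¹ kg m² s⁻¹ and Mercury's inclination to the solar equator taken as ~3.4°:

```python
import numpy as np

G, c = 6.674e-11, 2.998e8
S_sun = 1.92e41          # solar angular momentum, kg m^2/s (assumed)
a, e = 5.79e10, 0.2056   # Mercury's semimajor axis (m) and eccentricity
i = np.radians(3.4)      # inclination to the solar equator (assumed)

# Standard weak-field Lense-Thirring rates (rad/s); varpi = Omega + omega.
f = c**2 * a**3 * (1 - e**2)**1.5
Omega_dot = 2 * G * S_sun / f
varpi_dot = (2 - 6 * np.cos(i)) * G * S_sun / f

cty = 3.15576e9                      # seconds per Julian century
to_arcsec = 180 / np.pi * 3600
print(varpi_dot * cty * to_arcsec)   # ~ -2e-3 arcsec per century
```

The result, roughly minus two milliarcseconds per century, is minuscule next to the well-known 43 arcsec per century Schwarzschild advance, which is why detecting it is so demanding.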
NASA Technical Reports Server (NTRS)
Van Patten, R. A.; Everitt, C. W. F.
1976-01-01
In 1918, Lense and Thirring calculated that a moon in orbit around a massive rotating planet would experience a nodal dragging effect due to general relativity. We describe an experiment to measure this effect by means of two counter-orbiting drag-free satellites in polar orbit about the earth. For a 2-1/2 year experiment, the measurement should approach an accuracy of 1%. An independent measurement of the geodetic precession of the orbit plane due to the motion about the sun may also be possible to about 10% accuracy. In addition to precision tracking data from existing ground stations, satellite-to-satellite Doppler data are taken at points of passing near the poles to yield an accurate measurement of the separation distance between the two satellites. New geophysical information on both earth harmonics and tidal effects is inherent in this polar ranging data.
The Hartree Equation for Infinitely Many Particles I. Well-Posedness Theory
NASA Astrophysics Data System (ADS)
Lewin, Mathieu; Sabin, Julien
2015-02-01
We show local and global well-posedness results for the Hartree equation iγ̇(t) = [−Δ + w ∗ ρ_γ(t), γ(t)], where γ is a bounded self-adjoint operator on L²(ℝ^d), ρ_γ(x) = γ(x, x) and w is a smooth short-range interaction potential. The initial datum γ(0) is assumed to be a perturbation of a translation-invariant state γ_f = f(−Δ) which describes a quantum system with an infinite number of particles, such as the Fermi sea at zero temperature, or the Fermi-Dirac and Bose-Einstein gases at positive temperature. Global well-posedness follows from the conservation of the relative (free) energy of the state γ(t), counted relatively to the stationary state γ_f. We indeed use a general notion of relative entropy, which allows us to treat a wide class of stationary states f(−Δ). Our results are based on a Lieb-Thirring inequality at positive density and on a recent Strichartz inequality for orthonormal functions, which are both due to Frank, Lieb, Seiringer and the first author of this article.
Black Hole with Wobbling Disk Artist Concept
2016-07-12
This artist's impression depicts the accretion disc surrounding a black hole, in which the inner region of the disc precesses. "Precession" means that the orbit of material surrounding the black hole changes orientation around the central object. In these three views, the precessing inner disc shines high-energy radiation that strikes the matter in the surrounding accretion disc. This causes the iron atoms in that disc to emit X-rays, depicted as the glow on the accretion disc to the right (in view a), to the front (in view b) and to the left (in view c). In a study published in July 2016, astronomers used data from ESA's XMM-Newton X-ray Observatory and NASA's NuSTAR telescope to measure this "wobble" in X-ray emission from excited iron atoms. Scientists interpreted this as evidence for the Lense-Thirring effect -- a name for the precession phenomenon -- in the strong gravitational field of a black hole. http://photojournal.jpl.nasa.gov/catalog/PIA20697
Orechio, Dailiany; Aguiar, Bruna Andrade; Diniz, Giovanne Baroni; Bittencourt, Jackson Cioni; Haemmerle, Carlos A; Watanabe, Ii-Sei; Miglino, Maria Angelica; Castelucci, Patricia
2018-05-12
The existence of neurogenesis in the adult brain is a widely recognized phenomenon, occurring in the subventricular zone (SVZ) of the lateral ventricles and the subgranular zone of the dentate gyrus in several vertebrate species. Neural precursors originated in the SVZ migrate to the main olfactory bulb (MOB), originating the rostral migratory stream (RMS) in the process. To better understand the formation of the adult neurogenic niches in dogs, we investigated the cellular composition and morphological organization of these areas in 57-day-old dog fetuses. Using multiple immunohistochemical markers, we demonstrated that the SVZ in the canine fetus is remarkably similar to the adult SVZ, with glial GFAP-immunoreactive (-ir) cells, DCX-ir neuroblasts and SOX2-ir neuronal progenitors tangentially organized along the dorsal lateral ventricle. The fetal RMS has all the features of its adult counterpart and closely resembles the RMS of other mammalian species. The late-development canine MOB has most of the neurochemical features of the adult MOB, including an early-developed TH-ir population and maturing CALR-ir interneurons, but CALB-ir neurons in the granule cell layer will only appear in the post-partum period. Taken together, our results suggest that the canine fetal development of adult neurogenic niches closely resembles that of primates, and dogs may be suitable models of human adult neurogenesis.
Solar system constraints on planetary Coriolis-type effects induced by rotation of distant masses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iorio, Lorenzo, E-mail: lorenzo.iorio@libero.it
We phenomenologically put local constraints on the rotation of distant masses by using the planets of the solar system. First, we analytically compute the orbital secular precessions induced on the motion of a test particle about a massive primary by a Coriolis-like force, treated as a small perturbation, in the case of a constant angular velocity vector Ψ directed along a generic direction in space. The semimajor axis a and the eccentricity e of the test particle do not secularly change, contrary to the inclination I, the longitude of the ascending node Ω, the longitude of the pericenter ϖ and the mean anomaly M. Then, we compare our prediction for ϖ̇ with the corrections Δϖ̇ to the usual perihelion precessions of the inner planets recently estimated by fitting long data sets with different versions of the EPM ephemerides. We obtain as preliminary upper bounds |Ψ_z| ≤ 0.0006-0.013 arcsec cty⁻¹, |Ψ_x| ≤ 0.1-2.7 arcsec cty⁻¹, |Ψ_y| ≤ 0.3-2.3 arcsec cty⁻¹. Interpreted in terms of models of space-time involving cosmic rotation, our results are able to yield constraints on cosmological parameters like the cosmological constant Λ and the Hubble parameter H₀ not too far from their values determined with cosmological observations and, in some cases, several orders of magnitude better than the constraints usually obtained so far from space-time models not involving rotation. In the case of the rotation of the solar system throughout the Galaxy, occurring clockwise about the North Galactic Pole, our results for Ψ_z are in disagreement with its expected value at more than the 3σ level. Modeling the Oort cloud as an Einstein-Thirring slowly rotating massive shell inducing Coriolis-type forces inside yields unphysical results for its putative rotation.
NASA Astrophysics Data System (ADS)
van Doesburgh, Marieke; van der Klis, Michiel
2017-03-01
We analyse all available RXTE data on a sample of 13 low-mass X-ray binaries with known neutron star spin that are not persistent pulsars. We carefully measure the correlations between the centroid frequencies of the quasi-periodic oscillations (QPOs). We compare these correlations to the prediction of the relativistic precession model that, due to frame dragging, a QPO will occur at the Lense-Thirring precession frequency ν_LT of a test-particle orbit whose orbital frequency is the upper kHz QPO frequency ν_u. Contrary to the most prominent previous studies, we find two different oscillations in the range predicted for ν_LT that are simultaneously present over a wide range of ν_u. Additionally, one of the low-frequency noise components evolves into a (third) QPO in the ν_LT range when ν_u exceeds 600 Hz. The frequencies of these QPOs all correlate to ν_u following power laws with indices between 0.4 and 3.3, significantly exceeding the predicted value of 2.0 in 80 per cent of the cases (at 3 to >20σ). Also, there is no evidence that the neutron star spin frequency affects any of these three QPO frequencies, as would be expected for frame dragging. Finally, the observed QPO frequencies tend to be higher than the ν_LT predicted for reasonable neutron star specific moments of inertia. In the light of recent successes of precession models in black holes, we briefly discuss ways in which such precession can occur in neutron stars at frequencies different from test-particle values and consistent with those observed. A precessing torus geometry and torques other than frame dragging may allow precession to produce the observed frequency correlations, but can only explain one of the three QPOs in the ν_LT range.
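The test-particle prediction being tested here follows from Ω_LT = 2GJ/(c²r³), with the radius eliminated via the orbital (upper kHz QPO) frequency; this gives ν_LT = 8π² I ν_s ν_u² / (c²M) and hence the power-law index of 2.0. A minimal sketch (our own; the moment of inertia is a representative assumed value):

```python
import numpy as np

G, c, Msun = 6.674e-11, 2.998e8, 1.989e30

def nu_LT(nu_u, nu_spin, M=1.4 * Msun, I=1e38):
    """Test-particle Lense-Thirring nodal precession frequency (Hz)
    paired with an upper kHz QPO at orbital frequency nu_u, for a
    neutron star of spin nu_spin (Hz), mass M (kg) and moment of
    inertia I (kg m^2; 1e38 kg m^2 ~ 1e45 g cm^2 is representative):
    nu_LT = 8 pi^2 I nu_spin nu_u^2 / (c^2 M)."""
    return 8 * np.pi**2 * I * nu_spin * nu_u**2 / (c**2 * M)

# A few Hz for nu_u = 600-1000 Hz at 300 Hz spin, scaling exactly as
# nu_u^2 -- the index-2.0 prediction tested in the abstract above.
for nu_u in (600.0, 800.0, 1000.0):
    print(nu_u, nu_LT(nu_u, 300.0))
```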
Rescue and Calibration of NIMBUS 1-4 IR Film Products, 1964 to 1972
NASA Astrophysics Data System (ADS)
Morgan, T.; Campbell, G. G.
2017-12-01
Digital data exists from the high resolution infrared instruments on Nimbus 1 to 4 for about 1/4 of the possible orbits for parts of 1964, 1966, 1969 and 1970. We are now digitizing and navigating 35 mm film products from those instruments into digital files. Some of those orbits overlap with the digital data so we can "calibrate" the gray scale pictures into temperatures by comparison. Then that calibration can be extended to orbits with no digital data. This greatly improves the coverage of the night time IR view of the earth. Ultimately these data will be inserted into the NASA archive for general use. We will review our progress on this project and discuss an error estimate for the calibration of the HRIR (High Resolution Infrared Radiometer) data from Nimbus 1, 2 and 3 as well as the THIR (Thermal Infrared Radiometer) data on Nimbus 4. These more complete Infrared views of the Earth provide the opportunity to better understand the weather in this period. Comparisons will be made with pre-satellite era reanalysis products.
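The calibration-transfer step described here amounts to fitting a gray-level-to-temperature curve on orbits where film and digital data overlap, then applying it to film-only orbits. A minimal sketch with synthetic stand-in data (all numbers and the cubic fit form are illustrative assumptions, not the project's actual procedure):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for overlapping orbits: gray level vs. brightness
# temperature with measurement noise (illustrative assumption).
gray_overlap = rng.uniform(0, 255, 500)
temp_overlap = 180 + 0.55 * gray_overlap + rng.normal(0, 2.0, 500)

# Fit the gray-scale -> temperature transfer curve on the overlap...
coeffs = np.polyfit(gray_overlap, temp_overlap, deg=3)
calibrate = np.poly1d(coeffs)

# ...then apply it to film-only orbits; the overlap residuals give the
# kind of error estimate the abstract mentions.
residuals = temp_overlap - calibrate(gray_overlap)
print("calibration rms error (K):", residuals.std())
```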
NASA Astrophysics Data System (ADS)
Dittus, Hansjörg; Lämmerzahl, Claus
Clocks are an almost universal tool for exploring the fundamental structure of theories related to relativity. It is important that future clock experiments be performed in space. One mission with the capability to perform all relativity tests based on clocks, and to improve them by several orders of magnitude, is OPTIS. These tests consist of (i) tests of the isotropy of light propagation (from which information can also be drawn about the matter sector of which the optical resonators are made), (ii) tests of the constancy of the speed of light, (iii) tests of the universality of the gravitational redshift by comparing clocks based on light propagation, like light clocks and various atomic clocks, (iv) time dilation based on the Doppler effect, (v) measuring the absolute gravitational redshift, (vi) measuring the perihelion advance of the satellite's orbit by using very precise tracking techniques, (vii) measuring the Lense-Thirring effect, and (viii) testing Newton's gravitational potential law on the scale of Earth-bound satellites. The corresponding tests are not only important for fundamental physics but also indispensable for practical purposes like navigation, Earth sciences, metrology, etc.
NASA Astrophysics Data System (ADS)
Cameron, A. D.; Champion, D. J.; Kramer, M.; Bailes, M.; Barr, E. D.; Bassa, C. G.; Bhandari, S.; Bhat, N. D. R.; Burgay, M.; Burke-Spolaor, S.; Eatough, R. P.; Flynn, C. M. L.; Freire, P. C. C.; Jameson, A.; Johnston, S.; Karuppusamy, R.; Keith, M. J.; Levin, L.; Lorimer, D. R.; Lyne, A. G.; McLaughlin, M. A.; Ng, C.; Petroff, E.; Possenti, A.; Ridolfi, A.; Stappers, B. W.; van Straten, W.; Tauris, T. M.; Tiburzi, C.; Wex, N.
2018-03-01
We report the discovery of PSR J1757-1854, a 21.5-ms pulsar in a highly eccentric, 4.4-h orbit with a neutron star (NS) companion. PSR J1757-1854 exhibits some of the most extreme relativistic parameters of any known pulsar, including the strongest relativistic effects due to gravitational-wave damping, with a merger time of 76 Myr. Following a 1.6-yr timing campaign, we have measured five post-Keplerian parameters, yielding the two component masses (mp = 1.3384(9) M⊙ and mc = 1.3946(9) M⊙) plus three tests of general relativity, which the theory passes. The larger mass of the NS companion provides important clues regarding the binary formation of PSR J1757-1854. With simulations suggesting 3σ measurements within ~7-9 yr of both the contribution of Lense-Thirring precession to the rate of change of the semimajor axis and the relativistic deformation of the orbit, PSR J1757-1854 stands out as a unique laboratory for new tests of gravitational theories.
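The quoted 76 Myr merger time can be roughly reproduced from the quoted masses and orbital period using Peters' (1964) formula. The sketch below is our own and assumes an eccentricity near 0.61, since the abstract says only "highly eccentric"; the (1 − e²)^(7/2) factor is the usual rough eccentricity correction, not an exact integration:

```python
import numpy as np

G, c, Msun = 6.674e-11, 2.998e8, 1.989e30
m1, m2 = 1.3384 * Msun, 1.3946 * Msun   # masses quoted above
Pb, e = 4.4 * 3600.0, 0.61              # 4.4-h orbit; e assumed ~0.61

M = m1 + m2
a = (G * M * (Pb / (2 * np.pi))**2)**(1 / 3)   # Kepler's third law

# Peters (1964) circular coalescence time a^4/(4 beta), scaled by the
# approximate (1 - e^2)^(7/2) eccentricity correction.
beta = (64 / 5) * G**3 * m1 * m2 * M / c**5
t_merge = a**4 / (4 * beta) * (1 - e**2)**3.5

print(t_merge / (3.156e7 * 1e6), "Myr")   # ~75-80 Myr, as quoted
```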
A US coordination Facility for the Spectrum-X-Gamma Observatory
NASA Technical Reports Server (NTRS)
Forman, W.; West, Donald (Technical Monitor)
2001-01-01
We have completed our efforts in support of the Spectrum X Gamma mission under a NASA grant. These activities have included direct support to the mission, developing unifying tools applicable to SXG and other X-ray astronomy missions, and X-ray astronomy research to maintain our understanding of the importance and relevance of SXG to the field. SXG provides: 1) Simultaneous Multiwavelength Capability; 2) Large Field of View High Resolution Imaging Spectroscopy; 3) Sensitive Polarimetry with SXRP (Stellar X-Ray Polarimeter). These capabilities will ensure the fulfillment of the following objectives: understanding the accretion dynamics and the importance of reprocessing, upscattering, and disk viscosity around black holes; studying cluster mergers; spatially resolving cluster cooling flows to detect cooling gas; detecting cool gas in cluster outskirts in absorption; mapping gas in filaments around clusters; finding the 'missing' baryons in the Universe; determining the activity history of the black hole at the center of our own Galaxy; determining pulsar beam geometry; searching for the Lense-Thirring effect in black hole sources; constraining emission mechanisms and accretion geometry in AGN.
Black hole spin from wobbling and rotation of the M87 jet and a sign of a magnetically arrested disc
NASA Astrophysics Data System (ADS)
Sob'yanin, Denis Nikolaevich
2018-06-01
New long-term Very Long Baseline Array observations of the well-known jet in the M87 radio galaxy at 43 GHz show that the jet experiences a sideways shift with an approximately 8-10 yr quasi-periodicity. Such jet wobbling can be indicative of a relativistic Lense-Thirring precession resulting from a tilted accretion disc. The wobbling period together with up-to-date kinematic data on jet rotation opens up the possibility for estimating angular momentum of the central supermassive black hole. In the case of a test-particle precession, the specific angular momentum is J/Mc = (2.7 ± 1.5) × 1014 cm, implying moderate dimensionless spin parameters a = 0.5 ± 0.3 and 0.31 ± 0.17 for controversial gas-dynamic and stellar-dynamic black hole masses. However, in the case of a solid-body-like precession, the spin parameter is much smaller for both masses, 0.15 ± 0.05. Rejecting this value on the basis of other independent spin estimations requires the existence of a magnetically arrested disc in M87.
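The conversion from the measured specific angular momentum to the dimensionless spin is just a = (J/Mc)/(GM/c²). A minimal check (our own sketch; the two mass values used below are the commonly quoted gas-dynamic and stellar-dynamic estimates for M87*, which the abstract does not spell out):

```python
G, c, Msun = 6.674e-11, 2.998e8, 1.989e30

j_over_mc = 2.7e12   # J/Mc = 2.7e14 cm from the abstract, in metres

# Dimensionless spin a = cJ/GM^2 = (J/Mc) / (GM/c^2), for assumed
# gas-dynamic and stellar-dynamic black hole masses.
for label, M in (("gas-dynamic ~3.5e9 Msun", 3.5e9 * Msun),
                 ("stellar-dynamic ~6.6e9 Msun", 6.6e9 * Msun)):
    r_g = G * M / c**2
    print(label, "a =", round(j_over_mc / r_g, 2))
# -> roughly 0.5 and 0.3, matching the quoted spin parameters
```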
Revealing the inner accretion flow around black holes using rapid variability
NASA Astrophysics Data System (ADS)
Axelsson, Magnus
2015-08-01
The geometry of the inner accretion flow of X-ray binaries is complex, with multiple regions contributing to the observed emission. Frequency-resolved spectroscopy is a powerful tool in breaking this spectral degeneracy. We have extracted the spectra of the strong low-frequency quasi-periodic oscillation (QPO) and its harmonic in GX339-4 and XTE J1550-564. We compare these to the time-averaged spectrum and the spectrum of the rapid (< 0.1s) variability. Our results support the picture where the QPO arises from vertical (Lense-Thirring) precession of an inhomogeneous hot flow, so that it is softer at larger radii closer to the truncated disc, and harder in the innermost parts of the flow where the rapid variability is produced. This coupling between variability and spectra allows us to constrain the soft Comptonization component, breaking the degeneracy plaguing the time-averaged spectrum and revealing the geometry of the accretion flow close to the black hole. We further show how the upcoming launch of ASTRO-H will allow even more specific regions in the accretion flow to be probed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Su, Yi-Hao; Chou, Yi; Hu, Chin-Ping
We present time-frequency analysis results based on the Hilbert-Huang transform (HHT) for the evolution of a 4-Hz low-frequency quasi-periodic oscillation (LFQPO) in the black hole X-ray binary XTE J1550-564. The origin of LFQPOs is still debated. To understand the cause of the peak broadening, we utilized a recently developed time-frequency analysis, the HHT, to track the evolution of the 4-Hz LFQPO from XTE J1550-564. By adaptively decomposing the ~4-Hz oscillatory component from the light curve and acquiring its instantaneous frequency, the Hilbert spectrum illustrates that the LFQPO is composed of a series of intermittent oscillations appearing occasionally between 3 and 5 Hz. We further characterized this intermittency by computing the confidence limits of the instantaneous amplitudes of the intermittent oscillations, and constructed the distributions of the QPO's high- and low-amplitude durations, which are the time intervals with and without significant ~4-Hz oscillations, respectively. The mean high-amplitude duration is 1.45 s and 90% of the oscillation segments have lifetimes below 3.1 s. The mean low-amplitude duration is 0.42 s and 90% of these segments are shorter than 0.73 s. In addition, these intermittent oscillations exhibit a correlation between the oscillation's rms amplitude and mean count rate. This correlation could be analogous to the linear rms-flux relation found in the 4-Hz LFQPO through Fourier analysis. We conclude that the LFQPO peak in the power spectrum is broadened owing to intermittent oscillations with varying frequencies, which could be explained by the Lense-Thirring precession model.
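The Hilbert step of the HHT, extracting instantaneous frequency and amplitude from a decomposed oscillatory mode, is compact. A minimal sketch (our own, on a synthetic intermittent 4-Hz signal; the empirical mode decomposition that would produce the mode from real data is not shown):

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_freq_amp(imf, dt):
    """Instantaneous frequency (Hz) and amplitude of an oscillatory
    mode (e.g. the ~4-Hz intrinsic mode function from a light curve),
    via the analytic signal."""
    analytic = hilbert(imf)
    amp = np.abs(analytic)
    phase = np.unwrap(np.angle(analytic))
    freq = np.gradient(phase, dt) / (2 * np.pi)
    return freq, amp

# Example: a 4-Hz tone gated on and off to mimic intermittency
dt = 1 / 128
t = np.arange(0, 16, dt)
imf = np.sin(2 * np.pi * 4 * t) * (np.sin(2 * np.pi * 0.2 * t) > 0)
freq, amp = instantaneous_freq_amp(imf, dt)
print(freq[amp > 0.5].mean())   # ~4 Hz during the "on" intervals
```

Thresholding the instantaneous amplitude, as in the last line, is the kind of step that yields the high- and low-amplitude duration distributions described above.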
The Measurement of Gravitomagnetism: A Challenging Enterprise
NASA Astrophysics Data System (ADS)
Iorio, Lorenzo
2007-11-01
This book is intended to give an updated overview of the state of the art of the theoretical and experimental efforts aimed at detecting the elusive Lense-Thirring effect in the gravitational field of the Earth. The reader, after a robust introduction to the historical (Chapter 2) and theoretical (Chapters 3-5) aspects of the subject, will get acquainted with the subtleties required to design suitable observables which are able to sufficiently enhance the signal-to-noise ratio, and should then be able to follow autonomously the exciting developments which, hopefully, will take place in the near future if and when reliable few-percent tests of this prediction of general relativity become available. In an Earth-space-based experiment with artificial satellites a good compromise must be found between the need to reduce the impact of systematic errors of gravitational origin and of non-gravitational origin; this is not an easy task because such requirements are often in conflict with each other. Consequently, great attention is paid to elucidating many classical perturbing effects which, if not carefully modelled and accounted for in the data analysis, may alias the recovery of the gravitomagnetic signature. Indeed, we are dealing with a fundamental test of general relativity which must be honest, robust and based on solid error analysis. A critical and detailed discussion of the latest test with the LAGEOS satellites is included. The book will also be useful for better understanding the interplay among the various geodetic, geophysical, general relativistic, astronomical and matter-wave interferometric effects which occur in the weak-field and slow-motion approximation and which will become increasingly important in the near future thanks to improvements in the accuracy of the orbital reconstruction process.
Antarctic Surface Temperatures Using Satellite Infrared Data from 1979 Through 1995
NASA Technical Reports Server (NTRS)
Comiso, Josefino C.; Stock, Larry
1997-01-01
The large scale spatial and temporal variations of surface ice temperature over the Antarctic region are studied using infrared data derived from the Nimbus-7 Temperature Humidity Infrared Radiometer (THIR) from 1979 through 1985 and from the NOAA Advanced Very High Resolution Radiometer (AVHRR) from 1984 through 1995. Enhanced techniques suitable for the polar regions for cloud masking and atmospheric correction were used before converting radiances to surface temperatures. The observed spatial distribution of surface temperature is highly correlated with surface ice sheet topography and agrees well with ice station temperatures with 2 K to 4 K standard deviations. The average surface ice temperature over the entire continent fluctuates by about 30 K from summer to winter, while that over the Antarctic Plateau varies by about 45 K. Interannual variations in surface temperature are highest at the Antarctic Plateau and the ice shelves (e.g., Ross and Ronne), with a periodic cycle of about 5 years and standard deviations of about 11 K and 9 K, respectively. Despite large temporal variability, especially in some regions, a regression analysis that includes removal of the seasonal cycle shows no apparent trend in temperature during the period 1979 through 1995.
Upper limit on NUT charge from the observed terrestrial Sagnac effect
NASA Astrophysics Data System (ADS)
Kulbakova, A.; Karimov, R. Kh; Izmailov, R. N.; Nandi, K. K.
2018-06-01
The exact Sagnac delay in the Kerr-Taub-NUT (Newman-Unti-Tamburino) spacetime is derived in the equatorial plane for non-geodesic as well as geodesic circular orbits. The resulting formula, being exact, can be directly applied to motion in the vicinity of any spinning object including black holes, but here we consider only the terrestrial case, since observational data are available. The formula reveals that, in the limit of vanishing spin, the delay does not vanish. This fact is similar to the non-vanishing of Lense-Thirring precession in the same limit, even though the two effects originate from different premises. Assuming, reasonably, that the Kerr-Taub-NUT corrections are subsumed in the average residual uncertainty in the measured Sagnac delay, we compute upper limits on the NUT charge n. It is found that the upper limits on n are far larger than the Earth's gravitational mass, which has not been detected in observations, implying that the Sagnac effect cannot constrain n to smaller values near zero. We find a curious difference between the delays for non-geodesic and geodesic clock orbits and point out its implication for the well known 'twin paradox' of special relativity.
Light and/or atomic beams to detect ultraweak gravitational effects
NASA Astrophysics Data System (ADS)
Tartaglia, Angelo; Belfi, Jacopo; Beverini, Nicolò; Di Virgilio, Angela; Ortolan, Antonello; Porzio, Alberto; Ruggiero, Matteo Luca
2014-06-01
We review the opportunities offered by ring lasers and atomic beam interferometry for revealing gravitomagnetic effects on Earth. Both techniques are based on the asymmetric propagation of waves in the gravitational field of a rotating mass; the times of flight for co- and counter-rotating closed paths turn out to be different. After discussing the properties and limitations of the two approaches, we describe the proposed GINGER experiment, which is being developed for the Gran Sasso National Laboratories in Italy. The experimental apparatus will consist of a three-dimensional array of square rings, 6 m × 6 m, that is planned to reach a sensitivity of the order of 1 prad/√Hz or better. This sensitivity would be one order of magnitude better than that of the best existing ring, the G ring in Wettzell, Bavaria, and would allow the terrestrial detection of the Lense-Thirring effect and possibly of deviations from General Relativity. The possibility of using either the ring laser approach or atomic interferometry in a space mission is also considered. The technology problems are under experimental study using both the German G ring and the smaller G-Pisa ring, located at the Gran Sasso.
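The sensitivities quoted here trace back to the Sagnac relation δf = 4A(Ω·n̂)/(λP) for a ring of area A and perimeter P. A minimal sketch of the Earth-rotation beat frequency (our own; 42.4° is an assumed latitude for Gran Sasso, 632.8 nm the usual HeNe wavelength, and a horizontal ring orientation is assumed):

```python
import numpy as np

def sagnac_frequency(side, lat_deg, wavelength=632.8e-9,
                     omega_e=7.2921e-5):
    """Sagnac beat frequency (Hz) of a horizontal square ring laser of
    given side length (m) at latitude lat_deg, driven by Earth's
    rotation: df = 4 A (Omega . n) / (lambda P). A real instrument
    also senses local tilts and polar motion."""
    A, P = side**2, 4 * side
    return 4 * A * omega_e * np.sin(np.radians(lat_deg)) / (wavelength * P)

print(sagnac_frequency(4.0, 49.1))   # 4 m G-ring latitude: ~348 Hz
print(sagnac_frequency(6.0, 42.4))   # 6 m x 6 m ring at Gran Sasso
```

Reassuringly, the 4 m case lands near the ~348 Hz beat frequency known for the Wettzell G ring; the Lense-Thirring signal sits roughly nine orders of magnitude below the Earth-rotation term, which is what sets the prad/√Hz requirement.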
Observation of Kilohertz Quasiperiodic Oscillations from the Atoll Source 4U 1702-429 by RXTE
NASA Technical Reports Server (NTRS)
Markwardt, C. B.; Strohmayer, Tod E.; Swank, Jean H.
1998-01-01
We present results of Rossi X-Ray Timing Explorer (RXTE) observations of the atoll source 4U 1702-429 in the middle of its luminosity range. Kilohertz-range quasiperiodic oscillations (QPOs) were observed first as a narrow (FWHM approximately 7 Hz) peak near 900 Hz, and later as a pair consisting of a narrow peak in the range 625-825 Hz and a faint, broad (FWHM approximately 91 Hz) peak. When the two peaks appeared simultaneously the separation was 333 +/- 5 Hz. Six type I thermonuclear bursts were detected, of which five exhibited almost coherent oscillations near 330 Hz, which makes 4U 1702-429 only the second source to show burst oscillations very close to the kilohertz QPO separation frequency. The energy spectrum and color-color diagram indicate that the source executed variations in the range between the "island" and "lower banana" atoll states. In addition to the kilohertz variability, oscillations at approximately 10, 35, and 80 Hz were also detected at various times, superimposed on a red noise continuum. The centroid of the approximately 35 Hz QPO tracks the frequency of the kilohertz oscillation when both are present. A Lense-Thirring gravitomagnetic precession interpretation appears more plausible in this case than for other atoll sources with low frequency QPOs.
General Relativistic Effects and QPOs in X-Ray Binaries
NASA Astrophysics Data System (ADS)
Markovic, D.; Lamb, F. K.
We have investigated whether general relativistic effects may be responsible for some of the quasi-periodic X-ray brightness oscillations (QPOs) observed in low-mass binary systems containing accreting neutron stars and black hole candidates. In particular, we have computed the motions of accreting gas in the strong gravitational fields near such objects and have explored possible mechanisms for producing X-ray flux oscillations. We have discovered a family of weakly damped global gravitomagnetic (Lense-Thirring) warping modes of the inner (viscous) accretion disk that have precession frequencies ranging up to the single-particle gravitomagnetic precession frequency at the inner edge of the disk, which is about 30 Hz if the disk extends inward to the innermost stable circular orbit around a compact object of solar mass with dimensionless angular momentum cJ/GM2 ~ 0.2. Precession of regions of enhanced viscous dissipation or modulation of the accretion flow by the precession may produce observable periodic variation of the X-ray flux. Detectable effects might also be produced if the gas in the inner disk breaks up into a collection of distinct clumps. We have analyzed the dynamics of such clumps as well as the conditions required for their formation and survival on time scales long enough to produce QPOs with the coherence observed in low-mass X-ray binaries.
Formation of precessing jets by tilted black hole discs in 3D general relativistic MHD simulations
NASA Astrophysics Data System (ADS)
Liska, M.; Hesp, C.; Tchekhovskoy, A.; Ingram, A.; van der Klis, M.; Markoff, S.
2018-02-01
Gas falling into a black hole (BH) from large distances is unaware of BH spin direction, and misalignment between the accretion disc and BH spin is expected to be common. However, the physics of tilted discs (e.g. angular momentum transport and jet formation) is poorly understood. Using our new GPU-accelerated code H-AMR, we performed 3D general relativistic magnetohydrodynamic simulations of tilted thick accretion discs around rapidly spinning BHs, at the highest resolution to date. We explored the limit where disc thermal pressure dominates magnetic pressure, and showed for the first time that, for different magnetic field strengths on the BH, these flows launch magnetized relativistic jets propagating along the rotation axis of the tilted disc (rather than of the BH). If strong large-scale magnetic flux reaches the BH, it bends the inner few gravitational radii of the disc and jets into partial alignment with the BH spin. On longer time-scales, the simulated disc-jet system as a whole undergoes Lense-Thirring precession and approaches alignment, demonstrating for the first time that jets can be used as probes of disc precession. When the disc turbulence is well resolved, our isolated discs spread out, causing both the alignment and precession to slow down.
Superconducting gravity gradiometer and a test of inverse square law
NASA Technical Reports Server (NTRS)
Moody, M. V.; Paik, Ho Jung
1989-01-01
The equivalence principle prohibits the distinction of gravity from acceleration by a local measurement. However, by making a differential measurement of acceleration over a baseline, platform accelerations can be cancelled and gravity gradients detected. In an in-line superconducting gravity gradiometer, this differencing is accomplished with two spring-mass accelerometers in which the proof masses are confined to motion in a single degree of freedom and are coupled together by superconducting circuits. Platform motions appear as common-mode accelerations and are cancelled by adjusting the ratio of two persistent currents in the sensing circuit. The sensing circuit is connected to a commercial SQUID amplifier to sense changes in the persistent currents generated by differential accelerations, i.e., gravity gradients. A three-axis gravity gradiometer is formed by mounting six accelerometers on the faces of a precision cube, with the accelerometers on opposite faces of the cube forming one of three in-line gradiometers. An important application is a dedicated satellite mission for mapping the earth's gravity field. Additional scientific goals are a test of the inverse square law to a part in 10¹⁰ at 100 km, and a test of the Lense-Thirring effect by detecting the relativistic gravitomagnetic terms in the gravity gradient tensor of the earth.
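The common-mode-rejection principle can be caricatured numerically: two accelerometers a baseline apart see the same platform acceleration plus equal-and-opposite gradient terms, and their scaled difference keeps only the gradient. A toy sketch with illustrative numbers (in the real instrument the nulling is done by tuning the ratio of two persistent currents, not by digital subtraction):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

L = 0.2           # baseline between proof masses (m), illustrative
gradient = 3.1e-6 # vertical gravity gradient (1/s^2), ~3100 Eotvos
platform = rng.normal(0, 1e-6, n)   # common-mode acceleration (m/s^2)

# Each accelerometer: platform motion + half the gradient signal
# + tiny intrinsic sensor noise.
a1 = platform + 0.5 * gradient * L + rng.normal(0, 1e-12, n)
a2 = platform - 0.5 * gradient * L + rng.normal(0, 1e-12, n)

# The difference cancels the (much larger) platform term and recovers
# the gradient, ~3.1e-6 1/s^2.
print(((a1 - a2) / L).mean())
```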
Spin precession in a black hole and naked singularity spacetimes
NASA Astrophysics Data System (ADS)
Chakraborty, Chandrachur; Kocherlakota, Prashant; Joshi, Pankaj S.
2017-02-01
We propose here a specific criterion to address the existence or otherwise of Kerr naked singularities, in terms of the precession of the spin of a test gyroscope due to the frame dragging by the central spinning body. We show that there is indeed an important characteristic difference in the behavior of the gyro spin precession frequency in the limit of approach to these compact objects, and this can be used, in principle, to differentiate the naked singularity from a black hole. Specifically, if gyroscopes are fixed all along the polar axis up to the horizon of a Kerr black hole, the precession frequency becomes arbitrarily high, blowing up as the event horizon is approached. On the other hand, in the case of a naked singularity, this frequency remains always finite and well behaved. Interestingly, this behavior is intimately related to and governed by the geometry of the ergoregion in each of these cases, which we analyze here. One intriguing behavior that emerges is that, in the Kerr naked singularity case, the Lense-Thirring precession frequency Ω_LT of the gyroscope due to the frame-dragging effect decreases as Ω_LT ∝ r after reaching a maximum, in the limit r → 0, as opposed to the r⁻³ dependence in all other known astrophysical cases.
Sexual Dimorphism in the Brain of the Monogamous California Mouse (Peromyscus californicus).
Campi, Katharine L; Jameson, Chelsea E; Trainor, Brian C
2013-01-01
Sex differences in behavior and morphology are usually assumed to be stronger in polygynous species compared to monogamous species. A few brain structures have been identified as sexually dimorphic in polygynous rodent species, but it is less clear whether these differences persist in monogamous species. California mice are among the 5% or less of mammals that are considered to be monogamous and as such provide an ideal model to examine sexual dimorphism in neuroanatomy. In the present study we compared the volume of hypothalamic- and limbic-associated regions in female and male California mice for sexual dimorphism. We also used tyrosine hydroxylase (TH) immunohistochemistry to compare the number of dopamine neurons in the ventral tegmental area (VTA) in female and male California mice. Additionally, tract tracing was used to accurately delineate the boundaries of the VTA. The total volume of the sexually dimorphic nucleus of the preoptic area (SDN-POA), the principal nucleus of the bed nucleus of the stria terminalis (BNST), and the posterodorsal medial amygdala (MEA) was larger in males compared to females. In the SDN-POA we found that the magnitude of sex differences in the California mouse were intermediate between the large differences observed in promiscuous meadow voles and rats and the absence of significant differences in monogamous prairie voles. However, the magnitude of sex differences in MEA and the BNST were comparable to polygynous species. No sex differences were observed in the volume of the whole brain, the VTA, the nucleus accumbens or the number of TH-ir neurons in the VTA. These data show that despite a monogamous social organization, sexual dimorphisms that have been reported in polygynous rodents extend to California mice. Our data suggest that sex differences in brain structures such as the SDN-POA persist across species with different social organizations and may be an evolutionarily conserved characteristic of mammalian brains.
Experimental Design for the LATOR Mission
NASA Technical Reports Server (NTRS)
Turyshev, Slava G.; Shao, Michael; Nordtvedt, Kenneth, Jr.
2004-01-01
This paper discusses the experimental design for the Laser Astrometric Test Of Relativity (LATOR) mission. LATOR is designed to reach unprecedented accuracy of 1 part in 10⁸ in measuring the curvature of the solar gravitational field as given by the value of the key Eddington post-Newtonian parameter γ. This mission will demonstrate the accuracy needed to measure effects of the next post-Newtonian order (∝ G²) of light deflection resulting from gravity's intrinsic non-linearity. LATOR will provide the first precise measurement of the solar quadrupole moment parameter, J₂, and will improve the determination of a variety of relativistic effects including Lense-Thirring precession. The mission will benefit from recent progress in optical communication technologies, the immediate and natural step beyond standard radiometric techniques. The key element of LATOR is the geometric redundancy provided by laser ranging and long-baseline optical interferometry. We discuss the mission and optical designs, as well as the expected performance of this proposed mission. LATOR will lead to very robust advances in tests of fundamental physics: this mission could discover a violation or extension of general relativity, or reveal the presence of an additional long-range interaction in the physical law. There are no analogs to the LATOR experiment; it is unique and is a natural culmination of solar system gravity experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jensen, Pia; Department of Neurosurgery, University of Bern, CH-3010 Bern; Gramsbergen, Jan-Bert
Effective numerical expansion of dopaminergic precursors might overcome the limited availability of transplantable cells in replacement strategies for Parkinson's disease. Here we investigated the effect of fibroblast growth factor-2 (FGF2) and FGF8 on expansion and dopaminergic differentiation of rat embryonic ventral mesencephalic neuroblasts cultured at high (20%) and low (3%) oxygen tension. More cells incorporated bromodeoxyuridine in cultures expanded at low as compared to high oxygen tension, and after 6 days of differentiation there were significantly more neuronal cells in low than in high oxygen cultures. Low oxygen during FGF2-mediated expansion also resulted in a significant increase in tyrosine hydroxylase-immunoreactive (TH-ir) dopaminergic neurons as compared to high oxygen tension, but no corresponding effect was observed for dopamine release into the culture medium. However, switching FGF2-expanded cultures from low to high oxygen tension during the last two days of differentiation significantly enhanced dopamine release and intracellular dopamine levels as compared to all other treatment groups. In addition, the short-term exposure to high oxygen enhanced in situ assessed TH enzyme activity, which may explain the elevated dopamine levels. Our findings demonstrate that modulation of oxygen tension is a recognizable factor for in vitro expansion and dopaminergic differentiation of rat embryonic midbrain precursor cells.
An observational method for fast stochastic X-ray polarimetry timing
NASA Astrophysics Data System (ADS)
Ingram, Adam R.; Maccarone, Thomas J.
2017-11-01
The upcoming launch of the first space based X-ray polarimeter in ~40 yr will provide powerful new diagnostic information to study accreting compact objects. In particular, analysis of rapid variability of the polarization degree and angle will provide the opportunity to probe the relativistic motions of material in the strong gravitational fields close to the compact objects, and enable new methods to measure black hole and neutron star parameters. However, polarization properties are measured in a statistical sense, and a statistically significant polarization detection requires a fairly long exposure, even for the brightest objects. Therefore, the sub-minute time-scales of interest are not accessible using a direct time-resolved analysis of polarization degree and angle. Phase-folding can be used for coherent pulsations, but not for stochastic variability such as quasi-periodic oscillations. Here, we introduce a Fourier method that enables statistically robust detection of stochastic polarization variability for arbitrarily short variability time-scales. Our method is analogous to commonly used spectral-timing techniques. We find that it should be possible in the near future to detect the quasi-periodic swings in polarization angle predicted by Lense-Thirring precession of the inner accretion flow. This is contingent on the mean polarization degree of the source being greater than ~4-5 per cent, which is consistent with the best current constraints on Cygnus X-1 from the late 1970s.
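The kind of Fourier statistic such spectral-timing methods build on is the cross spectrum between simultaneous count-rate series. A generic sketch follows (our own, not the authors' full estimator): a modulation common to two polarization-angle channels shows up as a coherent cross-spectral peak even when neither channel alone constrains the polarization on short time-scales:

```python
import numpy as np

def cross_spectrum(x, y, dt):
    """Cross spectrum between two simultaneous count-rate series;
    a coherent signal shared by both appears as a peak in |C(f)|."""
    X = np.fft.rfft(x - x.mean())
    Y = np.fft.rfft(y - y.mean())
    freqs = np.fft.rfftfreq(len(x), dt)
    return freqs, np.conj(X) * Y

# Synthetic example: a 0.5-Hz modulation, anticorrelated between two
# polarization-angle channels, buried in Poisson-like noise.
rng = np.random.default_rng(2)
dt = 0.01
t = np.arange(0, 100, dt)
qpo = np.sin(2 * np.pi * 0.5 * t)
x = 100 + 5 * qpo + rng.normal(0, 10, t.size)
y = 100 - 5 * qpo + rng.normal(0, 10, t.size)

freqs, C = cross_spectrum(x, y, dt)
print(freqs[np.argmax(np.abs(C))])   # recovers ~0.5 Hz
```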
LARES successfully launched in orbit: Satellite and mission description
NASA Astrophysics Data System (ADS)
Paolozzi, Antonio; Ciufolini, Ignazio
2013-10-01
On February 13th 2012, the LARES satellite of the Italian Space Agency (ASI) was launched into orbit with the qualification flight of the new VEGA launcher of the European Space Agency (ESA). The payload was released very accurately into the nominal orbit. The name LARES means LAser RElativity Satellite and summarises the objective of the mission and some characteristics of the satellite. It is, in fact, a mission designed to test Einstein's General Relativity Theory (specifically 'frame-dragging' and the Lense-Thirring effect). The satellite is passive and covered with optical retroreflectors that send back laser pulses to the emitting ground station. This allows accurate positioning of the satellite, which is important for measuring the very small deviations from Galilei-Newton's laws. In 2008, ASI selected the prime industrial contractor for the LARES system, with a heavy involvement of the universities in all phases of the programme, from the design to the construction and testing of the satellite and separation system. The data exploitation phase started immediately after the launch under a new contract between ASI and those universities. Tracking of the satellite is provided by the International Laser Ranging Service. Due to its particular design, LARES is the orbiting object with the highest known mean density in the solar system. In this paper, it is shown that this peculiarity makes it the best proof particle ever manufactured. Design aspects, mission objectives and preliminary data analysis are also presented.
Tolcos, M; McGregor, H; Walker, D; Rees, S
2000-03-01
Maternal cigarette smoking during pregnancy is associated with a significantly increased risk of Sudden Infant Death Syndrome (SIDS). This study investigated the effects of prenatal exposure to carbon monoxide (CO), a major component of cigarette smoke, on the neuroglial and neurochemical development of the medulla in the fetal guinea pig. Pregnant guinea pigs were exposed to 200 p.p.m CO for 10 h per day from day 23-25 of gestation (term = 68 days) until day 61-63, at which time fetuses were removed and brains collected for analysis. Using immunohistochemistry and quantitative image analysis, examination of the medulla of CO-exposed fetuses revealed a significant decrease in tyrosine hydroxylase-immunoreactivity (TH-IR) in the nucleus tractus solitarius, dorsal motor nucleus of the vagus (DMV), area postrema, intermediate reticular nucleus, and the ventrolateral medulla (VLM), and a significant increase in choline acetyltransferase-immunoreactivity (ChAT-IR) in the DMV and hypoglossal nucleus compared with controls. There was no difference between groups in immunoreactivity for the m2 muscarinic acetylcholine receptor, substance P- or met-enkephalin in any of the medullary nuclei examined, nor was there evidence of reactive astrogliosis. The results show that prenatal exposure to CO affects cholinergic and catecholaminergic pathways in the medulla of the guinea pig fetus, particularly in cardiorespiratory centers, regions thought to be compromised in SIDS.
General Relativistic Effects and QPOs in X-Ray Binaries
NASA Astrophysics Data System (ADS)
Markovic, D.; Lamb, F.
1999-05-01
We have investigated whether general relativistic effects may be responsible for some of the quasi-periodic X-ray brightness oscillations (QPOs) with frequencies 20-300 Hz observed in low-mass binary systems containing accreting neutron stars and black hole candidates. In particular, we have computed the motions of accreting gas in the strong gravitational fields near such objects and have explored possible mechanisms for producing X-ray flux oscillations. We have discovered a family of global gravitomagnetic (Lense-Thirring) warping modes of the inner accretion disk that have precession frequencies ranging up to the single-particle gravitomagnetic precession frequency at the inner edge of the disk, which is 30 Hz if the disk extends inward to the innermost stable circular orbit around a compact object of solar mass with dimensionless angular momentum cJ/GM² ~ 0.2. The highest-frequency warping modes are very localized spiral corrugations of the inner disk and are weakly damped, with Q values ~2-50. Precession of regions of enhanced viscous dissipation or modulation of the accretion flow by the precession may produce observable periodic variation of the X-ray flux. Detectable effects might also be produced if the gas in the inner disk breaks up into a collection of distinct clumps. We have analyzed the dynamics of such clumps as well as the conditions required for their formation and survival on time scales long enough to produce oscillations with the coherence observed in X-ray binaries.
Ichikawa, H; Helke, C J
1996-10-07
The presence and coexistence of calbindin D-28k-immunoreactivity (ir) and nicotinamide adenosine dinucleotide phosphate (NADPH)-diaphorase activity (a marker of neurons that are presumed to convert L-arginine to L-citrulline and nitric oxide) were examined in the glossopharyngeal and vagal sensory ganglia (jugular, petrosal and nodose ganglia) of the rat. Calbindin D-28k-ir nerve cells were found in moderate and large numbers in the petrosal and nodose ganglia, respectively. Some calbindin D-28k-ir nerve cells were also observed in the jugular ganglion. NADPH-diaphorase positive nerve cells were localized to the jugular and nodose ganglia and were rare in the petrosal ganglion. A considerable portion (33-51%) of the NADPH-diaphorase positive neurons in these ganglia colocalized calbindin D-28k-ir. The presence and colocalization of calbindin D-28k-ir and NADPH-diaphorase activity in neurotransmitter-identified subpopulations of visceral sensory neurons were also studied. In all three ganglia, calcitonin gene-related peptide (CGRP)-ir was present in many NADPH-diaphorase positive neurons, a subset of which also contained calbindin D-28k-ir. In the nodose ganglion, many (42%) of tyrosine hydroxylase (TH)-ir neurons also contained NADPH diaphorase activity but did not contain calbindin D-28k-ir. These data are consistent with a potential co-operative role for calbindin D-28k and NADPH-diaphorase in the functions of a subpopulation of vagal and glossopharyngeal sensory neurons.
General Relativity During the Great War
NASA Astrophysics Data System (ADS)
Trimble, Virginia L.
2016-01-01
Einstein's (and Hilbert's) equations saw the light of day in the darkness of Berlin in 1915, as is well known. Moving from this highlight to less conspicuous topics, we find Karl Schwarzschild's solution of those equations (1916), followed shortly by his death. On the observational and American front, Slipher's assemblage of galaxy radial velocities, begun in 1912 with M31, continued apace. Shapley was busily moving us out of the galactic center. Also at Mt. Wilson, Charles St. John looked for gravitational redshift in the solar spectrum in 1917 without firmly detecting it. Adams demonstrated the very low luminosities of Sirius B and 40 Eri B in 1914 (but his attempt at a redshift for the former came only in 1923). Perhaps least well known is that a handful of additional critical theoretical papers date from the war years and describe the Lense-Thirring effect, the Reissner-Nordström solution, and a charged solution with a cosmological constant (due to the even more obscure Friedrich Kottler). Some of these came out of neutral Holland, but Kottler served both at Ypres and on the Galician front. Interesting mixes of military service and relativistic contributions are also associated with the names of Friedmann, Lemaître, Weyl (of the tensor), Minkowski, Hubble, Flamm, Droste, and Kretschmann. Astronomers in neutral Denmark, Holland and (until 1917) the USA facilitated the transmittal of astronomical observations and other news across the battle lines, so that Schwarzschild received an obituary in Nature and Moseley one in Naturwissenschaften.
High-Accuracy Ring Laser Gyroscopes: Earth Rotation Rate and Relativistic Effects
NASA Astrophysics Data System (ADS)
Beverini, N.; Di Virgilio, A.; Belfi, J.; Ortolan, A.; Schreiber, K. U.; Gebauer, A.; Klügel, T.
2016-06-01
The Gross Ring G is a square ring laser gyroscope, built as a monolithic Zerodur structure with 4 m length on all sides. It has demonstrated that a large ring laser provides a sensitivity high enough to measure the rotational rate of the Earth with a relative precision of ΔΩ_E/Ω_E < 10⁻⁸. Further improvement in accuracy could allow the observation of the metric frame dragging produced by the rotating mass of the Earth (Lense-Thirring effect), as predicted by General Relativity, and could provide a local measurement of the Earth's rotational rate with a sensitivity close to that provided by the international IERS system. The GINGER project intends to take this level of sensitivity further and to improve the accuracy and the long-term stability. A monolithic structure similar to the G ring laser is not available for GINGER, so the preliminary goal is to demonstrate the feasibility of a larger gyroscope structure in which the mechanical stability is obtained through active control of the geometry. A moderate-size prototype gyroscope (GP-2) has been set up in Pisa to test this active control of the ring geometry, while a second structure (GINGERino) has been installed inside the Gran Sasso underground laboratory to investigate the properties of a deep underground laboratory in view of the installation of a future GINGER apparatus. Preliminary data from these two latter instruments are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Binukumar, BK; Gupta, Nidhi; Bal, Amanjit
Numerous epidemiological studies have shown an association between pesticide exposure and increased risk of developing Parkinson's disease. Oxidative stress generated as a result of mitochondrial dysfunction has been implicated as an important factor in the etiology of Parkinson's disease. Previously, we reported that chronic dichlorvos exposure causes mitochondrial impairments and nigrostriatal neuronal death in rats. The present study was designed to test whether Coenzyme Q10 (CoQ10) administration has any neuroprotective effect against dichlorvos-mediated nigrostriatal neuronal death, α-synuclein aggregation, and motor dysfunction. Male albino rats were administered dichlorvos by subcutaneous injection at a dose of 2.5 mg/kg body weight over a period of 12 weeks. Results obtained thereafter showed that dichlorvos exposure leads to enhanced mitochondrial ROS production, α-synuclein aggregation, and decreased dopamine and dopamine metabolite levels, resulting in nigrostriatal neurodegeneration. Pretreatment with Coenzyme Q10 (4.5 mg/kg ip for 12 weeks) in dichlorvos-treated animals significantly attenuated the extent of nigrostriatal neuronal damage, in terms of decreased ROS production, increased dopamine and metabolite levels, and restoration of motor function, when compared to dichlorvos-treated animals. Thus, the present study shows that Coenzyme Q10 administration may attenuate dichlorvos-induced nigrostriatal neurodegeneration, α-synuclein aggregation and motor dysfunction by virtue of its antioxidant action. Highlights: CoQ10 administration attenuates dichlorvos-induced nigrostriatal neurodegeneration. CoQ10 pretreatment leads to preservation of TH-IR neurons. CoQ10 may decrease oxidative damage and α-synuclein aggregation. CoQ10 treatment enhances motor function and protects rats from catalepsy.
A tilted and warped inner accretion disc around a spinning black hole: an analytical solution
NASA Astrophysics Data System (ADS)
Chakraborty, Chandrachur; Bhattacharyya, Sudip
2017-08-01
The inner accretion disc around a black hole provides a rare, natural probe of the fundamental physics of the strong-gravity regime. A possible tilt of such a disc with respect to the black hole spin equator is important, because such a tilt affects the observed spectral and timing properties of the disc X-ray emission via Lense-Thirring precession, which could be used to test theoretical predictions regarding strong gravity. Here, we analytically solve the steady, warped accretion disc equation of Scheuer and Feiler, and find an expression for the radial profile of the disc tilt angle. In our exact solution, considering a prograde disc around a slowly spinning black hole, we include the inner part of the disc, which was not done earlier in this formalism. Such a solution is timely, as a tilted inner disc has recently been inferred from X-ray spectral and timing features of the accreting black hole H1743-322. Our tilt-angle radial profile expression involves observationally measurable parameters, such as the black hole mass and Kerr parameter and the disc inner edge tilt angle W_in, and hence is well suited to confront observations. Our solution shows that the disc tilt angle at 10-100 gravitational radii is a significant fraction of the disc outer edge tilt angle, even for W_in = 0. Moreover, the tilt-angle radial profiles have humps in the range of ~10-1000 gravitational radii for some sets of parameter values, which should have implications for observed X-ray features.
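For a rough sense of the precession that drives such warps, the slow-spin Lense-Thirring rate falls off as Ω_LT ≈ 2GJ/(c²r³). The sketch below evaluates this textbook expression for an illustrative stellar-mass black hole; the mass and spin values are assumptions for illustration, not numbers from the paper.

```python
import numpy as np

G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s
M_sun = 1.989e30   # kg

M = 10 * M_sun     # illustrative black hole mass (assumed)
a_star = 0.1       # dimensionless Kerr parameter, slow spin (assumed)
J = a_star * G * M**2 / c   # angular momentum for spin parameter a*
r_g = G * M / c**2          # gravitational radius

# Slow-rotation Lense-Thirring nodal precession: Omega_LT ~ 2*G*J/(c^2 r^3)
for r_over_rg in (10, 100, 1000):
    r = r_over_rg * r_g
    omega_lt = 2 * G * J / (c**2 * r**3)
    print(f"r = {r_over_rg:5d} r_g : Omega_LT ~ {omega_lt:.3e} rad/s")
```

The steep r⁻³ falloff is why the tilt profile in the innermost tens of gravitational radii carries most of the observable signature.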
Zheng, Huiyuan; Patterson, Laurel M; Berthoud, Hans-Rudolf
2005-05-02
Orexin-expressing neurons in the hypothalamus project throughout the neuraxis and are involved in regulation of the sleep/wake cycle, food intake, and autonomic functions. Here we specifically analyze the anatomical organization of orexin projections to the dorsal vagal complex (DVC) and raphe pallidus, and the effects of local orexin-A administration on ingestive behavior and autonomic functions in nonanesthetized rats. Retrograde tracing experiments revealed that as many as 20% of hypothalamic orexin neurons project to the DVC, where they form straight varicose axon profiles, some of which are in close anatomical apposition with tyrosine hydroxylase (TH)-, glucagon-like peptide-1-, gamma-aminobutyric acid-, and nitric oxide synthase-immunoreactive neurons in a nonselective manner. Similar contacts were frequently observed with neurons of the nucleus of the solitary tract whose activation by gastrointestinal food stimuli was demonstrated by the expression of nuclear c-Fos immunoreactivity. Orexin-A administration to the fourth ventricle induced significant Fos expression throughout the DVC compared with saline control injections, with TH-ir neurons comprising about 20-25% of the stimulated cells. Fourth ventricular orexin injections also significantly stimulated chow and water intake in non-food-deprived rats. Direct bilateral injections of orexin into the DVC increased intake of palatable high-fat pellets. Orexin-ir fibers also innervated the raphe pallidus. Fourth ventricular orexin-A (1 nmol) activated Fos expression in the raphe pallidus and in C1/A1 catecholaminergic neurons in the ventral medulla, and increased body temperature, heart rate, and locomotor activity. The results confirm that hypothalamomedullary orexin projections are involved in a variety of physiological functions, including ingestive behavior and sympathetic outflow.
Schwinger's Approach to Einstein's Gravity
NASA Astrophysics Data System (ADS)
Milton, Kim
2012-05-01
Albert Einstein was one of Julian Schwinger's heroes, and Schwinger was greatly honored when he received the first Einstein Prize (together with Kurt Gödel) for his work on quantum electrodynamics. Schwinger contributed greatly to the development of a quantum version of gravitational theory, and his work led directly to the important work of his students Arnowitt, Deser, and DeWitt on the subject. Later, in the 1960s and 1970s, Schwinger developed a new formulation of quantum field theory, which he dubbed Source Theory, in an attempt to make closer contact with phenomena. In this formulation, he revisited gravity, and in books and papers showed how Einstein's theory of General Relativity emerged naturally from one physical assumption: that the carrier of the gravitational force is a massless, helicity-2 particle, the graviton. (There has been a minor dispute over whether gravitational theory can be considered the massless limit of a massive spin-2 theory; Schwinger believed that was the case, while van Dam and Veltman concluded the opposite.) In the process, he showed how all of the tests of General Relativity could be explained simply, without using the full machinery of the theory and without the extraneous concept of curved space, including such effects as geodetic precession and the Lense-Thirring effect. (These effects have now been verified by the Gravity Probe B experiment.) This did not mean that he did not accept Einstein's equations; in his book and full article on the subject, he showed how those emerge essentially uniquely from the assumption of the graviton. So to speak of Schwinger versus Einstein is misleading, although it is true that Schwinger saw no necessity to talk of curved spacetime. In this talk I will lay out Schwinger's approach and its connection to Einstein's theory.
Mulder, Jan; Hökfelt, Tomas; Knuepfer, Mark M.
2013-01-01
Efferent renal sympathetic nerves reinnervate the kidney after renal denervation in animals and humans. Therefore, the long-term reduction in arterial pressure following renal denervation in drug-resistant hypertensive patients has been attributed to a lack of afferent renal sensory reinnervation. However, afferent sensory reinnervation of any organ, including the kidney, is an understudied question. We therefore analyzed the time course of sympathetic and sensory reinnervation at multiple time points (1, 4, and 5 days and 1, 2, 3, 4, 6, 9, and 12 wk) after renal denervation in normal Sprague-Dawley rats. Sympathetic and sensory innervation in the innervated and contralateral denervated kidney was determined as the optical density (ImageJ) of sympathetic and sensory nerves identified by immunohistochemistry using antibodies against markers for sympathetic nerves [neuropeptide Y (NPY) and tyrosine hydroxylase (TH)] and sensory nerves [substance P and calcitonin gene-related peptide (CGRP)]. In denervated kidneys, the optical density of NPY-immunoreactive (ir) fibers in the renal cortex and of substance P-ir fibers in the pelvic wall was 6, 39, and 100% and 8, 47, and 100%, respectively, of that in the contralateral innervated kidney at 4 days, 4 wk, and 12 wk after denervation. Linear regression analysis of the optical density ratio of the denervated/innervated kidney versus time yielded similar intercept and slope values for NPY-ir, TH-ir, substance P-ir, and CGRP-ir fibers (all R² > 0.76). In conclusion, in normotensive rats, reinnervation of the renal sensory nerves occurs over the same time course as reinnervation of the renal sympathetic nerves, both being complete at 9 to 12 wk following renal denervation. PMID:23408032
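The time-course comparison reduces to fitting a line to the denervated/innervated optical-density ratio against time and comparing slopes between markers. A minimal sketch using the percentages quoted above as data points (the 4-day value converted to weeks; illustrative only, not the authors' analysis script):

```python
from scipy.stats import linregress

# Denervated/innervated optical-density ratios at 4 days, 4 wk and 12 wk,
# taken from the percentages quoted in the abstract.
weeks = [4 / 7, 4, 12]
npy_ratio = [0.06, 0.39, 1.00]  # NPY-ir fibers, renal cortex
sp_ratio = [0.08, 0.47, 1.00]   # substance P-ir fibers, pelvic wall

for name, ratios in [("NPY", npy_ratio), ("SP", sp_ratio)]:
    fit = linregress(weeks, ratios)
    print(f"{name}: slope = {fit.slope:.3f}/wk, intercept = {fit.intercept:.3f}, "
          f"R^2 = {fit.rvalue**2:.2f}")
```

Similar slopes and intercepts for the sympathetic and sensory markers are what support the conclusion that both fiber types reinnervate on the same schedule.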
Behavior of a test gyroscope moving towards a rotating traversable wormhole
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chakraborty, Chandrachur; Pradhan, Parthapratim, E-mail: chandrachur.chakraborty@tifr.res.in, E-mail: pppradhan77@gmail.com
2017-03-01
The geodesic structure of the Teo wormhole is briefly discussed and some observables are derived that promise to be of use in detecting a rotating traversable wormhole indirectly, if one does exist. We also deduce the exact Lense-Thirring (LT) precession frequency of a test gyroscope moving toward a rotating traversable Teo wormhole. The precession frequency diverges on the ergoregion, a behavior intimately related to and governed by the geometry of the ergoregion, analogous to the situation in a Kerr spacetime. Interestingly, it turns out that here the LT precession is inversely proportional to the angular momentum (a) of the wormhole along the pole and around it in the strong-gravity regime, a behavior contrasting with its direct variation with a in the case of other compact objects. In fact, the divergence of the LT precession inside the ergoregion can be avoided if the gyroscope moves with a non-zero angular velocity in a certain range. As a result, the spin precession frequency of the gyroscope can be made finite throughout its whole path, even very close to the throat, during its travel to the wormhole. Furthermore, it is evident from our formulation that this spin precession arises not only from the curvature or rotation of the spacetime but also from the non-zero angular velocity of the spin when it does not move along a geodesic in the strong-gravity regime. If interstellar travel through a wormhole, or at least in its vicinity, indeed becomes possible in the future, our results would prove useful in determining the behavior of a test gyroscope, which is known to serve as a fundamental navigation device.
NASA Astrophysics Data System (ADS)
Di Virgilio, Angela D. V.; Belfi, Jacopo; Ni, Wei-Tou; Beverini, Nicolo; Carelli, Giorgio; Maccioni, Enrico; Porzio, Alberto
2017-04-01
GINGER (Gyroscopes IN General Relativity) is a proposal for an Earth-based experiment to measure the Lense-Thirring (LT) and de Sitter effects. GINGER is based on ring lasers, which are the most sensitive inertial sensors for measuring the rotation rate of the Earth. We show that two ring lasers, one at maximum signal and the other horizontal, would be the simplest configuration able to retrieve the GR effects. Here, we discuss this configuration in detail, showing that it would have the capability to test the LT effect at the 1% level, provided an accuracy of the instrument's scale factor at the level of 1 part in 10¹² is reached. In principle, one single ring laser could do the test, but the combination of the two ring lasers gives the necessary redundancy and the possibility to verify that the systematics of the lasers are sufficiently small. The discussion can be generalised to seismology and geodesy: signals 10-12 orders of magnitude below the Earth rotation rate can be studied. The proposed array can be seen as the basic element of multi-axial systems, and the generalisation to three dimensions is feasible by adding one or two devices and monitoring the relative angles between the different ring lasers. This simple array can be used to measure with very high precision the amplitude of the angular rotation rate (the length of the day, LOD), its short-term variations, and the angle between the angular rotation vector and the horizontal ring laser. Finally, this experiment could be useful for probing gravity at the fundamental level, giving indications on violations of the Einstein Equivalence Principle and Lorentz Invariance and on possible chiral effects in the gravitational field.
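To see why a scale-factor accuracy of 1 part in 10¹² is the driver, one can compare the order-of-magnitude Lense-Thirring rate at the Earth's surface with the Earth rotation rate itself. The sketch below uses a textbook value for the Earth's angular momentum (an assumption, not a number from the paper):

```python
G = 6.674e-11           # m^3 kg^-1 s^-2
c = 2.998e8             # m/s
J_earth = 5.86e33       # Earth angular momentum (kg m^2/s), textbook value
R_earth = 6.371e6       # m
omega_earth = 7.292e-5  # rad/s

# Order-of-magnitude Lense-Thirring rate at the surface: ~ G*J/(c^2 R^3)
omega_lt = G * J_earth / (c**2 * R_earth**3)
print(f"Omega_LT ~ {omega_lt:.2e} rad/s")
print(f"Omega_LT / Omega_E ~ {omega_lt / omega_earth:.1e}")  # ~ 2e-10

# Measuring this small fraction of the total rotation signal to 1% requires
# the ring-laser scale factor to be known to roughly 1 part in 1e12,
# consistent with the requirement stated in the abstract.
```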
The Laser Astrometric Test of Relativity (LATOR) Mission
NASA Technical Reports Server (NTRS)
Turyshev, Slava G.; Shao, Michael; Nordtvedt, Kenneth, Jr.
2003-01-01
This paper discusses a new fundamental physics experiment that will test relativistic gravity at an accuracy better than the effects of the second order in the gravitational field strength, proportional to G². The Laser Astrometric Test Of Relativity (LATOR) mission uses laser interferometry between two micro-spacecraft whose lines of sight pass close by the Sun to accurately measure the deflection of light in the solar gravity field. The key element of the experimental design is a redundant-geometry optical truss provided by a long-baseline (100 m) multi-channel stellar optical interferometer placed on the International Space Station (ISS). The spatial interferometer is used for measuring the angles between the two spacecraft and for orbit determination purposes. In Euclidean geometry, determination of a triangle's three sides determines any angle therein; with gravity changing the optical lengths of the sides passing close by the Sun and deflecting the light, the Euclidean relationships are overthrown. The geometric redundancy enables LATOR to measure the departure from Euclidean geometry caused by the solar gravity field to very high accuracy. LATOR will not only improve the value of the parameterized post-Newtonian (PPN) parameter gamma to an unprecedented accuracy of 1 part in 10⁸; it will also be able to measure effects of the next post-Newtonian order (c⁻⁴) of light deflection resulting from gravity's intrinsic non-linearity. The solar quadrupole moment parameter J₂ will be measured with high precision, as well as a variety of other relativistic effects, including Lense-Thirring precession. LATOR will lead to very robust advances in tests of fundamental physics: this mission could discover a violation or extension of general relativity, or reveal the presence of an additional long-range interaction in the physical law. There are no analogs to the LATOR experiment; it is unique and is a natural culmination of solar system gravity experiments.
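For scale, the first-order PPN light deflection that LATOR must resolve is θ = (1+γ)/2 · 4GM/(c²b). A quick evaluation at the solar limb, using standard solar constants (assumed here, not quoted in the abstract), recovers the familiar 1.75 arcsec and shows what a 10⁻⁸ measurement of γ implies:

```python
import math

G = 6.674e-11
c = 2.998e8
M_sun = 1.989e30
R_sun = 6.96e8   # solar radius (m): impact parameter at the limb

gamma = 1.0      # GR value of the PPN parameter
# First-order PPN deflection: theta = (1 + gamma)/2 * 4*G*M/(c^2 * b)
theta = (1 + gamma) / 2 * 4 * G * M_sun / (c**2 * R_sun)
arcsec = math.degrees(theta) * 3600
print(f"limb deflection ~ {arcsec:.3f} arcsec")  # ~1.75 arcsec

# Sensitivity to gamma: d(theta)/d(gamma) = 0.5 * 4*G*M/(c^2 * b), so a
# 1e-8 determination of gamma needs sub-10-nanoarcsecond angular accuracy.
d_theta = 0.5 * 4 * G * M_sun / (c**2 * R_sun) * 1e-8
print(f"required accuracy ~ {math.degrees(d_theta) * 3600 * 1e9:.1f} nano-arcsec")
```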
A possible new test of general relativity with Juno
NASA Astrophysics Data System (ADS)
Iorio, L.
2013-10-01
The expansion in multipoles J_ℓ, ℓ = 2, …, of the gravitational potential of a rotating body affects the orbital motion of a test particle orbiting it with long-term perturbations, both at a classical and at a relativistic level. In this preliminary sensitivity analysis, we show that, for the first time, the J₂c⁻² effects could be measured by the ongoing Juno mission in the gravitational field of Jupiter during its nearly yearlong science phase (10 November 2016-5 October 2017), thanks to its high eccentricity (e = 0.947) and to the huge oblateness of Jupiter (J₂ = 1.47 × 10⁻²). The semimajor axis a and the argument of perijove ω of Juno are expected to be shifted by Δa ≲ 700-900 m and Δω ≲ 50-60 milliarcseconds (mas), respectively, over 1-2 yr. A numerical analysis also shows that the expected J₂c⁻² range-rate signal for Juno should be as large as ≈280 microns per second (μm s⁻¹) during a typical 6 h pass at its closest approach. Independent analyses previously performed by other researchers on the measurability of the Lense-Thirring effect showed that the radio science apparatus of Juno should reach an accuracy in Doppler range-rate measurements of ≈1-5 μm s⁻¹ over such passes. The range-rate signature of the classical even zonal perturbations is different from the first post-Newtonian (1PN) one. Thus, further investigations, based on covariance analyses of simulated Doppler data and dedicated parameter estimation, are worthy of consideration. It turns out that the J₂c⁻² effects cannot be responsible for the flyby anomaly in the gravitational field of the Earth. A dedicated spacecraft in a 6678 km × 57103 km polar orbit would experience a geocentric J₂c⁻² range-rate shift of ≈0.4 mm s⁻¹.
Petersen, Christopher L; Timothy, Miky; Kim, D Spencer; Bhandiwad, Ashwin A; Mohr, Robert A; Sisneros, Joseph A; Forlano, Paul M
2013-01-01
While the neural circuitry and physiology of the auditory system is well studied among vertebrates, far less is known about how the auditory system interacts with other neural substrates to mediate behavioral responses to social acoustic signals. One species that has been the subject of intensive neuroethological investigation with regard to the production and perception of social acoustic signals is the plainfin midshipman fish, Porichthys notatus, in part because acoustic communication is essential to their reproductive behavior. Nesting male midshipman vocally court females by producing a long duration advertisement call. Females localize males by their advertisement call, spawn and deposit all their eggs in their mate's nest. As multiple courting males establish nests in close proximity to one another, the perception of another male's call may modulate individual calling behavior in competition for females. We tested the hypothesis that nesting males exposed to advertisement calls of other males would show elevated neural activity in auditory and vocal-acoustic brain centers as well as differential activation of catecholaminergic neurons compared to males exposed only to ambient noise. Experimental brains were then double labeled by immunofluorescence (-ir) for tyrosine hydroxylase (TH), an enzyme necessary for catecholamine synthesis, and cFos, an immediate-early gene product used as a marker for neural activation. Males exposed to other advertisement calls showed a significantly greater percentage of TH-ir cells colocalized with cFos-ir in the noradrenergic locus coeruleus and the dopaminergic periventricular posterior tuberculum, as well as increased numbers of cFos-ir neurons in several levels of the auditory and vocal-acoustic pathway. Increased activation of catecholaminergic neurons may serve to coordinate appropriate behavioral responses to male competitors. Additionally, these results implicate a role for specific catecholaminergic neuronal groups in auditory-driven social behavior in fishes, consistent with a conserved function in social acoustic behavior across vertebrates.
Brumovsky, Pablo R.; Seroogy, Kim B.; Lundgren, Kerstin H.; Watanabe, Masahiko; Hökfelt, Tomas; Gebhart, G. F.
2011-01-01
Glutamate is the main excitatory neurotransmitter in the nervous system, including in primary afferent neurons. However, to date a glutamatergic phenotype of autonomic neurons has not been described. Therefore, we explored the expression of vesicular glutamate transporters (VGLUTs) types 1, 2 and 3 in the lumbar sympathetic chain (LSC) and major pelvic ganglion (MPG) of naïve BALB/C mice, as well as after pelvic nerve axotomy (PNA), using immunohistochemistry and in situ hybridization. Colocalization with activating transcription factor-3 (ATF-3), tyrosine hydroxylase (TH), vesicular acetylcholine transporter (VAChT) and calcitonin gene-related peptide was also examined. Sham-PNA, sciatic nerve axotomy (SNA) or naïve mice were included. In naïve mice, VGLUT2-like immunoreactivity (LI) was only detected in fibers and varicosities in the LSC and MPG; no ATF-3-immunoreactive (IR) neurons were visible. In contrast, PNA induced upregulation of VGLUT2 protein and transcript, as well as of ATF-3-LI, in subpopulations of LSC neurons. Interestingly, VGLUT2-IR LSC neurons coexpressed ATF-3 and often lacked the noradrenergic marker TH. SNA only increased VGLUT2 protein and transcript in scattered LSC neurons. Neither PNA nor SNA upregulated VGLUT2 in MPG neurons. We also found perineuronal baskets immunoreactive either for VGLUT2 or for the acetylcholinergic marker VAChT in non-PNA MPGs, usually around TH-IR neurons. VGLUT1-LI was restricted to some varicosities in MPGs, was absent in LSCs, and remained largely unaffected by PNA or SNA. This was confirmed by the lack of expression of VGLUT1 or VGLUT3 mRNAs in LSCs, even after PNA or SNA. Taken together, axotomy of visceral and non-visceral nerves results in a glutamatergic phenotype of some LSC neurons. In addition, we show previously undescribed MPG perineuronal glutamatergic baskets. PMID:21596036
General relativistic effects on the orbit of the S2 star with GRAVITY
NASA Astrophysics Data System (ADS)
Grould, M.; Vincent, F. H.; Paumard, T.; Perrin, G.
2017-12-01
Context. The first observations of the GRAVITY instrument obtained in 2016 have shown that it should become possible to probe the spacetime close to the supermassive black hole Sagittarius A* (Sgr A*) at the Galactic center by using accurate astrometric positions of the S2 star. Aims: The goal of this paper is to investigate the detection by GRAVITY of different relativistic effects affecting the astrometric and/or spectroscopic observations of S2, such as the transverse Doppler shift, the gravitational redshift, the pericenter advance and higher-order general relativistic (GR) effects, in particular the Lense-Thirring effect due to the angular momentum of the black hole. Methods: We implement seven stellar-orbit models to simulate both astrometric and spectroscopic observations of S2 beginning near its next pericenter passage in 2018. Each model takes into account a certain number of relativistic effects. The most accurate one is a fully GR model and is used to generate the mock observations of the star. For each of the six other models, we determine the minimal observation time above which it fails to fit the observations, showing the effects that should be detected. These threshold times are obtained for different astrometric accuracies as well as for different spectroscopic errors. Results: The transverse Doppler shift and the gravitational redshift can be detected within a few months by using S2 observations obtained with pairs of accuracies (σA,σV) = (10-100 μas, 1-10 km s⁻¹), where σA and σV are the astrometric and spectroscopic accuracies, respectively. Gravitational lensing can be detected within a few years with (σA,σV) = (10 μas, 10 km s⁻¹). Pericenter advance should be detected within a few years with (σA,σV) = (10 μas, 1-10 km s⁻¹). Cumulative high-order photon curvature contributions, including the Shapiro time delay, affecting spectroscopic measurements can be observed within a few months with (σA,σV) = (10 μas, 1 km s⁻¹). By using a stellar-orbit model neglecting relativistic effects on the photon path except the major contribution of gravitational lensing, S2 observations obtained with accuracies (σA,σV) = (10 μas, 10 km s⁻¹), and a black hole angular momentum (a,i',Ω') = (0.99,45°,160°), the 1σ error on the spin parameter a is about 0.4, 0.2, and 0.1 for a total observing run of 16, 30, and 47 yr, respectively. The 1σ errors on the direction of the angular momentum reach σi' ≈ 25° and σΩ' ≈ 40° when considering the three-orbital-period run. We found that the uncertainties obtained with a less rapidly spinning black hole (a = 0.7) are similar to those evaluated with a = 0.99. Conclusions: The combination of S2 observations obtained with the GRAVITY instrument and the spectrograph SINFONI (Spectrograph for INtegral Field Observations in the Near Infrared), also installed at the VLT (Very Large Telescope), will lead to the detection of various relativistic effects. Such detections will be possible with S2 monitoring campaigns lasting from a few months to a few years, depending on the effect. Strong constraints on the angular momentum of Sgr A* (e.g., at 1σ = 0.1) with the S2 star will be possible with a simple stellar-orbit model without using a ray-tracing code but approximating the gravitational lensing effect. However, long monitoring campaigns are necessary, and we must thus rely on the discovery of stars closer to Sgr A* if we want to efficiently constrain the black hole parameters with stellar orbits in a short time, or monitor the flares if they orbit around the black hole.
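As a sanity check on the first two effects, both the transverse Doppler shift and the gravitational redshift of S2 at pericenter are of order GM/(c²r_p). The sketch below uses commonly quoted values for the Sgr A* mass and the S2 pericenter distance, which are assumptions for illustration rather than numbers from this paper:

```python
G = 6.674e-11
c = 2.998e8
M_sun = 1.989e30
AU = 1.496e11

M = 4.3e6 * M_sun  # Sgr A* mass (commonly quoted value, assumed)
r_p = 120 * AU     # S2 pericenter distance (~120 AU, assumed)

# Gravitational redshift at pericenter: z_grav ~ GM/(c^2 r_p)
z_grav = G * M / (c**2 * r_p)
# Transverse Doppler: z_td ~ v^2/(2 c^2); near pericenter of this highly
# eccentric orbit v^2 ~ 2GM/r_p, so z_td is the same size as z_grav.
z_td = z_grav
print(f"z_grav ~ {z_grav:.2e}  (~{z_grav * c / 1e3:.0f} km/s each)")
print(f"combined apparent velocity shift ~ {(z_grav + z_td) * c / 1e3:.0f} km/s")
```

The resulting ~200 km/s combined shift is comfortably above the 1-10 km/s spectroscopic accuracies discussed above, which is why these two effects are detectable within months.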
Jensen, P; Ducray, A D; Widmer, H R; Meyer, M
2015-12-03
Trefoil factor 1 (TFF1) belongs to a family of secreted peptides that are mainly expressed in the gastrointestinal tract. Notably, TFF1 has been suggested to operate as a neuropeptide; however, its specific cellular expression, regulation and function remain largely unknown. We have previously shown that TFF1 is expressed in developing and adult rat ventral mesencephalic tyrosine hydroxylase-immunoreactive (TH-ir) dopaminergic neurons. Here, we investigated the expression of TFF1 in rat ventral mesencephalic dopaminergic neurons (embryonic day 14) grown in culture for 5, 7 or 10 days in the absence (controls) or presence of either glial cell line-derived neurotrophic factor (GDNF), Forskolin or the combination. No TFF1-ir cells were identified at day 5 and only a few at day 7, whereas TH was markedly expressed at both time points. At day 10, several TFF1-ir cells were detected, and their numbers were significantly increased by the addition of GDNF (2.2-fold) or Forskolin (4.1-fold) compared to controls. Furthermore, the combination of GDNF and Forskolin had an additive effect and increased the number of TFF1-ir cells by 5.6-fold compared to controls. TFF1 expression was restricted to neuronal cells, and the percentage of TH/TFF1 co-expressing cells was increased to the same extent in GDNF- and Forskolin-treated cultures (4-fold) compared to controls. Interestingly, the combination of GDNF and Forskolin resulted in a significantly increased co-expression (8-fold) of TH/TFF1, which could indicate that GDNF and Forskolin targeted different subpopulations of TH/TFF1 neurons. Short-term treatment with Forskolin resulted in an increased number of TFF1-ir cells, and this effect was significantly reduced by the MEK1 inhibitor PD98059 or the protein kinase A (PKA) inhibitor H89, suggesting that Forskolin induced TFF1 expression through diverse signaling pathways. In conclusion, distinct populations of cultured dopaminergic neurons express TFF1, and their numbers can be increased by factors known to influence survival and differentiation of dopaminergic cells.
NASA Technical Reports Server (NTRS)
Comiso, Joey C.
1995-01-01
Surface temperature is one of the key variables associated with weather and climate. Accurate measurements of surface air temperature are routinely made at meteorological stations around the world, and satellite data have been used to produce synoptic global temperature distributions. However, not much attention has been paid to temperature distributions in the polar regions, where the station network is very sparse. Because of adverse weather conditions and general inaccessibility, surface field measurements are also limited. Furthermore, accurate retrievals from satellite data in these regions have been difficult because of persistent cloudiness and ambiguities in discriminating clouds from snow or ice. Surface temperature observations are required in the polar regions for air-sea-ice interaction studies, especially in the calculation of heat, salinity, and humidity fluxes. They are also useful for identifying areas of melt or meltponding within the sea ice pack and the ice sheets, and for calculating the emissivities of these surfaces. Moreover, the polar regions are unique in that they are the sites of temperature extremes, the location of which is difficult to identify without a global monitoring system. They may also provide an early signal of potential climate change, because such a signal is expected to be amplified in these regions by feedback effects. In cloud-free areas, the thermal channels of infrared systems provide surface temperatures at relatively good accuracy. Previous capabilities include the Temperature Humidity Infrared Radiometer (THIR) onboard the Nimbus-7 satellite, launched in 1978. Current capabilities include the Advanced Very High Resolution Radiometer (AVHRR) aboard NOAA satellites. Together, these two systems cover a span of 16 years of thermal infrared data. Techniques for retrieving surface temperatures with these sensors in the polar regions have been developed. Errors have been estimated to range from 1 K to 5 K, mainly due to cloud-masking problems. With many additional channels available, it is expected that the EOS Moderate Resolution Imaging Spectroradiometer (MODIS) will provide an improved characterization of clouds and a good discrimination of clouds from snow or ice surfaces.
Solar System and stellar tests of a quantum-corrected gravity
NASA Astrophysics Data System (ADS)
Zhao, Shan-Shan; Xie, Yi
2015-09-01
The renormalization group running of the gravitational constant has a universal form and represents a possible extension of general relativity. These renormalization group effects cause a running of the gravitational constant, governed by a renormalization scale parameter α_ν, which depends on the mass of an astronomical system and needs to be determined by observations. We test renormalization group effects on general relativity and obtain upper bounds on α_ν at low-mass scales: the Solar System and five binary pulsar systems. Using the supplementary advances of the perihelia provided by the INPOP10a (IMCCE, France) and EPM2011 (IAA RAS, Russia) ephemerides, we obtain new upper bounds on α_ν in the Solar System when the Lense-Thirring effect due to the Sun's angular momentum and the uncertainty of the Sun's quadrupole moment are properly taken into account. These two factors were absent in previous work. We find that INPOP10a yields the upper bound α_ν = (0.3 ± 2.8) × 10⁻²⁰, while EPM2011 gives α_ν = (-2.5 ± 8.3) × 10⁻²¹. Both are tighter than the previous result by 4 orders of magnitude. Furthermore, based on the observational data sets of five binary pulsar systems, PSR J0737-3039, PSR B1534+12, PSR J1756-2251, PSR B1913+16, and PSR B2127+11C, the upper bound is found to be α_ν = (-2.6 ± 5.1) × 10⁻¹⁷. From the bounds of this work at a low-mass scale and the ones at the mass scale of galaxies, we might catch an updated glimpse of the mass dependence of α_ν; it is found that our improvement of the upper bounds in the Solar System can significantly change the possible pattern of the relation between log|α_ν| and log m from a linear one to a power law, where m is the mass of an astronomical system. This suggests that |α_ν| needs to be suppressed more rapidly with decreasing mass in low-mass systems. It also predicts that |α_ν| might have an upper limit in high-mass astrophysical systems, which can be tested in the future.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barack, Leor; Cutler, Curt; Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California 91109
Inspirals of stellar-mass compact objects (COs) into ~10^6 M_⊙ black holes are especially interesting sources of gravitational waves for the planned Laser Interferometer Space Antenna (LISA). The orbits of these extreme-mass-ratio inspirals (EMRIs) are highly relativistic, displaying extreme versions of both perihelion precession and Lense-Thirring precession of the orbital plane. We investigate the question of whether the emitted waveforms can be used to strongly constrain the geometry of the central massive object, and in essence check that it corresponds to a Kerr black hole (BH). For a Kerr BH, all multipole moments of the spacetime have a simple, unique relation to M and S, the BH mass and spin; in particular, the spacetime's mass quadrupole moment Q is given by Q = -S²/M. Here we treat Q as an additional parameter, independent of S and M, and ask how well observation can constrain its difference from the Kerr value. This was already estimated by Ryan, but for the simplified case of circular, equatorial orbits; Ryan also neglected the signal modulations arising from the motion of the LISA satellites. We consider generic orbits and include the modulations due to the satellite motions. For this analysis, we use a family of approximate (basically post-Newtonian) waveforms, which represent the full parameter space of EMRI sources and exhibit the main qualitative features of true, general relativistic waveforms. We extend this parameter space to include (in an approximate manner) an arbitrary value of Q, and then construct the Fisher information matrix for the extended parameter space. By inverting the Fisher matrix, we estimate how accurately Q could be extracted from LISA observations of EMRIs. For 1 yr of coherent data from the inspiral of a 10 M_⊙ black hole into rotating black holes of masses 10^5.5 M_⊙, 10^6 M_⊙, or 10^6.5 M_⊙, we find δ(Q/M³) ≈ 10^-4, 10^-3, or 10^-2, respectively (assuming a total signal-to-noise ratio of 100, typical of the brightest detectable EMRIs). These results depend only weakly on the eccentricity of the inspiral orbit or the spin of the central object.
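For readers unfamiliar with the method: given a waveform h(θ) and a noise-weighted inner product ⟨·|·⟩, the Fisher matrix is Γ_ij = ⟨∂h/∂θ_i | ∂h/∂θ_j⟩, and its inverse bounds the parameter covariance. A toy numpy sketch of that final inversion step, with a made-up 3-parameter matrix rather than the 10+-dimensional EMRI parameter space:

```python
import numpy as np

# Toy Fisher matrix for parameters (M, S, Q); the values are illustrative
# placeholders, not derived from any EMRI waveform model.
fisher = np.array([
    [4.0e8, 1.2e6, 3.0e4],
    [1.2e6, 9.0e5, 1.5e4],
    [3.0e4, 1.5e4, 2.0e3],
])

cov = np.linalg.inv(fisher)      # Cramer-Rao bound on the covariance matrix
sigmas = np.sqrt(np.diag(cov))   # marginalized 1-sigma errors
print("marginalized 1-sigma errors:", sigmas)

# The off-diagonal correlations show how strongly Q trades off against
# mass and spin, which is what degrades the quoted delta(Q/M^3) values.
corr = cov / np.outer(sigmas, sigmas)
print("correlation matrix:\n", np.round(corr, 3))
```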
Development of peptide-containing nerves in the human fetal prostate gland.
Jen, P Y; Dixon, J S
1995-08-01
Immunohistochemical methods were used to study the developing peptidergic innervation of the human fetal prostate gland in a series of specimens ranging in gestational age from 13 to 30 wk. The overall innervation of each specimen was visualised using protein gene product 9.5 (PGP), a general nerve marker. The onset and development of specific neuropeptide-containing subpopulations were investigated using antisera to neuropeptide Y (NPY), vasoactive intestinal peptide (VIP), substance P (SP), calcitonin gene-related peptide (CGRP), bombesin (BOM), somatostatin (SOM), leu-enkephalin (l-ENK) and met-enkephalin (m-ENK). In addition the occurrence and distribution of presumptive noradrenergic nerves was studied using antisera to dopamine-beta-hydroxylase (D beta H) and tyrosine hydroxylase (TH). At 13 wk numerous branching PGP-immunoreactive (-IR) nerves were observed in the capsule of the developing prostate gland and surrounding the preprostatic urethra but the remainder of the gland was devoid of nerves. The majority of nerves in the capsule contained D beta H and TH and were presumed to be noradrenergic in type while other nerves (in decreasing numbers) contained NPY, l-ENK, SP and CGRP. Nerves associated with the preprostatic urethra did not contain any of the neuropeptides under investigation. At 17 wk the density of nerves in the capsule had increased and occasional m-ENK-, VIP- and BOM-IR nerve fibres were also observed. In addition PGP, D beta H-, TH-, NPY- and l-ENK-IR nerves occurred in association with smooth muscle bundles which at 17 wk were present in the outer part of the gland. Occasional PGP-IR nerves were also present at the base of the epithelium forming some of the prostatic glands. At 23 wk some of the subepithelial nerves showed immunoreactivity for NPY, VIP or l-ENK. At 26 wk smooth muscle bundles occurred throughout the gland and were richly innervated by PGP, D beta H and TH-IR nerves while a less dense plexus was formed by NPY- and l-ENK-IR nerves together with a few m-ENK-IR nerves. Occasional smooth muscle-associated varicose nerve fibres showed immunoreactivity for SP, CGRP, VIP or BOM although the majority of these types of nerve formed perivascular plexuses. Also at 26 wk numerous varicose nerve fibres were observed in association with the prostatic acini, the majority of such nerves containing NPY with a few showing immunoreactivity to VIP, l-ENK, SP or CGRP.(ABSTRACT TRUNCATED AT 400 WORDS)
Cold Atom Optics on Ground and in Space
NASA Astrophysics Data System (ADS)
Rasel, E. M.
Microgravity is the ultimate laboratory environment for experiments in fundamental physics based on cold atoms. The talk will give a survey of recent activities on atomic quantum sensors and atom lasers. Inertial atomic quantum sensors are a promising and complementary technique for experiments in fundamental physics. Pioneering experiments at Yale [1,2] and Stanford [3] have recently displayed the fascinating potential of matter-wave interferometers for precision measurements. The talk will present the status of a transportable matter-wave sensor under development at the Institut für Quantenoptik in Hannover: CASI, the Cold Atom Sagnac Interferometer. The use of cold atoms makes it possible to realise compact devices with sensitivities competitive with classical state-of-the-art sensors. CASI's projected sensitivity is about 10⁻⁹ rad/s/√Hz at the projection noise limit. The heart of our set-up will be a 15 cm-long Mach-Zehnder interferometer formed by coherently splitting the atoms with Raman-type interactions. CASI is designed as a movable device, so that it can be compared with other matter-wave sensors such as the cold caesium atom gyroscope at BNM-SYRTE in Paris [4]. CASI is intimately connected with HYPER, a European initiative to send four atom interferometers into space hosted on a drag-free satellite. The main emphasis of the mission is placed on mapping the Earth's Lense-Thirring effect. A test of the Equivalence Principle is under consideration as an alternative goal of high scientific value. HYPER was selected three years ago by the European Space Agency (ESA) as a candidate for a future small-satellite mission within the next 10 to 15 years and is supported by detailed feasibility studies [5]. The latest status of the mission will be given. [1] T.L. Gustavson, A. Landragin, M.A. Kasevich, Rotation sensing with a dual atom-interferometer Sagnac gyroscope, Class. Quantum Grav. 17, 2385-2398 (2000) [2] J.M. McGuirk, G.T. Foster, J.B. Fixler, M.J. Snadden, M.A. Kasevich, Sensitive absolute-gravity gradiometry using atom interferometry, Phys. Rev. A 65, 033608 (2002) [3] A. Peters, K.Y. Chung, S. Chu, High-precision gravity measurements using atom interferometry, Metrologia 38, 25-61 (2001) [4] F. Yver-Leduc, P. Cheinet, J. Fils, A. Clairon, N. Dimarcq, D. Holleville, P. Bouyer, and A. Landragin, J. Opt. B: Quantum Semiclass. Opt. 5, S136 (2003) [5] http://sci.esa.int/home/hyper/index.cfm
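The advantage of atom interferometers comes from the matter-wave Sagnac phase Δφ = (2m/ħ)Ω·A, which exceeds the optical Sagnac phase for the same area by the large ratio mc²/ħω. A quick evaluation with illustrative parameters (the atomic species, enclosed area and atom flux below are assumptions, not published CASI specifications):

```python
import math

hbar = 1.0546e-34       # J s
m_atom = 87 * 1.6605e-27  # mass of 87Rb (kg); species assumed for illustration
A = 1e-4                # enclosed interferometer area (m^2, ~1 cm^2, assumed)
omega = 7.292e-5        # Earth rotation rate (rad/s), full projection assumed

# Matter-wave Sagnac phase: delta_phi = 2 * m * Omega * A / hbar
dphi = 2 * m_atom * omega * A / hbar
print(f"Sagnac phase from Earth rotation ~ {dphi:.1f} rad")

# Shot-noise-limited rotation sensitivity with N detected atoms per second:
N = 1e6                 # detected atom flux (1/s), assumed
sens = hbar / (2 * m_atom * A * math.sqrt(N))
print(f"sensitivity ~ {sens:.1e} rad/s/sqrt(Hz)")  # ~ 1e-9, cf. CASI goal
```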
The OPTIS satellite-improved tests of Special and General Relativity
NASA Astrophysics Data System (ADS)
Scheithauer, Silvia; Laemmerzahl, Claus; Dittus, Hansjoerg; Schiller, Stephan; Peters, Achim
2005-06-01
The OPTIS satellite mission is an international collaboration, initiated by three German university institutes, aiming at improved tests of the foundations of Special and General Relativity. The mission idea, which has already passed an initial feasibility study, is to contribute to one of the most challenging projects of physics in this century: the search for a theory of Quantum Gravity. This theory should resolve the incompatibilities between quantum theory and Einstein's General Relativity. All approaches to a Quantum Gravity theory predict small deviations from Special and General Relativity. If such deviations could be found (e.g., an anisotropy of the speed of light, violations of the universality of the gravitational redshift or of the universality of free fall), the way to a new understanding of the time and space structure of the universe would be open. The goal of the OPTIS satellite mission is therefore an improvement in the accuracy of tests of the foundations of Special and General Relativity by up to three orders of magnitude. For that purpose, several experiments will be carried out on board the OPTIS satellite testing (i) the isotropy of the speed of light, (ii) the independence of the speed of light from the velocity of the laboratory system, (iii) the universality of the gravitational redshift, (iv) the absolute gravitational redshift and (v) the special relativistic time dilation. Furthermore, orbit analyses will be done in order to measure (vi) the Lense-Thirring effect and (vii) perigee advance, as well as to test (viii) the Newtonian gravitational potential. The benefit of bringing these experiments into space is the nearly disturbance-free environment, allowing precise measurements and long measurement times. The OPTIS mission will use already available key technologies such as optical cavities, highly stabilised lasers, atomic clocks, frequency combs, capacitive gravitational reference sensors, drag-free control, laser tracking and laser linking systems. For most of the proposed tests the measurements are done by comparing the rates of different clocks. For the test of the isotropy of the velocity of light (Michelson-Morley experiment), the frequencies of resonators ("light clocks") pointing in different directions are compared. Concerning the constancy of the speed of light (Kennedy-Thorndike experiment), a resonator and atomic clocks are compared at varying velocities. For tests of time dilation, the rates of clocks in different states of motion are compared, and for testing the universality of the gravitational redshift, clocks at different positions in the gravitational field are compared. This paper will give an overview of the OPTIS satellite mission, including the science goals, science requirements, key technologies, measurement principles and devices.
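Most of these tests reduce to comparing clock rates at different positions in the gravitational field, where to first order the fractional frequency shift between two orbit radii is Δf/f = ΔU/c² with U = -GM/r. A minimal sketch for an elliptical orbit; the perigee and apogee values are assumptions for illustration, not the mission's actual orbit:

```python
G = 6.674e-11
c = 2.998e8
M_e = 5.972e24  # Earth mass (kg)
R_e = 6.371e6   # Earth radius (m)

# Assumed highly elliptical orbit (illustrative values only)
r_peri = R_e + 1.0e7  # perigee radius (m)
r_apo = R_e + 4.0e7   # apogee radius (m)

# Gravitational redshift modulation between perigee and apogee:
# df/f = (U_apo - U_peri)/c^2 = GM/c^2 * (1/r_peri - 1/r_apo)
df_over_f = G * M_e / c**2 * (1 / r_peri - 1 / r_apo)
print(f"peak-to-peak gravitational frequency modulation ~ {df_over_f:.2e}")
# ~2e-10: a clock with 1e-15 fractional stability resolves this modulation
# with large margin, which is what enables an improved redshift test.
```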
NASA Astrophysics Data System (ADS)
Assefa, Y.
The Bale mountain range is located between the wet east African mountains (proper) and the dry northeast African mountains, in southeast Ethiopia. This mountain range hosts endemic flora and fauna that are endangered with extinction. The most extensive Ericaceous vegetation on the continent is found in the Bale Mountains. The southern slope of this mountain range is known for the distinct vegetation zonation of its Afromontane forests. The Ericaceous vegetation between the montane forest and the afroalpine belt on this slope is relatively little disturbed compared with similar Ericaceous vegetation elsewhere in Africa. A study on the distribution and structure of this vegetation was made from Nov. 1999 to April 2000 on the southern slope, the Harrena escarpment. The vegetation north of Rira village, between 3000 m and 4200 m, was sampled after selecting continuous homogeneous sites systematically along the altitudinal gradient. Cover abundance of vascular plant species, together with frequency, height and DBH of woody treeline species, was recorded in 110 quadrats. Environmental parameters along the altitudinal gradient, including soil pH, texture, total nitrogen, and soil moisture, were measured. Altitude, slope, and aspect were measured for all quadrats. All environmental and vegetation data were analyzed with Syntax, Canoco, Minitab and SigmaPlot. Anthropogenic data were collected using a questionnaire and analyzed. Thirteen community types were described, and their distribution showed a clear pattern in different parts of the Ericaceous vegetation. However, some community types that had been considered restricted to the afroalpine belt were found in the Ericaceous vegetation. This might be an indication of the expansion of the afroalpine belt to lower altitude, even below 3400 m (the Erica-dominated Hagenia-Hypericum zone). The height of the tree and shrub species showed a decreasing tendency with increasing altitude. This trend was very gradual for E. trimera: the species occurs over about 1.2 km of altitudinal range, showing differences in height and habit along the altitudinal gradient. The regression analysis (r² = 0.58) showed a consistent decrease in height with altitude. No abrupt transition was documented in the systematically selected continuous Ericaceous vegetation. Among the environmental parameters measured, altitude was the strongest explanatory variable, while incidence of fire was correlated with socioeconomic parameters and relief. Soil pH and texture showed a stronger correlation with altitude, while percent total nitrogen showed a more significant (p<0.01) correlation with microsite factors. Local people burn the Ericaceous vegetation mainly for grazing. Therefore, strategies that may reduce the rate of burning should take into account the pasture needs of the pastoral and semi-pastoral local communities. Creating income-generating alternatives for the increasing population at Rira village is another option. Barley used to be cultivated around Rira village in limited places, but now indigenous settlers who mainly depended on animal rearing are shifting to mixed farming practices as the population increases. This could jeopardize the watershed of the area, in addition to the loss of biodiversity. Increasing awareness among the people of the wise use of forests, for example through school environmental clubs, is also a possible way of approaching the local people.
NASA Astrophysics Data System (ADS)
Braxmaier, Claus; Dittus, Hansjörg; Foulon, Bernard; Göklü, Ertan; Grimani, Catia; Guo, Jian; Herrmann, Sven; Lämmerzahl, Claus; Ni, Wei-Tou; Peters, Achim; Rievers, Benny; Samain, Étienne; Selig, Hanns; Shaul, Diana; Svehla, Drazen; Touboul, Pierre; Wang, Gang; Wu, An-Ming; Zakharov, Alexander F.
2012-10-01
ASTROD I is a planned interplanetary space mission with multiple goals. The primary aims are: to test General Relativity with an improvement in sensitivity of over 3 orders of magnitude, improving our understanding of gravity and aiding the development of a new quantum gravity theory; to measure key solar system parameters with increased accuracy, advancing solar physics and our knowledge of the solar system; and to measure the time rate of change of the gravitational constant with an order of magnitude improvement, together with the anomalous Pioneer acceleration, thereby probing dark matter and dark energy gravitationally. It is envisaged as the first in a series of ASTROD missions. ASTROD I will consist of one spacecraft carrying a telescope, four lasers, two event timers and a clock. Two-way, two-wavelength laser pulse ranging will be used between the spacecraft in a solar orbit and deep space laser stations on Earth to achieve the ASTROD I goals. For this mission, accurate pulse timing with an ultra-stable clock and a drag-free spacecraft with a reliable inertial sensor are required. T2L2 has demonstrated the required accurate pulse timing; the rubidium clock on board Galileo has largely demonstrated the required clock stability; the accelerometer on board GOCE has paved the way for achieving the reliable inertial sensor; and the demonstration of LISA Pathfinder will provide an excellent platform for the implementation of the ASTROD I drag-free spacecraft. These European activities form the pillars for building up the mission and for bringing the needed technologies to readiness. A second mission, ASTROD or ASTROD-GW (depending on the results of ASTROD I), is envisaged as a three-spacecraft mission which, in the case of ASTROD, would test General Relativity to one part per billion, enable detection of solar g-modes, measure the solar Lense-Thirring effect to 10 parts per million, and probe gravitational waves at frequencies below the LISA bandwidth; in the case of ASTROD-GW, it would be dedicated to probing gravitational waves at frequencies below the LISA bandwidth, down to 100 nHz, and to detecting solar g-mode oscillations. In a third phase (Super-ASTROD), larger orbits could be implemented to map the outer solar system and to probe primordial gravitational waves at frequencies below the ASTROD bandwidth. This paper on ASTROD I is based on our 2010 proposal submitted for the ESA call for class-M mission proposals, and is a sequel and an update to our previous paper (Appouchaux et al., Exp Astron 23:491-527, 2009; designated as Paper I), which was based on our proposal submitted for the 2007 ESA call. In this paper, we present our orbit selection with one Venus swing-by, together with an orbit simulation; in Paper I, our orbit choice involved two Venus swing-bys. The present choice takes a shorter time (about 250 days) to reach the opposite side of the Sun. We also present a preliminary design of the optical bench, and elaborate on the solar physics goals with the radiation monitor payload. We discuss telescope size, trade-offs of drag-free sensitivities, and thermal issues, and present an outlook.
Radiation Environment Modeling for Spacecraft Design: New Model Developments
NASA Technical Reports Server (NTRS)
Barth, Janet; Xapsos, Mike; Lauenstein, Jean-Marie; Ladbury, Ray
2006-01-01
A viewgraph presentation on various new space radiation environment models for spacecraft design is described. The topics include: 1) The Space Radiation Environment; 2) Effects of Space Environments on Systems; 3) Space Radiation Environment Model Use During Space Mission Development and Operations; 4) Space Radiation Hazards for Humans; 5) "Standard" Space Radiation Environment Models; 6) Concerns about Standard Models; 7) Inadequacies of Current Models; 8) Development of New Models; 9) New Model Developments: Proton Belt Models; 10) Coverage of New Proton Models; 11) Comparison of TPM-1, PSB97, AP-8; 12) New Model Developments: Electron Belt Models; 13) Coverage of New Electron Models; 14) Comparison of "Worst Case" POLE, CRESELE, and FLUMIC Models with the AE-8 Model; 15) New Model Developments: Galactic Cosmic Ray Model; 16) Comparison of NASA, MSU, CIT Models with ACE Instrument Data; 17) New Model Developments: Solar Proton Model; 18) Comparison of ESP, JPL91, King/Stassinopoulos, and PSYCHIC Models; 19) New Model Developments: Solar Heavy Ion Model; 20) Comparison of CREME96 to CREDO Measurements During 2000 and 2002; 21) PSYCHIC Heavy Ion Model; 22) Model Standardization; 23) Working Group Meeting on New Standard Radiation Belt and Space Plasma Models; and 24) Summary.
Hong, Sehee; Kim, Soyoung
2018-01-01
There are basically two modeling approaches applicable to analyzing an actor-partner interdependence model: multilevel modeling (hierarchical linear modeling) and structural equation modeling. This article explains how to use these two approaches in analyzing an actor-partner interdependence model and how they work differently. As an empirical example, marital conflict data were used to fit an actor-partner interdependence model. The multilevel modeling and structural equation modeling approaches produced virtually identical estimates for a basic model. However, the structural equation modeling approach allowed more realistic assumptions on measurement errors and factor loadings, yielding better model fit indices.
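For readers new to the actor-partner interdependence model (APIM): each dyad member's outcome is regressed on both their own predictor (the actor effect) and their partner's predictor (the partner effect). A minimal sketch on simulated dyads, using plain least squares and deliberately ignoring the dyadic error correlation that the multilevel and SEM approaches above are designed to handle:

```python
import numpy as np

rng = np.random.default_rng(0)
n_dyads = 200

# Simulate dyads: x1/x2 are each member's predictor (zero-mean by design)
x1, x2 = rng.normal(size=(2, n_dyads))
actor, partner = 0.5, 0.3  # true actor and partner effects (assumed)
y1 = actor * x1 + partner * x2 + rng.normal(scale=0.5, size=n_dyads)
y2 = actor * x2 + partner * x1 + rng.normal(scale=0.5, size=n_dyads)

# Stack both members: each outcome is paired with own-x (actor column)
# and other-x (partner column).
X = np.column_stack([np.concatenate([x1, x2]), np.concatenate([x2, x1])])
y = np.concatenate([y1, y2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"actor effect ~ {coef[0]:.2f}, partner effect ~ {coef[1]:.2f}")
```

The point estimates come out close to the true values here; what the multilevel and SEM machinery adds is correct standard errors under within-dyad correlation, plus measurement-error modeling in the SEM case.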
[Analysis of the stability and adaptability of near infrared spectra qualitative analysis model].
Cao, Wu; Li, Wei-jun; Wang, Ping; Zhang, Li-ping
2014-06-01
The stability and adaptability of qualitative analysis models for near-infrared spectra were studied. Separate modeling can significantly improve model stability and adaptability, but its ability to improve adaptability is limited. Joint modeling can improve not only the adaptability but also the stability of the model; at the same time, compared with separate modeling, it can shorten modeling time, reduce the modeling workload, extend the term of validity of the model, and improve modeling efficiency. The model adaptability experiment shows that the correct recognition rate of the separate modeling method is relatively low and cannot meet application requirements, whereas the joint modeling method can reach a correct recognition rate of 90% and significantly enhances recognition performance. The model stability experiment shows that the identification results of the jointly built model are better than those of the separately built model, demonstrating good application value.
1992-12-01
This report evaluates four cost progress models--a random walk model, the traditional learning curve model, a production rate adjustment model (fixed-variable model), and a Bemis model--examining model bias and cost prediction bias; the predictions are positively correlated among the various models.
Experience with turbulence interaction and turbulence-chemistry models at Fluent Inc.
NASA Technical Reports Server (NTRS)
Choudhury, D.; Kim, S. E.; Tselepidakis, D. P.; Missaghi, M.
1995-01-01
This viewgraph presentation discusses (1) turbulence modeling: challenges in turbulence modeling, desirable attributes of turbulence models, turbulence models in FLUENT, and examples using FLUENT; and (2) combustion modeling: turbulence-chemistry interaction and the FLUENT equilibrium model. At present, three turbulence models are provided: the conventional k-epsilon model, the renormalization group model, and the Reynolds-stress model. The renormalization group k-epsilon model has broadened the range of applicability of two-equation turbulence models. The Reynolds-stress model has proved useful for strongly anisotropic flows such as those encountered in cyclones, swirlers, and combustors. Issues such as near-wall closure remain with all classes of models.
ERIC Educational Resources Information Center
Freeman, Thomas J.
This paper discusses six different models of organizational structure and leadership, including the scalar chain or pyramid model, the continuum model, the grid model, the linking pin model, the contingency model, and the circle or democratic model. Each model is examined in a separate section that describes the model and its development, lists…
SUMMA and Model Mimicry: Understanding Differences Among Land Models
NASA Astrophysics Data System (ADS)
Nijssen, B.; Nearing, G. S.; Ou, G.; Clark, M. P.
2016-12-01
Model inter-comparison and model ensemble experiments suffer from an inability to explain the mechanisms behind differences in model outcomes. We can clearly demonstrate that the models are different, but we cannot necessarily identify the reasons why, because most models exhibit myriad differences in process representations, model parameterizations, model parameters and numerical solution methods. This inability to identify the reasons for differences in model performance hampers our understanding and limits model improvement, because we cannot easily identify the most promising paths forward. We have developed the Structure for Unifying Multiple Modeling Alternatives (SUMMA) to allow for controlled experimentation with model construction, numerical techniques, and parameter values and therefore isolate differences in model outcomes to specific choices during the model development process. In developing SUMMA, we recognized that hydrologic models can be thought of as individual instantiations of a master modeling template that is based on a common set of conservation equations for energy and water. Given this perspective, SUMMA provides a unified approach to hydrologic modeling that integrates different modeling methods into a consistent structure with the ability to instantiate alternative hydrologic models at runtime. Here we employ SUMMA to revisit a previous multi-model experiment and demonstrate its use for understanding differences in model performance. Specifically, we implement SUMMA to mimic the spread of behaviors exhibited by the land models that participated in the Protocol for the Analysis of Land Surface Models (PALS) Land Surface Model Benchmarking Evaluation Project (PLUMBER) and draw conclusions about the relative performance of specific model parameterizations for water and energy fluxes through the soil-vegetation continuum. SUMMA's ability to mimic the spread of model ensembles and the behavior of individual models can be an important tool in focusing model development and improvement efforts.
Seven Modeling Perspectives on Teaching and Learning: Some Interrelations and Cognitive Effects
ERIC Educational Resources Information Center
Easley, J. A., Jr.
1977-01-01
The categories of models associated with the seven perspectives are designated as combinatorial models, sampling models, cybernetic models, game models, critical thinking models, ordinary language analysis models, and dynamic structural models. (DAG)
NASA Astrophysics Data System (ADS)
Clark, Martyn; Essery, Richard
2017-04-01
When faced with the complex and interdisciplinary challenge of building process-based land models, different modelers make different decisions at different points in the model development process. These modeling decisions are generally based on several considerations, including fidelity (e.g., what approaches faithfully simulate observed processes), complexity (e.g., which processes should be represented explicitly), practicality (e.g., what is the computational cost of the model simulations; are there sufficient resources to implement the desired modeling concepts), and data availability (e.g., is there sufficient data to force and evaluate models). Consequently, the research community, comprising modelers of diverse background, experience, and modeling philosophy, has amassed a wide range of models, which differ in almost every aspect of their conceptualization and implementation. Model comparison studies have been undertaken to explore model differences, but have not been able to meaningfully attribute inter-model differences in predictive ability to individual model components because there are often too many structural and implementation differences among the different models considered. As a consequence, model comparison studies to date have provided limited insight into the causes of differences in model behavior, and model development has often relied on the inspiration and experience of individual modelers rather than on a systematic analysis of model shortcomings. This presentation will summarize the use of "multiple-hypothesis" modeling frameworks to understand differences in process-based snow models. Multiple-hypothesis frameworks define a master modeling template, and include a wide variety of process parameterizations and spatial configurations that are used in existing models. Such frameworks provide the capability to decompose complex models into the individual decisions that are made as part of model development, and evaluate each decision in isolation. It is hence possible to attribute differences in system-scale model predictions to individual modeling decisions, providing scope to mimic the behavior of existing models, understand why models differ, characterize model uncertainty, and identify productive pathways to model improvement. Results will be presented from applying multiple-hypothesis frameworks to snow model comparison projects, including PILPS, SnowMIP, and the upcoming ESM-SnowMIP project.
Research on Multi-Person Parallel Modeling Method Based on Integrated Model Persistent Storage
NASA Astrophysics Data System (ADS)
Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Liu, Ying
2018-03-01
This paper studies a multi-person parallel modeling method based on integrated-model persistent storage. The integrated model refers to a set of MDDT modeling graphics systems that can describe aerospace general-purpose embedded software from multiple angles, at multiple levels, and across multiple stages. Persistent storage refers to converting the data model in memory into a storage model and converting the storage model back into a data model in memory, where the data model is the object model and the storage model is a binary stream. Multi-person parallel modeling refers to the need for multi-person collaboration, separation of roles, and even real-time remote synchronized modeling.
Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method
NASA Astrophysics Data System (ADS)
Tsai, F. T. C.; Elshall, A. S.
2014-12-01
Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are from geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi: 10.1016/j.jhydrol.2014.05.027.
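The variance segregation described above follows the law of total variance. A minimal sketch, with our own (hypothetical) function name, assuming per-model predictive means, variances, and posterior model probabilities are already available:

```python
import numpy as np

def bma_variance_split(means, variances, weights):
    """Split total BMA predictive variance into within-model and
    between-model components via the law of total variance."""
    w = np.asarray(weights, dtype=float)    # posterior model probabilities
    m = np.asarray(means, dtype=float)      # per-model predictive means
    v = np.asarray(variances, dtype=float)  # per-model predictive variances
    mean_bma = np.sum(w * m)                # model-averaged prediction
    within = np.sum(w * v)                  # E[Var(y|M)]
    between = np.sum(w * (m - mean_bma) ** 2)  # Var(E[y|M])
    return mean_bma, within, between, within + between
```

In the HBMA tree, this split is applied recursively, once per level of uncertain model components, which is what allows the between-model variance to be attributed to individual components.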
ERIC Educational Resources Information Center
Thelen, Mark H.; And Others
1977-01-01
Assesses the influence of model consequences on perceived model affect and, conversely, assesses the influence of model affect on perceived model consequences. Also appraises the influence of model consequences and model affect on perceived model attractiveness, perceived model competence, and perceived task attractiveness. (Author/RK)
Bayesian Model Averaging of Artificial Intelligence Models for Hydraulic Conductivity Estimation
NASA Astrophysics Data System (ADS)
Nadiri, A.; Chitsazan, N.; Tsai, F. T.; Asghari Moghaddam, A.
2012-12-01
This research presents a Bayesian artificial intelligence model averaging (BAIMA) method that incorporates multiple artificial intelligence (AI) models to estimate hydraulic conductivity and evaluate estimation uncertainties. Uncertainty in the AI model outputs stems from error in model input as well as non-uniqueness in selecting different AI methods. Using a single AI model tends to bias the estimation and underestimate uncertainty. BAIMA employs the Bayesian model averaging (BMA) technique to address the issue of relying on a single AI model for estimation. BAIMA estimates hydraulic conductivity by averaging the outputs of AI models according to their model weights. In this study, the model weights were determined using the Bayesian information criterion (BIC), which follows the parsimony principle. BAIMA calculates the within-model variances to account for uncertainty propagation from input data to AI model output. Between-model variances are evaluated to account for uncertainty due to model non-uniqueness. We employed Takagi-Sugeno fuzzy logic (TS-FL), an artificial neural network (ANN) and a neurofuzzy (NF) model to estimate hydraulic conductivity for the Tasuj plain aquifer, Iran. BAIMA combined the three AI models and produced better fits than the individual models. While NF was expected to be the best AI model owing to its utilization of both TS-FL and ANN models, the NF model was nearly discarded by the parsimony principle. The TS-FL model and the ANN model showed equal importance although their hydraulic conductivity estimates were quite different. This resulted in significant between-model variances that are normally ignored when using a single AI model.
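As a hedged sketch of the weighting step named above: posterior model probabilities can be approximated from BIC differences (assuming equal prior model probabilities), then used to average the per-model estimates. The function names and the numbers in the toy example are ours, not the paper's:

```python
import numpy as np

def bic_weights(bics):
    """Approximate posterior model probabilities from BIC values."""
    delta = np.asarray(bics, dtype=float)
    delta = delta - delta.min()      # shift for numerical stability
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Toy use: three AI models with hypothetical BIC scores and estimates
k_estimates = np.array([2.1, 3.4, 2.8])   # hydraulic conductivity, m/day
w = bic_weights([104.2, 103.9, 110.5])    # a poor BIC nearly zeroes a model
k_bma = float(np.sum(w * k_estimates))    # BMA point estimate
```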
A Smart Modeling Framework for Integrating BMI-enabled Models as Web Services
NASA Astrophysics Data System (ADS)
Jiang, P.; Elag, M.; Kumar, P.; Peckham, S. D.; Liu, R.; Marini, L.; Hsu, L.
2015-12-01
Service-oriented computing provides an opportunity to couple web service models using semantic web technology. Through this approach, models that are exposed as web services can be conserved in their own local environment, thus making it easy for modelers to maintain and update the models. In integrated modeling, the service-oriented loose-coupling approach requires (1) a set of models as web services, (2) model metadata describing the external features of a model (e.g., variable name, unit, computational grid, etc.) and (3) a model integration framework. We present the architecture of coupling web service models that are self-describing by utilizing a smart modeling framework. We expose models that are encapsulated with CSDMS (Community Surface Dynamics Modeling System) Basic Model Interfaces (BMI) as web services. The BMI-enabled models are self-describing, exposing their metadata through BMI functions. After a BMI-enabled model is serviced, a client can initialize, execute and retrieve the meta-information of the model by calling its BMI functions over the web. Furthermore, a revised version of EMELI (Peckham, 2015), an Experimental Modeling Environment for Linking and Interoperability, is chosen as the framework for coupling BMI-enabled web service models. EMELI allows users to combine a set of component models into a complex model by standardizing the model interface using BMI, as well as providing a set of utilities that smooth the integration process (e.g., temporal interpolation). We modify the original EMELI so that the revised modeling framework is able to initialize, execute and find the dependencies of the BMI-enabled web service models. Using the revised EMELI, an example is presented of integrating a set of topoflow model components that are BMI-enabled and exposed as web services. Reference: Peckham, S.D. (2014) EMELI 1.0: An experimental smart modeling framework for automatic coupling of self-describing models, Proceedings of HIC 2014, 11th International Conf. on Hydroinformatics, New York, NY.
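For orientation, the sketch below wraps a toy linear-reservoir model in a schematic subset of BMI-style methods (initialize, update, named get/set). It is illustrative only, not the full CSDMS BMI specification, and the model itself is invented:

```python
class BmiToyReservoir:
    """Toy model behind a schematic subset of BMI-style methods."""

    def initialize(self, config=None):
        self.time = 0.0
        self.storage = 0.0
        self.k = (config or {}).get("recession_coefficient", 0.1)

    def get_input_var_names(self):
        return ("precipitation",)      # metadata a service can expose

    def get_output_var_names(self):
        return ("runoff",)

    def set_value(self, name, value):
        if name == "precipitation":
            self.storage += value      # add rain to the reservoir

    def get_value(self, name):
        if name == "runoff":
            return self.k * self.storage

    def update(self):
        self.storage -= self.k * self.storage  # drain linear reservoir
        self.time += 1.0
```

Once a model answers these calls, a thin HTTP layer can forward each web request to the corresponding method, which is essentially the serviced-BMI pattern the abstract describes.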
Curtis, Gary P.; Lu, Dan; Ye, Ming
2015-01-01
While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. This study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. These reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Limitations of applying MLBMA to the synthetic study and future real-world modeling are discussed.
NASA Astrophysics Data System (ADS)
Wang, S.; Peters-Lidard, C. D.; Mocko, D. M.; Kumar, S.; Nearing, G. S.; Arsenault, K. R.; Geiger, J. V.
2014-12-01
Model integration bridges the data flow between modeling frameworks and models. However, models usually do not fit directly into a particular modeling environment if they were not designed for it. One example is implementing different types of models in the NASA Land Information System (LIS), a software framework for land-surface modeling and data assimilation. Model implementation requires scientific knowledge and software expertise, and it may take a developer months to learn the structure of LIS and the model software. Debugging and testing of the model implementation is also time-consuming due to not fully understanding LIS or the model. This time spent is costly for research and operational projects. To address this issue, an approach has been developed to automate model integration into LIS. With this in mind, a general model interface was designed to retrieve the forcing inputs, parameters, and state variables needed by the model and to provide state variables and outputs back to LIS. Every model can be wrapped to comply with the interface, usually with a FORTRAN 90 subroutine. Development efforts need only knowledge of the model and basic programming skills. With such wrappers, the logic is the same for implementing all models. Code templates defined for this general model interface can be re-used with any specific model; therefore, the model implementation can be done automatically. An automated model implementation toolkit was developed with Microsoft Excel and its built-in VBA language. It allows model specifications in three worksheets and contains FORTRAN 90 code templates in VBA programs. According to the model specification, the toolkit generates data structures and procedures within FORTRAN modules and subroutines, which transfer data between LIS and the model wrapper. Model implementation is standardized, and about 80-90% of the development load is reduced. In this presentation, the automated model implementation approach is described along with LIS programming interfaces, the general model interface and five case studies, including a regression model, Noah-MP, FASST, SAC-HTET/SNOW-17, and FLake. These models vary in complexity and software structure. We also describe how these complexities were overcome using this approach, and we present results of model benchmarks within LIS.
Literature review of models on tire-pavement interaction noise
NASA Astrophysics Data System (ADS)
Li, Tan; Burdisso, Ricardo; Sandu, Corina
2018-04-01
Tire-pavement interaction noise (TPIN) becomes dominant at speeds above 40 km/h for passenger vehicles and 70 km/h for trucks. Several models have been developed to describe and predict the TPIN. However, these models do not fully reveal the physical mechanisms or predict TPIN accurately. It is well known that all the models have both strengths and weaknesses, and different models fit different investigation purposes or conditions. The numerous papers that present these models are widely scattered among thousands of journals, and it is difficult to get the complete picture of the status of research in this area. This review article aims at presenting the history and current state of TPIN models systematically, making it easier to identify and distribute the key knowledge and opinions, and providing insight into the future research trend in this field. In this work, over 2000 references related to TPIN were collected, and 74 models were reviewed from nearly 200 selected references; these were categorized into deterministic models (37), statistical models (18), and hybrid models (19). The sections explaining the models are self-contained with key principles, equations, and illustrations included. The deterministic models were divided into three sub-categories: conventional physics models, finite element and boundary element models, and computational fluid dynamics models; the statistical models were divided into three sub-categories: traditional regression models, principal component analysis models, and fuzzy curve-fitting models; the hybrid models were divided into three sub-categories: tire-pavement interface models, mechanism separation models, and noise propagation models. At the end of each category of models, a summary table is presented to compare these models with the key information extracted. Readers may refer to these tables to find models of their interest. The strengths and weaknesses of the models in different categories were then analyzed. Finally, the modeling trend and future direction in this area are given.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ajami, N K; Duan, Q; Gao, X
2005-04-11
This paper examines several multi-model combination techniques: the Simple Multi-model Average (SMA), the Multi-Model Super Ensemble (MMSE), Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single model predictions are generally better than any single member model predictions, even the best calibrated single model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporated bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.
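A rough sketch of two of the combination schemes compared above (our simplification, not the DMIP code): SMA is an equal-weight mean of the member predictions, while a WAM-style scheme fits member weights to observations by least squares over a training period.

```python
import numpy as np

def simple_multimodel_average(preds):
    """SMA: equal-weight mean; preds has shape (n_models, n_times)."""
    return np.mean(np.asarray(preds), axis=0)

def weighted_average(preds_train, obs_train, preds_new):
    """WAM-style combination: least-squares member weights fit on a
    training period, then applied to new member predictions."""
    A = np.asarray(preds_train).T                # (n_times, n_models)
    w, *_ = np.linalg.lstsq(A, np.asarray(obs_train), rcond=None)
    return np.asarray(preds_new).T @ w           # combined hydrograph
```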
Expert models and modeling processes associated with a computer-modeling tool
NASA Astrophysics Data System (ADS)
Zhang, Baohui; Liu, Xiufeng; Krajcik, Joseph S.
2006-07-01
Holding the premise that the development of expertise is a continuous process, this study concerns expert models and modeling processes associated with a modeling tool called Model-It. Five advanced Ph.D. students in environmental engineering and public health used Model-It to create and test models of water quality. Using a think-aloud technique and video recording, we captured their on-screen modeling activities and thinking processes. We also interviewed them the day following their modeling sessions to further probe the rationale of their modeling practices. We analyzed both the audio-video transcripts and the experts' models. We found that the experts' modeling processes followed the linear sequence built into the modeling program, with few instances of moving back and forth. They specified their goals up front and spent a long time thinking through an entire model before acting. They specified relationships with accurate and convincing evidence. Factors (i.e., variables) in expert models were clustered and represented by specialized technical terms. Based on the above findings, we made suggestions for improving model-based science teaching and learning using Model-It.
Illustrating a Model-Game-Model Paradigm for Using Human Wargames in Analysis
2017-02-01
Working Paper. Illustrating a Model-Game-Model Paradigm for Using Human Wargames in Analysis, Paul K. Davis, RAND National Security Research... The paper proposes and illustrates an analysis-centric paradigm (model-game-model, or what might better be called model-exercise-model in some cases) for... to involve stakeholders in model development from the outset. The model-game-model paradigm was illustrated in an application to crisis planning.
NASA Astrophysics Data System (ADS)
Ichii, K.; Suzuki, T.; Kato, T.; Ito, A.; Hajima, T.; Ueyama, M.; Sasai, T.; Hirata, R.; Saigusa, N.; Ohtani, Y.; Takagi, K.
2010-07-01
Terrestrial biosphere models show large differences when simulating carbon and water cycles, and reducing these differences is a priority for developing more accurate estimates of the condition of terrestrial ecosystems and future climate change. To reduce uncertainties and improve the understanding of their carbon budgets, we investigated the utility of eddy flux datasets for improving model simulations and reducing variability among multi-model outputs of terrestrial biosphere models in Japan. Using 9 terrestrial biosphere models (Support Vector Machine-based regressions, TOPS, CASA, VISIT, Biome-BGC, DAYCENT, SEIB, LPJ, and TRIFFID), we conducted two simulations: (1) point simulations at four eddy flux sites in Japan and (2) spatial simulations for Japan with a default model (based on original settings) and a modified model (based on model parameter tuning using eddy flux data). Generally, models using default settings showed large deviations of model outputs from observations, with large model-by-model variability. However, after we calibrated the model parameters using eddy flux data (GPP, RE and NEP), most models successfully simulated seasonal variations in the carbon cycle, with less variability among models. We also found that interannual variations in the carbon cycle are mostly consistent among models and observations. Spatial analysis also showed a large reduction in variability among model outputs. This study demonstrated that careful validation and calibration of models with available eddy flux data reduced model-by-model differences. Yet site history, analysis of changes in model structure, and a more objective model calibration procedure should be included in further analyses.
Conceptual and logical level of database modeling
NASA Astrophysics Data System (ADS)
Hunka, Frantisek; Matula, Jiri
2016-06-01
Conceptual and logical levels form the topmost levels of database modeling. Usually, ORM (Object Role Modeling) and ER diagrams are utilized to capture the corresponding schema. The final aim of business process modeling is to store its results in the form of a database solution. For this reason, value-oriented business process modeling, which utilizes ER diagrams to express the modeled entities and the relationships between them, is used. However, ER diagrams form the logical level of the database schema. To extend the possibilities of different business process modeling methodologies, the conceptual level of database modeling is needed. The paper deals with the REA value modeling approach to business process modeling using ER diagrams, and derives a conceptual model utilizing the ORM modeling approach. The conceptual model extends the possibilities for value modeling to other business modeling approaches.
BiGG Models: A platform for integrating, standardizing and sharing genome-scale models
King, Zachary A.; Lu, Justin; Dräger, Andreas; ...
2015-10-17
Genome-scale metabolic models are mathematically structured knowledge bases that can be used to predict metabolic pathway usage and growth phenotypes. Furthermore, they can generate and test hypotheses when integrated with experimental data. To maximize the value of these models, centralized repositories of high-quality models must be established, models must adhere to established standards and model components must be linked to relevant databases. Tools for model visualization further enhance their utility. To meet these needs, we present BiGG Models (http://bigg.ucsd.edu), a completely redesigned Biochemical, Genetic and Genomic knowledge base. BiGG Models contains more than 75 high-quality, manually-curated genome-scale metabolic models. On the website, users can browse, search and visualize models. BiGG Models connects genome-scale models to genome annotations and external databases. Reaction and metabolite identifiers have been standardized across models to conform to community standards and enable rapid comparison across models. Furthermore, BiGG Models provides a comprehensive application programming interface for accessing BiGG Models with modeling and analysis tools. As a resource for highly curated, standardized and accessible models of metabolism, BiGG Models will facilitate diverse systems biology studies and support knowledge-based analysis of diverse experimental data.
NASA Astrophysics Data System (ADS)
Yue, Songshan; Chen, Min; Wen, Yongning; Lu, Guonian
2016-04-01
The Earth environment is extremely complicated and constantly changing; thus, it is widely accepted that a single geo-analysis model cannot accurately represent all details when solving complex geo-problems. Over several years of research, numerous geo-analysis models have been developed. However, a collaborative barrier between model providers and model users still exists. The development of cloud computing has provided a new and promising approach for sharing and integrating geo-analysis models across an open web environment. To share and integrate these heterogeneous models, encapsulation studies should be conducted, aimed at shielding their original execution differences so as to create services that can be reused in the web environment. Although some model service standards (such as the Web Processing Service (WPS) and Geo Processing Workflow (GPW)) have been designed and developed to help researchers construct model services, various problems regarding model encapsulation remain. (1) The descriptions of geo-analysis models are complicated and typically require rich-text descriptions and case-study illustrations, which are difficult to fully represent within a single web request (such as the GetCapabilities and DescribeProcess operations in the WPS standard). (2) Although Web Service technologies can be used to publish model services, model users who want to use a geo-analysis model and copy the model service onto another computer still encounter problems (e.g., they cannot access the model deployment dependency information). This study presents a strategy for encapsulating geo-analysis models that reduces the problems encountered when sharing models between model providers and model users, and it supports the tasks with different web service standards (e.g., the WPS standard). A description method for heterogeneous geo-analysis models is studied. Based on the model description information, the methods for encapsulating the model-execution program as model services and for describing model-service deployment information are also included in the proposed strategy. Hence, the model-description interface, model-execution interface and model-deployment interface are studied to help model providers and model users more easily share, reuse and integrate geo-analysis models in an open web environment. Finally, a prototype system is established, and the WPS standard is employed as an example to verify the capability and practicability of the model-encapsulation strategy. The results show that it is more convenient for modellers to share and integrate heterogeneous geo-analysis models on cloud computing platforms.
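The two WPS operations mentioned above can be exercised with plain HTTP key-value requests. A minimal sketch, assuming a hypothetical endpoint and process identifier (the parameter names follow the WPS 1.0 key-value-pair convention):

```python
import requests

BASE = "https://example.org/wps"  # hypothetical WPS endpoint

# Ask the service what it offers (WPS 1.0 KVP form)
caps = requests.get(BASE, params={"service": "WPS",
                                  "request": "GetCapabilities"})

# Ask for the signature of one process; the identifier is invented
desc = requests.get(BASE, params={"service": "WPS",
                                  "version": "1.0.0",
                                  "request": "DescribeProcess",
                                  "identifier": "terrain_slope"})
print(caps.status_code, desc.status_code)  # 200 on a live service
```

The abstract's point (1) is visible even in this sketch: each operation returns one XML document, which leaves little room for the rich-text descriptions and case studies a model typically needs.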
Object-oriented biomedical system modelling--the language.
Hakman, M; Groth, T
1999-11-01
The paper describes a new object-oriented biomedical continuous system modelling language (OOBSML). It is fully object-oriented and supports model inheritance, encapsulation, and model component instantiation and behaviour polymorphism. Besides the traditional differential and algebraic equation expressions, the language also includes formal expressions for documenting models and defining model quantity types and quantity units. It supports explicit definition of model input, output and state quantities, model components and component connections. The OOBSML model compiler produces self-contained, independent, executable model components that can be instantiated and used within other OOBSML models and/or stored within model and model component libraries. In this way complex models can be structured as multilevel, multi-component model hierarchies. Technically, the model components produced by the OOBSML compiler are executable computer code objects based on distributed object and object request broker technology. The paper includes both the language tutorial and the formal language syntax and semantics description.
ERIC Educational Resources Information Center
Tay, Louis; Ali, Usama S.; Drasgow, Fritz; Williams, Bruce
2011-01-01
This study investigated the relative model-data fit of an ideal point item response theory (IRT) model (the generalized graded unfolding model [GGUM]) and dominance IRT models (e.g., the two-parameter logistic model [2PLM] and Samejima's graded response model [GRM]) to simulated dichotomous and polytomous data generated from each of these models.…
NASA Astrophysics Data System (ADS)
Roberts, Michael J.; Braun, Noah O.; Sinclair, Thomas R.; Lobell, David B.; Schlenker, Wolfram
2017-09-01
We compare predictions of a simple process-based crop model (Soltani and Sinclair 2012), a simple statistical model (Schlenker and Roberts 2009), and a combination of both models to actual maize yields on a large, representative sample of farmer-managed fields in the Corn Belt region of the United States. After statistical post-model calibration, the process model (Simple Simulation Model, or SSM) predicts actual outcomes slightly better than the statistical model, but the combined model performs significantly better than either model. The SSM, statistical model and combined model all show similar relationships with precipitation, while the SSM better accounts for temporal patterns of precipitation, vapor pressure deficit and solar radiation. The statistical and combined models show a more negative impact associated with extreme heat for which the process model does not account. Due to the extreme heat effect, predicted impacts under uniform climate change scenarios are considerably more severe for the statistical and combined models than for the process-based model.
An empirical model to forecast solar wind velocity through statistical modeling
NASA Astrophysics Data System (ADS)
Gao, Y.; Ridley, A. J.
2013-12-01
The accurate prediction of the solar wind velocity has been a major challenge in the space weather community. Previous studies have proposed many empirical and semi-empirical models to forecast the solar wind velocity based either on historical observations (e.g., the persistence model) or on instantaneous observations of the Sun (e.g., the Wang-Sheeley-Arge model). In this study, we use the one-minute WIND data from January 1995 to August 2012 to investigate and compare the performance of four models often used in the literature, referred to here as the null model, the persistence model, the one-solar-rotation-ago model, and the Wang-Sheeley-Arge model. It is found that, measured by root mean square error, the persistence model gives the most accurate predictions within two days. Beyond two days, the Wang-Sheeley-Arge model serves as the best model, though it only slightly outperforms the null model and the one-solar-rotation-ago model. Finally, we apply least-squares regression to linearly combine the null model, the persistence model, and the one-solar-rotation-ago model into a 'general persistence model'. Comparing its performance against the four aforementioned models shows that the general persistence model outperforms the other four models within five days. Due to its great simplicity and superb performance, we believe that the general persistence model can serve as a benchmark in the forecasting of solar wind velocity and has the potential to be modified into better models.
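A sketch of the 'general persistence model' idea, assuming a gap-free one-minute velocity history; the windowing and fitting details here are illustrative, not the paper's exact setup.

```python
import numpy as np

ROTATION_MIN = 27 * 24 * 60    # ~ one solar rotation, in minutes

def baseline_predictions(v):
    """Three simple predictors from a 1-min history v (1-D array)."""
    return np.array([
        np.mean(v),          # null model: long-run mean speed
        v[-1],               # persistence: latest observation
        v[-ROTATION_MIN],    # one-solar-rotation-ago value
    ])

def fit_general_persistence(histories, observed):
    """Least-squares blend of the three predictors, fit on samples of
    (history window, speed observed at the forecast lead time)."""
    X = np.array([baseline_predictions(h) for h in histories])
    coeffs, *_ = np.linalg.lstsq(X, np.asarray(observed), rcond=None)
    return coeffs   # apply as coeffs @ baseline_predictions(new_history)
```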
A Primer for Model Selection: The Decisive Role of Model Complexity
NASA Astrophysics Data System (ADS)
Höge, Marvin; Wöhling, Thomas; Nowak, Wolfgang
2018-03-01
Selecting a "best" model among several competing candidate models poses an often encountered problem in water resources modeling (and other disciplines which employ models). For a modeler, the best model fulfills a certain purpose best (e.g., flood prediction), which is typically assessed by comparing model simulations to data (e.g., stream flow). Model selection methods find the "best" trade-off between good fit with data and model complexity. In this context, the interpretations of model complexity implied by different model selection methods are crucial, because they represent different underlying goals of modeling. Over the last decades, numerous model selection criteria have been proposed, but modelers who primarily want to apply a model selection criterion often face a lack of guidance for choosing the right criterion that matches their goal. We propose a classification scheme for model selection criteria that helps to find the right criterion for a specific goal, i.e., which employs the correct complexity interpretation. We identify four model selection classes which seek to achieve high predictive density, low predictive error, high model probability, or shortest compression of data. These goals can be achieved by following either nonconsistent or consistent model selection and by either incorporating a Bayesian parameter prior or not. We allocate commonly used criteria to these four classes, analyze how they represent model complexity and what this means for the model selection task. Finally, we provide guidance on choosing the right type of criteria for specific model selection tasks. (A quick guide through all key points is given at the end of the introduction.)
Women's Endorsement of Models of Sexual Response: Correlates and Predictors.
Nowosielski, Krzysztof; Wróbel, Beata; Kowalczyk, Robert
2016-02-01
Few studies have investigated endorsement of female sexual response models, and no single model has been accepted as a normative description of women's sexual response. The aim of the study was to establish how women from a population-based sample endorse current theoretical models of the female sexual response--the linear models and circular model (partial and composite Basson models)--as well as predictors of endorsement. Accordingly, 174 heterosexual women aged 18-55 years were included in a cross-sectional study: 74 women diagnosed with female sexual dysfunction (FSD) based on DSM-5 criteria and 100 non-dysfunctional women. The description of sexual response models was used to divide subjects into four subgroups: linear (Masters-Johnson and Kaplan models), circular (partial Basson model), mixed (linear and circular models in similar proportions, reflective of the composite Basson model), and a different model. Women were asked to choose which of the models best described their pattern of sexual response and how frequently they engaged in each model. Results showed that 28.7% of women endorsed the linear models, 19.5% the partial Basson model, 40.8% the composite Basson model, and 10.9% a different model. Women with FSD endorsed the partial Basson model and a different model more frequently than did non-dysfunctional controls. Individuals who were dissatisfied with a partner as a lover were more likely to endorse a different model. Based on the results, we concluded that the majority of women endorsed a mixed model combining the circular response with the possibility of an innate desire triggering a linear response. Further, relationship difficulties, not FSD, predicted model endorsement.
The Use of Modeling-Based Text to Improve Students' Modeling Competencies
ERIC Educational Resources Information Center
Jong, Jing-Ping; Chiu, Mei-Hung; Chung, Shiao-Lan
2015-01-01
This study investigated the effects of a modeling-based text on 10th graders' modeling competencies. Fifteen 10th graders read a researcher-developed modeling-based science text on the ideal gas law that included explicit descriptions and representations of modeling processes (i.e., model selection, model construction, model validation, model…
Performance and Architecture Lab Modeling Tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
2014-06-19
Analytical application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult. Furthermore, models are frequently expressed in forms that are hard to distribute and validate. The Performance and Architecture Lab Modeling tool, or Palm, is designed to make application modeling easier. Palm provides a source code modeling annotation language. Not only does the modeling language divide the modeling task into sub-problems, it formally links an application's source code with its model. This link is important because a model's purpose is to capture application behavior. Furthermore, this link makes it possible to define rules for generating models according to source code organization. Palm generates hierarchical models according to well-defined rules. Given an application, a set of annotations, and a representative execution environment, Palm will generate the same model. A generated model is an executable program whose constituent parts directly correspond to the modeled application. Palm generates models by combining top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. A model's hierarchy is defined by static and dynamic source code structure. Because Palm coordinates models and source code, Palm's models are 'first-class' and reproducible. Palm automates common modeling tasks. For instance, Palm incorporates measurements to focus attention, represent constant behavior, and validate models. Palm's workflow is as follows. The workflow's input is source code annotated with Palm modeling annotations. The most important annotation models an instance of a block of code. Given annotated source code, the Palm Compiler produces executables and the Palm Monitor collects a representative performance profile. The Palm Generator synthesizes a model based on the static and dynamic mapping of annotations to program behavior. The model -- an executable program -- is a hierarchical composition of annotation functions, synthesized functions, statistics for runtime values, and performance measurements.
Takagi-Sugeno-Kang fuzzy models of the rainfall-runoff transformation
NASA Astrophysics Data System (ADS)
Jacquin, A. P.; Shamseldin, A. Y.
2009-04-01
Fuzzy inference systems, or fuzzy models, are non-linear models that describe the relation between the inputs and the output of a real system using a set of fuzzy IF-THEN rules. This study deals with the application of Takagi-Sugeno-Kang-type fuzzy models to the development of rainfall-runoff models operating on a daily basis, using a system-based approach. The models proposed are classified into two types, each intended to account for different kinds of dominant non-linear effects in the rainfall-runoff relationship. Type 1 fuzzy models are intended to incorporate the effect of changes in the prevailing soil moisture content, while type 2 fuzzy models address the phenomenon of seasonality. Each model type consists of five fuzzy models of increasing complexity; the most complex fuzzy model of each type includes all the model components found in the remaining fuzzy models of that type. The models developed are applied to data from six catchments of different geographical locations and sizes. Model performance is evaluated in terms of two measures of goodness of fit, namely the Nash-Sutcliffe criterion and the index of volumetric fit. The results of the fuzzy models are compared with those of the Simple Linear Model, the Linear Perturbation Model and the Nearest Neighbour Linear Perturbation Model, which use similar input information. Overall, the results of this study indicate that Takagi-Sugeno-Kang fuzzy models are a suitable alternative for modelling the rainfall-runoff relationship. However, it is also observed that increasing the complexity of the model structure does not necessarily improve the performance of the fuzzy models. The relative importance of the different model components in determining model performance is evaluated through a sensitivity analysis of the model parameters in the accompanying study presented at this meeting. Acknowledgements: We would like to express our gratitude to Prof. Kieran M. O'Connor from the National University of Ireland, Galway, for providing the data used in this study.
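To make the rule structure concrete, here is a minimal first-order Takagi-Sugeno-Kang inference step for a single input, with Gaussian antecedents and linear consequents; the two toy rules are invented, not the paper's calibrated rainfall-runoff rules.

```python
import numpy as np

def tsk_predict(x, rules):
    """First-order TSK inference for scalar input x. Each rule is
    (center, spread, a, b): IF x is Gaussian(center, spread)
    THEN y = a * x + b. Output: firing-strength-weighted average."""
    w = np.array([np.exp(-0.5 * ((x - c) / s) ** 2)
                  for c, s, _, _ in rules])        # rule firing strengths
    y = np.array([a * x + b for _, _, a, b in rules])
    return float(np.sum(w * y) / np.sum(w))

# Two invented rules: a dry-conditions response and a wet-conditions one
rules = [(0.0, 5.0, 0.1, 0.0),     # low rainfall: weak runoff response
         (20.0, 8.0, 0.6, -2.0)]   # high rainfall: stronger response
print(tsk_predict(12.0, rules))    # runoff estimate for 12 mm of rain
```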
A simple computational algorithm of model-based choice preference.
Toyama, Asako; Katahira, Kentaro; Ohira, Hideki
2017-08-01
A broadly used computational framework posits that two learning systems operate in parallel during the learning of choice preferences--namely, the model-free and model-based reinforcement-learning systems. In this study, we examined another possibility, in which model-free learning is the basic system and model-based information is its modulator. Accordingly, we proposed several modified versions of a temporal-difference learning model to explain the choice-learning process. Using the two-stage decision task developed by Daw, Gershman, Seymour, Dayan, and Dolan (2011), we compared their original computational model, which assumes a parallel learning process, and our proposed models, which assume a sequential learning process. Choice data from 23 participants showed a better fit with the proposed models. More specifically, the proposed eligibility adjustment model, which assumes that the environmental model can weight the degree of the eligibility trace, can explain choices better under both model-free and model-based controls and has a simpler computational algorithm than the original model. In addition, the forgetting learning model and its variation, which assume changes in the values of unchosen actions, substantially improved the fits to the data. Overall, we show that a hybrid computational model best fits the data. The parameters used in this model succeed in capturing individual tendencies with respect to both model use in learning and exploration behavior. This computational model provides novel insights into learning with interacting model-free and model-based components.
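A schematic TD update for a two-stage task, in the SARSA(lambda) style that models of this task commonly build on; this is not the authors' exact algorithm, and in the eligibility adjustment idea the parameter lam would itself be set by the learned environment model rather than fixed.

```python
import numpy as np

def td_two_stage(Q1, Q2, a1, s2, a2, r, alpha=0.3, lam=0.6):
    """One trial's updates: first-stage action a1 led to second-stage
    state s2, action a2, and reward r. Q1: shape (2,), Q2: shape (2, 2)."""
    delta1 = Q2[s2, a2] - Q1[a1]     # stage-1 error, bootstrapped
    Q1[a1] += alpha * delta1
    delta2 = r - Q2[s2, a2]          # reward prediction error at outcome
    Q2[s2, a2] += alpha * delta2
    Q1[a1] += alpha * lam * delta2   # eligibility-weighted credit to stage 1
    return Q1, Q2
```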
Airborne Wireless Communication Modeling and Analysis with MATLAB
2014-03-27
This research develops a physical layer model that combines antenna modeling using computational electromagnetics and the two-ray propagation model to... predict the received signal strength. The antenna is modeled with triangular patches and analyzed by extending the antenna modeling algorithm by Sergey...
Marginal and Random Intercepts Models for Longitudinal Binary Data with Examples from Criminology
ERIC Educational Resources Information Center
Long, Jeffrey D.; Loeber, Rolf; Farrington, David P.
2009-01-01
Two models for the analysis of longitudinal binary data are discussed: the marginal model and the random intercepts model. In contrast to the linear mixed model (LMM), the two models for binary data are not subsumed under a single hierarchical model. The marginal model provides group-level information whereas the random intercepts model provides…
EpiModel: An R Package for Mathematical Modeling of Infectious Disease over Networks.
Jenness, Samuel M; Goodreau, Steven M; Morris, Martina
2018-04-01
Package EpiModel provides tools for building, simulating, and analyzing mathematical models for the population dynamics of infectious disease transmission in R. Several classes of models are included, but the unique contribution of this software package is a general stochastic framework for modeling the spread of epidemics on networks. EpiModel integrates recent advances in statistical methods for network analysis (temporal exponential random graph models) that allow the epidemic modeling to be grounded in empirical data on contacts that can spread infection. This article provides an overview of both the modeling tools built into EpiModel, designed to facilitate learning for students new to modeling, and the application programming interface for extending package EpiModel, designed to facilitate the exploration of novel research questions for advanced modelers.
Model compilation: An approach to automated model derivation
NASA Technical Reports Server (NTRS)
Keller, Richard M.; Baudin, Catherine; Iwasaki, Yumi; Nayak, Pandurang; Tanaka, Kazuo
1990-01-01
An approach to automated model derivation for knowledge-based systems is introduced. The approach, model compilation, involves procedurally generating the set of domain models used by a knowledge-based system. An implemented example illustrates how this approach can be used to derive models of different precision and abstraction, tailored to different tasks, from a given set of base domain models. In particular, two implemented model compilers are described, each of which takes as input a base model that describes the structure and behavior of a simple electromechanical device, the Reaction Wheel Assembly of NASA's Hubble Space Telescope. The compilers transform this relatively general base model into simple task-specific models for troubleshooting and redesign, respectively, by applying a sequence of model transformations. Each transformation in this sequence produces an increasingly more specialized model. The compilation approach lessens the burden of updating and maintaining consistency among models by enabling their automatic regeneration.
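The transformation sequence can be pictured as simple function composition over a base model; the dictionary representation and the two transformations below are hypothetical stand-ins for the compilers' real operations.

```python
def compile_model(base_model, transforms):
    """Derive a task-specific model by applying transformations in
    sequence; each step yields a more specialized model."""
    model = dict(base_model)
    for t in transforms:
        model = t(model)
    return model

def keep_observable(model):
    # Troubleshooting-oriented step: keep externally observable parts
    return {k: v for k, v in model.items() if v["observable"]}

def qualitative_behavior(model):
    # Abstraction step: replace numeric behavior with qualitative labels
    return {k: {**v, "behavior": "qualitative"} for k, v in model.items()}

base = {"wheel_speed": {"observable": True, "behavior": "numeric"},
        "bearing_friction": {"observable": False, "behavior": "numeric"}}
troubleshooting = compile_model(base, [keep_observable, qualitative_behavior])
```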
A composite computational model of liver glucose homeostasis. I. Building the composite model.
Hetherington, J; Sumner, T; Seymour, R M; Li, L; Rey, M Varela; Yamaji, S; Saffrey, P; Margoninski, O; Bogle, I D L; Finkelstein, A; Warner, A
2012-04-07
A computational model of the glucagon/insulin-driven liver glucohomeostasis function, focusing on the buffering of glucose into glycogen, has been developed. The model exemplifies an 'engineering' approach to modelling in systems biology, and was produced by linking together seven component models of separate aspects of the physiology. The component models use a variety of modelling paradigms and degrees of simplification. Model parameters were determined by an iterative hybrid of fitting to high-scale physiological data, and determination from small-scale in vitro experiments or molecular biological techniques. The component models were not originally designed for inclusion within such a composite model, but were integrated, with modification, using our published modelling software and computational frameworks. This approach facilitates the development of large and complex composite models, although, inevitably, some compromises must be made when composing the individual models. Composite models of this form have not previously been demonstrated.
NASA Technical Reports Server (NTRS)
Kral, Linda D.; Ladd, John A.; Mani, Mori
1995-01-01
The objective of this viewgraph presentation is to evaluate turbulence models for integrated aircraft components such as the forebody, wing, inlet, diffuser, nozzle, and afterbody. The one-equation models have replaced the algebraic models as the baseline turbulence models. The Spalart-Allmaras one-equation model consistently performs better than the Baldwin-Barth model, particularly in the log-layer and free shear layers. Also, unlike the Baldwin-Barth model, the Spalart-Allmaras model is not grid dependent. No general turbulence model exists for all engineering applications. The Spalart-Allmaras one-equation model and the Chien k-epsilon model are the preferred turbulence models. Although the two-equation models often better predict the flow field, they may take from two to five times the CPU time. Future directions are in further benchmarking the Menter blended k-omega/k-epsilon model and in algorithmic improvements to reduce the CPU time of the two-equation models.
The determination of third order linear models from a seventh order nonlinear jet engine model
NASA Technical Reports Server (NTRS)
Lalonde, Rick J.; Hartley, Tom T.; De Abreu-Garcia, J. Alex
1989-01-01
Results are presented that demonstrate how good reduced-order models can be obtained directly by recursive parameter identification using input/output (I/O) data of high-order nonlinear systems. Three different methods of obtaining a third-order linear model from a seventh-order nonlinear turbojet engine model are compared. The first method is to obtain a linear model from the original model and then reduce the linear model by standard reduction techniques such as residualization and balancing. The second method is to identify directly a third-order linear model by recursive least-squares parameter estimation using I/O data of the original model. The third method is to obtain a reduced-order model from the original model and then linearize the reduced model. Frequency responses are used as the performance measure to evaluate the reduced models. The reduced-order models along with their Bode plots are presented for comparison purposes.
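The second method, direct identification from input/output data, is compact enough to sketch. Below is a minimal recursive least-squares example in Python, assuming a hypothetical mildly nonlinear plant in place of the seventh-order engine model; the dynamics, gains and signals are illustrative, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2000
u = rng.uniform(-1.0, 1.0, N)   # persistently exciting input signal
y = np.zeros(N)

# Hypothetical mildly nonlinear plant standing in for the high-order model
# (illustrative dynamics, not a jet engine).
for k in range(2, N):
    y[k] = (0.6 * y[k-1] - 0.1 * y[k-2] + 0.4 * u[k-1]
            + 0.05 * np.tanh(y[k-1] * u[k-1]))

# Recursive least squares for a third-order linear ARX model:
#   y[k] ~ a1*y[k-1] + a2*y[k-2] + a3*y[k-3] + b1*u[k-1] + b2*u[k-2] + b3*u[k-3]
theta = np.zeros(6)             # parameter estimates [a1 a2 a3 b1 b2 b3]
P = 1e3 * np.eye(6)             # covariance; large values = weak prior
for k in range(3, N):
    phi = np.array([y[k-1], y[k-2], y[k-3], u[k-1], u[k-2], u[k-3]])
    K = P @ phi / (1.0 + phi @ P @ phi)        # RLS gain vector
    theta = theta + K * (y[k] - phi @ theta)   # correct estimate by innovation
    P = P - np.outer(K, phi @ P)               # shrink covariance

print("identified [a1 a2 a3 b1 b2 b3]:", np.round(theta, 3))
```

As in the paper, the identified model would then be judged against the original by comparing frequency responses.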
BioModels: expanding horizons to include more modelling approaches and formats
Nguyen, Tung V N; Graesslin, Martin; Hälke, Robert; Ali, Raza; Schramm, Jochen; Wimalaratne, Sarala M; Kothamachu, Varun B; Rodriguez, Nicolas; Swat, Maciej J; Eils, Jurgen; Eils, Roland; Laibe, Camille; Chelliah, Vijayalakshmi
2018-01-01
BioModels serves as a central repository of mathematical models representing biological processes. It offers a platform to make mathematical models easily shareable across the systems modelling community, thereby supporting model reuse. To facilitate hosting a broader range of model formats derived from diverse modelling approaches and tools, a new infrastructure for BioModels has been developed that is available at http://www.ebi.ac.uk/biomodels. This new system allows submitting and sharing of a wide range of models with improved support for formats other than SBML. It also offers a version-control backed environment in which authors and curators can work collaboratively to curate models. This article summarises the features available in the current system and discusses the potential benefit they offer to the users over the previous system. In summary, the new portal broadens the scope of models accepted in BioModels and supports collaborative model curation which is crucial for model reproducibility and sharing.
NASA Astrophysics Data System (ADS)
Justi, Rosária S.; Gilbert, John K.
2002-04-01
In this paper, the role of modelling in the teaching and learning of science is reviewed. In order to represent what is entailed in modelling, a 'model of modelling' framework is proposed. Five phases in moving towards a full capability in modelling are established by a review of the literature: learning models; learning to use models; learning how to revise models; learning to reconstruct models; learning to construct models de novo. In order to identify the knowledge and skills that science teachers think are needed to produce a model successfully, a semi-structured interview study was conducted with 39 Brazilian serving science teachers: 10 teaching at the 'fundamental' level (6-14 years); 10 teaching at the 'medium'-level (15-17 years); 10 undergraduate pre-service 'medium'-level teachers; 9 university teachers of chemistry. Their responses are used to establish what is entailed in implementing the 'model of modelling' framework. The implications for students, teachers, and for teacher education, of moving through the five phases of capability, are discussed.
Aspinall, Richard
2004-08-01
This paper develops an approach to modelling land use change that links model selection and multi-model inference with empirical models and GIS. Land use change is frequently studied, and understanding gained, through a process of modelling that is an empirical analysis of documented changes in land cover or land use patterns. The approach here is based on analysis and comparison of multiple models of land use patterns using model selection and multi-model inference. The approach is illustrated with a case study of rural housing as it has developed for part of Gallatin County, Montana, USA. A GIS contains the location of rural housing on a yearly basis from 1860 to 2000. The database also documents a variety of environmental and socio-economic conditions. A general model of settlement development describes the evolution of drivers of land use change and their impacts in the region. This model is used to develop a series of different models reflecting drivers of change at different periods in the history of the study area. These period-specific models represent a series of multiple working hypotheses describing (a) the effects of spatial variables as a representation of social, economic and environmental drivers of land use change, and (b) temporal changes in the effects of the spatial variables as the drivers of change evolve over time. Logistic regression is used to calibrate and interpret these models, and the models are then compared and evaluated with model selection techniques. Results show that different models are 'best' for the different periods. The different models for different periods demonstrate that models are not invariant over time, which presents challenges for validation and testing of empirical models. The research demonstrates (i) model selection as a mechanism for selecting among many plausible models that describe land cover or land use patterns, (ii) inference from a set of models rather than from a single model, (iii) that models can be developed from hypothesised relationships based on consideration of underlying and proximate causes of change, and (iv) that models are not invariant over time.
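The combination of logistic regression calibration with model selection and multi-model inference can be sketched briefly. The example below is a hedged illustration rather than the study's code: it fits three candidate logistic models on synthetic data with hypothetical covariates (distance to road, elevation, distance to town, none of which are the study's variables) and compares them with AIC and Akaike weights, assuming the statsmodels package.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-in for the GIS data: presence/absence of rural housing
# in grid cells, with hypothetical spatial covariates.
rng = np.random.default_rng(1)
n = 1500
dist_road = rng.exponential(2.0, n)     # distance to nearest road (km)
elevation = rng.normal(1500, 200, n)    # elevation (m)
dist_town = rng.exponential(10.0, n)    # distance to town (km)
logit_p = 0.8 - 0.6 * dist_road - 0.002 * (elevation - 1500)
y = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Candidate models: multiple working hypotheses about the drivers.
candidates = {
    "access only":        np.column_stack([dist_road]),
    "access + terrain":   np.column_stack([dist_road, elevation]),
    "access + amenities": np.column_stack([dist_road, dist_town]),
}

aic = {}
for name, X in candidates.items():
    res = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
    aic[name] = res.aic

# Akaike weights: relative support for each working hypothesis.
vals = np.array(list(aic.values()))
delta = vals - vals.min()
w = np.exp(-0.5 * delta) / np.exp(-0.5 * delta).sum()
for (name, a), wi in zip(aic.items(), w):
    print(f"{name:20s} AIC={a:8.1f} weight={wi:.3f}")
```

Refitting the same candidate set on data from different periods, as the paper does, would show the "best" model shifting over time.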
NASA Astrophysics Data System (ADS)
Aktan, Mustafa B.
The purpose of this study was to investigate prospective science teachers' knowledge and understanding of models and modeling, and their attitudes towards the use of models in science teaching through the following research questions: What knowledge do prospective science teachers have about models and modeling in science? What understandings about the nature of models do these teachers hold as a result of their educational training? What perceptions and attitudes do these teachers hold about the use of models in their teaching? Two main instruments, semi-structured in-depth interviewing and an open-item questionnaire, were used to obtain data from the participants. The data were analyzed from an interpretative phenomenological perspective and grounded theory methods. Earlier studies on in-service science teachers' understanding about the nature of models and modeling revealed that variations exist among teachers' limited yet diverse understanding of scientific models. The results of this study indicated that variations also existed among prospective science teachers' understanding of the concept of model and the nature of models. Apparently the participants' knowledge of models and modeling was limited and they viewed models as materialistic examples and representations. I found that the teachers believed the purpose of a model is to make phenomena more accessible and more understandable. They defined models by referring to an example, a representation, or a simplified version of the real thing. I found no evidence of negative attitudes towards use of models among the participants. Although the teachers valued the idea that scientific models are important aspects of science teaching and learning, and showed positive attitudes towards the use of models in their teaching, certain factors like level of learner, time, lack of modeling experience, and limited knowledge of models appeared to be affecting their perceptions negatively. Implications for the development of science teaching and teacher education programs are discussed. Directions for future research are suggested. Overall, based on the results, I suggest that prospective science teachers should engage in more modeling activities through their preparation programs, gain more modeling experience, and collaborate with their colleagues to better understand and implement scientific models in science teaching.
Validation of Groundwater Models: Meaningful or Meaningless?
NASA Astrophysics Data System (ADS)
Konikow, L. F.
2003-12-01
Although numerical simulation models are valuable tools for analyzing groundwater systems, their predictive accuracy is limited. People who apply groundwater flow or solute-transport models, as well as those who make decisions based on model results, naturally want assurance that a model is "valid." To many people, model validation implies some authentication of the truth or accuracy of the model. History matching is often presented as the basis for model validation. Although such model calibration is a necessary modeling step, it is simply insufficient for model validation. Because of parameter uncertainty and solution non-uniqueness, declarations of validation (or verification) of a model are not meaningful. Post-audits represent a useful means to assess the predictive accuracy of a site-specific model, but they require the existence of long-term monitoring data. Model testing may yield invalidation, but that is an opportunity to learn and to improve the conceptual and numerical models. Examples of post-audits and of the application of a solute-transport model to a radioactive waste disposal site illustrate deficiencies in model calibration, prediction, and validation.
Royle, J. Andrew; Dorazio, Robert M.
2008-01-01
A guide to data collection, modeling and inference strategies for biological survey data using Bayesian and classical statistical methods. This book describes a general and flexible framework for modeling and inference in ecological systems based on hierarchical models, with a strict focus on the use of probability models and parametric inference. Hierarchical models represent a paradigm shift in the application of statistics to ecological inference problems because they combine explicit models of ecological system structure or dynamics with models of how ecological systems are observed. The principles of hierarchical modeling are developed and applied to problems in population, metapopulation, community, and metacommunity systems. The book provides the first synthetic treatment of many recent methodological advances in ecological modeling and unifies disparate methods and procedures. The authors apply principles of hierarchical modeling to ecological problems, including * occurrence or occupancy models for estimating species distribution * abundance models based on many sampling protocols, including distance sampling * capture-recapture models with individual effects * spatial capture-recapture models based on camera trapping and related methods * population and metapopulation dynamic models * models of biodiversity, community structure and dynamics.
Using the Model Coupling Toolkit to couple earth system models
Warner, J.C.; Perlin, N.; Skyllingstad, E.D.
2008-01-01
Continued advances in computational resources are providing the opportunity to operate more sophisticated numerical models. Additionally, there is an increasing demand for multidisciplinary studies that include interactions between different physical processes. Therefore there is a strong desire to develop coupled modeling systems that utilize existing models and allow efficient data exchange and model control. The basic system would entail model "1" running on "M" processors and model "2" running on "N" processors, with efficient exchange of model fields at predetermined synchronization intervals. Here we demonstrate two coupled systems: the coupling of the ocean circulation model Regional Ocean Modeling System (ROMS) to the surface wave model Simulating WAves Nearshore (SWAN), and the coupling of ROMS to the atmospheric model Coupled Ocean Atmosphere Prediction System (COAMPS). Both coupled systems use the Model Coupling Toolkit (MCT) as a mechanism for operation control and inter-model distributed memory transfer of model variables. In this paper we describe requirements and other options for model coupling, explain the MCT library, ROMS, SWAN and COAMPS models, methods for grid decomposition and sparse matrix interpolation, and provide an example from each coupled system. Methods presented in this paper are clearly applicable for coupling of other types of models.
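The coupling pattern described here, two models advancing on their own time steps with field exchange at predetermined synchronization intervals, can be shown in miniature. The sketch below is a serial toy: real MCT-based systems exchange distributed arrays across M and N processors via MPI, and all field names and formulas here are illustrative assumptions.

```python
# Two toy "models" with different internal time steps; a coupler transfers
# fields between them only at the synchronization times.

class ToyOcean:
    def __init__(self):
        self.dt = 60.0            # internal time step (s)
        self.sst = 290.0          # sea-surface "temperature" (one cell here)
        self.wind_stress = 0.0    # field imported from the atmosphere

    def step(self):
        # relax toward a state nudged by the imported wind stress
        self.sst += self.dt * (1e-4 * self.wind_stress
                               - 1e-6 * (self.sst - 290.0))

class ToyAtmosphere:
    def __init__(self):
        self.dt = 20.0            # a different, shorter time step (s)
        self.wind = 5.0
        self.sst = 290.0          # field imported from the ocean

    def step(self):
        self.wind += self.dt * 1e-4 * (self.sst - 290.0)  # SST-driven change

ocean, atmos = ToyOcean(), ToyAtmosphere()
t, t_end, sync = 0.0, 3600.0, 600.0   # exchange fields every 600 s
while t < t_end:
    for _ in range(int(sync / ocean.dt)):   # ocean runs its own steps
        ocean.step()
    for _ in range(int(sync / atmos.dt)):   # atmosphere runs its own steps
        atmos.step()
    # synchronization: the "coupler" transfers fields between the models
    atmos.sst = ocean.sst
    ocean.wind_stress = 0.0012 * atmos.wind ** 2   # toy bulk formula
    t += sync

print(f"t={t:.0f}s  sst={ocean.sst:.3f}  wind={atmos.wind:.3f}")
```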
Generalized Multilevel Structural Equation Modeling
ERIC Educational Resources Information Center
Rabe-Hesketh, Sophia; Skrondal, Anders; Pickles, Andrew
2004-01-01
A unifying framework for generalized multilevel structural equation modeling is introduced. The models in the framework, called generalized linear latent and mixed models (GLLAMM), combine features of generalized linear mixed models (GLMM) and structural equation models (SEM) and consist of a response model and a structural model for the latent…
Frequentist Model Averaging in Structural Equation Modelling.
Jin, Shaobo; Ankargren, Sebastian
2018-06-04
Model selection from a set of candidate models plays an important role in many structural equation modelling applications. However, traditional model selection methods introduce extra randomness that is not accounted for by post-model selection inference. In the current study, we propose a model averaging technique within the frequentist statistical framework. Instead of selecting an optimal model, the contributions of all candidate models are acknowledged. Valid confidence intervals and a [Formula: see text] test statistic are proposed. A simulation study shows that the proposed method is able to produce a robust mean-squared error, a better coverage probability, and a better goodness-of-fit test compared to model selection. It is an interesting compromise between model selection and the full model.
Premium analysis for copula model: A case study for Malaysian motor insurance claims
NASA Astrophysics Data System (ADS)
Resti, Yulia; Ismail, Noriszura; Jaaman, Saiful Hafizah
2014-06-01
This study performs premium analysis for copula models with regression marginals. For illustration purpose, the copula models are fitted to the Malaysian motor insurance claims data. In this study, we consider copula models from Archimedean and Elliptical families, and marginal distributions of Gamma and Inverse Gaussian regression models. The simulated results from independent model, which is obtained from fitting regression models separately to each claim category, and dependent model, which is obtained from fitting copula models to all claim categories, are compared. The results show that the dependent model using Frank copula is the best model since the risk premiums estimated under this model are closely approximate to the actual claims experience relative to the other copula models.
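A minimal sketch of the dependent-model idea follows. For simplicity it swaps the paper's Frank copula and regression marginals for a Gaussian copula with plain Gamma marginals (an explicit substitution); the correlation and marginal parameters are assumed, not estimated from the Malaysian data.

```python
import numpy as np
from scipy.stats import norm, gamma

# Simulate two dependent claim categories via a Gaussian copula with
# Gamma marginals, and compare against the independent benchmark.
rng = np.random.default_rng(42)
n = 200_000
rho = 0.4                                  # copula correlation (assumed)
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))

z = rng.standard_normal((n, 2)) @ L.T      # correlated standard normals
uu = norm.cdf(z)                           # uniforms carrying the dependence

# Gamma marginals for the two claim categories (shape/scale assumed).
x1 = gamma.ppf(uu[:, 0], a=2.0, scale=1500.0)
x2 = gamma.ppf(uu[:, 1], a=1.5, scale=800.0)
total_dep = x1 + x2

# Independent benchmark: same marginals, independent uniforms.
v = rng.uniform(size=(n, 2))
total_ind = gamma.ppf(v[:, 0], a=2.0, scale=1500.0) \
          + gamma.ppf(v[:, 1], a=1.5, scale=800.0)

print("pure premium (dependent):  ", round(total_dep.mean(), 1))
print("pure premium (independent):", round(total_ind.mean(), 1))
print("99% quantile, dep vs ind:  ",
      round(np.quantile(total_dep, 0.99)), round(np.quantile(total_ind, 0.99)))
```

Note that dependence leaves the mean of the sum unchanged; the dependent and independent models separate in the tail quantiles, which is where the choice of copula matters for risk loading.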
2006-03-01
Drawing on the strengths and weaknesses of sociological and biological models, this thesis applies a biological model, the Lotka-Volterra predator-prey model, to a highly suggestive case study: recruitment in the Irish Republican Army. Subject terms include the Lotka-Volterra predator-prey model, the Irish Republican Army, Sinn Féin, recruitment, and the British Army.
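For reference, the underlying equations are easy to integrate numerically. The sketch below solves the classical Lotka-Volterra predator-prey system with scipy; the parameter values, and any reading of prey/predator as a recruitable population and a recruiting organization, are illustrative assumptions rather than the thesis's calibration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Classical Lotka-Volterra predator-prey equations:
#   dx/dt = a*x - b*x*y     (prey)
#   dy/dt = c*b*x*y - d*y   (predator)
a, b, c, d = 1.0, 0.1, 0.5, 0.8   # illustrative parameters

def lotka_volterra(t, state):
    x, y = state
    return [a * x - b * x * y, c * b * x * y - d * y]

sol = solve_ivp(lotka_volterra, t_span=(0.0, 50.0), y0=[20.0, 5.0],
                dense_output=True, max_step=0.05)

for ti in np.linspace(0, 50, 6):
    x, y = sol.sol(ti)
    print(f"t={ti:5.1f}  prey={x:7.2f}  predator={y:6.2f}")
```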
Right-Sizing Statistical Models for Longitudinal Data
Wood, Phillip K.; Steinley, Douglas; Jackson, Kristina M.
2015-01-01
Arguments are proposed that researchers using longitudinal data should consider more and less complex statistical model alternatives to their initially chosen techniques in an effort to “right-size” the model to the data at hand. Such model comparisons may alert researchers who use poorly fitting overly parsimonious models to more complex better fitting alternatives, and, alternatively, may identify more parsimonious alternatives to overly complex (and perhaps empirically under-identified and/or less powerful) statistical models. A general framework is proposed for considering (often nested) relationships between a variety of psychometric and growth curve models. A three-step approach is proposed in which models are evaluated based on the number and patterning of variance components prior to selection of better-fitting growth models that explain both mean and variation/covariation patterns. The orthogonal, free-curve slope-intercept (FCSI) growth model is considered as a general model which includes, as special cases, many models including the Factor Mean model (FM, McArdle & Epstein, 1987), McDonald's (1967) linearly constrained factor model, Hierarchical Linear Models (HLM), Repeated Measures MANOVA, and the Linear Slope Intercept (LinearSI) Growth Model. The FCSI model, in turn, is nested within the Tuckerized factor model. The approach is illustrated by comparing alternative models in a longitudinal study of children's vocabulary and by comparison of several candidate parametric growth and chronometric models in a Monte Carlo study.
Model averaging techniques for quantifying conceptual model uncertainty.
Singh, Abhishek; Mishra, Srikanta; Ruskauff, Greg
2010-01-01
In recent years a growing understanding has emerged regarding the need to expand the modeling paradigm to include conceptual model uncertainty for groundwater models. Conceptual model uncertainty is typically addressed by formulating alternative model conceptualizations and assessing their relative likelihoods using statistical model averaging approaches. Several model averaging techniques and likelihood measures have been proposed in the recent literature for this purpose with two broad categories--Monte Carlo-based techniques such as Generalized Likelihood Uncertainty Estimation or GLUE (Beven and Binley 1992) and criterion-based techniques that use metrics such as the Bayesian and Kashyap Information Criteria (e.g., the Maximum Likelihood Bayesian Model Averaging or MLBMA approach proposed by Neuman 2003) and Akaike Information Criterion-based model averaging (AICMA) (Poeter and Anderson 2005). These different techniques can often lead to significantly different relative model weights and ranks because of differences in the underlying statistical assumptions about the nature of model uncertainty. This paper provides a comparative assessment of the four model averaging techniques (GLUE, MLBMA with KIC, MLBMA with BIC, and AIC-based model averaging) mentioned above for the purpose of quantifying the impacts of model uncertainty on groundwater model predictions. Pros and cons of each model averaging technique are examined from a practitioner's perspective using two groundwater modeling case studies. Recommendations are provided regarding the use of these techniques in groundwater modeling practice.
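Criterion-based averaging of the kind compared here reduces to a small computation once an information criterion value is available for each conceptual model. The sketch below, with entirely hypothetical numbers, forms the model weights and an averaged prediction whose variance combines within-model and between-model terms.

```python
import numpy as np

# Per conceptual model: an information criterion value (AIC/BIC/KIC),
# a prediction (e.g., head at a well, in m), and a within-model variance.
# All numbers are hypothetical.
ic   = np.array([210.3, 212.1, 208.9, 215.4])
pred = np.array([12.4, 11.8, 13.1, 10.9])
var  = np.array([0.30, 0.25, 0.40, 0.35])

delta = ic - ic.min()
w = np.exp(-0.5 * delta)
w /= w.sum()                          # model weights sum to one

mean = w @ pred                       # model-averaged prediction
# total variance = weighted within-model variance + between-model spread
total_var = w @ var + w @ (pred - mean) ** 2

print("weights:", np.round(w, 3))
print(f"averaged prediction: {mean:.2f} m, std: {np.sqrt(total_var):.2f} m")
```

The between-model term is what makes conceptual model uncertainty visible: as the abstract notes, it often dominates the within-model (parametric) contribution.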
Examination of various turbulence models for application in liquid rocket thrust chambers
NASA Technical Reports Server (NTRS)
Hung, R. J.
1991-01-01
There is a large variety of turbulence models available. These models include direct numerical simulation, large eddy simulation, the Reynolds stress/flux model, the zero-equation model, the one-equation model, the two-equation k-epsilon model, the multiple-scale model, etc. Each turbulence model contains different physical assumptions and requirements. The nature of turbulence involves randomness, irregularity, diffusivity and dissipation. The capabilities of the turbulence models, including physical strengths, weaknesses and limitations, as well as numerical and computational considerations, are reviewed. Recommendations are made for the potential application of a turbulence model in thrust chamber and performance prediction programs. The full Reynolds stress model is recommended. In a workshop specifically called to assess turbulence models for applications in liquid rocket thrust chambers, most of the experts present also favored the recommendation of the Reynolds stress model.
Comparative study of turbulence models in predicting hypersonic inlet flows
NASA Technical Reports Server (NTRS)
Kapoor, Kamlesh; Anderson, Bernhard H.; Shaw, Robert J.
1992-01-01
A numerical study was conducted to analyze the performance of different turbulence models when applied to the hypersonic NASA P8 inlet. Computational results from the PARC2D code, which solves the full two-dimensional Reynolds-averaged Navier-Stokes equation, were compared with experimental data. The zero-equation models considered for the study were the Baldwin-Lomax model, the Thomas model, and a combination of the Baldwin-Lomax and Thomas models; the two-equation models considered were the Chien model, the Speziale model (both low Reynolds number), and the Launder and Spalding model (high Reynolds number). The Thomas model performed best among the zero-equation models, and predicted good pressure distributions. The Chien and Speziale models compared very well with the experimental data, and performed better than the Thomas model near the walls.
Lv, Yan; Yan, Bin; Wang, Lin; Lou, Dong-hua
2012-04-01
To analyze the reliability of dento-maxillary models created by cone-beam CT and rapid prototyping (RP), plaster models were obtained from 20 orthodontic patients who had been scanned by cone-beam CT, and 3-D models were formed after software calculation and reconstruction. Computerized composite models (RP models) were then produced by the rapid prototyping technique. The crown widths, dental arch widths and dental arch lengths on each plaster model, 3-D model and RP model were measured, followed by statistical analysis with the SPSS 17.0 software package. For crown widths, dental arch lengths and crowding, there were significant differences (P<0.05) among the 3 models, whereas the dental arch widths showed no significant differences. Measurements on 3-D models were significantly smaller than those on the other two models (P<0.05). Compared with 3-D models, RP models had more measurements that did not differ significantly from those on plaster models (P>0.05). The regression coefficients among the three models were significantly different (P<0.01), ranging from 0.8 to 0.9, and the coefficient between RP and plaster models was larger than that between 3-D and plaster models. There is high consistency among the 3 models, and the remaining differences are clinically acceptable. Therefore, it is possible to substitute 3-D and RP models for plaster models in order to save storage space and improve efficiency.
NASA Astrophysics Data System (ADS)
Peckham, S. D.
2013-12-01
Model coupling frameworks like CSDMS (Community Surface Dynamics Modeling System) and ESMF (Earth System Modeling Framework) have developed mechanisms that allow heterogeneous sets of process models to be assembled in a plug-and-play manner to create composite "system models". These mechanisms facilitate code reuse, but must simultaneously satisfy many different design criteria. They must be able to mediate or compensate for differences between the process models, such as their different programming languages, computational grids, time-stepping schemes, variable names and variable units. However, they must achieve this interoperability in a way that: (1) is noninvasive, requiring only relatively small and isolated changes to the original source code, (2) does not significantly reduce performance, (3) is not time-consuming or confusing for a model developer to implement, (4) can very easily be updated to accommodate new versions of a given process model and (5) does not shift the burden of providing model interoperability to the model developers, e.g. by requiring them to provide their output in specific forms that meet the input requirements of other models. In tackling these design challenges, model framework developers have learned that the best solution is to provide each model with a simple, standardized interface, i.e. a set of standardized functions that make the model: (1) fully-controllable by a caller (e.g. a model framework) and (2) self-describing. Model control functions are separate functions that allow a caller to initialize the model, advance the model's state variables in time and finalize the model. Model description functions allow a caller to retrieve detailed information on the model's input and output variables, its computational grid and its timestepping scheme. If the caller is a modeling framework, it can compare the answers to these queries with similar answers from other process models in a collection and then automatically call framework service components as necessary to mediate the differences between the coupled models. This talk will first review two key products of the CSDMS project, namely a standardized model interface called the Basic Model Interface (BMI) and the CSDMS Standard Names. The standard names are used in conjunction with BMI to provide a semantic matching mechanism that allows output variables from one process model to be reliably used as input variables to other process models in a collection. They include not just a standardized naming scheme for model variables, but also a standardized set of terms for describing the attributes and assumptions of a given model. To illustrate the power of standardized model interfaces and metadata, a smart, light-weight modeling framework written in Python will be introduced that can automatically (without user intervention) couple a set of BMI-enabled hydrologic process components together to create a spatial hydrologic model. The same mechanisms could also be used to provide seamless integration (import/export) of data and models.
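A minimal BMI-style component makes the interface idea concrete. The sketch below follows the control/description convention summarized above (initialize, update, finalize, plus self-description queries) for a toy 1-D heat-diffusion model; it is a simplified illustration, not the official bmipy base class, and the CSDMS-style variable name is an illustrative choice.

```python
import numpy as np

class HeatDiffusionBMI:
    """Toy BMI-style component: explicit 1-D heat diffusion."""

    # --- control functions: the caller drives the model entirely here ---
    def initialize(self, config=None):
        cfg = config or {"n": 50, "alpha": 1.0, "dx": 1.0, "dt": 0.2}
        self.alpha, self.dx, self.dt = cfg["alpha"], cfg["dx"], cfg["dt"]
        self.temp = np.zeros(cfg["n"])
        self.temp[cfg["n"] // 2] = 100.0   # hot spot in the middle
        self.time = 0.0

    def update(self):
        lap = np.zeros_like(self.temp)
        lap[1:-1] = self.temp[2:] - 2 * self.temp[1:-1] + self.temp[:-2]
        self.temp += self.alpha * self.dt / self.dx**2 * lap
        self.time += self.dt

    def finalize(self):
        self.temp = None

    # --- description functions: how a framework learns to mediate ---
    def get_output_var_names(self):
        return ("land_surface__temperature",)   # CSDMS-style standard name

    def get_var_units(self, name):
        return {"land_surface__temperature": "K"}[name]

    def get_current_time(self):
        return self.time

    def get_value(self, name):
        return self.temp.copy()

# A caller needs no model-specific knowledge beyond the interface:
model = HeatDiffusionBMI()
model.initialize()
while model.get_current_time() < 10.0:
    model.update()
print(model.get_var_units("land_surface__temperature"),
      round(float(model.get_value("land_surface__temperature").max()), 2))
model.finalize()
```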
A model-averaging method for assessing groundwater conceptual model uncertainty.
Ye, Ming; Pohlmann, Karl F; Chapman, Jenny B; Pohll, Greg M; Reeves, Donald M
2010-01-01
This study evaluates alternative groundwater models with different recharge and geologic components at the northern Yucca Flat area of the Death Valley Regional Flow System (DVRFS), USA. Recharge over the DVRFS has been estimated using five methods, and five geological interpretations are available at the northern Yucca Flat area. Combining the recharge and geological components together with additional modeling components that represent other hydrogeological conditions yields a total of 25 groundwater flow models. As all the models are plausible given available data and information, evaluating model uncertainty becomes inevitable. On the other hand, hydraulic parameters (e.g., hydraulic conductivity) are uncertain in each model, giving rise to parametric uncertainty. Propagation of the uncertainty in the models and model parameters through groundwater modeling causes predictive uncertainty in model predictions (e.g., hydraulic head and flow). Parametric uncertainty within each model is assessed using Monte Carlo simulation, and model uncertainty is evaluated using the model averaging method. Two model-averaging techniques (on the basis of information criteria and GLUE) are discussed. This study shows that contribution of model uncertainty to predictive uncertainty is significantly larger than that of parametric uncertainty. For the recharge and geological components, uncertainty in the geological interpretations has more significant effect on model predictions than uncertainty in the recharge estimates. In addition, weighted residuals vary more for the different geological models than for different recharge models. Most of the calibrated observations are not important for discriminating between the alternative models, because their weighted residuals vary only slightly from one model to another.
Meta-Modeling: A Knowledge-Based Approach to Facilitating Model Construction and Reuse
NASA Technical Reports Server (NTRS)
Keller, Richard M.; Dungan, Jennifer L.
1997-01-01
In this paper, we introduce a new modeling approach called meta-modeling and illustrate its practical applicability to the construction of physically-based ecosystem process models. As a critical adjunct to modeling codes, meta-modeling requires explicit specification of certain background information related to the construction and conceptual underpinnings of a model. This information formalizes the heretofore tacit relationship between the mathematical modeling code and the underlying real-world phenomena being investigated, and gives insight into the process by which the model was constructed. We show how the explicit availability of such information can make models more understandable and reusable and less subject to misinterpretation. In particular, background information enables potential users to better interpret an implemented ecosystem model without direct assistance from the model author. Additionally, we show how the discipline involved in specifying background information leads to improved management of model complexity and fewer implementation errors. We illustrate the meta-modeling approach in the context of the Scientists' Intelligent Graphical Modeling Assistant (SIGMA), a new model construction environment. As the user constructs a model using SIGMA, the system adds appropriate background information that ties the executable model to the underlying physical phenomena under investigation. Not only does this information improve the understandability of the final model, it also serves to reduce the overall time and programming expertise necessary to initially build and subsequently modify models. Furthermore, SIGMA's use of background knowledge helps eliminate coding errors resulting from scientific and dimensional inconsistencies that are otherwise difficult to avoid when building complex models. As a demonstration of SIGMA's utility, the system was used to reimplement and extend a well-known forest ecosystem dynamics model: Forest-BGC.
10. MOVABLE BED SEDIMENTATION MODELS. DOGTOOTH BEND MODEL (MODEL SCALE: 1' = 400' HORIZONTAL, 1' = 100' VERTICAL), AND GREENVILLE BRIDGE MODEL (MODEL SCALE: 1' = 360' HORIZONTAL, 1' = 100' VERTICAL). - Waterways Experiment Station, Hydraulics Laboratory, Halls Ferry Road, 2 miles south of I-20, Vicksburg, Warren County, MS
Bayesian Data-Model Fit Assessment for Structural Equation Modeling
ERIC Educational Resources Information Center
Levy, Roy
2011-01-01
Bayesian approaches to modeling are receiving an increasing amount of attention in the areas of model construction and estimation in factor analysis, structural equation modeling (SEM), and related latent variable models. However, model diagnostics and model criticism remain relatively understudied aspects of Bayesian SEM. This article describes…
Evolution of computational models in BioModels Database and the Physiome Model Repository.
Scharm, Martin; Gebhardt, Tom; Touré, Vasundra; Bagnacani, Andrea; Salehzadeh-Yazdi, Ali; Wolkenhauer, Olaf; Waltemath, Dagmar
2018-04-12
A useful model is one that is being (re)used. The development of a successful model does not finish with its publication. During reuse, models are being modified, i.e. expanded, corrected, and refined. Even small changes in the encoding of a model can, however, significantly affect its interpretation. Our motivation for the present study is to identify changes in models and make them transparent and traceable. We analysed 13734 models from BioModels Database and the Physiome Model Repository. For each model, we studied the frequencies and types of updates between its first and latest release. To demonstrate the impact of changes, we explored the history of a Repressilator model in BioModels Database. We observed continuous updates in the majority of models. Surprisingly, even the early models are still being modified. We furthermore detected that many updates target annotations, which improves the information one can gain from models. To support the analysis of changes in model repositories, we developed MoSt, an online tool for visualisations of changes in models. The scripts used to generate the data and figures for this study are available from GitHub at https://github.com/binfalse/BiVeS-StatsGenerator and as a Docker image at https://hub.docker.com/r/binfalse/bives-statsgenerator/. The website https://most.bio.informatik.uni-rostock.de/ provides interactive access to model versions and their evolutionary statistics. The reuse of models is still impeded by a lack of trust and documentation. A detailed and transparent documentation of all aspects of the model, including its provenance, will improve this situation. Knowledge about a model's provenance can avoid the repetition of mistakes that others already faced. More insights are gained into how the system evolves from initial findings to a profound understanding. We argue that it is the responsibility of the maintainers of model repositories to offer transparent model provenance to their users.
NASA Astrophysics Data System (ADS)
Li, J.
2017-12-01
Large-watershed flood simulation and forecasting is an important application of distributed hydrological models, but it poses challenges, including the effect of the model's spatial resolution on model performance and accuracy. To study this spatial resolution effect, the distributed hydrological model, the Liuxihe model, was built at resolutions of 1000m*1000m, 600m*600m, 500m*500m, 400m*400m and 200m*200m, with the aim of finding the best resolution for large-watershed flood simulation and forecasting. This study sets up a physically based distributed hydrological model for flood forecasting of the Liujiang River basin in south China. The terrain data (digital elevation model, DEM), soil type and land use type are freely downloaded from the web. The model parameters are optimized using an improved Particle Swarm Optimization (PSO) algorithm; parameter optimization reduces the uncertainty that arises when model parameters are derived physically. Among the tested resolutions (200m*200m to 1000m*1000m), the best resolution for flood simulation and forecasting is 200m*200m, and as the spatial resolution coarsens, model performance and accuracy deteriorate. At 1000m*1000m the flood simulation and forecasting results are the worst, and the river channel network delineated at this resolution differs from the actual one. To keep the model at an acceptable performance, a minimum spatial resolution is needed: the suggested threshold for modeling floods in the Liujiang River basin is a 500m*500m grid cell, but a 200m*200m grid cell is recommended in this study to keep the model at its best performance.
Computational Models for Calcium-Mediated Astrocyte Functions.
Manninen, Tiina; Havela, Riikka; Linne, Marja-Leena
2018-01-01
The computational neuroscience field has heavily concentrated on the modeling of neuronal functions, largely ignoring other brain cells, including one type of glial cell, the astrocytes. Despite the short history of modeling astrocytic functions, we were delighted about the hundreds of models developed so far to study the role of astrocytes, most often in calcium dynamics, synchronization, information transfer, and plasticity in vitro , but also in vascular events, hyperexcitability, and homeostasis. Our goal here is to present the state-of-the-art in computational modeling of astrocytes in order to facilitate better understanding of the functions and dynamics of astrocytes in the brain. Due to the large number of models, we concentrated on a hundred models that include biophysical descriptions for calcium signaling and dynamics in astrocytes. We categorized the models into four groups: single astrocyte models, astrocyte network models, neuron-astrocyte synapse models, and neuron-astrocyte network models to ease their use in future modeling projects. We characterized the models based on which earlier models were used for building the models and which type of biological entities were described in the astrocyte models. Features of the models were compared and contrasted so that similarities and differences were more readily apparent. We discovered that most of the models were basically generated from a small set of previously published models with small variations. However, neither citations to all the previous models with similar core structure nor explanations of what was built on top of the previous models were provided, which made it possible, in some cases, to have the same models published several times without an explicit intention to make new predictions about the roles of astrocytes in brain functions. Furthermore, only a few of the models are available online which makes it difficult to reproduce the simulation results and further develop the models. Thus, we would like to emphasize that only via reproducible research are we able to build better computational models for astrocytes, which truly advance science. Our study is the first to characterize in detail the biophysical and biochemical mechanisms that have been modeled for astrocytes.
Breuer, L.; Huisman, J.A.; Willems, P.; Bormann, H.; Bronstert, A.; Croke, B.F.W.; Frede, H.-G.; Graff, T.; Hubrechts, L.; Jakeman, A.J.; Kite, G.; Lanini, J.; Leavesley, G.; Lettenmaier, D.P.; Lindstrom, G.; Seibert, J.; Sivapalan, M.; Viney, N.R.
2009-01-01
This paper introduces the project on 'Assessing the impact of land use change on hydrology by ensemble modeling (LUCHEM)' that aims at investigating the envelope of predictions on changes in hydrological fluxes due to land use change. As part of a series of four papers, this paper outlines the motivation and setup of LUCHEM, and presents a model intercomparison for the present-day simulation results. Such an intercomparison provides a valuable basis to investigate the effects of different model structures on model predictions and paves the way for the analysis of the performance of multi-model ensembles and the reliability of the scenario predictions in companion papers. In this study, we applied a set of 10 lumped, semi-lumped and fully distributed hydrological models that have been previously used in land use change studies to the low mountainous Dill catchment, Germany. Substantial differences in model performance were observed, with Nash-Sutcliffe efficiencies ranging from 0.53 to 0.92. Differences in model performance were attributed to (1) model input data, (2) model calibration and (3) the physical basis of the models. The models were applied with two sets of input data: an original and a homogenized data set. This homogenization of precipitation, temperature and leaf area index was performed to reduce the variation between the models. Homogenization improved the comparability of model simulations and resulted in a reduced average bias, although some variation in model data input remained. The effect of the physical differences between models on the long-term water balance was mainly attributed to differences in how models represent evapotranspiration. Semi-lumped and lumped conceptual models slightly outperformed the fully distributed and physically based models. This was attributed to the automatic model calibration typically used for this type of model. Overall, however, we conclude that there was no superior model if several measures of model performance are considered and that all models are suitable to participate in further multi-model ensemble set-ups and land use change scenario investigations.
Benchmarking test of empirical root water uptake models
NASA Astrophysics Data System (ADS)
dos Santos, Marcos Alex; de Jong van Lier, Quirijn; van Dam, Jos C.; Freire Bezerra, Andre Herman
2017-01-01
Detailed physical models describing root water uptake (RWU) are an important tool for the prediction of RWU and crop transpiration, but the hydraulic parameters involved are hardly ever available, making them less attractive for many studies. Empirical models are more readily used because of their simplicity and the associated lower data requirements. The purpose of this study is to evaluate the capability of some empirical models to mimic the RWU distribution under varying environmental conditions predicted from numerical simulations with a detailed physical model. A review of some empirical models used as sub-models in ecohydrological models is presented, and alternative empirical RWU models are proposed. All these empirical models are analogous to the standard Feddes model, but differ in how RWU is partitioned over depth or how the transpiration reduction function is defined. The parameters of the empirical models are determined by inverse modelling of simulated depth-dependent RWU. The performance of the empirical models and their optimized empirical parameters depends on the scenario. The standard empirical Feddes model only performs well in scenarios with low root length density R, i.e. for scenarios with low RWU compensation. For medium and high R, the Feddes RWU model cannot properly mimic the root uptake dynamics predicted by the physical model. The Jarvis RWU model in combination with the Feddes reduction function (JMf) only provides good predictions for low and medium R scenarios; for high R, it cannot mimic the uptake patterns predicted by the physical model. Incorporating a newly proposed reduction function into the Jarvis model improved RWU predictions. Regarding the ability of the models to predict plant transpiration, all models accounting for compensation show good performance. The Akaike information criterion (AIC) indicates that the Jarvis (2010) model (JMII), with no empirical parameters to be estimated, is the best model. The proposed models are better at predicting RWU patterns similar to the physical model; the statistical indices point to them as the best alternatives for mimicking the RWU predictions of the physical model.
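The Feddes reduction function that these empirical models share is a simple piecewise-linear factor. The sketch below implements it with commonly quoted critical pressure heads; the values are typical crop defaults assumed for illustration, not those calibrated in the study.

```python
import numpy as np

# Feddes transpiration reduction factor alpha(h) of soil water pressure
# head h (cm): zero near saturation (above h1) and beyond wilting (below
# h4), optimal between h2 and h3, linear in between. Heads are negative.
h1, h2, h3, h4 = -10.0, -25.0, -400.0, -8000.0   # typical values (assumed)

def feddes_alpha(h):
    # np.interp needs ascending x, so list the heads from dry to wet;
    # values outside [h4, h1] clamp to the endpoint value 0.
    return np.interp(h, [h4, h3, h2, h1], [0.0, 1.0, 1.0, 0.0])

def rwu(h, s_max):
    """Actual root water uptake = alpha(h) times potential uptake S_max."""
    return feddes_alpha(h) * s_max

heads = np.array([-5.0, -50.0, -1000.0, -9000.0])
print(np.round(feddes_alpha(heads), 3))   # -> [0.    1.    0.921 0.   ]
```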
Modeling uncertainty: quicksand for water temperature modeling
Bartholow, John M.
2003-01-01
Uncertainty has been a hot topic relative to science generally, and modeling specifically. Modeling uncertainty comes in various forms: measured data, limited model domain, model parameter estimation, model structure, sensitivity to inputs, modelers themselves, and users of the results. This paper will address important components of uncertainty in modeling water temperatures, and discuss several areas that need attention as the modeling community grapples with how to incorporate uncertainty into modeling without getting stuck in the quicksand that prevents constructive contributions to policy making. The material, and in particular the references, are meant to supplement the presentation given at this conference.
Energy modeling. Volume 2: Inventory and details of state energy models
NASA Astrophysics Data System (ADS)
Melcher, A. G.; Underwood, R. G.; Weber, J. C.; Gist, R. L.; Holman, R. P.; Donald, D. W.
1981-05-01
An inventory of energy models developed by or for state governments is presented, and certain models are discussed in depth. These models address a variety of purposes, such as supply or demand of energy or of certain types of energy, emergency management of energy, and energy economics. Ten models are described; for each, the purpose, use, and history of the model are discussed, and information is given on its outputs, inputs, and mathematical structure. The models include five dealing with energy demand, one of which is econometric and four of which are econometric-engineering end-use models.
NASA Astrophysics Data System (ADS)
Peckham, Scott
2016-04-01
Over the last decade, model coupling frameworks like CSDMS (Community Surface Dynamics Modeling System) and ESMF (Earth System Modeling Framework) have developed mechanisms that make it much easier for modelers to connect heterogeneous sets of process models in a plug-and-play manner to create composite "system models". These mechanisms greatly simplify code reuse, but must simultaneously satisfy many different design criteria. They must be able to mediate or compensate for differences between the process models, such as their different programming languages, computational grids, time-stepping schemes, variable names and variable units. However, they must achieve this interoperability in a way that: (1) is noninvasive, requiring only relatively small and isolated changes to the original source code, (2) does not significantly reduce performance, (3) is not time-consuming or confusing for a model developer to implement, (4) can very easily be updated to accommodate new versions of a given process model and (5) does not shift the burden of providing model interoperability to the model developers. In tackling these design challenges, model framework developers have learned that the best solution is to provide each model with a simple, standardized interface, i.e. a set of standardized functions that make the model: (1) fully-controllable by a caller (e.g. a model framework) and (2) self-describing with standardized metadata. Model control functions are separate functions that allow a caller to initialize the model, advance the model's state variables in time and finalize the model. Model description functions allow a caller to retrieve detailed information on the model's input and output variables, its computational grid and its timestepping scheme. If the caller is a modeling framework, it can use the self description functions to learn about each process model in a collection to be coupled and then automatically call framework service components (e.g. regridders, time interpolators and unit converters) as necessary to mediate the differences between them so they can work together. This talk will first review two key products of the CSDMS project, namely a standardized model interface called the Basic Model Interface (BMI) and the CSDMS Standard Names. The standard names are used in conjunction with BMI to provide a semantic matching mechanism that allows output variables from one process model or data set to be reliably used as input variables to other process models in a collection. They include not just a standardized naming scheme for model variables, but also a standardized set of terms for describing the attributes and assumptions of a given model. Recent efforts to bring powerful uncertainty analysis and inverse modeling toolkits such as DAKOTA into modeling frameworks will also be described. This talk will conclude with an overview of several related modeling projects that have been funded by NSF's EarthCube initiative, namely the Earth System Bridge, OntoSoft and GeoSemantics projects.
[A review on research of land surface water and heat fluxes].
Sun, Rui; Liu, Changming
2003-03-01
Many field experiments have been conducted, and soil-vegetation-atmosphere transfer (SVAT) models have been established to estimate land surface heat fluxes. In this paper, the progress of experimental research on land surface water and heat fluxes is reviewed, and three kinds of SVAT models (single-layer, two-layer and multi-layer models) are analyzed. Remote sensing data are widely used to estimate land surface heat fluxes. Based on remote sensing and the energy balance equation, different models, such as the simplified model, single-layer model, extra resistance model, crop water stress index model and two-source resistance model, have been developed to estimate land surface heat fluxes and evapotranspiration. These models are also analyzed in this paper.
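The single-layer, energy-balance-residual approach mentioned here fits in a few lines. The sketch below computes sensible heat from a bulk resistance formula and latent heat (evapotranspiration) as the residual of the energy balance; all inputs are illustrative values of the kind retrieved per pixel from remote sensing.

```python
# Single-layer energy balance: Rn = G + H + LE, so LE = Rn - G - H,
# with sensible heat H = rho * cp * (Ts - Ta) / ra.
rho_air = 1.2      # air density (kg m-3)
cp      = 1004.0   # specific heat of air at constant pressure (J kg-1 K-1)

def latent_heat_residual(rn, g, ts, ta, ra):
    """Return (LE, H) in W m-2 given net radiation rn, soil heat flux g,
    surface/air temperatures ts/ta (K) and aerodynamic resistance ra (s m-1)."""
    h = rho_air * cp * (ts - ta) / ra
    return rn - g - h, h

# Illustrative per-pixel inputs (assumed, not from any particular scene):
rn, g = 500.0, 50.0               # W m-2
ts, ta, ra = 303.0, 298.0, 60.0   # K, K, s m-1
le, h = latent_heat_residual(rn, g, ts, ta, ra)
print(f"H = {h:.0f} W m-2, LE = {le:.0f} W m-2")   # H = 100, LE = 350
```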
Examination of simplified travel demand model. [Internal volume forecasting model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, R.L. Jr.; McFarlane, W.J.
1978-01-01
A simplified travel demand model, the Internal Volume Forecasting (IVF) model, proposed by Low in 1972, is evaluated as an alternative to the conventional urban travel demand modeling process. The calibration of the IVF model for a county-level study area in Central Wisconsin results in what appears to be a reasonable model; however, analysis of the structure of the model reveals two primary mis-specifications. Correction of the mis-specifications leads to a simplified gravity-model version of the conventional urban travel demand models. Application of the original IVF model to ''forecast'' 1960 traffic volumes, based on the model calibrated for 1970, produces accurate estimates. Shortcut and ad hoc models may appear to provide reasonable results in both the base and horizon years; however, as shown by the IVF model, such models will not always provide a reliable basis for transportation planning and investment decisions.
MPTinR: analysis of multinomial processing tree models in R.
Singmann, Henrik; Kellen, David
2013-06-01
We introduce MPTinR, a software package developed for the analysis of multinomial processing tree (MPT) models. MPT models represent a prominent class of cognitive measurement models for categorical data with applications in a wide variety of fields. MPTinR is the first software for the analysis of MPT models in the statistical programming language R, providing a modeling framework that is more flexible than standalone software packages. MPTinR also introduces important features such as (1) the ability to calculate the Fisher information approximation measure of model complexity for MPT models, (2) the ability to fit models for categorical data outside the MPT model class, such as signal detection models, (3) a function for model selection across a set of nested and nonnested candidate models (using several model selection indices), and (4) multicore fitting. MPTinR is available from the Comprehensive R Archive Network at http://cran.r-project.org/web/packages/MPTinR/.
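MPTinR itself is an R package, but the fitting task it automates is easy to illustrate. The sketch below fits the textbook one-high-threshold recognition model, one of the simplest MPT models, by maximum likelihood on hypothetical counts in Python; it stands in for, and is not part of, MPTinR.

```python
import numpy as np
from scipy.optimize import minimize

# One-high-threshold (1HT) MPT model for recognition memory:
#   old items:  P(hit) = D + (1-D)*g,   P(miss) = (1-D)*(1-g)
#   new items:  P(false alarm) = g,     P(correct rejection) = 1-g
# where D = detection probability and g = guessing probability.
hits, misses = 75, 25          # hypothetical responses to 100 old items
fas, crs     = 30, 70          # hypothetical responses to 100 new items

def neg_log_lik(params):
    D, g = params
    p_hit, p_fa = D + (1 - D) * g, g
    # binomial log-likelihood over the two item types (constants dropped)
    return -(hits * np.log(p_hit) + misses * np.log(1 - p_hit)
             + fas * np.log(p_fa) + crs * np.log(1 - p_fa))

res = minimize(neg_log_lik, x0=[0.5, 0.5], method="L-BFGS-B",
               bounds=[(1e-6, 1 - 1e-6)] * 2)
D, g = res.x
print(f"detection D = {D:.3f}, guessing g = {g:.3f}")  # ~0.643 and ~0.300
```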
Latent log-linear models for handwritten digit classification.
Deselaers, Thomas; Gass, Tobias; Heigold, Georg; Ney, Hermann
2012-06-01
We present latent log-linear models, an extension of log-linear models incorporating latent variables, and we propose two applications thereof: log-linear mixture models and image deformation-aware log-linear models. The resulting models are fully discriminative, can be trained efficiently, and the model complexity can be controlled. Log-linear mixture models offer additional flexibility within the log-linear modeling framework. Unlike previous approaches, the image deformation-aware model directly considers image deformations and allows for a discriminative training of the deformation parameters. Both are trained using alternating optimization. For certain variants, convergence to a stationary point is guaranteed and, in practice, even variants without this guarantee converge and find models that perform well. We tune the methods on the USPS data set and evaluate on the MNIST data set, demonstrating the generalization capabilities of our proposed models. Our models, although using significantly fewer parameters, are able to obtain competitive results with models proposed in the literature.
Understanding and Predicting Urban Propagation Losses
2009-09-01
The report examines empirical urban propagation models, including the Extended (COST) Hata model, the Modified Hata model, and the Walfisch-Ikegami model, and applies them in urban prediction scenarios (scenario one: Walfisch-Ikegami model; scenario two: Modified Hata model; scenario three: urban Hata model).
A Framework for Sharing and Integrating Remote Sensing and GIS Models Based on Web Service
Chen, Zeqiang; Lin, Hui; Chen, Min; Liu, Deer; Bao, Ying; Ding, Yulin
2014-01-01
Sharing and integrating Remote Sensing (RS) and Geographic Information System/Science (GIS) models are critical for developing practical application systems. Facilitating model sharing and model integration is a problem for model publishers and model users, respectively. To address this problem, a framework based on a Web service for sharing and integrating RS and GIS models is proposed in this paper. The fundamental idea of the framework is to publish heterogeneous RS and GIS models into standard Web services for sharing and interoperation and then to integrate the RS and GIS models using Web services. For the former, a “black box” and a visual method are employed to facilitate the publishing of the models as Web services. For the latter, model integration based on the geospatial workflow and semantic supported marching method is introduced. Under this framework, model sharing and integration is applied for developing the Pearl River Delta water environment monitoring system. The results show that the framework can facilitate model sharing and model integration for model publishers and model users. PMID:24901016
NASA Astrophysics Data System (ADS)
Zhu, Wei; Timmermans, Harry
2011-06-01
Models of geographical choice behavior have been predominantly based on rational choice models, which assume that decision makers are utility-maximizers. Rational choice models may be less appropriate as behavioral models when modeling decisions in complex environments in which decision makers may simplify the decision problem using heuristics. Pedestrian behavior in shopping streets is an example. We therefore propose a modeling framework for pedestrian shopping behavior incorporating principles of bounded rationality. We extend three classical heuristic rules (the conjunctive, disjunctive, and lexicographic rules) by introducing threshold heterogeneity. The proposed models are implemented using data on pedestrian behavior in Wang Fujing Street in the city center of Beijing, China. The models are estimated and compared with multinomial logit models and mixed logit models. Results show that the heuristic models perform best for all the decisions modeled. Validation tests are carried out through multi-agent simulation by comparing simulated spatio-temporal agent behavior with the observed pedestrian behavior. The predictions of heuristic models are slightly better than those of the multinomial logit models.
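The three classical rules the abstract extends can be stated very compactly; the following sketch shows them without the paper's threshold-heterogeneity component, using hypothetical attribute values and thresholds.

```python
# Sketch of the conjunctive, disjunctive, and lexicographic heuristic rules.

def conjunctive(x, thresholds):
    # Accept only if every attribute clears its threshold.
    return all(xi >= ti for xi, ti in zip(x, thresholds))

def disjunctive(x, thresholds):
    # Accept if any attribute clears its threshold.
    return any(xi >= ti for xi, ti in zip(x, thresholds))

def lexicographic(options, importance_order):
    # Choose by the most important attribute; break ties with the next one.
    return max(options, key=lambda x: [x[i] for i in importance_order])

stores = {"A": (0.9, 0.4), "B": (0.6, 0.8)}   # (attractiveness, proximity), hypothetical
print(conjunctive(stores["A"], thresholds=(0.5, 0.5)))   # False: proximity fails
print(disjunctive(stores["A"], thresholds=(0.5, 0.5)))   # True: attractiveness passes
print(lexicographic(list(stores.values()), importance_order=[0, 1]))  # (0.9, 0.4)
```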
The Sim-SEQ Project: Comparison of Selected Flow Models for the S-3 Site
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mukhopadhyay, Sumit; Doughty, Christine A.; Bacon, Diana H.
Sim-SEQ is an international initiative on model comparison for geologic carbon sequestration, with an objective to understand and, if possible, quantify model uncertainties. Model comparison efforts in Sim-SEQ are at present focusing on one specific field test site, hereafter referred to as the Sim-SEQ Study site (or S-3 site). Within Sim-SEQ, different modeling teams are developing conceptual models of CO2 injection at the S-3 site. In this paper, we select five flow models of the S-3 site and provide a qualitative comparison of their attributes and predictions. These models are based on five different simulators or modeling approaches: TOUGH2/EOS7C, STOMP-CO2e, MoReS, TOUGH2-MP/ECO2N, and VESA. In addition to model-to-model comparison, we perform a limited model-to-data comparison, and illustrate how model choices impact model predictions. We conclude the paper by making recommendations for model refinement that are likely to result in less uncertainty in model predictions.
Jardine, Bartholomew; Raymond, Gary M; Bassingthwaighte, James B
2015-01-01
The Modular Program Constructor (MPC) is an open-source Java based modeling utility, built upon JSim's Mathematical Modeling Language (MML) ( http://www.physiome.org/jsim/) that uses directives embedded in model code to construct larger, more complicated models quickly and with less error than manually combining models. A major obstacle in writing complex models for physiological processes is the large amount of time it takes to model the myriad processes taking place simultaneously in cells, tissues, and organs. MPC replaces this task with code-generating algorithms that take model code from several different existing models and produce model code for a new JSim model. This is particularly useful during multi-scale model development where many variants are to be configured and tested against data. MPC encodes and preserves information about how a model is built from its simpler model modules, allowing the researcher to quickly substitute or update modules for hypothesis testing. MPC is implemented in Java and requires JSim to use its output. MPC source code and documentation are available at http://www.physiome.org/software/MPC/.
Comparison of dark energy models after Planck 2015
NASA Astrophysics Data System (ADS)
Xu, Yue-Yao; Zhang, Xin
2016-11-01
We make a comparison of ten typical, popular dark energy models according to their capability of fitting the current observational data. The observational data we use in this work include the JLA sample of the type Ia supernovae observation, the Planck 2015 distance priors of the cosmic microwave background observation, the baryon acoustic oscillations measurements, and the direct measurement of the Hubble constant. Since the models have different numbers of parameters, in order to make a fair comparison, we employ the Akaike and Bayesian information criteria to assess the worth of the models. The analysis results show that, according to the capability of explaining observations, the cosmological constant model is still the best one among all the dark energy models. The generalized Chaplygin gas model, the constant w model, and the α dark energy model are worse than the cosmological constant model, but still are good models compared to others. The holographic dark energy model, the new generalized Chaplygin gas model, and the Chevallier-Polarski-Linder model can still fit the current observations well, but from the perspective of model economy (the number of parameters) they are less favored. The new agegraphic dark energy model, the Dvali-Gabadadze-Porrati model, and the Ricci dark energy model are excluded by the current observations.
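The information criteria used for this kind of comparison are simple to compute from the best-fit chi-square; a minimal sketch follows, with hypothetical chi-square values, parameter counts k, and data-point count N (for a Gaussian likelihood, AIC = chi2_min + 2k and BIC = chi2_min + k ln N).

```python
# Information-criterion comparison sketch (all numbers hypothetical).
import numpy as np

def aic(chi2_min, k):
    return chi2_min + 2 * k

def bic(chi2_min, k, n):
    return chi2_min + k * np.log(n)

# e.g., a baseline model vs a variant with two extra parameters:
print(aic(700.0, k=4), bic(700.0, k=4, n=740))
print(aic(698.5, k=6), bic(698.5, k=6, n=740))  # slightly better fit, but penalized
```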
Parametric regression model for survival data: Weibull regression model as an example
2016-01-01
The Weibull regression model is one of the most popular parametric regression models, in that it provides an estimate of the baseline hazard function as well as coefficients for covariates. Because of technical difficulties, the Weibull regression model is seldom used in the medical literature compared with the semi-parametric proportional hazards model. To familiarize clinical investigators with the Weibull regression model, this article introduces some basic knowledge and then illustrates how to fit the model with the R software. The SurvRegCensCov package is useful in converting estimated coefficients to clinically relevant statistics such as the hazard ratio (HR) and the event time ratio (ETR). Model adequacy can be assessed by inspecting Kaplan-Meier curves stratified by a categorical variable. The eha package provides an alternative way to fit the Weibull regression model. The check.dist() function helps to assess the goodness-of-fit of the model. Variable selection is based on the importance of a covariate, which can be tested using the anova() function. Alternatively, backward elimination starting from a full model is an efficient way for model development. Visualizing the Weibull regression model after development is also useful, as it provides another way to report findings. PMID:28149846
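The article works in R (SurvRegCensCov, eha); as a language-neutral illustration, here is the analogous Weibull accelerated-failure-time fit in Python with the lifelines package. The data frame and column names are hypothetical.

```python
# Weibull AFT regression sketch with lifelines (hypothetical survival data).
import pandas as pd
from lifelines import WeibullAFTFitter

df = pd.DataFrame({
    "time": [5, 8, 12, 3, 9, 14, 7, 11],   # follow-up time
    "event": [1, 0, 1, 1, 0, 1, 1, 0],     # 1 = event observed, 0 = censored
    "age": [60, 55, 70, 65, 50, 72, 58, 63],
})

aft = WeibullAFTFitter()
aft.fit(df, duration_col="time", event_col="event")
aft.print_summary()   # coefficients; exponentiating gives time ratios (cf. ETR)
```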
Inner Magnetosphere Modeling at the CCMC: Ring Current, Radiation Belt and Magnetic Field Mapping
NASA Astrophysics Data System (ADS)
Rastaetter, L.; Mendoza, A. M.; Chulaki, A.; Kuznetsova, M. M.; Zheng, Y.
2013-12-01
Modeling of the inner magnetosphere has entered center stage with the launch of the Van Allen Probes (RBSP) in 2012. The Community Coordinated Modeling Center (CCMC) has drastically improved its offerings of inner magnetosphere models that cover energetic particles in the Earth's ring current and radiation belts. Models added to the CCMC include the stand-alone Comprehensive Inner Magnetosphere-Ionosphere (CIMI) model by M.C. Fok, the Rice Convection Model (RCM) by R. Wolf and S. Sazykin, and numerous versions of the Tsyganenko magnetic field model (T89, T96, T01quiet, TS05). These models join the LANL* model by Y. Yu that was offered for instant run earlier in the year. In addition to these stand-alone models, the Comprehensive Ring Current Model (CRCM) by M.C. Fok and N. Buzulukova joined as a component of the Space Weather Modeling Framework (SWMF) in the magnetosphere model run-on-request category. We present modeling results of the ring current and radiation belt models and demonstrate tracking of satellites such as RBSP. Calculations using the magnetic field models include mappings to the magnetic equator or to minimum-B positions and the determination of foot points in the ionosphere.
Kim, Steven B; Kodell, Ralph L; Moon, Hojin
2014-03-01
In chemical and microbial risk assessments, risk assessors fit dose-response models to high-dose data and extrapolate downward to risk levels in the range of 1-10%. Although multiple dose-response models may be able to fit the data adequately in the experimental range, the estimated effective dose (ED) corresponding to an extremely small risk can be substantially different from model to model. In this respect, model averaging (MA) provides more robustness than a single dose-response model in the point and interval estimation of an ED. In MA, accounting for both data uncertainty and model uncertainty is crucial, but addressing model uncertainty is not achieved simply by increasing the number of models in a model space. A plausible set of models for MA can be characterized by goodness of fit and diversity surrounding the truth. We propose a diversity index (DI) to balance between these two characteristics in model space selection. It addresses a collective property of a model space rather than individual performance of each model. Tuning parameters in the DI control the size of the model space for MA. © 2013 Society for Risk Analysis.
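The paper's diversity index is not reproduced here; the sketch below shows only the generic model-averaging step it builds on, combining per-model effective-dose estimates with Akaike weights. All numerical values are hypothetical.

```python
# Model averaging sketch: Akaike weights over a small model space.
import numpy as np

aic_values = np.array([112.4, 113.1, 115.9])   # one AIC per dose-response model
ed01_estimates = np.array([0.8, 1.4, 2.9])     # ED at 1% extra risk, per model

delta = aic_values - aic_values.min()
weights = np.exp(-delta / 2)
weights /= weights.sum()                        # Akaike weights

ed01_ma = np.dot(weights, ed01_estimates)       # model-averaged ED01
print(weights.round(3), round(ed01_ma, 3))
```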
Joe H. Scott; Robert E. Burgan
2005-01-01
This report describes a new set of standard fire behavior fuel models for use with Rothermel's surface fire spread model and the relationship of the new set to the original set of 13 fire behavior fuel models. To assist with transition to using the new fuel models, a fuel model selection guide, fuel model crosswalk, and set of fuel model photos are provided.
Wang, Juan; Wang, Jian Lin; Liu, Jia Bin; Jiang, Wen; Zhao, Chang Xing
2017-06-18
The dynamic variations of evapotranspiration (ET) and weather data during the summer maize growing seasons of 2013-2015 were monitored with an eddy covariance system, and the applicability of two operational models based on the Penman-Monteith model (the FAO-PM model and the KP-PM model) was analyzed. First, the key parameters in the two models were calibrated with the measured data from 2013 and 2014; second, the daily ET in 2015 calculated by each model was compared with the observed ET. Finally, the coefficients in the KP-PM model were further revised with the coefficients calculated according to the different growth stages, and the performance of the revised KP-PM model was also evaluated. The statistical measures indicated that the daily ET for 2015 calculated by the FAO-PM model was closer to the observed ET than that calculated by the KP-PM model. However, the daily ET calculated from the revised KP-PM model was more accurate than that from the FAO-PM model. It was also found that the key parameters in the two models were correlated with weather conditions, so calibration is necessary before using the models to predict ET. These results provide guidance for predicting ET with the two models.
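For reference, the standard daily FAO-56 Penman-Monteith reference-ET equation that underlies the FAO-PM model is sketched below (the paper's calibrated variant and the KP-PM model are not reproduced); the input values in the example are hypothetical daily observations.

```python
# FAO-56 reference evapotranspiration sketch (daily time step).
def fao56_et0(delta, rn, g, gamma, t_mean, u2, es, ea):
    """delta: slope of the vapour-pressure curve [kPa/degC]; rn: net radiation
    [MJ/m2/day]; g: soil heat flux [MJ/m2/day]; gamma: psychrometric constant
    [kPa/degC]; t_mean: mean air temperature [degC]; u2: wind speed at 2 m
    [m/s]; es, ea: saturation and actual vapour pressure [kPa].
    Returns reference ET in mm/day."""
    num = 0.408 * delta * (rn - g) + gamma * (900.0 / (t_mean + 273.0)) * u2 * (es - ea)
    den = delta + gamma * (1.0 + 0.34 * u2)
    return num / den

print(round(fao56_et0(0.18, 14.0, 0.0, 0.066, 22.0, 2.1, 2.64, 1.80), 2))  # ~4.7 mm/day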
Implementation of Dryden Continuous Turbulence Model into Simulink for LSA-02 Flight Test Simulation
NASA Astrophysics Data System (ADS)
Ichwanul Hakim, Teuku Mohd; Arifianto, Ony
2018-04-01
Turbulence is small-scale air movement in the atmosphere caused by instabilities in the pressure and temperature distributions. A turbulence model is integrated into a flight mechanics model as an atmospheric disturbance. Common turbulence models used in flight mechanics models are the Dryden and von Karman models. In this study, only the Dryden continuous turbulence model was implemented, following the military specification MIL-HDBK-1797. The model was implemented in MATLAB Simulink and will be integrated with the flight mechanics model to observe the response of the aircraft as it flies through a turbulence field. The turbulence is generated by passing band-limited Gaussian white noise through filters derived from the Dryden power spectral densities. To verify the model, the implementation was compared with the equivalent model provided in the Simulink Aerospace Blockset. The results show some differences in two linear velocities (vg and wg) and three angular rates (pg, qg, and rg). The differences are caused by a different determination of the turbulence scale length in the Aerospace Blockset. After adjusting the turbulence scale length in the implemented model, both models produce similar output.
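A minimal sketch of the filtering approach described above, for the longitudinal (u-axis) Dryden filter only: band-limited Gaussian white noise is shaped by H_u(s) = sigma_u * sqrt(2*Lu/(pi*V)) / (1 + (Lu/V)s). The airspeed, scale length, and intensity values are hypothetical, not the LSA-02 settings.

```python
# Dryden u-gust sketch: white noise shaped by a first-order filter.
import numpy as np
from scipy import signal

V, Lu, sigma_u = 50.0, 200.0, 1.5      # airspeed [m/s], scale length [m], intensity [m/s]
dt = 0.01
t = np.arange(0.0, 60.0, dt)

K = sigma_u * np.sqrt(2.0 * Lu / (np.pi * V))
H_u = signal.lti([K], [Lu / V, 1.0])   # K / ((Lu/V) s + 1)

rng = np.random.default_rng(0)
# Discrete samples approximating continuous white noise of two-sided PSD pi,
# so that the filtered output variance approaches sigma_u**2.
noise = rng.standard_normal(t.size) * np.sqrt(np.pi / dt)

_, u_gust, _ = signal.lsim(H_u, U=noise, T=t)
print(u_gust.std())                    # should approach sigma_u for long records
```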
THE EARTH SYSTEM PREDICTION SUITE: Toward a Coordinated U.S. Modeling Capability
Theurich, Gerhard; DeLuca, C.; Campbell, T.; Liu, F.; Saint, K.; Vertenstein, M.; Chen, J.; Oehmke, R.; Doyle, J.; Whitcomb, T.; Wallcraft, A.; Iredell, M.; Black, T.; da Silva, AM; Clune, T.; Ferraro, R.; Li, P.; Kelley, M.; Aleinov, I.; Balaji, V.; Zadeh, N.; Jacob, R.; Kirtman, B.; Giraldo, F.; McCarren, D.; Sandgathe, S.; Peckham, S.; Dunlap, R.
2017-01-01
The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open source terms or to credentialed users. The ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the U.S. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC) Layer, a set of ESMF-based component templates and interoperability conventions. This shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multi-agency development of coupled modeling systems, controlled experimentation and testing, and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NavGEM), HYbrid Coordinate Ocean Model (HYCOM), and Coupled Ocean Atmosphere Mesoscale Prediction System (COAMPS®); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and GEOS-5 atmospheric general circulation model. PMID:29568125
An ontology for component-based models of water resource systems
NASA Astrophysics Data System (ADS)
Elag, Mostafa; Goodall, Jonathan L.
2013-08-01
Component-based modeling is an approach for simulating water resource systems where a model is composed of a set of components, each with a defined modeling objective, interlinked through data exchanges. Component-based modeling frameworks are used within the hydrologic, atmospheric, and earth surface dynamics modeling communities. While these efforts have been advancing, it has become clear that the water resources modeling community in particular, and arguably the larger earth science modeling community as well, faces a challenge of fully and precisely defining the metadata for model components. The lack of a unified framework for model component metadata limits interoperability between modeling communities and the reuse of models across modeling frameworks due to ambiguity about the model and its capabilities. To address this need, we propose an ontology for water resources model components that describes core concepts and relationships using the Web Ontology Language (OWL). The ontology that we present, which is termed the Water Resources Component (WRC) ontology, is meant to serve as a starting point that can be refined over time through engagement by the larger community until a robust knowledge framework for water resource model components is achieved. This paper presents the methodology used to arrive at the WRC ontology, the WRC ontology itself, and examples of how the ontology can aid in component-based water resources modeling by (i) assisting in identifying relevant models, (ii) encouraging proper model coupling, and (iii) facilitating interoperability across earth science modeling frameworks.
Shafizadeh-Moghadam, Hossein; Valavi, Roozbeh; Shahabi, Himan; Chapi, Kamran; Shirzadi, Ataollah
2018-07-01
In this research, eight individual machine learning and statistical models are implemented and compared, and based on their results, seven ensemble models for flood susceptibility assessment are introduced. The individual models included artificial neural networks, classification and regression trees, flexible discriminant analysis, generalized linear model, generalized additive model, boosted regression trees, multivariate adaptive regression splines, and maximum entropy, and the ensemble models were Ensemble Model committee averaging (EMca), Ensemble Model confidence interval Inferior (EMciInf), Ensemble Model confidence interval Superior (EMciSup), Ensemble Model to estimate the coefficient of variation (EMcv), Ensemble Model to estimate the mean (EMmean), Ensemble Model to estimate the median (EMmedian), and Ensemble Model based on weighted mean (EMwmean). The data set covered 201 flood events in the Haraz watershed (Mazandaran province in Iran) and 10,000 randomly selected non-occurrence points. Among the individual models, the highest Area Under the Receiver Operating Characteristic (AUROC) value belonged to boosted regression trees (0.975), and the lowest value was recorded for the generalized linear model (0.642). On the other hand, the proposed EMmedian resulted in the highest accuracy (0.976) among all models. Despite the outstanding performance of some models, variability among the predictions of the individual models was considerable. Therefore, to reduce uncertainty and to create more generalizable, more stable, and less sensitive models, ensemble forecasting approaches, and in particular the EMmedian, are recommended for flood susceptibility assessment. Copyright © 2018 Elsevier Ltd. All rights reserved.
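The EMmedian idea itself is simple to sketch: take the per-location median of the individual models' susceptibility scores and evaluate it with AUROC. The predictions and labels below are synthetic stand-ins, not the Haraz data.

```python
# Median-ensemble sketch with synthetic susceptibility scores.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=500)                     # flood / non-flood labels
preds = np.clip(y[None, :] * 0.6                     # 8 noisy "model" predictions
                + rng.normal(0.2, 0.25, size=(8, 500)), 0, 1)

em_median = np.median(preds, axis=0)                 # EMmedian-style combination
print(f"best single model AUROC: {max(roc_auc_score(y, p) for p in preds):.3f}")
print(f"median ensemble AUROC:   {roc_auc_score(y, em_median):.3f}")
```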
Exploring Several Methods of Groundwater Model Selection
NASA Astrophysics Data System (ADS)
Samani, Saeideh; Ye, Ming; Asghari Moghaddam, Asghar
2017-04-01
Selecting reliable models for simulating groundwater flow and solute transport is essential to groundwater resources management and protection. This work explores several model selection methods for avoiding over-complex and/or over-parameterized groundwater models. We consider six groundwater flow models with different numbers (6, 10, 10, 13, 13 and 15) of model parameters. These models represent alternative geological interpretations, recharge estimates, and boundary conditions at a study site in Iran. The models were developed with Model Muse, and calibrated against observations of hydraulic head using UCODE. Model selection was conducted by using the following four approaches: (1) rank the models using their root mean square error (RMSE) obtained after UCODE-based model calibration, (2) calculate model probability using the GLUE method, (3) evaluate model probability using model selection criteria (AIC, AICc, BIC, and KIC), and (4) evaluate model weights using the fuzzy multi-criteria decision-making (MCDM) approach. MCDM is based on the fuzzy analytical hierarchy process (AHP) and the fuzzy technique for order performance, which identifies the ideal solution by a gradual expansion from the local to the global scale of model parameters. The KIC and MCDM methods are superior to the other methods, as they consider not only the fit between observed and simulated data and the number of parameters, but also uncertainty in model parameters. Considering these factors helps prevent over-complexity and over-parameterization when selecting groundwater flow models. These methods selected as the best model the one with average complexity (10 parameters) and the best parameter estimation (model 3).
Hou, Zeyu; Lu, Wenxi; Xue, Haibo; Lin, Jin
2017-08-01
The surrogate-based simulation-optimization technique is an effective approach for optimizing surfactant-enhanced aquifer remediation (SEAR) strategies for clearing DNAPLs. The performance of the surrogate model, which replaces the simulation model in order to reduce the computational burden, is key to this line of research. However, previous studies have generally relied on a stand-alone surrogate model and have rarely combined methods to sufficiently improve the surrogate's approximation of the simulation model. In this regard, we present set pair analysis (SPA) as a new method for building an ensemble surrogate (ES) model, and conduct a comparative study to select a better ES modeling pattern for SEAR strategy optimization problems. Surrogate models were developed using radial basis function artificial neural network (RBFANN), support vector regression (SVR), and Kriging. One ES model assembles the RBFANN, SVR, and Kriging models using set pair weights according to their performance; the other assembles several Kriging models (Kriging being the best of the three surrogate modeling methods) built with different training sample datasets. Finally, an optimization model, in which the ES model was embedded, was established to obtain the optimal remediation strategy. The results showed that the residuals between the outputs of the best ES model and the simulation model for 100 testing samples were below 1.5%. Using an ES model instead of the simulation model considerably reduced the computation time of the simulation-optimization process while maintaining high accuracy. Copyright © 2017 Elsevier B.V. All rights reserved.
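A hedged sketch of the ensemble-surrogate idea using scikit-learn stand-ins (MLPRegressor for the RBF ANN, SVR, and GaussianProcessRegressor for Kriging); the weights here come from inverse validation error, since the paper's set pair analysis weighting is not reproduced, and the training data are synthetic.

```python
# Weighted ensemble-surrogate sketch with scikit-learn stand-ins.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(120, 3))                  # hypothetical design variables
y = X @ [2.0, -1.0, 0.5] + 0.05 * rng.standard_normal(120)
X_tr, X_val, y_tr, y_val = X[:80], X[80:], y[:80], y[80:]

models = [MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
          SVR(), GaussianProcessRegressor()]
errors = []
for m in models:
    m.fit(X_tr, y_tr)
    errors.append(np.mean((m.predict(X_val) - y_val) ** 2))

w = 1.0 / np.array(errors)
w /= w.sum()                                          # inverse-MSE weights
ensemble = sum(wi * m.predict(X_val) for wi, m in zip(w, models))
print(np.mean((ensemble - y_val) ** 2), errors)       # ensemble vs individual MSEs
```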
Models Archive and ModelWeb at NSSDC
NASA Astrophysics Data System (ADS)
Bilitza, D.; Papitashvili, N.; King, J. H.
2002-05-01
In addition to its large data holdings, NASA's National Space Science Data Center (NSSDC) also maintains an archive of space physics models for public use (ftp://nssdcftp.gsfc.nasa.gov/models/). The more than 60 model entries cover a wide range of parameters from the atmosphere, to the ionosphere, to the magnetosphere, to the heliosphere. The models are primarily empirical models developed by the respective model authors based on long data records from ground and space experiments. An online model catalog (http://nssdc.gsfc.nasa.gov/space/model/) provides information about these and other models and links to the model software if available. We will briefly review the existing model holdings and highlight some of their uses and users. In response to a growing need by the user community, NSSDC began to develop web interfaces for the most frequently requested models. These interfaces enable users to compute and plot model parameters online for the specific conditions that they are interested in. Currently included in the ModelWeb system (http://nssdc.gsfc.nasa.gov/space/model/) are the following models: the International Reference Ionosphere (IRI) model, the Mass Spectrometer Incoherent Scatter (MSIS) E90 model, the International Geomagnetic Reference Field (IGRF), and the AP/AE-8 models for the radiation belt electrons and protons. User accesses to both systems have been steadily increasing over recent years, with occasional spikes prior to large scientific meetings. The current monthly rate is between 5,000 and 10,000 accesses for either system; in February 2002, 13,872 accesses were recorded to the ModelWeb and 7,092 accesses to the models archive.
NASA Astrophysics Data System (ADS)
Knoben, Wouter; Woods, Ross; Freer, Jim
2016-04-01
Conceptual hydrologic models consist of a particular arrangement of stores, fluxes and transformation functions representing spatial and temporal dynamics, depending on the modeller's choices and intended use. They have the advantages of being computationally efficient, being relatively easy model structures to reconfigure and having relatively low input data demands. This makes them well-suited for large-scale and large-sample hydrology, where appropriately representing the dominant hydrologic functions of a catchment is a main concern. Given these requirements, the number of parameters in the model cannot be too high, to avoid equifinality and identifiability issues. This limits the number and level of complexity of dominant hydrologic processes the model can represent. Specific purposes and places thus require a specific model, and this has led to an abundance of conceptual hydrologic models. No structured overview of these models exists and there is no clear method to select appropriate model structures for different catchments. This study is a first step towards creating an overview of the elements that make up conceptual models, which may later assist a modeller in finding an appropriate model structure for a given catchment. To this end, this study brings together over 30 past and present conceptual models. The reviewed model structures are simply different configurations of three basic model elements (stores, fluxes and transformation functions), depending on the hydrologic processes the models are intended to represent. Differences also exist in the inner workings of the stores, fluxes and transformations, i.e. the mathematical formulations that describe each model element's intended behaviour. We investigate the hypothesis that different model structures can produce similar behavioural simulations. This can clarify the overview of model elements by grouping elements which are similar, which can improve model structure selection.
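The store/flux/transformation vocabulary used above can be illustrated with the smallest possible conceptual model: a single linear-reservoir store with precipitation and evaporation fluxes. The forcing values and storage coefficient k below are hypothetical.

```python
# Minimal conceptual model sketch: one store, two input fluxes, one transformation.
def linear_reservoir(precip, evap, k=0.1, s0=10.0):
    s, flows = s0, []
    for p, e in zip(precip, evap):
        q = k * s                       # transformation: storage -> discharge
        s = max(s + p - e - q, 0.0)     # store updated by the fluxes
        flows.append(q)
    return flows

print(linear_reservoir(precip=[5, 0, 12, 3, 0], evap=[1, 1, 2, 1, 1]))
```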
Brewer, Shannon K.; Worthington, Thomas; Mollenhauer, Robert; Stewart, David; McManamay, Ryan; Guertault, Lucie; Moore, Desiree
2018-01-01
Ecohydrology combines empiricism, data analytics, and the integration of models to characterize linkages between ecological and hydrological processes. A challenge for practitioners is determining which models best generalize heterogeneity in hydrological behaviour, including water fluxes across spatial and temporal scales, while integrating environmental and socio-economic activities to determine the best watershed management practices and data requirements. We conducted a literature review and synthesis of hydrologic, hydraulic, water quality, and ecological models designed for solving interdisciplinary questions. We reviewed 1,275 papers and identified 178 models that have the capacity to answer an array of research questions about ecohydrology or ecohydraulics. Of these models, 43 were commonly applied due to their versatility, accessibility, user-friendliness, and excellent user-support. Forty-one of 43 reviewed models were linked to at least 1 other model, especially: Water Quality Analysis Simulation Program (linked to 21 other models), Soil and Water Assessment Tool (19), and Hydrologic Engineering Center's River Analysis System (15). However, model integration was still relatively infrequent. There was substantial variation in model applications, possibly an artefact of the regional focus of research questions, simplicity of use, quality of user-support efforts, or a limited understanding of model applicability. Simply increasing the interoperability of model platforms, transformation of models to user-friendly forms, increasing user-support, defining the reliability and risk associated with model results, and increasing awareness of model applicability may promote increased use of models across subdisciplines. Nonetheless, the current availability of models allows an array of interdisciplinary questions to be addressed, and model choice relates to several factors including research objective, model complexity, ability to link to other models, and interface choice.
Hedenstierna, Sofia; Halldin, Peter
2008-04-15
A finite element (FE) model of the human neck with incorporated continuum or discrete muscles was used to simulate experimental impacts in rear, frontal, and lateral directions. The aim of this study was to determine how a continuum muscle model influences the impact behavior of a FE human neck model compared with a discrete muscle model. Most FE neck models used for impact analysis today include a spring element musculature and are limited to discrete geometries and nodal output results. A solid-element muscle model was thought to improve the behavior of the model by adding properties such as tissue inertia and compressive stiffness and by improving the geometry. It would also predict the strain distribution within the continuum elements. A passive continuum muscle model with nonlinear viscoelastic materials was incorporated into the KTH neck model together with active spring muscles and used in impact simulations. The resulting head and vertebral kinematics was compared with the results from a discrete muscle model as well as volunteer corridors. The muscle strain prediction was compared between the 2 muscle models. The head and vertebral kinematics were within the volunteer corridors for both models when activated. The continuum model behaved more stiffly than the discrete model and needed less active force to fit the experimental results. The largest difference was seen in the rear impact. The strain predicted by the continuum model was lower than for the discrete model. The continuum muscle model stiffened the response of the KTH neck model compared with a discrete model, and the strain prediction in the muscles was improved.
Cao, Renzhi; Wang, Zheng; Cheng, Jianlin
2014-04-15
Protein model quality assessment is an essential component of generating and using protein structural models. During the Tenth Critical Assessment of Techniques for Protein Structure Prediction (CASP10), we developed and tested four automated methods (MULTICOM-REFINE, MULTICOM-CLUSTER, MULTICOM-NOVEL, and MULTICOM-CONSTRUCT) that predicted both local and global quality of protein structural models. MULTICOM-REFINE was a clustering approach that used the average pairwise structural similarity between models to measure the global quality and the average Euclidean distance between a model and several top ranked models to measure the local quality. MULTICOM-CLUSTER and MULTICOM-NOVEL were two new support vector machine-based methods of predicting both the local and global quality of a single protein model. MULTICOM-CONSTRUCT was a new weighted pairwise model comparison (clustering) method that used the weighted average similarity between models in a pool to measure the global model quality. Our experiments showed that the pairwise model assessment methods worked better when a large portion of models in the pool were of good quality, whereas single-model quality assessment methods performed better on some hard targets when only a small portion of models in the pool were of reasonable quality. Since digging out a few good models from a large pool of low-quality models is a major challenge in protein structure prediction, single model quality assessment methods appear to be poised to make important contributions to protein structure modeling. The other interesting finding was that single-model quality assessment scores could be used to weight the models by the consensus pairwise model comparison method to improve its accuracy.
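The clustering (pairwise) scoring idea described above reduces to a simple computation: given a pairwise similarity matrix between candidate structural models (e.g., GDT-TS-like scores in [0, 1]), score each model by its average similarity to all others. The matrix below is hypothetical.

```python
# Pairwise (consensus) model quality scoring sketch.
import numpy as np

sim = np.array([[1.0, 0.8, 0.7, 0.2],
                [0.8, 1.0, 0.75, 0.25],
                [0.7, 0.75, 1.0, 0.3],
                [0.2, 0.25, 0.3, 1.0]])   # model 4 is the structural outlier

n = sim.shape[0]
global_quality = (sim.sum(axis=1) - 1.0) / (n - 1)   # drop self-similarity
print(global_quality.round(3))   # consensus ranks models 1-3 above model 4
```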
Replicating Health Economic Models: Firm Foundations or a House of Cards?
Bermejo, Inigo; Tappenden, Paul; Youn, Ji-Hee
2017-11-01
Health economic evaluation is a framework for the comparative analysis of the incremental health gains and costs associated with competing decision alternatives. The process of developing health economic models is usually complex, financially expensive and time-consuming. For these reasons, model development is sometimes based on previous model-based analyses; this endeavour is usually referred to as model replication. Such model replication activity may involve the comprehensive reproduction of an existing model or 'borrowing' all or part of a previously developed model structure. Generally speaking, the replication of an existing model may require substantially less effort than developing a new de novo model by bypassing, or undertaking in only a perfunctory manner, certain aspects of model development such as the development of a complete conceptual model and/or comprehensive literature searching for model parameters. A further motivation for model replication may be to draw on the credibility or prestige of previous analyses that have been published and/or used to inform decision making. The acceptability and appropriateness of replicating models depends on the decision-making context: there exists a trade-off between the 'savings' afforded by model replication and the potential 'costs' associated with reduced model credibility due to the omission of certain stages of model development. This paper provides an overview of the different levels of, and motivations for, replicating health economic models, and discusses the advantages, disadvantages and caveats associated with this type of modelling activity. Irrespective of whether replicated models should be considered appropriate or not, complete replicability is generally accepted as a desirable property of health economic models, as reflected in critical appraisal checklists and good practice guidelines. To this end, the feasibility of comprehensive model replication is explored empirically across a small number of recent case studies. Recommendations are put forward for improving reporting standards to enhance comprehensive model replicability.
Reducing hydrologic model uncertainty in monthly streamflow predictions using multimodel combination
NASA Astrophysics Data System (ADS)
Li, Weihua; Sankarasubramanian, A.
2012-12-01
Model errors are inevitable in any prediction exercise. One approach currently gaining attention for reducing model errors is combining multiple models to develop improved predictions. The rationale behind this approach primarily lies in the premise that optimal weights can be derived for each model so that the resulting multimodel predictions are improved. A new dynamic approach (MM-1) is proposed to combine multiple hydrological models by evaluating their performance/skill contingent on the predictor state. We combine two hydrological models, the "abcd" model and the variable infiltration capacity (VIC) model, to develop multimodel streamflow predictions. To quantify precisely under what conditions the multimodel combination results in improved predictions, we compare the multimodel scheme MM-1 with the optimal model combination scheme (MM-O) by employing them to predict the streamflow generated from a known hydrologic model (the abcd model or the VIC model) with heteroscedastic error variance, as well as from a hydrologic model that exhibits a different structure from that of the candidate models (i.e., the "abcd" model or the VIC model). Results from the study show that streamflow estimated from single models performed better than multimodels under almost no measurement error. However, under increased measurement errors and model structural misspecification, both multimodel schemes (MM-1 and MM-O) consistently performed better than the single model prediction. Overall, MM-1 performs better than MM-O in predicting the monthly flow values as well as in predicting extreme monthly flows. Comparison of the weights obtained from each candidate model reveals that as measurement errors increase, MM-1 assigns weights equally across all the models, whereas MM-O always assigns higher weights to the candidate model that performs best during the calibration period. Applying the multimodel algorithms to predict streamflows over four different sites revealed that MM-1 performs better than all single models and the optimal model combination scheme, MM-O, in predicting the monthly flows as well as the flows during wetter months.
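As a sketch of the optimal-combination (MM-O-style) step only, the weights for two candidate model predictions can be chosen to minimize squared error over a calibration period, constrained to be non-negative and sum to one; MM-1's predictor-state-contingent weighting is not reproduced, and the data below are synthetic.

```python
# Optimal multimodel weights sketch via constrained least squares.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
obs = rng.gamma(2.0, 50.0, size=240)                    # synthetic monthly flows
pred = np.vstack([obs + rng.normal(0, 20, 240),         # candidate model 1: noisy
                  0.8 * obs + rng.normal(0, 10, 240)])  # candidate model 2: biased

def mse(w):
    return np.mean((w @ pred - obs) ** 2)

res = minimize(mse, x0=[0.5, 0.5], bounds=[(0, 1), (0, 1)],
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
print(res.x.round(3), mse(res.x))                        # weights and combined MSE
```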
NASA Astrophysics Data System (ADS)
Oursland, Mark David
This study compared the modeling achievement of students receiving mathematical modeling instruction using the computer microworld, Interactive Physics, and students receiving instruction using physical objects. Modeling instruction included activities where students applied (a) the linear model to a variety of situations, (b) the linear model to two-rate situations with a constant rate, and (c) the quadratic model to familiar geometric figures. Both quantitative and qualitative methods were used to analyze achievement differences between students (a) receiving different methods of modeling instruction, (b) with different levels of beginning modeling ability, or (c) with different levels of computer literacy. Student achievement was analyzed quantitatively through a three-factor analysis of variance where modeling instruction, beginning modeling ability, and computer literacy were used as the three independent factors. The SOLO (Structure of the Observed Learning Outcome) assessment framework was used to design written modeling assessment instruments to measure the students' modeling achievement. The same three independent factors were used to collect and analyze the interviews and observations of student behaviors. Both methods of modeling instruction used the data analysis approach to mathematical modeling. The instructional lessons presented problem situations where students were asked to collect data, analyze the data, write a symbolic mathematical equation, and use the equation to solve the problem. The researcher recommends the following practices for modeling instruction based on the conclusions of this study. A variety of activities with a common structure are needed to make explicit the modeling process of applying a standard mathematical model. The modeling process is influenced strongly by prior knowledge of the problem context and previous modeling experiences. The conclusions of this study imply that knowledge of the properties of squares improved the students' ability to model a geometric problem more than instruction in data analysis modeling. The use of computer microworlds such as Interactive Physics in conjunction with cooperative groups is a viable method of modeling instruction.
A physical data model for fields and agents
NASA Astrophysics Data System (ADS)
de Jong, Kor; de Bakker, Merijn; Karssenberg, Derek
2016-04-01
Two approaches exist in simulation modeling: agent-based and field-based modeling. In agent-based (or individual-based) simulation modeling, the entities representing the system's state are represented by objects, which are bounded in space and time. Individual objects, like an animal, a house, or a more abstract entity like a country's economy, have properties representing their state. In an agent-based model this state is manipulated. In field-based modeling, the entities representing the system's state are represented by fields. Fields capture the state of a continuous property within a spatial extent, examples of which are elevation, atmospheric pressure, and water flow velocity. With respect to the technology used to create these models, the domains of agent-based and field-based modeling have often been separate worlds. In environmental modeling, widely used logical data models include feature data models for point, line and polygon objects, and the raster data model for fields. Simulation models are often either agent-based or field-based, even though the modeled system might contain both entities that are better represented by individuals and entities that are better represented by fields. We think that the reason for this dichotomy in kinds of models might be that the traditional object and field data models underlying those models are relatively low level. We have developed a higher level conceptual data model for representing both non-spatial and spatial objects, and spatial fields (De Bakker et al. 2016). Based on this conceptual data model we designed a logical and physical data model for representing many kinds of data, including the kinds used in earth system modeling (e.g. hydrological and ecological models). The goal of this work is to be able to create high level code and tools for the creation of models in which entities are representable by both objects and fields. Our conceptual data model is capable of representing the traditional feature data models and the raster data model, among many other data models. Our physical data model is capable of storing a first set of kinds of data, like omnipresent scalars, mobile spatio-temporal points and property values, and spatio-temporal rasters. With our poster we will provide an overview of the physical data model expressed in HDF5 and show examples of how it can be used to capture both object- and field-based information. References De Bakker, M, K. de Jong, D. Karssenberg. 2016. A conceptual data model and language for fields and agents. European Geosciences Union, EGU General Assembly, 2016, Vienna.
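As a speculative illustration of the side-by-side storage the abstract describes (not the authors' actual physical data model), one field and one set of agents can be written to a single HDF5 file with h5py; all dataset names below are hypothetical.

```python
# Fields-and-agents storage sketch in HDF5.
import numpy as np
import h5py

with h5py.File("model_state.h5", "w") as f:
    # Field: a 100x100 elevation raster for one time step.
    f.create_dataset("fields/elevation", data=np.random.rand(100, 100))
    # Agents: mobile points with one scalar property per agent.
    f.create_dataset("agents/position", data=np.random.rand(25, 2))
    f.create_dataset("agents/body_mass", data=np.random.rand(25))

with h5py.File("model_state.h5", "r") as f:
    print(f["fields/elevation"].shape, f["agents/position"].shape)
```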
Students' Models of Curve Fitting: A Models and Modeling Perspective
ERIC Educational Resources Information Center
Gupta, Shweta
2010-01-01
The models and modeling perspective (MMP) has evolved out of research that began 26 years ago. MMP researchers use Model Eliciting Activities (MEAs) to elicit students' mental models. In this study MMP was used as the conceptual framework to investigate the nature of students' models of curve fitting in a problem-solving environment consisting of…
Modeling Information Accumulation in Psychological Tests Using Item Response Times
ERIC Educational Resources Information Center
Ranger, Jochen; Kuhn, Jörg-Tobias
2015-01-01
In this article, a latent trait model is proposed for the response times in psychological tests. The latent trait model is based on the linear transformation model and subsumes popular models from survival analysis, like the proportional hazards model and the proportional odds model. Core of the model is the assumption that an unspecified monotone…
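For context, the linear transformation model named here has a standard form in the survival-analysis literature (standard textbook notation, not necessarily the exact specification of this article): an unknown monotone increasing transformation h of the response time T is linear in the covariates,

\[ h(T_i) = -\beta^\top x_i + \varepsilon_i , \]

and the distribution chosen for the error term \(\varepsilon_i\) recovers the named special cases: a standard extreme-value distribution yields the proportional hazards model, while a standard logistic distribution yields the proportional odds model.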
Climate and atmospheric modeling studies
NASA Technical Reports Server (NTRS)
1992-01-01
The climate and atmosphere modeling research programs have concentrated on the development of appropriate atmospheric and upper ocean models, and preliminary applications of these models. Principal models are a one-dimensional radiative-convective model, a three-dimensional global model, and an upper ocean model. Principal applications were the study of the impact of CO2, aerosols, and the solar 'constant' on climate.
Models in Science Education: Applications of Models in Learning and Teaching Science
ERIC Educational Resources Information Center
Ornek, Funda
2008-01-01
In this paper, I discuss different types of models in science education and applications of them in learning and teaching science, in particular physics. Based on the literature, I categorize models as conceptual and mental models according to their characteristics. In addition to these models, there is another model called "physics model" by the…
Computer-Aided Modeling and Analysis of Power Processing Systems (CAMAPPS). Phase 1: Users handbook
NASA Technical Reports Server (NTRS)
Kim, S.; Lee, J.; Cho, B. H.; Lee, F. C.
1986-01-01
The EASY5 macro component models developed for the spacecraft power system simulation are described. A brief explanation of how to use the macro components with the EASY5 Standard Components to build a specific system is given through an example. The macro components are ordered according to the following functional groups: converter power stage models, compensator models, current-feedback models, constant frequency control models, load models, solar array models, and shunt regulator models. Major equations, a circuit model, and a program listing are provided for each macro component.
Vector models and generalized SYK models
Peng, Cheng
2017-05-23
Here, we consider the relation between SYK-like models and vector models by studying a toy model where a tensor field is coupled with a vector field. By integrating out the tensor field, the toy model reduces to the Gross-Neveu model in 1 dimension. On the other hand, a certain perturbation can be turned on and the toy model flows to an SYK-like model at low energy. Furthermore, a chaotic-nonchaotic phase transition occurs as the sign of the perturbation is altered. We further study similar models that possess chaos and enhanced reparameterization symmetries.
Validation of the PVSyst Performance Model for the Concentrix CPV Technology
NASA Astrophysics Data System (ADS)
Gerstmaier, Tobias; Gomez, María; Gombert, Andreas; Mermoud, André; Lejeune, Thibault
2011-12-01
The accuracy of the two-stage PVSyst model for the Concentrix CPV Technology is determined by comparing modeled to measured values. For both stages, i) the module model and ii) the power plant model, the underlying approaches are explained and methods for obtaining the model parameters are presented. The performance of both models is quantified using 19 months of outdoor measurements for the module model and 9 months of measurements at four different sites for the power plant model. Results are presented by giving statistical quantities for the model accuracy.
Comparative Protein Structure Modeling Using MODELLER
Webb, Benjamin; Sali, Andrej
2016-01-01
Comparative protein structure modeling predicts the three-dimensional structure of a given protein sequence (target) based primarily on its alignment to one or more proteins of known structure (templates). The prediction process consists of fold assignment, target-template alignment, model building, and model evaluation. This unit describes how to calculate comparative models using the program MODELLER and how to use the ModBase database of such models, and discusses all four steps of comparative modeling, frequently observed errors, and some applications. Modeling lactate dehydrogenase from Trichomonas vaginalis (TvLDH) is described as an example. The download and installation of the MODELLER software is also described. PMID:27322406
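The basic MODELLER workflow described here is driven by a short Python script. The following sketch follows the pattern of MODELLER's documented automodel tutorial for the TvLDH example mentioned in the abstract (the alignment file name and template code are illustrative and may differ in a given installation):

```python
from modeller import *
from modeller.automodel import *

log.verbose()
env = environ()

# Build five comparative models of the target sequence TvLDH
# from a template of known structure (here 1bdm, chain A).
a = automodel(env,
              alnfile='TvLDH-1bdmA.ali',  # target-template alignment
              knowns='1bdmA',             # template(s) of known structure
              sequence='TvLDH')           # target sequence to model
a.starting_model = 1
a.ending_model = 5
a.make()  # fold assignment/alignment done beforehand; this builds the models
```

Each resulting model is written as a PDB file with an associated score, so the best of the five can be carried forward to the model evaluation step.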
A comparative study of turbulence models in predicting hypersonic inlet flows
NASA Technical Reports Server (NTRS)
Kapoor, Kamlesh
1993-01-01
A computational study has been conducted to evaluate the performance of various turbulence models. The NASA P8 inlet, which represents the cruise condition of a typical hypersonic air-breathing vehicle, was selected as a test case for the study; the PARC2D code, which solves the full two-dimensional Reynolds-averaged Navier-Stokes equations, was used. Results are presented for a total of six versions of zero- and two-equation turbulence models. The zero-equation models tested are the Baldwin-Lomax model, the Thomas model, and a combination of the two. The two-equation models tested are low-Reynolds-number models (the Chien model and the Speziale model) and a high-Reynolds-number model (the Launder and Spalding model).
NASA Astrophysics Data System (ADS)
Clark, Martyn P.; Bierkens, Marc F. P.; Samaniego, Luis; Woods, Ross A.; Uijlenhoet, Remko; Bennett, Katrina E.; Pauwels, Valentijn R. N.; Cai, Xitian; Wood, Andrew W.; Peters-Lidard, Christa D.
2017-07-01
The diversity in hydrologic models has historically led to great controversy on the "correct" approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. In this paper, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.
NASA Astrophysics Data System (ADS)
Clark, M. P.; Nijssen, B.; Wood, A.; Mizukami, N.; Newman, A. J.
2017-12-01
The diversity in hydrologic models has historically led to great controversy on the "correct" approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. In this paper, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.
Trapped Radiation Model Uncertainties: Model-Data and Model-Model Comparisons
NASA Technical Reports Server (NTRS)
Armstrong, T. W.; Colborn, B. L.
2000-01-01
The standard AP8 and AE8 models for predicting trapped proton and electron environments have been compared with several sets of flight data to evaluate model uncertainties. Model comparisons are made with flux and dose measurements made on various U.S. low-Earth orbit satellites (APEX, CRRES, DMSP, LDEF, NOAA) and Space Shuttle flights, on Russian satellites (Photon-8, Cosmos-1887, Cosmos-2044), and on the Russian Mir Space Station. This report gives the details of the model-data comparisons; summary results in terms of empirical model uncertainty factors that can be applied for spacecraft design applications are given in a companion report. The results of model-model comparisons are also presented from standard AP8 and AE8 model predictions compared with the European Space Agency versions of AP8 and AE8 and with Russian trapped radiation models.
Analysis of terahertz dielectric properties of pork tissue
NASA Astrophysics Data System (ADS)
Huang, Yuqing; Xie, Qiaoling; Sun, Ping
2017-10-01
Since about 70% of fresh biological tissue consists of water, many scientists use water models to describe the dielectric properties of biological tissues. The classical water dielectric models are the Debye model, the double Debye model, and the Cole-Cole model. This work aims to determine the most suitable model by comparing these three models with experimental data from fresh pork tissue. The parameters of each model are fitted to the experimental data by the least squares method. Comparing the models on the dielectric function, the Cole-Cole model is verified to describe the pork tissue measurements best. The correction factor α of the Cole-Cole model is an important modification for biological tissues, so the Cole-Cole model should be the preferred choice for describing the dielectric properties of biological tissues in the terahertz range.
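A least-squares fit of the Cole-Cole model of the kind described above can be sketched as follows (a minimal illustration using SciPy; the frequency grid, starting values, and the synthetic stand-in for the measured permittivity are all assumptions, not the paper's data):

```python
import numpy as np
from scipy.optimize import least_squares

def cole_cole(omega, eps_inf, d_eps, tau, alpha):
    """Cole-Cole permittivity: eps_inf + d_eps / (1 + (i*omega*tau)^(1-alpha))."""
    return eps_inf + d_eps / (1.0 + (1j * omega * tau) ** (1.0 - alpha))

def residuals(params, omega, eps_meas):
    # Stack real and imaginary parts so the fit matches both.
    diff = cole_cole(omega, *params) - eps_meas
    return np.concatenate([diff.real, diff.imag])

# Angular frequencies over roughly 0.2-1.5 THz (illustrative grid).
omega = 2 * np.pi * np.linspace(0.2e12, 1.5e12, 50)
# Synthetic stand-in for measured permittivity (replace with lab data).
eps_meas = cole_cole(omega, 2.5, 1.7, 0.1e-12, 0.1)

fit = least_squares(residuals, x0=[3.0, 2.0, 0.05e-12, 0.05],
                    args=(omega, eps_meas))
eps_inf, d_eps, tau, alpha = fit.x
```

Setting alpha = 0 reduces the same function to a single Debye term, which makes a side-by-side comparison of the candidate models straightforward.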
Dealing with dissatisfaction in mathematical modelling to integrate QFD and Kano’s model
NASA Astrophysics Data System (ADS)
Retno Sari Dewi, Dian; Debora, Joana; Edy Sianto, Martinus
2017-12-01
The purpose of the study is to implement the integration of Quality Function Deployment (QFD) and Kano's model into a mathematical model. Voice of customer data in QFD was collected using a questionnaire, and the questionnaire was developed based on Kano's model. Operational research methodology was then applied to build the objective function and constraints of the mathematical model. The relationship between the voice of customer and the engineering characteristics was modelled using a linear regression model. The output of the mathematical model is the detailed specification of the engineering characteristics. The objective function of the model maximizes satisfaction while minimizing dissatisfaction; the resulting value of the objective is 62%. The major contribution of this research is implementing the existing mathematical model to integrate QFD and Kano's model in the case study of a shoe cabinet.
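One plausible reading of such a formulation, sketched for illustration only (the coefficients, cost constraint, and even the linear-programming form are assumptions, not the paper's actual model): satisfaction and dissatisfaction are linear in the engineering characteristics via the regression coefficients, and the combined objective is optimized under a budget constraint:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical regression coefficients linking two normalized engineering
# characteristics x to satisfaction gains and dissatisfaction reductions.
beta_s = np.array([0.6, 0.3])   # satisfaction gain per unit of each x
beta_d = np.array([0.2, 0.1])   # dissatisfaction reduction per unit of each x
cost = np.array([3.0, 5.0])     # cost per unit of each characteristic
budget = 6.0

# Maximize (beta_s + beta_d) @ x  ==  minimize the negated objective.
res = linprog(c=-(beta_s + beta_d),
              A_ub=cost.reshape(1, -1), b_ub=[budget],
              bounds=[(0.0, 1.0), (0.0, 1.0)])
print(res.x)  # optimal characteristic levels under the assumed numbers
```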
NASA Astrophysics Data System (ADS)
Plotnitsky, Arkady
2017-06-01
The history of mathematical modeling outside physics has been dominated by the use of classical mathematical models, C-models, primarily those of a probabilistic or statistical nature. More recently, however, quantum mathematical models, Q-models, based on the mathematical formalism of quantum theory, have become more prominent in psychology, economics, and decision science. The use of Q-models in these fields remains controversial, in part because it is not entirely clear whether Q-models are necessary for dealing with the phenomena in question or whether C-models would still suffice. My aim, however, is not to assess the necessity of Q-models in these fields, but instead to reflect on what the possible applicability of Q-models may tell us about the corresponding phenomena there, vis-à-vis quantum phenomena in physics. In order to do so, I shall first discuss the key reasons for the use of Q-models in physics. In particular, I shall examine the fundamental principles that led to the development of quantum mechanics. Then I shall consider a possible role of similar principles in using Q-models outside physics. Psychology, economics, and decision science borrow already available Q-models from quantum theory, rather than derive them from their own internal principles, while quantum mechanics was derived from such principles, because there was no readily available mathematical model to handle quantum phenomena, although the mathematics ultimately used in quantum mechanics did in fact exist at the time. I shall argue, however, that the principle perspective on mathematical modeling outside physics might help us to understand better the role of Q-models in these fields and possibly to envision new models, conceptually analogous to but mathematically different from those of quantum theory, that would be helpful or even necessary there or in physics itself. I shall suggest one possible type of such models, singularized probabilistic (SP) models, some of which are time-dependent (TDSP-models). The necessity of using such models may change the nature of mathematical modeling in science and, thus, the nature of science, as happened in the case of Q-models, which not only led to a revolutionary transformation of physics but also opened new possibilities for scientific thinking and mathematical modeling beyond physics.
Vertically-Integrated Dual-Continuum Models for CO2 Injection in Fractured Aquifers
NASA Astrophysics Data System (ADS)
Tao, Y.; Guo, B.; Bandilla, K.; Celia, M. A.
2017-12-01
Injection of CO2 into a saline aquifer leads to a two-phase flow system, with supercritical CO2 and brine being the two fluid phases. Various modeling approaches, including fully three-dimensional (3D) models and vertical-equilibrium (VE) models, have been used to study the system. Almost all of that work has focused on unfractured formations. 3D models solve the governing equations in three dimensions and are applicable to generic geological formations. VE models assume rapid and complete buoyant segregation of the two fluid phases, resulting in vertical pressure equilibrium and allowing integration of the governing equations in the vertical dimension. This reduction in dimensionality makes VE models computationally more efficient, but the associated assumptions restrict the applicability of VE models to formations with moderate to high permeability. In this presentation, we extend the VE and 3D models to CO2 injection in fractured aquifers. This is done in the context of dual-continuum modeling, where the fractured formation is modeled as an overlap of two continuous domains, one representing the fractures and the other representing the rock matrix. Both domains are treated as porous media continua and can be modeled by either a VE or a 3D formulation. The transfer of fluid mass between rock matrix and fractures is represented by a mass transfer function connecting the two domains. We have developed a computational model that combines the VE and 3D models, where we use the VE model in the fractures, which typically have high permeability, and the 3D model in the less permeable rock matrix. A new mass transfer function is derived, which couples the VE and 3D models. The coupled VE-3D model can simulate CO2 injection and migration in fractured aquifers. Results from this model compare well with a full-3D model in which both the fractures and the rock matrix are modeled in 3D, with the hybrid VE-3D model having significantly reduced computational cost. In addition to the VE-3D model, we explore simplifications of the rock matrix domain by using sugar-cube and matchstick conceptualizations and develop VE-dual-porosity and VE-matchstick models. These vertically-integrated dual-permeability and dual-porosity models provide a range of computationally efficient tools to model CO2 storage in fractured saline aquifers.
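For reference, the classical dual-continuum transfer term (the standard Warren-Root form; the new transfer function derived in this work couples VE and 3D formulations and is not reproduced here) makes the matrix-fracture exchange proportional to the pressure difference between the two continua:

\[ q_{mf} = \sigma \, \frac{k_m}{\mu} \left( p_m - p_f \right), \]

where \(q_{mf}\) is the volumetric exchange rate per unit bulk volume, \(\sigma\) is a geometric shape factor set by the fracture spacing (sugar-cube and matchstick idealizations give different \(\sigma\)), \(k_m\) is the matrix permeability, \(\mu\) the fluid viscosity, and \(p_m\), \(p_f\) the matrix and fracture pressures.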
DOE Office of Scientific and Technical Information (OSTI.GOV)
C. Harrington
2004-10-25
The purpose of this model report is to provide documentation of the conceptual and mathematical model (Ashplume) for atmospheric dispersal and subsequent deposition of ash on the land surface from a potential volcanic eruption at Yucca Mountain, Nevada. This report also documents the ash (tephra) redistribution conceptual model. These aspects of volcanism-related dose calculation are described in the context of the entire igneous disruptive events conceptual model in ''Characterize Framework for Igneous Activity'' (BSC 2004 [DIRS 169989], Section 6.1.1). The Ashplume conceptual model accounts for incorporation and entrainment of waste fuel particles associated with a hypothetical volcanic eruption through the Yucca Mountain repository and downwind transport of contaminated tephra. The Ashplume mathematical model describes the conceptual model in mathematical terms to allow for prediction of radioactive waste/ash deposition on the ground surface given that the hypothetical eruptive event occurs. This model report also describes the conceptual model for tephra redistribution from a basaltic cinder cone. Sensitivity analyses and model validation activities for the ash dispersal and redistribution models are also presented. Analyses documented in this model report update the previous documentation of the Ashplume mathematical model and its application to the Total System Performance Assessment (TSPA) for the License Application (TSPA-LA) igneous scenarios. This model report also documents the redistribution model product outputs based on analyses to support the conceptual model. In this report, ''Ashplume'' is used when referring to the atmospheric dispersal model and ''ASHPLUME'' is used when referencing the code of that model. Two analysis and model reports provide direct inputs to this model report, namely ''Characterize Eruptive Processes at Yucca Mountain, Nevada'' and ''Number of Waste Packages Hit by Igneous Intrusion''. This model report provides direct inputs to the TSPA, which uses the ASHPLUME software described and used in this model report. Thus, ASHPLUME software inputs are inputs to this model report for ASHPLUME runs in this model report. However, ASHPLUME software inputs are outputs of this model report for ASHPLUME runs by TSPA.
Predicting motor vehicle collisions using Bayesian neural network models: an empirical analysis.
Xie, Yuanchang; Lord, Dominique; Zhang, Yunlong
2007-09-01
Statistical models have frequently been used in highway safety studies. They can be utilized for various purposes, including establishing relationships between variables, screening covariates and predicting values. Generalized linear models (GLM) and hierarchical Bayes models (HBM) have been the most common types of model favored by transportation safety analysts. Over the last few years, researchers have proposed the back-propagation neural network (BPNN) model for modeling the phenomenon under study. Compared to GLMs and HBMs, BPNNs have received much less attention in highway safety modeling. The reasons are attributed to the complexity for estimating this kind of model as well as the problem related to "over-fitting" the data. To circumvent the latter problem, some statisticians have proposed the use of Bayesian neural network (BNN) models. These models have been shown to perform better than BPNN models while at the same time reducing the difficulty associated with over-fitting the data. The objective of this study is to evaluate the application of BNN models for predicting motor vehicle crashes. To accomplish this objective, a series of models was estimated using data collected on rural frontage roads in Texas. Three types of models were compared: BPNN, BNN and the negative binomial (NB) regression models. The results of this study show that in general both types of neural network models perform better than the NB regression model in terms of data prediction. Although the BPNN model can occasionally provide better or approximately equivalent prediction performance compared to the BNN model, in most cases its prediction performance is worse than the BNN model. In addition, the data fitting performance of the BPNN model is consistently worse than the BNN model, which suggests that the BNN model has better generalization abilities than the BPNN model and can effectively alleviate the over-fitting problem without significantly compromising the nonlinear approximation ability. The results also show that BNNs could be used for other useful analyses in highway safety, including the development of accident modification factors and for improving the prediction capabilities for evaluating different highway design alternatives.
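As a point of reference for the baseline model in this comparison, a negative binomial crash-frequency model of the kind described can be fitted in a few lines (a minimal sketch using statsmodels; the covariates, coefficients, and simulated counts are illustrative stand-ins, not the Texas frontage-road data):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
aadt = rng.uniform(500, 5000, n)        # traffic volume (illustrative)
length_km = rng.uniform(0.5, 5.0, n)    # segment length (illustrative)
mu = np.exp(-6.0 + 0.8 * np.log(aadt) + 0.5 * np.log(length_km))
crashes = rng.poisson(mu)               # stand-in for observed crash counts

X = sm.add_constant(np.column_stack([np.log(aadt), np.log(length_km)]))
nb = sm.GLM(crashes, X, family=sm.families.NegativeBinomial(alpha=1.0))
print(nb.fit().summary())
```

The neural network alternatives (BPNN, BNN) replace this fixed log-linear mean function with a learned nonlinear one, which is where the over-fitting concern discussed above enters.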
Understanding seasonal variability of uncertainty in hydrological prediction
NASA Astrophysics Data System (ADS)
Li, M.; Wang, Q. J.
2012-04-01
Understanding uncertainty in hydrological prediction can be highly valuable for improving the reliability of streamflow prediction. In this study, a monthly water balance model, WAPABA, is combined with error models in a Bayesian joint probability framework to investigate the seasonal dependency of the prediction error structure. A seasonally invariant error model, analogous to traditional time series analysis, uses constant parameters for the model error and accounts for no seasonal variation. In contrast, a seasonally variant error model uses a different set of parameters for bias, variance, and autocorrelation for each individual calendar month. Potential connections among model parameters from similar months are not considered within the seasonally variant model, which could result in over-fitting and over-parameterization. A hierarchical error model further applies distributional restrictions on the model parameters within a Bayesian hierarchical framework. An iterative algorithm is implemented to expedite the maximum a posteriori (MAP) estimation of the hierarchical error model. The three error models are applied to forecasting streamflow at a catchment in southeast Australia in a cross-validation analysis. This study also presents a number of statistical measures and graphical tools to compare the predictive skills of the different error models. From probability integral transform histograms and other diagnostic graphs, the hierarchical error model conforms better to reliability than the seasonally invariant error model. The hierarchical error model also generally provides the most accurate mean prediction in terms of the Nash-Sutcliffe model efficiency coefficient and the best probabilistic prediction in terms of the continuous ranked probability score (CRPS). The parameters of the seasonally variant error model are very sensitive to each cross-validation, while the hierarchical error model produces much more robust and reliable parameters. Furthermore, the results of the hierarchical error model show that most model parameters are not seasonally variant, with the exception of the error bias. The seasonally variant error model is likely to use more parameters than necessary to maximize the posterior likelihood. This flexibility and robustness indicates that the hierarchical error model has great potential for future streamflow predictions.
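A minimal sketch of the hierarchical idea, in assumed notation (not necessarily the exact specification used in the study): the prediction error in calendar month m carries its own bias, variance, and autocorrelation parameters, but the monthly parameters are shrunk toward common values by a hyper-distribution, e.g.

\[ e_t = \mu_{m(t)} + \sigma_{m(t)} \, \eta_t, \qquad \eta_t = \rho_{m(t)} \, \eta_{t-1} + \sqrt{1 - \rho_{m(t)}^2} \, \varepsilon_t, \qquad \mu_m \sim N(\mu_0, \tau^2), \]

with \(\varepsilon_t\) standard white noise. Letting \(\tau \to 0\) collapses the hierarchy to the seasonally invariant model, while \(\tau \to \infty\) recovers the fully seasonally variant model, which is why the hierarchical model sits between the two in effective number of parameters.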
Huang, Ming Xia; Wang, Jing; Tang, Jian Zhao; Yu, Qiang; Zhang, Jun; Xue, Qing Yu; Chang, Qing; Tan, Mei Xiu
2016-11-18
The suitability of four popular empirical and semi-empirical stomatal conductance models (the Jarvis, Ball-Berry, Leuning, and Medlyn models) was evaluated based on parallel observations of leaf stomatal conductance, leaf net photosynthetic rate, and meteorological factors during the vigorous growing period of potato and oil sunflower at Wuchuan experimental station in the agro-pastoral ecotone of North China. A significant linear relationship between leaf stomatal conductance and leaf net photosynthetic rate was found for potato, whereas the linear relationship was weaker for oil sunflower. The model evaluation showed that the Ball-Berry model performed best in simulating leaf stomatal conductance of potato, followed by the Leuning and Medlyn models, while the Jarvis model ranked last. The root-mean-square error (RMSE) was 0.0331, 0.0371, 0.0456, and 0.0794 mol·m⁻²·s⁻¹, the normalized root-mean-square error (NRMSE) was 26.8%, 30.0%, 36.9%, and 64.3%, and R² was 0.96, 0.61, 0.91, and 0.88 between simulated and observed leaf stomatal conductance of potato for the Ball-Berry, Leuning, Medlyn, and Jarvis models, respectively. For leaf stomatal conductance of oil sunflower, the Jarvis model performed slightly better than the Leuning, Ball-Berry, and Medlyn models: RMSE was 0.2221, 0.2534, 0.2547, and 0.2758 mol·m⁻²·s⁻¹, NRMSE was 40.3%, 46.0%, 46.2%, and 50.1%, and R² was 0.38, 0.22, 0.23, and 0.20 for the Jarvis, Leuning, Ball-Berry, and Medlyn models, respectively. A path analysis was conducted to identify the effects of specific meteorological factors on leaf stomatal conductance. The diurnal variation of leaf stomatal conductance was principally affected by the vapour pressure saturation deficit for both potato and oil sunflower. The model evaluation suggests that the stomatal conductance models for oil sunflower need to be improved in further research.
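For context, two of the evaluated models are commonly written as follows (standard textbook forms, not taken from this paper). The Ball-Berry model relates stomatal conductance to net photosynthesis and leaf-surface humidity,

\[ g_s = g_0 + a_1 \, \frac{A_n \, h_s}{C_s}, \]

while the Medlyn model uses the vapour pressure deficit D,

\[ g_s = g_0 + 1.6 \left( 1 + \frac{g_1}{\sqrt{D}} \right) \frac{A_n}{C_a}, \]

where \(A_n\) is the net photosynthetic rate, \(h_s\) and \(C_s\) the relative humidity and CO₂ concentration at the leaf surface, \(C_a\) the atmospheric CO₂ concentration, and \(g_0\), \(a_1\), \(g_1\) fitted parameters. The Leuning model replaces \(h_s\) by a hyperbolic function of D, and the Jarvis model is instead a multiplicative product of empirical responses to individual environmental factors.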
Evaluation of chiller modeling approaches and their usability for fault detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sreedharan, Priya
Selecting the model is an important and essential step in model-based fault detection and diagnosis (FDD). Several factors must be considered in model evaluation, including accuracy, training data requirements, calibration effort, generality, and computational requirements. All modeling approaches fall somewhere between pure first-principles models and empirical models. The objective of this study was to evaluate different modeling approaches for their applicability to model-based FDD of vapor compression air conditioning units, commonly known as chillers. Three different models were studied: two are based on first principles and the third is empirical in nature. The first-principles models are the Gordon and Ng Universal Chiller model (2nd generation) and a modified version of the ASHRAE Primary Toolkit model. The DOE-2 chiller model as implemented in CoolTools™ was selected for the empirical category. The models were compared in terms of their ability to reproduce the observed performance of an older chiller operating in a commercial building and a newer chiller in a laboratory. The DOE-2 and Gordon-Ng models were calibrated by linear regression, while a direct-search method was used to calibrate the Toolkit model. The ''CoolTools'' package contains a library of calibrated DOE-2 curves for a variety of different chillers, and was used to calibrate the building chiller to the DOE-2 model. All three models displayed similar levels of accuracy. Of the first-principles models, the Gordon-Ng model has the advantage of being linear in the parameters, which allows more robust parameter estimation methods to be used and facilitates estimation of the uncertainty in the parameter values. The ASHRAE Toolkit model may have advantages when refrigerant temperature measurements are also available. The DOE-2 model can be expected to have advantages when very limited data are available to calibrate the model, as long as one of the previously identified models in the CoolTools library matches the performance of the chiller in question.
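The practical payoff of a model that is linear in its parameters, as noted for the Gordon-Ng model, is that calibration reduces to ordinary least squares with a closed-form parameter covariance. A generic illustration (the regressors and numbers are made up, not the Gordon-Ng functional form):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
load = rng.uniform(0.3, 1.0, n)         # part-load ratio (illustrative)
t_cond = rng.uniform(295.0, 310.0, n)   # condenser temperature [K] (illustrative)
X = np.column_stack([np.ones(n), load, t_cond])
y = X @ np.array([0.2, 0.5, 0.01]) + rng.normal(0, 0.01, n)  # stand-in data

theta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ theta
sigma2 = resid @ resid / (n - X.shape[1])
param_cov = sigma2 * np.linalg.inv(X.T @ X)  # parameter uncertainty estimate
```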
PyMT: A Python package for model-coupling in the Earth sciences
NASA Astrophysics Data System (ADS)
Hutton, E.
2016-12-01
The current landscape of Earth-system models is not only broad in scientific scope, but also broad in type. On the one hand, the large variety of models is exciting, as it provides fertile ground for extending or linking models together in novel ways to answer new scientific questions. However, the heterogeneity in model type acts to inhibit model coupling, model development, or even model use. Existing models are written in a variety of programming languages, operate on different grids, use their own file formats (both for input and output), have different user interfaces, have their own time steps, etc. Each of these factors becomes an obstruction to scientists wanting to couple, extend, or simply run existing models. For scientists whose main focus may not be computer science these barriers become even larger and become significant logistical hurdles. And this is all before the scientific difficulties of coupling or running models are addressed. The CSDMS Python Modeling Toolkit (PyMT) was developed to help non-computer scientists deal with these sorts of modeling logistics. PyMT is the fundamental package the Community Surface Dynamics Modeling System uses for the coupling of models that expose the Basic Model Interface (BMI). It contains:
- tools necessary for coupling models of disparate time and space scales (including grid mappers);
- time-steppers that coordinate the sequencing of coupled models;
- exchange of data between BMI-enabled models;
- wrappers that automatically load BMI-enabled models into the PyMT framework;
- utilities that support open-source interfaces (UGRID, SGRID, CSDMS Standard Names, etc.);
- a collection of community-submitted models, written in a variety of programming languages, from a variety of process domains, all usable from within the Python programming language;
- a plug-in framework for adding additional BMI-enabled models to the framework.
In this presentation we introduce the basics of PyMT and provide an example of coupling models of different domains and grid types.
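A typical PyMT session follows the quick-start pattern in the CSDMS documentation; the sketch below assumes that pattern (the component and the CSDMS Standard Name used here are illustrative choices, and the exact set of available components depends on the installation):

```python
from pymt.models import Hydrotrend  # one of the community-submitted components

model = Hydrotrend()
config_file, config_dir = model.setup()     # generate default input files
model.initialize(config_file, config_dir)   # BMI initialize
for _ in range(10):
    model.update()                          # advance one model time step
discharge = model.get_value("channel_exit_water__volume_flow_rate")
model.finalize()
```

Because every component exposes the same BMI calls (initialize, update, get_value, finalize), a second model can be stepped in the same loop and fed values from the first, which is exactly the coupling workflow PyMT is designed to coordinate.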
NASA Astrophysics Data System (ADS)
Santos, Léonard; Thirel, Guillaume; Perrin, Charles
2017-04-01
Errors made by hydrological models may come from a problem in parameter estimation, uncertainty on observed measurements, numerical problems and from the model conceptualization that simplifies the reality. Here we focus on this last issue of hydrological modeling. One of the solutions to reduce structural uncertainty is to use a multimodel method, taking advantage of the great number and the variability of existing hydrological models. In particular, because different models are not similarly good in all situations, using multimodel approaches can improve the robustness of modeled outputs. Traditionally, in hydrology, multimodel methods are based on the output of the model (the simulated flow series). The aim of this poster is to introduce a different approach based on the internal variables of the models. The method is inspired by the SUper MOdel (SUMO, van den Berge et al., 2011) developed for climatology. The idea of the SUMO method is to correct the internal variables of a model taking into account the values of the internal variables of (an)other model(s). This correction is made bilaterally between the different models. The ensemble of the different models constitutes a super model in which all the models exchange information on their internal variables with each other at each time step. Due to this continuity in the exchanges, this multimodel algorithm is more dynamic than traditional multimodel methods. The method will be first tested using two GR4J models (in a state-space representation) with different parameterizations. The results will be presented and compared to traditional multimodel methods that will serve as benchmarks. In the future, other rainfall-runoff models will be used in the super model. References van den Berge, L. A., Selten, F. M., Wiegerinck, W., and Duane, G. S. (2011). A multi-model ensemble method that combines imperfect models through learning. Earth System Dynamics, 2(1) :161-177.
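A minimal sketch of the connected-model idea in code (the nudging form and coupling weights are illustrative; in SUMO-style approaches the coupling coefficients are learned rather than fixed, and here two toy scalar models stand in for the two GR4J parameterizations):

```python
import numpy as np

def supermodel_step(states, step_fns, coupling):
    """Advance all models one step, then nudge each internal state
    toward the states of the other models (bilateral exchange)."""
    new_states = [f(x) for f, x in zip(step_fns, states)]
    nudged = []
    for i, x in enumerate(new_states):
        correction = sum(coupling[i][j] * (new_states[j] - x)
                         for j in range(len(new_states)) if j != i)
        nudged.append(x + correction)
    return nudged

# Two toy "models" of the same store with different parameterizations.
f1 = lambda x: x + 0.1 * (-0.5 * x)   # slow-draining store
f2 = lambda x: x + 0.1 * (-0.8 * x)   # fast-draining store
states = [np.array([1.0]), np.array([1.0])]
C = [[0.0, 0.2], [0.2, 0.0]]          # bilateral coupling strengths
for _ in range(50):
    states = supermodel_step(states, [f1, f2], C)
```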
Downscaling GISS ModelE Boreal Summer Climate over Africa
NASA Technical Reports Server (NTRS)
Druyan, Leonard M.; Fulakeza, Matthew
2015-01-01
The study examines the perceived added value of downscaling atmosphere-ocean global climate model simulations over Africa and adjacent oceans by a nested regional climate model. NASA/Goddard Institute for Space Studies (GISS) coupled ModelE simulations for June-September 1998-2002 are used to form lateral boundary conditions for synchronous simulations by the GISS RM3 regional climate model. The ModelE computational grid spacing is 2deg latitude by 2.5deg longitude and the RM3 grid spacing is 0.44deg. ModelE precipitation climatology for June-September 1998-2002 is shown to be a good proxy for 30-year means, so results based on the 5-year sample are presumed to be generally representative. Comparison with observational evidence shows several discrepancies in the ModelE configuration of the boreal summer inter-tropical convergence zone (ITCZ). One glaring shortcoming is that ModelE simulations do not advance the West African rain band northward during the summer to represent monsoon precipitation onset over the Sahel. Results for 1998-2002 show that onset simulation is an important added value produced by downscaling with RM3. ModelE computed sea-surface temperatures (SST) in the eastern South Atlantic Ocean are some 4 K warmer than reanalysis, contributing to large positive biases in overlying surface air temperatures (Tsfc). ModelE Tsfc are also too warm over most of Africa. RM3 downscaling somewhat mitigates the magnitude of Tsfc biases over the African continent; it eliminates the ModelE double ITCZ over the Atlantic and produces more realistic orographic precipitation maxima. Parallel ModelE and RM3 simulations with observed SST forcing (in place of the predicted ocean) lower Tsfc errors but have mixed impacts on circulation and precipitation biases. Downscaling improvements of the meridional movement of the rain band over West Africa and the configuration of orographic precipitation maxima are realized irrespective of the SST biases.
A tool for multi-scale modelling of the renal nephron
Nickerson, David P.; Terkildsen, Jonna R.; Hamilton, Kirk L.; Hunter, Peter J.
2011-01-01
We present the development of a tool, which provides users with the ability to visualize and interact with a comprehensive description of a multi-scale model of the renal nephron. A one-dimensional anatomical model of the nephron has been created and is used for visualization and modelling of tubule transport in various nephron anatomical segments. Mathematical models of nephron segments are embedded in the one-dimensional model. At the cellular level, these segment models use models encoded in CellML to describe cellular and subcellular transport kinetics. A web-based presentation environment has been developed that allows the user to visualize and navigate through the multi-scale nephron model, including simulation results, at the different spatial scales encompassed by the model description. The Zinc extension to Firefox is used to provide an interactive three-dimensional view of the tubule model and the native Firefox rendering of scalable vector graphics is used to present schematic diagrams for cellular and subcellular scale models. The model viewer is embedded in a web page that dynamically presents content based on user input. For example, when viewing the whole nephron model, the user might be presented with information on the various embedded segment models as they select them in the three-dimensional model view. Alternatively, the user chooses to focus the model viewer on a cellular model located in a particular nephron segment in order to view the various membrane transport proteins. Selecting a specific protein may then present the user with a description of the mathematical model governing the behaviour of that protein—including the mathematical model itself and various simulation experiments used to validate the model against the literature. PMID:22670210
An online model composition tool for system biology models
2013-01-01
Background: There are multiple representation formats for Systems Biology computational models, and the Systems Biology Markup Language (SBML) is one of the most widely used. SBML is used to capture, store, and distribute computational models by Systems Biology data sources (e.g., the BioModels Database) and researchers. Therefore, there is a need for all-in-one web-based solutions that support advanced SBML functionalities such as uploading, editing, composing, visualizing, simulating, querying, and browsing computational models. Results: We present the design and implementation of the Model Composition Tool (Interface) within the PathCase-SB (PathCase Systems Biology) web portal. The tool helps users compose systems biology models to facilitate the complex process of merging systems biology models. We also present three tools that support the model composition tool, namely, (1) the Model Simulation Interface, which generates a visual plot of the simulation according to the user's input, (2) the iModel Tool, a platform for users to upload their own models to compose, and (3) the SimCom Tool, which provides a side-by-side comparison of models being composed in the same pathway. Finally, we provide a web site that hosts BioModels Database models and a separate web site that hosts SBML Test Suite models. Conclusions: The model composition tool (and the other three tools) can be used with little or no knowledge of the SBML document structure. For this reason, students or anyone who wants to learn about systems biology will benefit from the described functionalities. SBML Test Suite models will be a nice starting point for beginners, and, for more advanced purposes, users will be able to access and employ models of the BioModels Database as well. PMID:24006914
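Outside the web portal, the same SBML documents can be inspected programmatically. This is not part of PathCase-SB itself; it is just the standard libSBML pattern for loading a model such as one downloaded from the BioModels Database (the file name is illustrative):

```python
import libsbml

document = libsbml.readSBML("model.xml")   # e.g., a BioModels download
if document.getNumErrors() > 0:
    document.printErrors()

model = document.getModel()
print("Species  :", [s.getId() for s in model.getListOfSpecies()])
print("Reactions:", [r.getId() for r in model.getListOfReactions()])
```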
A parsimonious dynamic model for river water quality assessment.
Mannina, Giorgio; Viviani, Gaspare
2010-01-01
Water quality modelling is of crucial importance for the assessment of physical, chemical, and biological changes in water bodies. Mathematical approaches to water modelling have become more prevalent over recent years. Different model types, ranging from detailed physical models to simplified conceptual models, are available. In practice, a possible middle ground between detailed and simplified models is parsimonious models, which represent the simplest approach that fits the application. The appropriate modelling approach depends on the research goal as well as on the data available for correct model application. When data are inadequate, it is necessary to use a simple river water quality model rather than a detailed one. This study presents a parsimonious river water quality model to evaluate the propagation of pollutants in natural rivers. The model is made up of two sub-models: a quantity one and a quality one. The model employs a river schematisation that considers different stretches according to the geometric characteristics and the gradient of the river bed. Each stretch is represented by a conceptual model of a series of linear channels and reservoirs. The channels determine the delay in the pollution wave and the reservoirs cause its dispersion. To assess the river water quality, the model employs four state variables: DO, BOD, NH4, and NO. The model was applied to the Savena River (Italy), the focus of a European-financed project in which quantity and quality data were gathered. A sensitivity analysis of the model output to the model input and parameters was carried out based on the Generalised Likelihood Uncertainty Estimation methodology. The results demonstrate the suitability of such a model as a tool for river water quality management.
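The building blocks named here have simple standard forms (textbook notation, not the paper's exact equations). A linear reservoir with storage S, inflow I, and storage constant k obeys

\[ \frac{dS}{dt} = I(t) - Q(t), \qquad Q(t) = \frac{S(t)}{k}, \]

so a cascade of such reservoirs spreads (disperses) an input pulse, while a linear channel is a pure lag, \(Q_{out}(t) = Q_{in}(t - \tau)\), which delays the pollution wave without deforming it; combining the two reproduces the delay-plus-dispersion behaviour described above.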
The cost of simplifying air travel when modeling disease spread.
Lessler, Justin; Kaufman, James H; Ford, Daniel A; Douglas, Judith V
2009-01-01
Air travel plays a key role in the spread of many pathogens. Modeling the long-distance spread of infectious disease in these cases requires an air travel model. Highly detailed air transportation models can be overdetermined and computationally problematic. We compared the predictions of a simplified air transport model with those of a model of all routes and assessed the impact of differences on models of infectious disease. Using U.S. ticket data from 2007, we compared a simplified "pipe" model, in which individuals flow in and out of the air transport system based on the number of arrivals and departures from a given airport, to a fully saturated model where all routes are modeled individually. We also compared the pipe model to a "gravity" model where the probability of travel is scaled by physical distance; the gravity model did not differ significantly from the pipe model. The pipe model roughly approximated actual air travel, but tended to overestimate the number of trips between small airports and underestimate travel between major east and west coast airports. For most routes, the maximum number of false (or missed) introductions of disease is small (<1 per day), but for a few routes this rate is greatly underestimated by the pipe model. If our interest is in large-scale regional and national effects of disease, the simplified pipe model may be adequate. If we are interested in specific effects of interventions on particular air routes or the time for the disease to reach a particular location, a more complex point-to-point model will be more accurate. For many problems a hybrid model that independently models some frequently traveled routes may be the best choice. Regardless of the model used, the effect of simplifications and sensitivity to errors in parameter estimation should be analyzed.
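A minimal sketch of the pipe idea in code (the redistribution rule is an illustrative reading of the description above — travelers leave in proportion to observed departures and land in proportion to observed arrivals, with no origin-destination structure; all numbers are made up):

```python
import numpy as np

def pipe_travel_step(cases, pop, departures, arrivals):
    """Move infectious travelers through a route-free 'pipe'."""
    leaving = cases * (departures / pop)          # infected travelers departing
    pool = leaving.sum()                          # common mixing pool
    landing = pool * (arrivals / arrivals.sum())  # redistributed by arrivals
    return cases - leaving + landing

cases = np.array([100.0, 10.0, 0.0])              # infections per airport city
pop = np.array([5e6, 2e6, 1e6])
departures = np.array([50000.0, 20000.0, 8000.0]) # daily departures
arrivals = np.array([52000.0, 19000.0, 7000.0])   # daily arrivals
cases = pipe_travel_step(cases, pop, departures, arrivals)
```

A fully saturated model would instead carry an origin-destination matrix T[i, j] of per-route passenger counts, which is exactly the detail the pipe model discards.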
Risk prediction models of breast cancer: a systematic review of model performances.
Anothaisintawee, Thunyarat; Teerawattananon, Yot; Wiratkapun, Chollathip; Kasamesup, Vijj; Thakkinstian, Ammarin
2012-05-01
A growing number of risk prediction models have been developed for estimating the risk of breast cancer in individual women, but the performance of these models is questionable. We therefore conducted a study with the aim of systematically reviewing previous risk prediction models. The results of this review help to identify the most reliable model and indicate the strengths and weaknesses of each model to guide future model development. We searched MEDLINE (PubMed) from 1949 and EMBASE (Ovid) from 1974 until October 2010. Observational studies that constructed models using regression methods were selected. Information about model development and performance was extracted. Twenty-five out of 453 studies were eligible. Of these, 18 developed prediction models and 7 validated existing prediction models. Up to 13 variables were included in the models, and sample sizes for each study ranged from 550 to 2,404,636. Internal validation was performed in four models, while five models had external validation. The Gail and the Rosner and Colditz models were the significant models that were subsequently modified by other scholars. Calibration performance of most models was fair to good (expected/observed ratio: 0.87-1.12), but discriminatory accuracy was poor to fair both in internal validation (concordance statistic: 0.53-0.66) and in external validation (concordance statistic: 0.56-0.63). Most models yielded relatively poor discrimination in both internal and external validation. This poor discriminatory accuracy of existing models might be due to a lack of knowledge about risk factors, heterogeneous subtypes of breast cancer, and different distributions of risk factors across populations. In addition, the concordance statistic itself is insensitive to improvements in discrimination. Therefore, newer methods such as the net reclassification index should be considered to evaluate improvements in the performance of newly developed models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
M. A. Wasiolek
The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), the TSPA-LA. The ERMYN model provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs), the reference biosphere, the human receptor, and assumptions (Section 6.2 and Section 6.3); (3) Building a mathematical model using the biosphere conceptual model and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN model compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN model by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); and (8) Validating the ERMYN model by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7).
Microphysics in the Multi-Scale Modeling Systems with Unified Physics
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Chern, J.; Lang, S.; Matsui, T.; Shen, B.; Zeng, X.; Shi, R.
2011-01-01
In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months, and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km² in three dimensions. Cloud-resolving models now provide statistical information useful for developing more realistic, physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud-resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (the NASA-unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, and land processes, as well as the explicit cloud-radiation and cloud-surface interactive processes, are applied in this multi-scale modeling system. The modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, the microphysics developments of the multi-scale modeling system will be presented. In particular, results from using the multi-scale modeling system to study heavy precipitation processes will be presented.
NASA Astrophysics Data System (ADS)
Nowak, W.; Schöniger, A.; Wöhling, T.; Illman, W. A.
2016-12-01
Model-based decision support requires justifiable models with good predictive capabilities. This, in turn, calls for a fine adjustment between predictive accuracy (small systematic model bias that can be achieved with rather complex models), and predictive precision (small predictive uncertainties that can be achieved with simpler models with fewer parameters). The implied complexity/simplicity trade-off depends on the availability of informative data for calibration. If not available, additional data collection can be planned through optimal experimental design. We present a model justifiability analysis that can compare models of vastly different complexity. It rests on Bayesian model averaging (BMA) to investigate the complexity/performance trade-off dependent on data availability. Then, we disentangle the complexity component from the performance component. We achieve this by replacing actually observed data by realizations of synthetic data predicted by the models. This results in a "model confusion matrix". Based on this matrix, the modeler can identify the maximum model complexity that can be justified by the available (or planned) amount and type of data. As a side product, the matrix quantifies model (dis-)similarity. We apply this analysis to aquifer characterization via hydraulic tomography, comparing four models with a vastly different number of parameters (from a homogeneous model to geostatistical random fields). As a testing scenario, we consider hydraulic tomography data. Using subsets of these data, we determine model justifiability as a function of data set size. The test case shows that geostatistical parameterization requires a substantial amount of hydraulic tomography data to be justified, while a zonation-based model can be justified with more limited data set sizes. The actual model performance (as opposed to model justifiability), however, depends strongly on the quality of prior geological information.
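For reference, the BMA machinery underlying this analysis weights each candidate model by its posterior probability (standard form, assumed notation):

\[ P(M_k \mid D) = \frac{P(D \mid M_k) \, P(M_k)}{\sum_j P(D \mid M_j) \, P(M_j)}, \qquad P(D \mid M_k) = \int P(D \mid \theta_k, M_k) \, P(\theta_k \mid M_k) \, d\theta_k, \]

where the evidence \(P(D \mid M_k)\) integrates the likelihood over the parameter prior and therefore penalizes unjustified complexity automatically. The "model confusion matrix" described above is obtained by evaluating these weights with D replaced by synthetic data generated from each model in turn.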
Green, Colin; Shearer, James; Ritchie, Craig W; Zajicek, John P
2011-01-01
To consider the methods available to model Alzheimer's disease (AD) progression over time, to inform on the structure and development of model-based evaluations, and the future direction of modelling methods in AD. A systematic search of the health care literature was undertaken to identify methods to model disease progression in AD. Modelling methods are presented in a descriptive review. The literature search identified 42 studies presenting methods or applications of methods to model AD progression over time. The review identified 10 general modelling frameworks available to empirically model the progression of AD as part of a model-based evaluation. Seven of these general models are statistical models predicting progression of AD using a measure of cognitive function. The main concerns with the models are on model structure, around the limited characterization of disease progression, and on the use of a limited number of health states to capture events related to disease progression over time. None of the available models has been able to present a comprehensive model of the natural history of AD. Although helpful, there are serious limitations in the methods available to model progression of AD over time. Advances are needed to better model the progression of AD and the effects of the disease on people's lives. Recent evidence supports the need for a multivariable approach to the modelling of AD progression, and indicates that a latent variable analytic approach to characterising AD progression is a promising avenue for advances in the statistical development of modelling methods. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Marín, Laura; Torrejón, Antonio; Oltra, Lorena; Seoane, Montserrat; Hernández-Sampelayo, Paloma; Vera, María Isabel; Casellas, Francesc; Alfaro, Noelia; Lázaro, Pablo; García-Sánchez, Valle
2011-06-01
Nurses play an important role in the multidisciplinary management of inflammatory bowel disease (IBD), but little is known about this role and the associated resources. To improve knowledge of resource availability for health care activities and the different organizational models in managing IBD in Spain. Cross-sectional study with data obtained by questionnaire directed at Spanish Gastroenterology Services (GS). Five GS models were identified according to whether they have: no specific service for IBD management (Model A); IBD outpatient office for physician consultations (Model B); general outpatient office for nurse consultations (Model C); both, Model B and Model C (Model D); and IBD Unit (Model E) when the hospital has a Comprehensive Care Unit for IBD with telephone helpline, computer, including a Model B. Available resources and activities performed were compared according to GS model (chi-square test and test for linear trend). Responses were received from 107 GS: 33 Model A (31%), 38 Model B (36%), 4 Model C (4%), 16 Model D (15%) and 16 Model E (15%). The model in which nurses have the most resources and responsibilities is the Model E. The more complete the organizational model, the more frequent the availability of nursing resources (educational material, databases, office, and specialized software) and responsibilities (management of walk-in appointments, provision of emotional support, health education, follow-up of drug treatment and treatment adherence) (p<0.05). Nurses have more resources and responsibilities the more complete is the organizational model for IBD management. Development of these areas may improve patient outcomes. Copyright © 2011 European Crohn's and Colitis Organisation. Published by Elsevier B.V. All rights reserved.
Template-free modeling by LEE and LEER in CASP11.
Joung, InSuk; Lee, Sun Young; Cheng, Qianyi; Kim, Jong Yun; Joo, Keehyoung; Lee, Sung Jong; Lee, Jooyoung
2016-09-01
For the template-free modeling of human targets of CASP11, we utilized two of our modeling protocols, LEE and LEER. The LEE protocol took CASP11-released server models as the input and used some of them as templates for 3D (three-dimensional) modeling. The template selection procedure was based on the clustering of the server models aided by a community detection method applied to a server-model network. Restraining energy terms generated from the selected templates, together with physical and statistical energy terms, were used to build 3D models. Side-chains of the 3D models were rebuilt using a target-specific consensus side-chain library along with the SCWRL4 rotamer library, which completed the LEE protocol. The first success factor of the LEE protocol was efficient server model screening: the average backbone accuracy of selected server models was similar to that of the top 30% of server models. The second factor was that a proper energy function along with our optimization method guided us to generate models of better quality than the input template models. In 10 out of 24 cases, better backbone structures than the best of the input template structures were generated. LEE models were further refined by performing restrained molecular dynamics simulations to generate LEER models. CASP11 results indicate that LEE models were better than the average template models in terms of both backbone structures and side-chain orientations. LEER models were of improved physical realism and stereochemistry compared to LEE models, and they were comparable to LEE models in backbone accuracy. Proteins 2016; 84(Suppl 1):118-130. © 2015 Wiley Periodicals, Inc.
Bromaghin, Jeffrey F.; McDonald, Trent L.; Amstrup, Steven C.
2013-01-01
Mark-recapture models are extensively used in quantitative population ecology, providing estimates of population vital rates, such as survival, that are difficult to obtain using other methods. Vital rates are commonly modeled as functions of explanatory covariates, adding considerable flexibility to mark-recapture models, but also increasing the subjectivity and complexity of the modeling process. Consequently, model selection and the evaluation of covariate structure remain critical aspects of mark-recapture modeling. The difficulties involved in model selection are compounded in Cormack-Jolly-Seber models because they are composed of separate sub-models for survival and recapture probabilities, which are conceptualized independently even though their parameters are not statistically independent. The construction of models as combinations of sub-models, together with multiple potential covariates, can lead to a large model set, and although desirable, estimating the parameters of all models may not be feasible. Strategies to search a model space and base inference on a subset of all models exist and enjoy widespread use. However, even though the choice of search strategy can be expected to influence parameter estimation, the assessment of covariate importance, and therefore the ecological interpretation of the modeling results, the performance of these strategies has received limited investigation. We present a new strategy for searching the space of a candidate set of Cormack-Jolly-Seber models and explore its performance relative to existing strategies using computer simulation. The new strategy provides an improved assessment of the importance of covariates and covariate combinations used to model survival and recapture probabilities, while requiring only a modest increase in the number of models on which inference is based in comparison to existing techniques.
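To see why crossing sub-models makes Cormack-Jolly-Seber candidate sets explode, a small enumeration helps. The covariate names below are hypothetical, and actually fitting each structure (e.g., with Program MARK or a likelihood routine) is outside this sketch.

```python
# Sketch: how crossing survival (phi) and recapture (p) sub-models inflates a
# Cormack-Jolly-Seber candidate set. Covariates are invented for illustration.
from itertools import combinations

covariates = ["sex", "age", "weather", "effort"]

def all_subsets(items):
    for k in range(len(items) + 1):
        yield from combinations(items, k)

phi_structures = [set(s) for s in all_subsets(covariates)]
p_structures = [set(s) for s in all_subsets(covariates)]

model_set = [(phi, p) for phi in phi_structures for p in p_structures]
print(f"{len(phi_structures)} phi sub-models x {len(p_structures)} p sub-models "
      f"= {len(model_set)} candidate CJS models")
# 16 x 16 = 256 models from only four covariates, before interactions --
# the combinatorial pressure that motivates model-space search strategies.
```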
Clark, Martyn P.; Slater, Andrew G.; Rupp, David E.; Woods, Ross A.; Vrugt, Jasper A.; Gupta, Hoshin V.; Wagener, Thorsten; Hay, Lauren E.
2008-01-01
The problems of identifying the most appropriate model structure for a given problem and quantifying the uncertainty in model structure remain outstanding research challenges for the discipline of hydrology. Progress on these problems requires understanding of the nature of differences between models. This paper presents a methodology to diagnose differences in hydrological model structures: the Framework for Understanding Structural Errors (FUSE). FUSE was used to construct 79 unique model structures by combining components of 4 existing hydrological models. These new models were used to simulate streamflow in two of the basins used in the Model Parameter Estimation Experiment (MOPEX): the Guadalupe River (Texas) and the French Broad River (North Carolina). Results show that the new models produced simulations of streamflow that were at least as good as the simulations produced by the models that participated in the MOPEX experiment. Our initial application of the FUSE method for the Guadalupe River exposed relationships between model structure and model performance, suggesting that the choice of model structure is just as important as the choice of model parameters. However, further work is needed to evaluate model simulations using multiple criteria to diagnose the relative importance of model structural differences in various climate regimes and to assess the amount of independent information in each of the models. This work will be crucial to both identifying the most appropriate model structure for a given problem and quantifying the uncertainty in model structure. To facilitate research on these problems, the FORTRAN-90 source code for FUSE is available upon request from the lead author.
Apostolopoulos, Yorghos; Lemke, Michael K; Barry, Adam E; Lich, Kristen Hassmiller
2018-02-01
Given the complexity of factors contributing to alcohol misuse, appropriate epistemologies and methodologies are needed to understand and intervene meaningfully. We aimed to (1) provide an overview of computational modeling methodologies, with an emphasis on system dynamics modeling; (2) explain how community-based system dynamics modeling can forge new directions in alcohol prevention research; and (3) present a primer on how to build alcohol misuse simulation models using system dynamics modeling, with an emphasis on stakeholder involvement, data sources and model validation. Throughout, we use alcohol misuse among college students in the United States as a heuristic example for demonstrating these methodologies. System dynamics modeling employs a top-down aggregate approach to understanding dynamically complex problems. Its three foundational properties (stocks, flows and feedbacks) capture non-linearity, time-delayed effects and other system characteristics. As a methodological choice, system dynamics modeling is amenable to participatory approaches; in particular, community-based system dynamics modeling has been used to build impactful models for addressing dynamically complex problems. The process of community-based system dynamics modeling consists of numerous stages: (1) creating model boundary charts, behavior-over-time graphs and preliminary system dynamics models using group model-building techniques; (2) model formulation; (3) model calibration; (4) model testing and validation; and (5) model simulation using learning-laboratory techniques. Community-based system dynamics modeling can provide powerful tools for policy and intervention decisions that can ultimately result in sustainable changes in research and action in alcohol misuse prevention. © 2017 Society for the Study of Addiction.
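A minimal stock-and-flow sketch of the kind of system dynamics model described above, applied to the college-drinking example. The stocks, rates, and feedback strength are invented for illustration; a community-based model would elicit them from stakeholders and data.

```python
# Minimal stock-and-flow sketch of a system dynamics model of campus alcohol
# misuse, integrated with simple Euler steps. All values are illustrative.
dt = 0.1          # years per step
years = 10
low_risk, high_risk = 900.0, 100.0   # stocks: student drinkers by risk level

for step in range(int(years / dt)):
    # Feedback: escalation pressure grows with the prevalence of high-risk peers
    prevalence = high_risk / (low_risk + high_risk)
    escalation = 0.4 * prevalence * low_risk   # flow: low -> high risk (assumed rate)
    recovery = 0.2 * high_risk                 # flow: high -> low risk (assumed rate)
    low_risk += dt * (recovery - escalation)
    high_risk += dt * (escalation - recovery)

print(f"after {years} years: low-risk={low_risk:.0f}, high-risk={high_risk:.0f}")
```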
Johnson, Leigh F; Geffen, Nathan
2016-03-01
Different models of sexually transmitted infections (STIs) can yield substantially different conclusions about STI epidemiology, and it is important to understand how and why models differ. Frequency-dependent models make the simplifying assumption that STI incidence is proportional to STI prevalence in the population, whereas network models calculate STI incidence more realistically by classifying individuals according to their partners' STI status. We assessed a deterministic frequency-dependent model approximation to a microsimulation network model of STIs in South Africa. Sexual behavior and demographic parameters were identical in the 2 models. Six STIs were simulated using each model: HIV, herpes, syphilis, gonorrhea, chlamydia, and trichomoniasis. For all 6 STIs, the frequency-dependent model estimated a higher STI prevalence than the network model, with the difference between the 2 models being relatively large for the curable STIs. When the 2 models were fitted to the same STI prevalence data, the best-fitting parameters differed substantially between models, with the frequency-dependent model suggesting more immunity and lower transmission probabilities. The fitted frequency-dependent model estimated that the effects of a hypothetical elimination of concurrent partnerships and a reduction in commercial sex were both smaller than estimated by the fitted network model, whereas the latter model estimated a smaller impact of a reduction in unprotected sex in spousal relationships. The frequency-dependent assumption is problematic when modeling short-term STIs. Frequency-dependent models tend to underestimate the importance of high-risk groups in sustaining STI epidemics, while overestimating the importance of long-term partnerships and low-risk groups.
NASA Astrophysics Data System (ADS)
Ahmadlou, M.; Delavar, M. R.; Tayyebi, A.; Shafizadeh-Moghadam, H.
2015-12-01
Land use change (LUC) models used for modelling urban growth differ in structure and performance. Local models divide the data into separate subsets and fit distinct models on each subset, whereas global models perform modelling using all the available data. Non-parametric models are data driven and usually do not have a fixed model structure, or the structure is unknown before the modelling process, whereas parametric models have a fixed structure before the modelling process and are model driven. Since few studies have compared local non-parametric models with global parametric models, this study compares a local non-parametric model, the multivariate adaptive regression spline (MARS), with a global parametric model, the artificial neural network (ANN), to simulate urbanization in Mumbai, India. Both models determine the relationship between a dependent variable and multiple independent variables. We used the receiver operating characteristic (ROC) to compare the power of both models for simulating urbanization. Landsat images of 1991 (TM) and 2010 (ETM+) were used for modelling the urbanization process. The drivers considered for urbanization in this area were distance to urban areas, urban density, distance to roads, distance to water, distance to forest, distance to railway, distance to central business district, number of agricultural cells in a 7 by 7 neighbourhood, and slope in 1991. The results showed that the area under the ROC curve for MARS and ANN was 94.77% and 95.36%, respectively. Thus, ANN performed slightly better than MARS in simulating urban areas in Mumbai, India.
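A hedged sketch of the ROC-based comparison, with simulated stand-ins for the observed 1991-2010 urbanization map and the MARS/ANN suitability surfaces; only the scoring step is shown.

```python
# Sketch of the ROC comparison used to score two urban-growth models.
# Labels and scores below are synthetic placeholders, not the Mumbai data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
urbanized = rng.integers(0, 2, size=5000)           # 1 = cell became urban

# Pretend model outputs: suitability scores correlated with the truth plus noise
mars_scores = urbanized * 0.60 + rng.uniform(size=5000) * 0.8
ann_scores = urbanized * 0.65 + rng.uniform(size=5000) * 0.8

print(f"MARS AUC: {roc_auc_score(urbanized, mars_scores):.4f}")
print(f"ANN  AUC: {roc_auc_score(urbanized, ann_scores):.4f}")
```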
ModelMuse - A Graphical User Interface for MODFLOW-2005 and PHAST
Winston, Richard B.
2009-01-01
ModelMuse is a graphical user interface (GUI) for the U.S. Geological Survey (USGS) models MODFLOW-2005 and PHAST. This software package provides a GUI for creating the flow and transport input file for PHAST and the input files for MODFLOW-2005. In ModelMuse, the spatial data for the model is independent of the grid, and the temporal data is independent of the stress periods. Being able to input these data independently allows the user to redefine the spatial and temporal discretization at will. This report describes the basic concepts required to work with ModelMuse. These basic concepts include the model grid, data sets, formulas, objects, the method used to assign values to data sets, and model features. The ModelMuse main window has a top, front, and side view of the model that can be used for editing the model, and a 3-D view of the model that can be used to display properties of the model. ModelMuse has tools to generate and edit the model grid. It also has a variety of interpolation methods and geographic functions that can be used to help define the spatial variability of the model. ModelMuse can be used to execute both MODFLOW-2005 and PHAST and can also display the results of MODFLOW-2005 models. An example of using ModelMuse with MODFLOW-2005 is included in this report. Several additional examples are described in the help system for ModelMuse, which can be accessed from the Help menu.
Transient PVT measurements and model predictions for vessel heat transfer. Part II.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Felver, Todd G.; Paradiso, Nicholas Joseph; Winters, William S., Jr.
2010-07-01
Part I of this report focused on the acquisition and presentation of transient PVT data sets that can be used to validate gas transfer models. Here in Part II we focus primarily on describing models and validating these models using the data sets. Our models are intended to describe the high speed transport of compressible gases in arbitrary arrangements of vessels, tubing, valving and flow branches. Our models fall into three categories: (1) network flow models in which flow paths are modeled as one-dimensional flow and vessels are modeled as single control volumes, (2) CFD (Computational Fluid Dynamics) models in which flow in and between vessels is modeled in three dimensions and (3) coupled network/CFD models in which vessels are modeled using CFD and flows between vessels are modeled using a network flow code. In our work we utilized NETFLOW as our network flow code and FUEGO for our CFD code. Since network flow models lack three-dimensional resolution, correlations for heat transfer and tube frictional pressure drop are required to resolve important physics not being captured by the model. Here we describe how vessel heat transfer correlations were improved using the data and present direct model-data comparisons for all tests documented in Part I. Our results show that our network flow models have been substantially improved. The CFD modeling presented here describes the complex nature of vessel heat transfer and for the first time demonstrates that flow and heat transfer in vessels can be modeled directly without the need for correlations.
Comparison of chiller models for use in model-based fault detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sreedharan, Priya; Haves, Philip
Selecting the model is an important and essential step in model-based fault detection and diagnosis (FDD). Factors that are considered in evaluating a model include accuracy, training data requirements, calibration effort, generality, and computational requirements. The objective of this study was to evaluate different modeling approaches for their applicability to model-based FDD of vapor compression chillers. Three different models were studied: the Gordon and Ng Universal Chiller model (2nd generation) and a modified version of the ASHRAE Primary Toolkit model, which are both based on first principles, and the DOE-2 chiller model, as implemented in CoolTools™, which is empirical. The models were compared in terms of their ability to reproduce the observed performance of an older centrifugal chiller operating in a commercial office building and a newer centrifugal chiller in a laboratory. All three models displayed similar levels of accuracy. Of the first-principles models, the Gordon-Ng model has the advantage of being linear in the parameters, which allows more robust parameter estimation methods to be used and facilitates estimation of the uncertainty in the parameter values. The ASHRAE Toolkit Model may have advantages when refrigerant temperature measurements are also available. The DOE-2 model can be expected to have advantages when very limited data are available to calibrate the model, as long as one of the previously identified models in the CoolTools library matches the performance of the chiller in question.
NASA Astrophysics Data System (ADS)
Lute, A. C.; Luce, Charles H.
2017-11-01
The related challenges of predictions in ungauged basins and predictions in ungauged climates point to the need to develop environmental models that are transferable across both space and time. Hydrologic modeling has historically focused on modeling one or only a few basins using highly parameterized conceptual or physically based models. However, model parameters and structures have been shown to change significantly when calibrated to new basins or time periods, suggesting that model complexity and model transferability may be antithetical. Empirical space-for-time models provide a framework within which to assess model transferability and any tradeoff with model complexity. Using 497 SNOTEL sites in the western U.S., we develop space-for-time models of April 1 SWE and snow residence time based on mean winter temperature and cumulative winter precipitation. The transferability of the models to new conditions (in both space and time) is assessed using non-random cross-validation tests, with consideration of the influence of model complexity on transferability. As others have noted, the algorithmic empirical models transfer best when minimal extrapolation in input variables is required. Temporal split-sample validations use pseudoreplicated samples, resulting in the selection of overly complex models, which has implications for the design of hydrologic model validation tests. Finally, we show that low- to moderate-complexity models transfer most successfully to new conditions in space and time, providing empirical confirmation of the parsimony principle.
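A minimal illustration of a non-random ("space-for-time") validation split: train a simple SWE regression on warm sites and test on cold ones, so the test demands extrapolation along the climate axis. Data and coefficients are synthetic; only the regression form (SWE against winter temperature and precipitation) follows the abstract.

```python
# Sketch of a non-random transferability test with synthetic SNOTEL-like data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 497
temp = rng.uniform(-8, 4, n)          # mean winter temperature, deg C
precip = rng.uniform(200, 1500, n)    # cumulative winter precipitation, mm
swe = 0.8 * precip - 30 * temp + rng.normal(0, 60, n)   # synthetic April 1 SWE

cold = temp < temp.mean()             # non-random split on the climate axis
X = np.column_stack([temp, precip])
model = LinearRegression().fit(X[~cold], swe[~cold])    # train on warm sites only

r2_train = model.score(X[~cold], swe[~cold])
r2_transfer = model.score(X[cold], swe[cold])           # extrapolation test
print(f"R2 in training climate: {r2_train:.3f}, in new climate: {r2_transfer:.3f}")
```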
Geospace environment modeling 2008--2009 challenge: Dst index
Rastätter, L.; Kuznetsova, M.M.; Glocer, A.; Welling, D.; Meng, X.; Raeder, J.; Wiltberger, M.; Jordanova, V.K.; Yu, Y.; Zaharia, S.; Weigel, R.S.; Sazykin, S.; Boynton, R.; Wei, H.; Eccles, V.; Horton, W.; Mays, M.L.; Gannon, J.
2013-01-01
This paper reports the metrics-based results of the Dst index part of the 2008–2009 GEM Metrics Challenge. The 2008–2009 GEM Metrics Challenge asked modelers to submit results for four geomagnetic storm events and five different types of observations that can be modeled by statistical, climatological or physics-based models of the magnetosphere-ionosphere system. We present the results of 30 model settings that were run at the Community Coordinated Modeling Center and at the institutions of various modelers for these events. To measure the performance of each of the models against the observations, we use comparisons of 1 hour averaged model data with the Dst index issued by the World Data Center for Geomagnetism, Kyoto, Japan, and direct comparison of 1 minute model data with the 1 minute Dst index calculated by the United States Geological Survey. The latter index can be used to calculate spectral variability of model outputs in comparison to the index. We find that model rankings vary widely by skill score used. None of the models consistently perform best for all events. We find that empirical models perform well in general. Magnetohydrodynamics-based models of the global magnetosphere with inner magnetosphere physics (ring current model) included and stand-alone ring current models with properly defined boundary conditions perform well and are able to match or surpass results from empirical models. Unlike in similar studies, the statistical models used in this study found their challenge in the weakest events rather than the strongest events.
Hybrid Forecasting of Daily River Discharges Considering Autoregressive Heteroscedasticity
NASA Astrophysics Data System (ADS)
Szolgayová, Elena Peksová; Danačová, Michaela; Komorniková, Magda; Szolgay, Ján
2017-06-01
It is widely acknowledged in the hydrological and meteorological communities that there is a continuing need to improve the quality of quantitative rainfall and river flow forecasts. A hybrid (combined deterministic-stochastic) modelling approach is proposed here that combines the advantages offered by modelling the system dynamics with a deterministic model and modelling the deterministic forecasting error series with a data-driven model in parallel. Since the processes to be modelled are generally nonlinear and the model error series may exhibit nonstationarity and heteroscedasticity, GARCH-type nonlinear time series models are considered here. The fitting, forecasting and simulation performance of such models have to be explored on a case-by-case basis. The goal of this paper is to test and develop an appropriate methodology for model fitting and forecasting applicable to daily river discharge forecast error data from the GARCH family of time series models. We concentrated on verifying whether the use of a GARCH-type model is suitable for modelling and forecasting a hydrological model error time series on the Hron and Morava Rivers in Slovakia. For this purpose we verified the presence of heteroscedasticity in the simulation error series of the KLN multilinear flow routing model; then we fitted the GARCH-type models to the data and compared their fit with that of an ARMA-type model. We produced one-step-ahead forecasts from the fitted models and again compared the models' performance.
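A sketch of the core workflow, assuming the third-party Python `arch` package is available: simulate an error series with volatility clustering in place of the KLN routing-model residuals, fit a zero-mean GARCH(1,1), and produce a one-step-ahead variance forecast.

```python
# Sketch: fit a GARCH(1,1) to a (synthetic) forecast-error series and make a
# one-step-ahead variance forecast, mirroring the paper's workflow.
import numpy as np
from arch import arch_model   # assumed tooling; pip package "arch"

rng = np.random.default_rng(3)
# Synthetic errors with volatility clustering (parameters are illustrative)
vol = np.ones(1000)
errors = np.zeros(1000)
for t in range(1, 1000):
    vol[t] = np.sqrt(0.1 + 0.15 * errors[t - 1] ** 2 + 0.8 * vol[t - 1] ** 2)
    errors[t] = vol[t] * rng.standard_normal()

res = arch_model(errors, vol="GARCH", p=1, q=1, mean="Zero").fit(disp="off")
print(res.params)                                   # omega, alpha[1], beta[1]
print(res.forecast(horizon=1).variance.iloc[-1])    # one-step-ahead variance
```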
CHENG, JIANLIN; EICKHOLT, JESSE; WANG, ZHENG; DENG, XIN
2013-01-01
After decades of research, protein structure prediction remains a very challenging problem. In order to address the different levels of complexity of structural modeling, two types of modeling techniques — template-based modeling and template-free modeling — have been developed. Template-based modeling can often generate a moderate- to high-resolution model when a similar, homologous template structure is found for a query protein but fails if no template or only incorrect templates are found. Template-free modeling, such as fragment-based assembly, may generate models of moderate resolution for small proteins of low topological complexity. Seldom have the two techniques been integrated to improve protein modeling. Here we develop a recursive protein modeling approach to selectively and collaboratively apply template-based and template-free modeling methods to model template-covered (i.e., certain) and template-free (i.e., uncertain) regions of a protein. A preliminary implementation of the approach was tested on a number of hard modeling cases during the 9th Critical Assessment of Techniques for Protein Structure Prediction (CASP9) and successfully improved the quality of modeling in most of these cases. Recursive modeling can significantly reduce the complexity of protein structure modeling and integrate template-based and template-free modeling to improve the quality and efficiency of protein structure prediction. PMID:22809379
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Heng; Ye, Ming; Walker, Anthony P.
Hydrological models are always composed of multiple components that represent processes key to intended model applications. When a process can be simulated by multiple conceptual-mathematical models (process models), model uncertainty in representing the process arises. While global sensitivity analysis methods have been widely used for identifying important processes in hydrologic modeling, the existing methods consider only parametric uncertainty and ignore the model uncertainty in process representation. To address this problem, this study develops a new method to probe multimodel process sensitivity by integrating model averaging methods into the framework of variance-based global sensitivity analysis, given that model averaging methods quantify both parametric and model uncertainty. A new process sensitivity index is derived as a metric of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and model parameters. For demonstration, the new index is used to evaluate the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models with different parameterizations of hydraulic conductivity; each process model has its own random parameters. The new process sensitivity index is mathematically general and can be applied to a wide range of problems in hydrology and beyond.
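A toy version of the idea: the recharge process is represented by two alternative models, each with its own random parameters, and a first-order-style sensitivity index is estimated from the variance of the conditional mean of the output. Model forms, weights, and the binning estimator are illustrative assumptions, not the paper's formulation.

```python
# Sketch: process sensitivity pooling model and parameter uncertainty.
import numpy as np

rng = np.random.default_rng(4)
n = 20000
precip = 800.0                                   # mm/yr, fixed forcing

# Process "recharge": two candidate models with assumed 0.5/0.5 prior weights
use_model_b = rng.random(n) < 0.5
frac = rng.uniform(0.05, 0.15, n)                # model A: fixed fraction of precip
thresh = rng.uniform(300, 500, n)                # model B: threshold excess
recharge = np.where(use_model_b, 0.3 * (precip - thresh), frac * precip)

# Process "geology": random log-conductivity parameter
log_k = rng.normal(-4, 0.5, n)
output = recharge * 10 ** (log_k + 4)            # toy transport-like output

# First-order-style index: variance of the conditional mean over the recharge
# process (models + parameters together), divided by total output variance.
edges = np.quantile(recharge, np.linspace(0, 1, 51)[1:-1])
bins = np.digitize(recharge, edges)
cond_mean = np.array([output[bins == b].mean() for b in np.unique(bins)])
cond_frac = np.array([(bins == b).mean() for b in np.unique(bins)])
s_recharge = np.sum(cond_frac * (cond_mean - output.mean()) ** 2) / output.var()
print(f"process sensitivity of recharge ~ {s_recharge:.2f}")
```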
Comparison of childbirth care models in public hospitals, Brazil.
Vogt, Sibylle Emilie; Silva, Kátia Silveira da; Dias, Marcos Augusto Bastos
2014-04-01
The objective was to compare collaborative and traditional childbirth care models. This was a cross-sectional study with 655 primiparous women in four public health system hospitals in Belo Horizonte, MG, Southeastern Brazil, in 2011 (333 women for the collaborative model and 322 for the traditional model, including those with induced or premature labor). Data were collected using interviews and medical records. The Chi-square test was used to compare the outcomes and multivariate logistic regression to determine the association between the model and the interventions used. Paid work and schooling showed significant differences in distribution between the models. Oxytocin (50.2% collaborative model and 65.5% traditional model; p < 0.001), amniotomy (54.3% collaborative model and 65.9% traditional model; p = 0.012) and episiotomy (collaborative model 16.1% and traditional model 85.2%; p < 0.001) were less used in the collaborative model, with increased application of non-pharmacological pain relief (85.0% collaborative model and 78.9% traditional model; p = 0.042). The association between the collaborative model and the reduction in the use of oxytocin, artificial rupture of membranes and episiotomy remained after adjustment for confounding. The care model was not associated with complications in newborns or mothers, nor with the use of spinal or epidural analgesia. The results suggest that the collaborative model may reduce interventions performed in labor care with similar perinatal outcomes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuhn, J K; von Fuchs, G F; Zob, A P
1980-05-01
Two water tank component simulation models have been selected and upgraded. These models are called the CSU Model and the Extended SOLSYS Model. The models have been standardized and links have been provided for operation in the TRNSYS simulation program. The models are described in analytical terms as well as in computer code. Specific water tank tests were performed for the purpose of model validation. Agreement between model data and test data is excellent. A description of the limitations has also been included. Streamlining results and criteria for the reduction of computer time have also been shown for both water tank computer models. Computer codes for the models and instructions for operating these models in TRNSYS have also been included, making the models readily available for DOE and industry use. Rock bed component simulation models have been reviewed and a model selected and upgraded. This model is a logical extension of the Mumma-Marvin model. Specific rock bed tests have been performed for the purpose of validation. Data have been reviewed for consistency. Details of the test results concerned with rock characteristics and pressure drop through the bed have been explored and are reported.
Modeling approaches in avian conservation and the role of field biologists
Beissinger, Steven R.; Walters, J.R.; Catanzaro, D.G.; Smith, Kimberly G.; Dunning, J.B.; Haig, Susan M.; Noon, Barry; Stith, Bradley M.
2006-01-01
This review grew out of our realization that models play an increasingly important role in conservation but are rarely used in the research of most avian biologists. Modelers are creating models that are more complex and mechanistic and that can incorporate more of the knowledge acquired by field biologists. Such models require field biologists to provide more specific information, larger sample sizes, and sometimes new kinds of data, such as habitat-specific demography and dispersal information. Field biologists need to support model development by testing key model assumptions and validating models. The best conservation decisions will occur where cooperative interaction enables field biologists, modelers, statisticians, and managers to contribute effectively. We begin by discussing the general form of ecological models—heuristic or mechanistic, "scientific" or statistical—and then highlight the structure, strengths, weaknesses, and applications of six types of models commonly used in avian conservation: (1) deterministic single-population matrix models, (2) stochastic population viability analysis (PVA) models for single populations, (3) metapopulation models, (4) spatially explicit models, (5) genetic models, and (6) species distribution models. We end by considering their unique attributes, determining whether the assumptions that underlie the structure are valid, and testing the ability of the model to predict the future correctly.
NASA Astrophysics Data System (ADS)
Rossman, Nathan R.; Zlotnik, Vitaly A.
2013-09-01
Water resources in agriculture-dominated basins of the arid western United States are stressed due to long-term impacts from pumping. A review of 88 regional groundwater-flow modeling applications from seven intensively irrigated western states (Arizona, California, Colorado, Idaho, Kansas, Nebraska and Texas) was conducted to provide hydrogeologists, modelers, water managers, and decision makers insight about past modeling studies that will aid future model development. Groundwater models were classified into three types: resource evaluation models (39%), which quantify water budgets and act as preliminary models intended to be updated later, or constitute re-calibrations of older models; management/planning models (55%), used to explore and identify management plans based on the response of the groundwater system to water-development or climate scenarios, sometimes under water-use constraints; and water rights models (7%), used to make water administration decisions based on model output and to quantify water shortages incurred by water users or climate changes. Results for 27 model characteristics are summarized by state and model type, and important comparisons and contrasts are highlighted. Consideration of modeling uncertainty and the management focus toward sustainability, adaptive management and resilience are discussed, and future modeling recommendations, in light of the reviewed models and other published works, are presented.
Roelker, Sarah A; Caruthers, Elena J; Baker, Rachel K; Pelz, Nicholas C; Chaudhari, Ajit M W; Siston, Robert A
2017-11-01
With more than 29,000 OpenSim users, several musculoskeletal models with varying levels of complexity are available to study human gait. However, how different model parameters affect estimated joint and muscle function between models is not fully understood. The purpose of this study is to determine the effects of four OpenSim models (Gait2392, Lower Limb Model 2010, Full-Body OpenSim Model, and Full Body Model 2016) on gait mechanics and estimates of muscle forces and activations. Using OpenSim 3.1 and the same experimental data for all models, six young adults were scaled in each model, gait kinematics were reproduced, and static optimization estimated muscle function. Simulated measures differed between models by up to 6.5° knee range of motion, 0.012 Nm/Nm peak knee flexion moment, 0.49 peak rectus femoris activation, and 462 N peak rectus femoris force. Differences in coordinate system definitions between models altered joint kinematics, influencing joint moments. Muscle parameter and joint moment discrepancies altered muscle activations and forces. Additional model complexity yielded greater error between experimental and simulated measures; therefore, this study suggests Gait2392 is a sufficient model for studying walking in healthy young adults. Future research is needed to determine which model(s) is best for tasks with more complex motion.
Inter-sectoral comparison of model uncertainty of climate change impacts in Africa
NASA Astrophysics Data System (ADS)
van Griensven, Ann; Vetter, Tobias; Piontek, Franzisca; Gosling, Simon N.; Kamali, Bahareh; Reinhardt, Julia; Dinkneh, Aklilu; Yang, Hong; Alemayehu, Tadesse
2016-04-01
We present the model results and their uncertainties of an inter-sectoral impact model inter-comparison initiative (ISI-MIP) for climate change impacts in Africa. The study includes results on hydrological, crop and health aspects. The impact models used ensemble inputs consisting of 20 time series of daily rainfall and temperature data obtained from 5 Global Circulation Models (GCMs) and 4 Representative Concentration Pathways (RCPs). In this study, we analysed model uncertainty for the regional hydrological models, global hydrological models, malaria models and crop models. For the regional hydrological models, we used two African test cases: the Blue Nile in Eastern Africa and the Niger in Western Africa. For both basins, the main sources of uncertainty originate from the GCMs and RCPs, while the uncertainty of the regional hydrological models is relatively low. The hydrological model uncertainty becomes more important when predicting changes in low flows compared to mean or high flows. For the other sectors, the impact models have the largest share of uncertainty compared to GCM and RCP, especially for malaria and crop modelling. The overall conclusion of the ISI-MIP is that it is strongly advised to use an ensemble modelling approach for climate change impact studies throughout the whole modelling chain.
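A minimal sketch of how ensemble spread can be partitioned among GCMs, RCPs, and impact models with a fixed-effects decomposition; the synthetic effects are chosen so that, as in the hydrological basins above, the GCM axis dominates. Values and effect sizes are illustrative, not ISI-MIP results.

```python
# Sketch: variance partitioning of a synthetic GCM x RCP x impact-model ensemble.
import numpy as np

rng = np.random.default_rng(5)
n_gcm, n_rcp, n_imp = 5, 4, 3
gcm_eff = rng.normal(0, 2.0, n_gcm)       # GCM spread dominates (assumed)
rcp_eff = rng.normal(0, 1.5, n_rcp)
imp_eff = rng.normal(0, 0.5, n_imp)       # impact models matter less here

change = (gcm_eff[:, None, None] + rcp_eff[None, :, None]
          + imp_eff[None, None, :] + rng.normal(0, 0.3, (n_gcm, n_rcp, n_imp)))

total = change.var()
for name, axes in [("GCM", (1, 2)), ("RCP", (0, 2)), ("impact model", (0, 1))]:
    share = change.mean(axis=axes).var() / total   # variance of marginal means
    print(f"{name:>12}: {share:.0%} of ensemble variance")
```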
Extended behavioural modelling of FET and lattice-mismatched HEMT devices
NASA Astrophysics Data System (ADS)
Khawam, Yahya; Albasha, Lutfi
2017-07-01
This study presents an improved large-signal model that can be used for high electron mobility transistors (HEMTs) and field effect transistors, using measurement-based behavioural modelling techniques. The steps for accurate large- and small-signal modelling of transistors are also discussed. The proposed DC model is based on the Fager model, since that model balances the number of model parameters against accuracy. The objective is to increase the accuracy of the drain-source current model with respect to any change in gate or drain voltages, and to extend the improved DC model to account for the soft breakdown and kink effects found in some variants of HEMT devices. A hybrid Newton's-genetic algorithm is used to determine the unknown parameters in the developed model. In addition to accurate modelling of a transistor's DC characteristics, the complete large-signal model is built from multi-bias s-parameter measurements, combining a hybrid multi-objective optimisation technique (Non-dominated Sorting Genetic Algorithm II) with a local minimum search (multivariable Newton's method) for parasitic-element extraction. Finally, the results of DC modelling and multi-bias s-parameter modelling are presented, and three device-modelling recommendations are discussed.
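A hedged sketch of the hybrid global-plus-local fitting strategy: SciPy's differential evolution stands in for the genetic stage and BFGS for the Newton refinement, extracting parameters of a simple tanh-shaped I-V curve that is only a stand-in for the Fager-type drain-current model.

```python
# Sketch: global search seeded into local refinement for device-model fitting.
import numpy as np
from scipy.optimize import differential_evolution, minimize

rng = np.random.default_rng(6)
vgs = np.linspace(-1.0, 1.0, 60)
true = (0.08, 3.0, 0.2)                       # Ipk, alpha, Vth (illustrative)
ids = true[0] * (1 + np.tanh(true[1] * (vgs - true[2]))) + rng.normal(0, 1e-3, 60)

def sse(p):
    # Sum of squared errors between measured and modelled drain current
    ipk, alpha, vth = p
    return np.sum((ids - ipk * (1 + np.tanh(alpha * (vgs - vth)))) ** 2)

bounds = [(0.0, 0.5), (0.1, 10.0), (-1.0, 1.0)]
coarse = differential_evolution(sse, bounds, seed=0)   # genetic-style global stage
fine = minimize(sse, coarse.x, method="BFGS")          # local polish (Newton stand-in)
print("extracted parameters:", np.round(fine.x, 3))
```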
The regionalization of national-scale SPARROW models for stream nutrients
Schwarz, Gregory E.; Alexander, Richard B.; Smith, Richard A.; Preston, Stephen D.
2011-01-01
This analysis modifies the parsimonious specification of recently published total nitrogen (TN) and total phosphorus (TP) national-scale SPAtially Referenced Regressions On Watershed attributes models to allow each model coefficient to vary geographically among three major river basins of the conterminous United States. Regionalization of the national models reduces the standard errors in the prediction of TN and TP loads, expressed as a percentage of the predicted load, by about 6 and 7%. We develop and apply a method for combining national-scale and regional-scale information to estimate a hybrid model that imposes cross-region constraints that limit regional variation in model coefficients, effectively reducing the number of free model parameters as compared to a collection of independent regional models. The hybrid TN and TP regional models have improved model fit relative to the respective national models, reducing the standard error in the prediction of loads, expressed as a percentage of load, by about 5 and 4%. Only 19% of the TN hybrid model coefficients and just 2% of the TP hybrid model coefficients show evidence of substantial regional specificity (more than ±100% deviation from the national model estimate). The hybrid models have much greater precision in the estimated coefficients than do the unconstrained regional models, demonstrating the efficacy of pooling information across regions to improve regional models.
Modeling of Stiffness and Strength of Bone at Nanoscale.
Abueidda, Diab W; Sabet, Fereshteh A; Jasiuk, Iwona M
2017-05-01
Two distinct geometrical models of bone at the nanoscale (collagen fibril and mineral platelets) are analyzed computationally. In the first model (model I), minerals are periodically distributed in a staggered manner in a collagen matrix, while in the second model (model II), minerals form continuous layers outside the collagen fibril. Elastic modulus and strength of bone at the nanoscale, represented by these two models under longitudinal tensile loading, are studied using the finite element (FE) software Abaqus. The analysis employs a traction-separation law (cohesive surface modeling) at the various interfaces in the models to account for interfacial delamination. Plane stress, plane strain, and axisymmetric versions of the two models are considered. Model II is found to have a higher stiffness than model I for all cases. For strength, the two models alternate in superiority, depending on the inputs and assumptions used. For model II, the axisymmetric case gives higher results than the plane stress and plane strain cases, while the opposite trend is observed for model I. For the axisymmetric case, model II shows greater strength and stiffness than model I. The collagen-mineral arrangement of bone at the nanoscale forms a basic building block of bone; thus, knowledge of its mechanical properties is of high scientific and clinical interest.
The Use of Behavior Models for Predicting Complex Operations
NASA Technical Reports Server (NTRS)
Gore, Brian F.
2010-01-01
Modeling and simulation (M&S) plays an important role when complex human-system notions are being proposed, developed and tested within the system design process. The National Aeronautics and Space Administration (NASA) uses many different types of M&S approaches for predicting human-system interactions, especially early in the development phase of a conceptual design. NASA Ames Research Center possesses a number of M&S capabilities, including airflow, flight path, aircraft, scheduling, human performance (HPM), and bioinformatics models, among a host of others, which are used to predict whether proposed designs will meet specific mission criteria. The Man-Machine Integration Design and Analysis System (MIDAS) is a NASA ARC HPM software tool that integrates many models of human behavior with environment models, equipment models, and procedural/task models. The challenge to model comprehensibility is heightened as the number of integrated models and the requisite fidelity of the procedural sets increase. Model transparency is needed for some of the more complex HPMs to maintain comprehensibility of the integrated model performance. This will be exemplified in a recent MIDAS v5 application model, and plans for future model refinements will be presented.
ERIC Educational Resources Information Center
Gerst, Elyssa H.
2017-01-01
The primary aim of this study was to examine the structure of processing speed (PS) in middle childhood by comparing five theoretically driven models of PS. The models consisted of two conceptual models (a unitary model, a complexity model) and three methodological models (a stimulus material model, an output modality model, and a timing modality…
ERIC Educational Resources Information Center
Shin, Tacksoo
2012-01-01
This study introduced various nonlinear growth models, including the quadratic conventional polynomial model, the fractional polynomial model, the Sigmoid model, the growth model with negative exponential functions, the multidimensional scaling technique, and the unstructured growth curve model. It investigated which growth models effectively…
ERIC Educational Resources Information Center
Scheer, Scott D.; Cochran, Graham R.; Harder, Amy; Place, Nick T.
2011-01-01
The purpose of this study was to compare and contrast an academic extension education model with an Extension human resource management model. The academic model of 19 competencies was similar across the 22 competencies of the Extension human resource management model. There were seven unique competencies for the human resource management model.…
Defining a Family of Cognitive Diagnosis Models Using Log-Linear Models with Latent Variables
ERIC Educational Resources Information Center
Henson, Robert A.; Templin, Jonathan L.; Willse, John T.
2009-01-01
This paper uses log-linear models with latent variables (Hagenaars, in "Loglinear Models with Latent Variables," 1993) to define a family of cognitive diagnosis models. In doing so, the relationship between many common models is explicitly defined and discussed. In addition, because the log-linear model with latent variables is a general model for…
A toolbox and a record for scientific model development
NASA Technical Reports Server (NTRS)
Ellman, Thomas
1994-01-01
Scientific computation can benefit from software tools that facilitate construction of computational models, control the application of models, and aid in revising models to handle new situations. Existing environments for scientific programming provide only limited means of handling these tasks. This paper describes a two-pronged approach for handling them: (1) designing a 'Model Development Toolbox' that includes a basic set of model-constructing operations; and (2) designing a 'Model Development Record' that is automatically generated during model construction. The record is subsequently exploited by tools that control the application of scientific models and revise models to handle new situations. Our two-pronged approach is motivated by our belief that the model development toolbox and record should be highly interdependent. In particular, a suitable model development record can be constructed only when models are developed using a well-defined set of operations. We expect this research to facilitate rapid development of new scientific computational models, to help ensure appropriate use of such models, and to facilitate sharing of such models among working computational scientists. We are testing this approach by extending SIGMA, an existing knowledge-based scientific software design tool.
A decision support model for investment on P2P lending platform.
Zeng, Xiangxiang; Liu, Li; Leung, Stephen; Du, Jiangze; Wang, Xun; Li, Tao
2017-01-01
Peer-to-peer (P2P) lending, as a novel economic lending model, has triggered new challenges in making effective investment decisions. In a P2P lending platform, one lender can invest in N loans and a loan may be accepted by M investors, thus forming a bipartite graph. Based on the bipartite graph model, we built an iterative computation model to evaluate unknown loans. To validate the proposed model, we perform extensive experiments on real-world data from the largest American P2P lending marketplace, Prosper. By comparing our experimental results with those obtained by Bayes and Logistic Regression, we show that our computation model can help borrowers select good loans and help lenders make good investment decisions. Experimental results also show that the Logistic classification model is a good complement to our iterative computation model, which motivates us to integrate the two models. The experimental results of the hybrid classification model demonstrate that the Logistic classification model and our iterative computation model are complementary to each other. We conclude that the hybrid model (i.e., the integration of the iterative computation model and the Logistic classification model) is more efficient and stable than either individual model alone.
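A minimal sketch of iterative score propagation on the lender-loan bipartite graph, in the spirit of the iterative computation model: lender and loan scores reinforce each other until they stabilize (a HITS-like scheme). The adjacency matrix, iteration count, and normalization are illustrative assumptions, not the paper's exact algorithm.

```python
# Sketch: HITS-like score propagation on a synthetic lender-loan bipartite graph.
import numpy as np

rng = np.random.default_rng(7)
A = (rng.random((8, 12)) < 0.3).astype(float)   # A[i, j] = lender i funded loan j

lender = np.ones(8)
loan = np.ones(12)
for _ in range(50):
    loan = A.T @ lender                         # loans inherit lender quality
    loan /= np.linalg.norm(loan)
    lender = A @ loan                           # lenders inherit loan quality
    lender /= np.linalg.norm(lender)

print("top loans by propagated score:", np.argsort(-loan)[:3])
```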
NASA Technical Reports Server (NTRS)
Alexandrov, N. M.; Nielsen, E. J.; Lewis, R. M.; Anderson, W. K.
2000-01-01
First-order approximation and model management is a methodology for the systematic use of variable-fidelity models or approximations in optimization. The intent of model management is to attain convergence to high-fidelity solutions with minimal expense in high-fidelity computations. The savings in terms of computationally intensive evaluations depend on the ability of the available lower-fidelity model, or a suite of models, to predict the improvement trends for the high-fidelity problem. Variable-fidelity models can be represented by data-fitting approximations, variable-resolution models, variable-convergence models, or variable-physical-fidelity models. The present work considers the use of variable-fidelity physics models. We demonstrate the performance of model management on an aerodynamic optimization of a multi-element airfoil designed to operate in the transonic regime. Reynolds-averaged Navier-Stokes equations represent the high-fidelity model, while the Euler equations represent the low-fidelity model. An unstructured mesh-based analysis code, FUN2D, evaluates functions and sensitivity derivatives for both models. Model management for the present demonstration problem yields fivefold savings in terms of high-fidelity evaluations compared to optimization done with high-fidelity computations alone.
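A toy illustration of first-order model management: at each iterate the cheap model is given an additive correction so that its value and slope match the expensive model, and the corrected surrogate is minimized within a small trust region. The two analytic objectives merely stand in for Euler and RANS analyses.

```python
# Sketch: first-order-consistent additive correction of a low-fidelity model.
import numpy as np

def high(x):       # expensive model (stand-in for a RANS-based objective)
    return (x - 2.0) ** 2 + 0.3 * np.sin(3 * x)

def dhigh(x):
    return 2 * (x - 2.0) + 0.9 * np.cos(3 * x)

def low(x):        # cheap model (stand-in for an Euler-based objective)
    return (x - 1.6) ** 2

def dlow(x):
    return 2 * (x - 1.6)

x = 0.0
for k in range(6):
    # corrected(x) = low(x) + a + b*(x - xk) matches high's value and
    # derivative at xk, giving first-order consistency at the iterate.
    a = high(x) - low(x)
    b = dhigh(x) - dlow(x)
    # Minimize the corrected surrogate within a small trust region
    cand = np.linspace(x - 0.5, x + 0.5, 201)
    corrected = low(cand) + a + b * (cand - x)
    x = cand[np.argmin(corrected)]
    print(f"iter {k}: x = {x:.4f}, high(x) = {high(x):.4f}")
# Each iteration spends only one high-fidelity value + gradient evaluation.
```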
Cai, Qing; Lee, Jaeyoung; Eluru, Naveen; Abdel-Aty, Mohamed
2016-08-01
This study explores the viability of dual-state models (i.e., zero-inflated and hurdle models) for traffic analysis zone (TAZ)-based pedestrian and bicycle crash frequency analysis. Additionally, spatial spillover effects are explored in the models by employing exogenous variables from neighboring zones. The dual-state models, zero-inflated negative binomial and hurdle negative binomial (with and without spatial effects), are compared with the conventional single-state model (i.e., negative binomial). The model comparison for pedestrian and bicycle crashes revealed that the models that considered observed spatial effects perform better than the models that did not. Across the models with spatial spillover effects, the dual-state models, especially the zero-inflated negative binomial model, offered better performance than the single-state models. Moreover, the model results clearly highlighted the importance of various traffic, roadway, and sociodemographic characteristics of the TAZ as well as neighboring TAZs on pedestrian and bicycle crash frequency. Copyright © 2016 Elsevier Ltd. All rights reserved.
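A sketch of the zero-inflated negative binomial step using statsmodels, with simulated zone-level counts; in the study the exogenous matrices would carry traffic, roadway, and sociodemographic variables for each TAZ and its neighbors.

```python
# Sketch: zero-inflated negative binomial on synthetic crash counts.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(8)
n = 600
x = sm.add_constant(rng.normal(size=(n, 2)))        # e.g., VMT, population (assumed)
mu = np.exp(0.5 + 0.6 * x[:, 1] - 0.3 * x[:, 2])
counts = rng.poisson(mu * rng.gamma(2.0, 0.5, n))   # overdispersed counts
counts[rng.random(n) < 0.3] = 0                     # inject structural zeros

model = ZeroInflatedNegativeBinomialP(counts, x, exog_infl=x[:, :1])
res = model.fit(maxiter=200, disp=0)
print(res.summary().tables[1])
```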
BioModels Database: a repository of mathematical models of biological processes.
Chelliah, Vijayalakshmi; Laibe, Camille; Le Novère, Nicolas
2013-01-01
BioModels Database is a public online resource that allows storing and sharing of published, peer-reviewed quantitative, dynamic models of biological processes. The model components and behaviour are thoroughly checked to correspond to the original publication and manually curated to ensure reliability. Furthermore, the model elements are annotated with terms from controlled vocabularies as well as linked to relevant external data resources. This greatly helps in model interpretation and reuse. Models are accepted in SBML and CellML formats, stored in SBML format, and available for download in various other common formats such as BioPAX, Octave, SciLab, VCML, XPP and PDF, in addition to SBML. The reaction network diagram of the models is also available in several formats. BioModels Database features a search engine, which provides simple and more advanced searches. Features such as online simulation and creation of smaller models (submodels) from the selected model elements of a larger one are provided. BioModels Database can be accessed both via a web interface and programmatically via web services. New models are made available in BioModels Database at regular releases, about every 4 months.
Documenting Models for Interoperability and Reusability ...
Many modeling frameworks compartmentalize science via individual models that link sets of small components to create larger modeling workflows. Developing integrated watershed models increasingly requires coupling multidisciplinary, independent models, as well as collaboration between scientific communities, since component-based modeling can integrate models from different disciplines. Integrated Environmental Modeling (IEM) systems focus on transferring information between components by capturing a conceptual site model; establishing local metadata standards for input/output of models and databases; managing data flow between models and throughout the system; facilitating quality control of data exchanges (e.g., checking units, unit conversions, transfers between software languages); warning and error handling; and coordinating sensitivity/uncertainty analyses. Although many computational software systems facilitate communication between, and execution of, components, there are no common approaches, protocols, or standards for turn-key linkages between software systems and models, especially if modifying components is not the intent. Using a standard ontology, this paper reviews how models can be described for discovery, understanding, evaluation, access, and implementation to facilitate interoperability and reusability. In the proceedings of the International Environmental Modelling and Software Society (iEMSs), 8th International Congress on Environmental Modelling and Software.
CSR Model Implementation from School Stakeholder Perspectives
ERIC Educational Resources Information Center
Herrmann, Suzannah
2006-01-01
Despite comprehensive school reform (CSR) model developers' best intentions to make school stakeholders adhere strictly to the implementation of model components, school stakeholders implementing CSR models inevitably make adaptations to the CSR model. Adaptations are made to CSR models because school stakeholders internalize CSR model practices…
A comparison of simple global kinetic models for coal devolatilization with the CPD model
Richards, Andrew P.; Fletcher, Thomas H.
2016-08-01
Simulations of coal combustors and gasifiers generally cannot incorporate the complexities of advanced pyrolysis models, and hence there is interest in evaluating simpler models over ranges of temperature and heating rate that are applicable to the furnace of interest. In this paper, six different simple model forms are compared to predictions made by the Chemical Percolation Devolatilization (CPD) model. The model forms included three modified one-step models, a simple two-step model, and two new modified two-step models. These simple model forms were compared over a wide range of heating rates (5 × 10³ to 10⁶ K/s) at final temperatures up to 1600 K. Comparisons were made of total volatiles yield as a function of temperature, as well as the ultimate volatiles yield. Advantages and disadvantages for each simple model form are discussed. In conclusion, a modified two-step model with distributed activation energies seems to give the best agreement with CPD model predictions (with the fewest tunable parameters).
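A minimal sketch of the simplest member of the family, a one-step first-order model dV/dt = k(T)(V* - V) with an Arrhenius rate, integrated along constant heating ramps spanning the paper's range. The kinetic constants are illustrative, not fitted CPD values.

```python
# Sketch: one-step Arrhenius devolatilization along constant heating ramps.
import numpy as np
from scipy.integrate import solve_ivp

A, E = 4.0e4, 6.0e4        # pre-exponential (1/s) and activation energy (J/mol), assumed
R = 8.314
v_star = 0.5               # ultimate volatiles yield, mass fraction (assumed)

def ramp(heating_rate):
    def rhs(t, v):
        T = 300.0 + heating_rate * t
        k = A * np.exp(-E / (R * T))
        return k * (v_star - v[0])
    t_end = (1600.0 - 300.0) / heating_rate      # stop the ramp at 1600 K
    sol = solve_ivp(rhs, (0.0, t_end), [0.0], max_step=t_end / 2000)
    return sol.y[0, -1]

for rate in (5e3, 1e5, 1e6):                     # K/s, the paper's range
    print(f"heating rate {rate:.0e} K/s -> volatiles yield {ramp(rate):.3f}")
```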
[Bone remodeling and modeling/mini-modeling].
Hasegawa, Tomoka; Amizuka, Norio
Modeling, which adapts structures to loading by changing bone size and shape, often takes place in bone during the fetal and developmental stages, while bone remodeling, the replacement of old bone by new bone, is predominant in the adult stage. Modeling can be divided into macro-modeling (macroscopic modeling) and mini-modeling (microscopic modeling). In the cellular process of mini-modeling, unlike bone remodeling, bone lining cells, i.e., resting flattened osteoblasts covering bone surfaces, become the active form of osteoblasts and then deposit new bone onto the old bone without mediating osteoclastic bone resorption. Among the drugs for osteoporosis treatment, eldecalcitol (a vitamin D3 analog) and teriparatide (human PTH[1-34]) can induce mini-modeling-based bone formation. Histologically, mature, active osteoblasts are localized on the new bone induced by mini-modeling; however, only a few cell layers of preosteoblasts form over the newly formed bone, and accordingly, few osteoclasts are present in the region of mini-modeling. In this review, the histological characteristics of bone remodeling and modeling, including mini-modeling, are introduced.
An Introduction to Markov Modeling: Concepts and Uses
NASA Technical Reports Server (NTRS)
Boyd, Mark A.; Lau, Sonie (Technical Monitor)
1998-01-01
Markov modeling is a modeling technique that is widely useful for dependability analysis of complex fault-tolerant systems. It is very flexible in the type of systems and system behavior it can model. It is not, however, the most appropriate modeling technique for every modeling situation. The first task in obtaining a reliability or availability estimate for a system is selecting which modeling technique is most appropriate to the situation at hand. A person performing a dependability analysis must confront the question: is Markov modeling most appropriate to the system under consideration, or should another technique be used instead? The need to answer this gives rise to other more basic questions regarding Markov modeling: what are the capabilities and limitations of Markov modeling as a modeling technique? How does it relate to other modeling techniques? What kind of system behavior can it model? What kinds of software tools are available for performing dependability analyses with Markov modeling techniques? These questions and others are addressed in this tutorial.
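As a concrete instance of the technique, a two-state Markov availability model can be solved in a few lines: a component fails at rate lambda and is repaired at rate mu, and the stationary distribution of the generator matrix gives availability. The rates below are illustrative.

```python
# Sketch: steady-state availability of a two-state Markov (CTMC) model.
import numpy as np

lam, mu = 1.0e-4, 1.0e-2           # failure and repair rates per hour (assumed)
Q = np.array([[-lam, lam],
              [mu, -mu]])           # generator; states: 0 = up, 1 = down

# Solve pi Q = 0 together with sum(pi) = 1 as a least-squares system
Aeq = np.vstack([Q.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(Aeq, b, rcond=None)
print(f"steady-state availability = {pi[0]:.6f}")   # equals mu / (lam + mu)
```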
The cerebro-cerebellum: Could it be loci of forward models?
Ishikawa, Takahiro; Tomatsu, Saeka; Izawa, Jun; Kakei, Shinji
2016-03-01
It is widely accepted that the cerebellum acquires and maintains internal models for motor control. An internal model simulates the mapping between a set of causes and effects. There are two candidate forms of cerebellar internal models: forward models and inverse models. A forward model transforms a motor command into a prediction of the sensory consequences of a movement. In contrast, an inverse model inverts the information flow of the forward model. Despite the clearly different formulations of the two internal models, it is still controversial whether the cerebro-cerebellum, the phylogenetically newer part of the cerebellum, provides inverse models or forward models for voluntary limb movements or other higher brain functions. In this article, we review physiological and morphological evidence suggesting the existence in the cerebro-cerebellum of a forward model for limb movement. We also discuss how the characteristic input-output organization of the cerebro-cerebellum may contribute to forward models for non-motor higher brain functions. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
Second Generation Crop Yield Models Review
NASA Technical Reports Server (NTRS)
Hodges, T. (Principal Investigator)
1982-01-01
Second generation yield models, including crop growth simulation models and plant process models, may be suitable for large area crop yield forecasting in the yield model development project. Subjective and objective criteria for model selection are defined and models which might be selected are reviewed. Models may be selected to provide submodels as input to other models; for further development and testing; or for immediate testing as forecasting tools. A plant process model may range in complexity from several dozen submodels simulating (1) energy, carbohydrates, and minerals; (2) change in biomass of various organs; and (3) initiation and development of plant organs, to a few submodels simulating key physiological processes. The most complex models cannot be used directly in large area forecasting but may provide submodels which can be simplified for inclusion into simpler plant process models. Both published and unpublished models which may be used for development or testing are reviewed. Several other models, currently under development, may become available at a later date.
Microphysics in Multi-scale Modeling System with Unified Physics
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo
2012-01-01
Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (the NASA-unified Weather Research and Forecasting model, WRF), (3) a coupled CRM-global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, longwave and shortwave radiative transfer, land processes, and explicit cloud-radiation and cloud-land surface interactive processes are applied across this multi-scale modeling system. The modeling system has been coupled with a multi-satellite simulator so that NASA high-resolution satellite data can be used to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, the microphysics development and its performance within the multi-scale modeling system will be presented.
Mechanical model development of rolling bearing-rotor systems: A review
NASA Astrophysics Data System (ADS)
Cao, Hongrui; Niu, Linkai; Xi, Songtao; Chen, Xuefeng
2018-03-01
The rolling bearing-rotor (RBR) system is the core of many rotating machines, and its dynamic behavior affects the performance of the whole machine. Over the past decades, extensive research work has been carried out to investigate the dynamic behavior of RBR systems. However, to the best of the authors' knowledge, no comprehensive review of RBR modelling has been reported yet. To address this gap in the literature, this paper reviews and critically discusses the current progress of mechanical model development for RBR systems, and identifies future trends for research. Firstly, five kinds of rolling bearing models, i.e., the lumped-parameter model, the quasi-static model, the quasi-dynamic model, the dynamic model, and the finite element (FE) model, are summarized. Then, the coupled modelling between bearing models and various rotor models, including De Laval/Jeffcott rotors, rigid rotors, transfer matrix method (TMM) models and FE models, is presented. Finally, the paper discusses the key challenges of previous works and provides new insights into the understanding of RBR systems for their advanced future engineering applications.
NASA Astrophysics Data System (ADS)
Gouvea, Julia; Passmore, Cynthia
2017-03-01
The inclusion of the practice of "developing and using models" in the Framework for K-12 Science Education and in the Next Generation Science Standards provides an opportunity for educators to examine the role this practice plays in science and how it can be leveraged in a science classroom. Drawing on conceptions of models in the philosophy of science, we bring forward an agent-based account of models and discuss the implications of this view for enacting modeling in science classrooms. Models, according to this account, can only be understood with respect to the aims and intentions of a cognitive agent (models for), not solely in terms of how they represent phenomena in the world (models of). We present this contrast as a heuristic—models of versus models for—that can be used to help educators notice and interpret how models are positioned in standards, curriculum, and classrooms.
Model Hierarchies in Edge-Based Compartmental Modeling for Infectious Disease Spread
Miller, Joel C.; Volz, Erik M.
2012-01-01
We consider the family of edge-based compartmental models for epidemic spread developed in [11]. These models allow for a range of complex behaviors, and in particular allow us to explicitly incorporate duration of a contact into our mathematical models. Our focus here is to identify conditions under which simpler models may be substituted for more detailed models, and in so doing we define a hierarchy of epidemic models. In particular we provide conditions under which it is appropriate to use the standard mass action SIR model, and we show what happens when these conditions fail. Using our hierarchy, we provide a procedure leading to the choice of the appropriate model for a given population. Our result about the convergence of models to the Mass Action model gives clear, rigorous conditions under which the Mass Action model is accurate. PMID:22911242
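To make the baseline concrete, the following is a minimal sketch of the standard mass-action SIR model that the hierarchy reduces to; the parameter values are illustrative assumptions, not those of the edge-based models in [11].

# Minimal mass-action SIR sketch (illustrative parameters, not the
# edge-based compartmental models of [11]).
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma = 0.3, 0.1  # assumed transmission and recovery rates

def sir(t, y):
    S, I, R = y
    return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

sol = solve_ivp(sir, (0.0, 160.0), [0.99, 0.01, 0.0],
                t_eval=np.linspace(0.0, 160.0, 161))
print("peak infected fraction: %.3f" % sol.y[1].max())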
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, Martyn P.; Bierkens, Marc F. P.; Samaniego, Luis
2017-07-11
The diversity in hydrologic models has historically led to great controversy on the correct approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. Here, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We also illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.
Modeling of near-wall turbulence
NASA Technical Reports Server (NTRS)
Shih, T. H.; Mansour, N. N.
1990-01-01
An improved k-epsilon model and a second-order closure model are presented for low-Reynolds-number turbulence near a wall. For the k-epsilon model, a modified form of the eddy viscosity having the correct asymptotic near-wall behavior is suggested, and a model for the pressure diffusion term in the turbulent kinetic energy equation is proposed. For the second-order closure model, the existing models for the Reynolds stress equations are modified to have proper near-wall behavior. A dissipation rate equation for the turbulent kinetic energy is also reformulated. The proposed models satisfy realizability and will not produce unphysical behavior. Fully developed channel flows are used for model testing. The calculations are compared with direct numerical simulations. It is shown that the present models, both the k-epsilon model and the second-order closure model, perform well in predicting the behavior of near-wall turbulence. Significant improvements over previous models are obtained.
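As a rough illustration of the damped eddy-viscosity idea, the sketch below assumes a generic Van Driest-style damping function; this is an assumption for illustration, not the authors' specific formulation.

# Eddy viscosity with a generic near-wall damping function (illustrative;
# the Van Driest-style f_mu below is an assumption, not the paper's model).
import numpy as np

C_mu, A_plus = 0.09, 26.0

def nu_t(k, eps, y_plus):
    f_mu = (1.0 - np.exp(-y_plus / A_plus)) ** 2  # damps nu_t as y+ -> 0
    return C_mu * f_mu * k ** 2 / eps

y_plus = np.array([1.0, 5.0, 30.0, 100.0])
k = 0.01 * y_plus ** 2          # k ~ y^2 near the wall (asymptotic behavior)
eps = np.full_like(y_plus, 0.05)
print(nu_t(k, eps, y_plus))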
[Modeling in value-based medicine].
Neubauer, A S; Hirneiss, C; Kampik, A
2010-03-01
Modeling plays an important role in value-based medicine (VBM). It allows decision support by predicting potential clinical and economic consequences, frequently combining different sources of evidence. Based on relevant publications and examples focusing on ophthalmology, the key economic modeling methods are explained and definitions are given. The most frequently applied model types are decision trees, Markov models, and discrete event simulation (DES) models. Besides verifying internal validity, model validation includes comparison with other models (external validity) and, ideally, validation of the model's predictive properties. The uncertainty inherent in any modeling should be clearly stated. This is true for economic modeling in VBM as well as when using disease risk models to support clinical decisions. In economic modeling, uni- and multivariate sensitivity analyses are usually applied; the key concepts here are tornado plots and cost-effectiveness acceptability curves. Given the existing uncertainty, modeling helps to make better informed decisions than would be possible without this additional information.
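To illustrate one of the model types named above, here is a minimal Markov cohort model sketch; the transition probabilities, costs, and utilities are hypothetical.

# Minimal three-state Markov cohort model (hypothetical transition
# probabilities, costs, and utilities for illustration only).
import numpy as np

P = np.array([[0.85, 0.10, 0.05],   # well -> well/sick/dead, per cycle
              [0.00, 0.70, 0.30],
              [0.00, 0.00, 1.00]])
cost = np.array([100.0, 2000.0, 0.0])    # cost per cycle in each state
utility = np.array([0.95, 0.60, 0.0])    # QALY weight per cycle

state = np.array([1.0, 0.0, 0.0])        # cohort starts in 'well'
total_cost = total_qaly = 0.0
for cycle in range(20):
    total_cost += state @ cost
    total_qaly += state @ utility
    state = state @ P
print("cost: %.0f, QALYs: %.2f" % (total_cost, total_qaly))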
NASA Astrophysics Data System (ADS)
Sohn, G.; Jung, J.; Jwa, Y.; Armenakis, C.
2013-05-01
This paper presents a sequential rooftop modelling method to refine initial rooftop models derived from airborne LiDAR data by integrating them with linear cues retrieved from single imagery. Cue integration between the two datasets is facilitated by creating new topological features connecting the initial model and image lines, from which new model hypotheses (variants of the initial model) are produced. We adopt the Minimum Description Length (MDL) principle for comparing the candidate models and selecting the optimal model, considering the balanced trade-off between model closeness and model complexity. Our preliminary results on the Vaihingen data provided by ISPRS WG III/4 demonstrate that the image-driven modelling cues can compensate for the limitations posed by LiDAR data in rooftop modelling.
ModelMate - A graphical user interface for model analysis
Banta, Edward R.
2011-01-01
ModelMate is a graphical user interface designed to facilitate use of model-analysis programs with models. This initial version of ModelMate supports one model-analysis program, UCODE_2005, and one model software program, MODFLOW-2005. ModelMate can be used to prepare input files for UCODE_2005, run UCODE_2005, and display analysis results. A link to the GW_Chart graphing program facilitates visual interpretation of results. ModelMate includes capabilities for organizing directories used with the parallel-processing capabilities of UCODE_2005 and for maintaining files in those directories to be identical to a set of files in a master directory. ModelMate can be used on its own or in conjunction with ModelMuse, a graphical user interface for MODFLOW-2005 and PHAST.
[Model-based biofuels system analysis: a review].
Chang, Shiyan; Zhang, Xiliang; Zhao, Lili; Ou, Xunmin
2011-03-01
Model-based system analysis is an important tool for evaluating the potential and impacts of biofuels, and for drafting biofuels technology roadmaps and targets. The broad reach of the biofuels supply chain requires that biofuels system analyses span a range of disciplines, including agriculture/forestry, energy, economics, and the environment. Here we reviewed various models developed for or applied to modeling biofuels, and presented a critical analysis of Agriculture/Forestry System Models, Energy System Models, Integrated Assessment Models, Micro-level Cost, Energy and Emission Calculation Models, and Specific Macro-level Biofuel Models. We focused on the models' strengths, weaknesses, and applicability, facilitating the selection of a suitable type of model for specific issues. Such an analysis is a prerequisite for future biofuels system modeling and represents a valuable resource for researchers and policy makers.
An Immuno-epidemiological Model of Paratuberculosis
NASA Astrophysics Data System (ADS)
Martcheva, M.
2011-11-01
The primary objective of this article is to introduce an immuno-epidemiological model of paratuberculosis (Johne's disease). To develop the immuno-epidemiological model, we first develop an immunological model and an epidemiological model. Then, we link the two models through time-since-infection structure and parameters of the epidemiological model. We use the nested approach to compose the immuno-epidemiological model. Our immunological model captures the switch between the T-cell immune response and the antibody response in Johne's disease. The epidemiological model is a time-since-infection model and captures the variability of transmission rate and the vertical transmission of the disease. We compute the immune-response-dependent epidemiological reproduction number. Our immuno-epidemiological model can be used for investigation of the impact of the immune response on the epidemiology of Johne's disease.
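The following sketches how a time-since-infection reproduction number of this kind can be computed as an integral of transmission against infectious survival; the transmission and survival functions below are assumed toy forms, not the article's model.

# Epidemiological reproduction number for a time-since-infection model,
# R0 = integral of beta(tau) * survival(tau) dtau. The immune-response-
# dependent transmission rate beta(tau) below is an assumed toy form.
import numpy as np
from scipy.integrate import quad

def beta(tau):                  # transmission falls as antibodies rise
    antibody = tau / (tau + 50.0)
    return 0.02 * (1.0 - 0.8 * antibody)

def survival(tau):              # probability of remaining infectious
    return np.exp(-0.005 * tau)

R0, _ = quad(lambda t: beta(t) * survival(t), 0.0, 1000.0)
print("R0 = %.2f" % R0)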
Correlation of ground tests and analyses of a dynamically scaled Space Station model configuration
NASA Technical Reports Server (NTRS)
Javeed, Mehzad; Edighoffer, Harold H.; Mcgowan, Paul E.
1993-01-01
Verification of analytical models through correlation with ground test results of a complex space truss structure is demonstrated. A multi-component, dynamically scaled space station model configuration is the focus structure for this work. Previously established test/analysis correlation procedures are used to develop improved component analytical models. Integrated system analytical models, consisting of updated component analytical models, are compared with modal test results to establish the accuracy of system-level dynamic predictions. Design sensitivity model updating methods are shown to be effective for providing improved component analytical models. Also, the effects of component model accuracy and interface modeling fidelity on the accuracy of integrated model predictions are examined.
FacetModeller: Software for manual creation, manipulation and analysis of 3D surface-based models
NASA Astrophysics Data System (ADS)
Lelièvre, Peter G.; Carter-McAuslan, Angela E.; Dunham, Michael W.; Jones, Drew J.; Nalepa, Mariella; Squires, Chelsea L.; Tycholiz, Cassandra J.; Vallée, Marc A.; Farquharson, Colin G.
2018-01-01
The creation of 3D models is commonplace in many disciplines. Models are often built from a collection of tessellated surfaces. To apply numerical methods to such models it is often necessary to generate a mesh of space-filling elements that conforms to the model surfaces. While there are meshing algorithms that can do so, they place restrictive requirements on the surface-based models that are rarely met by existing 3D model building software. Hence, we have developed a Java application named FacetModeller, designed for efficient manual creation, modification and analysis of 3D surface-based models destined for use in numerical modelling.
Posada, David
2006-01-01
ModelTest server is a web-based application for the selection of models of nucleotide substitution using the program ModelTest. The server takes as input a text file with likelihood scores for the set of candidate models. Models can be selected with hierarchical likelihood ratio tests, or with the Akaike or Bayesian information criteria. The output includes several statistics for the assessment of model selection uncertainty, for model averaging or to estimate the relative importance of model parameters. The server can be accessed at . PMID:16845102
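A sketch of the kind of information-criterion model selection ModelTest performs, using hypothetical log-likelihood scores and parameter counts:

# Rank candidate substitution models by AIC from their log-likelihoods
# (hypothetical scores; ModelTest itself also offers hLRTs and BIC).
import math

candidates = {            # model name: (log-likelihood, free parameters)
    "JC":  (-5210.4, 0),
    "K80": (-5160.2, 1),
    "HKY": (-5101.7, 4),
    "GTR": (-5099.9, 8),
}
aic = {m: 2 * k - 2 * lnL for m, (lnL, k) in candidates.items()}
best = min(aic, key=aic.get)
# Akaike weights quantify model selection uncertainty
w = {m: math.exp(-(a - aic[best]) / 2) for m, a in aic.items()}
total = sum(w.values())
for m in sorted(aic, key=aic.get):
    print("%s AIC=%.1f weight=%.3f" % (m, aic[m], w[m] / total))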
Application of surface complexation models to anion adsorption by natural materials
USDA-ARS?s Scientific Manuscript database
Various chemical models of ion adsorption will be presented and discussed. Chemical models, such as surface complexation models, provide a molecular description of anion adsorption reactions using an equilibrium approach. Two such models, the constant capacitance model and the triple layer model w...
Space Environments and Effects: Trapped Proton Model
NASA Technical Reports Server (NTRS)
Huston, S. L.; Kauffman, W. (Technical Monitor)
2002-01-01
An improved model of the Earth's trapped proton environment has been developed. This model, designated Trapped Proton Model version 1 (TPM-1), determines the omnidirectional flux of protons with energy between 1 and 100 MeV throughout near-Earth space. The model also incorporates a true solar cycle dependence. The model consists of several data files and computer software to read them. There are three versions of the model: a FORTRAN-callable library, a stand-alone model, and a Web-based model.
The NASA Marshall engineering thermosphere model
NASA Technical Reports Server (NTRS)
Hickey, Michael Philip
1988-01-01
Described is the NASA Marshall Engineering Thermosphere (MET) model, which is a modified version of the MSFC/J70 Orbital Atmospheric Density Model as currently used in the J70MM program at MSFC. The modifications to the MSFC/J70 model required for the MET model are described; graphical and numerical examples of the models are included, as is a listing of the MET model computer program. Major differences between the numerical output of the MET model and the MSFC/J70 model are discussed.
Wind turbine model and loop shaping controller design
NASA Astrophysics Data System (ADS)
Gilev, Bogdan
2017-12-01
A model of a wind turbine is evaluated, consisting of a wind speed model, a mechanical and electrical model of the generator, and a tower oscillation model. The model of the whole system is linearized around a nominal operating point. From the linear model with uncertainties, an uncertain model is synthesized. Using the uncertain model, an H∞ controller is developed, which stabilizes the rotor frequency and damps the tower oscillations. Finally, the operation of the nonlinear system with the H∞ controller is simulated.
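A generic sketch of linearization around a nominal point by finite differences follows; the dynamics function below is a stand-in with invented coefficients, not the paper's turbine model.

# Linearize nonlinear turbine-like dynamics x' = f(x, u) around a nominal
# point by finite differences (generic sketch; f below is a stand-in, not
# the paper's wind turbine model).
import numpy as np

def f(x, u):
    omega, phi = x            # rotor speed, tower deflection (toy states)
    return np.array([-0.1 * omega + 0.5 * u - 0.2 * phi,
                     omega - 0.3 * phi])

def linearize(f, x0, u0, eps=1e-6):
    n = len(x0)
    A = np.zeros((n, n)); B = np.zeros((n, 1))
    f0 = f(x0, u0)
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (f(x0 + dx, u0) - f0) / eps
    B[:, 0] = (f(x0, u0 + eps) - f0) / eps
    return A, B

A, B = linearize(f, np.array([1.0, 0.0]), 0.2)
print(A); print(B)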
Simulated Students and Classroom Use of Model-Based Intelligent Tutoring
NASA Technical Reports Server (NTRS)
Koedinger, Kenneth R.
2008-01-01
Two educational uses of models and simulations: (1) students create models and use simulations; and (2) researchers create models of learners to guide the development of reliably effective materials. Cognitive tutors simulate and support tutoring; data are crucial to creating an effective model. Pittsburgh Science of Learning Center: resources for modeling, authoring, and experimentation; a repository of data and theory. Examples of advanced modeling efforts: SimStudent learns a rule-based model; a help-seeking model tutors metacognition; Scooter uses machine-learned detectors of student engagement.
Modeling for Battery Prognostics
NASA Technical Reports Server (NTRS)
Kulkarni, Chetan S.; Goebel, Kai; Khasin, Michael; Hogge, Edward; Quach, Patrick
2017-01-01
For any battery-powered vehicle (be it an unmanned aerial vehicle, a small passenger aircraft, or an asset in exoplanetary operations) to operate at maximum efficiency and reliability, it is critical to monitor battery health as well as performance and to predict end of discharge (EOD) and end of useful life (EOL). To fulfil these needs, it is important to capture the battery's inherent characteristics as well as operational knowledge in the form of models that can be used by monitoring, diagnostic, and prognostic algorithms. Several battery modeling methodologies have been developed in the last few years as the understanding of the underlying electrochemical mechanics has advanced. The models can generally be classified as empirical models, electrochemical engineering models, multi-physics models, and molecular/atomistic models. Empirical models are based on fitting certain functions to past experimental data, without making use of any physicochemical principles; electrical circuit equivalent models are an example of such empirical models. Electrochemical engineering models are typically continuum models that include electrochemical kinetics and transport phenomena. Each model type has its advantages and disadvantages. The former has the advantage of being computationally efficient, but has limited accuracy and robustness due to the approximations used in the developed model and, as a result of such approximations, cannot represent aging well. The latter has the advantage of being very accurate, but is often computationally inefficient, having to solve complex sets of partial differential equations, and is thus not well suited to online prognostic applications. In addition, both multi-physics and atomistic models are computationally expensive and hence even less suited to online application. An electrochemistry-based model of Li-ion batteries has been developed that captures crucial electrochemical processes, captures the effects of aging, is computationally efficient, and is of suitable accuracy for reliable EOD prediction in a variety of operational profiles. The model can be considered an electrochemical engineering model, but unlike most such models found in the literature, certain approximations are made that allow it to retain computational efficiency for online implementation. Although the focus here is on Li-ion batteries, the model is quite general and can be applied to different chemistries through a change of model parameter values. Progress on model development, including model validation results and EOD prediction results, is presented.
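To make the equivalent-circuit idea concrete, here is a minimal Thevenin-style sketch for EOD prediction; all parameter values are assumed illustrative numbers, not a validated Li-ion parameterization and not the paper's electrochemistry-based model.

# Minimal equivalent-circuit (Thevenin) battery model for EOD prediction:
# terminal voltage = OCV(soc) - I*R0 - V_RC. All parameters are assumed
# illustrative values, not a validated Li-ion parameterization.
import numpy as np

R0, R1, C1 = 0.05, 0.02, 1000.0      # ohmic and RC-pair parameters
Q = 3600.0 * 2.0                      # capacity in coulombs (2 Ah)
V_EOD = 3.2                           # end-of-discharge cutoff voltage

def ocv(soc):                         # assumed linear open-circuit voltage
    return 3.2 + 0.9 * soc

def time_to_eod(I, dt=1.0):
    soc, v_rc, t = 1.0, 0.0, 0.0
    while True:
        v_rc += dt * (I / C1 - v_rc / (R1 * C1))
        soc -= dt * I / Q
        t += dt
        if ocv(soc) - I * R0 - v_rc < V_EOD or soc <= 0.0:
            return t

print("EOD at %.0f s under 2 A load" % time_to_eod(2.0))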
NASA Astrophysics Data System (ADS)
Rooper, Christopher N.; Zimmermann, Mark; Prescott, Megan M.
2017-08-01
Deep-sea coral and sponge ecosystems are widespread throughout most of Alaska's marine waters and are associated with many different species of fishes and invertebrates. These ecosystems are vulnerable to the effects of commercial fishing activities and climate change. We compared four commonly used species distribution models (generalized linear models, generalized additive models, boosted regression trees and random forest models) and an ensemble model to predict the presence or absence and abundance of six groups of benthic invertebrate taxa in the Gulf of Alaska. All four model types performed adequately on training data for predicting presence and absence, with random forest models having the best overall performance as measured by the area under the receiver operating curve (AUC). The models also performed well on the test data for presence and absence, with average AUCs ranging from 0.66 to 0.82. For the test data, ensemble models performed best. For abundance data, there was an obvious demarcation in performance between the two regression-based methods (generalized linear models and generalized additive models) and the tree-based models. The boosted regression tree and random forest models outperformed the other models by a wide margin on both the training and testing data. However, there was a significant drop-off in performance for all models of invertebrate abundance (~50%) when moving from the training data to the testing data. Ensemble model performance was between the tree-based and regression-based methods. The maps of predictions from the models for both presence and abundance agreed very well across model types, with an increase in variability in predictions for the abundance data. We conclude that where data conform well to the modeled distribution (such as the presence-absence data and binomial distribution in this study), the four types of models will provide similar results, although the regression-type models may be more consistent with biological theory. For data with highly zero-inflated and non-normal distributions, such as the abundance data from this study, the tree-based methods performed better. Ensemble models that averaged predictions across the four model types performed better than the GLM or GAM models but slightly worse than the tree-based methods, suggesting ensemble models might be more robust to overfitting than tree methods, while mitigating some of the disadvantages in predictive performance of regression methods.
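A sketch of this kind of model comparison using scikit-learn on synthetic presence/absence data (logistic regression standing in for the GLM); the data and settings are illustrative assumptions, not the survey data of the study.

# Compare presence/absence models by AUC on synthetic data (stand-in for
# the paper's survey data); ensemble = average of model probabilities.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

models = {
    "GLM": LogisticRegression(max_iter=1000),
    "RF": RandomForestClassifier(random_state=0),
    "BRT": GradientBoostingClassifier(random_state=0),
}
probs = {}
for name, m in models.items():
    m.fit(Xtr, ytr)
    probs[name] = m.predict_proba(Xte)[:, 1]
    print(name, "AUC = %.3f" % roc_auc_score(yte, probs[name]))
ens = np.mean(list(probs.values()), axis=0)
print("Ensemble AUC = %.3f" % roc_auc_score(yte, ens))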
A toy terrestrial carbon flow model
NASA Technical Reports Server (NTRS)
Parton, William J.; Running, Steven W.; Walker, Brian
1992-01-01
A generalized carbon flow model for the major terrestrial ecosystems of the world is reported. The model is a simplification of the Century model and the Forest-Biogeochemical model. Topics covered include plant production, decomposition and nutrient cycling, biomes, the utility of the carbon flow model for predicting carbon dynamics under global change, and possible applications to state-and-transition models and environmentally driven global vegetation models.
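A minimal sketch in the spirit of such a toy carbon flow model, with two pools and assumed rate constants (not the Century or Forest-Biogeochemical parameterizations):

# Toy two-pool terrestrial carbon model: NPP feeds live biomass, which
# transfers to soil; both pools respire. Rates are assumed, not Century's.
import numpy as np

npp = 500.0                 # g C m-2 yr-1 input to live biomass
k_live, k_soil = 0.1, 0.02  # turnover rates (yr-1)
resp_frac = 0.55            # fraction of live turnover respired

live, soil = 1000.0, 5000.0
dt = 1.0
for year in range(200):
    turnover = k_live * live
    live += dt * (npp - turnover)
    soil += dt * ((1.0 - resp_frac) * turnover - k_soil * soil)
print("approximate steady state: live=%.0f, soil=%.0f g C m-2" % (live, soil))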
2010-01-01
Background: Quantitative models of biochemical and cellular systems are used to answer a variety of questions in the biological sciences. The number of published quantitative models is growing steadily thanks to increasing interest in the use of models as well as the development of improved software systems and the availability of better, cheaper computer hardware. To maximise the benefits of this growing body of models, the field needs centralised model repositories that will encourage, facilitate and promote model dissemination and reuse. Ideally, the models stored in these repositories should be extensively tested and encoded in community-supported and standardised formats. In addition, the models and their components should be cross-referenced with other resources in order to allow their unambiguous identification. Description: BioModels Database (http://www.ebi.ac.uk/biomodels/) is aimed at addressing exactly these needs. It is a freely-accessible online resource for storing, viewing, retrieving, and analysing published, peer-reviewed quantitative models of biochemical and cellular systems. The structure and behaviour of each simulation model distributed by BioModels Database are thoroughly checked; in addition, model elements are annotated with terms from controlled vocabularies as well as linked to relevant data resources. Models can be examined online or downloaded in various formats. Reaction network diagrams generated from the models are also available in several formats. BioModels Database also provides features such as online simulation and the extraction of components from large-scale models into smaller submodels. Finally, the system provides a range of web services that external software systems can use to access up-to-date data from the database. Conclusions: BioModels Database has become a recognised reference resource for systems biology. It is being used by the community in a variety of ways; for example, it is used to benchmark different simulation systems and to study the clustering of models based upon their annotations. Model deposition to the database today is advised by several publishers of scientific journals. The models in BioModels Database are freely distributed and reusable; the underlying software infrastructure is also available from SourceForge (https://sourceforge.net/projects/biomodels/) under the GNU General Public License. PMID:20587024
Drift-Scale Coupled Processes (DST and THC Seepage) Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
P. Dixon
The purpose of this Model Report (REV02) is to document the unsaturated zone (UZ) models used to evaluate the potential effects of coupled thermal-hydrological-chemical (THC) processes on UZ flow and transport. This Model Report has been developed in accordance with the ''Technical Work Plan for: Performance Assessment Unsaturated Zone'' (Bechtel SAIC Company, LLC (BSC) 2002 [160819]). The technical work plan (TWP) describes planning information pertaining to the technical scope, content, and management of this Model Report in Section 1.12, Work Package AUZM08, ''Coupled Effects on Flow and Seepage''. The plan for validation of the models documented in this Model Report is given in Attachment I, Model Validation Plans, Section I-3-4, of the TWP. Except for variations in acceptance criteria (Section 4.2), there were no deviations from this TWP. This report was developed in accordance with AP-SIII.10Q, ''Models''. This Model Report documents the THC Seepage Model and the Drift Scale Test (DST) THC Model. The THC Seepage Model is a drift-scale process model for predicting the composition of gas and water that could enter waste emplacement drifts and the effects of mineral alteration on flow in rocks surrounding drifts. The DST THC Model is a drift-scale process model relying on the same conceptual model and much of the same input data (i.e., physical, hydrological, thermodynamic, and kinetic) as the THC Seepage Model. The DST THC Model is the primary method for validating the THC Seepage Model. The DST THC Model compares predicted water and gas compositions, as well as mineral alteration patterns, with observed data from the DST. These models provide the framework to evaluate THC coupled processes at the drift scale, predict flow and transport behavior for specified thermal-loading conditions, and predict the evolution of mineral alteration and fluid chemistry around potential waste emplacement drifts. The DST THC Model is used solely for the validation of the THC Seepage Model and is not used for calibration to measured data.
Muñoz-Tamayo, R; Puillet, L; Daniel, J B; Sauvant, D; Martin, O; Taghipoor, M; Blavy, P
2018-04-01
What is a good (useful) mathematical model in animal science? For models constructed for prediction purposes, the question of model adequacy (usefulness) has been traditionally tackled by statistical analysis applied to observed experimental data relative to model-predicted variables. However, little attention has been paid to analytic tools that exploit the mathematical properties of the model equations. For example, in the context of model calibration, before attempting a numerical estimation of the model parameters, we might want to know if we have any chance of success in estimating a unique best value of the model parameters from available measurements. This question of uniqueness is referred to as structural identifiability; a mathematical property that is defined on the sole basis of the model structure within a hypothetical ideal experiment determined by a setting of model inputs (stimuli) and observable variables (measurements). Structural identifiability analysis applied to dynamic models described by ordinary differential equations (ODEs) is a common practice in control engineering and system identification. This analysis demands mathematical technicalities that are beyond the academic background of animal science, which might explain the lack of pervasiveness of identifiability analysis in animal science modelling. To fill this gap, in this paper we address the analysis of structural identifiability from a practitioner perspective by capitalizing on the use of dedicated software tools. Our objectives are (i) to provide a comprehensive explanation of the structural identifiability notion for the community of animal science modelling, (ii) to assess the relevance of identifiability analysis in animal science modelling and (iii) to motivate the community to use identifiability analysis in the modelling practice (when the identifiability question is relevant). We focus our study on ODE models. By using illustrative examples that include published mathematical models describing lactation in cattle, we show how structural identifiability analysis can contribute to advancing mathematical modelling in animal science towards the production of useful models and, moreover, highly informative experiments via optimal experiment design. Rather than attempting to impose a systematic identifiability analysis to the modelling community during model developments, we wish to open a window towards the discovery of a powerful tool for model construction and experiment design.
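A numerical illustration of structural non-identifiability on a toy model (not the cited lactation models): two distinct parameter pairs produce exactly the same output, so the individual parameters cannot be recovered from data.

# Structural non-identifiability demo: for x' = -(k1 + k2) x, y = x,
# only the sum k1 + k2 is identifiable -- two different parameter pairs
# give exactly the same output trajectory.
import numpy as np

t = np.linspace(0.0, 10.0, 101)
x0 = 1.0

def output(k1, k2):
    return x0 * np.exp(-(k1 + k2) * t)

y_a = output(0.3, 0.7)
y_b = output(0.5, 0.5)   # different parameters, same sum
print("max output difference:", np.max(np.abs(y_a - y_b)))  # ~0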
Ecosystem Model Skill Assessment. Yes We Can!
Olsen, Erik; Fay, Gavin; Gaichas, Sarah; Gamble, Robert; Lucey, Sean; Link, Jason S.
2016-01-01
Accelerated changes to global ecosystems call for holistic and integrated analyses of past, present and future states under various pressures to adequately understand current and projected future system states. Ecosystem models can inform management of human activities in a complex and changing environment, but are these models reliable? Ensuring that models are reliable for addressing management questions requires evaluating their skill in representing real-world processes and dynamics. Skill has been evaluated for just a limited set of some biophysical models. A range of skill assessment methods have been reviewed but skill assessment of full marine ecosystem models has not yet been attempted. We assessed the skill of the Northeast U.S. (NEUS) Atlantis marine ecosystem model by comparing 10-year model forecasts with observed data. Model forecast performance was compared to that obtained from a 40-year hindcast. Multiple metrics (average absolute error, root mean squared error, modeling efficiency, and Spearman rank correlation), and a suite of time-series (species biomass, fisheries landings, and ecosystem indicators) were used to adequately measure model skill. Overall, the NEUS model performed above average and thus better than expected for the key species that had been the focus of the model tuning. Model forecast skill was comparable to the hindcast skill, showing that model performance does not degenerate in a 10-year forecast mode, an important characteristic for an end-to-end ecosystem model to be useful for strategic management purposes. We identify best-practice approaches for end-to-end ecosystem model skill assessment that would improve both operational use of other ecosystem models and future model development. We show that it is possible to not only assess the skill of a complicated marine ecosystem model, but that it is necessary do so to instill confidence in model results and encourage their use for strategic management. Our methods are applicable to any type of predictive model, and should be considered for use in fields outside ecology (e.g. economics, climate change, and risk assessment). PMID:26731540
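A sketch of the four skill metrics named above, computed for illustrative synthetic series rather than the NEUS Atlantis output:

# The four skill metrics named above, computed for a synthetic forecast
# against observations (illustrative data, not the NEUS Atlantis output).
import numpy as np
from scipy.stats import spearmanr

obs = np.array([2.0, 3.1, 4.2, 3.8, 5.0, 4.6])
fc = np.array([2.3, 2.9, 4.0, 4.1, 4.7, 4.9])

aae = np.mean(np.abs(fc - obs))                                # average absolute error
rmse = np.sqrt(np.mean((fc - obs) ** 2))                       # root mean squared error
mef = 1.0 - np.sum((fc - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)  # modeling efficiency
rho, _ = spearmanr(obs, fc)                                    # Spearman rank correlation
print("AAE=%.2f RMSE=%.2f MEF=%.2f Spearman=%.2f" % (aae, rmse, mef, rho))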
Challenges and opportunities for integrating lake ecosystem modelling approaches
Mooij, Wolf M.; Trolle, Dennis; Jeppesen, Erik; Arhonditsis, George; Belolipetsky, Pavel V.; Chitamwebwa, Deonatus B.R.; Degermendzhy, Andrey G.; DeAngelis, Donald L.; Domis, Lisette N. De Senerpont; Downing, Andrea S.; Elliott, J. Alex; Ruberto, Carlos Ruberto; Gaedke, Ursula; Genova, Svetlana N.; Gulati, Ramesh D.; Hakanson, Lars; Hamilton, David P.; Hipsey, Matthew R.; Hoen, Jochem 't; Hulsmann, Stephan; Los, F. Hans; Makler-Pick, Vardit; Petzoldt, Thomas; Prokopkin, Igor G.; Rinke, Karsten; Schep, Sebastiaan A.; Tominaga, Koji; Van Dam, Anne A.; Van Nes, Egbert H.; Wells, Scott A.; Janse, Jan H.
2010-01-01
A large number and wide variety of lake ecosystem models have been developed and published during the past four decades. We identify two challenges for making further progress in this field. One such challenge is to avoid developing more models largely following the concept of others ('reinventing the wheel'). The other challenge is to avoid focusing on only one type of model, while ignoring new and diverse approaches that have become available ('having tunnel vision'). In this paper, we aim at improving the awareness of existing models and knowledge of concurrent approaches in lake ecosystem modelling, without covering all possible model tools and avenues. First, we present a broad variety of modelling approaches. To illustrate these approaches, we give brief descriptions of rather arbitrarily selected sets of specific models. We deal with static models (steady state and regression models), complex dynamic models (CAEDYM, CE-QUAL-W2, Delft 3D-ECO, LakeMab, LakeWeb, MyLake, PCLake, PROTECH, SALMO), structurally dynamic models and minimal dynamic models. We also discuss a group of approaches that could all be classified as individual based: super-individual models (Piscator, Charisma), physiologically structured models, stage-structured models and trait-based models. We briefly mention genetic algorithms, neural networks, Kalman filters and fuzzy logic. Thereafter, we zoom in, as an in-depth example, on the multi-decadal development and application of the lake ecosystem model PCLake and related models (PCLake Metamodel, Lake Shira Model, IPH-TRIM3D-PCLake). In the discussion, we argue that while the historical development of each approach and model is understandable given its 'leading principle', there are many opportunities for combining approaches. We take the point of view that a single 'right' approach does not exist and should not be strived for. Instead, multiple modelling approaches, applied concurrently to a given problem, can help develop an integrative view on the functioning of lake ecosystems. We end with a set of specific recommendations that may be of help in the further development of lake ecosystem models.
NASA Astrophysics Data System (ADS)
Duane, G. S.; Selten, F.
2016-12-01
Different models of climate and weather commonly give projections/predictions that differ widely in their details. While averaging of model outputs almost always improves results, nonlinearity implies that further improvement can be obtained from model interaction in run time, as has already been demonstrated with toy systems of ODEs and idealized quasigeostrophic models. In the supermodeling scheme, models effectively assimilate data from one another and partially synchronize with one another. Spread among models is manifest as a spread in possible inter-model connection coefficients, so that the models effectively "agree to disagree". Here, we construct a supermodel formed from variants of the SPEEDO model, a primitive-equation atmospheric model (SPEEDY) coupled to ocean and land. A suite of atmospheric models, coupled to the same ocean and land, is chosen to represent typical differences among climate models by varying model parameters. Connections are introduced between all pairs of corresponding independent variables at synoptic-scale intervals. Strengths of the inter-atmospheric connections can be considered to represent inverse inter-model observation error. Connection strengths are adapted based on an established procedure that extends the dynamical equations of a pair of synchronizing systems to synchronize parameters as well. The procedure is applied to synchronize the suite of SPEEDO models with another SPEEDO model regarded as "truth", adapting the inter-model connections along the way. The supermodel with trained connections gives marginally lower error in all fields than any weighted combination of the separate model outputs when used in "weather-prediction mode", i.e. with constant nudging to truth. Stronger results are obtained if a supermodel is used to predict the formation of coherent structures or the frequency of such. Partially synchronized SPEEDO models give a better representation of the blocked-zonal index cycle than does a weighted average of the constituent model outputs. We have thus shown that supermodeling and the synchronization-based procedure to adapt inter-model connections give results superior to output averaging not only with highly nonlinear toy systems, but with smaller nonlinearities as occur in climate models.
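A minimal sketch of the underlying synchronization mechanism: two imperfect Lorenz models nudged toward each other through a fixed connection coefficient. The adaptive training of connection coefficients described above is not shown, and the parameter values are illustrative.

# Two imperfect Lorenz models nudged toward each other -- the basic
# synchronization mechanism behind supermodeling (fixed connection
# coefficient C; the study adapts such coefficients dynamically).
import numpy as np

def lorenz(s, sigma, rho, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

dt, C = 0.005, 5.0
a = np.array([1.0, 1.0, 20.0])
b = np.array([-3.0, 2.0, 25.0])
for step in range(20000):
    da = lorenz(a, 10.0, 28.0) + C * (b - a)   # model A, nudged toward B
    db = lorenz(b, 11.0, 29.0) + C * (a - b)   # model B (different params)
    a, b = a + dt * da, b + dt * db
# A small residual distance remains because the model parameters differ.
print("final inter-model distance: %.4f" % np.linalg.norm(a - b))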
Liu, Jie; Zhang, Fu-Dong; Teng, Fei; Li, Jun; Wang, Zhi-Hong
2014-10-01
In order to detect the oil yield of oil shale in situ, based on portable near-infrared spectroscopy, the modeling and analysis methods for in-situ detection were investigated using 66 rock core samples from well No. 2 of the Fuyu oil shale base in Jilin. With the developed portable spectrometer, spectra in three data formats (reflectance, absorbance and K-M function) were acquired. With four different modeling-data optimization methods (principal component analysis-Mahalanobis distance (PCA-MD) for eliminating abnormal samples, uninformative variable elimination (UVE) for wavelength selection, and their combinations PCA-MD + UVE and UVE + PCA-MD), two modeling methods (partial least squares (PLS) and back-propagation artificial neural network (BPANN)), and the same data pre-processing, modeling and analysis experiments were performed to determine the optimum analysis model and method. The results show that the data format, the modeling-data optimization method and the modeling method all affect the analysis precision of the model. Whether or not an optimization method is used, reflectance or K-M function is the proper spectrum format of the modeling database for both modeling methods. Using the two modeling methods and the four data optimization methods, the model precisions for the same modeling database differ. For the PLS modeling method, the PCA-MD and UVE + PCA-MD data optimization methods can improve the modeling precision of a database using the K-M function spectrum data format. For the BPANN modeling method, the UVE, UVE + PCA-MD and PCA-MD + UVE data optimization methods can improve the modeling precision of a database using any of the three spectrum data formats. In addition, using reflectance spectra and the PCA-MD data optimization method, the modeling precision of the BPANN method is better than that of the PLS method. Modeling with reflectance spectra, the UVE optimization method and the BPANN modeling method gives the highest analysis precision, with a correlation coefficient (Rp) of 0.92 and a standard error of prediction (SEP) of 0.69%.
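A sketch of a PLS calibration reporting Rp and SEP, run on synthetic stand-in spectra rather than the Fuyu core data:

# PLS calibration sketch reporting Rp and SEP on held-out samples
# (synthetic 'spectra', not the Fuyu core data).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(66, 200))            # 66 samples x 200 wavelengths
y = X[:, 40] * 0.8 + X[:, 120] * 0.5 + rng.normal(scale=0.1, size=66)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
pls = PLSRegression(n_components=5).fit(Xtr, ytr)
pred = pls.predict(Xte).ravel()
rp = np.corrcoef(yte, pred)[0, 1]         # correlation coefficient
sep = np.std(yte - pred, ddof=1)          # standard error of prediction
print("Rp=%.2f SEP=%.2f" % (rp, sep))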
NASA Astrophysics Data System (ADS)
Elshall, A. S.; Ye, M.; Niu, G. Y.; Barron-Gafford, G.
2015-12-01
Models in biogeoscience involve uncertainties in observation data, model inputs, model structure, model processes and modeling scenarios. To accommodate different sources of uncertainty, multimodel analyses such as model combination, model selection, model elimination and model discrimination are becoming more popular. To illustrate the theoretical and practical challenges of multimodel analysis, we use an example from microbial soil respiration modeling. Global soil respiration releases more than ten times more carbon dioxide to the atmosphere than all anthropogenic emissions. Thus, improving our understanding of microbial soil respiration is essential for improving climate change models. This study focuses on a poorly understood phenomenon: the pulses of soil microbial respiration in response to episodic rainfall pulses (the "Birch effect"). We hypothesize that the "Birch effect" is generated by three mechanisms. To test our hypothesis, we developed and assessed five evolving microbial-enzyme models against field measurements from a semiarid savannah that is characterized by pulsed precipitation. The five models evolve step-wise such that the first model includes none of the three mechanisms, while the fifth model includes all three. The basic component of Bayesian multimodel analysis is the estimation of marginal likelihood to rank the candidate models based on their overall likelihood with respect to the observation data. The first part of the study focuses on using this Bayesian scheme to discriminate between the five candidate models. The second part discusses some theoretical and practical challenges, mainly the effect of the choice of likelihood function and of the marginal likelihood estimation method on both model ranking and Bayesian model averaging. The study shows that making valid inferences from scientific data is not a trivial task, since we are uncertain not only about the candidate scientific models, but also about the statistical methods used to discriminate between them.
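A toy sketch of marginal likelihood estimation by Monte Carlo over the prior, the quantity used to rank candidate models; this is a Gaussian example, not the microbial-enzyme models of the study.

# Marginal likelihood p(D|M) by simple Monte Carlo over the prior --
# the quantity used to rank candidate models (toy Gaussian example).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
data = rng.normal(1.0, 0.5, size=20)            # synthetic observations

def log_marginal_likelihood(prior_mu, prior_sd, n=5000):
    theta = rng.normal(prior_mu, prior_sd, size=n)   # prior samples
    logL = np.array([norm.logpdf(data, t, 0.5).sum() for t in theta])
    m = logL.max()
    return m + np.log(np.mean(np.exp(logL - m)))      # log-sum-exp

for name, mu, sd in [("M1: tight prior", 1.0, 0.2),
                     ("M2: vague prior", 0.0, 5.0)]:
    print(name, "log evidence = %.2f" % log_marginal_likelihood(mu, sd))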
NASA Astrophysics Data System (ADS)
Kwiatkowski, L.; Yool, A.; Allen, J. I.; Anderson, T. R.; Barciela, R.; Buitenhuis, E. T.; Butenschön, M.; Enright, C.; Halloran, P. R.; Le Quéré, C.; de Mora, L.; Racault, M.-F.; Sinha, B.; Totterdell, I. J.; Cox, P. M.
2014-12-01
Ocean biogeochemistry (OBGC) models span a wide variety of complexities, including highly simplified nutrient-restoring schemes, nutrient-phytoplankton-zooplankton-detritus (NPZD) models that crudely represent the marine biota, models that represent a broader trophic structure by grouping organisms as plankton functional types (PFTs) based on their biogeochemical role (dynamic green ocean models) and ecosystem models that group organisms by ecological function and trait. OBGC models are now integral components of Earth system models (ESMs), but they compete for computing resources with higher resolution dynamical setups and with other components such as atmospheric chemistry and terrestrial vegetation schemes. As such, the choice of OBGC in ESMs needs to balance model complexity and realism alongside relative computing cost. Here we present an intercomparison of six OBGC models that were candidates for implementation within the next UK Earth system model (UKESM1). The models cover a large range of biological complexity (from 7 to 57 tracers) but all include representations of at least the nitrogen, carbon, alkalinity and oxygen cycles. Each OBGC model was coupled to the ocean general circulation model Nucleus for European Modelling of the Ocean (NEMO) and results from physically identical hindcast simulations were compared. Model skill was evaluated for biogeochemical metrics of global-scale bulk properties using conventional statistical techniques. The computing cost of each model was also measured in standardised tests run at two resource levels. No model is shown to consistently outperform all other models across all metrics. Nonetheless, the simpler models are broadly closer to observations across a number of fields and thus offer a high-efficiency option for ESMs that prioritise high-resolution climate dynamics. However, simpler models provide limited insight into more complex marine biogeochemical processes and ecosystem pathways, and a parallel approach of low-resolution climate dynamics and high-complexity biogeochemistry is desirable in order to provide additional insights into biogeochemistry-climate interactions.
NASA Astrophysics Data System (ADS)
Malard, J. J.; Baig, A. I.; Hassanzadeh, E.; Adamowski, J. F.; Tuy, H.; Melgar-Quiñonez, H.
2016-12-01
Model coupling is a crucial step to constructing many environmental models, as it allows for the integration of independently-built models representing different system sub-components to simulate the entire system. Model coupling has been of particular interest in combining socioeconomic System Dynamics (SD) models, whose visual interface facilitates their direct use by stakeholders, with more complex physically-based models of the environmental system. However, model coupling processes are often cumbersome and inflexible and require extensive programming knowledge, limiting their potential for continued use by stakeholders in policy design and analysis after the end of the project. Here, we present Tinamit, a flexible Python-based model-coupling software tool whose easy-to-use API and graphical user interface make the coupling of stakeholder-built SD models with physically-based models rapid, flexible and simple for users with limited to no coding knowledge. The flexibility of the system allows end users to modify the SD model as well as the linking variables between the two models themselves with no need for recoding. We use Tinamit to couple a stakeholder-built socioeconomic model of soil salinization in Pakistan with the physically-based soil salinity model SAHYSMOD. As climate extremes increase in the region, policies to slow or reverse soil salinity buildup are increasing in urgency and must take both socioeconomic and biophysical spheres into account. We use the Tinamit-coupled model to test the impact of integrated policy options (economic and regulatory incentives to farmers) on soil salinity in the region in the face of future climate change scenarios. Use of the Tinamit model allowed for rapid and flexible coupling of the two models, allowing the end user to continue making model structure and policy changes. In addition, the clear interface (in contrast to most model coupling code) makes the final coupled model easily accessible to stakeholders with limited technical background.
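A generic sketch of the kind of time-stepped variable exchange such a coupled socioeconomic/physical simulation performs. All class names, variable names and coefficients below are hypothetical illustrations; this is not Tinamit's actual API and not the SAHYSMOD coupling.

# Generic model-coupling loop: at each step, exchange linking variables
# between a socioeconomic model and a physical model. All names and
# numbers here are hypothetical -- this is not Tinamit's API.
class SDModel:
    def __init__(self): self.irrigation = 1.0
    def step(self, salinity):            # farmers react to salinity
        self.irrigation = max(0.2, self.irrigation - 0.05 * salinity)

class SalinityModel:
    def __init__(self): self.salinity = 0.5
    def step(self, irrigation):          # salinity builds with irrigation
        self.salinity += 0.1 * irrigation - 0.02

sd, phys = SDModel(), SalinityModel()
for year in range(10):                   # coupled co-simulation loop
    sd.step(phys.salinity)
    phys.step(sd.irrigation)
    print(year, round(sd.irrigation, 3), round(phys.salinity, 3))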
Bayesian Model Selection under Time Constraints
NASA Astrophysics Data System (ADS)
Hoege, M.; Nowak, W.; Illman, W. A.
2017-12-01
Bayesian model selection (BMS) provides a consistent framework for rating and comparing models in multi-model inference. In cases where models of vastly different complexity compete with each other, we also face vastly different computational runtimes. For instance, a time series of a quantity of interest can be simulated by an autoregressive process model that takes less than a second for one run, or by a model based on partial differential equations with runtimes of several hours or even days. Classical BMS is based on a quantity called Bayesian model evidence (BME). It determines the model weights in the selection process and reflects a trade-off between the bias of a model and its complexity. In practice, however, the runtime of models is another factor relevant to model selection; hence, we believe that it should be included, leading to an overall trade-off between bias, variance and computing effort. We approach this triple trade-off from the viewpoint of our ability to generate realizations of the models under a given computational budget. One way to obtain BME values is through sampling-based integration techniques. We argue from the fact that more expensive models can be sampled much less under time constraints than faster models (in straight proportion to their runtime). The evidence computed in favor of a more expensive model is statistically less significant than the evidence computed in favor of a faster model, since sampling-based strategies are always subject to statistical sampling error. We present a straightforward way to include this imbalance in the model weights that form the basis for model selection. Our approach follows directly from the idea of insufficient significance. It is based on a computationally cheap bootstrapping error estimate of model evidence and is easy to implement. The approach is illustrated in a small synthetic modeling study.
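A toy sketch of the bootstrap error estimate of a sampling-based evidence value, showing how a slower model sampled less carries a larger statistical error; the likelihood values are synthetic, not from the study.

# Bootstrap error of a sampling-based evidence (BME) estimate: a slow
# model affords fewer prior samples, so its BME estimate carries a larger
# statistical error (toy likelihood values for illustration).
import numpy as np

rng = np.random.default_rng(2)

def bme_with_error(likelihoods, n_boot=2000):
    est = likelihoods.mean()
    boot = [rng.choice(likelihoods, size=likelihoods.size).mean()
            for _ in range(n_boot)]
    return est, np.std(boot)

fast = rng.lognormal(-2.0, 1.0, size=5000)   # many runs of a fast model
slow = rng.lognormal(-1.8, 1.0, size=50)     # few runs of a slow model
for name, L in [("fast", fast), ("slow", slow)]:
    est, err = bme_with_error(L)
    print("%s model: BME=%.4f +/- %.4f" % (name, est, err))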
Prediction-error variance in Bayesian model updating: a comparative study
NASA Astrophysics Data System (ADS)
Asadollahi, Parisa; Li, Jian; Huang, Yong
2017-04-01
In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding, in which the maximum information entropy probability model of the prediction error variances plays an important role; it is a Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. It is therefore critical for robustness in the updating of the structural model, especially in the presence of modeling errors. To date, three ways of treating prediction error variances have been seen in the literature: (1) setting constant values empirically, (2) estimating them based on the goodness-of-fit of the measured data, and (3) updating them as uncertain parameters by applying Bayes' theorem at the model class level. In this paper, the effect of these different strategies for handling the prediction error variances on model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov chain Monte Carlo is used to draw samples of the posterior probability density function of the structural model parameters as well as the uncertain prediction variances. Different levels of modeling uncertainty and complexity are modeled through three FE models: a true model, a model with more complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of the number of measurements on model updating performance is also examined. The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model class level produces more robust results, especially when the number of measurements is small.
Comparison and Analysis of Geometric Correction Models of Spaceborne SAR
Jiang, Weihao; Yu, Anxi; Dong, Zhen; Wang, Qingsong
2016-01-01
Following the development of synthetic aperture radar (SAR), SAR images have become increasingly common. Many researchers have conducted extensive studies on geolocation models, but little work has been done on the models available for the geometric correction of SAR images of different terrain. To address the terrain issue, four different models were compared and are described in this paper: a rigorous Range-Doppler (RD) model, a rational polynomial coefficients (RPC) model, a revised polynomial (PM) model and an elevation derivation (EDM) model. The results of comparisons of the geolocation capabilities of the models show that a proper model for a SAR image of a specific terrain can be determined, and a solution table was obtained to recommend a suitable model to users. Three TerraSAR-X images, two ALOS-PALSAR images and one Envisat-ASAR image were used for the experiment, including flat-terrain and mountain-terrain SAR images as well as two large-area images. Geolocation accuracies of the models for SAR images of different terrain were computed and analyzed. The comparisons show that the RD model is accurate but the least efficient; therefore, it is not the ideal model for real-time implementations. The RPC model is sufficiently accurate and efficient for the geometric correction of SAR images of flat terrain, with a precision below 0.001 pixels. The EDM model is suitable for the geolocation of SAR images of mountainous terrain, and its precision can reach 0.007 pixels. Although the PM model does not produce results as precise as the other models, its efficiency is excellent and its potential should not be underestimated. With respect to the geometric correction of SAR images over large areas, the EDM model has higher accuracy, under one pixel, whereas the RPC model consumes one third of the time of the EDM model. PMID:27347973
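A sketch of how an RPC-style model maps ground coordinates to image coordinates as a ratio of polynomials; the coefficients below are made up for illustration, whereas operational RPCs use 20-term cubic polynomials delivered with the image.

# RPC-style geolocation: image coordinates as ratios of polynomials in
# normalized (lat, lon, h). Coefficients here are made up; real RPCs use
# 20-term cubic polynomials supplied with the image metadata.
import numpy as np

def poly(c, lat, lon, h):      # truncated low-order polynomial basis
    return (c[0] + c[1] * lon + c[2] * lat + c[3] * h +
            c[4] * lon * lat + c[5] * lon * h + c[6] * lat * h)

num_r = np.array([0.01, 0.02, 0.98, 0.001, 0.0, 0.0, 0.0])
den_r = np.array([1.0, 0.001, 0.002, 0.0, 0.0, 0.0, 0.0])

def rpc_row(lat, lon, h):
    return poly(num_r, lat, lon, h) / poly(den_r, lat, lon, h)

print("normalized row: %.4f" % rpc_row(0.2, -0.1, 0.05))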
Towards policy relevant environmental modeling: contextual validity and pragmatic models
Miles, Scott B.
2000-01-01
"What makes for a good model?" In various forms, this question is a question that, undoubtedly, many people, businesses, and institutions ponder with regards to their particular domain of modeling. One particular domain that is wrestling with this question is the multidisciplinary field of environmental modeling. Examples of environmental models range from models of contaminated ground water flow to the economic impact of natural disasters, such as earthquakes. One of the distinguishing claims of the field is the relevancy of environmental modeling to policy and environment-related decision-making in general. A pervasive view by both scientists and decision-makers is that a "good" model is one that is an accurate predictor. Thus, determining whether a model is "accurate" or "correct" is done by comparing model output to empirical observations. The expected outcome of this process, usually referred to as "validation" or "ground truthing," is a stamp on the model in question of "valid" or "not valid" that serves to indicate whether or not the model will be reliable before it is put into service in a decision-making context. In this paper, I begin by elaborating on the prevailing view of model validation and why this view must change. Drawing from concepts coming out of the studies of science and technology, I go on to propose a contextual view of validity that can overcome the problems associated with "ground truthing" models as an indicator of model goodness. The problem of how we talk about and determine model validity has much to do about how we perceive the utility of environmental models. In the remainder of the paper, I argue that we should adopt ideas of pragmatism in judging what makes for a good model and, in turn, developing good models. From such a perspective of model goodness, good environmental models should facilitate communication, convey—not bury or "eliminate"—uncertainties, and, thus, afford the active building of consensus decisions, instead of promoting passive or self-righteous decisions.
On Using Meta-Modeling and Multi-Modeling to Address Complex Problems
ERIC Educational Resources Information Center
Abu Jbara, Ahmed
2013-01-01
Models, created using different modeling techniques, usually serve different purposes and provide unique insights. While each modeling technique might be capable of answering specific questions, complex problems require multiple models interoperating to complement/supplement each other; we call this Multi-Modeling. To address the syntactic and…
The US EPA has a plan to leverage recent advances in meteorological modeling to develop a "Next-Generation" air quality modeling system that will allow consistent modeling of problems from global to local scale. The meteorological model of choice is the Model for Predic...
Model Comparison of Bayesian Semiparametric and Parametric Structural Equation Models
ERIC Educational Resources Information Center
Song, Xin-Yuan; Xia, Ye-Mao; Pan, Jun-Hao; Lee, Sik-Yum
2011-01-01
Structural equation models have wide applications. One of the most important issues in analyzing structural equation models is model comparison. This article proposes a Bayesian model comparison statistic, namely the "L[subscript nu]"-measure for both semiparametric and parametric structural equation models. For illustration purposes, we consider…
Computer Models of Personality: Implications for Measurement
ERIC Educational Resources Information Center
Cranton, P. A.
1976-01-01
Current research on computer models of personality is reviewed and categorized under five headings: (1) models of belief systems; (2) models of interpersonal behavior; (3) models of decision-making processes; (4) prediction models; and (5) theory-based simulations of specific processes. The use of computer models in personality measurement is…
Uses of Computer Simulation Models in Ag-Research and Everyday Life
USDA-ARS?s Scientific Manuscript database
When the news media talk about models, they could be talking about role models, fashion models, conceptual models like the auto industry uses, or computer simulation models. A computer simulation model is a computer code that attempts to imitate the processes and functions of certain systems. There ...
ERIC Educational Resources Information Center
King, Gillian; Currie, Melissa; Smith, Linda; Servais, Michelle; McDougall, Janette
2008-01-01
A framework of operating models for interdisciplinary research programs in clinical service organizations is presented, consisting of a "clinician-researcher" skill development model, a program evaluation model, a researcher-led knowledge generation model, and a knowledge conduit model. Together, these models comprise a tailored, collaborative…
Modelling Students' Visualisation of Chemical Reaction
ERIC Educational Resources Information Center
Cheng, Maurice M. W.; Gilbert, John K.
2017-01-01
This paper proposes a model-based notion of "submicro representations of chemical reactions". Based on three structural models of matter (the simple particle model, the atomic model and the free electron model of metals), we suggest there are two major models of reaction in school chemistry curricula: (a) reactions that are simple…
Multilevel and Latent Variable Modeling with Composite Links and Exploded Likelihoods
ERIC Educational Resources Information Center
Rabe-Hesketh, Sophia; Skrondal, Anders
2007-01-01
Composite links and exploded likelihoods are powerful yet simple tools for specifying a wide range of latent variable models. Applications considered include survival or duration models, models for rankings, small area estimation with census information, models for ordinal responses, item response models with guessing, randomized response models,…
Planning Major Curricular Change.
ERIC Educational Resources Information Center
Kirkland, Travis P.
Decision-making and change models can take many forms. One researcher (Nordvall, 1982) has suggested five conceptual models for introducing change: a political model; a rational decision-making model; a social interaction decision model; the problem-solving method; and an adaptive/linkage model which is an amalgam of each of the other models.…
UNITED STATES METEOROLOGICAL DATA - DAILY AND HOURLY FILES TO SUPPORT PREDICTIVE EXPOSURE MODELING
ORD numerical models for pesticide exposure include a model of spray drift (AgDisp), a cropland pesticide persistence model (PRZM), a surface water exposure model (EXAMS), and a model of fish bioaccumulation (BASS). A unified climatological database for these models has been asse...
2009-12-01
[Acronym-list residue: BPM, Business Process Modeling; BPMN, Business Process Modeling Notation; SoA, Service-oriented Architecture; UML, Unified Modeling Language; CSP, …] …system developers. Supporting technologies include Business Process Modeling Notation (BPMN), Unified Modeling Language (UML), model-driven architecture
Hunt, R.J.; Anderson, M.P.; Kelson, V.A.
1998-01-01
This paper demonstrates that analytic element models have potential as powerful screening tools that can facilitate or improve calibration of more complicated finite-difference and finite-element models. We demonstrate how a two-dimensional analytic element model was used to identify errors in a complex three-dimensional finite-difference model caused by incorrect specification of boundary conditions. An improved finite-difference model was developed using boundary conditions developed from a far-field analytic element model. Calibration of a revised finite-difference model was achieved using fewer zones of hydraulic conductivity and lake bed conductance than the original finite-difference model. Calibration statistics were also improved in that simulated base-flows were much closer to measured values. The improved calibration is due mainly to improved specification of the boundary conditions made possible by first solving the far-field problem with an analytic element model.
A stochastic model for tumor geometry evolution during radiation therapy in cervical cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yifang; Lee, Chi-Guhn; Chan, Timothy C. Y., E-mail: tcychan@mie.utoronto.ca
2014-02-15
Purpose: To develop mathematical models to predict the evolution of tumor geometry in cervical cancer undergoing radiation therapy. Methods: The authors develop two mathematical models to estimate tumor geometry change: a Markov model and an isomorphic shrinkage model. The Markov model describes tumor evolution by investigating the change in state (either tumor or nontumor) of voxels on the tumor surface. It assumes that the evolution follows a Markov process. Transition probabilities are obtained using maximum likelihood estimation and depend on the states of neighboring voxels. The isomorphic shrinkage model describes tumor shrinkage or growth in terms of layers of voxels on the tumor surface, instead of modeling individual voxels. The two proposed models were applied to data from 29 cervical cancer patients treated at Princess Margaret Cancer Centre and then compared to a constant volume approach. Model performance was measured using sensitivity and specificity. Results: The Markov model outperformed both the isomorphic shrinkage and constant volume models in terms of the trade-off between sensitivity (target coverage) and specificity (normal tissue sparing). Generally, the Markov model achieved a few percentage points of improvement in either sensitivity or specificity compared to the other models. The isomorphic shrinkage model was comparable to the Markov approach under certain parameter settings. Convex tumor shapes were easier to predict. Conclusions: By modeling tumor geometry change at the voxel level using a probabilistic model, improvements in target coverage and normal tissue sparing are possible. Our Markov model is flexible and has tunable parameters to adjust model performance to meet a range of criteria. Such a model may support the development of an adaptive paradigm for radiation therapy of cervical cancer.
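A toy version of the Markov idea, under invented transition probabilities rather than the paper's maximum-likelihood estimates: each surface voxel flips between tumor and non-tumor states with probabilities that depend on how many of its neighbors are tumor.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 2D slice: 1 = tumor, 0 = non-tumor; a disk as the initial tumor.
n = 40
yy, xx = np.mgrid[0:n, 0:n]
state = ((yy - n // 2)**2 + (xx - n // 2)**2 < 10**2).astype(int)

def step(state, p_shrink=0.15, p_grow=0.05):
    """One Markov step; flip probabilities scale with the neighbor tumor count."""
    nb = (np.roll(state, 1, 0) + np.roll(state, -1, 0) +
          np.roll(state, 1, 1) + np.roll(state, -1, 1))   # 4-neighbor tumor sum
    u = rng.uniform(size=state.shape)
    new = state.copy()
    # Tumor voxels with few tumor neighbors are likely to regress.
    new[(state == 1) & (u < p_shrink * (4 - nb) / 4)] = 0
    # Non-tumor voxels surrounded by tumor may be invaded.
    new[(state == 0) & (u < p_grow * nb / 4)] = 1
    return new

for week in range(5):
    state = step(state)
    print(f"week {week + 1}: tumor voxels = {state.sum()}")
```

The neighbor-dependent probabilities are what make the process more expressive than uniform shrinkage, which is the comparison the abstract draws against the constant-volume and isomorphic approaches.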
The Radiative Forcing Model Intercomparison Project (RFMIP): Assessment and Characterization of Forcing to Enable Feedback Studies
NASA Astrophysics Data System (ADS)
Pincus, R.; Stevens, B. B.; Forster, P.; Collins, W.; Ramaswamy, V.
2014-12-01
An enormous amount of attention has been paid to the diversity of responses in the CMIP and other multi-model ensembles. This diversity is normally interpreted as a distribution in climate sensitivity driven by some distribution of feedback mechanisms. Identification of these feedbacks relies on precise identification of the forcing to which each model is subject, including distinguishing true error from model diversity. The Radiative Forcing Model Intercomparison Project (RFMIP) aims to disentangle the role of forcing from model sensitivity as determinants of varying climate model response by carefully characterizing the radiative forcing to which such models are subject and by coordinating experiments in which it is specified. RFMIP consists of four activities: 1) an assessment of accuracy in flux and forcing calculations for greenhouse gases under past, present, and future climates, using off-line radiative transfer calculations in specified atmospheres with climate model parameterizations and reference models; 2) characterization and assessment of model-specific historical forcing by anthropogenic aerosols, based on coordinated diagnostic output from climate models and off-line radiative transfer calculations with reference models; 3) characterization of model-specific effective radiative forcing, including contributions of model climatology and rapid adjustments, using coordinated climate model integrations and off-line radiative transfer calculations with a single fast model; and 4) assessment of climate model response to precisely-characterized radiative forcing over the historical record, including efforts to infer true historical forcing from patterns of response, by direct specification of non-greenhouse-gas forcing in a series of coordinated climate model integrations. This talk discusses the rationale for RFMIP, provides an overview of the four activities, and presents preliminary motivating results.
NASA Technical Reports Server (NTRS)
Wang, Yi; Pant, Kapil; Brenner, Martin J.; Ouellette, Jeffrey A.
2018-01-01
This paper presents a data analysis and modeling framework to tailor and develop a linear parameter-varying (LPV) aeroservoelastic (ASE) model database for flexible aircraft across a broad 2D flight parameter space. The Kriging surrogate model is constructed using ASE models at a fraction of the grid points within the original model database, and the ASE model at any flight condition can then be obtained simply through surrogate model interpolation. A greedy sampling algorithm is developed to select the next sample point: the one carrying the worst relative error between the surrogate model prediction and the benchmark model in the frequency domain among all input-output channels. The process is iterated to incrementally improve surrogate model accuracy until a pre-determined tolerance or iteration budget is met. The methodology is applied to the ASE model database of a flexible aircraft currently being tested at NASA/AFRC for flutter suppression and gust load alleviation. Our studies indicate that the proposed method can reduce the number of models in the original database by 67%. Even so, the ASE models obtained through Kriging interpolation match the models in the original database constructed directly from the physics-based tool, with the worst relative error far below 1%. The interpolated ASE model exhibits continuously varying gains along a set of prescribed flight conditions. More importantly, the selected grid points are distributed non-uniformly in the parameter space, a) capturing the distinctly different dynamic behavior and its dependence on flight parameters, and b) reiterating the need and utility of adaptive space sampling techniques for ASE model database compaction. The present framework is directly extendible to high-dimensional flight parameter spaces and can be used to guide ASE model development, model order reduction, robust control synthesis, and novel vehicle design for flexible aircraft.
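The greedy-sampling loop can be sketched generically: interpolate from the current sample set, find the candidate grid point with the worst relative error against the benchmark, and add it. The 1D toy function and Gaussian radial-basis interpolant below are stand-ins for the actual Kriging model and ASE database.

```python
import numpy as np

def benchmark(p):
    """Hypothetical 'truth' standing in for the full ASE model database."""
    return np.sin(3 * p) + 0.3 * p

def rbf_fit_predict(xs, ys, xq, eps=2.0):
    """Gaussian RBF interpolation (a simple stand-in for Kriging)."""
    K = np.exp(-(eps * (xs[:, None] - xs[None, :]))**2)
    w = np.linalg.solve(K + 1e-10 * np.eye(len(xs)), ys)
    Kq = np.exp(-(eps * (xq[:, None] - xs[None, :]))**2)
    return Kq @ w

grid = np.linspace(0.0, 4.0, 200)             # candidate flight conditions
truth = benchmark(grid)
sampled = [0, len(grid) // 2, len(grid) - 1]  # start from corners + center

for it in range(12):
    xs = grid[sampled]
    pred = rbf_fit_predict(xs, benchmark(xs), grid)
    rel_err = np.abs(pred - truth) / np.max(np.abs(truth))
    worst = int(np.argmax(rel_err))
    if rel_err[worst] < 1e-2:                 # pre-set tolerance
        print(f"converged after {it} additions, {len(sampled)} samples")
        break
    sampled.append(worst)                     # greedy: add the worst point
```

The non-uniform spread of the final sample set mirrors the abstract's observation that the selected grid points cluster where the dynamics change fastest.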
Mind the Noise When Identifying Computational Models of Cognition from Brain Activity.
Kolossa, Antonio; Kopp, Bruno
2016-01-01
The aim of this study was to analyze how measurement error affects the validity of modeling studies in computational neuroscience. A synthetic validity test was created using simulated P300 event-related potentials as an example. The model space comprised four computational models of single-trial P300 amplitude fluctuations which differed in terms of complexity and dependency. The single-trial fluctuation of simulated P300 amplitudes was computed on the basis of one of the models, at various levels of measurement error and at various numbers of data points. Bayesian model selection was performed based on exceedance probabilities. At very low numbers of data points, the least complex model generally outperformed the data-generating model. Invalid model identification also occurred at low levels of data quality and under low numbers of data points if the winning model's predictors were closely correlated with the predictors from the data-generating model. Given sufficient data quality and numbers of data points, the data-generating model could be correctly identified, even against models which were very similar to the data-generating model. Thus, a number of variables affect the validity of computational modeling studies, and data quality and numbers of data points are among the main factors relevant to the issue. Further, the nature of the model space (i.e., model complexity, model dependency) should not be neglected. This study provided quantitative results which show the importance of ensuring the validity of computational modeling via adequately prepared studies. The accomplishment of synthetic validity tests is recommended for future applications. Beyond that, we propose to render the demonstration of sufficient validity via adequate simulations mandatory for computational modeling studies.
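The logic of such a synthetic validity test is easy to reproduce in miniature: generate data from a known model at a given noise level and number of trials, then check whether a selection criterion recovers the generator. The sketch below uses BIC over linear regression models rather than the paper's exceedance-probability machinery; all models and data are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

def bic(y, yhat, k):
    n = len(y)
    rss = ((y - yhat)**2).sum()
    return n * np.log(rss / n) + k * np.log(n)

def recovery_rate(n_trials, noise_sd, n_rep=200):
    """Fraction of runs in which BIC picks the data-generating (2-predictor) model."""
    hits = 0
    for _ in range(n_rep):
        x1 = rng.normal(size=n_trials)
        x2 = rng.normal(size=n_trials)
        y = 1.0 * x1 + 0.5 * x2 + rng.normal(0, noise_sd, n_trials)
        # Candidate models: intercept-only, x1-only, x1+x2 (the generator).
        designs = [np.ones((n_trials, 1)),
                   np.column_stack([np.ones(n_trials), x1]),
                   np.column_stack([np.ones(n_trials), x1, x2])]
        scores = []
        for X in designs:
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            scores.append(bic(y, X @ beta, X.shape[1]))
        hits += int(np.argmin(scores) == 2)
    return hits / n_rep

for n_trials in (20, 100, 500):
    for noise_sd in (0.5, 2.0):
        print(n_trials, noise_sd, recovery_rate(n_trials, noise_sd))
```

Running this reproduces the qualitative finding above: with few data points or high noise, the simpler model tends to win over the true generator.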
Chasing Perfection: Should We Reduce Model Uncertainty in Carbon Cycle-Climate Feedbacks
NASA Astrophysics Data System (ADS)
Bonan, G. B.; Lombardozzi, D.; Wieder, W. R.; Lindsay, K. T.; Thomas, R. Q.
2015-12-01
Earth system model simulations of the terrestrial carbon (C) cycle show large multi-model spread in the carbon-concentration and carbon-climate feedback parameters. Large differences among models are also seen in their simulation of global vegetation and soil C stocks and other aspects of the C cycle, prompting concern about model uncertainty and our ability to faithfully represent fundamental aspects of the terrestrial C cycle in Earth system models. Benchmarking analyses that compare model simulations with common datasets have been proposed as a means to assess model fidelity with observations, and various model-data fusion techniques have been used to reduce model biases. While such efforts will reduce multi-model spread, they may not help reduce uncertainty (and increase confidence) in projections of the C cycle over the twenty-first century. Many ecological and biogeochemical processes represented in Earth system models are poorly understood at both the site scale and across large regions, where biotic and edaphic heterogeneity are important. Our experience with the Community Land Model (CLM) suggests that large uncertainty in the terrestrial C cycle and its feedback with climate change is an inherent property of biological systems. The challenge of representing life in Earth system models, with the rich diversity of lifeforms and complexity of biological systems, may necessitate a multitude of modeling approaches to capture the range of possible outcomes. Such models should encompass a range of plausible model structures. We distinguish between model parameter uncertainty and model structural uncertainty. Focusing on improved parameter estimates may, in fact, limit progress in assessing model structural uncertainty associated with realistically representing biological processes. Moreover, higher confidence may be achieved through better process representation, but this does not necessarily reduce uncertainty.
Clarity versus complexity: land-use modeling as a practical tool for decision-makers
Sohl, Terry L.; Claggett, Peter
2013-01-01
The last decade has seen a remarkable increase in the number of modeling tools available to examine future land-use and land-cover (LULC) change. Integrated modeling frameworks, agent-based models, cellular automata approaches, and other modeling techniques have substantially improved the representation of complex LULC systems, with each method using a different strategy to address complexity. However, despite the development of new and better modeling tools, the use of these tools is limited for actual planning, decision-making, or policy-making purposes. LULC modelers have become very adept at creating tools for modeling LULC change, but complicated models and a lack of transparency limit their utility for decision-makers. The complicated nature of many LULC models also makes it impractical or even impossible to perform a rigorous analysis of modeling uncertainty. This paper reviews land-cover modeling approaches and the issues caused by the complicated nature of models, and provides suggestions to facilitate the increased use of LULC models by decision-makers and other stakeholders. The utility of LULC models themselves can be improved by 1) providing model code and documentation, 2) using scenario frameworks to frame overall uncertainties, 3) improving methods for generalizing the key LULC processes most important to stakeholders, and 4) adopting more rigorous standards for validating models and quantifying uncertainty. Communication with decision-makers and other stakeholders can be improved by increasing stakeholder participation in all stages of the modeling process, increasing the transparency of model structure and uncertainties, and developing user-friendly decision-support systems to bridge the link between LULC science and policy. By considering these options, LULC science will be better positioned to support decision-makers and increase real-world application of LULC modeling results.
Healy, Richard W.; Scanlon, Bridget R.
2010-01-01
Simulation models are widely used in all types of hydrologic studies, and many of these models can be used to estimate recharge. Models can provide important insight into the functioning of hydrologic systems by identifying factors that influence recharge. The predictive capability of models can be used to evaluate how changes in climate, water use, land use, and other factors may affect recharge rates. Most hydrological simulation models, including watershed models and groundwater-flow models, are based on some form of water-budget equation, so the material in this chapter is closely linked to that in Chapter 2. Empirical models that are not based on a water-budget equation have also been used for estimating recharge; these models generally take the form of simple estimation equations that define annual recharge as a function of precipitation and possibly other climatic data or watershed characteristics. Model complexity varies greatly. Some models are simple accounting models; others attempt to accurately represent the physics of water movement through each compartment of the hydrologic system. Some models provide estimates of recharge explicitly; for example, a model based on the Richards equation can simulate water movement from the soil surface through the unsaturated zone to the water table. Recharge estimates can be obtained indirectly from other models. For example, recharge is a parameter in groundwater-flow models that solve for hydraulic head (i.e., groundwater level). Recharge estimates can be obtained through a model calibration process in which recharge and other model parameter values are adjusted so that simulated water levels agree with measured water levels. The simulation that provides the closest agreement is called the best fit, and the recharge value used in that simulation is the model-generated estimate of recharge.
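As a minimal illustration of the calibration route to recharge described above: invent a toy relation between recharge and simulated heads, then search for the recharge value whose simulated heads best match measurements. Both the head function and the data here are assumptions for illustration, not any specific groundwater code.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_heads(recharge, x):
    """Hypothetical head profile for a 1D aquifer, linear in recharge."""
    return 100.0 + recharge * x * (10.0 - x) / 2.0

x_obs = np.linspace(1.0, 9.0, 8)                  # observation well locations
h_obs = simulate_heads(0.3, x_obs) + rng.normal(0, 0.2, x_obs.size)

# Grid-search calibration: pick recharge minimizing sum of squared head errors.
candidates = np.linspace(0.0, 1.0, 1001)
sse = [((simulate_heads(r, x_obs) - h_obs)**2).sum() for r in candidates]
best = candidates[int(np.argmin(sse))]
print("model-generated recharge estimate:", best)   # should be near 0.3
```

The best-fit recharge is the "model-generated estimate" in the chapter's sense; real codes replace the grid search with formal parameter estimation.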
Schryver, Jack; Nutaro, James; Shankar, Mallikarjun
2015-10-30
An agent-based simulation model hierarchy emulating disease states and behaviors critical to the progression of type 2 diabetes was designed and implemented in the DEVS framework. The models are translations of basic elements of an established system dynamics model of diabetes. This model hierarchy, which mimics diabetes progression over an aggregated U.S. population, was disaggregated and reconstructed bottom-up at the individual (agent) level. Four levels of model complexity were defined in order to systematically evaluate which parameters are needed to mimic outputs of the system dynamics model. The four estimated models attempted to replicate stock counts representing disease states in the system dynamics model, while estimating impacts of an elderliness factor, an obesity factor, and health-related behavioral parameters. Health-related behavior was modeled as a simple realization of the Theory of Planned Behavior, a joint function of individual attitude and diffusion of social norms spreading over each agent's social network. Although the most complex agent-based simulation model contained 31 adjustable parameters, all models were considerably less complex than the system dynamics model, which required numerous time series inputs to make its predictions. All three elaborations of the baseline model provided significantly improved fits to the output of the system dynamics model. The performances of the baseline agent-based model and its extensions illustrate a promising approach to translating complex system dynamics models into agent-based alternatives that are both conceptually simpler and capable of capturing the main effects of complex local agent-agent interactions.
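A hedged sketch of the behavioral core described above: each agent's probability of healthy behavior is a weighted combination of its own attitude and the fraction of its (randomly wired) social network currently behaving healthily. The weights, network, and thresholds are invented for illustration, not the paper's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(5)

n_agents = 500
attitude = rng.uniform(0, 1, n_agents)            # fixed individual attitude
behaving = rng.uniform(0, 1, n_agents) < 0.2      # initial healthy behavior
k = 8                                             # peers watched per agent
peers = rng.integers(0, n_agents, size=(n_agents, k))

w_attitude, w_norm = 0.6, 0.4                     # assumed TPB-style weights
for t in range(30):
    social_norm = behaving[peers].mean(axis=1)    # peer adoption fraction
    intention = w_attitude * attitude + w_norm * social_norm
    behaving = rng.uniform(0, 1, n_agents) < intention
    if t % 10 == 0:
        print(f"t={t}: fraction behaving healthily = {behaving.mean():.2f}")
```

The diffusion term is what lets local agent-agent interactions produce the population-level trends that the aggregated system dynamics model represents with stocks and flows.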
Kalvāns, Andis; Bitāne, Māra; Kalvāne, Gunta
2015-02-01
A historical phenological record and meteorological data for the period 1960-2009 are used to analyse the ability of seven phenological models to predict leaf unfolding and the beginning of flowering for two tree species, silver birch (Betula pendula) and bird cherry (Padus racemosa), in Latvia. Model stability is estimated by performing multiple model fitting runs using half of the data for model training and the other half for evaluation. The correlation coefficient, mean absolute error, and mean squared error are used to evaluate model performance. UniChill (a model using a sigmoidal development rate-temperature relationship and taking into account the necessity for dormancy release) and DDcos (a simple degree-day model considering the diurnal temperature fluctuations) are found to be the best models for describing the considered spring phases. A strong collinearity between base temperature and required heat sum is found for several model fitting runs of the simple degree-day based models. Large variation of the model parameters between different model fitting runs in the case of the more complex models indicates similar collinearity and over-parameterization of these models. It is suggested that model performance can be improved by incorporating the resolved daily temperature fluctuations of the DDcos model into the framework of the more complex models (e.g. UniChill). The average base temperature, as found by the DDcos model, for B. pendula leaf unfolding is 5.6 °C and for the start of flowering 6.7 °C; for P. racemosa, the respective base temperatures are 3.2 °C and 3.4 °C.
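The degree-day idea at the heart of models like DDcos is compact enough to sketch: accumulate daily mean temperature excess above a base temperature from a fixed start date and predict the phase when the running sum crosses the required heat sum. The synthetic temperatures and the heat-sum threshold below are invented (the base temperature reuses the 5.6 °C value reported above), and the diurnal-cycle refinement of DDcos is omitted.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical daily mean temperatures from 1 January (day index 0).
days = np.arange(150)
t_mean = -5.0 + 0.18 * days + rng.normal(0, 2.0, days.size)  # spring warming

def degree_day_prediction(t_mean, t_base=5.6, heat_sum=90.0):
    """Day-of-year at which accumulated degree-days exceed the required heat sum."""
    forcing = np.clip(t_mean - t_base, 0.0, None)  # only warmth above base counts
    cumulative = np.cumsum(forcing)
    crossed = np.nonzero(cumulative >= heat_sum)[0]
    return int(crossed[0]) if crossed.size else None

print("predicted leaf-unfolding day of year:", degree_day_prediction(t_mean))
```

The collinearity noted in the abstract is visible in this form: raising the base temperature and lowering the heat sum shift the crossing day in compensating directions, so many (t_base, heat_sum) pairs fit equally well.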
A toolbox and record for scientific models
NASA Technical Reports Server (NTRS)
Ellman, Thomas
1994-01-01
Computational science presents a host of challenges for the field of knowledge-based software design. Scientific computation models are difficult to construct. Models constructed by one scientist are easily misapplied by other scientists to problems for which they are not well-suited. Finally, models constructed by one scientist are difficult for others to modify or extend to handle new types of problems. Construction of scientific models actually involves much more than the mechanics of building a single computational model. In the course of developing a model, a scientist will often test a candidate model against experimental data or against a priori expectations. Test results often lead to revisions of the model and a consequent need for additional testing. During a single model development session, a scientist typically examines a whole series of alternative models, each using different simplifying assumptions or modeling techniques. A useful scientific software design tool must support these aspects of the model development process as well. In particular, it should propose and carry out tests of candidate models. It should analyze test results and identify models and parts of models that must be changed. It should determine what types of changes can potentially cure a given negative test result. It should organize candidate models, test data, and test results into a coherent record of the development process. Finally, it should exploit the development record for two purposes: (1) automatically determining the applicability of a scientific model to a given problem; (2) supporting revision of a scientific model to handle a new type of problem. Existing knowledge-based software design tools must be extended in order to provide these facilities.
Donnolley, Natasha R; Chambers, Georgina M; Butler-Henderson, Kerryn A; Chapman, Michael G; Sullivan, Elizabeth A
2017-08-01
Without a standard terminology to classify models of maternity care, it is problematic to compare and evaluate clinical outcomes across different models. The Maternity Care Classification System is a novel system developed in Australia to classify models of maternity care based on their characteristics and an overarching broad model descriptor (Major Model Category). This study aimed to assess the extent of variability in the defining characteristics of models of care grouped to the same Major Model Category, using the Maternity Care Classification System. All public hospital maternity services in New South Wales, Australia, were invited to complete a web-based survey classifying two local models of care using the Maternity Care Classification System. A descriptive analysis of the variation in 15 attributes of models of care was conducted to evaluate the level of heterogeneity within and across Major Model Categories. Sixty-nine out of seventy hospitals responded, classifying 129 models of care. There was wide variation in a number of important attributes of models classified to the same Major Model Category. The category of 'Public hospital maternity care' contained the most variation across all characteristics. This study demonstrated that although models of care can be grouped into a distinct set of Major Model Categories, there are significant variations in models of the same type. This could result in seemingly 'like' models of care being incorrectly compared if grouped only by the Major Model Category. Copyright © 2017 Australian College of Midwives. Published by Elsevier Ltd. All rights reserved.
The Diffusion Model Is Not a Deterministic Growth Model: Comment on Jones and Dzhafarov (2014)
Smith, Philip L.; Ratcliff, Roger; McKoon, Gail
2015-01-01
Jones and Dzhafarov (2014) claim that several current models of speeded decision making in cognitive tasks, including the diffusion model, can be viewed as special cases of other general models or model classes. The general models can be made to match any set of response time (RT) distribution and accuracy data exactly by a suitable choice of parameters and so are unfalsifiable. The implication of their claim is that models like the diffusion model are empirically testable only by artificially restricting them to exclude unfalsifiable instances of the general model. We show that Jones and Dzhafarov’s argument depends on enlarging the class of “diffusion” models to include models in which there is little or no diffusion. The unfalsifiable models are deterministic or near-deterministic growth models, from which the effects of within-trial variability have been removed or in which they are constrained to be negligible. These models attribute most or all of the variability in RT and accuracy to across-trial variability in the rate of evidence growth, which is permitted to be distributed arbitrarily and to vary freely across experimental conditions. In contrast, in the standard diffusion model, within-trial variability in evidence is the primary determinant of variability in RT. Across-trial variability, which determines the relative speed of correct responses and errors, is theoretically and empirically constrained. Jones and Dzhafarov’s attempt to include the diffusion model in a class of models that also includes deterministic growth models misrepresents and trivializes it and conveys a misleading picture of cognitive decision-making research. PMID:25347314
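For intuition, the standard diffusion model's within-trial variability is easy to simulate: evidence accumulates as a drift plus Gaussian increments until it hits one of two boundaries. The parameters below are arbitrary illustrative values; shrinking the within-trial noise term toward zero collapses the process to the deterministic growth models discussed above.

```python
import numpy as np

rng = np.random.default_rng(7)

def diffusion_trial(drift=0.15, sigma=1.0, bound=1.0, dt=0.001, max_t=5.0):
    """Simulate one diffusion-model trial; return (response, RT in seconds)."""
    x, t = 0.0, 0.0
    sd = sigma * np.sqrt(dt)
    while abs(x) < bound and t < max_t:
        x += drift * dt + rng.normal(0.0, sd)   # within-trial variability
        t += dt
    return (1 if x >= bound else 0), t

results = [diffusion_trial() for _ in range(2000)]
resp = np.array([r for r, _ in results])
rt = np.array([t for _, t in results])
print("accuracy        :", resp.mean())
print("mean RT correct :", rt[resp == 1].mean())
print("mean RT error   :", rt[resp == 0].mean())
```

With sigma fixed at a nonzero value, both correct and error responses arise from the same drift, illustrating why within-trial noise, not across-trial drift variability, is the primary source of RT variability in the standard model.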
Hill, Mary C.; L. Foglia,; S. W. Mehl,; P. Burlando,
2013-01-01
Model adequacy is evaluated with alternative models rated using model selection criteria (AICc, BIC, and KIC) and three other statistics. The model selection criteria are tested with cross-validation experiments, and insights for using alternative models to evaluate model structural adequacy are provided. The study is conducted using the computer codes UCODE_2005 and MMA (MultiModel Analysis). One recharge alternative is simulated using the TOPKAPI hydrological model. The predictions evaluated include eight heads and three flows located where ecological consequences and model precision are of concern. Cross-validation is used to obtain measures of prediction accuracy. Sixty-four models were designed deterministically and differ in their representation of river, recharge, bedrock topography, and hydraulic conductivity. Results include: (1) What may seem like inconsequential choices in model construction may be important to predictions; analysis of predictions from alternative models is advised. (2) None of the model selection criteria consistently identified models with more accurate predictions. This is a disturbing result that suggests reconsidering the utility of model selection criteria and/or of the cross-validation measures used in this work to measure model accuracy. (3) KIC displayed poor performance for the present regression problems; theoretical considerations suggest that the difficulties are associated with wide variations in the sensitivity term of KIC resulting from the models being nonlinear and the problems being ill-posed due to parameter correlations and insensitivity. The other criteria performed somewhat better, and similarly to each other. (4) Quantities with high leverage are more difficult to predict. The results are expected to be generally applicable to models of environmental systems.
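The criteria compared there have simple closed forms for least-squares problems; a quick sketch assuming Gaussian errors (KIC, which needs the Fisher-information sensitivity term, is omitted, and the residuals and parameter counts are invented):

```python
import numpy as np

def ic_scores(residuals, k):
    """AIC, AICc, and BIC for a least-squares fit with k estimated parameters."""
    n = residuals.size
    loglik = -0.5 * n * (np.log(2 * np.pi * (residuals**2).mean()) + 1.0)
    aic = -2 * loglik + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)   # small-sample correction
    bic = -2 * loglik + k * np.log(n)
    return aic, aicc, bic

# Example: two competing fits to the same 11 observations.
resid_simple = np.array([0.8, -0.5, 0.3, -0.9, 0.6, -0.2, 0.7, -0.4, 0.5, -0.6, 0.1])
resid_complex = resid_simple * 0.7               # better fit, more parameters
for name, r, k in [("simple", resid_simple, 3), ("complex", resid_complex, 7)]:
    print(name, ["%.1f" % v for v in ic_scores(r, k)])
```

With n this small, AICc penalizes the seven-parameter model heavily, which is exactly the trade-off the criteria are meant to arbitrate.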
Graham, Jim; Young, Nick; Jarnevich, Catherine S.; Newman, Greg; Evangelista, Paul; Stohlgren, Thomas J.
2013-01-01
Habitat suitability maps are commonly created by modeling a species' environmental niche from occurrences and environmental characteristics. Here, we introduce the hyper-envelope modeling interface (HEMI), providing a new method for creating habitat suitability models that uses Bezier surfaces to model a species' niche in environmental space. HEMI allows modeled surfaces to be visualized and edited in environmental space based on expert knowledge and does not require absence points for model development. The modeled surfaces require relatively few parameters compared to similar modeling approaches and may produce models that better match ecological niche theory. As a case study, we modeled the invasive species tamarisk (Tamarix spp.) in the western USA. We compare results from HEMI with those from existing similar modeling approaches (including BioClim, BioMapper, and Maxent). We used synthetic surfaces to create visualizations of the various models in environmental space, and used a modified area-under-the-curve (AUC) statistic and the Akaike information criterion (AIC) as measures of model performance. We show that HEMI produced slightly better AUC values than all approaches except Maxent, and better AIC values overall. HEMI created a model with only ten parameters, while Maxent produced a model with over 100 parameters and BioClim used only eight. Additionally, HEMI allowed visualization and editing of the model in environmental space to develop alternative potential habitat scenarios. The use of Bezier surfaces can provide simple models that match our expectations of biological niche models and, at least in some cases, out-perform more complex approaches.
Probabilistic Graphical Model Representation in Phylogenetics
Höhna, Sebastian; Heath, Tracy A.; Boussau, Bastien; Landis, Michael J.; Ronquist, Fredrik; Huelsenbeck, John P.
2014-01-01
Recent years have seen a rapid expansion of the model space explored in statistical phylogenetics, emphasizing the need for new approaches to statistical model representation and software development. Clear communication and representation of the chosen model is crucial for: (i) reproducibility of an analysis, (ii) model development, and (iii) software design. Moreover, a unified, clear and understandable framework for model representation lowers the barrier for beginners and nonspecialists to grasp complex phylogenetic models, including their assumptions and parameter/variable dependencies. Graphical modeling is a unifying framework that has gained in popularity in the statistical literature in recent years. The core idea is to break complex models into conditionally independent distributions. The strength lies in the comprehensibility, flexibility, and adaptability of this formalism, and the large body of computational work based on it. Graphical models are well-suited to teach statistical models, to facilitate communication among phylogeneticists and in the development of generic software for simulation and statistical inference. Here, we provide an introduction to graphical models for phylogeneticists and extend the standard graphical model representation to the realm of phylogenetics. We introduce a new graphical model component, tree plates, to capture the changing structure of the subgraph corresponding to a phylogenetic tree. We describe a range of phylogenetic models using the graphical model framework and introduce modules to simplify the representation of standard components in large and complex models. Phylogenetic model graphs can be readily used in simulation, maximum likelihood inference, and Bayesian inference using, for example, Metropolis–Hastings or Gibbs sampling of the posterior distribution. [Computation; graphical models; inference; modularization; statistical phylogenetics; tree plate.] PMID:24951559
Field Test of a Hybrid Finite-Difference and Analytic Element Regional Model.
Abrams, D B; Haitjema, H M; Feinstein, D T; Hunt, R J
2016-01-01
Regional finite-difference models often have cell sizes that are too large to sufficiently model well-stream interactions. Here, a steady-state hybrid model is applied whereby the upper layer or layers of a coarse MODFLOW model are replaced by the analytic element model GFLOW, which represents surface waters and wells as line and point sinks. The two models are coupled by transferring cell-by-cell leakage obtained from the original MODFLOW model to the bottom of the GFLOW model. A real-world test of the hybrid model approach is applied on a subdomain of an existing model of the Lake Michigan Basin. The original (coarse) MODFLOW model consists of six layers, the top four of which are aggregated into GFLOW as a single layer, while the bottom two layers remain part of MODFLOW in the hybrid model. The hybrid model and a refined "benchmark" MODFLOW model simulate similar baseflows. The hybrid and benchmark models also simulate similar baseflow reductions due to nearby pumping when the well is located within the layers represented by GFLOW. However, the benchmark model requires refinement of the model grid in the local area of interest, while the hybrid approach uses a gridless top layer and is thus unaffected by grid discretization errors. The hybrid approach is well suited to facilitate cost-effective retrofitting of existing coarse grid MODFLOW models commonly used for regional studies because it leverages the strengths of both finite-difference and analytic element methods for predictions in mildly heterogeneous systems that can be simulated with steady-state conditions. © 2015, National Ground Water Association.
Documenting Models for Interoperability and Reusability (proceedings)
Many modeling frameworks compartmentalize science via individual models that link sets of small components to create larger modeling workflows. Developing integrated watershed models increasingly requires coupling multidisciplinary, independent models, as well as collaboration be...
Integration of Tuyere, Raceway and Shaft Models for Predicting Blast Furnace Process
NASA Astrophysics Data System (ADS)
Fu, Dong; Tang, Guangwu; Zhao, Yongfu; D'Alessio, John; Zhou, Chenn Q.
2018-06-01
A novel modeling strategy is presented for simulating the blast furnace iron making process. The physical and chemical phenomena involved take place across a wide range of length and time scales, and three models are developed to simulate different regions of the blast furnace: the tuyere model, the raceway model, and the shaft model. This paper focuses on the integration of the three models to predict the entire blast furnace process. Output-input mappings between models and an iterative scheme are developed to establish communication between the models. The effects of tuyere operation and burden distribution on blast furnace fuel efficiency are investigated numerically. The integration of different models provides a way to realistically simulate the blast furnace by improving the modeling resolution of local phenomena and minimizing the model assumptions.
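The coupling scheme, stated generically: run each submodel in sequence, pass mapped outputs as the next model's inputs, and iterate until the exchanged quantities stop changing. The two toy "models" and the relaxation factor below are placeholders for the actual tuyere/raceway/shaft physics.

```python
# Generic fixed-point iteration between two coupled submodels.
def model_a(b_out):
    """Toy submodel A: its output depends on B's output."""
    return 0.5 * b_out + 10.0

def model_b(a_out):
    """Toy submodel B: its output depends on A's output."""
    return 0.3 * a_out + 5.0

a_out, b_out = 0.0, 0.0
relax = 0.8                      # under-relaxation for stability
for it in range(100):
    a_new = model_a(b_out)
    b_new = model_b(a_new)
    if abs(a_new - a_out) < 1e-8 and abs(b_new - b_out) < 1e-8:
        print(f"converged after {it} iterations: A={a_new:.4f}, B={b_new:.4f}")
        break
    a_out = relax * a_new + (1 - relax) * a_out
    b_out = relax * b_new + (1 - relax) * b_out
```

Under-relaxation of the exchanged quantities is a common safeguard when coupled submodels would otherwise oscillate or diverge.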
Accounting for uncertainty in health economic decision models by using model averaging.
Jackson, Christopher H; Thompson, Simon G; Sharples, Linda D
2009-04-01
Health economic decision models are subject to considerable uncertainty, much of which arises from choices between several plausible model structures, e.g. choices of covariates in a regression model. Such structural uncertainty is rarely accounted for formally in decision models but can be addressed by model averaging. We discuss the most common methods of averaging models and the principles underlying them. We apply them to a comparison of two surgical techniques for repairing abdominal aortic aneurysms. In model averaging, competing models are usually either weighted by using an asymptotically consistent model assessment criterion, such as the Bayesian information criterion, or a measure of predictive ability, such as Akaike's information criterion. We argue that the predictive approach is more suitable when modelling the complex underlying processes of interest in health economics, such as individual disease progression and response to treatment.
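Model-averaged quantities follow directly from criterion-based weights; a sketch using AIC-style weights over hypothetical values (all numbers invented, and the same formula applies to BIC):

```python
import numpy as np

def ic_weights(ic_values):
    """Convert information-criterion values into normalized model weights."""
    ic = np.asarray(ic_values, dtype=float)
    delta = ic - ic.min()                 # differences from the best model
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Hypothetical AIC values for three competing decision-model structures,
# and each structure's predicted incremental cost (invented numbers).
aic = [1052.3, 1050.1, 1055.7]
cost = np.array([1200.0, 1350.0, 1100.0])

w = ic_weights(aic)
print("model weights       :", np.round(w, 3))
print("model-averaged cost :", float(w @ cost))
```

Averaging over structures in this way propagates structural uncertainty into the decision quantity instead of conditioning on a single chosen model.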
Palm: Easing the Burden of Analytical Performance Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tallent, Nathan R.; Hoisie, Adolfy
2014-06-01
Analytical (predictive) application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult because they must be both accurate and concise. To ease the burden of performance modeling, we developed Palm, a modeling tool that combines top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. To express insight, Palm defines a source code modeling annotation language. By coordinating models and source code, Palm's models are 'first-class' and reproducible. Unlike prior work, Palm formally links models, functions, and measurements. As a result, Palm (a) uses functions to either abstract or express complexity; (b) generates hierarchical models (representing an application's static and dynamic structure); and (c) automatically incorporates measurements to focus attention, represent constant behavior, and validate models. We discuss generating models for three different applications.
A Hybrid 3D Indoor Space Model
NASA Astrophysics Data System (ADS)
Jamali, Ali; Rahman, Alias Abdul; Boguslawski, Pawel
2016-10-01
GIS integrates spatial information and spatial analysis. An important example of such integration is emergency response, which requires route planning inside and outside of a building. Route planning requires detailed information on the indoor and outdoor environment. Indoor navigation network models, including the Geometric Network Model (GNM), the Navigable Space Model, the sub-division model, and the regular-grid model, lack indoor data sources and abstraction methods. In this paper, a hybrid indoor space model is proposed. In the proposed method, 3D modeling of the indoor navigation network is based on surveying control points and is less dependent on the 3D geometrical building model. This research proposes a method of indoor space modeling for buildings which do not have proper 2D/3D geometrical models or which lack semantic or topological information. The proposed hybrid model consists of topological, geometrical, and semantic space.
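Whatever the space model, route planning ultimately runs on a navigation graph; a minimal sketch with an invented three-room corridor network and Dijkstra's algorithm:

```python
import heapq

# Hypothetical indoor navigation network: nodes are rooms/corridor points,
# edge weights are walking distances in meters (invented values).
graph = {
    "room_101": {"corridor_a": 4.0},
    "room_102": {"corridor_a": 3.0},
    "corridor_a": {"room_101": 4.0, "room_102": 3.0, "stairs": 10.0},
    "stairs": {"corridor_a": 10.0, "exit": 6.0},
    "exit": {"stairs": 6.0},
}

def dijkstra(graph, start, goal):
    """Shortest path by cumulative edge weight."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph[node].items():
            if nbr not in seen:
                heapq.heappush(queue, (dist + w, nbr, path + [nbr]))
    return float("inf"), []

print(dijkstra(graph, "room_101", "exit"))  # emergency egress route
```

In a model like the one proposed, the node coordinates would come from surveying control points and the semantics (room, stair, exit) from the semantic space layer.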
Modified hyperbolic sine model for titanium dioxide-based memristive thin films
NASA Astrophysics Data System (ADS)
Abu Bakar, Raudah; Syahirah Kamarozaman, Nur; Fazlida Hanim Abdullah, Wan; Herman, Sukreen Hana
2018-03-01
Since the emergence of the memristor as the newest fundamental circuit element, studies on memristor modeling have evolved. To date, the developed models have been based on the linear model, the linear ionic drift model using different window functions, the tunnelling barrier model, and hyperbolic-sine-function-based models. Although the hyperbolic-sine-function model could predict the memristor's electrical properties, it was not well fitted to the experimental data. In order to improve the performance of the hyperbolic-sine-function model, the state variable equation was modified. On the one hand, the addition of a window function did not provide an improved fit. Multiplying Yakopcic's state variable model into Chang's model, on the other hand, resulted in closer agreement with the TiO2 thin-film experimental data. The percentage error was approximately 2.15%.
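A generic hyperbolic-sine memristor sketch in the spirit of Chang-style models: the current is sinh-shaped in voltage with a state-dependent amplitude, and the state variable evolves through its own sinh nonlinearity. All coefficients below are illustrative placeholders, not the paper's fitted TiO2 values.

```python
import numpy as np

# Illustrative parameters (placeholders, not fitted TiO2 values).
a1, a2, b = 0.2e-3, 0.2e-3, 1.5      # sinh I-V amplitudes and slope
eta, gamma = 5.0, 2.0                # state dynamics coefficients

def current(v, w):
    """sinh-shaped I-V, interpolated by the state variable w in [0, 1]."""
    return w * a1 * np.sinh(b * v) + (1 - w) * a2 * np.sinh(b * v) * 0.1

def dw_dt(v, w):
    """Hyperbolic-sine state dynamics; the w*(1-w) factor bounds the state."""
    return eta * np.sinh(gamma * v) * w * (1 - w)

# Drive with a sine voltage and integrate the state with forward Euler.
t = np.linspace(0, 2.0, 20000)
dt = t[1] - t[0]
v = 1.0 * np.sin(2 * np.pi * 1.0 * t)
w = 0.5
i_trace = np.empty_like(t)
for k in range(t.size):
    i_trace[k] = current(v[k], w)
    w = np.clip(w + dw_dt(v[k], w) * dt, 0.0, 1.0)

print("pinched-hysteresis check: |I| at v ~ 0 stays near 0:",
      np.abs(i_trace[np.abs(v) < 1e-3]).max())
```

Plotting i_trace against v would show the pinched hysteresis loop characteristic of memristive devices; fitting the coefficients to measured loops is where the modified state equation earns its reported accuracy.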
Resident Role Modeling: "It Just Happens".
Sternszus, Robert; Macdonald, Mary Ellen; Steinert, Yvonne
2016-03-01
Role modeling by staff physicians is a significant component of the clinical teaching of students and residents. However, the importance of resident role modeling has only recently emerged, and residents' understanding of themselves as role models has yet to be explored. This study sought to understand residents' perceptions of themselves as role models, describe how residents learn about role modeling, and identify ways to improve resident role modeling. Fourteen semistructured interviews were conducted with residents in internal medicine, general surgery, and pediatrics at the McGill University Faculty of Medicine between April and September 2013. Interviews were audio-recorded and subsequently transcribed for analysis; iterative analysis followed principles of qualitative description. Four primary themes were identified through data analysis: residents perceived role modeling as the demonstration of "good" behaviors in the clinical context; residents believed that learning from their role modeling "just happens" as long as learners are "watching"; residents did not equate role modeling with being a role model; and residents learned about role modeling from watching their positive and negative role models. While residents were aware that students and junior colleagues learned from their modeling, they were often not aware of role modeling as it was occurring; they also believed that learning from role modeling "just happens" and did not always see themselves as role models. Helping residents view effective role modeling as a deliberate process rather than something that "just happens" may improve clinical teaching across the continuum of medical education.
Why Bother to Calibrate? Model Consistency and the Value of Prior Information
NASA Astrophysics Data System (ADS)
Hrachowitz, Markus; Fovet, Ophelie; Ruiz, Laurent; Euser, Tanja; Gharari, Shervan; Nijzink, Remko; Savenije, Hubert; Gascuel-Odoux, Chantal
2015-04-01
Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce 20 hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by using prior information about the system to impose "prior constraints", inferred from expert knowledge and to ensure a model which behaves well with respect to the modeller's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model set-up exhibited increased performance in the independent test period and skill to reproduce all 20 signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if efficiently counter-balanced by available prior constraints, can increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge driven strategy of constraining models.
NASA Astrophysics Data System (ADS)
Hrachowitz, M.; Fovet, O.; Ruiz, L.; Euser, T.; Gharari, S.; Nijzink, R.; Freer, J.; Savenije, H. H. G.; Gascuel-Odoux, C.
2014-09-01
Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus, ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study, the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by four calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce a suite of hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by "prior constraints," inferred from expert knowledge to ensure a model which behaves well with respect to the modeler's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model setup exhibited increased performance in the independent test period and skill to better reproduce all tested signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if counter-balanced by prior constraints, can significantly increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge-driven strategy of constraining models.
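Hydrological signatures of the kind used to constrain these models are simple functionals of the flow series; a sketch computing three common ones (the synthetic streamflow record, the assumed mean precipitation, and the signature choices are illustrative):

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic daily streamflow (m^3/s): seasonal cycle + noise, floored above zero.
days = np.arange(3 * 365)
q = np.clip(5 + 3 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 1, days.size),
            0.05, None)

def runoff_ratio(q, p_mean=8.0):
    """Mean flow over mean precipitation (p_mean is an assumed forcing value)."""
    return q.mean() / p_mean

def flow_duration_slope(q, lo=0.33, hi=0.66):
    """Slope of the flow duration curve between two exceedance quantiles."""
    qs = np.sort(q)[::-1]
    i_lo, i_hi = int(lo * q.size), int(hi * q.size)
    return (np.log(qs[i_lo]) - np.log(qs[i_hi])) / (hi - lo)

def baseflow_index(q, alpha=0.925):
    """Share of baseflow from a simple one-parameter recursive quickflow filter."""
    quick = np.zeros_like(q)
    for i in range(1, q.size):
        f = alpha * quick[i - 1] + 0.5 * (1 + alpha) * (q[i] - q[i - 1])
        quick[i] = min(max(f, 0.0), q[i])
    return 1.0 - quick.sum() / q.sum()

print("runoff ratio   :", round(runoff_ratio(q), 3))
print("FDC mid-slope  :", round(flow_duration_slope(q), 3))
print("baseflow index :", round(baseflow_index(q), 3))
```

Requiring a calibrated model to reproduce such signatures, not just the hydrograph, is the "prior constraint" strategy these abstracts advocate for diagnosing model consistency.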
Nonlinear time series modeling and forecasting the seismic data of the Hindu Kush region
NASA Astrophysics Data System (ADS)
Khan, Muhammad Yousaf; Mittnik, Stefan
2018-01-01
In this study, we extended the application of linear and nonlinear time series models in the field of earthquake seismology and examined the out-of-sample forecast accuracy of linear Autoregressive (AR), Autoregressive Conditional Duration (ACD), Self-Exciting Threshold Autoregressive (SETAR), Threshold Autoregressive (TAR), Logistic Smooth Transition Autoregressive (LSTAR), Additive Autoregressive (AAR), and Artificial Neural Network (ANN) models for seismic data of the Hindu Kush region. We also extended previous studies by using Vector Autoregressive (VAR) and Threshold Vector Autoregressive (TVAR) models and compared their forecasting accuracy with the linear AR model. Unlike previous studies, which typically specify threshold models using an internal threshold variable, we specified these models with external transition variables and compared their out-of-sample forecasting performance with the linear benchmark AR model. The modeling results show that the time series models used in the present study are capable of capturing the dynamic structure present in the seismic data. The point forecast results indicate that the AR model generally outperforms the nonlinear models. However, in some cases, threshold models specified with external threshold variables produce more accurate forecasts, indicating that the specification of threshold time series models is of crucial importance. For the raw seismic data, the ACD model does not show improved out-of-sample forecasting performance over the linear AR model. The results indicate that the AR model is the best forecasting device for modeling and forecasting the raw seismic data of the Hindu Kush region.
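A SETAR-style model is just two AR regimes switched by a threshold on a lagged value; a compact sketch with synthetic data (the threshold, delay, and AR orders are illustrative, and this toy uses an internal threshold rather than the external transition variables studied above):

```python
import numpy as np

rng = np.random.default_rng(9)

# Simulate a two-regime SETAR(1) series: dynamics switch on the sign of y[t-1].
n = 600
y = np.zeros(n)
for t in range(1, n):
    phi = 0.8 if y[t - 1] <= 0.0 else -0.4        # regime-dependent AR(1)
    y[t] = phi * y[t - 1] + rng.normal(0, 1)

def fit_ar1(y_lag, y_now):
    """OLS slope of an AR(1) regression without intercept."""
    return (y_lag @ y_now) / (y_lag @ y_lag)

train, test = y[:500], y[500:]
lag, now = train[:-1], train[1:]

# Linear AR(1) benchmark vs. SETAR with a known threshold at 0.
phi_ar = fit_ar1(lag, now)
lo, hi = lag <= 0.0, lag > 0.0
phi_lo, phi_hi = fit_ar1(lag[lo], now[lo]), fit_ar1(lag[hi], now[hi])

# One-step-ahead out-of-sample forecasts.
prev = np.concatenate([[train[-1]], test[:-1]])
pred_ar = phi_ar * prev
pred_setar = np.where(prev <= 0.0, phi_lo, phi_hi) * prev
print("AR    RMSE:", np.sqrt(((test - pred_ar)**2).mean()))
print("SETAR RMSE:", np.sqrt(((test - pred_setar)**2).mean()))
```

When the data really are regime-switching, as here, SETAR beats the linear AR out of sample; the abstract's finding is that for raw seismic data the advantage often disappears, which is why threshold specification matters.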
Modeling habitat for Marbled Murrelets on the Siuslaw National Forest, Oregon, using lidar data
Hagar, Joan C.; Aragon, Ramiro; Haggerty, Patricia; Hollenbeck, Jeff P.
2018-03-28
Habitat models using lidar-derived variables that quantify fine-scale variation in vegetation structure can improve the accuracy of occupancy estimates for canopy-dwelling species over models that use variables derived from other remote sensing techniques. However, the ability of models developed at such a fine spatial scale to maintain accuracy at regional or larger spatial scales has not been tested. We tested the transferability of a lidar-based habitat model for the threatened Marbled Murrelet (Brachyramphus marmoratus) between two management districts within a larger regional conservation zone in coastal western Oregon. We compared the performance of the transferred model against models developed with data from the application location. The transferred model had good discrimination (AUC = 0.73) at the application location, and model performance was further improved by fitting the original model with coefficients from the application location dataset (AUC = 0.79). However, the model selection procedure indicated that neither of these transferred models were considered competitive with a model trained on local data. The new model trained on data from the application location resulted in the selection of a slightly different set of lidar metrics from the original model, but both transferred and locally trained models consistently indicated positive relationships between the probability of occupancy and lidar measures of canopy structural complexity. We conclude that while the locally trained model had superior performance for local application, the transferred model could reasonably be applied to the entire conservation zone.
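As a sketch of the transferability test, one can score a model fitted in one district on occupancy data from the other and compare AUCs. The logistic model, variable names, and synthetic data below are illustrative assumptions; a real analysis would cross-validate the local model rather than score it in-sample.

```python
# Compare a transferred habitat model against a locally trained one by AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X_train, X_apply = rng.normal(size=(300, 4)), rng.normal(size=(200, 4))
y_train = (X_train[:, 0] + rng.normal(size=300) > 0).astype(int)
y_apply = (X_apply[:, 0] + rng.normal(size=200) > 0).astype(int)

transferred = LogisticRegression().fit(X_train, y_train)  # original district
local = LogisticRegression().fit(X_apply, y_apply)        # application district

print("transferred AUC:", roc_auc_score(y_apply, transferred.predict_proba(X_apply)[:, 1]))
print("local AUC:", roc_auc_score(y_apply, local.predict_proba(X_apply)[:, 1]))
```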
How Qualitative Methods Can be Used to Inform Model Development.
Husbands, Samantha; Jowett, Susan; Barton, Pelham; Coast, Joanna
2017-06-01
Decision-analytic models play a key role in informing healthcare resource allocation decisions. However, there are ongoing concerns with the credibility of models. Modelling methods guidance can encourage good practice within model development, but its value depends on its ability to address the areas that modellers find most challenging. Further, it is important that modelling methods and related guidance are continually updated in light of any new approaches that could potentially enhance model credibility. The objective of this article was to highlight the ways in which qualitative methods have been used and recommended to inform decision-analytic model development and enhance modelling practices. With reference to the literature, the article discusses two key ways in which qualitative methods can be, and have been, applied. The first approach involves using qualitative methods to understand and inform general and future processes of model development; the second involves using qualitative techniques to directly inform the development of individual models. The literature suggests that qualitative methods can improve the validity and credibility of modelling processes by providing a means of understanding existing modelling approaches, identifying where problems occur, and showing where further guidance is needed. Qualitative methods can also be applied within model development to facilitate expert input into structural development. We recommend that current and future model development would benefit from the greater integration of qualitative methods, specifically by studying 'real' modelling processes and by developing recommendations around how qualitative methods can be adopted within everyday modelling practice.
Large-scale model quality assessment for improving protein tertiary structure prediction.
Cao, Renzhi; Bhattacharya, Debswapna; Adhikari, Badri; Li, Jilong; Cheng, Jianlin
2015-06-15
Sampling structural models and ranking them are the two major challenges of protein structure prediction. Traditional protein structure prediction methods generally use one or a few quality assessment (QA) methods to select the best-predicted models, which cannot consistently select the better models or rank a large number of models well. Here, we develop a novel large-scale model QA method in conjunction with model clustering to rank and select protein structural models. It applies an unprecedented 14 model QA methods to generate consensus model rankings, followed by model refinement based on model combination (i.e. averaging). Our experiment demonstrates that the large-scale model QA approach is more consistent and robust in selecting models of better quality than any individual QA method. Our method was blindly tested during the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as the MULTICOM group. It was officially ranked third out of all 143 human and server predictors according to the total scores of the first models predicted for 78 CASP11 protein domains, and second according to the total scores of the best of the five models predicted for these domains. MULTICOM's outstanding performance in the extremely competitive 2014 CASP11 experiment proves that our large-scale QA approach together with model clustering is a promising solution to one of the two major problems in protein structure modeling. The web server is available at: http://sysbio.rnet.missouri.edu/multicom_cluster/human/. © The Author 2015. Published by Oxford University Press.
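A minimal sketch of the consensus-ranking idea: rank candidate models under each QA method, average the ranks, and pick the top model. The random scores stand in for the 14 QA methods; this is the general technique, not the paper's exact pipeline.

```python
# Consensus ranking across multiple quality-assessment (QA) methods.
import numpy as np

rng = np.random.default_rng(2)
scores = rng.random((14, 100))      # 14 QA methods x 100 candidate models

# Per-method rank of each model (0 = worst score, 99 = best score):
ranks = scores.argsort(axis=1).argsort(axis=1)
consensus = ranks.mean(axis=0)      # average rank of each model
best = int(consensus.argmax())      # highest average rank wins
print(f"consensus pick: model {best}")
```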
Scharm, Martin; Wolkenhauer, Olaf; Waltemath, Dagmar
2016-02-15
Repositories support the reuse of models and ensure transparency about results in publications linked to those models. With thousands of models available in repositories such as the BioModels database or the Physiome Model Repository, a framework to track the differences between models and their versions is essential to compare and combine models. Difference detection not only allows users to study the history of models but also helps in the detection of errors and inconsistencies. Existing repositories lack algorithms to track a model's development over time. Focusing on SBML and CellML, we present an algorithm to accurately detect and describe differences between coexisting versions of a model with respect to (i) the models' encoding, (ii) the structure of biological networks and (iii) mathematical expressions. This algorithm is implemented in a comprehensive and open source library called BiVeS. BiVeS helps to identify and characterize changes in computational models and thereby contributes to the documentation of a model's history. Our work facilitates the reuse and extension of existing models and supports collaborative modelling. Finally, it contributes to better reproducibility of modelling results and helps address the challenge of model provenance. The workflow described in this article is implemented in BiVeS. BiVeS is freely available as source code and binary from sems.uni-rostock.de. The web interface BudHat demonstrates the capabilities of BiVeS at budhat.sems.uni-rostock.de. © The Author 2015. Published by Oxford University Press.
Experiments in concept modeling for radiographic image reports.
Bell, D S; Pattison-Gordon, E; Greenes, R A
1994-01-01
OBJECTIVE: Development of methods for building concept models to support structured data entry and image retrieval in chest radiography. DESIGN: An organizing model for chest-radiographic reporting was built by analyzing manually a set of natural-language chest-radiograph reports. During model building, clinician-informaticians judged alternative conceptual structures according to four criteria: content of clinically relevant detail, provision for semantic constraints, provision for canonical forms, and simplicity. The organizing model was applied in representing three sample reports in their entirety. To explore the potential for automatic model discovery, the representation of one sample report was compared with the noun phrases derived from the same report by the CLARIT natural-language processing system. RESULTS: The organizing model for chest-radiographic reporting consists of 62 concept types and 17 relations, arranged in an inheritance network. The broadest types in the model include finding, anatomic locus, procedure, attribute, and status. Diagnoses are modeled as a subtype of finding. Representing three sample reports in their entirety added 79 narrower concept types. Some CLARIT noun phrases suggested valid associations among subtypes of finding, status, and anatomic locus. CONCLUSIONS: A manual modeling process utilizing explicitly stated criteria for making modeling decisions produced an organizing model that showed consistency in early testing. A combination of top-down and bottom-up modeling was required. Natural-language processing may inform model building, but algorithms that would replace manual modeling were not discovered. Further progress in modeling will require methods for objective model evaluation and tools for formalizing the model-building process. PMID:7719807
A strategy to establish Food Safety Model Repositories.
Plaza-Rodríguez, C; Thoens, C; Falenski, A; Weiser, A A; Appel, B; Kaesbohrer, A; Filter, M
2015-07-02
Transferring the knowledge of predictive microbiology into real world food manufacturing applications is still a major challenge for the whole food safety modelling community. To facilitate this process, a strategy for creating open, community driven and web-based predictive microbial model repositories is proposed. These collaborative model resources could significantly improve the transfer of knowledge from research into commercial and governmental applications and also increase efficiency, transparency and usability of predictive models. To demonstrate the feasibility, predictive models of Salmonella in beef previously published in the scientific literature were re-implemented using an open source software tool called PMM-Lab. The models were made publicly available in a Food Safety Model Repository within the OpenML for Predictive Modelling in Food community project. Three different approaches were used to create new models in the model repositories: (1) all information relevant for model re-implementation is available in a scientific publication, (2) model parameters can be imported from tabular parameter collections and (3) models have to be generated from experimental data or primary model parameters. All three approaches were demonstrated in the paper. The sample Food Safety Model Repository is available via: http://sourceforge.net/projects/microbialmodelingexchange/files/models and the PMM-Lab software can be downloaded from http://sourceforge.net/projects/pmmlab/. This work also illustrates that a standardized information exchange format for predictive microbial models, as the key component of this strategy, could be established by adoption of resources from the Systems Biology domain. Copyright © 2015. Published by Elsevier B.V.
The LUE data model for representation of agents and fields
NASA Astrophysics Data System (ADS)
de Jong, Kor; Schmitz, Oliver; Karssenberg, Derek
2017-04-01
Traditionally, agent-based and field-based modelling environments use different data models to represent the information they manipulate. In agent-based modelling, involving the representation of phenomena as objects bounded in space and time, agents are often represented by classes, each of which represents a particular kind of agent and all its properties. Such classes can be used to represent entities like people, birds, cars and countries. In field-based modelling, involving the representation of the environment as continuous fields, fields are often represented by a discretization of space, using multidimensional arrays, each mostly storing a single attribute. Such arrays can be used to represent the elevation of the land-surface, the pH of the soil, or the population density in an area, for example. Representing a population of agents by class instances grouped in collections is an intuitive way of organizing information. A drawback, though, is that models storing collections of class instances (each instance grouping the properties of one agent) execute more slowly than models grouping each property into its own collection. The field representation, on the other hand, is convenient for the efficient execution of models. Another drawback is that, because the data models used are so different, integrating agent-based and field-based models becomes difficult, since the model builder has to deal with multiple concepts, and often multiple modelling environments. With the development of the LUE data model [1] we aim at representing agents and fields within a single paradigm, by combining the advantages of the data models used in agent-based and field-based modelling. This removes the barrier to writing integrated agent-based and field-based models. The resulting data model is intuitive to use and allows for efficient execution of models. LUE is both a high-level conceptual data model and a low-level physical data model. The LUE conceptual data model is a generalization of the data models used in agent-based and field-based modelling. The LUE physical data model [2] is an implementation of the LUE conceptual data model in HDF5. In our presentation we will provide details of our approach to organizing information about agents and fields. We will show examples of agent and field data represented by the conceptual and physical data model. References: [1] de Bakker, M.P., de Jong, K., Schmitz, O., Karssenberg, D., 2016. Design and demonstration of a data model to integrate agent-based and field-based modelling. Environmental Modelling and Software. http://dx.doi.org/10.1016/j.envsoft.2016.11.016 [2] de Jong, K., 2017. LUE source code. https://github.com/pcraster/lue
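The efficiency argument above is the classic array-of-instances versus array-per-property trade-off. A minimal illustration (not part of the LUE API; the property update is a toy example):

```python
# Same update in the two representations discussed above.
import numpy as np

class Agent:                                    # instance-per-agent style
    def __init__(self, mass):
        self.mass = mass

agents = [Agent(float(m)) for m in range(100_000)]
masses = np.arange(100_000, dtype=float)        # one array per property

# Per-object loop: pays a Python attribute lookup for every agent.
for a in agents:
    a.mass *= 1.01

# Vectorized field-style update: one contiguous array operation,
# typically orders of magnitude faster for large populations.
masses *= 1.01
```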
NASA Astrophysics Data System (ADS)
Nozu, A.
2013-12-01
A new simplified source model is proposed to explain strong ground motions from a mega-thrust earthquake. The proposed model is simpler, and involves fewer model parameters, than the conventional characterized source model, which is itself a simplified expression of the actual earthquake source. In the proposed model, the spatio-temporal distribution of slip within a subevent is not modeled. Instead, the source spectrum associated with the rupture of a subevent is modeled and assumed to follow the omega-square model. By multiplying the source spectrum with the path effect and the site amplification factor, the Fourier amplitude at a target site can be obtained. Then, combining it with the Fourier phase characteristics of a smaller event, the time history of strong ground motions from the subevent can be calculated. Finally, by summing up contributions from the subevents, strong ground motions from the entire rupture can be obtained. The source model consists of six parameters for each subevent, namely, longitude, latitude, depth, rupture time, seismic moment and corner frequency of the subevent. The finite size of the subevent can be taken into account in the model, because the corner frequency of the subevent, which is inversely proportional to the length of the subevent, is included in the model. Thus, the proposed model is referred to as the 'pseudo point-source model'. To examine the applicability of the model, a pseudo point-source model was developed for the 2011 Tohoku earthquake. The model comprises nine subevents, located off Miyagi Prefecture through Ibaraki Prefecture. The velocity waveforms (0.2-1 Hz), the velocity envelopes (0.2-10 Hz) and the Fourier spectra (0.2-10 Hz) at 15 sites calculated with the pseudo point-source model agree well with the observed ones, indicating the applicability of the model. The results were then compared with the results of a super-asperity (SPGA) model of the same earthquake (Nozu, 2012, AGU), which can be considered an example of a characterized source model. Although the pseudo point-source model involves far fewer model parameters than the super-asperity model, the errors associated with the former were comparable to those of the latter for velocity waveforms and envelopes. Furthermore, the errors associated with the former were much smaller than those of the latter for Fourier spectra. This evidence indicates the usefulness of the pseudo point-source model. [Figure: Comparison of the observed (black) and synthetic (red) Fourier spectra; the spectra are the composition of two horizontal components, smoothed with a Parzen window with a band width of 0.05 Hz.]
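For reference, the omega-square source spectrum assumed for each subevent has the standard form S(f) = M0 / (1 + (f/fc)^2). A short sketch; the moment and corner frequency values are placeholders, not those of the Tohoku model:

```python
# Omega-square amplitude spectrum of a single subevent.
import numpy as np

def omega_square_spectrum(f, m0, fc):
    """Source spectrum: flat at M0 below fc, falling as f^-2 above it."""
    return m0 / (1.0 + (f / fc) ** 2)

f = np.logspace(-2, 1, 200)                       # 0.01-10 Hz
spectrum = omega_square_spectrum(f, m0=1e20, fc=0.1)
```

Multiplying this spectrum by a path effect and site amplification factor, as described above, yields the Fourier amplitude at a target site.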
Model Interoperability using Meta Data Annotations
NASA Astrophysics Data System (ADS)
David, O.
2011-12-01
Software frameworks and architectures are in need of meta data to efficiently support model integration. Modelers have to know the context of a model, often stepping into modeling semantics and auxiliary information that is usually not provided in a concise structure and universal format consumable by a range of (modeling) tools. XML often seems the obvious solution for capturing meta data, but its wide adoption to facilitate model interoperability is limited by XML schema fragmentation, complexity, and verbosity outside of a data-automation process. Ontologies seem to overcome those shortcomings; however, the practical significance of their use remains to be demonstrated. OMS version 3 took a different approach to meta data representation. The fundamental building block of a modular model in OMS is a software component representing a single physical process, calibration method, or data access approach. Here, programming language features known as Annotations or Attributes were adopted. Within other (non-modeling) frameworks it has been observed that annotations lead to cleaner and leaner application code. Framework-supported model integration, traditionally accomplished using Application Programming Interface (API) calls, is now achieved using descriptive code annotations. Fully annotated components for various hydrological and Ag-system models now provide information directly for (i) model assembly and building, (ii) data flow analysis for implicit multi-threading or visualization, (iii) automated and comprehensive model documentation of component dependencies and physical data properties, (iv) automated model and component testing, calibration, and optimization, and (v) automated audit-traceability to account for all model resources leading to a particular simulation result. Such a non-invasive methodology leads to models and modeling components with only minimal dependencies on the modeling framework but a strong reference to their originating code. Since models and modeling components are not directly bound to the framework by the use of specific APIs and/or data types, they can more easily be reused both within the framework and outside of it. While providing all those capabilities, a significant reduction in the size of the model source code was achieved. To assess the benefit of annotations for a modeler, studies were conducted to evaluate the effectiveness of the annotation-based framework approach against other modeling frameworks and libraries; a framework-invasiveness study was conducted to evaluate the effects of framework design on model code quality. A typical hydrological model was implemented across several modeling frameworks and several software metrics were collected. The metrics selected were measures of non-invasive design methods for modeling frameworks from a software engineering perspective. It appears that the use of annotations positively impacts several software quality measures. Experience to date has demonstrated the multi-purpose value of using annotations. Annotations are also a feasible and practical method to enable interoperability among models and modeling frameworks.
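OMS3 itself is Java-based and attaches meta data via Java annotations; as a language-neutral sketch of the same idea, Python decorators can record a component's data dependencies so a framework can assemble models without the component calling any framework API. The names below (component, inputs, outputs) are illustrative, not the actual OMS API.

```python
# Decorator-based meta data on a process component (hypothetical names).
def component(inputs=(), outputs=()):
    """Attach declarative meta data describing a component's data flow."""
    def wrap(cls):
        cls.meta = {"inputs": tuple(inputs), "outputs": tuple(outputs)}
        return cls
    return wrap

@component(inputs=("precip", "temp"), outputs=("runoff",))
class SnowmeltRunoff:
    def execute(self, precip, temp):
        return {"runoff": max(0.0, precip + 0.1 * temp)}

# A framework can now do assembly, data-flow analysis, and documentation
# from .meta alone, leaving the component free of framework dependencies.
print(SnowmeltRunoff.meta)
```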
A BRDF statistical model applying to space target materials modeling
NASA Astrophysics Data System (ADS)
Liu, Chenghao; Li, Zhi; Xu, Can; Tian, Qichen
2017-10-01
In order to solve the problem of the poor performance of the five-parameter semi-empirical model in fitting high-density measured BRDF data, a refined statistical BRDF model suitable for modeling multiple classes of space target materials is proposed. The refined model improves on the Torrance-Sparrow model while retaining the modeling advantages of the five-parameter model. Compared with the existing empirical model, the refined model contains six simple parameters; it can approximate the roughness distribution of the material surface, the strength of the Fresnel reflectance phenomenon, and the attenuation of the reflected light's brightness as the azimuth angle changes. The model achieves parameter inversion quickly with no extra loss of accuracy. A genetic algorithm was used to invert the parameters of 11 samples of materials commonly used on space targets, and the fitting errors of all materials were below 6%, much lower than those of the five-parameter model. The performance of the refined model is verified by comparing the fitting results for three samples at different incident zenith angles at a 0° azimuth angle. Finally, three-dimensional visualizations of these samples over the upper hemisphere are given, in which the strength of the optical scattering of different materials can be clearly seen. This demonstrates the refined model's ability to characterize materials.
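The paper's six-parameter form is not reproduced here; as orientation for readers unfamiliar with the base model, a minimal Torrance-Sparrow-style BRDF with a Beckmann roughness distribution plus a Lambertian diffuse term (Fresnel and shadowing factors omitted for brevity; all parameter values are illustrative):

```python
# Simplified microfacet BRDF: Lambertian diffuse + rough specular lobe.
import numpy as np

def beckmann(delta, m):
    """Microfacet slope distribution; m is the RMS surface slope,
    delta the angle between surface normal and microfacet normal."""
    return np.exp(-np.tan(delta) ** 2 / m ** 2) / (np.pi * m ** 2 * np.cos(delta) ** 4)

def brdf(theta_i, theta_r, delta, kd, ks, m):
    """kd, ks: diffuse/specular weights; angles in radians."""
    specular = ks * beckmann(delta, m) / (4.0 * np.cos(theta_i) * np.cos(theta_r))
    return kd / np.pi + specular

print(brdf(theta_i=0.5, theta_r=0.5, delta=0.0, kd=0.2, ks=0.6, m=0.3))
```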
Seaman, Shaun R; Hughes, Rachael A
2018-06-01
Estimating the parameters of a regression model of interest is complicated by missing data on the variables in that model. Multiple imputation is commonly used to handle these missing data. Joint model multiple imputation and full-conditional specification multiple imputation are known to yield imputed data with the same asymptotic distribution when the conditional models of full-conditional specification are compatible with that joint model. We show that this asymptotic equivalence of imputation distributions does not imply that joint model multiple imputation and full-conditional specification multiple imputation will also yield asymptotically equally efficient inference about the parameters of the model of interest, nor that they will be equally robust to misspecification of the joint model. When the conditional models used by full-conditional specification multiple imputation are linear, logistic and multinomial regressions, these are compatible with a restricted general location joint model. We show that multiple imputation using the restricted general location joint model can be substantially more asymptotically efficient than full-conditional specification multiple imputation, but this typically requires very strong associations between variables. When associations are weaker, the efficiency gain is small. Moreover, full-conditional specification multiple imputation is shown to be potentially much more robust than joint model multiple imputation using the restricted general location model to misspecification of that model when there is substantial missingness in the outcome variable.
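As a practical sketch of the full-conditional specification idea, scikit-learn's IterativeImputer cycles through a conditional model for each incomplete variable. This produces a single imputation, not multiple imputations, and is only one FCS-style implementation, not the estimators analyzed in the paper:

```python
# One round-robin FCS-style imputation on toy multivariate data.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(6)
cov = np.eye(3) + 0.5                      # correlated three-variable system
X = rng.multivariate_normal([0.0, 0.0, 0.0], cov, size=200)
X[rng.random(X.shape) < 0.1] = np.nan      # 10% missing completely at random

completed = IterativeImputer(max_iter=10, random_state=0).fit_transform(X)
```

Multiple imputation would repeat such a fill-in with draws from the fitted conditionals and pool analyses across the completed datasets.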
Lessons from Climate Modeling on the Design and Use of Ensembles for Crop Modeling
NASA Technical Reports Server (NTRS)
Wallach, Daniel; Mearns, Linda O.; Ruane, Alexander C.; Roetter, Reimund P.; Asseng, Senthold
2016-01-01
Working with ensembles of crop models is a recent but important development in crop modeling which promises to lead to better uncertainty estimates for model projections and predictions, better predictions using the ensemble mean or median, and closer collaboration within the modeling community. There are numerous open questions about the best way to create and analyze such ensembles. Much can be learned from the field of climate modeling, given its much longer experience with ensembles. We draw on that experience to identify questions and make propositions that should help make ensemble modeling with crop models more rigorous and informative. The propositions include: defining criteria for acceptance of models into a crop multi-model ensemble (MME); exploring criteria for evaluating the degree of relatedness of models in an MME; studying the effect of the number of models in the ensemble; developing a statistical model of model sampling; creating a repository for MME results; studying possible differential weighting of models in an ensemble; creating single-model ensembles based on sampling from the uncertainty distribution of parameter values or inputs, specifically oriented toward uncertainty estimation; creating super ensembles that sample more than one source of uncertainty; analyzing super ensemble results to obtain information on total uncertainty and the separate contributions of different sources of uncertainty; and further investigating the use of the multi-model mean or median as a predictor.
Assessing Ecosystem Model Performance in Semiarid Systems
NASA Astrophysics Data System (ADS)
Thomas, A.; Dietze, M.; Scott, R. L.; Biederman, J. A.
2017-12-01
In ecosystem process modelling, comparing outputs to benchmark datasets observed in the field is an important way to validate models, allowing the modelling community to track model performance over time and compare models at specific sites. Multi-model comparison projects, as well as the models themselves, have largely focused on temperate forests and similar biomes. Semiarid regions, on the other hand, are underrepresented in land surface and ecosystem modelling efforts, and yet will be disproportionately impacted by disturbances such as climate change due to their sensitivity to changes in the water balance. Benchmarking models at semiarid sites is an important step in assessing and improving models' suitability for predicting the impact of disturbance on semiarid ecosystems. In this study, several ecosystem models were compared at a semiarid grassland in southwestern Arizona using PEcAn, the Predictive Ecosystem Analyzer, an open-source eco-informatics toolbox ideal for creating the repeatable model workflows necessary for benchmarking. Models included SIPNET, DALEC, JULES, ED2, GDAY, LPJ-GUESS, MAESPA, CLM, CABLE, and FATES. Comparison between model output and benchmarks such as net ecosystem exchange (NEE) tended to produce high root mean square error and low correlation coefficients, reflecting poor simulation of seasonality and a tendency for models to simulate much larger carbon sources than observed. These results indicate that ecosystem models do not currently adequately represent semiarid ecosystem processes.
Alcan, Toros; Ceylanoğlu, Cenk; Baysal, Bekir
2009-01-01
To investigate the effects of different storage periods of alginate impressions on digital model accuracy, a total of 105 impressions were taken from a master model with three different brands of alginate and were poured into stone models after five different storage periods. In all, 21 stone models were poured and immediately scanned, and 21 digital models were prepared. The remaining 84 impressions were poured after 1, 2, 3, and 4 days, respectively. Five linear measurements were made by three researchers on the master model, the stone models, and the digital models. Time-dependent deformation of the alginate impressions at the different storage periods and the accuracy of traditional stone models and digital models were evaluated separately. Both the stone models and the digital models were highly correlated with the master model. Significant deformation of the alginate impressions was noted at storage periods of 1 to 4 days. Alginate impressions of different brands also showed significant differences from each other on the first, third, and fourth days. Digital orthodontic models are as reliable as traditional stone models and will probably become the standard for orthodontic clinical use. Storing alginate impressions in sealed plastic bags for up to 4 days caused statistically significant deformation of the impressions, but the magnitude of these deformations did not appear to be clinically relevant and had no adverse effect on digital modeling.
Chen, Honglei; Chen, Yuancai; Zhan, Huaiyu; Fu, Shiyu
2011-04-01
A new method has been developed for the determination of chemical oxygen demand (COD) in pulping effluent using chemometrics-assisted spectrophotometry. Two calibration models were established using UV-visible spectroscopy (model 1) and derivative spectroscopy (model 2), combined with the chemometrics software Simca-P. Correlation coefficients of the two models are 0.9954 (model 1) and 0.9963 (model 2) when the COD of samples is in the range of 0 to 405 mg/L. Sensitivities of the two models are 0.0061 (model 1) and 0.0056 (model 2), and method detection limits are 2.02-2.45 mg/L (model 1) and 2.13-2.51 mg/L (model 2). A validation experiment showed that the average standard deviation of model 2 was 1.11 and that of model 1 was 1.54. Similarly, the average relative error of model 2 (4.25%) was lower than that of model 1 (5.00%), indicating that the predictability of model 2 was better than that of model 1. The chemometrics-assisted spectrophotometry method requires neither the chemical reagents nor the digestion needed by conventional methods, and the testing time of the new method is significantly shorter. The proposed method can be used to measure COD in pulping effluent as an environmentally friendly approach with satisfactory results.
Improved two-equation k-omega turbulence models for aerodynamic flows
NASA Technical Reports Server (NTRS)
Menter, Florian R.
1992-01-01
Two new versions of the k-omega two-equation turbulence model will be presented. The new Baseline (BSL) model is designed to give results similar to those of the original k-omega model of Wilcox, but without its strong dependency on arbitrary freestream values. The BSL model is identical to the Wilcox model in the inner 50 percent of the boundary layer but changes gradually to the high-Reynolds-number Jones-Launder k-epsilon model (in a k-omega formulation) towards the boundary-layer edge. The new model is also virtually identical to the Jones-Launder model for free shear layers. The second version of the model is called the Shear-Stress Transport (SST) model. It is based on the BSL model, but has the additional ability to account for the transport of the principal shear stress in adverse pressure gradient boundary layers. The model is based on Bradshaw's assumption that the principal shear stress is proportional to the turbulent kinetic energy, which is introduced into the definition of the eddy viscosity. Both models are tested for a large number of different flowfields. The results of the BSL model are similar to those of the original k-omega model, but without the undesirable freestream dependency. The predictions of the SST model are also independent of the freestream values and show excellent agreement with experimental data for adverse pressure gradient boundary-layer flows.
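For reference, the SST eddy-viscosity limiter that implements Bradshaw's assumption is commonly written as

```latex
\nu_t \;=\; \frac{a_1\,k}{\max\!\left(a_1\,\omega,\;\Omega\,F_2\right)}, \qquad a_1 = 0.31,
```

where k is the turbulent kinetic energy, ω the specific dissipation rate, Ω the vorticity (or strain-rate) magnitude, and F₂ a blending function equal to one inside boundary layers and zero in free shear layers; the max operator caps the shear stress at a₁k in adverse pressure gradient regions.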
Efficient polarimetric BRDF model.
Renhorn, Ingmar G E; Hallberg, Tomas; Boreman, Glenn D
2015-11-30
The purpose of the present manuscript is to present a polarimetric bidirectional reflectance distribution function (BRDF) model suitable for hyperspectral and polarimetric signature modelling. The model is based on a further development of a previously published four-parameter model that has been generalized in order to account for different types of surface structures (generalized Gaussian distribution). A generalization of the Lambertian diffuse model is presented. The pBRDF functions are normalized using numerical integration. Using directional-hemispherical reflectance (DHR) measurements, three of the four basic parameters can be determined for any wavelength. This considerably simplifies the development of multispectral polarimetric BRDF applications. The scattering parameter has to be determined from at least one BRDF measurement. The model deals with linearly polarized radiation; as in, e.g., the facet model, depolarization is not included. The model is very general and can inherently represent extreme surfaces such as mirrors and Lambertian surfaces. The complex mixture of sources is described by the sum of two basic models, a generalized Gaussian/Fresnel model and a generalized Lambertian model. Although the physics-inspired model has some ad hoc features, its predictive power is impressive over a wide range of angles and scattering magnitudes. The model has been applied successfully to painted surfaces, both dull and glossy, and also to metallic bead-blasted surfaces. This simple and efficient model should be attractive for polarimetric simulations and polarimetric remote sensing.
SBML Level 3 package: Hierarchical Model Composition, Version 1 Release 3
Smith, Lucian P.; Hucka, Michael; Hoops, Stefan; Finney, Andrew; Ginkel, Martin; Myers, Chris J.; Moraru, Ion; Liebermeister, Wolfram
2017-01-01
Summary: Constructing a model in a hierarchical fashion is a natural approach to managing model complexity, and offers additional opportunities such as the potential to re-use model components. The SBML Level 3 Version 1 Core specification does not directly provide a mechanism for defining hierarchical models, but it does provide a mechanism for SBML packages to extend the Core specification and add additional syntactical constructs. The SBML Hierarchical Model Composition package for SBML Level 3 adds the necessary features to SBML to support hierarchical modeling. The package enables a modeler to include submodels within an enclosing SBML model, delete unneeded or redundant elements of that submodel, replace elements of that submodel with elements of the containing model, and replace elements of the containing model with elements of the submodel. In addition, the package defines an optional “port” construct, allowing a model to be defined with suggested interfaces between hierarchical components; modelers can choose to use these interfaces, but they are not required to do so and can still interact directly with model elements if they so choose. Finally, the SBML Hierarchical Model Composition package is defined in such a way that a hierarchical model can be “flattened” to an equivalent, non-hierarchical version that uses only plain SBML constructs, thus enabling software tools that do not yet support hierarchy to nevertheless work with SBML hierarchical models. PMID:26528566
A demonstrative model of a lunar base simulation on a personal computer
NASA Technical Reports Server (NTRS)
1985-01-01
The initial demonstration model of a lunar base simulation is described. This initial model was developed at the personal computer level to demonstrate feasibility and technique before proceeding to a larger computer-based model. Lotus Symphony Version 1.1 software was used to build the demonstration model on a personal computer with an MS-DOS operating system. The personal computer-based model determined the applicability of lunar base modeling techniques developed at an LSPI/NASA workshop. In addition, the personal computer-based demonstration model defined a modeling structure that could be employed on a larger, more comprehensive VAX-based lunar base simulation. Refinement of this personal computer model and the development of a VAX-based model are planned in the near future.
Molenaar, Peter C M
2017-01-01
Equivalences of two classes of dynamic models for weakly stationary multivariate time series are discussed: dynamic factor models and autoregressive models. It is shown that exploratory dynamic factor models can be rotated, yielding an infinite set of equivalent solutions for any observed series. It also is shown that dynamic factor models with lagged factor loadings are not equivalent to the currently popular state-space models, and that restriction of attention to the latter type of models may yield invalid results. The known equivalent vector autoregressive model types, standard and structural, are given a new interpretation in which they are conceived of as the extremes of an innovative type of hybrid vector autoregressive model. It is shown that consideration of hybrid models solves many problems, in particular with Granger causality testing.
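For readers outside this literature, the two model classes being compared have the standard forms below; the notation is generic, not the paper's own.

```latex
% Dynamic factor model with lagged factor loadings:
y_t \;=\; \sum_{u=0}^{s} \Lambda_u\, \eta_{t-u} \;+\; \varepsilon_t
% Vector autoregressive (VAR) model of order p:
y_t \;=\; \sum_{u=1}^{p} A_u\, y_{t-u} \;+\; \zeta_t
```

Here y_t is the observed series, η_t the latent factor series with loading matrices Λ_u, and ε_t, ζ_t innovation processes; the rotation result concerns the non-uniqueness of the Λ_u and η_t pair.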
Potocki, J K; Tharp, H S
1993-01-01
Multiple model estimation is a viable technique for dealing with the spatial perfusion model mismatch associated with hyperthermia dosimetry. Using multiple models, spatial discrimination can be obtained without increasing the number of unknown perfusion zones. Two multiple model estimators based on the extended Kalman filter (EKF) are designed and compared with two EKFs based on single models having greater perfusion zone segmentation. Results given here indicate that multiple modelling is advantageous when the number of thermal sensors is insufficient for convergence of single model estimators having greater perfusion zone segmentation. In situations where sufficient measured outputs exist for greater unknown perfusion parameter estimation, the multiple model estimators and the single model estimators yield equivalent results.
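One standard way to fuse multiple-model estimates, sketched below, weights each filter by the Gaussian likelihood of its measurement residual; this illustrates the general technique, not necessarily the authors' exact estimator design. Values are toy numbers and the scalar case is used for brevity.

```python
# Likelihood-weighted fusion of parallel filter estimates.
import numpy as np

def likelihood(residual, s):
    """Gaussian likelihood of an innovation with variance s."""
    return np.exp(-0.5 * residual**2 / s) / np.sqrt(2 * np.pi * s)

estimates = np.array([37.2, 38.1, 36.8])   # temperature estimate per model
residuals = np.array([0.4, 0.1, 0.9])      # innovation of each filter
s = 0.25                                   # innovation variance

w = likelihood(residuals, s)
w /= w.sum()                               # normalize model probabilities
fused = w @ estimates
print(f"weights {w.round(3)}, fused estimate {fused:.2f} degC")
```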
Assessing first-order emulator inference for physical parameters in nonlinear mechanistic models
Hooten, Mevin B.; Leeds, William B.; Fiechter, Jerome; Wikle, Christopher K.
2011-01-01
We present an approach for estimating physical parameters in nonlinear models that relies on an approximation to the mechanistic model itself for computational efficiency. The proposed methodology is validated and applied in two different modeling scenarios: (a) a simulation study and (b) a lower trophic level ocean ecosystem model. The approach we develop relies on the ability to predict right singular vectors (resulting from a decomposition of computer model experimental output) based on the computer model input and an experimental set of parameters. Critically, we model the right singular vectors in terms of the model parameters via a nonlinear statistical model. Specifically, we focus our attention on first-order models of these right singular vectors rather than on the second-order (covariance) structure.
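A minimal sketch of the emulation idea: decompose an ensemble of model runs with an SVD, regress the run-specific singular-vector weights on the physical parameters, and reconstruct an approximate output for a new parameter setting. The toy model, matrix orientation, and linear regression are illustrative assumptions, not the paper's specification.

```python
# First-order SVD emulator on a toy ensemble of model runs.
import numpy as np

rng = np.random.default_rng(3)
params = rng.uniform(0, 1, size=(30, 2))           # 30 training runs, 2 params
outputs = np.sin(params @ np.array([[3.0], [1.5]])) * np.linspace(1, 2, 50)
# outputs: 30 runs x 50 output variables

U, s, Vt = np.linalg.svd(outputs, full_matrices=False)
weights = U[:, :3] * s[:3]                         # per-run weights, 3 modes

# Linear (first-order) regression of the weights on the parameters:
X = np.column_stack([np.ones(30), params])
beta, *_ = np.linalg.lstsq(X, weights, rcond=None)

new_theta = np.array([1.0, 0.3, 0.7])              # [1, p1, p2] for a new run
emulated = (new_theta @ beta) @ Vt[:3]             # cheap approximate output
```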
Li, Wei Bo; Greiter, Matthias; Oeh, Uwe; Hoeschen, Christoph
2011-12-01
The reliability of biokinetic models is essential in internal dose assessments and radiation risk analysis for the public, occupational workers, and patients exposed to radionuclides. In this paper, a method for assessing the reliability of biokinetic models by means of uncertainty and sensitivity analysis was developed. The paper is divided into two parts. In the first part of the study, published here, the uncertainty sources of the model parameters for zirconium (Zr), developed by the International Commission on Radiological Protection (ICRP), were identified and analyzed. Furthermore, the uncertainty of the biokinetic experimental measurement performed at the Helmholtz Zentrum München-German Research Center for Environmental Health (HMGU) for developing a new biokinetic model of Zr was analyzed according to the Guide to the Expression of Uncertainty in Measurement, published by the International Organization for Standardization. The confidence interval and distribution of model parameters of the ICRP and HMGU Zr biokinetic models were evaluated. As a result of the computational biokinetic modelling, the mean, standard uncertainty, and confidence interval of the model predictions calculated from the model parameter uncertainty were presented and compared to the plasma clearance and urinary excretion measured after intravenous administration. It was shown that for the most important compartment, the plasma, the uncertainty evaluated for the HMGU model was much smaller than that for the ICRP model; the same phenomenon was observed for other organs and tissues. The uncertainty of the integral of the radioactivity of Zr up to 50 y calculated by the HMGU model after ingestion by adult members of the public was shown to be smaller by a factor of two than that of the ICRP model. It was also shown that the distribution type of the model parameters strongly influences the model prediction, and the correlation of the model input parameters affects the model prediction to an extent that depends on the strength of the correlation. In terms of model prediction, a qualitative comparison of the model predictions with the measured plasma and urinary data showed the HMGU model to be more reliable than the ICRP model; quantitatively, the uncertainty of the prediction by the HMGU systemic biokinetic model is smaller than that of the ICRP model. The uncertainty information on the model parameters analyzed in this study is used in the second part of the paper in a sensitivity analysis of the Zr biokinetic models.
EzGal: A Flexible Interface for Stellar Population Synthesis Models
NASA Astrophysics Data System (ADS)
Mancone, Conor L.; Gonzalez, Anthony H.
2012-06-01
We present EzGal, a flexible Python program designed to easily generate observable parameters (magnitudes, colors, and mass-to-light ratios) for arbitrary input stellar population synthesis (SPS) models. As has been demonstrated by various authors, for many applications the choice of input SPS models can be a significant source of systematic uncertainty. A key strength of EzGal is that it enables simple, direct comparison of different model sets so that the uncertainty introduced by the choice of model set can be quantified. Its ability to work with new models will allow EzGal to remain useful as SPS modeling evolves to keep up with the latest research (such as varying IMFs). EzGal is also capable of generating composite stellar population models (CSPs) for arbitrary input star-formation histories and reddening laws, and it can be used to interpolate between metallicities for a given model set. To facilitate use, we have created an online interface to run EzGal and quickly generate magnitude and mass-to-light ratio predictions for a variety of star-formation histories and model sets. We make many commonly used SPS models available from the online interface, including the canonical Bruzual & Charlot models, an updated version of these models, the Maraston models, the BaSTI models, and the Flexible Stellar Population Synthesis (FSPS) models. We use EzGal to compare magnitude predictions for the model sets as a function of wavelength, age, metallicity, and star-formation history. From this comparison we quickly recover the well-known result that the models agree best in the optical for old solar-metallicity models. Similarly, the most problematic regime for SPS modeling is for young ages (≲2 Gyr) and long wavelengths (λ ≳ 7500 Å), where thermally pulsing AGB stars are important and scatter between models can vary from 0.3 mag (Sloan i) to 0.7 mag (Ks). We find that these differences are not caused by one discrepant model set and should therefore be interpreted as general uncertainties in SPS modeling. Finally, we connect our results to a more physically motivated example by generating CSPs with a star-formation history matching the global star-formation history of the universe. We demonstrate that the wavelength and age dependence of SPS model uncertainty translates into a redshift-dependent model uncertainty, highlighting the importance of a quantitative understanding of model differences when comparing observations with models as a function of redshift.
System and method of designing models in a feedback loop
Gosink, Luke C.; Pulsipher, Trenton C.; Sego, Landon H.
2017-02-14
A method and system for designing models is disclosed. The method includes selecting a plurality of models for modeling a common event of interest. The method further includes aggregating the results of the models and analyzing each model compared to the aggregate result to obtain comparative information. The method also includes providing the information back to the plurality of models to design more accurate models through a feedback loop.
Comment on ``Glassy Potts model: A disordered Potts model without a ferromagnetic phase''
NASA Astrophysics Data System (ADS)
Carlucci, Domenico M.
1999-10-01
We report the equivalence of the ``glassy Potts model,'' recently introduced by Marinari et al., and the ``chiral Potts model'' investigated by Nishimori and Stephen. Neither model exhibits spontaneous magnetization at low temperature, in contrast to the ordinary Potts glass model. The phase transition of the glassy Potts model is easily interpreted as the spin-glass transition of the ordinary random Potts model.
NASA Astrophysics Data System (ADS)
Cannizzo, John K.
2017-01-01
We utilize the time dependent accretion disk model described by Ichikawa & Osaki (1992) to explore two basic ideas for the outbursts in the SU UMa systems, Osaki's Thermal-Tidal Model, and the basic accretion disk limit cycle model. We explore a range in possible input parameters and model assumptions to delineate under what conditions each model may be preferred.
A novel microfluidic model can mimic organ-specific metastasis of circulating tumor cells.
Kong, Jing; Luo, Yong; Jin, Dong; An, Fan; Zhang, Wenyuan; Liu, Lilu; Li, Jiao; Fang, Shimeng; Li, Xiaojie; Yang, Xuesong; Lin, Bingcheng; Liu, Tingjiao
2016-11-29
A biomimetic microsystem might substitute for costly and time-consuming animal metastasis models. Here we developed a biomimetic microfluidic model to study cancer metastasis. Primary cells isolated from different organs were cultured on the microfluidic model to represent individual organs. Breast and salivary gland cancer cells were driven to flow over the primary cell culture chambers, mimicking the dynamic adhesion of circulating tumor cells (CTCs) to the endothelium in vivo. These flowing artificial CTCs showed different metastatic potentials to lung on the microfluidic model. The traditional nude mouse model of lung metastasis was used to investigate the physiological similarity of the microfluidic model to animal models. It was found that the metastatic potential of different cancer cells assessed by the microfluidic model was in agreement with that assessed by the nude mouse model. Furthermore, it was demonstrated that the metastatic inhibitor AMD3100 inhibited lung metastasis effectively in both the microfluidic model and the nude mouse model. The microfluidic model was then used to mimic liver and bone metastasis of CTCs, confirming its potential for research on multiple-organ metastasis. Thus, the metastasis of CTCs to different organs was reconstituted on the microfluidic model. It may expand the capabilities of traditional cell culture models, providing a low-cost, time-saving, and rapid alternative to animal models.
A simple analytical infiltration model for short-duration rainfall
NASA Astrophysics Data System (ADS)
Wang, Kaiwen; Yang, Xiaohua; Liu, Xiaomang; Liu, Changming
2017-12-01
Many infiltration models have been proposed to simulate the infiltration process. Different initial soil conditions and non-uniform initial water content can lead to infiltration simulation errors, especially for short-duration rainfall (SHR). Few infiltration models are specifically derived to eliminate the errors caused by complex initial soil conditions. We present a simple analytical infiltration model for SHR infiltration simulation, the Short-duration Infiltration Process model (SHIP model). The infiltration simulated by 5 models (i.e., the SHIP (high), SHIP (middle), SHIP (low), Philip and Parlange models) was compared based on numerical experiments and soil column experiments. In the numerical experiments, the SHIP (middle) and Parlange models had robust solutions for SHR infiltration simulation of 12 typical soils under different initial soil conditions. The absolute values of percent bias were less than 12% and the values of the Nash-Sutcliffe efficiency were greater than 0.83. Additionally, in the soil column experiments, the infiltration rate fluctuated within a range because of non-uniform initial water content. The SHIP (high) and SHIP (low) models can simulate an infiltration range, which successfully covered the fluctuation range of the observed infiltration rate. Given the robustness of its solutions and its coverage of the fluctuation range of the infiltration rate, the SHIP model can be integrated into hydrologic models to simulate the SHR infiltration process and benefit flood forecasting.
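For orientation, the Philip two-term model used in the comparison has cumulative infiltration I(t) = S√t + At and rate i(t) = S/(2√t) + A, with sorptivity S and coefficient A. SHIP's own equations are not reproduced here; the parameter values below are illustrative only:

```python
# Philip two-term infiltration rate over a short-duration event.
import numpy as np

def philip_rate(t, sorptivity, a):
    """i(t) = S/(2*sqrt(t)) + A; t in hours, result in mm/h."""
    return 0.5 * sorptivity / np.sqrt(t) + a

t = np.linspace(0.01, 1.0, 100)                  # first hour of rainfall
rate = philip_rate(t, sorptivity=20.0, a=5.0)    # placeholder S and A
```

The early-time behavior (rate dominated by the S/√t term) is exactly where initial soil conditions matter most, which motivates a model tailored to short-duration rainfall.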
Mutant mice: experimental organisms as materialised models in biomedicine.
Huber, Lara; Keuck, Lara K
2013-09-01
Animal models have received particular attention as key examples of material models. In this paper, we argue that the specificities of establishing animal models-acknowledging their status as living beings and as epistemological tools-necessitate a more complex account of animal models as materialised models. This becomes particularly evident in animal-based models of diseases that only occur in humans: in these cases, the representational relation between animal model and human patient needs to be generated and validated. The first part of this paper presents an account of how disease-specific animal models are established by drawing on the example of transgenic mice models for Alzheimer's disease. We will introduce an account of validation that involves a three-fold process including (1) from human being to experimental organism; (2) from experimental organism to animal model; and (3) from animal model to human patient. This process draws upon clinical relevance as much as scientific practices and results in disease-specific, yet incomplete, animal models. The second part of this paper argues that the incompleteness of models can be described in terms of multi-level abstractions. We qualify this notion by pointing to different experimental techniques and targets of modelling, which give rise to a plurality of models for a specific disease. Copyright © 2013 Elsevier Ltd. All rights reserved.
Bachis, Giulia; Maruéjouls, Thibaud; Tik, Sovanna; Amerlinck, Youri; Melcer, Henryk; Nopens, Ingmar; Lessard, Paul; Vanrolleghem, Peter A
2015-01-01
Characterization and modelling of primary settlers have been largely neglected to date. However, whole-plant and resource recovery modelling requires primary settler model development, as current models lack detail in describing the dynamics and the diversity of the removal process for different particulate fractions. This paper focuses on improved modelling and experimental characterization of primary settlers. First, a new modelling concept based on the particle settling velocity distribution is proposed, which is then applied to the development of an improved primary settler model as well as to its characterization under addition of chemicals (chemically enhanced primary treatment, CEPT). This model is compared to two existing simple primary settler models (Otterpohl and Freund; Lessard and Beck): it performs better than the first and is statistically comparable to the second, but is easier to calibrate thanks to the ease with which wastewater characteristics can be translated into model parameters. Second, the changes induced by primary settling in the activated sludge model (ASM)-based chemical oxygen demand fractionation between inlet and outlet are investigated, showing that typical wastewater fractions are modified by primary treatment. As they clearly impact the downstream processes, both model improvements demonstrate the need for more detailed primary settler models in view of whole-plant modelling.
ERM model analysis for adaptation to hydrological model errors
NASA Astrophysics Data System (ADS)
Baymani-Nezhad, M.; Han, D.
2018-05-01
Hydrological conditions change continuously, and these changes introduce errors into flood forecasting models that can lead to unrealistic results. To overcome these difficulties, a concept called model updating has been proposed in hydrological studies. Real-time model updating is one of the challenging processes in the hydrological sciences and has not been entirely solved due to a lack of knowledge about the future state of the catchment under study. In the flood forecasting process, errors propagated from the rainfall-runoff model are considered the main source of uncertainty in the forecasting model. Hence, to control these errors, several methods have been proposed by researchers to update rainfall-runoff models, such as parameter updating, model state updating, and correction of input data. The current study investigates the ability of rainfall-runoff model parameters to cope with three types of error common in hydrological modelling: timing, shape and volume errors. A new lumped model, the ERM model, was selected for this study, and its parameters were evaluated for use in model updating to cope with the stated errors. Investigation of ten events shows that the ERM model parameters can be updated to cope with the errors without the need to recalibrate the model.
Predictive QSAR modeling workflow, model applicability domains, and virtual screening.
Tropsha, Alexander; Golbraikh, Alexander
2007-01-01
Quantitative Structure-Activity Relationship (QSAR) modeling has traditionally been applied as an evaluative approach, i.e., with the focus on developing retrospective and explanatory models of existing data. Model extrapolation was considered, if at all, only in a hypothetical sense, in terms of potential modifications of known biologically active chemicals that could improve compounds' activity. This critical review re-examines the strategy and the output of modern QSAR modeling approaches. We provide examples and arguments suggesting that current methodologies may afford robust and validated models capable of accurate prediction of compound properties for molecules not included in the training sets. We discuss a data-analytical modeling workflow developed in our laboratory that incorporates modules for combinatorial QSAR model development (i.e., using all possible binary combinations of available descriptor sets and statistical data modeling techniques), rigorous model validation, and virtual screening of available chemical databases to identify novel biologically active compounds. Our approach places particular emphasis on model validation as well as on the need to define model applicability domains in chemistry space. We present examples of studies in which the application of rigorously validated QSAR models to virtual screening identified computational hits that were confirmed by subsequent experimental investigations. The emerging focus of QSAR modeling on target property forecasting establishes it as a predictive, as opposed to evaluative, modeling approach.
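One common way to define the applicability domain mentioned above is the leverage (Williams plot) criterion: h_i = x_i (XᵀX)⁻¹ x_iᵀ with warning threshold h* = 3(p+1)/n. A minimal sketch under that assumption, not the authors' exact workflow; the descriptor matrix is synthetic:

```python
# Leverage-based applicability-domain check for a virtual-screening hit.
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(120, 5))              # training-set descriptor matrix
query = rng.normal(size=(1, 5))            # candidate compound's descriptors

XtX_inv = np.linalg.inv(X.T @ X)
h = float(query @ XtX_inv @ query.T)       # leverage of the query compound
h_star = 3 * (X.shape[1] + 1) / X.shape[0] # conventional warning threshold

verdict = "inside domain" if h <= h_star else "outside domain"
print(f"{verdict} (h={h:.3f}, h*={h_star:.3f})")
```

Predictions for compounds outside the domain are flagged as extrapolations rather than trusted, which is the point of coupling validation with a domain definition.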
Lorenz, Alyson; Dhingra, Radhika; Chang, Howard H; Bisanzio, Donal; Liu, Yang; Remais, Justin V
2014-01-01
Extrapolating landscape regression models for use in assessing vector-borne disease risk and other applications requires thoughtful evaluation of fundamental model choice issues. To examine the implications of such choices, an analysis was conducted to explore the extent to which disparate landscape models agree in their epidemiological and entomological risk predictions when extrapolated to new regions. Agreement between six literature-drawn landscape models was examined by comparing predicted county-level distributions of either Lyme disease or the Ixodes scapularis vector using Spearman rank correlation. AUC analyses and multinomial logistic regression were used to assess the ability of these extrapolated landscape models to predict observed national data. Three models based on measures of vegetation, habitat patch characteristics, and herbaceous landcover emerged as effective predictors of observed disease and vector distribution. An ensemble model containing these three models improved precision and predictive ability over the individual models. A priori assessment of qualitative model characteristics effectively identified the models that subsequently emerged as better predictors in the quantitative analysis. Both a methodology for quantitative model comparison and a checklist for qualitative assessment of candidate models for extrapolation are provided; both tools aim to improve collaboration between those producing models and those interested in applying them to new areas and research questions.
Chilcott, J; Tappenden, P; Rawdin, A; Johnson, M; Kaltenthaler, E; Paisley, S; Papaioannou, D; Shippam, A
2010-05-01
Health policy decisions must be relevant, evidence-based and transparent. Decision-analytic modelling supports this process, but its role is reliant on its credibility. Errors in mathematical decision models or simulation exercises are unavoidable, but little attention has been paid to processes in model development. Numerous error avoidance/identification strategies could be adopted, but it is difficult to evaluate the merits of strategies for improving the credibility of models without first developing an understanding of error types and causes. The study aims to describe the current comprehension of errors in the HTA modelling community and to generate a taxonomy of model errors. Four primary objectives are to: (1) describe the current understanding of errors in HTA modelling; (2) understand current processes applied by the technology assessment community for avoiding errors in development, debugging and critically appraising models for errors; (3) use HTA modellers' perceptions of model errors together with the wider non-HTA literature to develop a taxonomy of model errors; and (4) explore potential methods and procedures to reduce the occurrence of errors in models. It also describes the model development process as perceived by practitioners working within the HTA community. A methodological review was undertaken using an iterative search methodology. Exploratory searches informed the scope of interviews; later searches focused on issues arising from the interviews. Searches were undertaken in February 2008 and January 2009. In-depth qualitative interviews were performed with 12 HTA modellers from academic and commercial modelling sectors. All qualitative data were analysed using the Framework approach. Descriptive and explanatory accounts were used to interrogate the data within and across themes and subthemes: organisation, roles and communication; the model development process; definition of error; types of model error; strategies for avoiding errors; strategies for identifying errors; and barriers and facilitators. There was no common language in the discussion of modelling errors, and there was inconsistency in the perceived boundaries of what constitutes an error. When asked to define model error, interviewees tended to exclude matters of judgement and to focus on 'slips' and 'lapses', yet discussion of slips and lapses comprised less than 20% of the discussion on types of errors. Interviewees devoted 70% of the discussion to softer elements of the process of defining the decision question and conceptual modelling, mostly the realms of judgement, skills, experience and training. The original focus concerned model errors, but it may be more useful to refer to modelling risks. Several interviewees discussed the concepts of validation and verification, with notable consistency in interpretation: verification meaning the process of ensuring that the computer model correctly implements the intended model, and validation meaning the process of ensuring that a model is fit for purpose. The methodological literature on verification and validation of models makes reference to the hermeneutic philosophical position, highlighting that the concept of model validation should not be externalized from the decision-makers and the decision-making process.
Interviewees gave examples of all the major error types identified in the literature: errors in the description of the decision problem, in model structure, in the use of evidence, in implementation of the model, in operation of the model, and in the presentation and understanding of results. The HTA error classifications were compared against existing classifications of model errors in the literature. A range of techniques and processes are currently used to avoid errors in HTA models: engaging with clinical experts, clients and decision-makers to ensure mutual understanding; producing written documentation of the proposed model; explicit conceptual modelling; stepping through skeleton models with experts; ensuring transparency in reporting; adopting standard housekeeping techniques; and ensuring that the parties involved in model development have sufficient and relevant training. Clarity and mutual understanding were identified as key issues, but their current implementation is not framed within an overall strategy for structuring complex problems. Some of the questioning may have biased interviewees' responses, but as all interviewees were represented in the analysis, no rebalancing of the report was deemed necessary. A potential weakness of the literature review was its focus on spreadsheet and program development rather than on model development specifically. The identified literature on programming errors was also very narrow, despite broad searches being undertaken. Published definitions of overall model validity, comprising conceptual model validation, verification of the computer model, and operational validity of the use of the model in addressing the real-world problem, are consistent with the views expressed by the HTA community and are therefore recommended as the basis for further discussions of model credibility. Such discussions should focus on risks, including errors of implementation, errors in matters of judgement, and violations. Discussions of modelling risks should reflect the potentially complex network of cognitive breakdowns that lead to errors in models, and existing research on the cognitive basis of human error should be included in any examination of modelling errors. A better understanding is needed of the skills required for the development, operation and use of HTA models. Interaction between modeller and client in developing a mutual understanding of a model establishes that model's significance and its warranty; model credibility is thus the central concern of decision-makers using models, so it is crucial that the concept of model validation is not externalized from the decision-makers and the decision-making process. Recommended areas for future research are verification and validation; the model development process; and modifications to the modelling process aimed at preventing errors and improving their identification.
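As a concrete illustration of the verification/validation distinction reported above, the following hypothetical sketch (not drawn from the study) shows verification-style checks on a simple cohort Markov model: the tests ask only whether the computer model correctly implements the intended model, which is separate from validation, i.e. whether the model is fit for the decision problem.

```python
# Hypothetical sketch of 'verification' in the sense used above: checking
# that the implemented computer model matches the intended model.
import numpy as np

def markov_trace(p_transition: np.ndarray, start: np.ndarray, cycles: int) -> np.ndarray:
    """Propagate a cohort through a discrete-time Markov model."""
    state = start.astype(float)
    trace = [state]
    for _ in range(cycles):
        state = state @ p_transition
        trace.append(state)
    return np.array(trace)

# Intended model: three states (well, ill, dead) with specified transitions.
P = np.array([
    [0.90, 0.08, 0.02],
    [0.00, 0.85, 0.15],
    [0.00, 0.00, 1.00],
])

# Verification checks: rows sum to 1, the cohort is conserved over time,
# and the dead state is absorbing (its occupancy never decreases).
assert np.allclose(P.sum(axis=1), 1.0), "transition rows must sum to 1"
trace = markov_trace(P, np.array([1.0, 0.0, 0.0]), cycles=50)
assert np.allclose(trace.sum(axis=1), 1.0), "cohort size must be conserved"
assert np.all(np.diff(trace[:, 2]) >= -1e-12), "dead state must be absorbing"
print("verification checks passed")
```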
Marzilli Ericson, Keith M.; White, John Myles; Laibson, David; Cohen, Jonathan D.
2015-01-01
Heuristic models have been proposed for many domains of choice. We compare heuristic models of intertemporal choice, which can account for many of the known intertemporal choice anomalies, to discounting models. We conduct an out-of-sample, cross-validated comparison of intertemporal choice models. Heuristic models outperform traditional utility discounting models, including models of exponential and hyperbolic discounting. The best performing models predict choices by using a weighted average of absolute differences and relative (percentage) differences of the attributes of the goods in a choice set. We conclude that heuristic models explain time-money tradeoff choices in experiments better than utility discounting models. PMID:25911124
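To illustrate the model classes being compared, here is a hedged sketch of an attribute-difference heuristic alongside exponential and hyperbolic discounting for a single money-versus-delay choice. All parameter values are illustrative placeholders rather than the paper's estimates, and scaling the relative differences by the attribute means is an assumption of this sketch.

```python
# Hedged sketch of the model classes compared above; weights and discount
# parameters are made up for illustration.
def exponential_value(x, t, delta=0.9):
    # Traditional exponential discounting: V = x * delta**t.
    return x * delta ** t

def hyperbolic_value(x, t, k=0.3):
    # Hyperbolic discounting: V = x / (1 + k*t).
    return x / (1 + k * t)

def heuristic_score(x1, t1, x2, t2, w=(0.3, 20.0, -1.0, -2.0)):
    """Weighted combination of absolute and relative (percentage) differences
    of the money and delay attributes; relative differences are scaled by the
    attribute means (an assumption of this sketch). A positive score favours
    the later, larger option."""
    x_bar, t_bar = (x1 + x2) / 2, (t1 + t2) / 2
    w_ma, w_mr, w_da, w_dr = w
    return (w_ma * (x2 - x1) + w_mr * (x2 - x1) / x_bar
            + w_da * (t2 - t1) + w_dr * (t2 - t1) / t_bar)

# Example choice: $100 now versus $120 in 4 weeks.
x1, t1, x2, t2 = 100.0, 0.0, 120.0, 4.0
print("exponential takes later:", exponential_value(x2, t2) > exponential_value(x1, t1))
print("hyperbolic takes later: ", hyperbolic_value(x2, t2) > hyperbolic_value(x1, t1))
print("heuristic takes later:  ", heuristic_score(x1, t1, x2, t2) > 0)
```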
ASTP ranging system mathematical model
NASA Technical Reports Server (NTRS)
Ellis, M. R.; Robinson, L. H.
1973-01-01
A mathematical model of the VHF ranging system is presented for analyzing its performance in the Apollo-Soyuz Test Project (ASTP), for which the system was adapted. The ranging system mathematical model is given in block diagram form, together with a brief description of the overall model. A procedure for implementing the math model is presented, along with a discussion of its validation and the overall summary and conclusions of the study effort. Detailed appendices cover the five study tasks: early/late gate model development, unlock probability development, system error model development, probability-of-acquisition model development, and math model validation testing.
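As an illustration of the early/late gate concept named in the first appendix, the following is a minimal, hypothetical sketch (not the ASTP math model itself) of an early/late gate discriminator tracking the delay of a cyclic ranging code: the replica delay is steered until the early and late correlations balance.

```python
# Illustrative early/late gate delay tracking for a ranging code; the code,
# delay, noise level, and gate spacing are all hypothetical.
import numpy as np

rng = np.random.default_rng(1)
chips = rng.choice([-1.0, 1.0], size=256)
code = np.repeat(chips, 4)            # oversampled ranging code (4 samples/chip)
true_delay = 5                        # true round-trip delay in samples
received = np.roll(code, true_delay) + 0.2 * rng.standard_normal(code.size)

def correlate(delay: int) -> float:
    """Correlate the received signal with a delayed replica of the code."""
    return float(np.dot(received, np.roll(code, delay)))

est, half_gate = 0, 2
for _ in range(30):
    # Early/late gate discriminator: the error is ~0 when the replica
    # delay matches the true delay, so step the estimate toward balance.
    error = correlate(est + half_gate) - correlate(est - half_gate)
    est += 1 if error > 0 else -1

# Once locked, the estimate dithers within about one sample of the truth.
print(f"estimated delay: {est} samples (true: {true_delay})")
```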