Improvements in Nonconservative Force Modelling for TOPEX/POSEIDON
NASA Technical Reports Server (NTRS)
Lemoine, Frank G.; Rowlands, David D.; Chinn, Douglas S.; Kubitschek, Daniel G.; Luthcke, Scott B.; Zelensky, Nikita B.; Born, George H.
1999-01-01
It was recognized prior to the launch of TOPEX/POSEIDON that the most important source of orbit error, other than the gravity field, was nonconservative force modelling. Accordingly, an intensive effort was undertaken to study the nonconservative forces acting on the spacecraft using detailed finite element modelling (Antreasian, 1992; Antreasian and Rosborough, 1992). However, this detailed modelling was not suitable for orbit determination, and a simplified eight-plate "box-wing" model was developed that took into account the aggregate effect of the various materials and associated thermal properties of each spacecraft surface. The a priori model was later tuned post-launch with actual tracking data [Nerem et al., 1994; Marshall and Luthcke, 1994; Marshall et al., 1995]. More recently, Kubitschek (1997) developed a newer box-wing model for TOPEX/POSEIDON, which included updated material properties, accounted for solar array deflection, and modelled solar array warping due to thermal effects. We have used this updated model as a basis to retune the macromodel for TOPEX/POSEIDON, and report on preliminary results using at least 36 cycles (one year) of SLR and DORIS data in 1993.
Phase diagram and density large deviations of a nonconserving ABC model.
Cohen, O; Mukamel, D
2012-02-10
The effect of particle-nonconserving processes on the steady state of driven diffusive systems is studied within the context of a generalized ABC model. It is shown that in the limit of slow nonconserving processes, the large deviation function of the overall particle density can be computed by making use of the steady-state density profile of the conserving model. In this limit one can define a chemical potential and identify first order transitions via Maxwell's construction, similarly to what is done in equilibrium systems. This method may be applied to other driven models subjected to slow nonconserving dynamics.
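The construction described above can be rendered schematically as follows (a generic sketch of the slow-nonconserving-limit argument, not the paper's exact expressions): with μ(ρ) a chemical potential defined from the conserving steady-state profile, the large deviation function of the overall density follows by integration, and a first-order transition between coexisting densities ρ₁ and ρ₂ is located by Maxwell's equal-area rule:

```latex
G(\rho) = \int^{\rho} \bigl[\mu(\rho') - \bar{\mu}\bigr]\,\mathrm{d}\rho' ,
\qquad
\mu(\rho_1) = \mu(\rho_2),
\qquad
\int_{\rho_1}^{\rho_2} \bigl[\mu(\rho') - \mu(\rho_1)\bigr]\,\mathrm{d}\rho' = 0 .
```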
An uncertainty inclusive un-mixing model to identify tracer non-conservativeness
NASA Astrophysics Data System (ADS)
Sherriff, Sophie; Rowan, John; Franks, Stewart; Fenton, Owen; Jordan, Phil; hUallacháin, Daire Ó.
2015-04-01
Sediment fingerprinting is being increasingly recognised as an essential tool for catchment soil and water management. Selected physico-chemical properties (tracers) of soils and river sediments are used in a statistically-based 'un-mixing' model to apportion sediment delivered to the catchment outlet (target) to its upstream sediment sources. Development of uncertainty-inclusive approaches, taking into account uncertainties in the sampling, measurement and statistical un-mixing, is improving the robustness of results. However, methodological challenges remain, including issues of particle size and organic matter selectivity and non-conservative behaviour of tracers, relating to biogeochemical transformations along the transport pathway. This study builds on our earlier uncertainty-inclusive approach (FR2000) to detect and assess the impact of tracer non-conservativeness using synthetic data, before applying these lessons to new field data from Ireland. Un-mixing was conducted on 'pristine' and 'corrupted' synthetic datasets containing three to fifty tracers (in the corrupted dataset one target tracer value was manually corrupted to replicate non-conservative behaviour). Additionally, a smaller corrupted dataset was un-mixed using a permutation version of the algorithm. Field data were collected in an 11 km² river catchment in Ireland. Source samples were collected from topsoils, subsoils, channel banks, open field drains, damaged road verges and farm tracks. Target samples were collected using time-integrated suspended sediment samplers at the catchment outlet at 6-12 week intervals from July 2012 to June 2013. Samples were dried (<40°C), sieved (125 µm) and analysed for mineral magnetic susceptibility, anhysteretic remanence and iso-thermal remanence, and geochemical elements Cd, Co, Cr, Cu, Mn, Ni, Pb and Zn (following microwave-assisted acid digestion). Discriminant analysis was used to reduce the number of tracers before un-mixing.
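The core linear un-mixing step in sediment fingerprinting can be sketched in a few lines: target tracer concentrations are modelled as a proportion-weighted mixture of source means, with proportions non-negative and summing to one. This is a hedged illustration with synthetic numbers (three sources, five tracers), not data from the Irish catchment study, and a brute-force simplex search stands in for the study's statistical machinery.

```python
import numpy as np

rng = np.random.default_rng(0)

sources = np.array([      # rows: 3 sources, cols: 5 tracers (synthetic values)
    [10.0, 4.0, 1.0, 50.0, 0.3],
    [ 2.0, 9.0, 5.0, 20.0, 1.2],
    [ 6.0, 1.0, 8.0, 80.0, 0.7],
])
true_p = np.array([0.5, 0.3, 0.2])                       # known mixing proportions
target = true_p @ sources + rng.normal(0, 0.01, size=5)  # target sediment + small noise

def unmix(target, sources, step=0.01):
    """Brute-force search over the proportion simplex (p >= 0, sum(p) = 1)."""
    best, best_err = None, np.inf
    for p1 in np.arange(0, 1 + step, step):
        for p2 in np.arange(0, 1 - p1 + step, step):
            p = np.array([p1, p2, 1.0 - p1 - p2])
            err = np.sum((target - p @ sources) ** 2)    # squared tracer misfit
            if err < best_err:
                best, best_err = p, err
    return best, best_err

p_hat, err = unmix(target, sources)
```

In uncertainty-inclusive schemes such as the one described above, this fit is repeated over resampled source and target values to build distributions of the apportionments rather than a single point estimate.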
NASA Astrophysics Data System (ADS)
Luthcke, S. B.; Marshall, J. A.
1992-11-01
The TOPEX/Poseidon spacecraft was launched on August 10, 1992 to study the Earth's oceans. To achieve maximum benefit from the altimetric data it is to collect, mission requirements dictate that TOPEX/Poseidon's orbit must be computed at an unprecedented level of accuracy. To reach our pre-launch radial orbit accuracy goals, the mismodeling of the radiative nonconservative forces of solar radiation, Earth albedo and infrared re-radiation, and spacecraft thermal imbalances cannot produce in combination more than a 6 cm rms error over a 10 day period. Similarly, the 10-day drag modeling error cannot exceed 3 cm rms. In order to satisfy these requirements, a 'box-wing' representation of the satellite has been developed in which the satellite is modelled as the combination of flat plates arranged in the shape of a box and a connected solar array. The radiative/thermal nonconservative forces acting on each of the eight surfaces are computed independently, yielding vector accelerations which are summed to compute the total aggregate effect on the satellite center-of-mass. Select parameters associated with the flat plates are adjusted to obtain a better representation of the satellite acceleration history. This study analyzes the estimation of these parameters from simulated TOPEX/Poseidon laser data in the presence of both nonconservative and gravity model errors. A 'best choice' of estimated parameters is derived and the ability to meet mission requirements with the 'box-wing' model evaluated.
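The 'box-wing' computation summarized above (independent flat-plate accelerations summed at the center of mass) can be sketched as follows. This uses a standard flat-plate radiation-pressure formula; the plate areas, optical coefficients and spacecraft mass are illustrative stand-ins, not TOPEX/Poseidon macromodel values.

```python
import numpy as np

SOLAR_FLUX = 1367.0      # W/m^2 at 1 AU
C_LIGHT = 299792458.0    # m/s

def plate_accel(sun_dir, normal, area, spec, diff, mass):
    """Solar radiation-pressure acceleration (m/s^2) on one flat plate.

    sun_dir    : unit vector from spacecraft toward the Sun
    normal     : outward unit normal of the plate
    spec, diff : specular and diffuse reflectivity coefficients
    """
    cos_theta = np.dot(sun_dir, normal)
    if cos_theta <= 0.0:           # plate not illuminated
        return np.zeros(3)
    p = SOLAR_FLUX / C_LIGHT       # radiation pressure (N/m^2)
    # absorbed + specularly reflected + diffusely reflected contributions
    force = -p * area * cos_theta * (
        (1.0 - spec) * sun_dir + 2.0 * (spec * cos_theta + diff / 3.0) * normal
    )
    return force / mass

# Sum over the six box faces plus a Sun-pointing solar array (illustrative numbers):
sun = np.array([1.0, 0.0, 0.0])
plates = [  # (normal, area m^2, specular, diffuse)
    (np.array([ 1.0, 0.0, 0.0]),  8.0, 0.2, 0.3),
    (np.array([-1.0, 0.0, 0.0]),  8.0, 0.2, 0.3),
    (np.array([0.0,  1.0, 0.0]),  6.0, 0.2, 0.3),
    (np.array([0.0, -1.0, 0.0]),  6.0, 0.2, 0.3),
    (np.array([0.0, 0.0,  1.0]),  4.0, 0.2, 0.3),
    (np.array([0.0, 0.0, -1.0]),  4.0, 0.2, 0.3),
    (sun.copy(),                 25.0, 0.1, 0.2),  # solar array facing the Sun
]
mass = 2400.0  # kg, roughly TOPEX/Poseidon class
total = sum(plate_accel(sun, n, A, s, d, mass) for n, A, s, d in plates)
```

Macromodel "tuning" as discussed in these abstracts amounts to adjusting the per-plate coefficients (and areas) so the summed acceleration history best fits the tracking data.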
Gravity and Nonconservative Force Model Tuning for the GEOSAT Follow-On Spacecraft
NASA Technical Reports Server (NTRS)
Lemoine, Frank G.; Zelensky, Nikita P.; Rowlands, David D.; Luthcke, Scott B.; Chinn, Douglas S.; Marr, Gregory C.; Smith, David E. (Technical Monitor)
2000-01-01
The US Navy's GEOSAT Follow-On spacecraft was launched on February 10, 1998, and the primary objective of the mission was to map the oceans using a radar altimeter. Three radar altimeter calibration campaigns have been conducted in 1999 and 2000. The spacecraft is tracked by satellite laser ranging (SLR) and Doppler beacons, and a limited amount of data have been obtained from the Global Positioning System (GPS) receiver on board the satellite. Even with EGM96, the predicted radial orbit error due to gravity field mismodelling (to 70x70) remains high at 2.61 cm (compared to 0.88 cm for TOPEX). We report on the preliminary gravity model tuning for GFO using SLR and altimeter crossover data. Preliminary solutions using SLR and GFO/GFO crossover data from CalVal campaigns I and II, in June-August 1999 and January-February 2000, have reduced the predicted radial orbit error to 1.9 cm, and further reduction will be possible when additional data are added to the solutions. The gravity model tuning has principally improved the low-order m-daily terms and has significantly reduced the geographically correlated error present in this satellite orbit. In addition to gravity field mismodelling, the largest contributor to the orbit error is nonconservative force mismodelling. We report on further nonconservative force model tuning results using available data from over one cycle in beta prime.
Hamiltonian formulation of the spin-orbit model with time-varying non-conservative forces
NASA Astrophysics Data System (ADS)
Gkolias, Ioannis; Efthymiopoulos, Christos; Pucacco, Giuseppe; Celletti, Alessandra
2017-10-01
In a realistic scenario, the evolution of the rotational dynamics of a celestial or artificial body is subject to dissipative effects. Time-varying non-conservative forces can be due to, for example, a variation of the moments of inertia or to tidal interactions. In this work, we consider a simplified model describing the rotational dynamics, known as the spin-orbit problem, where we assume that the orbital motion is provided by a fixed Keplerian ellipse. We consider different examples in which a non-conservative force acts on the model and we propose an analytical method, which reduces the system to a Hamiltonian framework. In particular, we compute a time parametrisation in a series form, which allows us to transform the original system into a Hamiltonian one. We also provide applications of our method to study the rotational motion of a body with time-varying moments of inertia, e.g. an artificial satellite with flexible components, as well as subject to a tidal torque depending linearly on the velocity.
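For reference, the conservative core of the spin-orbit problem mentioned above is commonly written in the following standard form (the dissipative variants studied in the paper add velocity-dependent torques to this equation; the notation here is the usual convention, not necessarily the paper's):

```latex
\ddot{\theta} + \varepsilon \left(\frac{a}{r(t)}\right)^{3} \sin\bigl(2\theta - 2f(t)\bigr) = 0,
\qquad
\varepsilon = \frac{3}{2}\,\frac{B - A}{C},
```

where θ is the rotation angle, r(t) and f(t) are the radius and true anomaly along the fixed Keplerian ellipse, and A ≤ B ≤ C are the principal moments of inertia. A tidal torque depending linearly on the velocity, as in the abstract, then appears schematically as an added term of the form $-K(\dot{\theta} - \bar{\nu})$, with K a dissipation constant (illustrative notation).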
NASA Astrophysics Data System (ADS)
Long, Zi-Xuan; Zhang, Yi
2014-11-01
This paper focuses on the Noether symmetries and the conserved quantities for both holonomic and nonholonomic systems based on a new non-conservative dynamical model introduced by El-Nabulsi. First, the El-Nabulsi dynamical model, which is based on a fractional integral extended by periodic laws, is introduced, and the El-Nabulsi-Hamilton canonical equations for a non-conservative Hamilton system with holonomic or nonholonomic constraints are established. Second, the definitions and criteria of El-Nabulsi-Noether symmetrical transformations and quasi-symmetrical transformations are presented in terms of the invariance of the El-Nabulsi-Hamilton action under the infinitesimal transformations of the group. Finally, Noether's theorems for the non-conservative Hamilton system under the El-Nabulsi dynamical model are established, which reveal the relationship between the Noether symmetry and the conserved quantity of the system.
Earthquake likelihood model testing
Schorlemmer, D.; Gerstenberger, M.C.; Wiemer, S.; Jackson, D.D.; Rhoades, D.A.
2007-01-01
INTRODUCTION: The Regional Earthquake Likelihood Models (RELM) project aims to produce and evaluate alternate models of earthquake potential (probability per unit volume, magnitude, and time) for California. Based on differing assumptions, these models are produced to test the validity of their assumptions and to explore which models should be incorporated in seismic hazard and risk evaluation. Tests based on physical and geological criteria are useful but we focus on statistical methods using future earthquake catalog data only. We envision two evaluations: a test of consistency with observed data and a comparison of all pairs of models for relative consistency. Both tests are based on the likelihood method, and both are fully prospective (i.e., the models are not adjusted to fit the test data). To be tested, each model must assign a probability to any possible event within a specified region of space, time, and magnitude. For our tests the models must use a common format: earthquake rates in specified “bins” with location, magnitude, time, and focal mechanism limits. Seismology cannot yet deterministically predict individual earthquakes; however, it should seek the best possible models for forecasting earthquake occurrence. This paper describes the statistical rules of an experiment to examine and test earthquake forecasts. The primary purposes of the tests described below are to evaluate physical models for earthquakes, assure that source models used in seismic hazard and risk studies are consistent with earthquake data, and provide quantitative measures by which models can be assigned weights in a consensus model or be judged as suitable for particular regions. In this paper we develop a statistical method for testing earthquake likelihood models. A companion paper (Schorlemmer and Gerstenberger 2007, this issue) discusses the actual implementation of these tests in the framework of the RELM initiative. Statistical testing of hypotheses is a common task and a
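The bin-based likelihood evaluation described above can be sketched in a few lines. Treating the count in each space-magnitude bin as an independent Poisson variable with the forecast rate as its mean is the standard RELM assumption; the rates and observed counts below are invented for illustration.

```python
import math

def log_likelihood(rates, counts):
    """Joint Poisson log-likelihood of observed bin counts given forecast rates."""
    L = 0.0
    for lam, n in zip(rates, counts):
        # log P(n | lam) for a Poisson distribution: -lam + n*log(lam) - log(n!)
        L += -lam + n * math.log(lam) - math.lgamma(n + 1)
    return L

forecast_a = [0.5, 1.2, 0.1, 2.0]   # expected earthquakes per bin (model A)
forecast_b = [1.0, 1.0, 1.0, 1.0]   # a flat competing forecast (model B)
observed   = [1, 1, 0, 2]           # catalog counts per bin

La = log_likelihood(forecast_a, observed)
Lb = log_likelihood(forecast_b, observed)
```

A higher joint log-likelihood means the forecast is more consistent with the observed catalog; the pairwise comparisons mentioned in the abstract are built from differences of such log-likelihoods.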
Probabilistic modeling of earthquakes
NASA Astrophysics Data System (ADS)
Duputel, Z.; Jolivet, R.; Jiang, J.; Simons, M.; Rivera, L. A.; Ampuero, J. P.; Gombert, B.; Minson, S. E.
2015-12-01
By exploiting increasing amounts of geophysical data we are able to produce increasingly sophisticated fault slip models. Such detailed models, while they are essential ingredients towards better understanding fault mechanical behavior, can only inform us in a meaningful way if we can assign uncertainties to the inferred slip parameters. This talk will present our recent efforts to infer fault slip models with realistic error estimates. Bayesian analysis is a useful tool for this purpose as it handles uncertainty in a natural way. One of the biggest obstacles to significant progress in observational earthquake source modeling arises from imperfect predictions of geodetic and seismic data due to uncertainties in the material parameters and fault geometries used in our forward models - the impact of which are generally overlooked. We recently developed physically based statistics for the model prediction error and showed how to account for inaccuracies in the Earth model elastic parameters. We will present applications of this formalism to recent large earthquakes such as the 2014 Pisagua earthquake. We will also discuss novel approaches to integrate the large amount of information available from GPS, InSAR, tide-gauge, tsunami and seismic data.
Nonconservative kinetic exchange model of opinion dynamics with randomness and bounded confidence.
Sen, Parongama
2012-07-01
The concept of a bounded confidence level is incorporated in a nonconservative kinetic exchange model of opinion dynamics, where opinions have continuous values ∈ [-1,1]. The characteristics of the unrestricted model, which has one parameter λ representing conviction, undergo drastic changes with the introduction of bounded confidence parametrized by δ. Three distinct regions are identified in the phase diagram in the δ-λ plane, and evidence of a first-order phase transition for δ ≥ 0.3 is presented. A neutral state with all opinions equal to zero occurs for λ ≤ λ_c1 ≃ 2/3, independent of δ, while for λ_c1 ≤ λ ≤ λ_c2(δ), an ordered region is seen to exist where opinions of only one sign prevail. At λ_c2(δ), a transition to a disordered state is observed, where individual opinions of both signs coexist and move closer to the extreme values (±1) as λ is increased. For confidence level δ < 0.3, the ordered phase exists only for a narrow range of λ. The line δ = 0 is apparently a line of discontinuity, and this limit is discussed in some detail.
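A minimal simulation in the spirit of this model is sketched below. The one-sided LCCC-type update o_i ← λ(o_i + ε o_j), restricted to pairs whose opinions differ by less than 2δ, is one common variant of kinetic exchange with bounded confidence; it illustrates the mechanism, not necessarily the paper's exact specification.

```python
import random

def simulate(n=200, lam=0.9, delta=0.5, steps=200000, seed=1):
    """Kinetic-exchange opinion dynamics with conviction lam and confidence bound 2*delta."""
    rng = random.Random(seed)
    o = [rng.uniform(-1, 1) for _ in range(n)]
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        if i == j or abs(o[i] - o[j]) > 2 * delta:
            continue                          # outside the confidence bound: no exchange
        eps = rng.random()                    # random interaction strength in [0, 1)
        new = lam * (o[i] + eps * o[j])
        o[i] = max(-1.0, min(1.0, new))       # opinions confined to [-1, 1]
    return o

low = simulate(lam=0.5)    # below lambda_c1 ~ 2/3: neutral state expected
high = simulate(lam=0.95)  # well above lambda_c1: non-neutral state expected
mean_abs_low = sum(abs(x) for x in low) / len(low)
mean_abs_high = sum(abs(x) for x in high) / len(high)
```

The order parameter here is the mean absolute opinion: it collapses to zero in the neutral phase and stays finite above the conviction threshold.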
Nonextensive models for earthquakes.
Silva, R; França, G S; Vilar, C S; Alcaniz, J S
2006-02-01
We have revisited the fragment-asperity interaction model recently introduced by Sotolongo-Costa and Posadas [Phys. Rev. Lett. 92, 048501 (2004)] by considering a different definition for mean values in the context of Tsallis nonextensive statistics and introducing a scale between the earthquake energy and the size of fragment, ε ∝ r³. The energy-distribution function (EDF) deduced in our approach is considerably different from the one obtained in the above reference. We have also tested the viability of this EDF with data from two different catalogs (in three different areas), namely, the NEIC and the Bulletin Seismic of the Revista Brasileira de Geofísica. Although both approaches provide very similar values for the nonextensive parameter q, other physical quantities, e.g., energy density, differ considerably by several orders of magnitude.
Nonextensive models for earthquakes
NASA Astrophysics Data System (ADS)
Silva, R.; França, G. S.; Vilar, C. S.; Alcaniz, J. S.
2006-02-01
We have revisited the fragment-asperity interaction model recently introduced by Sotolongo-Costa and Posadas [Phys. Rev. Lett. 92, 048501 (2004)] by considering a different definition for mean values in the context of Tsallis nonextensive statistics and introducing a scale between the earthquake energy and the size of fragment, γ ∝ r³. The energy-distribution function (EDF) deduced in our approach is considerably different from the one obtained in the above reference. We have also tested the viability of this EDF with data from two different catalogs (in three different areas), namely, the NEIC and the Bulletin Seismic of the Revista Brasileira de Geofísica. Although both approaches provide very similar values for the nonextensive parameter q, other physical quantities, e.g., energy density, differ considerably by several orders of magnitude.
Two models for earthquake forerunners
Mjachkin, V.I.; Brace, W.F.; Sobolev, G.A.; Dieterich, J.H.
1975-01-01
Similar precursory phenomena have been observed before earthquakes in the United States, the Soviet Union, Japan, and China. Two quite different physical models are used to explain these phenomena. According to a model developed by US seismologists, the so-called dilatancy diffusion model, the earthquake occurs near maximum stress, following a period of dilatant crack expansion. Diffusion of water in and out of the dilatant volume is required to explain the recovery of seismic velocity before the earthquake. According to a model developed by Soviet scientists, growth of cracks is also involved, but diffusion of water in and out of the focal region is not required. With this model, the earthquake is assumed to occur during a period of falling stress, and recovery of velocity here is due to crack closure as stress relaxes. In general, the dilatancy diffusion model gives a peaked precursor form, whereas the dry model gives a bay form, in which recovery is well under way before the earthquake. A number of field observations should help to distinguish between the two models: study of post-earthquake recovery, time variation of stress and pore pressure in the focal region, the occurrence of pre-existing faults, and any changes in direction of precursory phenomena during the anomalous period. © 1975 Birkhäuser Verlag.
NASA Astrophysics Data System (ADS)
Chen, Qiujie; Shen, Yunzhong; Chen, Wu; Zhang, Xingfu; Hsu, Houze
2016-06-01
The main contribution of this study is to improve the GRACE gravity field solution by taking errors of non-conservative acceleration and attitude observations into account. Unlike previous studies, the errors of the attitude and non-conservative acceleration data, and gravity field parameters, as well as accelerometer biases are estimated by means of weighted least squares adjustment. Then we compute a new time series of monthly gravity field models complete to degree and order 60 covering the period Jan. 2003 to Dec. 2012 from the twin GRACE satellites' data. The derived GRACE solution (called Tongji-GRACE02) is compared in terms of geoid degree variances and temporal mass changes with the other GRACE solutions, namely CSR RL05, GFZ RL05a, and JPL RL05. The results show that (1) the global mass signals of Tongji-GRACE02 are generally consistent with those of CSR RL05, GFZ RL05a, and JPL RL05; (2) compared to CSR RL05, the noise of Tongji-GRACE02 is reduced by about 21 % over ocean when only using 300 km Gaussian smoothing, and 60 % or more over deserts (Australia, Kalahari, Karakum and Thar) without using Gaussian smoothing and decorrelation filtering; and (3) for all examples, the noise reductions are more significant than signal reductions, no matter whether smoothing and filtering are applied or not. The comparison with GLDAS data supports that the signals of Tongji-GRACE02 over St. Lawrence River basin are close to those from CSR RL05, GFZ RL05a and JPL RL05, while the GLDAS result shows the best agreement with the Tongji-GRACE02 result.
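The weighted least-squares adjustment described above reduces, schematically, to down-weighting observations by their error variances so that noisy accelerometer and attitude data influence the solution less. A toy sketch follows; the design matrix stands in for the gravity-field partials, and all numbers are synthetic.

```python
import numpy as np

rng = np.random.default_rng(42)

n_obs, n_par = 500, 3
A = rng.normal(size=(n_obs, n_par))          # design matrix (observation partials)
x_true = np.array([2.0, -1.0, 0.5])          # "true" parameters to recover
sigma = rng.uniform(0.1, 2.0, size=n_obs)    # per-observation error sigmas
y = A @ x_true + rng.normal(0, sigma)        # observations with heteroscedastic noise

W = np.diag(1.0 / sigma**2)                  # weight matrix = inverse variances
# Normal equations of weighted least squares: (A^T W A) x = A^T W y
x_hat = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
```

In the GRACE processing described above, the same principle is applied jointly to the gravity-field coefficients and accelerometer biases, with the observation errors themselves part of the adjustment.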
Bayesian kinematic earthquake source models
NASA Astrophysics Data System (ADS)
Minson, S. E.; Simons, M.; Beck, J. L.; Genrich, J. F.; Galetzka, J. E.; Chowdhury, F.; Owen, S. E.; Webb, F.; Comte, D.; Glass, B.; Leiva, C.; Ortega, F. H.
2009-12-01
Most coseismic, postseismic, and interseismic slip models are based on highly regularized optimizations which yield one solution which satisfies the data given a particular set of regularizing constraints. This regularization hampers our ability to answer basic questions such as whether seismic and aseismic slip overlap or instead rupture separate portions of the fault zone. We present a Bayesian methodology for generating kinematic earthquake source models with a focus on large subduction zone earthquakes. Unlike classical optimization approaches, Bayesian techniques sample the ensemble of all acceptable models presented as an a posteriori probability density function (PDF), and thus we can explore the entire solution space to determine, for example, which model parameters are well determined and which are not, or what is the likelihood that two slip distributions overlap in space. Bayesian sampling also has the advantage that all a priori knowledge of the source process can be used to mold the a posteriori ensemble of models. Although very powerful, Bayesian methods have up to now been of limited use in geophysical modeling because they are only computationally feasible for problems with a small number of free parameters due to what is called the "curse of dimensionality." However, our methodology can successfully sample solution spaces of many hundreds of parameters, which is sufficient to produce finite fault kinematic earthquake models. Our algorithm is a modification of the tempered Markov chain Monte Carlo (tempered MCMC or TMCMC) method. In our algorithm, we sample a "tempered" a posteriori PDF using many MCMC simulations running in parallel and evolutionary computation in which models which fit the data poorly are preferentially eliminated in favor of models which better predict the data. We present results for both synthetic test problems as well as for the 2007 Mw 7.8 Tocopilla, Chile earthquake, the latter of which is constrained by InSAR, local high
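The tempered MCMC with evolutionary resampling described above can be caricatured in one dimension: many chains sample a "tempered" posterior p(m)^β, models that fit poorly are preferentially resampled away, and β is raised from 0 toward 1. This is a hedged sketch of the idea only; the 1-D Gaussian target stands in for a real posterior over hundreds of slip parameters, and the staging scheme is simplified relative to the authors' algorithm.

```python
import math
import random

def log_post(m, data_mean=3.0, sigma=0.5):
    """Toy log-posterior: a Gaussian centered on the 'data'."""
    return -0.5 * ((m - data_mean) / sigma) ** 2

def tempered_sampler(n_chains=500, n_stages=10, seed=7):
    rng = random.Random(seed)
    models = [rng.uniform(-10, 10) for _ in range(n_chains)]   # broad prior draws
    for stage in range(1, n_stages + 1):
        beta = stage / n_stages                                # tempering exponent
        # Evolutionary step: resample chains in proportion to tempered fit,
        # eliminating poorly fitting models in favor of better ones.
        w = [math.exp(beta * log_post(m)) for m in models]
        models = rng.choices(models, weights=w, k=n_chains)
        # MCMC step: one Metropolis update per chain at this temperature.
        for i in range(n_chains):
            prop = models[i] + rng.gauss(0, 0.5)
            if math.log(rng.random() + 1e-300) < beta * (log_post(prop) - log_post(models[i])):
                models[i] = prop
    return models

samples = tempered_sampler()
mean = sum(samples) / len(samples)
```

The payoff of such ensemble samplers is exactly what the abstract emphasizes: the output is a population of acceptable models approximating the posterior, not a single regularized optimum.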
Modeling, Forecasting and Mitigating Extreme Earthquakes
NASA Astrophysics Data System (ADS)
Ismail-Zadeh, A.; Le Mouel, J.; Soloviev, A.
2012-12-01
Recent earthquake disasters highlighted the importance of multi- and trans-disciplinary studies of earthquake risk. A major component of earthquake disaster risk analysis is hazards research, which should cover not only a traditional assessment of ground shaking, but also studies of geodetic, paleoseismic, geomagnetic, hydrological, deep drilling and other geophysical and geological observations together with comprehensive modeling of earthquakes and forecasting extreme events. Extreme earthquakes (large magnitude and rare events) are manifestations of complex behavior of the lithosphere structured as a hierarchical system of blocks of different sizes. Understanding of physics and dynamics of the extreme events comes from observations, measurements and modeling. A quantitative approach to simulate earthquakes in models of fault dynamics will be presented. The models reproduce basic features of the observed seismicity (e.g., the frequency-magnitude relationship, clustering of earthquakes, occurrence of extreme seismic events). They provide a link between geodynamic processes and seismicity, allow studying extreme events, influence of fault network properties on seismic patterns and seismic cycles, and assist, in a broader sense, in earthquake forecast modeling. Some aspects of predictability of large earthquakes (how well can large earthquakes be predicted today?) will be also discussed along with possibilities in mitigation of earthquake disasters (e.g., on 'inverse' forensic investigations of earthquake disasters).
Network of epicenters of the Olami-Feder-Christensen model of earthquakes.
Peixoto, Tiago P; Prado, Carmen P C
2006-07-01
We study the dynamics of the Olami-Feder-Christensen (OFC) model of earthquakes, focusing on the behavior of sequences of epicenters regarded as a growing complex network. Besides making a detailed and quantitative study of the effects of the borders (the occurrence of epicenters is dominated by a strong border effect which does not scale with system size), we examine the degree distribution and the degree correlation of the graph. We detect sharp differences between the conservative and nonconservative regimes of the model. Removing border effects, the conservative regime exhibits a Poisson-like degree statistics and is uncorrelated, while the nonconservative has a broad power-law-like distribution of degrees (if the smallest events are ignored), which reproduces the observed behavior of real earthquakes. In this regime the graph also has an unusually strong degree correlation among the vertices with higher degree, which is the result of the existence of temporary attractors for the dynamics: as the system evolves, the epicenters concentrate increasingly on fewer sites, exhibiting strong synchronization, but eventually spread again over the lattice after a series of sufficiently large earthquakes. We propose an analytical description of the dynamics of this growing network, considering a Markov process network with hidden variables, which is able to account for the mentioned properties.
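The OFC model referenced in this and the following abstracts is simple enough to state in code. Below is a hedged toy implementation on a small square lattice with open boundaries: the lattice is driven uniformly until one site reaches the threshold, that site topples, and a fraction α of its force is passed to each of its four neighbors. α = 0.25 is the conservative case; α < 0.25 is the nonconservative regime discussed above. Lattice size, α and event count are illustrative.

```python
import random

def ofc_avalanche_sizes(L=20, alpha=0.2, n_events=2000, seed=3):
    """Run the OFC model and return the size (number of topplings) of each event."""
    rng = random.Random(seed)
    F = [[rng.random() for _ in range(L)] for _ in range(L)]
    sizes = []
    for _ in range(n_events):
        # Uniform drive: raise every site so the current maximum reaches threshold 1.
        mi, mj = 0, 0
        for i in range(L):
            for j in range(L):
                if F[i][j] > F[mi][mj]:
                    mi, mj = i, j
        dF = 1.0 - F[mi][mj]
        for i in range(L):
            for j in range(L):
                F[i][j] += dF
        F[mi][mj] = 1.0                      # guard against floating-point rounding
        # Relaxation: topple sites at/above threshold until the lattice is stable.
        size = 0
        unstable = [(mi, mj)]
        while unstable:
            nxt = []
            for i, j in unstable:
                if F[i][j] < 1.0:
                    continue                 # already toppled earlier in this sweep
                size += 1
                f = F[i][j]
                F[i][j] = 0.0
                for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                    if 0 <= ni < L and 0 <= nj < L:
                        F[ni][nj] += alpha * f   # only 4*alpha of f is redistributed
                        if F[ni][nj] >= 1.0:
                            nxt.append((ni, nj))
            unstable = nxt
        sizes.append(size)
    return sizes

sizes = ofc_avalanche_sizes()
```

The epicenter of each event is the driven site (mi, mj); the network studies above are built by linking consecutive epicenters and examining the resulting degree statistics.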
GEM - The Global Earthquake Model
NASA Astrophysics Data System (ADS)
Smolka, A.
2009-04-01
Over 500,000 people died in the last decade due to earthquakes and tsunamis, mostly in the developing world, where the risk is increasing due to rapid population growth. In many seismic regions, no hazard and risk models exist, and even where models do exist, they are intelligible only by experts, or available only for commercial purposes. The Global Earthquake Model (GEM) answers the need for an openly accessible risk management tool. GEM is an internationally sanctioned public private partnership initiated by the Organisation for Economic Cooperation and Development (OECD) which will establish an authoritative standard for calculating and communicating earthquake hazard and risk, and will be designed to serve as the critical instrument to support decisions and actions that reduce earthquake losses worldwide. GEM will integrate developments on the forefront of scientific and engineering knowledge of earthquakes, at global, regional and local scale. The work is organized in three modules: hazard, risk, and socio-economic impact. The hazard module calculates probabilities of earthquake occurrence and resulting shaking at any given location. The risk module calculates fatalities, injuries, and damage based on expected shaking, building vulnerability, and the distribution of population and of exposed values and facilities. The socio-economic impact module delivers tools for making educated decisions to mitigate and manage risk. GEM will be a versatile online tool, with open source code and a map-based graphical interface. The underlying data will be open wherever possible, and its modular input and output will be adapted to multiple user groups: scientists and engineers, risk managers and decision makers in the public and private sectors, and the public-at-large. GEM will be the first global model for seismic risk assessment at a national and regional scale, and aims to achieve broad scientific participation and independence. Its development will occur in a
Parity-nonconserving cold neutron-parahydrogen interactions
NASA Astrophysics Data System (ADS)
Partanen, T. M.
2012-12-01
Three pion-dominated observables of the parity-nonconserving interactions between the cold neutrons and parahydrogen are calculated. The transversely polarized neutron spin rotation, unpolarized neutron longitudinal polarization, and photon asymmetry of the radiative polarized neutron capture are considered. For the numerical evaluation of the observables, the strong interactions are taken into account by the Reid93 potential and the parity-nonconserving interactions by the DDH and EFT models including two different EFT parity-nonconserving two-pion exchange potentials.
Helmstetter, Agnès; Hergarten, Stefan; Sornette, Didier
2004-10-01
Following Hergarten and Neugebauer [Phys. Rev. Lett. 88, 238501 (2002)], who discovered aftershocks and foreshocks in the Olami-Feder-Christensen (OFC) discrete block-spring earthquake model, we investigate to what degree the simple toppling mechanism of this model is sufficient to account for the clustering of real seismicity in time and space. We find that synthetic catalogs generated by the OFC model share many properties of real seismicity at a qualitative level: Omori's law (aftershocks) and inverse Omori's law (foreshocks), increase of the number of aftershocks and of the aftershock zone size with the mainshock magnitude. There are, however, significant quantitative differences. The number of aftershocks per mainshock in the OFC model is smaller than in real seismicity, especially for large mainshocks. We find that foreshocks in the OFC catalogs can be in large part described by a simple model of triggered seismicity, such as the epidemic-type aftershock sequence (ETAS) model. But the properties of foreshocks in the OFC model depend on the mainshock magnitude, in qualitative agreement with the critical earthquake model and in disagreement with real seismicity and with the ETAS model.
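Omori's law, invoked above, states that the aftershock rate decays roughly as a power law of the time since the mainshock; in the modified form, n(t) = K / (t + c)^p. A short illustration with invented parameter values:

```python
def omori_rate(t, K=100.0, c=0.1, p=1.1):
    """Modified Omori law: aftershock rate (events/day) at t days after the mainshock."""
    return K / (t + c) ** p

def expected_count(t1, t2, n=10000, **kw):
    """Expected number of aftershocks in [t1, t2] days (midpoint numerical integral)."""
    dt = (t2 - t1) / n
    return sum(omori_rate(t1 + (k + 0.5) * dt, **kw) * dt for k in range(n))

day1 = expected_count(0.0, 1.0)       # aftershocks expected in the first day
week_rest = expected_count(1.0, 7.0)  # aftershocks expected in days 1-7
```

The comparison in the abstract amounts to fitting such a rate (and its ETAS generalization, where every event can trigger its own Omori sequence) to the synthetic OFC catalogs.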
Statistical tests of simple earthquake cycle models
NASA Astrophysics Data System (ADS)
DeVries, Phoebe M. R.; Evans, Eileen L.
2016-12-01
A central goal of observing and modeling the earthquake cycle is to forecast when a particular fault may generate an earthquake: a fault late in its earthquake cycle may be more likely to generate an earthquake than a fault early in its earthquake cycle. Models that can explain geodetic observations throughout the entire earthquake cycle may be required to gain a more complete understanding of relevant physics and phenomenology. Previous efforts to develop unified earthquake models for strike-slip faults have largely focused on explaining both preseismic and postseismic geodetic observations available across a few faults in California, Turkey, and Tibet. An alternative approach leverages the global distribution of geodetic and geologic slip rate estimates on strike-slip faults worldwide. Here we use the Kolmogorov-Smirnov test for similarity of distributions to infer, in a statistically rigorous manner, viscoelastic earthquake cycle models that are inconsistent with 15 sets of observations across major strike-slip faults. We reject a large subset of two-layer models incorporating Burgers rheologies at a significance level of α = 0.05 (those with long-term Maxwell viscosities η_M < 4.0 × 10^19 Pa s and η_M > 4.6 × 10^20 Pa s) but cannot reject models on the basis of transient Kelvin viscosity η_K. Finally, we examine the implications of these results for the predicted earthquake cycle timing of the 15 faults considered and compare these predictions to the geologic and historical record.
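The model-rejection machinery here is the two-sample Kolmogorov-Smirnov test: compute the maximum distance between the empirical CDFs of observed and model-predicted quantities, and reject if it exceeds a critical value at α = 0.05. The sketch below hand-rolls the statistic; the two samples are hypothetical stand-ins, not the paper's 15 fault observations.

```python
import numpy as np

def ks_two_sample(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: the largest vertical
    distance between the empirical CDFs of the two samples."""
    x, y = np.sort(x), np.sort(y)
    grid = np.concatenate([x, y])
    cdf_x = np.searchsorted(x, grid, side="right") / x.size
    cdf_y = np.searchsorted(y, grid, side="right") / y.size
    return float(np.max(np.abs(cdf_x - cdf_y)))

# Hypothetical stand-ins for observed slip-rate ratios and the ratios
# predicted by one candidate viscoelastic cycle model (15 faults each).
rng = np.random.default_rng(1)
observed = rng.normal(1.0, 0.2, size=15)
predicted = rng.normal(1.5, 0.2, size=15)

D = ks_two_sample(observed, predicted)
# Approximate large-sample critical value at alpha = 0.05:
D_crit = 1.358 * np.sqrt((15 + 15) / (15 * 15))
reject = D > D_crit   # True means this model is inconsistent with the data
```

Sweeping this comparison over a grid of (η_M, η_K) pairs is what carves out the rejected region of viscosity space described in the abstract.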
High precision analytical modelling of the solar non-conservative force field
NASA Astrophysics Data System (ADS)
Ziebart, M.; Adhya, S.; Cross, P.
2003-04-01
This paper outlines an automated analytical technique for solar radiation pressure and thermal re-radiation modelling that deals with spacecraft structural complexity, is applicable to many different types of spacecraft, and is simple to use. The technique utilises a simulation of the photon flux based on a pixel array and includes approaches for the thermal response of solar panels and multi-layered insulation. The rationale behind this approach is explained and the details of the modelling method are outlined. The roles of the technique at the stages of design, operation and post-processing analysis are defined. The effects of variations in the solar irradiance are discussed, as is the adaptation of the technique for the analysis of solar radiation torque. Importantly, practical implementation of the output model does not result in any degradation of the source data used in the analysis. The benefits of this technique over empirical approaches are discussed, using results from recent studies. The technique has been applied to the GLONASS IIn spacecraft, and is currently being applied to GPS Block IIR, JASON-1 and ENVISAT.
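The pixel-array technique rasterizes the spacecraft as seen from the Sun and sums a radiation-pressure force contribution from each illuminated surface element. The sketch below shows only the per-plate force kernel for a specular surface, under standard assumptions (nominal 1 AU flux, no shadowing, no thermal re-radiation); the full method's rasterization, multi-layer insulation handling, and thermal response are omitted.

```python
import numpy as np

C_LIGHT = 299_792_458.0    # speed of light, m/s
SOLAR_FLUX = 1367.0        # nominal solar irradiance at 1 AU, W/m^2

def plate_srp_force(area, normal, sun_dir, rho_spec):
    """Solar radiation pressure force (N) on one flat plate.

    `normal` is the unit outward plate normal, `sun_dir` the unit vector
    from spacecraft to Sun, `rho_spec` the specular reflectivity.
    Specular-only sketch; diffuse reflection and shadowing are ignored.
    """
    cos_theta = float(np.dot(normal, sun_dir))
    if cos_theta <= 0.0:                 # plate faces away from the Sun
        return np.zeros(3)
    P = SOLAR_FLUX / C_LIGHT             # radiation pressure, N/m^2
    absorbed = (1.0 - rho_spec) * sun_dir            # momentum of absorbed photons
    reflected = 2.0 * rho_spec * cos_theta * normal  # recoil from specular reflection
    return -P * area * cos_theta * (absorbed + reflected)
```

In a pixel-array implementation, each plate is subdivided into many small elements and this kernel is evaluated per visible pixel, which is how structural complexity and self-shadowing are captured without degrading the source geometry.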
Parallelization of the Coupled Earthquake Model
NASA Technical Reports Server (NTRS)
Block, Gary; Li, P. Peggy; Song, Yuhe T.
2007-01-01
This Web-based tsunami simulation system allows users to remotely run a model on JPL's supercomputers for a given undersea earthquake. At the time of this reporting, tsunami prediction had never before been offered over the Internet. This new code directly couples the earthquake model and the ocean model on parallel computers and improves simulation speed. Seismometers can only detect information from earthquakes; they cannot detect whether or not a tsunami may occur as a result of the earthquake. When earthquake-tsunami models are coupled with the improved computational speed of modern, high-performance computers and constrained by remotely sensed data, they are able to provide early warnings for those coastal regions at risk. The software is capable of testing NASA's satellite observations of tsunamis. It has been successfully tested for several historical tsunamis, has passed all alpha and beta testing, and is well documented for users.
Multiplicative earthquake likelihood models incorporating strain rates
NASA Astrophysics Data System (ADS)
Rhoades, D. A.; Christophersen, A.; Gerstenberger, M. C.
2017-01-01