An adaptive finite element method for the inequality-constrained Reynolds equation
NASA Astrophysics Data System (ADS)
Gustafsson, Tom; Rajagopal, Kumbakonam R.; Stenberg, Rolf; Videman, Juha
2018-07-01
We present a stabilized finite element method for the numerical solution of cavitation in lubrication, modeled as an inequality-constrained Reynolds equation. The cavitation model is written as a variable coefficient saddle-point problem and approximated by a residual-based stabilized method. Based on our recent results on the classical obstacle problem, we present optimal a priori estimates and derive novel a posteriori error estimators. The method is implemented as a Nitsche-type finite element technique and shown in numerical computations to be superior to the usually applied penalty methods.
2000-12-01
[Table-of-contents fragment from an unidentified report on numerical simulations; recoverable section headings include: Impact of a rod on a rigid wall; Impact of two ...; dissipative properties of the proposed scheme; Representative Numerical Simulations (Forging of ...); Model Problem II: a Simplified Model of Thin Beams.]
Using Real and Simulated TNOs to Constrain the Outer Solar System
NASA Astrophysics Data System (ADS)
Kaib, Nathan
2018-04-01
Over the past 2-3 decades our understanding of the outer solar system’s history and current state has evolved dramatically. An explosion in the number of detected trans-Neptunian objects (TNOs) coupled with simultaneous advances in numerical models of orbital dynamics has driven this rapid evolution. However, successfully constraining the orbital architecture and evolution of the outer solar system requires accurately comparing simulation results with observational datasets. This process is challenging because observed datasets are influenced by orbital discovery biases as well as TNO size and albedo distributions. Meanwhile, such influences are generally absent from numerical results. Here I will review recent work I and others have undertaken using numerical simulations in concert with catalogs of observed TNOs to constrain the outer solar system’s current orbital architecture and past evolution.
Doyle, Jessica M.; Gleeson, Tom; Manning, Andrew H.; Mayer, K. Ulrich
2015-01-01
Environmental tracers provide information on groundwater age, recharge conditions, and flow processes which can be helpful for evaluating groundwater sustainability and vulnerability. Dissolved noble gas data have proven particularly useful in mountainous terrain because they can be used to determine recharge elevation. However, tracer-derived recharge elevations have not been utilized as calibration targets for numerical groundwater flow models. Herein, we constrain and calibrate a regional groundwater flow model with noble-gas-derived recharge elevations for the first time. Tritium and noble gas tracer results improved the site conceptual model by identifying a previously uncertain contribution of mountain block recharge from the Coast Mountains to an alluvial coastal aquifer in humid southwestern British Columbia. The revised conceptual model was integrated into a three-dimensional numerical groundwater flow model and calibrated to hydraulic head data in addition to recharge elevations estimated from noble gas recharge temperatures. Recharge elevations proved to be imperative for constraining hydraulic conductivity, recharge location, and bedrock geometry, and thus minimizing model nonuniqueness. Results indicate that 45% of recharge to the aquifer is mountain block recharge. A similar match between measured and modeled heads was achieved in a second numerical model that excludes the mountain block (no mountain block recharge), demonstrating that hydraulic head data alone are incapable of quantifying mountain block recharge. This result has significant implications for understanding and managing source water protection in recharge areas, potential effects of climate change, the overall water budget, and ultimately ensuring groundwater sustainability.
CONORBIT: constrained optimization by radial basis function interpolation in trust regions
Regis, Rommel G.; Wild, Stefan M.
2016-09-26
Here, this paper presents CONORBIT (CONstrained Optimization by Radial Basis function Interpolation in Trust regions), a derivative-free algorithm for constrained black-box optimization where the objective and constraint functions are computationally expensive. CONORBIT employs a trust-region framework that uses interpolating radial basis function (RBF) models for the objective and constraint functions, and is an extension of the ORBIT algorithm. It uses a small margin for the RBF constraint models to facilitate the generation of feasible iterates, and extensive numerical tests confirm that such a margin is helpful in improving performance. CONORBIT is compared with other algorithms on 27 test problems, a chemical process optimization problem, and an automotive application. Numerical results show that CONORBIT performs better than COBYLA, a sequential penalty derivative-free method, an augmented Lagrangian method, a direct search method, and another RBF-based algorithm on the test problems and on the automotive application.
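A minimal sketch of the core idea (not the CONORBIT implementation, and with illustrative stand-in functions): fit RBF surrogates to a handful of expensive objective and constraint evaluations, then solve one trust-region subproblem on the surrogates with a small feasibility margin on the constraint model.

    import numpy as np
    from scipy.interpolate import RBFInterpolator
    from scipy.optimize import minimize

    # Stand-ins for expensive black-box functions (illustrative only).
    f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2
    g = lambda x: x[0] ** 2 + x[1] ** 2 - 1.0          # feasible iff g(x) <= 0

    rng = np.random.default_rng(1)
    X = rng.uniform(-1.5, 1.5, size=(20, 2))           # previously evaluated points
    f_rbf = RBFInterpolator(X, [f(x) for x in X], kernel="cubic")
    g_rbf = RBFInterpolator(X, [g(x) for x in X], kernel="cubic")

    center, radius, margin = np.zeros(2), 0.5, 1e-2
    step = minimize(
        lambda x: f_rbf(x[None, :])[0],                # objective surrogate
        center,
        method="SLSQP",
        bounds=[(c - radius, c + radius) for c in center],
        # require g_rbf(x) <= -margin to bias the step toward feasibility
        constraints=[{"type": "ineq",
                      "fun": lambda x: -g_rbf(x[None, :])[0] - margin}],
    )
    print(step.x, g(step.x))   # candidate point; accept or reject against the true functions

Here the trust region is simply the box of half-width radius around the current center; in a full trust-region method the sample set, radius, and margin would be updated adaptively from the agreement between surrogate and true function values.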
NASA Astrophysics Data System (ADS)
Parsons, R. A.; Nimmo, F.
2010-03-01
SHARAD observations constrain the thickness and dust content of lobate debris aprons (LDAs). Simulations of dust-free ice-sheet flow over a flat surface at 205 K for 10-100 m.y. give LDA lengths and thicknesses that are consistent with observations.
Trajectory optimization and guidance law development for national aerospace plane applications
NASA Technical Reports Server (NTRS)
Calise, A. J.; Flandro, G. A.; Corban, J. E.
1988-01-01
The work completed to date comprises the following: a simple vehicle model representative of the aerospace plane concept in the hypersonic flight regime; fuel-optimal climb profiles for the unconstrained and dynamic pressure constrained cases, generated using a reduced-order dynamic model; an analytic switching condition for transition to rocket-powered flight as orbital velocity is approached; simple feedback guidance laws for both the unconstrained and dynamic pressure constrained cases, derived via singular perturbation theory and a nonlinear transformation technique; and numerical simulation results for ascent to orbit in the dynamic pressure constrained case.
Liao, Bolin; Zhang, Yunong; Jin, Long
2016-02-01
In this paper, a new Taylor-type numerical differentiation formula is first presented to discretize the continuous-time Zhang neural network (ZNN), and obtain higher computational accuracy. Based on the Taylor-type formula, two Taylor-type discrete-time ZNN models (termed Taylor-type discrete-time ZNNK and Taylor-type discrete-time ZNNU models) are then proposed and discussed to perform online dynamic equality-constrained quadratic programming. For comparison, Euler-type discrete-time ZNN models (called Euler-type discrete-time ZNNK and Euler-type discrete-time ZNNU models) and Newton iteration, with interesting links being found, are also presented. It is proved herein that the steady-state residual errors of the proposed Taylor-type discrete-time ZNN models, Euler-type discrete-time ZNN models, and Newton iteration have the patterns of O(h^3), O(h^2), and O(h), respectively, with h denoting the sampling gap. Numerical experiments, including the application examples, are carried out, of which the results further substantiate the theoretical findings and the efficacy of Taylor-type discrete-time ZNN models. Finally, the comparisons with Taylor-type discrete-time derivative model and other Lagrange-type discrete-time ZNN models for dynamic equality-constrained quadratic programming substantiate the superiority of the proposed Taylor-type discrete-time ZNN models once again.
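The error patterns above reflect the order of the one-step difference formula used to discretize the continuous-time model. The exact Taylor-type formula from the paper is not reproduced here, but the following sketch shows how the observed order of such formulas can be checked numerically (the Euler-type two-point difference is O(h); a three-point backward difference is O(h^2)).

    import numpy as np

    def observed_order(diff, f, df, t=1.0):
        """Fit the slope of log(error) versus log(h) for a derivative formula."""
        hs = np.logspace(-1.0, -3.0, 10)
        errs = [abs(diff(f, t, h) - df(t)) for h in hs]
        return np.polyfit(np.log(hs), np.log(errs), 1)[0]

    euler = lambda f, t, h: (f(t) - f(t - h)) / h                              # O(h)
    three_point = lambda f, t, h: (3*f(t) - 4*f(t - h) + f(t - 2*h)) / (2*h)   # O(h^2)

    print(observed_order(euler, np.sin, np.cos))        # ~1
    print(observed_order(three_point, np.sin, np.cos))  # ~2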
On meeting capital requirements with a chance-constrained optimization model.
Atta Mills, Ebenezer Fiifi Emire; Yu, Bo; Gu, Lanlan
2016-01-01
This paper deals with a capital to risk asset ratio chance-constrained optimization model in the presence of loans, treasury bills, fixed assets and non-interest-earning assets. To model the dynamics of loans, we introduce a modified CreditMetrics approach. This leads to the development of a deterministic convex counterpart of the capital to risk asset ratio chance constraint. We analyze our model under the worst-case scenario, i.e., loan default. The theoretical model is analyzed by applying numerical procedures in order to derive valuable insights from a financial perspective. Our results suggest that our capital to risk asset ratio chance-constrained optimization model guarantees that banks meet the capital requirements of Basel III with a likelihood of 95% irrespective of changes in the future market value of assets.
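The deterministic convex counterpart mentioned above follows the standard reformulation of an individual Gaussian chance constraint, P(a^T x >= b) >= 1 - eps if and only if mu^T x - z_{1-eps} * sqrt(x^T Sigma x) >= b. The sketch below uses made-up numbers (not the paper's CreditMetrics-based model) and checks the reformulation against Monte Carlo sampling.

    import numpy as np
    from scipy.stats import norm

    mu = np.array([1.2, 0.8])            # mean coefficients (illustrative)
    Sigma = np.diag([0.04, 0.09])        # covariance (illustrative)
    x = np.array([1.0, 1.0])             # decision vector
    b, eps = 1.35, 0.05                  # threshold and risk level (95% constraint)

    # Deterministic convex counterpart of P(a^T x >= b) >= 1 - eps for a ~ N(mu, Sigma)
    det_ok = mu @ x - norm.ppf(1 - eps) * np.sqrt(x @ Sigma @ x) >= b

    # Monte Carlo check of the original chance constraint
    a = np.random.default_rng(0).multivariate_normal(mu, Sigma, 200_000)
    mc_ok = np.mean(a @ x >= b) >= 1 - eps

    print(det_ok, mc_ok)   # both True for these numbers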
Fuzzy multi-objective chance-constrained programming model for hazardous materials transportation
NASA Astrophysics Data System (ADS)
Du, Jiaoman; Yu, Lean; Li, Xiang
2016-04-01
Hazardous materials transportation is an important and hot issue of public safety. Based on the shortest path model, this paper presents a fuzzy multi-objective programming model that minimizes the transportation risk to life, travel time and fuel consumption. First, we present the risk model, travel time model and fuel consumption model. Furthermore, we formulate a chance-constrained programming model within the framework of credibility theory, in which the lengths of arcs in the transportation network are assumed to be fuzzy variables. A hybrid intelligent algorithm integrating fuzzy simulation and genetic algorithm is designed for finding a satisfactory solution. Finally, some numerical examples are given to demonstrate the efficiency of the proposed model and algorithm.
Epoch of reionization 21 cm forecasting from MCMC-constrained semi-numerical models
NASA Astrophysics Data System (ADS)
Hassan, Sultan; Davé, Romeel; Finlator, Kristian; Santos, Mario G.
2017-06-01
The recent low value of the Planck Collaboration XLVII integrated optical depth to Thomson scattering suggests that reionization occurred fairly suddenly, disfavouring extended reionization scenarios. This will have a significant impact on the 21 cm power spectrum. Using a semi-numerical framework, we improve our model from instantaneous to include time-integrated ionization and recombination effects, and find that this leads to more sudden reionization. It also yields larger H II bubbles that lead to an order of magnitude more 21 cm power on large scales, while suppressing the small-scale ionization power. Local fluctuations in the neutral hydrogen density play the dominant role in boosting the 21 cm power spectrum on large scales, while recombinations are subdominant. We use a Markov chain Monte Carlo approach to constrain our model to observations of the star formation rate functions at z = 6, 7, 8 from Bouwens et al., the Planck Collaboration XLVII optical depth measurements and the Becker & Bolton ionizing emissivity data at z ~ 5. We then use this constrained model to perform 21 cm forecasting for the Low Frequency Array, Hydrogen Epoch of Reionization Array and Square Kilometre Array in order to determine how well such data can characterize the sources driving reionization. We find that the mock 21 cm power spectrum alone can somewhat constrain the halo mass dependence of ionizing sources, the photon escape fraction and ionizing amplitude, but combining the mock 21 cm data with other current observations enables us to separately constrain all these parameters. Our framework illustrates how future 21 cm data can play a key role in understanding the sources and topology of reionization as observations improve.
A Method to Constrain Mass and Spin of GRB Black Holes within the NDAF Model
NASA Astrophysics Data System (ADS)
Liu, Tong; Xue, Li; Zhao, Xiao-Hong; Zhang, Fu-Wen; Zhang, Bing
2016-04-01
Black holes (BHs) hide themselves behind various astronomical phenomena and their properties, i.e., mass and spin, are usually difficult to constrain. One leading candidate for the central engine model of gamma-ray bursts (GRBs) invokes a stellar mass BH and a neutrino-dominated accretion flow (NDAF), with the relativistic jet launched due to neutrino-anti-neutrino annihilations. Such a model gives rise to a matter-dominated fireball, and is suitable to interpret GRBs with a dominant thermal component of photospheric origin. We propose a method to constrain BH mass and spin within the framework of this model and apply the method to the thermally dominant GRB 101219B, whose initial jet launching radius, r_0, is constrained from the data. Using our numerical model of NDAF jets, we estimate the following constraints on the central BH: mass M_BH ~ 5-9 M⊙, spin parameter a* ≳ 0.6, and disk mass 3 M⊙ ≲ M_disk ≲ 4 M⊙. Our results also suggest that the NDAF model is a competitive candidate for the central engine of GRBs with a strong thermal component.
How to constrain multi-objective calibrations of the SWAT model using water balance components
USDA-ARS?s Scientific Manuscript database
Automated procedures are often used to provide adequate fits between hydrologic model estimates and observed data. While the models may provide good fits based upon numeric criteria, they may still not accurately represent the basic hydrologic characteristics of the represented watershed. Here we ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Armentrout, J.M.; Smith-Rouch, L.S.; Bowman, S.A.
1996-08-01
Numeric simulations based on integrated data sets enhance our understanding of depositional geometry and facilitate quantification of depositional processes. Numeric values tested against well-constrained geologic data sets can then be used in iterations testing each variable, and in predicting lithofacies distributions under various depositional scenarios using the principles of sequence stratigraphic analysis. The stratigraphic modeling software provides a broad spectrum of techniques for modeling and testing elements of the petroleum system. Using well-constrained geologic examples, variations in depositional geometry and lithofacies distributions between different tectonic settings (passive vs. active margin) and climate regimes (hothouse vs. icehouse) can provide insight to potential source rock and reservoir rock distribution, maturation timing, migration pathways, and trap formation. Two data sets are used to illustrate such variations: both include a seismic reflection profile calibrated by multiple wells. The first is a Pennsylvanian mixed carbonate-siliciclastic system in the Paradox basin, and the second a Pliocene-Pleistocene siliciclastic system in the Gulf of Mexico. Numeric simulations result in geometry and facies distributions consistent with those interpreted using the integrated stratigraphic analysis of the calibrated seismic profiles. An exception occurs in the Gulf of Mexico study where the simulated sediment thickness from 3.8 to 1.6 Ma within an upper slope minibasin was less than that mapped using a regional seismic grid. Regional depositional patterns demonstrate that this extra thickness was probably sourced from out of the plane of the modeled transect, illustrating the necessity for three-dimensional constraints on two-dimensional modeling.
Spectral method for a kinetic swarming model
Gamba, Irene M.; Haack, Jeffrey R.; Motsch, Sebastien
2015-04-28
Here we present the first numerical method for a kinetic description of the Vicsek swarming model. The kinetic model poses a unique challenge, as there is a distribution dependent collision invariant to satisfy when computing the interaction term. We use a spectral representation linked with a discrete constrained optimization to compute these interactions. To test the numerical scheme we investigate the kinetic model at different scales and compare the solution with the microscopic and macroscopic descriptions of the Vicsek model. Lastly, we observe that the kinetic model captures key features such as vortex formation and traveling waves.
Evaluating Micrometeorological Estimates of Groundwater Discharge from Great Basin Desert Playas
NASA Astrophysics Data System (ADS)
Jackson, T.; Halford, K. J.; Gardner, P.
2017-12-01
Groundwater availability studies in the arid southwestern United States traditionally have assumed that groundwater discharge by evapotranspiration (ETg) from desert playas is a significant component of the groundwater budget. This assumption persists because desert playa ETg rates are poorly constrained by Bowen ratio energy budget (BREB) and eddy-covariance (EC) micrometeorological measurement approaches. Best attempts by previous studies to constrain ETg from desert playas have resulted in ETg rates that are below the detection limit of micrometeorological approaches. This study uses numerical models to further constrain desert playa ETg rates that are below the detection limit of EC (0.1 mm/d) and BREB (0.3 mm/d) approaches, and to evaluate the effect of hydraulic properties and salinity-based groundwater-density contrasts on desert playa ETg rates. Numerical models simulated ETg rates from desert playas in Death Valley, California, and Dixie Valley, Nevada. Results indicate that actual ETg rates from desert playas are significantly below the upper detection limits provided by the BREB- and EC-based micrometeorological measurements. Discharge from desert playas contributes less than 2 percent of total groundwater discharge from Dixie and Death Valleys, which suggests discharge from desert playas is negligible in other basins. Numerical simulation results also show that ETg from desert playas primarily is limited by differences in hydraulic properties between alluvial fan and playa sediments and, to a lesser extent, by salinity-based groundwater density contrasts.
Numerical modeling of Drangajökull Ice Cap, NW Iceland
NASA Astrophysics Data System (ADS)
Anderson, Leif S.; Jarosch, Alexander H.; Flowers, Gwenn E.; Aðalgeirsdóttir, Guðfinna; Magnússon, Eyjólfur; Pálsson, Finnur; Muñoz-Cobo Belart, Joaquín; Þorsteinsson, Þorsteinn; Jóhannesson, Tómas; Sigurðsson, Oddur; Harning, David; Miller, Gifford H.; Geirsdóttir, Áslaug
2016-04-01
Over the past century the Arctic has warmed twice as fast as the global average. This discrepancy is likely due to feedbacks inherent to the Arctic climate system. These Arctic climate feedbacks are currently poorly quantified, but are essential to future climate predictions based on global circulation modeling. Constraining the magnitude and timing of past Arctic climate changes allows us to test climate feedback parameterizations at different times with different boundary conditions. Because Holocene Arctic summer temperature changes have been largest in the North Atlantic (Kaufman et al., 2004), we focus on constraining the paleoclimate of Iceland. Glaciers are highly sensitive to changes in temperature and precipitation amount. This sensitivity allows for the estimation of paleoclimate using glacier models, modern glacier mass balance data, and past glacier extents. We apply our model to the Drangajökull ice cap (~150 sq. km) in NW Iceland. Our numerical model is resolved in two dimensions, conserves mass, and applies the shallow-ice approximation. The bed DEM used in the model runs was constructed from radio echo data surveyed in spring 2014. We constrain the modern surface mass balance of Drangajökull using: 1) ablation and accumulation stakes; 2) ice surface digital elevation models (DEMs) from satellite, airborne LiDAR, and aerial photographs; and 3) full-Stokes model-derived vertical ice velocities. The modeled vertical ice velocities and ice surface DEMs are combined to estimate past surface mass balance. We constrain Holocene glacier geometries using moraines and trimlines (e.g., Brynjolfsson et al., 2014), proglacial-lake cores, and radiocarbon-dated dead vegetation emerging from under the modern glacier. We present a sensitivity analysis of the model to changes in parameters and show the effect of step changes of temperature and precipitation on glacier extent. Our results are placed in context with local lacustrine and marine climate proxies as well as with glacier extent and volume changes across the North Atlantic.
The Disk of 48 Lib Revealed by NPOI
NASA Astrophysics Data System (ADS)
Lembryk, Ludwik; Tycner, C.; Sigut, A.; Zavala, R. T.
2013-01-01
We present a study of the disk around the Be star 48 Lib, where NLTE numerical disk models are compared to spectral and interferometric data to constrain the physical properties of the inner disk structure. The computational models are generated using the BEDISK code, which accounts for heating and cooling of various atoms in the disk and assumes solar chemical composition. A large set of self-consistent disk models produced with the BEDISK code is in turn used to generate synthetic spectra and images assuming a wide range of inclination angles using the BERAY code. The aim of this project is to constrain the physical properties as well as the inclination angles using both spectroscopic and interferometric data. The interferometric data were obtained using the Naval Precision Optical Interferometer (NPOI), with the focus on hydrogen Balmer-alpha emission, which is the strongest emission line present due to the circumstellar structure. Because 48 Lib shows clearly asymmetric spectral lines, we discuss how we model the asymmetric peaks of the Hα line by combining two models computed with different density structures. The corresponding synthetic images of these combined density structures are then Fourier transformed and compared to the interferometric data. This numerical strategy has the potential to easily model the commonly observed variation of the ratio of the violet-to-red (V/R) emission peaks and constrain the long-term variability associated with the disk of 48 Lib as well as other emission-line stars that show similar variability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dartevelle, Sebastian
2007-10-01
Large-scale volcanic eruptions are hazardous events that cannot be described by detailed and accurate in situ measurements: hence, little to no real-time data exists to rigorously validate current computer models of these events. In addition, such phenomenology involves highly complex, nonlinear, and unsteady physical behaviors upon many spatial and time scales. As a result, volcanic explosive phenomenology is poorly understood in terms of its physics, and inadequately constrained in terms of initial, boundary, and inflow conditions. Nevertheless, code verification and validation become even more critical because more and more volcanologists use numerical data for assessment and mitigation of volcanic hazards. In this report, we evaluate the process of model and code development in the context of geophysical multiphase flows. We describe: (1) the conception of a theoretical, multiphase, Navier-Stokes model, (2) its implementation into a numerical code, (3) the verification of the code, and (4) the validation of such a model within the context of turbulent and underexpanded jet physics. Within the validation framework, we suggest focusing on the key physics that control the volcanic clouds, namely the momentum-driven supersonic jet and the buoyancy-driven turbulent plume. For instance, we propose to compare numerical results against a set of simple and well-constrained analog experiments, each of which uniquely and unambiguously represents one of the key phenomenologies.
NASA Astrophysics Data System (ADS)
Li, Guang
2017-01-01
This paper presents a fast constrained optimization approach tailored for nonlinear model predictive control of wave energy converters (WECs). The advantage of this approach lies in its exploitation of the differential flatness of the WEC model, which reduces the dimension of the nonlinear programming problem (NLP) derived from the continuous constrained optimal control of the WEC using a pseudospectral method. The resulting alleviation of the computational burden helps to promote an economic implementation of a nonlinear model predictive control strategy for WEC control problems. The method is applicable to nonlinear WEC models, nonconvex objective functions and nonlinear constraints, which are commonly encountered in WEC control problems. Numerical simulations demonstrate the efficacy of this approach.
Constrained evolution in numerical relativity
NASA Astrophysics Data System (ADS)
Anderson, Matthew William
The strongest potential source of gravitational radiation for current and future detectors is the merger of binary black holes. Full numerical simulation of such mergers can provide realistic signal predictions and enhance the probability of detection. Numerical simulation of the Einstein equations, however, is fraught with difficulty. Stability even in static test cases of single black holes has proven elusive. Common to unstable simulations is the growth of constraint violations. This work examines the effect of controlling the growth of constraint violations by solving the constraints periodically during a simulation, an approach called constrained evolution. The effects of constrained evolution are contrasted with the results of unconstrained evolution, evolution where the constraints are not solved during the course of a simulation. Two different formulations of the Einstein equations are examined: the standard ADM formulation and the generalized Frittelli-Reula formulation. In most cases constrained evolution vastly improves the stability of a simulation at minimal computational cost when compared with unconstrained evolution. However, in the more demanding test cases examined, constrained evolution fails to produce simulations with long-term stability in spite of producing improvements in simulation lifetime when compared with unconstrained evolution. Constrained evolution is also examined in conjunction with a wide variety of promising numerical techniques, including mesh refinement and overlapping Cartesian and spherical computational grids. Constrained evolution in boosted black hole spacetimes is investigated using overlapping grids. Constrained evolution proves to be central to the host of innovations required in carrying out such intensive simulations.
Modeling Biodegradation and Reactive Transport: Analytical and Numerical Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Y; Glascoe, L
The computational modeling of the biodegradation of contaminated groundwater systems, accounting for biochemical reactions coupled to contaminant transport, is a valuable tool for both the field engineer/planner with limited computational resources and the expert computational researcher less constrained by time and computer power. There exist several analytical and numerical computer models that have been and are being developed to cover the practical needs put forth by users to fulfill this spectrum of computational demands. Generally, analytical models provide rapid and convenient screening tools running on very limited computational power, while numerical models can provide more detailed information with consequent requirements of greater computational time and effort. While these analytical and numerical computer models can provide accurate and adequate information to produce defensible remediation strategies, decisions based on inadequate modeling output or on over-analysis can have costly and risky consequences. In this chapter we consider both analytical and numerical modeling approaches to biodegradation and reactive transport. Both approaches are discussed and analyzed in terms of achieving bioremediation goals, recognizing that there is always a tradeoff between computational cost and the resolution of simulated systems.
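As a minimal illustration of the analytical-versus-numerical tradeoff (not a model from the chapter), first-order biodegradation dC/dt = -kC can be screened with its exact solution or stepped numerically; the rate constant and concentrations below are purely illustrative.

    import numpy as np

    k, C0, t_end, n = 0.05, 1.0, 100.0, 1000      # 1/day, mg/L, days, time steps
    t = np.linspace(0.0, t_end, n + 1)
    analytical = C0 * np.exp(-k * t)              # exact screening-model solution

    numerical = np.empty_like(t)
    numerical[0] = C0
    dt = t_end / n
    for i in range(n):
        numerical[i + 1] = numerical[i] * (1.0 - k * dt)   # explicit Euler step

    print(abs(analytical[-1] - numerical[-1]))    # small discretization error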
Toward Overcoming the Local Minimum Trap in MFBD
2015-07-14
Publications during the first two years of this grant and the reporting period include: A. Cornelio, E. Loli Piccolomini, and J. G. Nagy, "Constrained Variable Projection Method for Blind Deconvolution"; and A. Cornelio, E. Loli Piccolomini, and J. G. Nagy, "Constrained Numerical Optimization Methods for Blind Deconvolution", Numerical Algorithms, volume 65, issue 1.
NASA Technical Reports Server (NTRS)
Anderson, John R.; Wilbur, Paul J.
1989-01-01
The potential usefulness of the constrained sheath optics concept as a means of controlling the divergence of low energy, high current density ion beams is examined numerically and experimentally. Numerical results demonstrate that some control of the divergence of typical ion beamlets can be achieved at perveance levels of interest by contouring the surface of the constrained sheath properly. Experimental results demonstrate that a sheath can be constrained by a wire mesh attached to the screen plate of the ion optics system. The numerically predicted beamlet divergence characteristics are shown to depart from those measured experimentally, and additional numerical analysis is used to demonstrate that this departure is probably due to distortions of the sheath caused by the fact that it attempts to conform to the individual wires that make up the sheath constraining mesh. The concept is considered potentially useful in controlling the divergence of ion beamlets in applications where low divergence, low energy, high current density beamlets are being sought, but more work is required to demonstrate this for net beam ion energies as low as 5 eV.
Quantifying How Observations Inform a Numerical Reanalysis of Hawaii
NASA Astrophysics Data System (ADS)
Powell, B. S.
2017-11-01
When assimilating observations into a model via state-estimation, it is possible to quantify how each observation changes the modeled estimate of a chosen oceanic metric. Using an existing 2 year reanalysis of Hawaii that includes more than 31 million observations from satellites, ships, SeaGliders, and autonomous floats, I assess which observations most improve the estimates of the transport and eddy kinetic energy. When the SeaGliders were in the water, they comprised less than 2.5% of the data, but accounted for 23% of the transport adjustment. Because the model physics constrains advanced state-estimation, the prescribed covariances are propagated in time to identify observation-model covariance. I find that observations that constrain the isopycnal tilt across the transport section provide the greatest impact in the analysis. In the case of eddy kinetic energy, observations that constrain the surface-driven upper ocean have more impact. This information can help to identify optimal sampling strategies to improve both state-estimates and forecasts.
A VAS-numerical model impact study using the Gal-Chen variational approach
NASA Technical Reports Server (NTRS)
Aune, Robert M.; Tuccillo, James J.; Uccellini, Louis W.; Petersen, Ralph A.
1987-01-01
A numerical study based on the use of a variational assimilation technique of Gal-Chen (1983, 1986) was conducted to assess the impact of incorporating temperature data from the VISSR Atmospheric Sounder (VAS) into a regional-scale numerical model. A comparison with the results of a control forecast using only conventional data indicated that the assimilation technique successfully combines actual VAS temperature observations with the dynamically balanced model fields without destabilizing the model during the assimilation cycle. Moreover, increasing the temporal frequency of VAS temperature insertions during the assimilation cycle was shown to enhance the impact on the model forecast through successively longer forecast periods. The incorporation of a nudging technique, whereby the model temperature field is constrained toward the VAS 'updated' values during the assimilation cycle, further enhances the impact of the VAS temperature data.
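A nudging (Newtonian relaxation) update of the kind described above can be written as T_model <- T_model + dt*(T_obs - T_model)/tau; the sketch below uses purely illustrative values and is not the Gal-Chen assimilation scheme itself.

    # Minimal sketch of Newtonian relaxation ("nudging"): during the assimilation
    # window the model temperature is relaxed toward an observed (e.g., VAS-derived)
    # value with relaxation time scale tau; names and numbers are illustrative only.
    def nudge(T_model, T_obs, dt, tau):
        return T_model + dt * (T_obs - T_model) / tau

    T_model, T_obs = 288.0, 290.5        # K
    for _ in range(12):                  # twelve 600 s steps = 2 h assimilation cycle
        T_model = nudge(T_model, T_obs, dt=600.0, tau=3600.0)
    print(T_model)                       # drawn most of the way toward the observation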
On the Reconstruction of Palaeo-Ice Sheets: Recent Advances and Future Challenges
NASA Technical Reports Server (NTRS)
Stokes, Chris R.; Tarasov, Lev; Blomdin, Robin; Cronin, Thomas M.; Fisher, Timothy G.; Gyllencreutz, Richard; Hattestrand, Clas; Heyman, Jakob; Hindmarsh, Richard C. A.; Hughes, Anna L. C.; Jakobsson, Martin; Kirchner, Nina; Livingstone, Stephen J.; Margold, Martin; Murton, Julian B.; Noormets, Riko; Peltier, W. Richard; Peteet, Dorothy M.; Piper, David J. W.; Preusser, Frank; Renssen, Hans; Roberts, David H.; Roche, Didier M.; Saint-Ange, Francky; Stroeven, Arjen P.; Teller, James T.
2015-01-01
Reconstructing the growth and decay of palaeo-ice sheets is critical to understanding mechanisms of global climate change and associated sea-level fluctuations in the past, present and future. The significance of palaeo-ice sheets is further underlined by the broad range of disciplines concerned with reconstructing their behaviour, many of which have undergone a rapid expansion since the 1980s. In particular, there has been a major increase in the size and qualitative diversity of empirical data used to reconstruct and date ice sheets, and major improvements in our ability to simulate their dynamics in numerical ice sheet models. These developments have made it increasingly necessary to forge interdisciplinary links between sub-disciplines and to link numerical modelling with observations and dating of proxy records. The aim of this paper is to evaluate recent developments in the methods used to reconstruct ice sheets and outline some key challenges that remain, with an emphasis on how future work might integrate terrestrial and marine evidence together with numerical modelling. Our focus is on pan-ice sheet reconstructions of the last deglaciation, but regional case studies are used to illustrate methodological achievements, challenges and opportunities. Whilst various disciplines have made important progress in our understanding of ice-sheet dynamics, it is clear that data-model integration remains under-used, and that uncertainties remain poorly quantified in both empirically-based and numerical ice-sheet reconstructions. The representation of past climate will continue to be the largest source of uncertainty for numerical modelling. As such, palaeo-observations are critical to constrain and validate modelling. State-of-the-art numerical models will continue to improve both in model resolution and in the breadth of inclusion of relevant processes, thereby enabling more accurate and more direct comparison with the increasing range of palaeo-observations. Thus, the capability is developing to use all relevant palaeo-records to more strongly constrain deglacial (and to a lesser extent pre-LGM) ice sheet evolution. In working towards that goal, the accurate representation of uncertainties is required for both constraint data and model outputs. Close cooperation between modelling and data-gathering communities is essential to ensure this capability is realised and continues to progress.
Slab stagnation and detachment under northeast China
NASA Astrophysics Data System (ADS)
Honda, Satoru
2016-03-01
Results of tomography models around the Japanese Islands show the existence of a gap between the horizontally lying (stagnant) slab extending under northeastern China and the fast seismic velocity anomaly in the lower mantle. A simple conversion from the fast velocity anomaly to a low-temperature anomaly shows a similar feature. This feature appears to be inconsistent with the results of numerical simulations on the interaction between the slab and phase transitions with temperature-dependent viscosity. Such numerical models predict a continuous slab throughout the mantle. I extend previous analyses of the tomography model and model calculations to infer the origins of the gap beneath northeastern China. Results of numerical simulations that take the geologic history of the subduction zone into account suggest two possible origins for the gap: (1) the opening of the Japan Sea led to a breaking off of the otherwise continuous subducting slab, or (2) the western edge of the stagnant slab is the previously subducted ridge, which was the plate boundary between the extinct Izanagi and the Pacific plates. Origin (2), which implies that the present horizontally lying slab has accumulated since the ridge subduction, is preferable for explaining the present length of the horizontally lying slab in the upper mantle. Numerical models of origin (1) predict a stagnant slab in the upper mantle that is too short, and a narrow or non-existent gap. Preferred models require rather stronger flow resistance at the 660-km phase change than expected from current estimates of the phase transition properties. Future detailed estimates of the amount of the subducted Izanagi plate and the present stagnant slab would be useful to constrain the models. A systematic along-arc variation of the slab morphology from the northeast Japan to Kurile arcs is also recognized, and its understanding may constrain the 3D mantle flow there.
NASA Astrophysics Data System (ADS)
Davis, Joshua R.; Giorgis, Scott
2014-11-01
We describe a three-part approach for modeling shape preferred orientation (SPO) data of spheroidal clasts. The first part consists of criteria to determine whether a given SPO and clast shape are compatible. The second part is an algorithm for randomly generating spheroid populations that match a prescribed SPO and clast shape. In the third part, numerical optimization software is used to infer deformation from spheroid populations, by finding the deformation that returns a set of post-deformation spheroids to a minimally anisotropic initial configuration. Two numerical experiments explore the strengths and weaknesses of this approach, while giving information about the sensitivity of the model to noise in data. In monoclinic transpression of oblate rigid spheroids, the model is found to constrain the shortening component but not the simple shear component. This modeling approach is applied to previously published SPO data from the western Idaho shear zone, a monoclinic transpressional zone that deformed a feldspar megacrystic gneiss. Results suggest at most 5 km of shortening, as well as pre-deformation SPO fabric. The shortening estimate is corroborated by a second model that assumes no pre-deformation fabric.
NASA Astrophysics Data System (ADS)
Tamayo-Mas, Elena; Bianchi, Marco; Mansour, Majdi
2018-03-01
This study investigates the impact of model complexity and multi-scale prior hydrogeological data on the interpretation of pumping test data in a dual-porosity aquifer (the Chalk aquifer in England, UK). In order to characterize the hydrogeological properties, different approaches ranging from a traditional analytical solution (Theis approach) to more sophisticated numerical models with automatically calibrated input parameters are applied. Comparisons of results from the different approaches show that neither traditional analytical solutions nor a numerical model assuming a homogenous and isotropic aquifer can adequately explain the observed drawdowns. A better reproduction of the observed drawdowns in all seven monitoring locations is instead achieved when medium and local-scale prior information about the vertical hydraulic conductivity (K) distribution is used to constrain the model calibration process. In particular, the integration of medium-scale vertical K variations based on flowmeter measurements lead to an improvement in the goodness-of-fit of the simulated drawdowns of about 30%. Further improvements (up to 70%) were observed when a simple upscaling approach was used to integrate small-scale K data to constrain the automatic calibration process of the numerical model. Although the analysis focuses on a specific case study, these results provide insights about the representativeness of the estimates of hydrogeological properties based on different interpretations of pumping test data, and promote the integration of multi-scale data for the characterization of heterogeneous aquifers in complex hydrogeological settings.
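The traditional analytical benchmark referenced above is the Theis solution for a fully penetrating well in a confined, homogeneous, isotropic aquifer, s = Q/(4*pi*T) * W(u) with u = r^2*S/(4*T*t). A minimal sketch follows, with illustrative parameter values rather than those of the Chalk aquifer study.

    import numpy as np
    from scipy.special import exp1   # exponential integral E1 = Theis well function W(u)

    def theis_drawdown(r, t, Q, T, S):
        """Theis drawdown (m) at radius r (m) and time t (s) for pumping rate Q (m^3/s),
        transmissivity T (m^2/s) and storativity S (dimensionless)."""
        u = r**2 * S / (4.0 * T * t)
        return Q / (4.0 * np.pi * T) * exp1(u)

    # Illustrative values only.
    print(theis_drawdown(r=50.0, t=86400.0, Q=0.01, T=5e-4, S=1e-4))   # ~9.5 m after one day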
Pezzulo, Giovanni; Barsalou, Lawrence W.; Cangelosi, Angelo; Fischer, Martin H.; McRae, Ken; Spivey, Michael J.
2013-01-01
Grounded theories assume that there is no central module for cognition. According to this view, all cognitive phenomena, including those considered the province of amodal cognition such as reasoning, numeric, and language processing, are ultimately grounded in (and emerge from) a variety of bodily, affective, perceptual, and motor processes. The development and expression of cognition is constrained by the embodiment of cognitive agents and various contextual factors (physical and social) in which they are immersed. The grounded framework has received numerous empirical confirmations. Still, there are very few explicit computational models that implement grounding in sensory, motor and affective processes as intrinsic to cognition, and demonstrate that grounded theories can mechanistically implement higher cognitive abilities. We propose a new alliance between grounded cognition and computational modeling toward a novel multidisciplinary enterprise: Computational Grounded Cognition. We clarify the defining features of this novel approach and emphasize the importance of using the methodology of Cognitive Robotics, which permits simultaneous consideration of multiple aspects of grounding, embodiment, and situatedness, showing how they constrain the development and expression of cognition. PMID:23346065
Local dynamic subgrid-scale models in channel flow
NASA Technical Reports Server (NTRS)
Cabot, William H.
1994-01-01
The dynamic subgrid-scale (SGS) model has given good results in the large-eddy simulation (LES) of homogeneous isotropic or shear flow, and in the LES of channel flow, using averaging in two or three homogeneous directions (the DA model). In order to simulate flows in general, complex geometries (with few or no homogeneous directions), the dynamic SGS model needs to be applied at a local level in a numerically stable way. Channel flow, which is inhomogeneous and wall-bounded flow in only one direction, provides a good initial test for local SGS models. Tests of the dynamic localization model were performed previously in channel flow using a pseudospectral code and good results were obtained. Numerical instability due to persistently negative eddy viscosity was avoided by either constraining the eddy viscosity to be positive or by limiting the time that eddy viscosities could remain negative by co-evolving the SGS kinetic energy (the DLk model). The DLk model, however, was too expensive to run in the pseudospectral code due to a large near-wall term in the auxiliary SGS kinetic energy (k) equation. One objective was then to implement the DLk model in a second-order central finite difference channel code, in which the auxiliary k equation could be integrated implicitly in time at great reduction in cost, and to assess its performance in comparison with the plane-averaged dynamic model or with no model at all, and with direct numerical simulation (DNS) and/or experimental data. Other local dynamic SGS models have been proposed recently, e.g., constrained dynamic models with random backscatter, and with eddy viscosity terms that are averaged in time over material path lines rather than in space. Another objective was to incorporate and test these models in channel flow.
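A minimal illustration of the two stabilizations mentioned above (the values are synthetic, not LES output): locally negative dynamic eddy viscosities can either be constrained to be non-negative ("clipped") or replaced by an average over a homogeneous direction, as in the plane-averaged DA model.

    import numpy as np

    rng = np.random.default_rng(0)
    nu_t = rng.normal(loc=1e-4, scale=2e-4, size=(64, 64))   # dynamic-model output, may be < 0

    nu_t_clipped = np.maximum(nu_t, 0.0)                     # local positivity constraint
    nu_t_plane = nu_t.mean(axis=1, keepdims=True)            # plane-averaged coefficient

    print(float(nu_t.min()), float(nu_t_clipped.min()), float(nu_t_plane.mean()))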
NASA Astrophysics Data System (ADS)
Ming, Mei-Jun; Xu, Long-Kun; Wang, Fan; Bi, Ting-Jun; Li, Xiang-Yuan
2017-07-01
In this work, a matrix form of a numerical algorithm for the spectral shift is presented, based on the novel nonequilibrium solvation model that is established by introducing the constrained equilibrium manipulation. This form is convenient for the development of codes for numerical solution. By means of the integral equation formulation polarizable continuum model (IEF-PCM), a subroutine has been implemented to compute the spectral shift numerically. Here, the spectral shifts of absorption spectra for several popular chromophores, N,N-diethyl-p-nitroaniline (DEPNA), methylenecyclopropene (MCP), acrolein (ACL) and p-nitroaniline (PNA), were investigated in different solvents with various polarities. The computed spectral shifts can explain the available experimental findings reasonably well. We discuss the contributions of solute geometry distortion, electrostatic polarization and other non-electrostatic interactions to the spectral shift.
Evaluation of gravitational gradients generated by Earth's crustal structures
NASA Astrophysics Data System (ADS)
Novák, Pavel; Tenzer, Robert; Eshagh, Mehdi; Bagherbandi, Mohammad
2013-02-01
Spectral formulas for the evaluation of gravitational gradients generated by the Earth's upper mass components are presented in the manuscript. The spectral approach allows for numerical evaluation of global gravitational gradient fields that can be used to constrain gravitational gradients either synthesised from global gravitational models or directly measured by the spaceborne gradiometer on board the GOCE satellite mission. Gravitational gradients generated by static atmospheric, topographic and continental ice masses are evaluated numerically based on available global models of the Earth's topography, bathymetry and continental ice sheets. CRUST2.0 data are then applied for the numerical evaluation of gravitational gradients generated by mass density contrasts within soft and hard sediments and the upper, middle and lower crust layers. Combined gravitational gradients are compared to disturbing gravitational gradients derived from a global gravitational model and an idealised Earth model represented by the geocentric homogeneous biaxial ellipsoid GRS80. The methodology could be used for improved modelling of the Earth's inner structure.
Ren, Hai-Sheng; Ming, Mei-Jun; Ma, Jian-Yi; Li, Xiang-Yuan
2013-08-22
Within the framework of constrained density functional theory (CDFT), the diabatic or charge-localized states of electron transfer (ET) have been constructed. Based on the diabatic states, the inner reorganization energy λ_in has been directly calculated. For the solvent reorganization energy λ_s, a novel and reasonable nonequilibrium solvation model is established by introducing a constrained equilibrium manipulation, and a new expression of λ_s has been formulated. It is found that λ_s is actually the cost of maintaining the residual polarization, which equilibrates with the extra electric field. On the basis of diabatic states constructed by CDFT, a numerical algorithm using the new formulations with the dielectric polarizable continuum model (D-PCM) has been implemented. As typical test cases, self-exchange ET reactions between tetracyanoethylene (TCNE) and tetrathiafulvalene (TTF) and their corresponding ionic radicals in acetonitrile are investigated. The calculated reorganization energies λ are 7293 cm^-1 for the TCNE/TCNE(-) and 5939 cm^-1 for the TTF/TTF(+) reactions, agreeing well with the available experimental results of 7250 cm^-1 and 5810 cm^-1, respectively.
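For readers more used to electron-volts, the quoted reorganization energies convert as follows (1 cm^-1 is approximately 1.2398e-4 eV); this is only a unit-conversion check, not part of the paper's algorithm.

    # Convert the quoted reorganization energies from wavenumbers to eV.
    CM1_TO_EV = 1.2398e-4
    for label, wavenumber in [("TCNE/TCNE- calc", 7293), ("TCNE/TCNE- expt", 7250),
                              ("TTF/TTF+  calc", 5939), ("TTF/TTF+  expt", 5810)]:
        print(label, round(wavenumber * CM1_TO_EV, 3), "eV")   # roughly 0.90 eV and 0.72-0.74 eV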
Numerical study of a matrix-free trust-region SQP method for equality constrained optimization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heinkenschloss, Matthias; Ridzal, Denis; Aguilo, Miguel Antonio
2011-12-01
This is a companion publication to the paper 'A Matrix-Free Trust-Region SQP Algorithm for Equality Constrained Optimization' [11]. In [11], we develop and analyze a trust-region sequential quadratic programming (SQP) method that supports the matrix-free (iterative, inexact) solution of linear systems. In this report, we document the numerical behavior of the algorithm applied to a variety of equality constrained optimization problems, with constraints given by partial differential equations (PDEs).
NASA Astrophysics Data System (ADS)
Chandran, A.; Schulz, Marc D.; Burnell, F. J.
2016-12-01
Many phases of matter, including superconductors, fractional quantum Hall fluids, and spin liquids, are described by gauge theories with constrained Hilbert spaces. However, thermalization and the applicability of quantum statistical mechanics has primarily been studied in unconstrained Hilbert spaces. In this paper, we investigate whether constrained Hilbert spaces permit local thermalization. Specifically, we explore whether the eigenstate thermalization hypothesis (ETH) holds in a pinned Fibonacci anyon chain, which serves as a representative case study. We first establish that the constrained Hilbert space admits a notion of locality by showing that the influence of a measurement decays exponentially in space. This suggests that the constraints are no impediment to thermalization. We then provide numerical evidence that ETH holds for the diagonal and off-diagonal matrix elements of various local observables in a generic disorder-free nonintegrable model. We also find that certain nonlocal observables obey ETH.
A MULTIPLE GRID ALGORITHM FOR ONE-DIMENSIONAL TRANSIENT OPEN CHANNEL FLOWS. (R825200)
Numerical modeling of open channel flows with shocks using explicit finite difference schemes is constrained by the choice of time step, which is limited by the CFL stability criterion. To overcome this limitation, in this work we introduce the application of a multiple grid al...
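The CFL restriction referred to above limits the explicit time step to dt <= C * dx / max(|u| + sqrt(g*h)) with Courant number C <= 1. A minimal sketch with illustrative channel values (not from the study) follows.

    import numpy as np

    def cfl_time_step(u, h, dx, cfl=0.9, g=9.81):
        """Largest stable explicit time step for 1-D open-channel/shallow-water flow.

        The fastest wave speed is |u| + sqrt(g*h); the CFL condition requires
        dt <= cfl * dx / max_wave_speed with cfl <= 1.
        """
        wave_speed = np.abs(u) + np.sqrt(g * h)
        return cfl * dx / np.max(wave_speed)

    # Illustrative state: 1 km reach, 100 cells, ~1 m/s flow over ~2 m depth.
    u = np.full(100, 1.0)
    h = np.full(100, 2.0)
    print(cfl_time_step(u, h, dx=10.0))   # ~1.7 s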
Merging Disparate Data and Numerical Model Results for Dynamically Constrained Nowcasts
1999-09-30
University of Delaware, Newark, DE 19716. LONG-TERM GOALS: The long-term goal of our research is to quantify submesoscale dynamical processes and understand their interactions
NASA Astrophysics Data System (ADS)
Heldmann, Jennifer L.; Lamb, Justin; Asturias, Daniel; Colaprete, Anthony; Goldstein, David B.; Trafton, Laurence M.; Varghese, Philip L.
2015-07-01
The LCROSS (Lunar Crater Observation and Sensing Satellite) impacted the Cabeus crater near the lunar South Pole on 9 October 2009 and created an impact plume that was observed by the LCROSS Shepherding Spacecraft. Here we analyze data from the ultraviolet-visible spectrometer and visible context camera aboard the spacecraft. We use these data to constrain a numerical model to understand the physical evolution of the resultant plume. The UV-visible light curve peaks in brightness 18 s after impact and then decreases in radiance but never returns to the pre-impact radiance value for the ∼4 min of observation by the Shepherding Spacecraft. The blue:red spectral ratio increases in the first 10 s, decreases over the following 50 s, remains constant for approximately 150 s, and then begins to increase again ∼180 s after impact. Constraining the modeling results with spacecraft observations, we conclude that lofted dust grains remained suspended above the lunar surface for the entire 250 s of observation after impact. The impact plume was composed of both a high angle spike and low angle plume component. Numerical modeling is used to evaluate the relative effects of various plume parameters to further constrain the plume properties when compared with the observational data. Dust particle sizes lofted above the lunar surface were micron to sub-micron in size. Water ice particles were also contained within the ejecta cloud and simultaneously photo-dissociated and sublimated after reaching sunlight.
Capacity-constrained traffic assignment in networks with residual queues
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lam, W.H.K.; Zhang, Y.
2000-04-01
This paper proposes a capacity-constrained traffic assignment model for strategic transport planning in which the steady-state user equilibrium principle is extended to road networks with residual queues. Therefore, the road-exit capacity and the queuing effects can be incorporated into the strategic transport model for traffic forecasting. The proposed model is applicable to congested networks, particularly when the traffic demand exceeds the capacity of the network during the peak period. An efficient solution method is proposed for solving the steady-state traffic assignment problem with residual queues. A simple numerical example is then employed to demonstrate the application of the proposed model and solution method, while an example of a medium-sized arterial highway network in Sioux Falls, South Dakota, is used to test the applicability of the proposed solution to real problems.
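As a minimal illustration of the residual-queue idea (not the paper's assignment algorithm), when link demand exceeds exit capacity over a peak period the unserved flow accumulates as a queue, and the extra delay is the queue length divided by the exit capacity; the values below are made up.

    def residual_queue_delay(demand, capacity, period_h, free_flow_time_h):
        """Travel time on an over-saturated link with a point (vertical) residual queue.

        demand and capacity in veh/h; the queue at the end of the peak period is
        (demand - capacity) * period_h vehicles, and the extra delay for a vehicle
        joining at the end of the period is queue / capacity hours.
        """
        queue = max(0.0, (demand - capacity) * period_h)
        return free_flow_time_h + queue / capacity

    # 2-hour peak, 1800 veh/h demand on a 1500 veh/h link, 6-minute free-flow time.
    print(residual_queue_delay(1800.0, 1500.0, 2.0, 0.1))   # 0.1 + 600/1500 = 0.5 h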
NASA Astrophysics Data System (ADS)
Valdes-Parada, F. J.; Ostvar, S.; Wood, B. D.; Miller, C. T.
2017-12-01
Modeling of hierarchical systems such as porous media can be performed by different approaches that bridge microscale physics to the macroscale. Among the several alternatives available in the literature, the thermodynamically constrained averaging theory (TCAT) has emerged as a robust modeling approach that provides macroscale models that are consistent across scales. For specific closure relation forms, TCAT models are expressed in terms of parameters that depend upon the physical system under study. These parameters are usually obtained from inverse modeling based upon either experimental data or direct numerical simulation at the pore scale. Other upscaling approaches, such as the method of volume averaging, involve an a priori scheme for parameter estimation for certain microscale and transport conditions. In this work, we show how such a predictive scheme can be implemented in TCAT by studying the simple problem of single-phase passive diffusion in rigid and homogeneous porous media. The components of the effective diffusivity tensor are predicted for several porous media by solving ancillary boundary-value problems in periodic unit cells. The results are validated through a comparison with data from direct numerical simulation. This extension of TCAT constitutes a useful advance for certain classes of problems amenable to this estimation approach.
Evaluation of Proteus as a Tool for the Rapid Development of Models of Hydrologic Systems
NASA Astrophysics Data System (ADS)
Weigand, T. M.; Farthing, M. W.; Kees, C. E.; Miller, C. T.
2013-12-01
Models of modern hydrologic systems can be complex and involve a variety of operators with varying character. The goal is to implement approximations of such models that are both efficient for the developer and computationally efficient, which is a set of naturally competing objectives. Proteus is a Python-based toolbox that supports prototyping of model formulations as well as a wide variety of modern numerical methods and parallel computing. We used Proteus to develop numerical approximations for three models: Richards' equation, a brine flow model derived using the Thermodynamically Constrained Averaging Theory (TCAT), and a multiphase TCAT-based tumor growth model. For Richards' equation, we investigated discontinuous Galerkin solutions with higher order time integration based on the backward difference formulas. The TCAT brine flow model was implemented using Proteus and a variety of numerical methods were compared to hand coded solutions. Finally, an existing tumor growth model was implemented in Proteus to introduce more advanced numerics and allow the code to be run in parallel. From these three example models, Proteus was found to be an attractive open-source option for rapidly developing high quality code for solving existing and evolving computational science models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Tong; Xue, Li; Zhao, Xiao-Hong
Black holes (BHs) hide themselves behind various astronomical phenomena and their properties, i.e., mass and spin, are usually difficult to constrain. One leading candidate for the central engine model of gamma-ray bursts (GRBs) invokes a stellar mass BH and a neutrino-dominated accretion flow (NDAF), with the relativistic jet launched due to neutrino-anti-neutrino annihilations. Such a model gives rise to a matter-dominated fireball, and is suitable to interpret GRBs with a dominant thermal component with a photospheric origin. We propose a method to constrain BH mass and spin within the framework of this model and apply the method to the thermally dominant GRB 101219B, whose initial jet launching radius, r_0, is constrained from the data. Using our numerical model of NDAF jets, we estimate the following constraints on the central BH: mass M_BH ∼ 5–9 M_⊙, spin parameter a_* ≳ 0.6, and disk mass 3 M_⊙ ≲ M_disk ≲ 4 M_⊙. Our results also suggest that the NDAF model is a competitive candidate for the central engine of GRBs with a strong thermal component.
Dynamics of non-holonomic systems with stochastic transport
NASA Astrophysics Data System (ADS)
Holm, D. D.; Putkaradze, V.
2018-01-01
This paper formulates a variational approach for treating observational uncertainty and/or computational model errors as stochastic transport in dynamical systems governed by action principles under non-holonomic constraints. For this purpose, we derive, analyse and numerically study the example of an unbalanced spherical ball rolling under gravity along a stochastic path. Our approach uses the Hamilton-Pontryagin variational principle, constrained by a stochastic rolling condition, which we show is equivalent to the corresponding stochastic Lagrange-d'Alembert principle. In the example of the rolling ball, the stochasticity represents uncertainty in the observation and/or error in the computational simulation of the angular velocity of rolling. The influence of the stochasticity on the deterministically conserved quantities is investigated both analytically and numerically. Our approach applies to a wide variety of stochastic, non-holonomically constrained systems, because it preserves the mathematical properties inherited from the variational principle.
Boundary control for a constrained two-link rigid-flexible manipulator with prescribed performance
NASA Astrophysics Data System (ADS)
Cao, Fangfei; Liu, Jinkun
2018-05-01
In this paper, we consider a boundary control problem for a constrained two-link rigid-flexible manipulator. The nonlinear system is described by a hybrid ordinary differential equation-partial differential equation (ODE-PDE) dynamic model. Based on the coupled ODE-PDE model, boundary control is proposed to regulate the joint positions and eliminate the elastic vibration simultaneously. With the help of prescribed performance functions, the tracking error can converge to an arbitrarily small residual set, and the convergence rate is no less than a certain pre-specified value. Asymptotic stability of the closed-loop system is rigorously proved by LaSalle's Invariance Principle extended to infinite-dimensional systems. Numerical simulations are provided to demonstrate the effectiveness of the proposed controller.
NASA Astrophysics Data System (ADS)
SUN, D.; TONG, L.
2002-05-01
A detailed model for beams with partially debonded active constrained layer damping (ACLD) treatment is presented. In this model, the transverse displacement of the constraining layer is considered to be non-identical to that of the host structure. In the perfectly bonded region, the viscoelastic core is modelled to carry both peel and shear stresses, while in the debonded area, it is assumed that no peel or shear stresses are transferred between the host beam and the constraining layer. The adhesive layer between the piezoelectric sensor and the host beam is also considered in this model. For active control, positive position feedback control is employed to control the first mode of the beam. Based on this model, the incompatibility of the transverse displacements of the active constraining layer and the host beam is investigated. The passive and active damping behaviours of the ACLD patch with different thicknesses, locations and lengths are examined. Moreover, the effects of debonding of the damping layer on both passive and active control are examined via a simulation example. The results show that the incompatibility of the transverse displacements is significant in the regions near the ends of the ACLD patch, especially for the higher-order vibration modes. It is found that a thinner damping layer may lead to larger shear strain and consequently larger passive and active damping. In addition to the thickness of the damping layer, its length and location are also key factors in the hybrid control. The numerical results show that edge debonding can reduce both passive and active damping, and that the hybrid damping may be more sensitive to debonding of the damping layer than the passive damping.
Merging Disparate Data and Numerical Model Results for Dynamically Constrained Nowcasts
2000-09-30
LONG-TERM GOALS: Our long-term goal is to quantify submesoscale dynamical processes in the ocean so that we can better understand
A Measure Approximation for Distributionally Robust PDE-Constrained Optimization Problems
Kouri, Drew Philip
2017-12-19
In numerous applications, scientists and engineers acquire varied forms of data that partially characterize the inputs to an underlying physical system. These data are then used to inform decisions such as controls and designs. Consequently, it is critical that the resulting control or design is robust to the inherent uncertainties associated with the unknown probabilistic characterization of the model inputs. In this work, we consider optimal control and design problems constrained by partial differential equations with uncertain inputs. We do not assume a known probabilistic model for the inputs, but rather formulate the problem as a distributionally robust optimization problem in which the outer minimization problem determines the control or design, while the inner maximization problem determines the worst-case probability measure that matches desired characteristics of the data. We analyze the inner maximization problem in the space of measures and introduce a novel measure approximation technique, based on the approximation of continuous functions, to discretize the unknown probability measure. Finally, we prove consistency of our approximated min-max problem and conclude with numerical results.
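The min-max structure described above can be illustrated with a toy discrete analogue: the inner problem maximizes expected cost over probability vectors whose mean matches a data-derived value within a tolerance, and the outer problem picks the decision that minimizes this worst case. The quadratic cost, scenario grid, and tolerance below are assumptions for illustration, not the paper's measure-approximation scheme.

```python
import numpy as np
from scipy.optimize import linprog, minimize_scalar

# Illustrative scenario set for the uncertain input (assumed values).
xi = np.array([0.5, 1.0, 1.5, 2.0, 2.5])   # discretized uncertain parameter
m0, delta = 1.5, 0.3                        # data-derived mean and allowed mismatch

def worst_case_cost(z):
    """Inner maximization: worst probability vector on the simplex whose
    mean matches m0 to within delta."""
    f = (z - xi) ** 2                       # per-scenario cost of decision z
    # linprog minimizes, so maximize f @ p by minimizing -f @ p.
    A_ub = np.vstack([xi, -xi])             # |xi @ p - m0| <= delta
    b_ub = np.array([m0 + delta, -(m0 - delta)])
    A_eq, b_eq = np.ones((1, xi.size)), np.array([1.0])
    res = linprog(-f, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, 1)] * xi.size)
    return -res.fun

# Outer minimization over the scalar control/design variable z.
opt = minimize_scalar(worst_case_cost, bounds=(0.0, 3.0), method="bounded")
print(f"robust decision z* = {opt.x:.3f}, worst-case cost = {opt.fun:.3f}")
```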
Shock interaction with deformable particles using a constrained interface reinitialization scheme
NASA Astrophysics Data System (ADS)
Sridharan, P.; Jackson, T. L.; Zhang, J.; Balachandar, S.; Thakur, S.
2016-02-01
In this paper, we present axisymmetric numerical simulations of shock propagation in nitromethane over an aluminum particle for post-shock pressures up to 10 GPa. We use the Mie-Gruneisen equation of state to describe both the medium and the particle. The numerical method is a finite-volume based solver on a Cartesian grid, that allows for multi-material interfaces and shocks, and uses a novel constrained reinitialization scheme to precisely preserve particle mass and volume. We compute the unsteady inviscid drag coefficient as a function of time, and show that when normalized by post-shock conditions, the maximum drag coefficient decreases with increasing post-shock pressure. We also compute the mass-averaged particle pressure and show that the observed oscillations inside the particle are on the particle-acoustic time scale. Finally, we present simplified point-particle models that can be used for macroscale simulations. In the Appendix, we extend the isothermal or isentropic assumption concerning the point-force models to non-ideal equations of state, thus justifying their use for the current problem.
Modelling the 21-cm Signal from the Epoch of Reionization and Cosmic Dawn
NASA Astrophysics Data System (ADS)
Choudhury, T. Roy; Datta, Kanan; Majumdar, Suman; Ghara, Raghunath; Paranjape, Aseem; Mondal, Rajesh; Bharadwaj, Somnath; Samui, Saumyadip
2016-12-01
Studying the cosmic dawn and the epoch of reionization through the redshifted 21-cm line is among the major science goals of the SKA1. Their significance lies in the fact that they are closely related to the very first stars in the Universe. Interpreting the upcoming data will require detailed modelling of the relevant physical processes. In this article, we focus on the theoretical models of reionization that have been worked out by various groups working in India with the upcoming SKA in mind. These models include purely analytical and semi-numerical calculations as well as fully numerical radiative transfer simulations. The predictions of the 21-cm signal from these models will be useful in constraining the properties of the early galaxies using the SKA data.
Modeling Real-Time Applications with Reusable Design Patterns
NASA Astrophysics Data System (ADS)
Rekhis, Saoussen; Bouassida, Nadia; Bouaziz, Rafik
Real-Time (RT) applications, which manipulate large volumes of data, need to be managed with RT databases that deal with time-constrained data and time-constrained transactions. In spite of their numerous advantages, RT database development remains a complex task, since developers must study many design issues related to the RT domain. In this paper, we tackle this problem by proposing RT design patterns that allow the modeling of structural and behavioral aspects of RT databases. We show how RT design patterns can provide design assistance through architectural reuse for recurring design problems. In addition, we present a UML profile that represents the patterns and further facilitates their reuse. This profile proposes, on the one hand, UML extensions for modeling the variability of patterns in the RT context and, on the other hand, extensions inspired by the MARTE (Modeling and Analysis of Real-Time Embedded systems) profile.
Diagnostic Simulations of the Lunar Exosphere using Coma and Tail
NASA Astrophysics Data System (ADS)
Lee, Dong Wook; Kim, Sang J.
2017-10-01
The characteristics of the lunar exosphere can be constrained by comparing simulated models with observational data of the coma and tail (Lee et al., JGR, 2011); thus far, a few independent approaches to this issue have been presented in the literature. Since there are two different observational constraints for the lunar exosphere, it is interesting to find the best exospheric model that can account for the observed characteristics of both the coma and the tail. Considering various initial conditions of different sources and space weather, we present preliminary time-dependent simulations between the initial and final stages of the development of the lunar tail. Based on an updated 3-D model, we plan to conduct numerous simulations to constrain the best model parameters from the coma images obtained from coronagraph observations supported by a NASA monitoring program (Morgan, Killen, and Potter, AGU, 2015) and from future tail data.
NASA Astrophysics Data System (ADS)
Zhang, Ju; Jackson, Thomas; Balachandar, Sivaramakrishnan
2015-06-01
We will develop a computational model built upon our verified and validated in-house SDT code to provide an improved description of multiphase blast wave dynamics in which solid particles are considered deformable and can even undergo phase transitions. Our SDT computational framework includes a reactive compressible flow solver with sophisticated material interface tracking capability and realistic equations of state (EOS), such as the Mie-Gruneisen EOS, for multiphase flow modeling. The behavior of the diffuse interface models of Shukla et al. (2010) and Tiwari et al. (2013) at different shock impedance ratios will first be examined and characterized. The recent constrained interface reinitialization of Shukla (2014) will then be developed to examine whether the conservation properties can be improved. This work was supported in part by the U.S. Department of Energy and by the Defense Threat Reduction Agency.
NASA Astrophysics Data System (ADS)
Shivakumar, J.; Ashok, M. H.; Khadakbhavi, Vishwanath; Pujari, Sanjay; Nandurkar, Santosh
2018-02-01
The present work focuses on geometrically nonlinear transient analysis of laminated smart composite plates integrated with patches of active fiber composites (AFC), using active constrained layer damping (ACLD) as the distributed actuator. The analysis is carried out using a generalised energy-based finite element model. The coupled electromechanical finite element model is derived using Von Karman-type nonlinear strain-displacement relations and first-order shear deformation theory (FSDT). Eight-node isoparametric serendipity elements are used for discretization of the overall plate integrated with the AFC patch material. The viscoelastic constrained layer is modelled using the GHM method. The numerical results show the improvement in the active damping characteristics of the laminated composite plates over passive damping for suppressing geometrically nonlinear transient vibrations of laminated composite plates with AFC as the patch material.
NASA Astrophysics Data System (ADS)
Lim, Yeunhwan; Holt, Jeremy W.
2017-06-01
We investigate the structure of neutron star crusts, including the crust-core boundary, based on new Skyrme mean field models constrained by the bulk-matter equation of state from chiral effective field theory and the ground-state energies of doubly magic nuclei. Nuclear pasta phases are studied using both the liquid drop model and the Thomas-Fermi approximation. We compare the energy per nucleon for each geometry (spherical nuclei, cylindrical nuclei, nuclear slabs, cylindrical holes, and spherical holes) to obtain the ground state phase as a function of density. We find that the size of the Wigner-Seitz cell depends strongly on the model parameters, especially the coefficients of the density gradient interaction terms. We also employ the thermodynamic instability method to check the validity of the numerical solutions based on energy comparisons.
Local Infrasound Variability Related to In Situ Atmospheric Observation
NASA Astrophysics Data System (ADS)
Kim, Keehoon; Rodgers, Arthur; Seastrand, Douglas
2018-04-01
Local infrasound is widely used to constrain source parameters of near-surface events (e.g., chemical explosions and volcanic eruptions). While atmospheric conditions are critical to infrasound propagation and source parameter inversion, local atmospheric variability is often ignored by assuming a homogeneous atmosphere, and its impact on source inversion uncertainty has not been accounted for owing to the lack of a quantitative understanding of infrasound variability. We investigate atmospheric impacts on local infrasound propagation through repeated explosion experiments with a dense acoustic network and in situ atmospheric measurements. We perform full 3-D waveform simulations with local atmospheric data and a numerical weather forecast model to quantify atmosphere-dependent infrasound variability and to address the advantages and limitations of local weather data and numerical weather models for sound propagation simulation. Numerical simulations with stochastic atmosphere models also show a non-negligible influence of atmospheric heterogeneity on infrasound amplitude, suggesting an important role for local turbulence.
A detailed model for simulation of catchment scale subsurface hydrologic processes
NASA Technical Reports Server (NTRS)
Paniconi, Claudio; Wood, Eric F.
1993-01-01
A catchment scale numerical model is developed based on the three-dimensional transient Richards equation describing fluid flow in variably saturated porous media. The model is designed to take advantage of digital elevation data bases and of information extracted from these data bases by topographic analysis. The practical application of the model is demonstrated in simulations of a small subcatchment of the Konza Prairie reserve near Manhattan, Kansas. In a preliminary investigation of computational issues related to model resolution, we obtain satisfactory numerical results using large aspect ratios, suggesting that horizontal grid dimensions may not be unreasonably constrained by the typically much smaller vertical length scale of a catchment and by vertical discretization requirements. Additional tests are needed to examine the effects of numerical constraints and parameter heterogeneity in determining acceptable grid aspect ratios. In other simulations we attempt to match the observed streamflow response of the catchment, and we point out the small contribution of the streamflow component to the overall water balance of the catchment.
Probabilistic numerical methods for PDE-constrained Bayesian inverse problems
NASA Astrophysics Data System (ADS)
Cockayne, Jon; Oates, Chris; Sullivan, Tim; Girolami, Mark
2017-06-01
This paper develops meshless methods for probabilistically describing discretisation error in the numerical solution of partial differential equations. This construction enables the solution of Bayesian inverse problems while accounting for the impact of the discretisation of the forward problem. In particular, this drives statistical inferences to be more conservative in the presence of significant solver error. Theoretical results are presented describing rates of convergence for the posteriors in both the forward and inverse problems. This method is tested on a challenging inverse problem with a nonlinear forward model.
ERIC Educational Resources Information Center
Mahavier, W. Ted
2002-01-01
Describes a two-semester numerical methods course that serves as a research experience for undergraduate students without requiring external funding or the modification of current curriculum. Uses an engineering problem to introduce students to constrained optimization via a variation of the traditional isoperimetric problem of finding the curve…
Liu, Changxin; Gao, Jian; Li, Huiping; Xu, Demin
2018-05-01
Event-triggered control is a promising approach for cyber-physical systems such as networked control systems, multiagent systems, and large-scale intelligent systems. In this paper, we propose an event-triggered model predictive control (MPC) scheme for constrained continuous-time nonlinear systems with bounded disturbances. First, a time-varying tightened state constraint is computed to achieve robust constraint satisfaction, and an event-triggered scheduling strategy is designed in the framework of dual-mode MPC. Second, sufficient conditions for ensuring feasibility and closed-loop robust stability are developed. We show that robust stability can be ensured and communication load reduced with the proposed MPC algorithm. Finally, numerical simulations and comparison studies are performed to verify the theoretical results.
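A minimal sketch of the event-triggering idea, assuming a discrete-time double integrator with input bounds: the finite-horizon optimal control problem is re-solved only when the measured state drifts from the last open-loop prediction by more than a threshold. The plant, disturbance level, and trigger threshold are illustrative assumptions; the paper's scheme additionally tightens state constraints and uses a dual-mode formulation.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative plant: discrete-time double integrator with bounded input.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([0.005, 0.1])
N, u_max, n_steps = 15, 1.0, 60
Q, R = np.diag([10.0, 1.0]), 0.1
rng = np.random.default_rng(0)

def predict(x0, u_seq):
    """Open-loop state prediction under the nominal (disturbance-free) model."""
    xs, x = [x0], x0
    for u in u_seq:
        x = A @ x + B * u
        xs.append(x)
    return np.array(xs)

def solve_mpc(x0):
    """Finite-horizon, input-constrained optimal control via L-BFGS-B."""
    def cost(u_seq):
        xs = predict(x0, u_seq)
        return sum(xk @ Q @ xk for xk in xs) + R * np.sum(u_seq ** 2)
    res = minimize(cost, np.zeros(N), bounds=[(-u_max, u_max)] * N,
                   method="L-BFGS-B")
    return res.x

x = np.array([2.0, 0.0])
u_plan, k_plan, solves = solve_mpc(x), 0, 1
x_pred = predict(x, u_plan)
for _ in range(n_steps):
    # Event trigger: re-optimize only if the state has drifted too far from
    # the last prediction (the 0.05 threshold is an assumed design choice).
    if k_plan >= N or np.linalg.norm(x - x_pred[k_plan]) > 0.05:
        u_plan, k_plan, solves = solve_mpc(x), 0, solves + 1
        x_pred = predict(x, u_plan)
    x = A @ x + B * u_plan[k_plan] + rng.normal(0.0, 0.005, size=2)
    k_plan += 1

print(f"final state: {np.round(x, 3)}, MPC problems solved: {solves}/{n_steps}")
```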
SMA Hybrid Composites for Dynamic Response Abatement Applications
NASA Technical Reports Server (NTRS)
Turner, Travis L.
2000-01-01
A recently developed constitutive model and a finite element formulation for predicting the thermomechanical response of Shape Memory Alloy (SMA) hybrid composite (SMAHC) structures are briefly described. Attention is focused on constrained recovery behavior in this study, but the constitutive formulation is also capable of modeling restrained or free recovery. Numerical results are shown for glass/epoxy panel specimens with embedded Nitinol actuators subjected to thermal and acoustic loads. Control of thermal buckling, random response, sonic fatigue, and transmission loss is demonstrated and compared to conventional approaches, including the addition of conventional composite layers and a constrained layer damping treatment. Embedded SMA actuators are shown to be significantly more effective in dynamic response abatement applications than the conventional approaches and are attractive for combination with other passive and/or active approaches.
NASA Astrophysics Data System (ADS)
Dahdouh, S.; Varsier, N.; Nunez Ochoa, M. A.; Wiart, J.; Peyman, A.; Bloch, I.
2016-02-01
Numerical dosimetry studies require the development of accurate numerical 3D models of the human body. This paper proposes a novel method for building 3D heterogeneous young child models by combining results from a semi-automatic multi-organ segmentation algorithm with an anatomy deformation method. The data consist of 3D magnetic resonance images, which are first segmented to obtain a set of initial tissues. A deformation procedure guided by the segmentation results is then developed in order to obtain five young child models ranging in age from 5 to 37 months. By constraining the deformation of an older child model toward a younger one using the segmentation results, we ensure the anatomical realism of the models. Using the proposed framework, five models containing thirteen tissues are built. Three of these models are used in a prospective dosimetry study to analyze young children's exposure to radiofrequency electromagnetic fields. The results suggest a relationship between age and whole-body exposure. The results also highlight the need to specifically study and measure the dielectric properties of child tissues.
Evaluating Micrometeorological Estimates of Groundwater Discharge from Great Basin Desert Playas.
Jackson, Tracie R; Halford, Keith J; Gardner, Philip M
2018-03-06
Groundwater availability studies in the arid southwestern United States traditionally have assumed that groundwater discharge by evapotranspiration (ET_g) from desert playas is a significant component of the groundwater budget. However, desert playa ET_g rates are poorly constrained by Bowen ratio energy budget (BREB) and eddy-covariance (EC) micrometeorological measurement approaches. Best attempts by previous studies to constrain ET_g from desert playas have resulted in ET_g rates that are within the measurement error of micrometeorological approaches. This study uses numerical models to further constrain desert playa ET_g rates that are within the measurement error of BREB and EC approaches, and to evaluate the effect of hydraulic properties and salinity-based groundwater density contrasts on desert playa ET_g rates. Numerical models simulated ET_g rates from desert playas in Death Valley, California, and Dixie Valley, Nevada. Results indicate that actual ET_g rates from desert playas are significantly below the uncertainty thresholds of BREB- and EC-based micrometeorological measurements. Discharge from desert playas likely contributes less than 2% of total groundwater discharge from Dixie and Death Valleys, which suggests discharge from desert playas also is negligible in other basins. Simulation results also show that ET_g from desert playas primarily is limited by differences in hydraulic properties between alluvial fan and playa sediments and, to a lesser extent, by salinity-based groundwater density contrasts. Published 2018. This article is a U.S. Government work and is in the public domain in the USA.
Dynamic Parameters of the 2015 Nepal Gorkha Mw7.8 Earthquake Constrained by Multi-observations
NASA Astrophysics Data System (ADS)
Weng, H.; Yang, H.
2017-12-01
Dynamic rupture models can provide detailed insights into rupture physics that are needed to assess future seismic risk. Many studies have attempted to constrain the slip-weakening distance, an important parameter controlling the frictional behavior of rock, for several earthquakes based on dynamic models, kinematic models, and direct estimations from near-field ground motion. However, large uncertainties in the values of the slip-weakening distance remain, mostly because of the intrinsic trade-off between the slip-weakening distance and fault strength. Here we use a spontaneous dynamic rupture model to constrain the frictional parameters of the 25 April 2015 Mw7.8 Nepal earthquake, combining multiple seismic observations such as high-rate cGPS data, strong motion data, and kinematic source models. Through numerous tests we find that the trade-off patterns of final slip, rupture speed, static GPS ground displacements, and dynamic ground waveforms are quite different. Combining all the seismic constraints, we obtain a robust solution, without a substantial trade-off, of an average slip-weakening distance of 0.6 m, in contrast to a previous kinematic estimate of 5 m. To the best of our knowledge, this is the first time the slip-weakening distance on a seismogenic fault has been robustly determined from seismic observations. The well-constrained frictional parameters may be used in future dynamic models to assess seismic hazard, such as estimating the peak ground acceleration (PGA). A similar approach could also be applied to other great earthquakes, enabling broad estimation of dynamic parameters from a global perspective that can better reveal the intrinsic physics of earthquakes.
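For reference, the linear slip-weakening law whose critical distance the study constrains can be written in a few lines; only the 0.6 m value comes from the abstract, while the static and dynamic stress levels below are illustrative assumptions.

```python
import numpy as np

def slip_weakening_stress(slip, tau_s, tau_d, d_c):
    """Linear slip-weakening friction: fault strength drops from the static
    level tau_s to the dynamic level tau_d over the critical distance d_c."""
    slip = np.asarray(slip, dtype=float)
    return np.where(slip < d_c, tau_s - (tau_s - tau_d) * slip / d_c, tau_d)

tau_s, tau_d, d_c = 10.0e6, 4.0e6, 0.6          # Pa, Pa, m (stresses are assumed)
fracture_energy = 0.5 * (tau_s - tau_d) * d_c   # energy dissipated in weakening [J/m^2]
print(slip_weakening_stress(np.linspace(0.0, 1.2, 7), tau_s, tau_d, d_c) / 1e6)  # MPa
print(f"breakdown work ~ {fracture_energy / 1e6:.1f} MJ/m^2")
```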
NASA Astrophysics Data System (ADS)
Rudge, J. F.; Alisic Jewell, L.; Rhebergen, S.; Katz, R. F.; Wells, G. N.
2015-12-01
One of the fundamental components in any dynamical model of melt transport is the rheology of partially molten rock. This rheology is poorly understood, and one way in which a better understanding can be obtained is by comparing the results of laboratory deformation experiments to numerical models. Here we present a comparison between numerical models and the laboratory setup of Qi et al. 2013 (EPSL), where a cylinder of partially molten rock containing rigid spherical inclusions was placed under torsion. We have replicated this setup in a finite element model which solves the partial differential equations describing the mechanical process of compaction. These computationally-demanding 3D simulations are only possible due to the recent development of a new preconditioning method for the equations of magma dynamics. The experiments show a distinct pattern of melt-rich and melt-depleted regions around the inclusions. In our numerical models, the pattern of melt varies with key rheological parameters, such as the ratio of bulk to shear viscosity, and the porosity- and strain-rate-dependence of the shear viscosity. These observed melt patterns therefore have the potential to constrain rheological properties. While there are many similarities between the experiments and the numerical models, there are also important differences, which highlight the need for better models of the physics of two-phase mantle/magma dynamics. In particular, the laboratory experiments display more pervasive melt-rich bands than is seen in our numerics.
NASA Astrophysics Data System (ADS)
Chen, Miawjane; Yan, Shangyao; Wang, Sin-Siang; Liu, Chiu-Lan
2015-02-01
An effective project schedule is essential for enterprises to increase their efficiency of project execution, to maximize profit, and to minimize wastage of resources. Heuristic algorithms have been developed to efficiently solve the complicated multi-mode resource-constrained project scheduling problem with discounted cash flows (MRCPSPDCF) that characterize real problems. However, the solutions obtained in past studies have been approximate and are difficult to evaluate in terms of optimality. In this study, a generalized network flow model, embedded in a time-precedence network, is proposed to formulate the MRCPSPDCF with the payment at activity completion times. Mathematically, the model is formulated as an integer network flow problem with side constraints, which can be efficiently solved for optimality, using existing mathematical programming software. To evaluate the model performance, numerical tests are performed. The test results indicate that the model could be a useful planning tool for project scheduling in the real world.
NASA Astrophysics Data System (ADS)
Jackson, Thomas L.; Sridharan, Prashanth; Zhang, Ju; Balachandar, S.
2015-11-01
In this work we present axisymmetric numerical simulations of shock propagating in nitromethane over an aluminum particle for post-shock pressures up to 10 GPa. The numerical method is a finite-volume based solver on a Cartesian grid, which allows for multi-material interfaces and shocks. To preserve particle mass and volume, a novel constraint reinitialization scheme is introduced. We compute the unsteady drag coefficient as a function of post-shock pressure, and show that when normalized by post-shock conditions, the maximum drag coefficient decreases with increasing post-shock pressure. Using this information, we also present a simplified point-particle force model that can be used for mesoscale simulations.
NASA Astrophysics Data System (ADS)
Weigand, T. M.; Miller, C. T.; Dye, A. L.; Gray, W. G.; McClure, J. E.; Rybak, I.
2015-12-01
The thermodynamically constrained averaging theory (TCAT) has been used to formulate general classes of porous medium models, including new models for two-fluid-phase flow. The TCAT approach provides advantages that include a firm connection between the microscale, or pore scale, and the macroscale; a thermodynamically consistent basis; explicit inclusion of factors such as interfacial areas, contact angles, interfacial tension, and curvatures; and dynamics of interface movement and relaxation to an equilibrium state. In order to render the TCAT model solvable, certain closure relations are needed to relate fluid pressure, interfacial areas, curvatures, and relaxation rates. In this work, we formulate and solve a TCAT-based two-fluid-phase flow model. We detail the formulation of the model, which is a specific instance from a hierarchy of two-fluid-phase flow models that emerge from the theory. We show the closure problem that must be solved. Using recent results from high-resolution microscale simulations, we advance a set of closure relations that produce a closed model. Lastly, we solve the model using a locally conservative numerical scheme and compare the TCAT model to the traditional model.
Cenozoic sea level and the rise of modern rimmed atolls
Toomey, Michael; Ashton, Andrew; Raymo, Maureen E.; Perron, J. Taylor
2016-01-01
Sea-level records from atolls, potentially spanning the Cenozoic, have been largely overlooked, in part because the processes that control atoll form (reef accretion, carbonate dissolution, sediment transport, vertical motion) are complex and, for many islands, unconstrained on million-year timescales. Here we combine existing observations of atoll morphology and corelog stratigraphy from Enewetak Atoll with a numerical model to (1) constrain the relative rates of subsidence, dissolution and sedimentation that have shaped modern Pacific atolls and (2) construct a record of sea level over the past 8.5 million years. Both the stratigraphy from Enewetak Atoll (constrained by a subsidence rate of ~ 20 m/Myr) and our numerical modeling results suggest that low sea levels (50–125 m below present), and presumably bi-polar glaciations, occurred throughout much of the late Miocene, preceding the warmer climate of the Pliocene, when sea level was higher than present. Carbonate dissolution through the subsequent sea-level fall that accompanied the onset of large glacial cycles in the late Pliocene, along with rapid highstand constructional reef growth, likely drove development of the rimmed atoll morphology we see today.
NASA Astrophysics Data System (ADS)
Pawar, R.; Dash, Z.; Sakaki, T.; Plampin, M. R.; Lassen, R. N.; Illangasekare, T. H.; Zyvoloski, G.
2011-12-01
One of the concerns related to geologic CO2 sequestration is potential leakage of CO2 and its subsequent migration to shallow groundwater resources leading to geochemical impacts. Developing approaches to monitor CO2 migration in shallow aquifer and mitigate leakage impacts will require improving our understanding of gas phase formation and multi-phase flow subsequent to CO2 leakage in shallow aquifers. We are utilizing an integrated approach combining laboratory experiments and numerical simulations to characterize the multi-phase flow of CO2 in shallow aquifers. The laboratory experiments involve a series of highly controlled experiments in which CO2 dissolved water is injected in homogeneous and heterogeneous soil columns and tanks. The experimental results are used to study the effects of soil properties, temperature, pressure gradients and heterogeneities on gas formation and migration. We utilize the Finite Element Heat and Mass (FEHM) simulator (Zyvoloski et al, 2010) to numerically model the experimental results. The numerical models capture the physics of CO2 exsolution, multi-phase fluid flow as well as sand heterogeneity. Experimental observations of pressure, temperature and gas saturations are used to develop and constrain conceptual models for CO2 gas-phase formation and multi-phase CO2 flow in porous media. This talk will provide details of development of conceptual models based on experimental observation, development of numerical models for laboratory experiments and modelling results.
Results of the Workshop on Impact Cratering: Bridging the Gap Between Modeling and Observations
NASA Technical Reports Server (NTRS)
Herrick, Robert (Editor); Pierazzo, Elisabetta (Editor)
2003-01-01
On February 7-9, 2003, approximately 60 scientists gathered at the Lunar and Planetary Institute in Houston, Texas, for a workshop devoted to improving knowledge of the impact cratering process. We (co-conveners Elisabetta Pierazzo and Robert Herrick) both focus our research efforts on studying the impact cratering process, but the former specializes in numerical modeling while the latter draws inferences from observations of planetary craters. Significant work has been done in several key areas of impact studies over the past several years, but in many respects there seems to be a disconnect between groups employing different approaches, in particular modeling versus observations. The goal in convening this workshop was to bring these disparate groups together for an open dialogue, for the purposes of answering outstanding questions about the impact process and setting future research directions. We were successful in getting participation from most of the major research groups studying the impact process. Participants gathered from five continents, with research specialties ranging from numerical modeling to field geology, and from small-scale experimentation and geochemical sample analysis to seismology and remote sensing. With the assistance of the scientific advisory committee (Bevan French, Kevin Housen, Bill McKinnon, Jay Melosh, and Mike Zolensky), the workshop was divided into a series of sessions devoted to different aspects of the cratering process. Each session was opened by two invited talks, one given by a specialist in numerical or experimental modeling approaches, and the other by a specialist in geological, geophysical, or geochemical observations. Shorter invited and contributed talks filled out the sessions, which were then concluded with an open discussion period. All modelers were asked to address the question of what observations would better constrain their models, and all observationalists were asked to discuss how their observations could constrain modeling efforts.
Spatial Modeling of Geometallurgical Properties: Techniques and a Case Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deutsch, Jared L., E-mail: jdeutsch@ualberta.ca; Palmer, Kevin; Deutsch, Clayton V.
High-resolution spatial numerical models of metallurgical properties constrained by geological controls and more extensively by measured grade and geomechanical properties constitute an important part of geometallurgy. Geostatistical and other numerical techniques are adapted and developed to construct these high-resolution models accounting for all available data. Important issues that must be addressed include unequal sampling of the metallurgical properties versus grade assays, measurements at different scales, and complex nonlinear averaging of many metallurgical parameters. This paper establishes techniques to address each of these issues with the required implementation details and also demonstrates geometallurgical mineral deposit characterization for a copper–molybdenum deposit in South America. High-resolution models of grades and comminution indices are constructed, checked, and rigorously validated. The workflow demonstrated in this case study is applicable to many other deposit types.
NASA Astrophysics Data System (ADS)
Virgili-Llop, Josep; Zagaris, Costantinos; Park, Hyeongjun; Zappulla, Richard; Romano, Marcello
2018-03-01
An experimental campaign has been conducted to evaluate the performance of two different guidance and control algorithms on a multi-constrained docking maneuver. The evaluated algorithms are model predictive control (MPC) and inverse dynamics in the virtual domain (IDVD). A linear-quadratic approach with a quadratic programming solver is used for the MPC approach. A nonconvex optimization problem results from the IDVD approach, and a nonlinear programming solver is used. The docking scenario is constrained by the presence of a keep-out zone, an entry cone, and by the chaser's maximum actuation level. The performance metrics for the experiments and numerical simulations include the required control effort and time to dock. The experiments have been conducted in a ground-based air-bearing test bed, using spacecraft simulators that float over a granite table.
Supercritical flow characteristics at abrupt expansion structure
NASA Astrophysics Data System (ADS)
Lim, Jia Jun; Puay, How Tion; Zakaria, Nor Azazi
2017-10-01
When designing a hydraulic structure, a lateral expansion is often necessary as a cross-sectional transition for flow emerging at high velocity. If the abrupt expansion structure diverges too rapidly, a major part of the flow fails to follow the boundaries; if the transition is too gradual, structural material is wasted. A preliminary study on the flow structure near the expansion and its relationship with the flow parameters is carried out in this study. A two-dimensional depth-averaged model is developed to simulate the supercritical flow at the abrupt expansion structure. The Constrained Interpolation Profile (CIP) scheme, which is third-order accurate, is adopted in the numerical model. Results show that the flow structure and flow characteristics at the abrupt expansion can be reproduced numerically. The numerical results are validated against analytical solutions and show good agreement.
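To give a flavour of the advection solver, a one-dimensional CIP update for linear advection is sketched below, assuming a constant positive advection speed, periodic boundaries, and an illustrative Gaussian pulse; the study's depth-averaged shallow-water model involves considerably more terms.

```python
import numpy as np

# 1-D linear advection u_t + c u_x = 0 with the CIP (cubic interpolated
# profile) scheme; grid, CFL number, and initial profile are assumed values.
nx, L, c, cfl = 200, 1.0, 1.0, 0.5
dx = L / nx
dt = cfl * dx / c
x = np.arange(nx) * dx
u = np.exp(-200.0 * (x - 0.3) ** 2)          # initial pulse
g = np.gradient(u, dx)                        # spatial derivative carried by CIP

iup = np.roll(np.arange(nx), 1)               # upwind neighbour (c > 0), periodic
D, xi = -dx, -c * dt                          # upwind offset and departure point
for _ in range(int(0.4 / (c * dt))):          # advect the pulse a distance 0.4 L
    a = (g + g[iup]) / D**2 + 2.0 * (u - u[iup]) / D**3
    b = 3.0 * (u[iup] - u) / D**2 - (2.0 * g + g[iup]) / D
    u_new = a * xi**3 + b * xi**2 + g * xi + u
    g_new = 3.0 * a * xi**2 + 2.0 * b * xi + g
    u, g = u_new, g_new

print(f"peak after advection: {u.max():.3f} (exact 1.0)")
```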
NASA Astrophysics Data System (ADS)
Wang, Xiaoqiang; Ju, Lili; Du, Qiang
2016-07-01
The Willmore flow formulated by phase field dynamics based on the elastic bending energy model has been widely used to describe the shape transformation of biological lipid vesicles. In this paper, we develop and investigate some efficient and stable numerical methods for simulating the unconstrained phase field Willmore dynamics and the phase field Willmore dynamics with fixed volume and surface area constraints. The proposed methods can be high-order accurate and are completely explicit in nature, by combining exponential time differencing Runge-Kutta approximations for time integration with spectral discretizations for spatial operators on regular meshes. We also incorporate novel linear operator splitting techniques into the numerical schemes to improve the discrete energy stability. In order to avoid extra numerical instability brought by use of large penalty parameters in solving the constrained phase field Willmore dynamics problem, a modified augmented Lagrange multiplier approach is proposed and adopted. Various numerical experiments are performed to demonstrate accuracy and stability of the proposed methods.
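As a simplified illustration of the exponential-time-differencing/spectral machinery (not the authors' scheme), the sketch below applies a first-order ETD step with Fourier spectral derivatives to a 2-D Allen-Cahn phase-field equation, a lower-order stand-in for the Willmore dynamics; the grid size, interface width, and time step are assumed values.

```python
import numpy as np

# ETD1 time stepping with a Fourier spectral discretization for the
# Allen-Cahn equation u_t = eps^2 * Laplacian(u) + u - u^3 on a periodic box.
n, Lbox, eps, dt, nsteps = 128, 2.0 * np.pi, 0.05, 1e-4, 500
x = np.linspace(0.0, Lbox, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
# Initial condition: a circular interface (tanh profile of width ~ eps).
u = np.tanh((0.7 - np.sqrt((X - np.pi) ** 2 + (Y - np.pi) ** 2)) / (np.sqrt(2) * eps))

k = 2.0 * np.pi * np.fft.fftfreq(n, d=Lbox / n)
KX, KY = np.meshgrid(k, k, indexing="ij")
Lhat = -eps ** 2 * (KX ** 2 + KY ** 2)        # linear operator in Fourier space
E = np.exp(dt * Lhat)
# phi = (exp(dt*L) - 1)/L, with its limit dt at the zero mode.
phi = np.where(np.abs(Lhat) > 1e-14,
               (E - 1.0) / np.where(Lhat == 0.0, 1.0, Lhat), dt)

for _ in range(nsteps):
    nonlin = u - u ** 3                        # double-well reaction term (explicit)
    u_hat = E * np.fft.fft2(u) + phi * np.fft.fft2(nonlin)
    u = np.real(np.fft.ifft2(u_hat))

print(f"phase field range after relaxation: [{u.min():.2f}, {u.max():.2f}]")
```

The paper's higher-order ETD Runge-Kutta schemes and the constrained Willmore dynamics follow the same pattern, with the stiff linear operator treated exactly in Fourier space and the nonlinear terms handled explicitly.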
NASA Technical Reports Server (NTRS)
Lee, Jeh Won
1990-01-01
The objective is the theoretical analysis and experimental verification of the dynamics and control of a two-link flexible manipulator with a flexible parallel link mechanism. Nonlinear equations of motion of the lightweight manipulator are derived by the Lagrangian method in symbolic form to better understand the structure of the dynamic model. The resulting equations of motion have a structure which is useful for reducing the number of terms calculated, for checking correctness, or for extending the model to higher order. A manipulator with a flexible parallel link mechanism is a constrained dynamic system whose equations are sensitive to numerical integration error. This constrained system is solved using singular value decomposition of the constraint Jacobian matrix. Elastic motion is expressed by the assumed mode method. Mode shape functions of each link are chosen using load-interfaced component mode synthesis. The discrepancies between the analytical model and the experiment are explained using a simplified and a detailed finite element model.
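A generic numerical sketch of the constraint-handling step mentioned above: the SVD of the constraint Jacobian supplies a null-space basis, and projecting the dynamics onto it eliminates the constraint forces. The mass matrix, force vectors, and constraint used here are random placeholders, not the manipulator model.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 5, 2                                    # 5 coordinates, 2 constraints (assumed sizes)

# Illustrative dynamic quantities (placeholders, not the manipulator model):
Araw = rng.standard_normal((n, n))
M = Araw @ Araw.T + n * np.eye(n)              # symmetric positive-definite mass matrix
h = rng.standard_normal(n)                     # Coriolis/gravity-type terms
tau = rng.standard_normal(n)                   # applied generalized forces
J = rng.standard_normal((m, n))                # constraint Jacobian, J(q) qdd = b
b = rng.standard_normal(m)                     # b = -Jdot qdot for a holonomic constraint

# SVD-based null space of J: rows of Vt beyond the rank span ker(J).
U, s, Vt = np.linalg.svd(J)
r = int(np.sum(s > 1e-10 * s[0]))
N = Vt[r:].T                                   # (n, n - r) null-space basis
qdd_part = np.linalg.pinv(J) @ b               # particular solution of J qdd = b

# Project M qdd + h = tau + J^T lambda onto the null space (N^T J^T = 0),
# so the unknown constraint forces drop out of the reduced system.
lhs = N.T @ M @ N
rhs = N.T @ (tau - h) - N.T @ M @ qdd_part
u = np.linalg.solve(lhs, rhs)
qdd = qdd_part + N @ u

print("constraint residual J@qdd - b:", np.round(J @ qdd - b, 12))
```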
On constraining pilot point calibration with regularization in PEST
Fienen, M.N.; Muffels, C.T.; Hunt, R.J.
2009-01-01
Ground water model calibration has made great advances in recent years with practical tools such as PEST being instrumental for making the latest techniques available to practitioners. As models and calibration tools get more sophisticated, however, the power of these tools can be misapplied, resulting in poor parameter estimates and/or nonoptimally calibrated models that do not suit their intended purpose. Here, we focus on an increasingly common technique for calibrating highly parameterized numerical models - pilot point parameterization with Tikhonov regularization. Pilot points are a popular method for spatially parameterizing complex hydrogeologic systems; however, additional flexibility offered by pilot points can become problematic if not constrained by Tikhonov regularization. The objective of this work is to explain and illustrate the specific roles played by control variables in the PEST software for Tikhonov regularization applied to pilot points. A recent study encountered difficulties implementing this approach, but through examination of that analysis, insight into underlying sources of potential misapplication can be gained and some guidelines for overcoming them developed. © 2009 National Ground Water Association.
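A minimal sketch of the Tikhonov-regularized estimation problem that pilot-point calibration poses, assuming a linear forward model: the misfit to observations is traded off against departure from a preferred (homogeneous) parameter field. The sensitivity matrix, pilot-point count, and regularization weight below are illustrative; in PEST the weight is adjusted iteratively to honour a target measurement objective.

```python
import numpy as np

rng = np.random.default_rng(2)
n_obs, n_pp = 12, 30                           # far fewer observations than pilot points (assumed)

J = rng.standard_normal((n_obs, n_pp))         # sensitivity of heads to pilot-point log-K
m_true = np.sin(np.linspace(0.0, 3.0 * np.pi, n_pp))
d = J @ m_true + 0.05 * rng.standard_normal(n_obs)

m_prior = np.zeros(n_pp)                       # preferred (homogeneous) value
Lr = np.eye(n_pp)                              # regularization operator (identity here)
lam = 1.0                                      # regularization weight (assumed fixed)

# Tikhonov-regularized least squares:
#   minimize ||d - J m||^2 + lam^2 ||Lr (m - m_prior)||^2
lhs = J.T @ J + lam ** 2 * (Lr.T @ Lr)
rhs = J.T @ d + lam ** 2 * (Lr.T @ Lr) @ m_prior
m_est = np.linalg.solve(lhs, rhs)

print(f"data misfit: {np.linalg.norm(d - J @ m_est):.3f}, "
      f"departure from prior: {np.linalg.norm(m_est - m_prior):.3f}")
```

Without the regularization term the 12-by-30 system is underdetermined, which is exactly the flexibility problem the paper warns about.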
Improvements to Wire Bundle Thermal Modeling for Ampacity Determination
NASA Technical Reports Server (NTRS)
Rickman, Steve L.; Iannello, Christopher J.; Shariff, Khadijah
2017-01-01
Determining the current carrying capacity (ampacity) of wire bundles in aerospace vehicles is critical not only to safety but also to efficient design. Published standards provide guidance on determining wire bundle ampacity but offer little flexibility for configurations where wire bundles of mixed gauges and currents are employed with varying external insulation jacket surface properties. Thermal modeling has been employed to develop techniques to assist in ampacity determination for these complex configurations. Previous developments allowed analysis of wire bundle configurations but were constrained to configurations comprising fewer than 50 elements. Additionally, for vacuum analyses, configurations with very low emittance external jackets suffered from numerical instability in the solution. A new thermal modeler is presented that allows for larger configurations and is not constrained for low bundle infrared emissivity calculations. The formulation of key internal radiation and interface conductance parameters is discussed, including the effects of temperature and air pressure on wire-to-wire thermal conductance. Test cases comparing model-predicted ampacity with that calculated from standards documents are presented.
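For orientation, the sketch below shows the single-wire steady-state energy balance from which an ampacity follows once a temperature limit is fixed; the resistance, emissivity, and convection coefficient are assumed values, and a real bundle calculation must additionally handle wire-to-wire conductance and internal radiation as described above.

```python
import numpy as np

SIGMA = 5.670e-8                       # Stefan-Boltzmann constant [W/m^2 K^4]

def ampacity(T_rated, T_amb, R_per_m, d, eps, h):
    """Current at which Joule heating balances convective plus radiative loss
    for a single wire of diameter d at its rated temperature (steady state)."""
    area = np.pi * d                   # surface area per metre of wire [m^2/m]
    q_conv = h * area * (T_rated - T_amb)
    q_rad = eps * SIGMA * area * (T_rated ** 4 - T_amb ** 4)
    return np.sqrt((q_conv + q_rad) / R_per_m)

# Illustrative 20 AWG-like wire in still air (all values assumed).
I_max = ampacity(T_rated=473.0, T_amb=343.0, R_per_m=0.033,
                 d=1.6e-3, eps=0.8, h=10.0)
print(f"single-wire ampacity ~ {I_max:.1f} A")
```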
NASA Astrophysics Data System (ADS)
Neuberg, J. W.; Thomas, M.; Pascal, K.; Karl, S.
2012-04-01
Geophysical datasets are essential for guiding short-term forecasting of volcanic activity in particular. Key parameters are derived from these datasets and interpreted in different ways; however, the biggest impact on the interpretation is determined not by the range of parameters but by the parameterisation and the underlying conceptual model of the volcanic process. On the other hand, increasingly sophisticated geophysical models need to be constrained by monitoring data to transform a merely numerical exercise into a useful forecasting tool. We utilise datasets from the "big three" - seismology, deformation and gas emissions - to gain insight into the mutual relationship between conceptual models and constraining data. We show that, for example, the same seismic dataset can be interpreted with respect to a wide variety of different models with very different implications for forecasting. In turn, different data processing procedures lead to different outcomes even though they are based on the same conceptual model. Unsurprisingly, the most reliable interpretation will be achieved by employing multi-disciplinary models with overlapping constraints.
A NEW THREE-DIMENSIONAL SOLAR WIND MODEL IN SPHERICAL COORDINATES WITH A SIX-COMPONENT GRID
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Xueshang; Zhang, Man; Zhou, Yufen, E-mail: fengx@spaceweather.ac.cn
In this paper, we introduce a new three-dimensional magnetohydrodynamics numerical model to simulate the steady state ambient solar wind from the solar surface to 215 R_s or beyond, and the model adopts a splitting finite-volume scheme based on a six-component grid system in spherical coordinates. By splitting the magnetohydrodynamics equations into a fluid part and a magnetic part, a finite volume method can be used for the fluid part and a constrained-transport method able to maintain the divergence-free constraint on the magnetic field can be used for the magnetic induction part. This new model, second-order in space and time, is validated by modeling the large-scale structure of the solar wind. The numerical results for Carrington rotation 2064 show its ability to produce structured solar wind in agreement with observations.
Solar Corona Simulation Model With Positivity-preserving Property
NASA Astrophysics Data System (ADS)
Feng, X. S.
2015-12-01
Positivity preservation is one of the crucial problems in solar corona simulation. In numerical simulations of low plasma β regions, keeping the density and pressure positive is essential to obtaining a physically sound solution. In the present paper, we utilize the maximum-principle-preserving flux limiting technique to develop a class of second-order positivity-preserving Godunov finite volume HLL methods for the solar wind plasma MHD equations. Based on the underlying first-order positivity-preserving Lax-Friedrichs building block, our schemes, within the constrained transport (CT) and generalized Lagrange multiplier (GLM) framework, can achieve high-order accuracy, a discrete divergence-free condition, and positivity of the numerical solution simultaneously, without extra CFL constraints. Numerical results for four Carrington rotations during the declining, rising, minimum, and maximum solar activity phases are provided to demonstrate the performance of the proposed positivity-preserving method in modeling low plasma β regions.
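A much-simplified scalar illustration of the flux-limiting idea, not the paper's MHD scheme: an oscillatory higher-order flux is blended toward a first-order Lax-Friedrichs flux on any face of a cell whose update would drive the density negative. The grid, CFL number, and advection speed are assumed values.

```python
import numpy as np

# 1-D scalar advection of a pulse over a near-vacuum floor; positivity is
# enforced by damping the antidiffusive part of an unlimited Lax-Wendroff flux.
nx, c, cfl = 200, 1.0, 0.4
dx = 1.0 / nx
dt = cfl * dx / abs(c)
x = (np.arange(nx) + 0.5) * dx
rho = np.where(np.abs(x - 0.5) < 0.1, 1.0, 1e-12)    # pulse over a near-vacuum floor

def step(rho):
    rho_l, rho_r = rho, np.roll(rho, -1)             # states left/right of face i+1/2
    f_lf = 0.5 * c * (rho_l + rho_r) - 0.5 * abs(c) * (rho_r - rho_l)
    f_hi = 0.5 * c * (rho_l + rho_r) - 0.5 * c**2 * dt / dx * (rho_r - rho_l)
    theta = np.ones(nx)                              # per-face blending factor
    for _ in range(60):
        f = f_lf + theta * (f_hi - f_lf)
        rho_new = rho - dt / dx * (f - np.roll(f, 1))
        bad = rho_new < 0.0
        if not bad.any():
            return rho_new
        theta[bad] *= 0.5                            # damp antidiffusion on the right face
        theta[np.roll(bad, -1)] *= 0.5               # ... and on the left face of bad cells
    return rho - dt / dx * (f_lf - np.roll(f_lf, 1)) # full fallback to Lax-Friedrichs

for _ in range(200):
    rho = step(rho)
print(f"min density after 200 steps: {rho.min():.3e} (non-negative)")
```

The paper's schemes apply the same blending idea to HLL fluxes for the full MHD system while keeping the magnetic field divergence-free through constrained transport.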
Accurate Ray-tracing of Realistic Neutron Star Atmospheres for Constraining Their Parameters
NASA Astrophysics Data System (ADS)
Vincent, Frederic H.; Bejger, Michał; Różańska, Agata; Straub, Odele; Paumard, Thibaut; Fortin, Morgane; Madej, Jerzy; Majczyna, Agnieszka; Gourgoulhon, Eric; Haensel, Paweł; Zdunik, Leszek; Beldycki, Bartosz
2018-03-01
Thermally dominated X-ray spectra of neutron stars in quiescent, transient X-ray binaries and of neutron stars that undergo thermonuclear bursts are sensitive to mass and radius. The mass–radius relation of neutron stars depends on the equation of state (EoS) that governs their interior. Constraining this relation accurately is therefore of fundamental importance for understanding the nature of dense matter. In this context, we introduce a pipeline to calculate realistic model spectra of rotating neutron stars with hydrogen and helium atmospheres. An arbitrarily fast-rotating neutron star with a given EoS generates the spacetime in which the atmosphere emits radiation. We use the LORENE/NROTSTAR code to compute the spacetime numerically and the ATM24 code to solve the radiative transfer equations self-consistently. Emerging specific intensity spectra are then ray-traced through the neutron star's spacetime from the atmosphere to a distant observer with the GYOTO code. Here, we present and test our fully relativistic numerical pipeline. To discuss and illustrate the importance of realistic atmosphere models, we compare our model spectra to simpler models such as the commonly used isotropic color-corrected blackbody emission. We highlight the importance of considering realistic model-atmosphere spectra together with relativistic ray-tracing to obtain accurate predictions. We also emphasize the crucial impact of the star's rotation on the observables. Finally, we close a controversy that has been ongoing in the literature in recent years regarding the validity of the ATM24 code.
Damping in Space Constructions
NASA Astrophysics Data System (ADS)
de Vreugd, Jan; de Lange, Dorus; Winters, Jasper; Human, Jet; Kamphues, Fred; Tabak, Erik
2014-06-01
Monolithic structures are often used in optomechanical designs for space applications to achieve high dimensional stability and to prevent possible backlash and friction phenomena. The capacity of monolithic structures to dissipate mechanical energy is, however, limited due to their high Q-factor, which can result in high stresses during dynamic launch loads such as random vibration, sine sweeps and shock. To reduce the Q-factor in space applications, the effect of constrained layer damping (CLD) is investigated in this work. To predict the damping increase, the CLD effect is implemented locally at the supporting struts in an existing FE model of an optical instrument. Numerical simulations show that the effect of local damping treatment in this instrument could reduce the vibrational stresses by 30-50%. Validation experiments on a simple structure showed good agreement between measured and predicted damping properties. This paper presents material characterization, material modeling, numerical implementation of damping models in finite element code, numerical results on space hardware and the results of validation experiments.
NASA Astrophysics Data System (ADS)
Delgado, F.; Kubanek, J.; Anderson, K. R.; Lundgren, P.; Pritchard, M. E.
2017-12-01
The 2011-2012 eruption of Cordón Caulle volcano in Chile is the best scientifically observed rhyodacitic eruption and is thus a key place to understand the dynamics of these rare but powerful explosive rhyodacitic eruptions. Because the volatile phase controls both the temporal evolution of the eruption and the eruptive style, either explosive or effusive, it is important to constrain the physical parameters that drive these eruptions. The eruption began explosively and after two weeks evolved into a hybrid explosive - lava flow effusion whose volume-time evolution we constrain with a series of TanDEM-X Digital Elevation Models. Our data show the intrusion of a large-volume laccolith or cryptodome during the first 2.5 months of the eruption and lava flow effusion only afterwards, with a total volume of 1.4 km3. InSAR data from the ENVISAT and TerraSAR-X missions show more than 2 m of subsidence during the effusive eruption phase, produced by deflation of a finite spheroidal source at a depth of 5 km. In order to constrain the total H2O content of the magma, its crystal cargo, and the reservoir pressure drop, we numerically solve the coupled equations for a pressurized magma reservoir and magma conduit flow with time-dependent density, volatile exsolution, and viscosity, which we use to invert the InSAR and topographic data time series. We compare the best-fit model parameters with independent estimates of magma viscosity and total gas content measured from lava samples. Preliminary modeling shows that although it is not possible to model both the InSAR and the topographic data during the onset of the laccolith emplacement, it is possible to constrain the magma H2O and crystal contents to about 4 wt% and 30%, which agree well with published literature values.
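As a simpler stand-in for the spheroidal-source inversion described above, the classical Mogi point-source formula below maps a volume change at depth to vertical surface displacement; only the roughly 5 km source depth is taken from the abstract, and the Poisson ratio and withdrawn volume are assumed.

```python
import numpy as np

def mogi_uz(r, depth, dV, nu=0.25):
    """Vertical surface displacement of a Mogi point source in an elastic
    half-space (positive dV = inflation, negative dV = deflation)."""
    return (1.0 - nu) * dV / np.pi * depth / (r ** 2 + depth ** 2) ** 1.5

depth = 5.0e3            # m, source depth taken from the abstract
dV = -0.1e9              # m^3 of magma withdrawn (illustrative)
r = np.linspace(0.0, 20.0e3, 5)              # radial distance from the source axis
print(np.round(mogi_uz(r, depth, dV), 3))    # subsidence in metres
```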
Model-Mapped RPA for Determining the Effective Coulomb Interaction
NASA Astrophysics Data System (ADS)
Sakakibara, Hirofumi; Jang, Seung Woo; Kino, Hiori; Han, Myung Joon; Kuroki, Kazuhiko; Kotani, Takao
2017-04-01
We present a new method to obtain a model Hamiltonian from first-principles calculations. The effective interaction contained in the model is determined on the basis of random phase approximation (RPA). In contrast to previous methods such as projected RPA and constrained RPA (cRPA), the new method named "model-mapped RPA" takes into account the long-range part of the polarization effect to determine the effective interaction in the model. After discussing the problems of cRPA, we present the formulation of the model-mapped RPA, together with a numerical test for the single-band Hubbard model of HgBa2CuO4.
Application of a sparseness constraint in multivariate curve resolution - Alternating least squares.
Hugelier, Siewert; Piqueras, Sara; Bedia, Carmen; de Juan, Anna; Ruckebusch, Cyril
2018-02-13
The use of sparseness in chemometrics is a concept that has increased in popularity. Its main advantage is better interpretability of the results obtained. In this work, sparseness is implemented as a constraint in multivariate curve resolution - alternating least squares (MCR-ALS), which aims at reproducing raw (mixed) data by a bilinear model of chemically meaningful profiles. In many cases, the mixed raw data analyzed are not sparse by nature, but their decomposition profiles can be, as is the case for some instrumental responses, such as mass spectra, or for concentration profiles linked to scattered distribution maps of powdered samples in hyperspectral images. To induce sparseness in the constrained profiles, one-dimensional and/or two-dimensional numerical arrays can be fitted using a basis of Gaussian functions with a penalty on the coefficients. In this work, a least squares regression framework with an L0-norm penalty is applied. This L0-norm penalty constrains the number of non-null coefficients in the fit of the constrained array without requiring a priori knowledge of their number or positions. It is shown that the sparseness constraint suppresses values linked to uninformative channels and noise in MS spectra and improves the location of scattered compounds in distribution maps, resulting in better interpretability of the constrained profiles. An additional benefit of the sparseness constraint is lower ambiguity in the bilinear model, since the predominance of null coefficients in the constrained profiles also helps to limit the solutions for the profiles in the counterpart matrix of the MCR bilinear model. Copyright © 2017 Elsevier B.V. All rights reserved.
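A schematic of an L0-penalized fit on a Gaussian basis, implemented here with iterative hard thresholding rather than the authors' MCR-ALS machinery; the basis width, step size, penalty, and synthetic signal are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 200, 60                                   # signal length, number of Gaussian basis functions
x = np.linspace(0.0, 1.0, n)
centers = np.linspace(0.0, 1.0, m)
G = np.exp(-0.5 * ((x[:, None] - centers[None, :]) / 0.02) ** 2)   # Gaussian basis

# Sparse ground truth: three peaks plus noise (synthetic data).
c_true = np.zeros(m)
c_true[[10, 30, 47]] = [1.0, 0.6, 0.8]
y = G @ c_true + 0.02 * rng.standard_normal(n)

# Iterative hard thresholding for  min ||y - G c||^2 + lam * ||c||_0.
lam, eta = 0.05, 1.0 / np.linalg.norm(G, 2) ** 2
c = np.zeros(m)
for _ in range(500):
    c = c + eta * G.T @ (y - G @ c)              # gradient step on the least-squares term
    c[c ** 2 < 2.0 * eta * lam] = 0.0            # hard threshold enforces the L0 penalty

print(f"non-zero coefficients kept: {np.count_nonzero(c)} of {m}")
```

Setting small coefficients exactly to zero, rather than merely shrinking them, is what yields the interpretability and reduced ambiguity discussed in the abstract.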
Hunting down the best model of inflation with Bayesian evidence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, Jerome; Ringeval, Christophe; Trotta, Roberto
2011-03-15
We present the first calculation of the Bayesian evidence for different prototypical single field inflationary scenarios, including representative classes of small field and large field models. This approach allows us to compare inflationary models in a well-defined statistical way and to determine the current 'best model of inflation'. The calculation is performed numerically by interfacing the inflationary code FieldInf with MultiNest. We find that small field models are currently preferred, while large field models having a self-interacting potential of power p>4 are strongly disfavored. The class of small field models as a whole has posterior odds of approximately 3:1 when compared with the large field class. The methodology and results presented in this article are an additional step toward the construction of a full numerical pipeline to constrain the physics of the early Universe with astrophysical observations. More accurate data (such as the Planck data) and the techniques introduced here should allow us to identify conclusively the best inflationary model.
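As a minimal illustration of how posterior odds follow from computed evidences (the paper obtains its log-evidences from FieldInf interfaced with MultiNest; the numbers below are made up), assuming equal prior model probabilities:

```python
import numpy as np

# Hypothetical log-evidences ln(Z) returned by a nested-sampling run
lnZ_small_field = -1002.3
lnZ_large_field = -1003.4

# Bayes factor and posterior odds, assuming equal prior model probabilities
lnB = lnZ_small_field - lnZ_large_field
posterior_odds = np.exp(lnB)
print("Bayes factor ~ %.1f : 1 in favour of small-field models" % posterior_odds)
```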
Models of volcanic eruption hazards
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wohletz, K.H.
1992-01-01
Volcanic eruptions pose an ever present but poorly constrained hazard to life and property for geothermal installations in volcanic areas. Because eruptions occur sporadically and may limit field access, quantitative and systematic field studies of eruptions are difficult to complete. Circumventing this difficulty, laboratory models and numerical simulations are pivotal in building our understanding of eruptions. For example, the results of fuel-coolant interaction experiments show that magma-water interaction controls many eruption styles. Applying these results, increasing numbers of field studies now document and interpret the role of external water in eruptions. Similarly, numerical simulations solve the fundamental physics of high-speed fluid flow and give quantitative predictions that elucidate the complexities of pyroclastic flows and surges. A primary goal of these models is to guide geologists in searching for critical field relationships and making their interpretations. Coupled with field work, modeling is beginning to allow more quantitative and predictive volcanic hazard assessments.
An enhanced beam model for constrained layer damping and a parameter study of damping contribution
NASA Astrophysics Data System (ADS)
Xie, Zhengchao; Shepard, W. Steve, Jr.
2009-01-01
An enhanced analytical model is presented based on an extension of previous models for constrained layer damping (CLD) in beam-like structures. Most existing CLD models are based on the assumption that shear deformation in the core layer is the only source of damping in the structure. However, previous research has shown that other types of deformation in the core layer, such as deformations from longitudinal extension and transverse compression, can also be important. In the enhanced analytical model developed here, shear, extension, and compression deformations are all included. This model can be used to predict the natural frequencies and modal loss factors. The numerical study shows that compared to other models, this enhanced model is accurate in predicting the dynamic characteristics. As a result, the model can be accepted as a general computation model. With all three types of damping included and the formulation used here, it is possible to study the impact of the structure's geometry and boundary conditions on the relative contribution of each type of damping. To that end, the relative contributions in the frequency domain for a few sample cases are presented.
Quantifying in vivo laxity in the anterior cruciate ligament and individual knee joint structures.
Westover, L M; Sinaei, N; Küpper, J C; Ronsky, J L
2016-11-01
A custom knee loading apparatus (KLA), when used in conjunction with magnetic resonance imaging, enables in vivo measurement of the gross anterior laxity of the knee joint. A numerical model was applied to the KLA to understand the contribution of the individual joint structures and to estimate the stiffness of the anterior-cruciate ligament (ACL). The model was evaluated with a cadaveric study using an in situ knee loading apparatus and an ElectroForce test system. A constrained optimization solution technique was able to predict the restraining forces within the soft-tissue structures and joint contact. The numerical model presented here allowed in vivo prediction of the material stiffness parameters of the ACL in response to applied anterior loading. Promising results were obtained for in vivo load sharing within the structures. The numerical model overestimated the ACL forces by 27.61-92.71%. This study presents a novel approach to estimate ligament stiffness and provides the basis to develop a robust and accurate measure of in vivo knee joint laxity.
Finite-density effects in the Fredrickson-Andersen and Kob-Andersen kinetically-constrained models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teomy, Eial, E-mail: eialteom@post.tau.ac.il; Shokef, Yair, E-mail: shokef@tau.ac.il
2014-08-14
We calculate the corrections to the thermodynamic limit of the critical density for jamming in the Kob-Andersen and Fredrickson-Andersen kinetically-constrained models, and find them to be finite-density corrections, and not finite-size corrections. We do this by introducing a new numerical algorithm, which requires negligible computer memory since contrary to alternative approaches, it generates at each point only the necessary data. The algorithm starts from a single unfrozen site and at each step randomly generates the neighbors of the unfrozen region and checks whether they are frozen or not. Our results correspond to systems of size greater than 10^7 × 10^7, much larger than any simulated before, and are consistent with the rigorous bounds on the asymptotic corrections. We also find that the average number of sites that seed a critical droplet is greater than 1.
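A rough sketch of the memory-saving idea described above is given below: lattice sites are generated only when the growing unfrozen region first needs them, so nothing outside that region is ever stored. The facilitation rule used here (a site joins if it is vacant or has at least m unfrozen neighbours, checked as the region is swept) is a simplified stand-in for the actual Kob-Andersen/Fredrickson-Andersen constraints, and the parameters are arbitrary.

```python
import random
from collections import deque

def unfrozen_cluster_size(rho, m=2, max_sites=100000, seed=0):
    """Grow the region reachable from one vacant seed site, drawing the
    occupancy of each lattice site only when first needed (lazy generation).
    A site joins the 'unfrozen' region if it is vacant, or if at least m of
    its neighbours are already unfrozen (simplified facilitation rule).
    Blocked sites are only re-examined when another neighbouring site is
    dequeued; the full algorithm would revisit them more systematically."""
    random.seed(seed)
    occupied = {}            # site -> True/False, generated on demand
    unfrozen = {(0, 0)}      # seed site, taken to be vacant
    frontier = deque([(0, 0)])
    while frontier and len(unfrozen) < max_sites:
        x, y = frontier.popleft()
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nb in unfrozen:
                continue
            if nb not in occupied:
                occupied[nb] = (random.random() < rho)   # draw occupancy lazily
            n_unfrozen_nb = sum(((nb[0] + dx, nb[1] + dy) in unfrozen)
                                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)))
            if (not occupied[nb]) or n_unfrozen_nb >= m:
                unfrozen.add(nb)
                frontier.append(nb)
    return len(unfrozen)

print(unfrozen_cluster_size(rho=0.90))
```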
NASA Astrophysics Data System (ADS)
Hadi, Fatemeh; Janbozorgi, Mohammad; Sheikhi, M. Reza H.; Metghalchi, Hameed
2016-10-01
The rate-controlled constrained-equilibrium (RCCE) method is employed to study the interactions between mixing and chemical reaction. Considering that mixing can influence the RCCE state, the key objective is to assess the accuracy and numerical performance of the method in simulations involving both reaction and mixing. The RCCE formulation includes rate equations for constraint potentials, density and temperature, which allows taking account of mixing alongside chemical reaction without splitting. The RCCE is a dimension reduction method for chemical kinetics based on thermodynamics laws. It describes the time evolution of reacting systems using a series of constrained-equilibrium states determined by RCCE constraints. The full chemical composition at each state is obtained by maximizing the entropy subject to the instantaneous values of the constraints. The RCCE is applied to a spatially homogeneous constant pressure partially stirred reactor (PaSR) involving methane combustion in oxygen. Simulations are carried out over a wide range of initial temperatures and equivalence ratios. The chemical kinetics, comprised of 29 species and 133 reaction steps, is represented by 12 RCCE constraints. The RCCE predictions are compared with those obtained by direct integration of the same kinetics, termed detailed kinetics model (DKM). The RCCE shows accurate prediction of combustion in PaSR with different mixing intensities. The method also demonstrates reduced numerical stiffness and overall computational cost compared to DKM.
NASA Astrophysics Data System (ADS)
Han, Xiaobao; Li, Huacong; Jia, Qiusheng
2017-12-01
For the dynamic decoupling of polynomial linear parameter varying (PLPV) systems, a robust dominance pre-compensator design method is given. The parameterized pre-compensator design problem is converted into an optimization problem constrained by parameterized linear matrix inequalities (PLMIs) by using the concept of a parameterized Lyapunov function (PLF). To solve the PLMI-constrained optimization problem, the pre-compensator design problem is reduced to a standard convex optimization problem with ordinary linear matrix inequality (LMI) constraints on a newly constructed convex polyhedron. Moreover, a parameter-scheduling pre-compensator is obtained which satisfies both robust performance and decoupling requirements. Finally, the feasibility and validity of the robust diagonal dominance pre-compensator design method are verified by numerical simulation on a turbofan engine PLPV model.
Observations of Circumstellar Thermochemical Equilibrium: The Case of Phosphorus
NASA Technical Reports Server (NTRS)
Milam, Stefanie N.; Charnley, Steven B.
2011-01-01
We will present observations of phosphorus-bearing species in circumstellar envelopes, including carbon- and oxygen-rich shells. New models of thermochemical equilibrium chemistry have been developed to interpret, and are constrained by, these data. These calculations will also be presented and compared to the numerous P-bearing species already observed in evolved stars. Predictions for other viable species will be made for observations with Herschel and ALMA.
Phase field benchmark problems for dendritic growth and linear elasticity
Jokisaari, Andrea M.; Voorhees, P. W.; Guyer, Jonathan E.; ...
2018-03-26
We present the second set of benchmark problems for phase field models that are being jointly developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST) along with input from other members in the phase field community. As the integrated computational materials engineering (ICME) approach to materials design has gained traction, there is an increasing need for quantitative phase field results. New algorithms and numerical implementations increase computational capabilities, necessitating standard problems to evaluate their impact on simulated microstructure evolution as well as their computational performance. We propose one benchmark problem for solidification and dendritic growth in a single-component system, and one problem for linear elasticity via the shape evolution of an elastically constrained precipitate. We demonstrate the utility and sensitivity of the benchmark problems by comparing the results of 1) dendritic growth simulations performed with different time integrators and 2) elastically constrained precipitate simulations with different precipitate sizes, initial conditions, and elastic moduli. As a result, these numerical benchmark problems will provide a consistent basis for evaluating different algorithms, both existing and those to be developed in the future, for accuracy and computational efficiency when applied to simulate physics often incorporated in phase field models.
NASA Astrophysics Data System (ADS)
Glišović, P.; Forte, A. M.; Moucha, R.
2012-08-01
One of the outstanding problems in modern geodynamics is the development of thermal convection models that are consistent with the present-day flow dynamics in the Earth's mantle, in accord with seismic tomographic images of 3-D Earth structure, and that are also capable of providing a time-dependent evolution of the mantle thermal structure that is as 'realistic' (Earth-like) as possible. A successful realization of this objective would provide a realistic model of 3-D mantle convection that has optimal consistency with a wide suite of seismic, geodynamic and mineral physical constraints on mantle structure and thermodynamic properties. To address this challenge, we have constructed a time-dependent, compressible convection model in 3-D spherical geometry that is consistent with tomography-based instantaneous flow dynamics, using an updated and revised pseudo-spectral numerical method. The novel feature of our numerical solutions is that the equations of conservation of mass and momentum are solved only once in terms of spectral Green's functions. We initially focus on the theory and numerical methods employed to solve the equation of thermal energy conservation using the Green's function solutions for the equation of motion, with special attention placed on the numerical accuracy and stability of the convection solutions. A particular concern is the verification of the global energy balance in the dissipative, compressible-mantle formulation we adopt. Such validation is essential because we then present geodynamically constrained convection solutions over billion-year timescales, starting from present-day seismically constrained thermal images of the mantle. The use of geodynamically constrained spectral Green's functions facilitates the modelling of the dynamic impact on the mantle evolution of: (1) depth-dependent thermal conductivity profiles, (2) extreme variations of viscosity over depth and (3) different surface boundary conditions, in this case mobile surface plates and a rigid surface. The thermal interpretation of seismic tomography models does not provide a radial profile of the horizontally averaged temperature (i.e. the geotherm) in the mantle. One important goal of this study is to obtain a steady-state geotherm with boundary layers which satisfies energy balance of the system and provides the starting point for more realistic numerical simulations of the Earth's evolution. We obtain surface heat flux in the range of Earth-like values : 37 TW for a rigid surface and 44 TW for a surface with tectonic plates coupled to the mantle flow. Also, our convection simulations deliver CMB heat flux that is on the high end of previously estimated values, namely 13 TW and 20 TW, for rigid and plate-like surface boundary conditions, respectively. We finally employ these two end-member surface boundary conditions to explore the very-long-time scale evolution of convection over billion-year time windows. These billion-year-scale simulations will allow us to determine the extent to which a 'memory' of the starting tomography-based thermal structure is preserved and hence to explore the longevity of the structures in the present-day mantle. The two surface boundary conditions, along with the geodynamically inferred radial viscosity profiles, yield steady-state convective flows that are dominated by long wavelengths throughout the lower mantle. 
The rigid-surface condition yields a spectrum of mantle heterogeneity dominated by spherical harmonic degree 3 and 4, and the plate-like surface condition yields a pattern dominated by degree 1. Our exploration of the time-dependence of the spatial heterogeneity shows that, for both types of surface boundary condition, deep-mantle hot upwellings resolved in the present-day tomography model are durable and stable features. These deeply rooted mantle plumes show remarkable longevity over very long geological time spans, mainly owing to the geodynamically inferred high viscosity in the lower mantle.
Dynamic Financial Constraints: Distinguishing Mechanism Design from Exogenously Incomplete Regimes*
Karaivanov, Alexander; Townsend, Robert M.
2014-01-01
We formulate and solve a range of dynamic models of constrained credit/insurance that allow for moral hazard and limited commitment. We compare them to full insurance and exogenously incomplete financial regimes (autarky, saving only, borrowing and lending in a single asset). We develop computational methods based on mechanism design, linear programming, and maximum likelihood to estimate, compare, and statistically test these alternative dynamic models with financial/information constraints. Our methods can use both cross-sectional and panel data and allow for measurement error and unobserved heterogeneity. We estimate the models using data on Thai households running small businesses from two separate samples. We find that in the rural sample, the exogenously incomplete saving only and borrowing regimes provide the best fit using data on consumption, business assets, investment, and income. Family and other networks help consumption smoothing there, as in a moral hazard constrained regime. In contrast, in urban areas, we find mechanism design financial/information regimes that are decidedly less constrained, with the moral hazard model fitting best combined business and consumption data. We perform numerous robustness checks in both the Thai data and in Monte Carlo simulations and compare our maximum likelihood criterion with results from other metrics and data not used in the estimation. A prototypical counterfactual policy evaluation exercise using the estimation results is also featured. PMID:25246710
Modeling and Simulation of the Gonghe geothermal field (Qinghai, China) Constrained by Geophysical Data
NASA Astrophysics Data System (ADS)
Zeng, Z.; Wang, K.; Zhao, X.; Huai, N.; He, R.
2017-12-01
The Gonghe geothermal field in Qinghai is important because of its variety of geothermal resource types, and it has now become a demonstration area for geothermal development and utilization in China. It has been the topic of numerous geophysical investigations conducted to determine the depth to, and the nature of, the heat source, and to image the channel of heat flow. This work focuses on the origin of the geothermal field using a numerical simulation method constrained by geophysical data. First, by analyzing and inverting a magnetotelluric (MT) profile across the area we obtain the deep resistivity distribution. Using a gravity anomaly inversion constrained by the resistivity profile, the density of the basins and the underlying rocks can be calculated. Combined with measured rock thermal conductivities, a 2D geothermal conceptual model of the Gonghe area is constructed. Then, an unstructured finite element method is used to solve the heat conduction equation and simulate the geothermal field. Results of this model were calibrated with temperature data from an observation well, and a good match was achieved between the measured values and the model's predictions. Finally, the geothermal gradient and heat flow distribution of this model are calculated (Fig. 1). According to the geophysical results, there is a low-resistivity, low-density region (d5) below the geothermal field. We interpret this anomaly as being generated by tectonic motion, with the tectonic movement creating an upstream channel for mantle-derived heat, so that the basement heat flow values there are higher than in other regions. The model's predictions obtained with that boundary condition match the measured values well. The simulated heat flow shows that the mantle-derived heat flow migrates through the boundary of the low-resistivity, low-density anomaly to the Gonghe geothermal field, with only a small fraction moving to other regions. The continuous supply of mantle-derived heat across this tectonic channel is therefore the main cause of the abundant geothermal resources of the Gonghe geothermal field.
Efficient Compressed Sensing Based MRI Reconstruction using Nonconvex Total Variation Penalties
NASA Astrophysics Data System (ADS)
Lazzaro, D.; Loli Piccolomini, E.; Zama, F.
2016-10-01
This work addresses the problem of magnetic resonance image reconstruction from highly sub-sampled measurements in the Fourier domain. It is modeled as a constrained minimization problem, where the objective function is a non-convex function of the gradient of the unknown image and the constraints are given by the data fidelity term. We propose an algorithm, Fast Non-Convex Reweighted (FNCR), where the constrained problem is solved by a reweighting scheme, as a strategy to overcome the non-convexity of the objective function, with an adaptive adjustment of the penalization parameter. We propose a fast iterative algorithm and prove that it converges to a local minimum because the constrained problem satisfies the Kurdyka-Lojasiewicz property. Moreover, the adaptation of the non-convex ℓ0 approximation and of the penalization parameters, by means of a continuation technique, allows us to obtain good quality solutions, avoiding getting stuck in unwanted local minima. Numerical experiments performed on sub-sampled MRI data show the efficiency of the algorithm and the accuracy of the solution.
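The FNCR algorithm itself (with its continuation strategy and data-fidelity constraints) is not reproduced here; as a generic illustration of the reweighting idea for a non-convex penalty, the sketch below runs a small iteratively reweighted least-squares loop for an ℓp (p < 1) penalized problem. The operator, data and parameters are hypothetical.

```python
import numpy as np

def irls_lp(A, b, lam=1e-2, p=0.5, eps=1e-6, n_iter=50):
    """Approximately minimise ||Ax - b||^2 + lam * sum |x_i|^p with p < 1
    (non-convex) by repeatedly solving a weighted ridge problem: at each
    iterate the penalty is replaced by a quadratic with weights built from
    the current solution (constant factors absorbed into lam)."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        w = (x**2 + eps) ** (p / 2.0 - 1.0)          # reweighting step
        x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ b)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200))
x_true = np.zeros(200)
x_true[rng.choice(200, 8, replace=False)] = 1.0
x_rec = irls_lp(A, A @ x_true)
print("relative recovery error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```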
Mean Field Type Control with Congestion (II): An Augmented Lagrangian Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Achdou, Yves, E-mail: achdou@ljll.univ-paris-diderot.fr; Laurière, Mathieu
This work deals with a numerical method for solving a mean-field type control problem with congestion. It is the continuation of an article by the same authors, in which suitably defined weak solutions of the system of partial differential equations arising from the model were discussed and existence and uniqueness were proved. Here, the focus is put on numerical methods: a monotone finite difference scheme is proposed and shown to have a variational interpretation. Then an Alternating Direction Method of Multipliers for solving the variational problem is addressed. It is based on an augmented Lagrangian. Two kinds of boundary conditions are considered: periodic conditions and more realistic boundary conditions associated to state constrained problems. Various test cases and numerical results are presented.
Constraining Slab Breakoff Induced Magmatism through Numerical Modelling
NASA Astrophysics Data System (ADS)
Freeburn, R.; Van Hunen, J.; Maunder, B. L.; Magni, V.; Bouilhol, P.
2015-12-01
Post-collisional magmatism is markedly different in nature and composition from pre-collisional magmas. This is widely interpreted to mark a change in the thermal structure of the system due to the loss of the oceanic slab (slab breakoff), allowing a different source to melt. Early modelling studies suggest that when breakoff takes place at depths shallower than the overriding lithosphere, magmatism occurs through both the decompression of upwelling asthenosphere into the slab window and the thermal perturbation of the overriding lithosphere (Davies & von Blanckenburg, 1995; van de Zedde & Wortel, 2001). Interpretations of geochemical data which invoke slab breakoff as a means of generating magmatism mostly assume these shallow depths. However, more recent modelling results suggest that slab breakoff is likely to occur deeper (e.g. Andrews & Billen, 2009; Duretz et al., 2011; van Hunen & Allen, 2011). Here we test the extent to which slab breakoff is a viable mechanism for generating melting in post-collisional settings. Using 2-D numerical models we conduct a parametric study, producing models displaying a range of dynamics with breakoff depths ranging from 150 to 300 km. Key models are further analysed to assess the extent of melting. We consider the mantle wedge above the slab to be hydrated, and compute the melt fraction by using a simple parameterised solidus. Our models show that breakoff at shallow depths can generate a short-lived (< 3 Myr) pulse of mantle melting, through the hydration of hotter, undepleted asthenosphere flowing in from behind the detached slab. However, our results do not display the widespread, prolonged style of magmatism observed in many post-collisional areas, suggesting that this magmatism may be generated via alternative mechanisms. This further implies that using magmatic observations to constrain slab breakoff is not straightforward.
NASA Astrophysics Data System (ADS)
Chai, Xintao; Tang, Genyang; Peng, Ronghua; Liu, Shaoyong
2018-03-01
Full-waveform inversion (FWI) reconstructs the subsurface properties from acquired seismic data via minimization of the misfit between observed and simulated data. However, FWI suffers from considerable computational costs resulting from the numerical solution of the wave equation for each source at each iteration. To reduce the computational burden, constructing supershots by combining several sources (aka source encoding) allows mitigation of the number of simulations at each iteration, but it gives rise to crosstalk artifacts because of interference between the individual sources of the supershot. A modified Gauss-Newton FWI (MGNFWI) approach showed that, as long as the difference between the initial and true models permits a sparse representation, ℓ1-norm constrained model updates suppress subsampling-related artifacts. However, the spectral projected gradient ℓ1 (SPGℓ1) algorithm employed by MGNFWI is rather complicated, which makes its implementation difficult. To facilitate realistic applications, we adapt a linearized Bregman (LB) method to sparsity-promoting FWI (SPFWI) because of the efficiency and simplicity of LB in the framework of ℓ1-norm constrained optimization and compressive sensing. Numerical experiments performed with the BP Salt model, the Marmousi model and the BG Compass model verify the following points. The FWI result obtained with LB solving the ℓ1-norm sparsity-promoting problem for the model update outperforms that generated by solving the ℓ2-norm problem in terms of crosstalk elimination and high-fidelity results. The simpler LB method performs comparably and even superiorly to the complicated SPGℓ1 method in terms of computational efficiency and model quality, making the LB method a viable alternative for realistic implementations of SPFWI.
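A minimal sketch of a linearized Bregman-type iteration for an ℓ1 sparsity-promoting problem is given below, on a generic matrix A rather than the FWI wave-equation operators; step size, threshold and problem sizes are placeholders.

```python
import numpy as np

def soft_threshold(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def linearized_bregman(A, b, lam=1.0, n_iter=500):
    """Linearized Bregman-type iteration promoting an l1-sparse solution of
    A x = b (sketch): a gradient step on the residual in the dual variable v,
    followed by soft thresholding to obtain the sparse iterate x."""
    tau = 1.0 / np.linalg.norm(A, 2) ** 2      # step size from the spectral norm
    v = np.zeros(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v = v - tau * A.T @ (A @ x - b)        # gradient step on the data residual
        x = soft_threshold(v, lam)             # shrinkage promotes sparsity
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 150))
x_true = np.zeros(150)
x_true[rng.choice(150, 6, replace=False)] = rng.standard_normal(6)
x_rec = linearized_bregman(A, A @ x_true)
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```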
DOE Office of Scientific and Technical Information (OSTI.GOV)
Furuuchi, Kazuyuki; Sperling, Marcus, E-mail: kazuyuki.furuuchi@manipal.edu, E-mail: marcus.sperling@univie.ac.at
2017-05-01
We study quantum tunnelling in Dante's Inferno model of large field inflation. Such a tunnelling process, which will terminate inflation, becomes problematic if the tunnelling rate is rapid compared to the Hubble time scale at the time of inflation. Consequently, we constrain the parameter space of Dante's Inferno model by demanding a suppressed tunnelling rate during inflation. The constraints are derived and explicit numerical bounds are provided for representative examples. Our considerations are at the level of an effective field theory; hence, the presented constraints have to hold regardless of any UV completion.
NASA Astrophysics Data System (ADS)
Belkina, T. A.; Konyukhova, N. B.; Kurochkin, S. V.
2016-01-01
Previous and new results are used to compare two mathematical insurance models with identical insurance company strategies in a financial market, namely, when the entire current surplus or a constant fraction of it is invested in risky assets (stocks), while the rest of the surplus is invested in a risk-free asset (bank account). Model I is the classical Cramér-Lundberg risk model with an exponential claim size distribution. Model II is a modification of the classical risk model (a risk process with stochastic premiums) with exponential distributions of claim and premium sizes. For the survival probability of an insurance company over infinite time (as a function of its initial surplus), there arise singular problems for second-order linear integrodifferential equations (IDEs) defined on a semi-infinite interval and having nonintegrable singularities at zero: model I leads to a singular constrained initial value problem for an IDE with a Volterra integral operator, while model II leads to a more complicated nonlocal constrained problem for an IDE with a non-Volterra integral operator. A brief overview of previous results for these two problems depending on several positive parameters is given, and new results are presented. Additional results are concerned with the formulation, analysis, and numerical study of "degenerate" problems for both models, i.e., problems in which some of the IDE parameters vanish; moreover, the passages to the limit with respect to the parameters through which we proceed from the original problems to the degenerate ones are singular for small and/or large argument values. Such problems are of mathematical and practical interest in themselves. Along with insurance models without investment, they describe the case of a surplus completely invested in risk-free assets, as well as some noninsurance models of surplus dynamics, for example, charity-type models.
Can the Ocean's Heat Engine Control Horizontal Circulation? Insights From the Caspian Sea
NASA Astrophysics Data System (ADS)
Bruneau, Nicolas; Zika, Jan; Toumi, Ralf
2017-10-01
We investigate the role of the ocean's heat engine in setting horizontal circulation using a numerical model of the Caspian Sea. The Caspian Sea can be seen as a virtual laboratory—a compromise between realistic global models that are hampered by long equilibration times and idealized basin geometry models, which are not constrained by observations. We find that increases in vertical mixing drive stronger thermally direct overturning and consequent conversion of available potential to kinetic energy. Numerical solutions with water mass structures closest to observations overturn 0.02-0.04 × 10⁶ m³/s (sverdrup), representing the first estimate of Caspian Sea overturning. Our results also suggest that the overturning is thermally forced, increasing in intensity with increasing vertical diffusivity. Finally, stronger thermally direct overturning is associated with a stronger horizontal circulation in the Caspian Sea. This suggests that the ocean's heat engine can strongly impact broader horizontal circulations in the ocean.
A unified wall function for compressible turbulence modelling
NASA Astrophysics Data System (ADS)
Ong, K. C.; Chan, A.
2018-05-01
Turbulence modelling near the wall often requires a high mesh density clustered around the wall, with the first cells adjacent to the wall placed in the viscous sublayer. As a result, numerical stability is constrained by the smallest cell size, which incurs a high computational overhead. In the present study, a unified wall function is developed which is valid in the viscous sublayer, the buffer sublayer and the inertial sublayer, and which includes the effects of compressibility, heat transfer and pressure gradient. The resulting wall function applies to compressible turbulence modelling for both isothermal and adiabatic wall boundary conditions with non-zero pressure gradient. Two simple wall function algorithms are implemented for practical computation of isothermal and adiabatic wall boundary conditions. The numerical results show that the wall function evaluates the wall shear stress and the turbulent quantities of wall-adjacent cells over a wide range of non-dimensional wall distances and relaxes the requirements on the number and size of cells.
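The paper's unified wall function is not reproduced here. As an illustration of a single formula spanning the viscous, buffer and log layers, the sketch below uses the classical Spalding law (incompressible, zero pressure gradient) and inverts it numerically for u+ given y+; the constants are typical textbook values.

```python
import numpy as np
from scipy.optimize import brentq

KAPPA, B = 0.41, 5.0     # typical von Karman constant and log-law intercept

def spalding_yplus(uplus):
    """Spalding's unified law of the wall: y+ as an explicit function of u+."""
    ku = KAPPA * uplus
    return uplus + np.exp(-KAPPA * B) * (np.exp(ku) - 1.0 - ku - ku**2 / 2.0 - ku**3 / 6.0)

def uplus_from_yplus(yplus):
    """Invert Spalding's law for u+ by bracketed root finding."""
    return brentq(lambda up: spalding_yplus(up) - yplus, 0.0, 50.0)

for yp in (1.0, 10.0, 100.0, 1000.0):
    print("y+ = %7.1f  ->  u+ = %.2f" % (yp, uplus_from_yplus(yp)))
```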
A Grobner Basis Solution for Lightning Ground Flash Fraction Retrieval
NASA Technical Reports Server (NTRS)
Solakiewicz, Richard; Attele, Rohan; Koshak, William
2011-01-01
A Bayesian inversion method was previously introduced for retrieving the fraction of ground flashes in a set of flashes observed from a (low earth orbiting or geostationary) satellite lightning imager. The method employed a constrained mixed exponential distribution model to describe the lightning optical measurements. To obtain the optimum model parameters, a scalar function was minimized by a numerical method. In order to improve this optimization, we introduce a Grobner basis solution to obtain analytic representations of the model parameters that serve as a refined initialization scheme to the numerical optimization. Using the Grobner basis, we show that there are exactly 2 solutions involving the first 3 moments of the (exponentially distributed) data. When the mean of the ground flash optical characteristic (e.g., such as the Maximum Group Area, MGA) is larger than that for cloud flashes, then a unique solution can be obtained.
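As a hedged illustration of the approach (not the paper's exact constrained model), the sketch below writes the first three moment equations for a plain two-component exponential mixture and reduces them with a Gröbner basis in SymPy; the moment values are synthetic, generated from a made-up mixture, and the resulting polynomial system admits two solutions that differ only by swapping the components.

```python
import sympy as sp

a, m1, m2 = sp.symbols('alpha mu1 mu2', positive=True)   # mixture weight and component means

# Hypothetical first three moments of exponentially distributed data,
# generated here from alpha = 0.3, mu1 = 1, mu2 = 5 for the sake of the example.
M1, M2, M3 = sp.Rational(19, 5), sp.Rational(178, 5), sp.Rational(2634, 5)

# Moment equations of f(x) = a/mu1*exp(-x/mu1) + (1 - a)/mu2*exp(-x/mu2)
eqs = [a*m1 + (1 - a)*m2 - M1,
       2*(a*m1**2 + (1 - a)*m2**2) - M2,
       6*(a*m1**3 + (1 - a)*m2**3) - M3]

# A lexicographic Groebner basis triangularises the polynomial system ...
G = sp.groebner(eqs, a, m1, m2, order='lex')
print(G)
# ... whose solutions (two, up to relabelling the components) can then be read off
print(sp.solve(eqs, [a, m1, m2], dict=True))
```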
How can we constrain the amount of heat producing elements in the interior of Mars?
NASA Astrophysics Data System (ADS)
Grott, M.; Plesa, A.; Breuer, D.
2013-12-01
The InSight (Interior Exploration using Seismic Investigations, Geodesy and Heat Transport) mission to be launched in 2016 will study Mars' deep interior and help improve our knowledge of the interior structure and the thermal evolution of the planet - the latter is also directly linked to its volcanic history and atmospheric evolution. Measurements planned with the two main instruments, SEIS (Seismic Experiment for Interior Structure) and HP3 (Heat Flow and Physical Properties Package), aim to constrain the main structure of the planet, i.e. core, mantle and crust, as well as the rate at which the planet loses its interior heat through its surface. Since the surface heat flow depends on the amount of radiogenic heat producing elements (HPE) present in the interior, it offers a measurable quantity that can constrain the heat budget. Being the principal agent regulating the heat budget, which in turn influences partial melting in the interior and crustal and atmospheric evolution, the heat producing elements have a major impact on the present temperature and the entire thermal history of the planet. Constraining the radiogenic heat elements of the planet from the surface heat flow is possible if the Urey number of the planet, which describes the contribution of internal heat production to the surface heat loss, is known. We have tested this assumption by calculating the thermal evolution of the planet with fully dynamical numerical simulations and by comparing the obtained present-day Urey number for a set of different models/parameters (Fig. 1). For one-plate planets like Mars, numerical models show - in contrast to models for the Earth, where plate tectonics plays a major role and adds complexity to the system - that the Urey ratio is mainly sensitive to two effects: the efficiency of cooling due to the temperature dependence of the viscosity and the mean half-life of the long-lived radiogenic isotopes. The temperature dependence of the viscosity results in the so-called thermostat effect, regulating the interior temperature such that the present-day temperatures are independent of the initial temperature distribution. If the thermostat effect is efficient, as we show for the assumed Martian mantle rheology, and if the system is not dominated by radioactive isotopes like thorium, with a half-life much longer than the age of the planet as in the model of [3], all numerical simulations show similar present-day values for the Urey number (Fig. 1). Knowing the surface heat loss from the upcoming heat flow measurements planned for the InSight mission, one can then distinguish between different radiogenic heat source models [1, 2, 3, 4]. REFERENCES: [1] Wänke et al., 1994; [2] Lodders & Fegley, 1997; [3] Morgan & Anders, 1979; [4] Treiman et al., 1986. Fig. 1: (a) the influence of the reference viscosity and the initial upper thermal boundary layer (TBL) on the Urey ratio using the HPE abundances of [1]; (b) different models for the HPE abundances; (c) the Urey ratio for different HPE models and a reference viscosity of 1e22 Pa s.
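To make the quantity at stake concrete, the sketch below computes a present-day Urey ratio as radiogenic heat production (decaying with the isotopes' half-lives) divided by surface heat loss. The half-lives and specific heat productions are standard textbook values quoted only for illustration, while the abundances, mantle mass and surface heat loss are hypothetical placeholders, not the values used in the study.

```python
import numpy as np

# (half-life in Gyr, present-day heat production in W per kg of isotope)
isotopes = {
    "U238":  (4.47, 9.5e-5),
    "U235":  (0.70, 5.7e-4),
    "Th232": (14.0, 2.6e-5),
    "K40":   (1.25, 2.9e-5),
}
# Hypothetical bulk abundances (kg of isotope per kg of silicate rock)
abundance = {"U238": 16e-9 * 0.9928, "U235": 16e-9 * 0.0072,
             "Th232": 56e-9, "K40": 305e-6 * 1.17e-4}

def heat_production(t_gyr_before_present, mantle_mass=5.0e23):
    """Total radiogenic heat production (W) at a given time before present;
    mantle_mass is an illustrative value of the order of the Martian mantle."""
    H = 0.0
    for iso, (t_half, h) in isotopes.items():
        decay = np.exp(np.log(2.0) * t_gyr_before_present / t_half)
        H += abundance[iso] * h * decay * mantle_mass
    return H

surface_heat_loss = 2.5e12          # W, hypothetical present-day surface heat loss
urey_ratio = heat_production(0.0) / surface_heat_loss
print("present-day Urey ratio ~ %.2f" % urey_ratio)
```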
Single-particle dispersion in stably stratified turbulence
NASA Astrophysics Data System (ADS)
Sujovolsky, N. E.; Mininni, P. D.; Rast, M. P.
2018-03-01
We present models for single-particle dispersion in vertical and horizontal directions of stably stratified flows. The model in the vertical direction is based on the observed Lagrangian spectrum of the vertical velocity, while the model in the horizontal direction is a combination of a continuous-time eddy-constrained random walk process with a contribution to transport from horizontal winds. Transport at times larger than the Lagrangian turnover time is not universal and dependent on these winds. The models yield results in good agreement with direct numerical simulations of stratified turbulence, for which single-particle dispersion differs from the well-studied case of homogeneous and isotropic turbulence.
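The authors' eddy-constrained model is not reproduced here; as a generic illustration of a continuous-time random walk for horizontal single-particle dispersion with an added mean wind, the sketch below advances an Ornstein-Uhlenbeck velocity process. All parameters are hypothetical.

```python
import numpy as np

def horizontal_dispersion(n_particles=5000, n_steps=20000, dt=0.01,
                          t_L=1.0, sigma_u=1.0, u_wind=0.5, seed=0):
    """Ornstein-Uhlenbeck velocity process (decorrelation time t_L, rms
    velocity sigma_u) plus a constant horizontal wind u_wind; returns final
    particle positions."""
    rng = np.random.default_rng(seed)
    u = sigma_u * rng.standard_normal(n_particles)     # turbulent velocity
    x = np.zeros(n_particles)
    for _ in range(n_steps):
        u += (-u / t_L) * dt + sigma_u * np.sqrt(2.0 * dt / t_L) * rng.standard_normal(n_particles)
        x += (u + u_wind) * dt
    return x

x = horizontal_dispersion()
print("mean drift:", x.mean(), "  dispersion <x'^2>:", np.var(x))
```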
Confirmation and calibration of computer modeling of tsunamis produced by Augustine volcano, Alaska
Beget, James E.; Kowalik, Zygmunt
2006-01-01
Numerical modeling has been used to calculate the characteristics of a tsunami generated by a landslide into Cook Inlet from Augustine Volcano. The modeling predicts travel times of ca. 50-75 minutes to the nearest populated areas, and indicates that significant wave amplification occurs near Mt. Iliamna on the western side of Cook Inlet, and near the Nanwelak and the Homer-Anchor Point areas on the east side of Cook Inlet. Augustine volcano last produced a tsunami during an eruption in 1883, and field evidence of the extent and height of the 1883 tsunamis can be used to test and constrain the results of the computer modeling. Tsunami deposits on Augustine Island indicate waves near the landslide source were more than 19 m high, while 1883 tsunami deposits in distal sites record waves 6-8 m high. Paleotsunami deposits were found at sites along the coast near Mt. Iliamna, Nanwelak, and Homer, consistent with numerical modeling indicating significant tsunami wave amplification occurs in these areas.
Numerical study of dam-break induced tsunami-like bore with a hump of different slopes
NASA Astrophysics Data System (ADS)
Cheng, Du; Zhao, Xi-zeng; Zhang, Da-ke; Chen, Yong
2017-12-01
Numerical simulation of a dam-break wave, as an imitation of a tsunami hydraulic bore, with a hump of different slopes is performed in this paper using an in-house code, a Constrained Interpolation Profile (CIP)-based model. The model is built on a Cartesian grid system with the Navier-Stokes equations, using a CIP method for the flow solver, and employs an immersed boundary method (IBM) for the treatment of the solid body boundary. A more accurate interface capturing scheme, the Tangent of hyperbola for interface capturing/Slope weighting (THINC/SW) scheme, is adopted as the interface capturing method. The CIP-based model is then applied to simulate the dam-break flow problem in a bumpy channel. Considerable attention is paid to the spilling-type reflected bore, the subsequent spilling-type wave breaking, free surface profiles and water level variations over time. Computations are compared with available experimental data and other numerical results quantitatively and qualitatively. Further investigation is conducted to analyze the influence of variable slopes on the flow features of the tsunami-like bore.
Simulations of Ground Motion in Southern California based upon the Spectral-Element Method
NASA Astrophysics Data System (ADS)
Tromp, J.; Komatitsch, D.; Liu, Q.
2003-12-01
We use the spectral-element method to simulate ground motion generated by recent well-recorded small earthquakes in Southern California. Simulations are performed using a new sedimentary basin model that is constrained by hundreds of petroleum industry well logs and more than twenty thousand kilometers of seismic reflection profiles. The numerical simulations account for 3D variations of seismic wave speeds and density, topography and bathymetry, and attenuation. Simulations for several small recent events demonstrate that the combination of a detailed sedimentary basin model and an accurate numerical technique facilitates the simulation of ground motion at periods of 2 seconds and longer inside the Los Angeles basin and 6 seconds and longer elsewhere. Peak ground displacement, velocity and acceleration maps illustrate that significant amplification occurs in the basin. Centroid-Moment Tensor mechanisms are obtained based upon Pnl and surface waveforms and numerically calculated 3D Frechet derivatives. We use a combination of waveform and waveform-envelope misfit criteria, and facilitate pure double-couple or zero-trace moment-tensor inversions.
NASA Astrophysics Data System (ADS)
Liu, Qiang; Chattopadhyay, Aditi; Gu, Haozhong; Zhou, Xu
2000-08-01
The use of a special type of smart material, known as segmented constrained layer (SCL) damping, is investigated for improved rotor aeromechanical stability. The rotor blade load-carrying member is modeled using a composite box beam with arbitrary wall thickness. The SCLs are bonded to the upper and lower surfaces of the box beam to provide passive damping. A finite-element model based on a hybrid displacement theory is used to accurately capture the transverse shear effects in the composite primary structure and the viscoelastic and the piezoelectric layers within the SCL. Detailed numerical studies are presented to assess the influence of the number of actuators and their locations for improved aeromechanical stability. Ground and air resonance analysis models are implemented in the rotor blade built around the composite box beam with segmented SCLs. A classic ground resonance model and an air resonance model are used in the rotor-body coupled stability analysis. The Pitt dynamic inflow model is used in the air resonance analysis under hover condition. Results indicate that the surface bonded SCLs significantly increase rotor lead-lag regressive modal damping in the coupled rotor-body system.
A Test of Maxwell's Z Model Using Inverse Modeling
NASA Technical Reports Server (NTRS)
Anderson, J. L. B.; Schultz, P. H.; Heineck, T.
2003-01-01
In modeling impact craters a small region of energy and momentum deposition, commonly called a "point source", is often assumed. This assumption implies that an impact is the same as an explosion at some depth below the surface. Maxwell's Z Model, an empirical point-source model derived from explosion cratering, has previously been compared with numerical impact craters with vertical incidence angles, leading to two main inferences. First, the flowfield center of the Z Model must be placed below the target surface in order to replicate numerical impact craters. Second, for vertical impacts, the flow-field center cannot be stationary if the value of Z is held constant; rather, the flow-field center migrates downward as the crater grows. The work presented here evaluates the utility of the Z Model for reproducing both vertical and oblique experimental impact data obtained at the NASA Ames Vertical Gun Range (AVGR). Specifically, ejection angle data obtained through Three-Dimensional Particle Image Velocimetry (3D PIV) are used to constrain the parameters of Maxwell's Z Model, including the value of Z and the depth and position of the flow-field center via inverse modeling.
Extending semi-numeric reionization models to the first stars and galaxies
NASA Astrophysics Data System (ADS)
Koh, Daegene; Wise, John H.
2018-03-01
Semi-numeric methods have made it possible to efficiently model the epoch of reionization (EoR). While most implementations involve a reduction to a simple three-parameter model, we introduce a new mass-dependent ionizing efficiency parameter that folds in physical parameters that are constrained by the latest numerical simulations. This new parametrization enables the effective modelling of a broad range of host halo masses containing ionizing sources, extending from the smallest Population III host haloes with M ~ 10⁶ M⊙, which are often ignored, to the rarest cosmic peaks with M ~ 10¹² M⊙ during EoR. We compare the resulting ionizing histories with a typical three-parameter model and also compare with the latest constraints from the Planck mission. Our model results in an optical depth due to Thomson scattering, τe = 0.057, that is consistent with Planck. The largest difference in our model is shown in the resulting bubble size distributions that peak at lower characteristic sizes and are broadened. We also consider the uncertainties of the various physical parameters, and comparing the resulting ionizing histories broadly disfavours a small contribution from galaxies. The smallest haloes cease a meaningful contribution to the ionizing photon budget after z = 10, implying that they play a role in determining the start of EoR and little else.
Observed and Simulated Eddy Diffusivity Upstream of the Drake Passage
NASA Astrophysics Data System (ADS)
Tulloch, R.; Ferrari, R. M.; Marshall, J.
2012-12-01
Estimates of eddy diffusivity in the Southern Ocean are poorly constrained due to lack of observations. We compare the first direct estimate of isopycnal eddy diffusivity upstream of the Drake Passage (from Ledwell et al. 2011) with a numerical simulation. The estimate is computed from a point tracer release as part of the Diapycnal and Isopycnal Mixing Experiment in the Southern Ocean (DIMES). We find that the observational diffusivity estimate of about 500 m²/s at 1500 m depth is close to that computed in a data-constrained, 1/20th of a degree simulation of the Drake Passage region. This tracer estimate also agrees with Lagrangian float calculations in the model. The role of mean flow suppression of eddy diffusivity at shallower depths will also be discussed.
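One standard way such a diffusivity is diagnosed, from either a spreading tracer patch or float trajectories, is from the growth rate of the cross-stream second moment of positions; a minimal sketch follows, with synthetic random-walk "floats" standing in for the DIMES data.

```python
import numpy as np

def eddy_diffusivity(times, y_snapshots):
    """K ~ 0.5 * d<(y - <y>)^2>/dt, estimated by a linear fit to the
    cross-stream variance of particle (or tracer) positions over time."""
    var_y = np.array([np.var(y) for y in y_snapshots])
    slope = np.polyfit(times, var_y, 1)[0]
    return 0.5 * slope

# Synthetic example: random-walk floats with K_true = 500 m^2/s over one year
rng = np.random.default_rng(2)
dt, n_steps, n_floats, K_true = 3600.0, 24 * 365, 500, 500.0
y = np.zeros(n_floats)
times, snapshots = [], []
for k in range(n_steps):
    y += np.sqrt(2.0 * K_true * dt) * rng.standard_normal(n_floats)
    if (k + 1) % 24 == 0:                         # record one snapshot per day
        times.append((k + 1) * dt)
        snapshots.append(y.copy())

print("estimated K ~ %.0f m^2/s" % eddy_diffusivity(np.array(times), snapshots))
```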
Numerical modeling of landslides and generated seismic waves: The Bingham Canyon Mine landslides
NASA Astrophysics Data System (ADS)
Miallot, H.; Mangeney, A.; Capdeville, Y.; Hibert, C.
2016-12-01
Landslides are important natural hazards and key erosion processes. They create long-period surface waves that can be recorded by regional and global seismic networks. The seismic signals are generated by the acceleration/deceleration of the mass sliding over the topography. They constitute a unique and powerful tool to detect, characterize and quantify landslide dynamics. We investigate here the processes at work during the two massive landslides that struck the Bingham Canyon Mine on 10 April 2013. We carry out a combined analysis of the generated seismic signals and of the landslide processes computed with 3D modeling on a complex topography. Forces computed by broadband seismic waveform inversion are used to constrain the study, in particular the source force and the bulk dynamics. The source time functions are obtained with a 3D model (Shaltop) in which rheological parameters can be adjusted. We first investigate the influence of the initial shape of the sliding mass, which strongly affects the whole landslide dynamics. We also find that the initial shape of the source mass of the first landslide constrains the source mass of the second landslide quite well. We then investigate the effect of a rheological parameter, the friction angle, which strongly influences the computed seismic source function. We test several friction laws, such as the Coulomb friction law and a velocity-weakening friction law. Our results show that how well the force waveform fits the observed data is highly variable depending on these different choices.
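The specific laws tested in the study are not reproduced here; as an illustration, the sketch below contrasts a constant Coulomb friction coefficient with one commonly used velocity-weakening form in which friction decays from a static toward a dynamic value as the sliding speed grows. All coefficients are hypothetical.

```python
import numpy as np

def coulomb_friction(speed, mu=0.6):
    """Constant Coulomb friction coefficient, independent of sliding speed."""
    return np.full_like(np.asarray(speed, dtype=float), mu)

def velocity_weakening_friction(speed, mu_static=0.6, mu_dynamic=0.2, u_w=5.0):
    """Friction decreasing with sliding speed, from mu_static at rest toward
    mu_dynamic at high speed; u_w sets the weakening velocity scale (m/s)."""
    speed = np.asarray(speed, dtype=float)
    return mu_dynamic + (mu_static - mu_dynamic) / (1.0 + speed / u_w)

for u in (0.1, 1.0, 10.0, 50.0):
    print("U = %5.1f m/s  Coulomb: %.2f  velocity-weakening: %.2f"
          % (u, coulomb_friction(u), velocity_weakening_friction(u)))
```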
A constrained reconstruction technique of hyperelasticity parameters for breast cancer assessment
NASA Astrophysics Data System (ADS)
Mehrabian, Hatef; Campbell, Gordon; Samani, Abbas
2010-12-01
In breast elastography, breast tissue usually undergoes large compression resulting in significant geometric and structural changes. This implies that breast elastography is associated with tissue nonlinear behavior. In this study, an elastography technique is presented and an inverse problem formulation is proposed to reconstruct parameters characterizing tissue hyperelasticity. Such parameters can potentially be used for tumor classification. This technique can also have other important clinical applications such as measuring normal tissue hyperelastic parameters in vivo. Such parameters are essential in planning and conducting computer-aided interventional procedures. The proposed parameter reconstruction technique uses a constrained iterative inversion; it can be viewed as an inverse problem. To solve this problem, we used a nonlinear finite element model corresponding to its forward problem. In this research, we applied the Veronda-Westmann, Yeoh and polynomial models to describe tissue hyperelasticity. To validate the proposed technique, we conducted studies involving numerical and tissue-mimicking phantoms. The numerical phantom consisted of a hemisphere connected to a cylinder, while the tissue-mimicking phantom was constructed from polyvinyl alcohol with freeze-thaw cycles, which exhibits nonlinear mechanical behavior. Both phantoms consisted of three types of soft tissues mimicking adipose tissue, fibroglandular tissue and a tumor. The results of the simulations and experiments show the feasibility of accurate reconstruction of tumor tissue hyperelastic parameters using the proposed method. In the numerical phantom, all hyperelastic parameters corresponding to the three models were reconstructed with less than 2% error. With the tissue-mimicking phantom, we were able to reconstruct the ratios of the hyperelastic parameters reasonably accurately. Compared to the uniaxial test results, the average errors of the reconstructed parameter ratios of the inclusion to the middle and external layers were 13% and 9.6%, respectively. Given that the parameter ratios of abnormal tissues to normal ones range from three times to more than ten times, this accuracy is sufficient for tumor classification.
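The study reconstructs parameters through a nonlinear finite element forward model and a constrained iterative inversion; as a much simpler illustration of fitting hyperelastic parameters, the sketch below performs a bound-constrained least-squares fit of Yeoh coefficients to a synthetic uniaxial stress-stretch curve (incompressibility assumed; all numbers hypothetical).

```python
import numpy as np
from scipy.optimize import least_squares

def yeoh_uniaxial_stress(stretch, c1, c2, c3):
    """Cauchy stress for incompressible uniaxial tension with a Yeoh strain
    energy W = c1*(I1-3) + c2*(I1-3)^2 + c3*(I1-3)^3."""
    I1 = stretch**2 + 2.0 / stretch
    dWdI1 = c1 + 2.0 * c2 * (I1 - 3.0) + 3.0 * c3 * (I1 - 3.0) ** 2
    return 2.0 * (stretch**2 - 1.0 / stretch) * dWdI1

# Synthetic "measured" stress-stretch data (kPa) with a little noise
stretch = np.linspace(1.0, 1.6, 25)
rng = np.random.default_rng(3)
stress_obs = yeoh_uniaxial_stress(stretch, 8.0, 2.0, 0.5) + 0.2 * rng.standard_normal(stretch.size)

def residual(c):
    return yeoh_uniaxial_stress(stretch, *c) - stress_obs

# Constrain the coefficients to be non-negative, as a simple physical bound
fit = least_squares(residual, x0=[1.0, 1.0, 1.0], bounds=(0.0, np.inf))
print("recovered Yeoh parameters:", fit.x)
```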
NASA Astrophysics Data System (ADS)
Shao, H.; Huang, Y.; Kolditz, O.
2015-12-01
Multiphase flow problems are numerically difficult to solve, as they often involve nonlinear phase transition phenomena. A conventional technique is to introduce complementarity constraints whereby fluid properties such as liquid saturations are confined within a physically reasonable range. Based on such constraints, the mathematical model can be reformulated into a system of nonlinear partial differential equations coupled with variational inequalities, which can then be handled numerically by optimization algorithms. In this work, two different approaches utilizing the complementarity constraints, based on the persistent primary variables formulation [4], are implemented and investigated. The first approach, proposed by Marchand et al. [1], uses "local complementarity constraints", i.e. it couples the constraints with the local constitutive equations. The second approach [2, 3], namely the "global complementarity constraints", applies the constraints globally together with the mass conservation equation. We discuss how these two approaches are applied to solve non-isothermal compositional multiphase flow problems with phase change phenomena. Several benchmarks are presented to investigate the overall numerical performance of the different approaches, and their advantages and disadvantages are discussed. References: [1] E. Marchand, T. Mueller and P. Knabner. Fully coupled generalized hybrid-mixed finite element approximation of two-phase two-component flow in porous media. Part I: formulation and properties of the mathematical model. Computational Geosciences 17(2): 431-442 (2013). [2] A. Lauser, C. Hager, R. Helmig, B. Wohlmuth. A new approach for phase transitions in miscible multi-phase flow in porous media. Adv. Water Resour. 34 (2011), 957-966. [3] J. Jaffré and A. Sboui. Henry's Law and gas phase disappearance. Transp. Porous Media 82 (2010), 521-526. [4] A. Bourgeat, M. Jurak and F. Smaï. Two-phase partially miscible flow and transport modeling in porous media: application to gas migration in a nuclear waste repository. Comput. Geosci. 13(1) (2009), 29-42.
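The models discussed above are far richer, but the core idea of a complementarity constraint (a >= 0, b >= 0, a*b = 0, written compactly as min(a, b) = 0) can be shown on a toy scalar phase-appearance problem solved with a semismooth Newton method; all quantities below are hypothetical.

```python
import numpy as np

RHO_G, X_MAX, M_TOT = 2.0, 1.0, 1.4   # hypothetical gas density, solubility limit, total mass

def residual(v):
    S, x = v
    F1 = x * (1.0 - S) + RHO_G * S - M_TOT     # toy mass balance
    F2 = min(S, X_MAX - x)                     # complementarity: gas exists only at saturation
    return np.array([F1, F2])

def jacobian(v):
    S, x = v
    J = np.zeros((2, 2))
    J[0] = [RHO_G - x, 1.0 - S]
    # generalized derivative of min(S, X_MAX - x): pick the currently active branch
    if S <= X_MAX - x:
        J[1] = [1.0, 0.0]
    else:
        J[1] = [0.0, -1.0]
    return J

v = np.array([0.0, 0.5])                       # start with no gas phase, undersaturated liquid
for _ in range(30):
    v = v - np.linalg.solve(jacobian(v), residual(v))
print("gas saturation S = %.3f, dissolved fraction x = %.3f" % tuple(v))
```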
NASA Astrophysics Data System (ADS)
Zhu, Hejun
2018-04-01
Recently, seismologists have observed increasing seismicity in North Texas and Oklahoma. Based on seismic observations and other geophysical measurements, numerous studies have suggested links between the increasing seismicity and wastewater injection during unconventional oil and gas exploration. To better monitor seismic events and investigate their triggering mechanisms, we need an accurate 3D crustal wavespeed model for the study region. Considering the uneven distribution of earthquakes in this area, seismic tomography with local earthquake records has difficulty achieving even illumination. To overcome this limitation, in this study, ambient noise cross-correlation functions are used to constrain subsurface variations in wavespeeds. I use adjoint tomography to iteratively fit frequency-dependent phase differences between observed and predicted band-limited Green's functions. The spectral-element method is used to numerically calculate the band-limited Green's functions, and the adjoint method is used to calculate misfit gradients with respect to wavespeeds. Twenty-five preconditioned conjugate gradient iterations are used to update model parameters and minimize data misfits. Features in the new crustal model TO25 correlate well with geological provinces in the study region, including the Llano uplift, the Anadarko basin and the Ouachita orogenic front. In addition, the seismic results correlate relatively well with gravity and magnetic observations. This new crustal model can be used to better constrain earthquake source parameters in North Texas and Oklahoma, such as epicenter locations and moment tensor solutions, which are important for investigating triggering mechanisms between these induced earthquakes and unconventional oil and gas exploration activities.
Big data integration for regional hydrostratigraphic mapping
NASA Astrophysics Data System (ADS)
Friedel, M. J.
2013-12-01
Numerical models provide a way to evaluate groundwater systems, but determining the hydrostratigraphic units (HSUs) used in devising these models remains subjective, nonunique, and uncertain. A novel geophysical-hydrogeologic data integration scheme is proposed to constrain the estimation of continuous HSUs. First, machine-learning and multivariate statistical techniques are used to simultaneously integrate borehole hydrogeologic (lithology, hydraulic conductivity, aqueous field parameters, dissolved constituents) and geophysical (gamma, spontaneous potential, and resistivity) measurements. Second, airborne electromagnetic measurements are numerically inverted to obtain subsurface resistivity structure at randomly selected locations. Third, the machine-learning algorithm is trained using the borehole hydrostratigraphic units and inverted airborne resistivity profiles. The trained machine-learning algorithm is then used to estimate HSUs at independent resistivity profile locations. We demonstrate the efficacy of the proposed approach by mapping the hydrostratigraphy of a heterogeneous surficial aquifer in northwestern Nebraska.
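A minimal sketch of the third step, training a supervised classifier on borehole HSU labels and then predicting HSUs along independent resistivity profiles, might look as follows. The actual study uses machine-learning and multivariate statistical techniques that are not reproduced here; this sketch substitutes an off-the-shelf random forest, and all arrays are synthetic placeholders that only demonstrate the data flow.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins: each row is a depth sample described by inverted
# resistivity (ohm-m), depth (m) and a gamma-log value; labels are HSU codes.
n_train = 500
X_train = np.column_stack([
    rng.lognormal(3.0, 0.5, n_train),   # resistivity
    rng.uniform(0.0, 120.0, n_train),   # depth
    rng.normal(80.0, 15.0, n_train),    # gamma
])
y_train = rng.integers(0, 4, n_train)   # four hypothetical HSUs

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Predict HSUs along an independent airborne-EM resistivity profile.
X_profile = np.column_stack([
    rng.lognormal(3.0, 0.5, 50),
    np.linspace(0.0, 120.0, 50),
    rng.normal(80.0, 15.0, 50),
])
print(clf.predict(X_profile)[:10])
```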
Numerical cell model investigating cellular carbon fluxes in Emiliania huxleyi.
Holtz, Lena-Maria; Wolf-Gladrow, Dieter; Thoms, Silke
2015-01-07
Coccolithophores play a crucial role in the marine carbon cycle and thus it is interesting to know how they will respond to climate change. After several decades of research, the interplay between intracellular processes and the marine carbonate system is still not well understood. On the basis of experimental findings given in the literature, a numerical cell model is developed that describes inorganic carbon fluxes between seawater and the intracellular sites of calcite precipitation and photosynthetic carbon fixation. The implemented cell model consists of four compartments, for each of which the carbonate system is resolved individually. The four compartments are connected to each other via H(+), CO2, and HCO3(-) fluxes across the compartment-confining membranes. For CO2 accumulation around RubisCO, an energy-efficient carbon concentrating mechanism is proposed that relies on diffusive CO2 uptake. At low external CO2 concentrations and high light intensities, CO2 diffusion does not suffice to cover the carbon demand of photosynthesis and an additional uptake of external HCO3(-) becomes essential. The model is constrained by data of Emiliania huxleyi, the numerically most abundant coccolithophore species in the present-day ocean. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
Revisiting the direct detection of dark matter in simplified models
NASA Astrophysics Data System (ADS)
Li, Tong
2018-07-01
In this work we numerically re-examine the loop-induced WIMP-nucleon scattering cross section for simplified dark matter models and the constraints set by the latest direct detection experiment. We consider a fermion, scalar or vector dark matter component from five simplified models with leptophobic spin-0 mediators coupled only to Standard Model quarks and dark matter particles. The tree-level WIMP-nucleon cross sections in these models are all momentum-suppressed. We calculate the non-suppressed spin-independent WIMP-nucleon cross sections from loop diagrams and investigate the regions of dark matter mass and mediator mass constrained by XENON1T. The constraints from indirect detection and collider searches are also discussed.
Semismooth Newton method for gradient constrained minimization problem
NASA Astrophysics Data System (ADS)
Anyyeva, Serbiniyaz; Kunisch, Karl
2012-08-01
In this paper we treat a gradient-constrained minimization problem, a particular case of which is the elasto-plastic torsion problem. In order to obtain a numerical approximation to the solution, we have developed an algorithm in an infinite-dimensional space framework using the concept of generalized (Newton) differentiation. Regularization is performed in order to approximate the problem by an unconstrained minimization problem and to make the pointwise maximum function Newton differentiable. Using the semismooth Newton method, a continuation method is developed in function space. For the numerical implementation, the variational equations at the Newton steps are discretized using the finite element method.
NASA Astrophysics Data System (ADS)
Arai, Shun; Nishizawa, Atsushi
2018-05-01
Gravitational waves (GW) are generally affected by modifications of the gravity theory during propagation over cosmological distances. We numerically perform a quantitative analysis of Horndeski theory at cosmological scales to constrain it with GW observations in a model-independent way. We formulate a parametrization for a numerical simulation based on the Monte Carlo method and obtain a classification of the models that agree with cosmic accelerating expansion within the observational errors of the Hubble parameter. As a result, we find that a large group of models in Horndeski theory that mimic the cosmic expansion of the ΛCDM model can be excluded by the simultaneous detection of a GW and its electromagnetic transient counterpart. Based on our result and the latest detection of GW170817 and GRB170817A, we conclude that the subclass of Horndeski theory including the arbitrary functions G4 and G5 can hardly explain cosmic accelerating expansion without fine-tuning.
Using natural laboratories and modeling to decipher lithospheric rheology
NASA Astrophysics Data System (ADS)
Sobolev, Stephan
2013-04-01
Rheology is obviously important for geodynamic modeling, but at the same time rheological parameters appear to be the least constrained. Laboratory experiments give rather large ranges of rheological parameters and their scaling to nature is not entirely clear. Therefore finding rheological proxies in nature is very important. One way to do that is to find appropriate values of rheological parameters by fitting models to the lithospheric structure in highly deformed regions where lithospheric structure and geologic evolution are well constrained. Here I will present two examples of such studies at plate boundaries. One case is the Dead Sea Transform (DST), which comprises a boundary between the African and Arabian plates. During the last 15-20 Myr more than 100 km of left-lateral transform displacement has been accumulated on the DST, and the roughly 10 km thick Dead Sea Basin (DSB) was formed in the central part of the DST. The lithospheric structure and geological evolution of the DST and DSB are rather well constrained by a number of interdisciplinary projects, including the DESERT and DESIRE projects led by the GFZ Potsdam. Detailed observations reveal an apparently contradictory picture. On the one hand, widespread igneous activity, especially in the last 5 Myr, a thin (60-80 km) lithosphere constrained from seismic data, and the absence of seismicity below the Moho seem quite natural for this tectonically active plate boundary. However, a surface heat flow of less than 50-60 mW/m2 and deep seismicity in the lower crust (deeper than 20 km) reported for this region are apparently inconsistent with the tectonic settings specific to an active continental plate boundary and with the crustal structure of the DSB. To address these inconsistencies, which comprise what I call the "DST heat-flow paradox", a 3D numerical thermo-mechanical model was developed operating with a non-linear elasto-visco-plastic rheology of the lithosphere. Results of the numerical experiments show that the entire set of observations for the DSB can be explained within the classical pull-apart model assuming that (1) the lithosphere was thermally eroded at about 20 Ma, just before the active faulting at the DST, and (2) the uppermost mantle in the region has a relatively weak rheology consistent with the experimental data for wet olivine or pyroxenite. Another example is modeling of the collision of India and Eurasia in Tibet. Our recent thermo-mechanical model (see abstract by Tympel et al.) reproduces well many important features of this orogen, including the observed convergence and the distance of underthrusting of the Indian lithosphere beneath Tibet, if the long-term friction at the India-Eurasia interface is about 0.04-0.05, which is typical for oceanic subduction zones but unexpectedly low for a continental setting.
NASA Technical Reports Server (NTRS)
Binzel, Richard P.
1992-01-01
The present evaluation of the use of new observational methods for exploring solar system evolutionary processes gives attention to illustrative cases from the constraining of near-earth asteroid sources and the discovery of main-belt asteroid fragments which indicate Vesta to be a source of basaltic achondrite meteorites. The coupling of observational constraints with numerical models clarifies cratering and collisional evolution for both main-belt and Trojan asteroids.
2015-09-01
scattering albedo (SSA) according to Hapke theory, assuming bidirectional scattering at nadir look angles, and uses a constrained linear model on the computed... following Hapke (1993) and Mustard and Pieters (1987), assuming the reflectance spectra are bidirectional. SSA spectra were also generated... from AVIRIS data collected during a JPL/USGS campaign in response to the Deepwater Horizon (DWH) oil spill incident. Out of the numerous
The Soil Carbon Paradigm Shift: Triangulating Theories, Measurements, and Models
NASA Astrophysics Data System (ADS)
Blankinship, J. C.; Crow, S. E.; Schimel, J.; Sierra, C. A.; Schaedel, C.; Plante, A. F.; Thompson, A.; Berhe, A. A.; Druhan, J. L.; Heckman, K. A.; Keiluweit, M.; Lawrence, C. R.; Marin-Spiotta, E.; Rasmussen, C.; Wagai, R.; Wieder, W. R.
2016-12-01
Predicting global responses of soil carbon (C) to environmental change remains confounded by a number of paradigms that have emerged from separate approaches. A prevailing paradigm in biogeochemistry interprets soil C as discrete pools based on estimated or measured turnover times (e.g., CENTURY model). An alternative is emerging that envisions the stabilization of soil C in tension between decomposition by microbial agents and protection by physical and chemical mechanisms. We propose an approach to bridge the gap between different paradigms, and to improve soil C forecasting by conceptualizing each paradigm as a triangle composed of three nodes: theory, analytical measurement, and numerical model. Paradigms tend to emerge from what can either be represented in models or measured using analytical instruments. But they gain power when all three elements are integrated in a balanced trinity. Our goal was to compare how theory, measurement, and model fit together in our understanding of soil C to learn from past successes, evaluate the strengths and weaknesses of current paradigms, and guide development of new understanding. We used a case-study approach to analyze each corner of the paradigm-triangle: i) paradigms that have strong theory but are constrained by weak linkages with measurements or models, ii) paradigms with robust models that have weak linkages with theory or measurements, and iii) paradigms with many measurements but little theoretical support or ability to be parameterized in numerical models. We conclude that established models like CENTURY dominate because theory and measurements that underlie the model form strong linkages that previously created a balanced triangle. Evolving paradigms based on physical protection and microbial agency are still struggling to gain traction because the theory is challenging to represent in models. The explicit examination of the strengths of emerging paradigms can, therefore, help refine and accelerate our ability to constrain projections of soil C dynamics.
Gravitational Wave Signals from the First Massive Black Hole Seeds
NASA Astrophysics Data System (ADS)
Hartwig, Tilman; Agarwal, Bhaskar; Regan, John A.
2018-05-01
Recent numerical simulations reveal that the isothermal collapse of pristine gas in atomic cooling haloes may result in stellar binaries of supermassive stars with M* ≳ 10^4 M⊙. For the first time, we compute the in-situ merger rate for such massive black hole remnants by combining their abundance and multiplicity estimates. For black holes with initial masses in the range 10^4-10^6 M⊙ merging at redshifts z ≳ 15, our optimistic model predicts that LISA should be able to detect 0.6 mergers per year. This rate of detection can be attributed, without confusion, to the in-situ mergers of seeds from the collapse of very massive stars. Equally, in the case where LISA observes no mergers from heavy seeds at z ≳ 15, we can constrain the combined number density, multiplicity, and coalescence times of these high-redshift systems. This letter proposes gravitational wave signatures as a means to constrain theoretical models and processes that govern the abundance of massive black hole seeds in the early Universe.
Cournot games with network effects for electric power markets
NASA Astrophysics Data System (ADS)
Spezia, Carl John
The electric utility industry is moving from regulated monopolies with protected service areas to an open market with many wholesale suppliers competing for consumer load. This market is typically modeled as a Cournot oligopoly game in which suppliers compete by selecting profit-maximizing quantities. The classical Cournot model can produce multiple solutions when the problem includes typical power system constraints. This work presents a mathematical programming formulation of the oligopoly that produces unique solutions when constraints limit the supplier outputs. The formulation casts the game as a supply maximization problem with power system physical limits and supplier incremental profit functions as constraints. The formulation gives Cournot solutions identical to those of other commonly used algorithms when suppliers operate within the constraints. Numerical examples demonstrate the feasibility of the theory. The results show that the maximization formulation gives system operators more transmission capacity when compared to the actions of suppliers in a classical constrained Cournot game. The results also show that the profitability of suppliers in constrained networks depends on their location relative to the consumers' load concentration.
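For orientation, the capacity-constrained Cournot setting can be illustrated with a small best-response iteration for a linear inverse demand curve: each supplier picks its profit-maximizing quantity given the others' outputs and is then clipped to its capacity limit. This is a generic textbook computation with hypothetical cost and demand numbers, not the supply-maximization formulation developed in the dissertation.

```python
import numpy as np

def cournot_best_response(a, b, c, cap, iters=200):
    """Capacity-constrained Cournot equilibrium for inverse demand P = a - b*Q
    and constant marginal costs c, computed by iterating clipped best responses."""
    q = np.zeros(len(c))
    for _ in range(iters):
        for i in range(len(c)):
            others = q.sum() - q[i]
            q_star = (a - c[i] - b * others) / (2.0 * b)  # unconstrained best response
            q[i] = np.clip(q_star, 0.0, cap[i])           # enforce output limits
    return q

a, b = 100.0, 1.0                        # hypothetical demand intercept and slope
c = np.array([10.0, 15.0, 20.0])         # marginal costs of three suppliers
cap = np.array([40.0, 25.0, 10.0])       # capacity / transmission limits
q = cournot_best_response(a, b, c, cap)
print(q, "price =", a - b * q.sum())
```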
Escaray, Francisco J; Menendez, Ana B; Gárriz, Andrés; Pieckenstain, Fernando L; Estrella, María J; Castagno, Luis N; Carrasco, Pedro; Sanjuán, Juan; Ruiz, Oscar A
2012-01-01
The genus Lotus comprises around 100 annual and perennial species with worldwide distribution. The relevance of Lotus japonicus as a model plant has been recently demonstrated in numerous studies. In addition, some of the Lotus species show a great potential for adaptation to a number of abiotic stresses. Therefore, they are relevant components of grassland ecosystems in environmentally constrained areas of several South American countries and Australia, where they are used for livestock production. Also, the fact that the roots of these species form rhizobial and mycorrhizal associations makes the annual L. japonicus a suitable model plant for legumes, particularly in studies directed to recognize the mechanisms intervening in the tolerance to abiotic factors in the field, where these interactions occur. These properties justify the increased utilization of some Lotus species as a strategy for dunes revegetation and reclamation of heavy metal-contaminated or burned soils in Europe. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Stability of micro-Cassie states on rough substrates
NASA Astrophysics Data System (ADS)
Guo, Zhenjiang; Liu, Yawei; Lohse, Detlef; Zhang, Xuehua; Zhang, Xianren
2015-06-01
We numerically study different forms of nanoscale gaseous domains on a model for rough surfaces. Our calculations based on the constrained lattice density functional theory show that the inter-connectivity of pores surrounded by neighboring nanoposts, which model the surface roughness, leads to the formation of stable microscopic Cassie states. We investigate the dependence of the stability of the micro-Cassie states on substrate roughness, fluid-solid interaction, and chemical potential and then address the differences between the origin of the micro-Cassie states and that of surface nanobubbles within similar models. Finally, we show that the micro-Cassie states share some features with experimentally observed micropancakes at solid-water interfaces.
Stellar nucleosynthesis and chemical evolution of the solar neighborhood
NASA Technical Reports Server (NTRS)
Clayton, Donald D.
1988-01-01
Current theoretical models of nucleosynthesis (N) in stars are reviewed, with an emphasis on their implications for Galactic chemical evolution. Topics addressed include the Galactic population II red giants and early N; N in the big bang; star formation, stellar evolution, and the ejection of thermonuclearly evolved debris; the chemical evolution of an idealized disk galaxy; analytical solutions for a closed-box model with continuous infall; and nuclear burning processes and yields. Consideration is given to shell N in massive stars, N related to degenerate cores, and the types of observational data used to constrain N models. Extensive diagrams, graphs, and tables of numerical data are provided.
Artificial bee colony algorithm for constrained possibilistic portfolio optimization problem
NASA Astrophysics Data System (ADS)
Chen, Wei
2015-07-01
In this paper, we discuss the portfolio optimization problem with real-world constraints under the assumption that the returns of risky assets are fuzzy numbers. A new possibilistic mean-semiabsolute deviation model is proposed, in which transaction costs, cardinality and quantity constraints are considered. Due to such constraints the proposed model becomes a mixed integer nonlinear programming problem and traditional optimization methods fail to find the optimal solution efficiently. Thus, a modified artificial bee colony (MABC) algorithm is developed to solve the corresponding optimization problem. Finally, a numerical example is given to illustrate the effectiveness of the proposed model and the corresponding algorithm.
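As a rough illustration of the approach, the sketch below runs a plain artificial bee colony (employed, onlooker and scout phases) on a toy cardinality-constrained portfolio with a mean-semiabsolute-deviation objective. It is not the modified ABC of the paper: the scenario returns, bounds and the simple repair operator (which only approximately enforces the quantity bounds after renormalization) are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: 250 scenario returns for 8 assets, cardinality limit K = 4.
n_assets, n_scen, K = 8, 250, 4
scen = rng.normal(0.0005, 0.01, (n_scen, n_assets))
mu = scen.mean(axis=0)

def repair(w, k=K, lo=0.05, hi=0.6):
    """Keep the k largest holdings, clip to quantity bounds, renormalize to sum to one
    (bounds are therefore only approximately enforced after renormalization)."""
    w = np.clip(w, 0.0, None)
    keep = np.argsort(w)[-k:]
    z = np.zeros_like(w)
    z[keep] = np.clip(w[keep], lo, hi)
    return z / z.sum()

def objective(w, risk_aversion=3.0):
    """Mean return minus semiabsolute deviation below the mean (to be maximized)."""
    port = scen @ w
    sad = np.mean(np.maximum(port.mean() - port, 0.0))
    return mu @ w - risk_aversion * sad

def abc_optimize(n_food=20, iters=300, limit=30):
    food = np.array([repair(rng.random(n_assets)) for _ in range(n_food)])
    fit = np.array([objective(w) for w in food])
    trials = np.zeros(n_food, dtype=int)
    for _ in range(iters):
        probs = fit - fit.min() + 1e-12
        probs /= probs.sum()
        for phase in range(2):                      # employed bees, then onlookers
            for i in range(n_food):
                j = i if phase == 0 else rng.choice(n_food, p=probs)
                partner = rng.integers(n_food)
                phi = rng.uniform(-1.0, 1.0, n_assets)
                cand = repair(food[j] + phi * (food[j] - food[partner]))
                f_cand = objective(cand)
                if f_cand > fit[j]:
                    food[j], fit[j], trials[j] = cand, f_cand, 0
                else:
                    trials[j] += 1
        for i in np.where(trials > limit)[0]:       # scout bees reset stale sources
            food[i] = repair(rng.random(n_assets))
            fit[i], trials[i] = objective(food[i]), 0
    best = int(fit.argmax())
    return food[best], fit[best]

w_best, f_best = abc_optimize()
print(np.round(w_best, 3), f_best)
```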
NASA Astrophysics Data System (ADS)
Tang, Peipei; Wang, Chengjing; Dai, Xiaoxia
2016-04-01
In this paper, we propose a majorized Newton-CG augmented Lagrangian-based finite element method for 3D elastic frictionless contact problems. In this scheme, we discretize the restoration problem via the finite element method and reformulate it as a constrained optimization problem. Then we apply the majorized Newton-CG augmented Lagrangian method to solve the optimization problem, which is very suitable for the ill-conditioned case. Numerical results demonstrate that the proposed method is a very efficient algorithm for various large-scale 3D restorations of geological models, especially for the restoration of geological models with complicated faults.
Constrained variability of modeled T:ET ratio across biomes
NASA Astrophysics Data System (ADS)
Fatichi, Simone; Pappas, Christoforos
2017-07-01
A large variability (35-90%) in the ratio of transpiration to total evapotranspiration (referred to here as T:ET) across biomes or even at the global scale has been documented by a number of studies carried out with different methodologies. Previous empirical results also suggest that T:ET does not covary with mean precipitation and has a positive dependence on leaf area index (LAI). Here we use a mechanistic ecohydrological model, with a refined process-based description of evaporation from the soil surface, to investigate the variability of T:ET across biomes. Numerical results reveal a more constrained range and higher mean of T:ET (70 ± 9%, mean ± standard deviation) when compared to observation-based estimates. T:ET is confirmed to be independent of mean precipitation, while it is found to be correlated with LAI seasonally but uncorrelated across multiple sites. Larger LAI increases evaporation from interception but diminishes ground evaporation, with the two effects largely compensating each other. These results offer mechanistic, model-based evidence to the ongoing research about the patterns of T:ET and the factors influencing its magnitude across biomes.
NASA Astrophysics Data System (ADS)
Bentley, M. J.; Hein, A. S.; Sugden, D. E.; Whitehouse, P. L.; Shanks, R.; Xu, S.; Freeman, S. P. H. T.
2017-02-01
The retreat history of the Antarctic Ice Sheet is important for understanding rapid deglaciation, as well as for constraining numerical ice sheet models and ice loading models required for glacial isostatic adjustment modelling. There is particular debate about the extent of grounded ice in the Weddell Sea embayment at the Last Glacial Maximum, and its subsequent deglacial history. Here we provide a new dataset of geomorphological observations and cosmogenic nuclide surface exposure ages of erratic samples that constrain the deglacial history of the Pensacola Mountains, adjacent to the present-day Foundation Ice Stream and Academy Glacier in the southern Weddell Sea embayment. We show there is evidence of at least two glaciations, the first of which was relatively old and warm-based, and a more recent cold-based glaciation. During the most recent glaciation ice thickened by at least 450 m in the Williams Hills and at least 380 m on Mt Bragg. Progressive thinning from these sites was well underway by 10 ka BP and ice reached present levels by 2.5 ka BP, a history broadly similar to the relatively modest thinning histories in the southern Ellsworth Mountains. The thinning history is consistent with, but does not mandate, a Late Holocene retreat of the grounding line to a smaller-than-present configuration, as has been recently hypothesized based on ice sheet and glacial isostatic modelling. The data also show that clasts with complex exposure histories are pervasive and that clast recycling is highly site-dependent. These new data provide constraints on a reconstruction of the retreat history of the formerly expanded Foundation Ice Stream, derived using a numerical flowband model.
3D-PTV around Operational Wind Turbines
NASA Astrophysics Data System (ADS)
Brownstein, Ian; Dabiri, John
2016-11-01
Laboratory studies and numerical simulations of wind turbines are typically constrained in how they can inform operational turbine behavior. Laboratory experiments are usually unable to match both pertinent parameters of full-scale wind turbines, the Reynolds number (Re) and the tip speed ratio, using scaled-down models. Additionally, numerical simulations of the flow around wind turbines are constrained by the large domain size and high Re that need to be simulated. When these simulations are performed, turbine geometry is typically simplified, with the result that flow structures near the rotor are not well resolved. In order to bypass these limitations, a quantitative flow visualization method was developed to take in situ measurements of the flow around wind turbines at the Field Laboratory for Optimized Wind Energy (FLOWE) in Lancaster, CA. The apparatus constructed was able to seed an approximately 9m x 9m x 5m volume in the wake of the turbine using artificial snow. Quantitative measurements were obtained by tracking the evolution of the artificial snow using a four-camera setup. The methodology for calibrating and collecting data, as well as preliminary results detailing the flow around a 2kW vertical-axis wind turbine (VAWT), will be presented.
Tidal disruption of dissipative planetesimals
NASA Technical Reports Server (NTRS)
Mizuno, H.; Boss, A. P.
1985-01-01
A self-consistent numerical model is developed for the tidal disruption of a solid planetesimal. The planetesimal is treated as a highly viscous, slightly compressible fluid whose disturbed parts are an inviscid, pressureless fluid undergoing distortion and disruption. The distortions were constrained to being symmetrical above and below the equatorial plane. The tidal potential is expanded in terms of Legendre polynomials, which eliminates the center of mass acceleration effects, permitting definition of equations of motion in a noninertial frame. Consideration is given to viscous dissipation and to characteristics of the solid-atmosphere boundary. The model is applied to sample cases in one, two and three dimensions.
Pigache, Francois; Messine, Frédéric; Nogarede, Bertrand
2007-07-01
This paper deals with a deterministic and rational way to design piezoelectric transformers in radial mode. The proposed approach is based on the study of the inverse problem of design and on its reformulation as a mixed constrained global optimization problem. The methodology relies on the association of the analytical models for describing the corresponding optimization problem and on an exact global optimization software, named IBBA and developed by the second author to solve it. Numerical experiments are presented and compared in order to validate the proposed approach.
Superiorization-based multi-energy CT image reconstruction
Yang, Q; Cong, W; Wang, G
2017-01-01
The recently-developed superiorization approach is efficient and robust for solving various constrained optimization problems. This methodology can be applied to multi-energy CT image reconstruction with the regularization in terms of the prior rank, intensity and sparsity model (PRISM). In this paper, we propose a superiorized version of the simultaneous algebraic reconstruction technique (SART) based on the PRISM model. Then, we compare the proposed superiorized algorithm with the Split-Bregman algorithm in numerical experiments. The results show that both the Superiorized-SART and the Split-Bregman algorithms generate good results with weak noise and reduced artefacts. PMID:28983142
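The superiorization idea, interleaving small, shrinking perturbations that decrease a secondary objective (here total variation) with the ordinary SART step, can be sketched on a tiny synthetic system as below. This is a generic single-energy toy with a random system matrix, not the PRISM-regularized multi-energy reconstruction of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny synthetic problem: piecewise-constant 1D "image", random projection matrix.
n, m = 64, 48
x_true = np.repeat([0.0, 1.0, 0.4, 0.9], n // 4)
A = rng.random((m, n))
b = A @ x_true + rng.normal(0.0, 0.01, m)

row_sums = A.sum(axis=1)                 # SART row normalization
col_sums = A.sum(axis=0)                 # SART column normalization

def sart_step(x, lam=1.0):
    return x + lam * (A.T @ ((b - A @ x) / row_sums)) / col_sums

def tv_subgradient(x):
    d = np.sign(np.diff(x))              # subgradient of sum(|x[i+1] - x[i]|)
    g = np.zeros_like(x)
    g[:-1] -= d
    g[1:] += d
    return g

x, beta = np.zeros(n), 0.1
for k in range(200):
    g = tv_subgradient(x)
    if np.linalg.norm(g) > 0:
        x = x - beta * g / np.linalg.norm(g)   # superiorization: TV-lowering perturbation
    beta *= 0.995                              # geometrically shrinking (summable) steps
    x = sart_step(x)                           # ordinary SART iteration

print(np.abs(np.diff(x)).sum(), np.linalg.norm(A @ x - b))
```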
A numerical study of the effects of wind tunnel wall proximity on an airfoil model
NASA Technical Reports Server (NTRS)
Potsdam, Mark; Roberts, Leonard
1990-01-01
A procedure was developed for modeling wind tunnel flows using computational fluid dynamics. Using this method, a numerical study was undertaken to explore the effects of solid wind tunnel wall proximity and Reynolds number on a two-dimensional airfoil model at low speed. Wind tunnel walls are located at varying wind tunnel height to airfoil chord ratios and the results are compared with freestream flow in the absence of wind tunnel walls. Discrepancies between the constrained and unconstrained flows can be attributed to the presence of the walls. Results are for a Mach number of 0.25 at angles of attack through stall. A typical wind tunnel Reynolds number of 1,200,000 and a full-scale flight Reynolds number of 6,000,000 were investigated. At this low Mach number, wind tunnel wall corrections to Mach number and angle of attack are supported. Reynolds number effects are seen to be a consideration in wind tunnel testing and wall interference correction methods. An unstructured grid Navier-Stokes code is used with a Baldwin-Lomax turbulence model. The numerical method is described since unstructured flow solvers present several difficulties and fundamental differences from structured grid codes, especially in the areas of turbulence modeling and grid generation.
Numerical Modeling in Geodynamics: Success, Failure and Perspective
NASA Astrophysics Data System (ADS)
Ismail-Zadeh, A.
2005-12-01
Real success in numerical modeling of the dynamics of the Earth can be achieved only by multidisciplinary research teams of experts in geodynamics, applied and pure mathematics, and computer science. Success in numerical modeling rests on the following basic, but simple, rules. (i) People need simplicity most, but they understand intricacies best (B. Pasternak, writer). Start from a simple numerical model, which describes basic physical laws by a set of mathematical equations, and then move to a complex model. Never start from a complex model, because you cannot understand the contribution of each term of the equations to the modeled geophysical phenomenon. (ii) Study the numerical methods behind your computer code. Otherwise it becomes difficult to distinguish true from erroneous solutions to the geodynamic problem, especially when your problem is complex enough. (iii) Test your model against analytical and asymptotic solutions and simple 2D and 3D model examples. Develop benchmark analyses of different numerical codes and compare numerical results with laboratory experiments. Remember that the numerical tool you employ is not perfect, and there are small bugs in every computer code. Therefore testing is the most important part of your numerical modeling. (iv) Prove (if possible) or learn relevant statements concerning the existence, uniqueness and stability of the solution to the mathematical and discrete problems. Otherwise you may solve an improperly posed problem, and the results of the modeling will be far from the true solution of your model problem. (v) Try to analyze numerical models of a geological phenomenon using as few tuning model variables as possible. Even two tuning variables give enough freedom to constrain your model well with respect to observations. Data fitting is sometimes quite attractive and can take you far from the principal aim of your numerical modeling: to understand geophysical phenomena. (vi) If the number of tuning model variables is greater than two, test carefully the effect of each of the variables on the modeled phenomenon. Remember: With four exponents I can fit an elephant (E. Fermi, physicist). (vii) Make your numerical model as accurate as possible, but never make great accuracy the aim itself: Undue precision of computations is the first symptom of mathematical illiteracy (N. Krylov, mathematician). How complex should a numerical model be? A model which images every detail of reality is as useful as a map of scale 1:1 (J. Robinson, economist). This message is quite important for geoscientists who study numerical models of complex geodynamical processes. I believe that geoscientists will never create a model of the real Earth dynamics, but we should try to model the dynamics in such a way as to simulate the basic geophysical processes and phenomena. Does a particular model have predictive power? Every numerical model has predictive power, otherwise the model is useless. The predictability of the model varies with its complexity. Remember that a solution of the numerical model is an approximate solution of the equations, which have been chosen in the belief that they describe the dynamic processes of the Earth. Hence a numerical model predicts the dynamics of the Earth only as well as the mathematical equations describe this dynamics. What methodological advances are still needed for testable geodynamic modeling? Inverse (time-reverse) numerical modeling and data assimilation are new methodologies in geodynamics. Inverse modeling makes it possible to test geodynamic models forward in time using initial conditions restored from present-day observations instead of unknown conditions.
NASA Astrophysics Data System (ADS)
Kajiwara, Yoshiyuki; Shiraishi, Junya; Kobayashi, Shoei; Yamagami, Tamotsu
2009-03-01
A digital phase-locked loop (PLL) with a linearly constrained adaptive filter (LCAF) has been studied for higher-linear-density optical discs. The LCAF is implemented before an interpolated timing recovery (ITR) PLL unit in order to improve the quality of the phase error calculation by using an adaptively equalized partial response (PR) signal. The coefficient update of the asynchronously sampled adaptive FIR filter with a least-mean-square (LMS) algorithm is constrained by a projection matrix in order to suppress the phase shift of the tap coefficients of the adaptive filter. We have developed projection matrices that are suitable for Blu-ray disc (BD) drive systems by numerical simulation. Results have shown the properties of the projection matrices. We then designed the read channel system of the ITR PLL with an LCAF model on an FPGA board for experiments. Results have shown that the LCAF improves the tilt margins of 30 gigabyte (GB) recordable BD (BD-R) and 33 GB BD read-only memory (BD-ROM) with sufficient LMS adaptation stability.
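The constrained coefficient update described above is in the spirit of the classic linearly constrained LMS (Frost-type) recursion, where the gradient step is projected so that the taps always satisfy C^T w = g. The toy below adapts an equalizer under a simple sum-to-one constraint; the constraint matrix, channel and step size are illustrative assumptions, not the projection matrices designed for the BD read channel.

```python
import numpy as np

rng = np.random.default_rng(0)

n_taps, n_samples, mu = 9, 5000, 0.005
channel = np.array([0.2, 0.9, 0.3])            # toy dispersive channel
symbols = rng.choice([-1.0, 1.0], n_samples)   # random binary data
received = np.convolve(symbols, channel)[:n_samples]

# Linear constraint C^T w = g; here a toy "taps sum to one" constraint.
C = np.ones((n_taps, 1))
g = np.array([1.0])
P = np.eye(n_taps) - C @ np.linalg.solve(C.T @ C, C.T)   # projector onto null(C^T)
F = (C @ np.linalg.solve(C.T @ C, g)).ravel()            # minimum-norm feasible taps

w, delay = F.copy(), n_taps // 2
for k in range(n_taps, n_samples):
    x = received[k - n_taps + 1:k + 1][::-1]   # regressor, most recent sample first
    e = symbols[k - delay] - w @ x             # error against the delayed symbol
    w = P @ (w + mu * e * x) + F               # constrained (Frost-type) LMS update

print(np.round(w, 3), "C^T w =", C.T @ w)      # the constraint holds at every step
```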
NASA Astrophysics Data System (ADS)
Lu, Jianbo; Li, Dewei; Xi, Yugeng
2013-07-01
This article is concerned with probability-based constrained model predictive control (MPC) for systems with both structured uncertainties and time delays, where a random input delay and multiple fixed state delays are included. The process of the input delay is governed by a discrete-time finite-state Markov chain. By invoking an appropriate augmented state, the system is transformed into a standard structured uncertain time-delay Markov jump linear system (MJLS). For the resulting system, a multi-step feedback control law is utilised to minimise an upper bound on the expected value of the performance objective. The proposed design has been proved to stabilise the closed-loop system in the mean square sense and to guarantee constraints on control inputs and system states. Finally, a numerical example is given to illustrate the proposed results.
NASA Astrophysics Data System (ADS)
Liu, Qiang; Chattopadhyay, Aditi
2000-06-01
Aeromechanical stability plays a critical role in helicopter design and lead-lag damping is crucial to this design. In this paper, the use of segmented constrained damping layer (SCL) treatment and composite tailoring is investigated for improved rotor aeromechanical stability using formal optimization technique. The principal load-carrying member in the rotor blade is represented by a composite box beam, of arbitrary thickness, with surface bonded SCLs. A comprehensive theory is used to model the smart box beam. A ground resonance analysis model and an air resonance analysis model are implemented in the rotor blade built around the composite box beam with SCLs. The Pitt-Peters dynamic inflow model is used in air resonance analysis under hover condition. A hybrid optimization technique is used to investigate the optimum design of the composite box beam with surface bonded SCLs for improved damping characteristics. Parameters such as stacking sequence of the composite laminates and placement of SCLs are used as design variables. Detailed numerical studies are presented for aeromechanical stability analysis. It is shown that optimum blade design yields significant increase in rotor lead-lag regressive modal damping compared to the initial system.
Powder agglomeration in a microgravity environment
NASA Technical Reports Server (NTRS)
Cawley, James D.
1994-01-01
This is the final report for NASA Grant NAG3-755 entitled 'Powder Agglomeration in a Microgravity Environment.' The research program included two types of numerical models and two types of experiments. The numerical modeling included the use of Monte Carlo type simulations of agglomerate growth including hydrodynamic screening and molecular dynamics type simulations of the rearrangement of particles within an agglomerate under a gravitational field. Experiments included direct observation of the agglomeration of submicron alumina and indirect observation, using small angle light scattering, of the agglomeration of colloidal silica and aluminum monohydroxide. In the former class of experiments, the powders were constrained to move on a two-dimensional surface oriented to minimize the effect of gravity. In the latter, some experiments involved mixture of suspensions containing particles of opposite charge, which resulted in agglomeration on a very short time scale relative to settling under gravity.
Numerical methods for the inverse problem of density functional theory
Jensen, Daniel S.; Wasserman, Adam
2017-07-17
Here, the inverse problem of Kohn–Sham density functional theory (DFT) is often solved in an effort to benchmark and design approximate exchange-correlation potentials. The forward and inverse problems of DFT rely on the same equations but the numerical methods for solving each problem are substantially different. We examine both problems in this tutorial with a special emphasis on the algorithms and error analysis needed for solving the inverse problem. Two inversion methods based on partial differential equation constrained optimization and constrained variational ideas are introduced. We compare and contrast several different inversion methods applied to one-dimensional finite and periodic model systems.
Pelletier, J.D.; Mayer, L.; Pearthree, P.A.; House, P.K.; Demsey, K.A.; Klawon, J.K.; Vincent, K.R.
2005-01-01
Millions of people in the western United States live near the dynamic, distributary channel networks of alluvial fans where flood behavior is complex and poorly constrained. Here we test a new comprehensive approach to alluvial-fan flood hazard assessment that uses four complementary methods: two-dimensional raster-based hydraulic modeling, satellite-image change detection, field-based mapping of recent flood inundation, and surficial geologic mapping. Each of these methods provides spatial detail lacking in the standard method and each provides critical information for a comprehensive assessment. Our numerical model simultaneously solves the continuity equation and Manning's equation (Chow, 1959) using an implicit numerical method. It provides a robust numerical tool for predicting flood flows using the large, high-resolution Digital Elevation Models (DEMs) necessary to resolve the numerous small channels on the typical alluvial fan. Inundation extents and flow depths of historic floods can be reconstructed with the numerical model and validated against field- and satellite-based flood maps. A probabilistic flood hazard map can also be constructed by modeling multiple flood events with a range of specified discharges. This map can be used in conjunction with a surficial geologic map to further refine floodplain delineation on fans. To test the accuracy of the numerical model, we compared model predictions of flood inundation and flow depths against field- and satellite-based flood maps for two recent extreme events on the southern Tortolita and Harquahala piedmonts in Arizona. Model predictions match the field- and satellite-based maps closely. Probabilistic flood hazard maps based on the 10 yr, 100 yr, and maximum floods were also constructed for the study areas using stream gage records and paleoflood deposits. The resulting maps predict spatially complex flood hazards that strongly reflect small-scale topography and are consistent with surficial geology. In contrast, FEMA Flood Insurance Rate Maps (FIRMs) based on the FAN model predict uniformly high flood risk across the study areas without regard for small-scale topography and surficial geology. © 2005 Geological Society of America.
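For reference, the resistance law coupled with continuity in such models is Manning's equation, V = (1/n) R^(2/3) S^(1/2) in SI units. A one-cell sketch with illustrative numbers is below; the actual model solves this implicitly over a full high-resolution DEM, which is not reproduced here.

```python
def manning_velocity(n, R, S):
    """Manning's equation (SI units): V = (1/n) * R**(2/3) * S**(1/2)."""
    return (1.0 / n) * R ** (2.0 / 3.0) * S ** 0.5

def manning_discharge(n, area, R, S):
    """Discharge Q = V * A for a single cell or cross-section."""
    return manning_velocity(n, R, S) * area

# Illustrative numbers only: shallow flow 0.3 m deep over a 10 m wide cell,
# roughness n = 0.035, friction slope S = 0.02.
print(manning_discharge(n=0.035, area=0.3 * 10.0, R=0.3, S=0.02))
```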
Qualitative simulation for process modeling and control
NASA Technical Reports Server (NTRS)
Dalle Molle, D. T.; Edgar, T. F.
1989-01-01
A qualitative model is developed for a first-order system with a proportional-integral controller without precise knowledge of the process or controller parameters. Simulation of the qualitative model yields all of the solutions to the system equations. In developing the qualitative model, a necessary condition for the occurrence of oscillatory behavior is identified. Initializations that cannot exhibit oscillatory behavior produce a finite set of behaviors. When the phase-space behavior of the oscillatory behavior is properly constrained, these initializations produce an infinite but comprehensible set of asymptotically stable behaviors. While the predictions include all possible behaviors of the real system, a class of spurious behaviors has been identified. When limited numerical information is included in the model, the number of predictions is significantly reduced.
Local seismic hazard assessment in explosive volcanic settings by 3D numerical analyses
NASA Astrophysics Data System (ADS)
Razzano, Roberto; Pagliaroli, Alessandro; Moscatelli, Massimiliano; Gaudiosi, Iolanda; Avalle, Alessandra; Giallini, Silvia; Marcini, Marco; Polpetta, Federica; Simionato, Maurizio; Sirianni, Pietro; Sottili, Gianluca; Vignaroli, Gianluca; Bellanova, Jessica; Calamita, Giuseppe; Perrone, Angela; Piscitelli, Sabatino
2017-04-01
This work deals with the assessment of local seismic response in explosive volcanic settings by reconstructing the subsoil model of the Stracciacappa maar (Sabatini Volcanic District, central Italy), whose pyroclastic succession records eruptive phases that ended about 0.09 Ma ago. The heterogeneous characteristics of the Stracciacappa maar (stratification, structural setting, lithotypes, and thickness variation of depositional units) make it an ideal case history for understanding the mechanisms and processes leading to modifications of the amplitude, frequency and duration of seismic waves generated at earthquake sources and propagating through volcanic settings. A new geological map and cross sections, constrained with recently acquired geotechnical and geophysical data, illustrate the complex geometric relationships among the different depositional units forming the maar. A composite interfingering between internal lacustrine sediments and epiclastic debris, sourced from the rim, fills the crater floor; a 45 m continuous coring borehole was drilled in the maar with sampling of undisturbed samples. Electrical Resistivity Tomography surveys and 2D passive seismic arrays were also carried out to constrain the geological model and the S-wave velocity profile, respectively. Single-station noise measurements were collected in order to define natural amplification frequencies. Finally, the nonlinear cyclic soil behaviour was investigated through simple shear tests on the undisturbed samples. The collected dataset was used to define the subsoil model for 3D finite difference site response numerical analyses using the FLAC 3D software (ITASCA). Moreover, 1D and 2D numerical analyses were carried out for comparison purposes. Two different scenarios were selected as input motions: a moderate magnitude (volcanic event) and a high magnitude (tectonic event). Both earthquake scenarios revealed significant ground motion amplification (up to 15 in terms of spectral acceleration at about 1 s), essentially related to 2D/3D phenomena associated with sharp lateral variations of mechanical properties within the Stracciacappa maar. Our results are relevant for the assessment of local seismic response in similar volcanic settings in highly urbanised environments elsewhere.
NASA Astrophysics Data System (ADS)
Brunner, Philip; Doherty, J.; Simmons, Craig T.
2012-07-01
The data set used for calibration of regional numerical models which simulate groundwater flow and vadose zone processes is often dominated by head observations. It is to be expected therefore, that parameters describing vadose zone processes are poorly constrained. A number of studies on small spatial scales explored how additional data types used in calibration constrain vadose zone parameters or reduce predictive uncertainty. However, available studies focused on subsets of observation types and did not jointly account for different measurement accuracies or different hydrologic conditions. In this study, parameter identifiability and predictive uncertainty are quantified in simulation of a 1-D vadose zone soil system driven by infiltration, evaporation and transpiration. The worth of different types of observation data (employed individually, in combination, and with different measurement accuracies) is evaluated by using a linear methodology and a nonlinear Pareto-based methodology under different hydrological conditions. Our main conclusions are (1) Linear analysis provides valuable information on comparative parameter and predictive uncertainty reduction accrued through acquisition of different data types. Its use can be supplemented by nonlinear methods. (2) Measurements of water table elevation can support future water table predictions, even if such measurements inform the individual parameters of vadose zone models to only a small degree. (3) The benefits of including ET and soil moisture observations in the calibration data set are heavily dependent on depth to groundwater. (4) Measurements of groundwater levels, measurements of vadose ET or soil moisture poorly constrain regional groundwater system forcing functions.
A constrained-gradient method to control divergence errors in numerical MHD
NASA Astrophysics Data System (ADS)
Hopkins, Philip F.
2016-10-01
In numerical magnetohydrodynamics (MHD), a major challenge is maintaining ∇·B = 0. Constrained transport (CT) schemes achieve this but have been restricted to specific methods. For more general (meshless, moving-mesh, ALE) methods, 'divergence-cleaning' schemes reduce the ∇·B errors; however they can still be significant and can lead to systematic errors which converge away slowly. We propose a new constrained gradient (CG) scheme which augments these with a projection step, and can be applied to any numerical scheme with a reconstruction. This iteratively approximates the least-squares minimizing, globally divergence-free reconstruction of the fluid. Unlike 'locally divergence free' methods, this actually minimizes the numerically unstable ∇·B terms, without affecting the convergence order of the method. We implement this in the mesh-free code GIZMO and compare various test problems. Compared to cleaning schemes, our CG method reduces the maximum ∇·B errors by ~1-3 orders of magnitude (~2-5 dex below typical errors if no ∇·B cleaning is used). By preventing large ∇·B at discontinuities, this eliminates systematic errors at jumps. Our CG results are comparable to CT methods; for practical purposes, the ∇·B errors are eliminated. The cost is modest, ~30 per cent of the hydro algorithm, and the CG correction can be implemented in a range of numerical MHD methods. While for many problems, we find Dedner-type cleaning schemes are sufficient for good results, we identify a range of problems where using only Powell or '8-wave' cleaning can produce order-of-magnitude errors.
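For orientation only, the projection idea underlying such schemes can be demonstrated on a periodic Cartesian grid: compute ∇·B spectrally, solve a Poisson equation for a scalar potential, and subtract its gradient so the corrected field is divergence-free to machine precision. This FFT-based cleaning is a generic textbook operation used as a stand-in, not the constrained-gradient reconstruction of the paper or its GIZMO implementation.

```python
import numpy as np

n, L = 64, 1.0
k1 = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
kx, ky = np.meshgrid(k1, k1, indexing="ij")
k2 = kx**2 + ky**2
k2[0, 0] = 1.0                          # avoid division by zero for the mean mode

rng = np.random.default_rng(0)
Bx = rng.normal(size=(n, n))            # toy field with nonzero divergence
By = rng.normal(size=(n, n))

Bxh, Byh = np.fft.fft2(Bx), np.fft.fft2(By)
div_h = 1j * kx * Bxh + 1j * ky * Byh   # spectral divergence of B
phi_h = div_h / (-k2)                   # solve  laplacian(phi) = div(B)
Bx_c = np.real(np.fft.ifft2(Bxh - 1j * kx * phi_h))   # B_clean = B - grad(phi)
By_c = np.real(np.fft.ifft2(Byh - 1j * ky * phi_h))

div_clean = np.real(np.fft.ifft2(1j * kx * np.fft.fft2(Bx_c) + 1j * ky * np.fft.fft2(By_c)))
print(np.abs(div_clean).max())          # down at machine precision: divergence removed
```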
Dynamics of Compressible Convection and Thermochemical Mantle Convection
NASA Astrophysics Data System (ADS)
Liu, Xi
The Earth's long-wavelength geoid anomalies have long been used to constrain the dynamics and viscosity structure of the mantle in an isochemical, whole-mantle convection model. However, there is strong evidence that the seismically observed large low shear velocity provinces (LLSVPs) in the lowermost mantle are chemically distinct and denser than the ambient mantle. In this thesis, I investigated how chemically distinct and dense piles influence the geoid. I formulated dynamically self-consistent 3D spherical convection models with realistic mantle viscosity structure which reproduce Earth's dominantly spherical harmonic degree-2 convection. The models revealed a compensation effect of the chemically dense LLSVPs. Next, I formulated instantaneous flow models based on seismic tomography to compute the geoid and constrain mantle viscosity assuming thermochemical convection with the compensation effect. Thermochemical models reconcile the geoid observations. The viscosity structure inverted for thermochemical models is nearly identical to that of whole-mantle models, and both prefer weak transition zone. Our results have implications for mineral physics, seismic tomographic studies, and mantle convection modelling. Another part of this thesis describes analyses of the influence of mantle compressibility on thermal convection in an isoviscous and compressible fluid with infinite Prandtl number. A new formulation of the propagator matrix method is implemented to compute the critical Rayleigh number and the corresponding eigenfunctions for compressible convection. Heat flux and thermal boundary layer properties are quantified in numerical models and scaling laws are developed.
McGuire, Luke; Kean, Jason W.; Staley, Dennis M.; Rengers, Francis K.; Wasklewicz, Thad A.
2016-01-01
Mountain watersheds recently burned by wildfire often experience greater amounts of runoff and increased rates of sediment transport relative to similar unburned areas. Given the sedimentation and debris flow threats caused by increases in erosion, more work is needed to better understand the physical mechanisms responsible for the observed increase in sediment transport in burned environments and the time scale over which a heightened geomorphic response can be expected. In this study, we quantified the relative importance of different hillslope erosion mechanisms during two postwildfire rainstorms at a drainage basin in Southern California by combining terrestrial laser scanner-derived maps of topographic change, field measurements, and numerical modeling of overland flow and sediment transport. Numerous debris flows were initiated by runoff at our study area during a long-duration storm of relatively modest intensity. Despite the presence of a well-developed rill network, numerical model results suggest that the majority of eroded hillslope sediment during this long-duration rainstorm was transported by raindrop-induced sediment transport processes, highlighting the importance of raindrop-driven processes in supplying channels with potential debris flow material. We also used the numerical model to explore relationships between postwildfire storm characteristics, vegetation cover, soil infiltration capacity, and the total volume of eroded sediment from a synthetic hillslope for different end-member erosion regimes. This study adds to our understanding of sediment transport in steep, postwildfire landscapes and shows how data from field monitoring can be combined with numerical modeling of sediment transport to isolate the processes leading to increased erosion in burned areas.
Rathfelder, K M; Abriola, L M; Taylor, T P; Pennell, K D
2001-04-01
A numerical model of surfactant enhanced solubilization was developed and applied to the simulation of nonaqueous phase liquid recovery in two-dimensional heterogeneous laboratory sand tank systems. Model parameters were derived from independent, small-scale, batch and column experiments. These parameters included viscosity, density, solubilization capacity, surfactant sorption, interfacial tension, permeability, capillary retention functions, and interphase mass transfer correlations. Model predictive capability was assessed for the evaluation of the micellar solubilization of tetrachloroethylene (PCE) in the two-dimensional systems. Predicted effluent concentrations and mass recovery agreed reasonably well with measured values. Accurate prediction of enhanced solubilization behavior in the sand tanks was found to require the incorporation of pore-scale, system-dependent, interphase mass transfer limitations, including an explicit representation of specific interfacial contact area. Predicted effluent concentrations and mass recovery were also found to depend strongly upon the initial NAPL entrapment configuration. Numerical results collectively indicate that enhanced solubilization processes in heterogeneous, laboratory sand tank systems can be successfully simulated using independently measured soil parameters and column-measured mass transfer coefficients, provided that permeability and NAPL distributions are accurately known. This implies that the accuracy of model predictions at the field scale will be constrained by our ability to quantify soil heterogeneity and NAPL distribution.
Nonlinear system modeling based on bilinear Laguerre orthonormal bases.
Garna, Tarek; Bouzrara, Kais; Ragot, José; Messaoud, Hassani
2013-05-01
This paper proposes a new representation of the discrete bilinear model by developing its coefficients associated with the input, the output and the crossed product on three independent Laguerre orthonormal bases. Compared to the classical bilinear model, the resulting model, entitled the bilinear-Laguerre model, ensures a significant reduction in the number of parameters as well as a simple recursive representation. However, such reduction is still constrained by an optimal choice of the Laguerre pole characterizing each basis. To do so, we develop a pole optimization algorithm which constitutes an extension of that proposed by Tanguy et al. The bilinear-Laguerre model as well as the proposed pole optimization algorithm are illustrated and tested in numerical simulations and validated on the Continuous Stirred Tank Reactor (CSTR) System. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
Carbon Isotope Biogeochemistry of Methane from Anoxic Sediments
NASA Technical Reports Server (NTRS)
Blair, Neal E.
1993-01-01
The isotopic composition of naturally occurring methane was used to constrain the tropospheric budget of that radiatively active gas. Numerous studies have shown that the isotopic composition is not constant, even for a specific source, and may vary temporally and spatially. The objective was to develop a process-level model that reproduced the seasonal variations in the C-13/C-12 composition of methane observed at the coastal site, Cape Lookout Bight, NC. Details of the mass balance are provided. Experiments and models designed to determine what factors influence the C-13/C-12 ratio of dissolved CO2 are reported. All the factors described were combined in a model that faithfully reproduces the seasonal C-13/C-12 variations observed at Cape Lookout. The model is described.
Rong, Qiangqiang; Cai, Yanpeng; Chen, Bing; Yue, Wencong; Yin, Xin'an; Tan, Qian
2017-02-15
In this research, an export coefficient based dual inexact two-stage stochastic credibility constrained programming (ECDITSCCP) model was developed through integrating an improved export coefficient model (ECM), interval linear programming (ILP), fuzzy credibility constrained programming (FCCP) and a fuzzy expected value equation within a general two-stage programming (TSP) framework. The proposed ECDITSCCP model can effectively address multiple uncertainties expressed as random variables, fuzzy numbers, and pure and dual intervals. Also, the model can provide a direct linkage between pre-regulated management policies and the associated economic implications. Moreover, solutions under multiple credibility levels can be obtained, providing potential decision alternatives for decision makers. The proposed model was then applied to identify optimal land use structures for agricultural NPS pollution mitigation in a representative upstream subcatchment of the Miyun Reservoir watershed in north China. Optimal solutions of the model were successfully obtained, indicating desired land use patterns and nutrient discharge schemes that maximize agricultural system benefits under a limited discharge permit. Also, the numerous results under multiple credibility levels could provide policy makers with several options, which could help strike an appropriate balance between system benefits and pollution mitigation. The developed ECDITSCCP model can be effectively applied to addressing the uncertain information in agricultural systems and shows great applicability to land use adjustment for agricultural NPS pollution mitigation. Copyright © 2016 Elsevier B.V. All rights reserved.
Chang, Wen-Jer; Huang, Bo-Jyun
2014-11-01
The multi-constrained robust fuzzy control problem is investigated in this paper for perturbed continuous-time nonlinear stochastic systems. The nonlinear system considered in this paper is represented by a Takagi-Sugeno fuzzy model with perturbations and state multiplicative noises. The multiple performance constraints considered in this paper include stability, passivity and individual state variance constraints. The Lyapunov stability theory is employed to derive sufficient conditions to achieve the above performance constraints. By solving these sufficient conditions, the contribution of this paper is to develop a parallel distributed compensation based robust fuzzy control approach to satisfy multiple performance constraints for perturbed nonlinear systems with multiplicative noises. Finally, a numerical example for the control of a perturbed inverted pendulum system is provided to illustrate the applicability and effectiveness of the proposed multi-constrained robust fuzzy control method. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Liu, Qingshan; Guo, Zhishan; Wang, Jun
2012-02-01
In this paper, a one-layer recurrent neural network is proposed for solving pseudoconvex optimization problems subject to linear equality and bound constraints. Compared with the existing neural networks for optimization (e.g., the projection neural networks), the proposed neural network is capable of solving more general pseudoconvex optimization problems with equality and bound constraints. Moreover, it is capable of solving constrained fractional programming problems as a special case. The convergence of the state variables of the proposed neural network to achieve solution optimality is guaranteed as long as the designed parameters in the model are larger than the derived lower bounds. Numerical examples with simulation results illustrate the effectiveness and characteristics of the proposed neural network. In addition, an application for dynamic portfolio optimization is discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.
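As a rough illustration of how such projection-type neural dynamics settle onto a constrained optimum, the sketch below integrates a generic projection network with forward Euler on a small box-constrained quadratic problem. The problem data, step sizes, and the specific dynamics are illustrative assumptions, not the one-layer network of the paper.

```python
import numpy as np

# A minimal sketch of projection-type neural dynamics for constrained
# minimization (illustrative only, not the specific one-layer network of the
# paper): minimize f(x) = (x - c)' Q (x - c) subject to l <= x <= u.
Q = np.array([[2.0, 0.5], [0.5, 1.0]])   # positive definite, so f is (pseudo)convex
c = np.array([2.0, -1.0])
l, u = np.zeros(2), np.ones(2)

def grad_f(x):
    return 2.0 * Q @ (x - c)

def project_box(x):
    return np.minimum(np.maximum(x, l), u)

x = np.array([0.5, 0.5])      # initial network state
alpha, dt = 0.2, 0.01         # design parameter and Euler integration step
for _ in range(5000):
    # dx/dt = -x + P_Omega(x - alpha * grad f(x))
    x = x + dt * (-x + project_box(x - alpha * grad_f(x)))

print("steady state (approximate constrained minimizer):", x)
```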
Model and algorithm based on accurate realization of dwell time in magnetorheological finishing.
Song, Ci; Dai, Yifan; Peng, Xiaoqiang
2010-07-01
Classically, a dwell-time map is created with a method such as deconvolution or numerical optimization, with the input being a surface error map and influence function. This dwell-time map is the numerical optimum for minimizing residual form error, but it takes no account of machine dynamics limitations. The map is then reinterpreted as machine speeds and accelerations or decelerations in a separate operation. In this paper we consider combining the two steps in a single optimization by the use of a constrained nonlinear optimization model, which takes both the two-norm of the surface residual error and the dwell-time gradient as the objective function. This enables machine dynamic limitations to be properly considered within the scope of the optimization, reducing both residual surface error and polishing times. Further simulations are introduced to demonstrate the feasibility of the model, and the velocity map is reinterpreted from the dwell time, meeting the velocity requirement and the limitations on accelerations or decelerations. Indeed, the model and algorithm can also apply to other computer-controlled subaperture methods.
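A rough sense of such a combined formulation can be conveyed by a small non-negative least-squares sketch: the residual form error and a dwell-time-gradient penalty are stacked into one objective, with dwell times constrained to be non-negative. The 1D error profile, influence function, and penalty weight below are placeholder assumptions, not data or the exact objective of the paper.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Hedged sketch: minimize ||C t - e||^2 + lam * ||G t||^2 with t >= 0,
# where C applies a toy influence function, e is a toy 1D error profile,
# and G approximates the dwell-time gradient.
n = 50
x = np.linspace(0.0, 1.0, n)
e = 0.5 * np.exp(-((x - 0.4) / 0.15) ** 2)            # surface error profile (toy)

footprint = np.exp(-((x[:, None] - x[None, :]) / 0.05) ** 2)
C = footprint / footprint.sum(axis=1, keepdims=True)  # removal per unit dwell time

G = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]              # first-difference operator

lam = 1e-2                                            # gradient-penalty weight
A = np.vstack([C, np.sqrt(lam) * G])                  # stacked least-squares system
b = np.concatenate([e, np.zeros(n - 1)])

res = lsq_linear(A, b, bounds=(0.0, np.inf))          # dwell times must be >= 0
t = res.x
print("residual form error (2-norm):", np.linalg.norm(C @ t - e))
```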
Quantitative Relationships Involving Additive Differences: Numerical Resilience
ERIC Educational Resources Information Center
Ramful, Ajay; Ho, Siew Yin
2014-01-01
This case study describes the ways in which problems involving additive differences with unknown starting quantities constrain the problem solver in articulating the inherent quantitative relationship. It gives empirical evidence to show how numerical reasoning takes over as a Grade 6 student instantiates the quantitative relation by resorting to…
NASA Astrophysics Data System (ADS)
Grose, C. J.
2008-05-01
Numerical geodynamics models of heat transfer are typically thought of as specialized topics of research requiring knowledge of specialized modelling software, Linux platforms, and state-of-the-art finite-element codes. I have implemented analytical and numerical finite-difference techniques with Microsoft Excel 2007 spreadsheets to solve for complex solid-earth heat transfer problems for use by students, teachers, and practicing scientists without specialty in geodynamics modelling techniques and applications. While implementation of equations for use in Excel spreadsheets is occasionally cumbersome, once case boundary structure and node equations are developed, spreadsheet manipulation becomes routine. Model experimentation by modifying parameter values, geometry, and grid resolution makes Excel a useful tool whether in the classroom at the undergraduate or graduate level or for more engaging student projects. Furthermore, the ability to incorporate complex geometries and heat-transfer characteristics makes it ideal for first- and occasionally higher-order geodynamics simulations to better understand and constrain the results of professional field research in a setting that does not require the constraints of state-of-the-art modelling codes. The straightforward expression and manipulation of model equations in Excel can also serve as a medium to better understand the confusing notations of advanced mathematical problems. To illustrate the power and robustness of computation and visualization in spreadsheet models, I focus primarily on one-dimensional analytical and two-dimensional numerical solutions to two case problems: (i) the cooling of oceanic lithosphere and (ii) temperatures within subducting slabs. Excel source documents will be made available.
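For context, the first case problem named above, cooling of oceanic lithosphere, is commonly treated with the one-dimensional half-space cooling solution; the sketch below evaluates it in Python rather than in spreadsheet cells, with typical textbook parameter values rather than those of the cited worksheets.

```python
import numpy as np
from scipy.special import erf

# Half-space cooling: T(z, t) = T_s + (T_m - T_s) * erf(z / (2 sqrt(kappa t))).
# Parameter values are typical textbook numbers, not from the cited models.
T_s, T_m = 0.0, 1350.0          # surface and mantle temperatures (deg C)
kappa = 1.0e-6                  # thermal diffusivity (m^2/s)
seconds_per_myr = 3.1536e13

z = np.linspace(0.0, 150e3, 151)            # depth (m), 1 km spacing
for age_myr in (10.0, 50.0, 100.0):
    t = age_myr * seconds_per_myr
    T = T_s + (T_m - T_s) * erf(z / (2.0 * np.sqrt(kappa * t)))
    print(f"{age_myr:5.0f} Myr: T at 50 km depth = {T[50]:7.1f} deg C")
```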
Fixman compensating potential for general branched molecules
NASA Astrophysics Data System (ADS)
Jain, Abhinandan; Kandel, Saugat; Wagner, Jeffrey; Larsen, Adrien; Vaidehi, Nagarajan
2013-12-01
The technique of constraining high frequency modes of molecular motion is an effective way to increase simulation time scale and improve conformational sampling in molecular dynamics simulations. However, it has been shown that constraints on higher frequency modes such as bond lengths and bond angles stiffen the molecular model, thereby introducing systematic biases in the statistical behavior of the simulations. Fixman proposed a compensating potential to remove such biases in the thermodynamic and kinetic properties calculated from dynamics simulations. Previous implementations of the Fixman potential have been limited to only short serial chain systems. In this paper, we present a spatial operator algebra based algorithm to calculate the Fixman potential and its gradient within constrained dynamics simulations for branched topology molecules of any size. Our numerical studies on molecules of increasing complexity validate our algorithm by demonstrating recovery of the dihedral angle probability distribution function for systems that range in complexity from serial chains to protein molecules. We observe that the Fixman compensating potential recovers the free energy surface of a serial chain polymer, thus annulling the biases caused by constraining the bond lengths and bond angles. The inclusion of Fixman potential entails only a modest increase in the computational cost in these simulations. We believe that this work represents the first instance where the Fixman potential has been used for general branched systems, and establishes the viability for its use in constrained dynamics simulations of proteins and other macromolecules.
NASA Astrophysics Data System (ADS)
Smekens, J.; Clarke, A. B.; De'Michieli Vitturi, M.; Moore, G. M.
2012-12-01
Mt. Semeru is one of the most active explosive volcanoes on the island of Java in Indonesia. The current eruption style consists of small but frequent explosions and/or gas releases (several times a day) accompanied by continuous lava effusion that sporadically produces block-and-ash flows down the SE flank of the volcano. Semeru presents a unique opportunity to investigate the magma ascent conditions that produce this kind of persistent periodic behavior and the coexistence of explosive and effusive eruptions. In this work we use DOMEFLOW, a 1.5D transient isothermal numerical model, to investigate the dynamics of lava extrusion at Semeru. Petrologic observations from tephra and ballistic samples collected at the summit help us constrain the initial conditions of the system. Preliminary model runs produced periodic lava extrusion and pulses of gas release at the vent, with a cycle period on the order of hours, even though a steady magma supply rate was prescribed at the bottom of the conduit. Enhanced shallow permeability implemented in the model appears to create a dense plug in the shallow subsurface, which in turn plays a critical role in creating and controlling the observed periodic behavior. We measured SO2 fluxes just above the vent, using a custom UV imaging system. The device consists of two high-sensitivity CCD cameras with narrow UV filters centered at 310 and 330 nm, and a USB2000+ spectrometer for calibration and distance correction. The method produces high-frequency flux series with an accurate determination of the wind speed and plume geometry. The model results, when combined with gas measurements, and measurements of sulfur in both the groundmass and melt inclusions in eruptive products, could be used to create a volatile budget of the system. Furthermore, a well-calibrated model of the system will ultimately allow the characteristic periodicity and corresponding gas flux to be used as a proxy for magma supply rate.
NASA Astrophysics Data System (ADS)
Schwarz, J. M.; Zhang, Tao; Das, Moumita
2013-03-01
At the leading edge of a crawling cell, the actin cytoskeleton extends itself in a particular direction via a branched crosslinked network of actin filaments with some overall alignment. This network is known as the lamellipodium. Branching via the Arp2/3 complex occurs at a reasonably well-defined angle of 70 degrees from the plus end of the mother filament, such that Arp2/3 can be modeled as an angle-constraining crosslinker. Freely-rotating crosslinkers, such as alpha-actinin, are also present in lamellipodia. Therefore, we study the interplay between these two types of crosslinkers, angle-constraining and freely-rotating, both analytically and numerically, to begin to quantify the mechanics of lamellipodia. We also investigate how the orientational ordering of the filaments affects this interplay. Finally, while the role of Arp2/3 as a nucleator for filaments along the leading edge of a crawling cell has been studied intensely, much less is known about its mechanical contribution. Our work seeks to fill in this important gap in modeling the mechanics of lamellipodia.
Rheological constraints on ridge formation on Icy Satellites
NASA Astrophysics Data System (ADS)
Rudolph, M. L.; Manga, M.
2010-12-01
The processes responsible for forming ridges on Europa remain poorly understood. We use a continuum damage mechanics approach to model ridge formation. The main objectives of this contribution are to constrain (1) choice of rheological parameters and (2) maximum ridge size and rate of formation. The key rheological parameters to constrain appear in the evolution equation for a damage variable $D$, $\dot{D} = B\langle\sigma\rangle^{r}(1-D)^{-k} - \alpha D\,p/\mu$, and in the equation relating damage accumulation to volumetric changes, $J\rho_{0} = \delta(1-D)$. Similar damage evolution laws have been applied to terrestrial glaciers and to the analysis of rock mechanics experiments. However, it is reasonable to expect that, like viscosity, the rheological constants $B$, $\alpha$, and $\delta$ depend strongly on temperature, composition, and ice grain size. In order to determine whether the damage model is appropriate for Europa's ridges, we must find values of the unknown damage parameters that reproduce ridge topography. We perform a suite of numerical experiments to identify the region of parameter space conducive to ridge production and show the sensitivity to changes in each unknown parameter.
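To make the role of the damage parameters concrete, the sketch below integrates a damage-evolution law of the quoted form with forward Euler; every parameter value is a placeholder assumption, not a calibrated value for Europa's ice shell.

```python
import numpy as np

# Hedged numerical sketch of dD/dt = B <sigma>^r (1 - D)^(-k) - alpha * D * p / mu,
# integrated with forward Euler. All values below are illustrative placeholders.
B, r, k = 1.0e-12, 2.0, 1.0      # damage source parameters (placeholders)
alpha, mu = 0.1, 1.0e14          # healing coefficient and viscosity (placeholders)
sigma, p = 1.0e5, 1.0e5          # deviatoric stress and pressure in Pa (placeholders)

D, dt = 0.0, 1.0                 # initial damage, time step (s)
for step in range(200):
    source = B * max(sigma, 0.0) ** r * (1.0 - D) ** (-k)   # <sigma> Macaulay bracket
    healing = alpha * D * p / mu
    D = min(D + dt * (source - healing), 0.99)               # cap to avoid singularity

print("damage after %.0f s: %.3f" % (200 * dt, D))
```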
Modelling cell motility and chemotaxis with evolving surface finite elements
Elliott, Charles M.; Stinner, Björn; Venkataraman, Chandrasekhar
2012-01-01
We present a mathematical and a computational framework for the modelling of cell motility. The cell membrane is represented by an evolving surface, with the movement of the cell determined by the interaction of various forces that act normal to the surface. We consider external forces such as those that may arise owing to inhomogeneities in the medium and a pressure that constrains the enclosed volume, as well as internal forces that arise from the reaction of the cells' surface to stretching and bending. We also consider a protrusive force associated with a reaction–diffusion system (RDS) posed on the cell membrane, with cell polarization modelled by this surface RDS. The computational method is based on an evolving surface finite-element method. The general method can account for the large deformations that arise in cell motility and allows the simulation of cell migration in three dimensions. We illustrate applications of the proposed modelling framework and numerical method by reporting on numerical simulations of a model for eukaryotic chemotaxis and a model for the persistent movement of keratocytes in two and three space dimensions. Movies of the simulated cells can be obtained from http://homepages.warwick.ac.uk/∼maskae/CV_Warwick/Chemotaxis.html. PMID:22675164
A new potential for the numerical simulations of electrolyte solutions on a hypersphere
NASA Astrophysics Data System (ADS)
Caillol, Jean-Michel
1993-12-01
We propose a new way of performing numerical simulations of the restricted primitive model of electrolytes—and related models—on a hypersphere. In this new approach, the system is viewed as a single component fluid of charged bihard spheres constrained to move at the surface of a four dimensional sphere. A charged bihard sphere is defined as the rigid association of two antipodal charged hard spheres of opposite signs. These objects interact via a simple analytical potential obtained by solving the Poisson-Laplace equation on the hypersphere. This new technique of simulation enables a precise determination of the chemical potential of the charged species in the canonical ensemble by a straightforward application of Widom's insertion method. Comparisons with previous simulations demonstrate the efficiency and the reliability of the method.
A feasible DY conjugate gradient method for linear equality constraints
NASA Astrophysics Data System (ADS)
LI, Can
2017-09-01
In this paper, we propose a feasible conjugate gradient method for solving linear equality constrained optimization problem. The method is an extension of the Dai-Yuan conjugate gradient method proposed by Dai and Yuan to linear equality constrained optimization problem. It can be applied to solve large linear equality constrained problem due to lower storage requirement. An attractive property of the method is that the generated direction is always feasible and descent direction. Under mild conditions, the global convergence of the proposed method with exact line search is established. Numerical experiments are also given which show the efficiency of the method.
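A minimal sketch of the idea, under the assumption of a quadratic objective and a null-space projector to keep directions feasible, is given below; it is a Dai-Yuan-type illustration rather than the paper's exact algorithm or line search.

```python
import numpy as np

# Feasible conjugate gradient sketch for min 0.5 x'Qx - c'x subject to Ax = b:
# directions are kept in null(A) by an orthogonal projector, so iterates stay
# feasible; beta follows a Dai-Yuan-type formula.
rng = np.random.default_rng(0)
n, m = 10, 3
M = rng.standard_normal((n, n))
Q = M @ M.T + n * np.eye(n)                          # SPD Hessian
c = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

P = np.eye(n) - A.T @ np.linalg.solve(A @ A.T, A)    # projector onto null(A)
x = A.T @ np.linalg.solve(A @ A.T, b)                # feasible starting point

g = Q @ x - c
d = -P @ g
for k in range(50):
    if np.linalg.norm(P @ g) < 1e-10:
        break
    alpha = -(g @ d) / (d @ Q @ d)                   # exact line search (quadratic f)
    x = x + alpha * d
    g_new = Q @ x - c
    beta = (g_new @ g_new) / (d @ (g_new - g))       # Dai-Yuan-type formula
    d = -P @ g_new + beta * d
    g = g_new

print("||Ax - b|| =", np.linalg.norm(A @ x - b))     # stays ~0: iterates are feasible
print("||projected gradient|| =", np.linalg.norm(P @ g))
```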
Analytical approximation and numerical simulations for periodic travelling water waves
NASA Astrophysics Data System (ADS)
Kalimeris, Konstantinos
2017-12-01
We present recent analytical and numerical results for two-dimensional periodic travelling water waves with constant vorticity. The analytical approach is based on novel asymptotic expansions. We obtain numerical results in two different ways: the first is based on the solution of a constrained optimization problem, and the second is realized as a numerical continuation algorithm. Both methods are applied on some examples of non-constant vorticity. This article is part of the theme issue 'Nonlinear water waves'.
NASA Astrophysics Data System (ADS)
Pozzer, A.; Ojha, N.; Tost, H.; Joeckel, P.; Fischer, H.; Ziereis, H.; Zahn, A.; Tomsche, L.; Lelieveld, J.
2017-12-01
The impacts of the Asian monsoon on tropospheric chemistry are difficult to simulate in numerical models due to the lack of accurate emission inventories over the Asian region and the strong influence of parameterized processes such as convection and lightning. Further, the lack of observational data over the region during the monsoon period drastically reduces the capability to evaluate numerical models. Here, we combine simulations using the global EMAC (ECHAM5/MESSy2 Atmospheric Chemistry) model with the observational dataset based on the OMO campaign (July-August 2015) to study the tropospheric composition in the Asian monsoon anticyclone. The results of the simulations capture the C-shape of the CO vertical profiles typically observed during the summer monsoon. The observed spatio-temporal variations in O3, CO, and NOy are reproduced by EMAC, with a better correlation in the upper troposphere (UT). However, the model overestimates NOy and O3 mixing ratios in the anticyclone by 25% and 35%, respectively. A series of numerical experiments showed that the strong lightning emissions in the model are the source of this overestimation, with the anthropogenic NOx sources (in Asia) and global soil emissions having a lower impact in the UT. A reduction of the lightning NOx emissions by 50% leads to a better agreement between the model and OMO observations of NOy and O3. The uncertainties in the lightning emissions are found to considerably influence the OH distribution in the UT over India and downwind. The study reveals existing uncertainties in the estimations of the monsoon impact on tropospheric composition, and highlights the need to constrain numerical simulations with state-of-the-art observations for deriving the budget of trace species of climatic relevance.
NASA Astrophysics Data System (ADS)
Lifton, N. A.; Newall, J. C.; Fredin, O.; Glasser, N. F.; Fabel, D.; Rogozhina, I.; Bernales, J.; Prange, M.; Sams, S.; Eisen, O.; Hättestrand, C.; Harbor, J.; Stroeven, A. P.
2017-12-01
Numerical ice sheet models constrained by theory and refined by comparisons with observational data are a central component of work to address the interactions between the cryosphere and changing climate, at a wide range of scales. Such models are tested and refined by comparing model predictions of past ice geometries with field-based reconstructions from geological, geomorphological, and ice core data. However, on the East Antarctic Ice sheet, there are few empirical data with which to reconstruct changes in ice sheet geometry in the Dronning Maud Land (DML) region. In addition, there is poor control on the regional climate history of the ice sheet margin, because ice core locations, where detailed reconstructions of climate history exist, are located on high inland domes. This leaves numerical models of regional glaciation history in this near-coastal area largely unconstrained. MAGIC-DML is an ongoing Swedish-US-Norwegian-German-UK collaboration with a focus on improving ice sheet models by combining advances in numerical modeling with filling critical data gaps that exist in our knowledge of the timing and pattern of ice surface changes on the western Dronning Maud Land margin. A combination of geomorphological mapping using remote sensing data, field investigations, cosmogenic nuclide surface exposure dating, and numerical ice-sheet modeling are being used in an iterative manner to produce a comprehensive reconstruction of the glacial history of western Dronning Maud Land. We will present an overview of the project, as well as field observations and preliminary in situ cosmogenic nuclide measurements from the 2016/17 expedition.
Hybrid modeling of spatial continuity for application to numerical inverse problems
Friedel, Michael J.; Iwashita, Fabio
2013-01-01
A novel two-step modeling approach is presented to obtain optimal starting values and geostatistical constraints for numerical inverse problems otherwise characterized by spatially-limited field data. First, a type of unsupervised neural network, called the self-organizing map (SOM), is trained to recognize nonlinear relations among environmental variables (covariates) occurring at various scales. The values of these variables are then estimated at random locations across the model domain by iterative minimization of SOM topographic error vectors. Cross-validation is used to ensure unbiasedness and compute prediction uncertainty for select subsets of the data. Second, analytical functions are fit to experimental variograms derived from original plus resampled SOM estimates producing model variograms. Sequential Gaussian simulation is used to evaluate spatial uncertainty associated with the analytical functions and probable range for constraining variables. The hybrid modeling of spatial continuity is demonstrated using spatially-limited hydrologic measurements at different scales in Brazil: (1) physical soil properties (sand, silt, clay, hydraulic conductivity) in the 42 km2 Vargem de Caldas basin; (2) well yield and electrical conductivity of groundwater in the 132 km2 fractured crystalline aquifer; and (3) specific capacity, hydraulic head, and major ions in a 100,000 km2 transboundary fractured-basalt aquifer. These results illustrate the benefits of exploiting nonlinear relations among sparse and disparate data sets for modeling spatial continuity, but the actual application of these spatial data to improve numerical inverse modeling requires testing.
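The second step of the workflow, fitting analytical functions to experimental variograms, can be sketched as a simple curve fit; the spherical model and the synthetic lag/semivariance values below are illustrative assumptions, not data from the Brazilian sites.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hedged sketch: fit a spherical variogram model (nugget, sill, range) to an
# experimental variogram; such a model could then feed a sequential Gaussian
# simulation. The data below are synthetic placeholders.
def spherical(h, nugget, sill, rng_):
    h = np.asarray(h, dtype=float)
    gamma = nugget + (sill - nugget) * (1.5 * h / rng_ - 0.5 * (h / rng_) ** 3)
    return np.where(h < rng_, gamma, sill)

lags = np.linspace(50.0, 2000.0, 20)                  # lag distance (m)
gamma_exp = spherical(lags, 0.05, 0.6, 1200.0) \
            + 0.02 * np.random.default_rng(4).standard_normal(lags.size)

popt, _ = curve_fit(spherical, lags, gamma_exp, p0=[0.0, 0.5, 1000.0])
print("fitted nugget, sill, range:", np.round(popt, 3))
```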
A Survey of Studies on Ignition and Burn of Inertially Confined Fuels
NASA Astrophysics Data System (ADS)
Atzeni, Stefano
2016-10-01
A survey of studies on ignition and burn of inertial fusion fuels is presented. Potentials and issues of different approaches to ignition (central ignition, fast ignition, volume ignition) are addressed by means of simple models and numerical simulations. Both equimolar DT and T-lean mixtures are considered. Crucial issues concerning hot spot formation (implosion symmetry for central ignition; igniting pulse parameters for fast ignition) are briefly discussed. Recent results concerning the scaling of the ignition energy with the implosion velocity and constrained gain curves are also summarized.
Numerical modelling and data assimilation of the Larsen B ice shelf, Antarctic Peninsula.
Vieli, Andreas; Payne, Antony J; Du, Zhijun; Shepherd, Andrew
2006-07-15
In this study, the flow and rheology of pre-collapse Larsen B ice shelf are investigated by using a combination of flow modelling and data assimilation. Observed shelf velocities from satellite interferometry are used to constrain an ice shelf model by using a data assimilation technique based on the control method. In particular, the ice rheology field and the velocities at the inland shelf boundary are simultaneously optimized to get a modelled flow and stress field that is consistent with the observed flow. The application to the Larsen B ice shelf shows that a strong weakening of the ice in the shear zones, mostly along the margins, is necessary to fit the observed shelf flow. This pattern of bands with weak ice is a very robust feature of the inversion, whereas the ice rheology within the main shelf body is found to be not well constrained. This suggests that these weak zones play a major role in the control of the flow of the Larsen B ice shelf and may be the key to understanding the observed pre-collapse thinning and acceleration of Larsen B. Regarding the sensitivity of the stress field to rheology, the consistency of the model with the observed flow seems crucial for any further analysis such as the application of fracture mechanics or perturbation model experiments.
NASA Astrophysics Data System (ADS)
Gnedin, Nickolay Y.; Semenov, Vadim A.; Kravtsov, Andrey V.
2018-04-01
An optimally efficient explicit numerical scheme for solving fluid dynamics equations, or any other parabolic or hyperbolic system of partial differential equations, should allow local regions to advance in time with their own, locally constrained time steps. However, such a scheme can result in violation of the Courant-Friedrichs-Lewy (CFL) condition, which is manifestly non-local. Although the violations can be considered to be "weak" in a certain sense and the corresponding numerical solution may be stable, such calculation does not guarantee the correct propagation speed for arbitrary waves. We use an experimental fluid dynamics code that allows cubic "patches" of grid cells to step with independent, locally constrained time steps to demonstrate how the CFL condition can be enforced by imposing a constraint on the time steps of neighboring patches. We perform several numerical tests that illustrate errors introduced in the numerical solutions by weak CFL condition violations and show how strict enforcement of the CFL condition eliminates these errors. In all our tests the strict enforcement of the CFL condition does not impose a significant performance penalty.
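The neighbor constraint on patch time steps can be illustrated with a small sketch: each patch first takes its own CFL-limited step, and a sweep then limits the ratio between adjacent patches. The factor-of-two cap used here is an assumed illustrative rule, not necessarily the constraint adopted in the paper.

```python
import numpy as np

# Locally constrained time steps with a neighbor constraint (illustrative).
cfl = 0.4
dx = 1.0
rng = np.random.default_rng(1)
signal_speed = 1.0 + 10.0 * rng.random(16)        # |u| + c per patch (toy values)

dt = cfl * dx / signal_speed                      # each patch's own CFL-limited step

# Enforce that neighboring patches never differ by more than a factor of two.
changed = True
while changed:
    changed = False
    for i in range(len(dt)):
        for j in (i - 1, i + 1):
            if 0 <= j < len(dt) and dt[i] > 2.0 * dt[j] + 1e-15:
                dt[i] = 2.0 * dt[j]
                changed = True

print("local steps:", np.round(dt, 3))
```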
The accuracy of semi-numerical reionization models in comparison with radiative transfer simulations
NASA Astrophysics Data System (ADS)
Hutter, Anne
2018-03-01
We have developed a modular semi-numerical code that computes the time and spatially dependent ionization of neutral hydrogen (H I), neutral (He I) and singly ionized helium (He II) in the intergalactic medium (IGM). The model accounts for recombinations and provides different descriptions for the photoionization rate that are used to calculate the residual H I fraction in ionized regions. We compare different semi-numerical reionization schemes to a radiative transfer (RT) simulation. We use the RT simulation as a benchmark, and find that the semi-numerical approaches produce similar H II and He II morphologies and power spectra of the H I 21cm signal throughout reionization. As we do not track partial ionization of He II, the extent of the double ionized helium (He III) regions is consistently smaller. In contrast to previous comparison projects, the ionizing emissivity in our semi-numerical scheme is not adjusted to reproduce the redshift evolution of the RT simulation, but directly derived from the RT simulation spectra. Among schemes that identify the ionized regions by the ratio of the number of ionization and absorption events on different spatial smoothing scales, we find those that mark the entire sphere as ionized when the ionization criterion is fulfilled to result in significantly accelerated reionization compared to the RT simulation. Conversely, those that flag only the central cell as ionized yield very similar but slightly delayed redshift evolution of reionization, with up to 20% of ionizing photons lost. Despite the overall agreement with the RT simulation, our results suggest that constraints on ionizing-emissivity-sensitive parameters derived from semi-numerical galaxy formation-reionization models are subject to photon nonconservation.
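A much simplified sketch of the excursion-set style criterion being compared is given below, showing only the "flag the central cell" variant with cubic top-hat smoothing in place of spherical filtering; the fields and filter scales are toy assumptions, not the actual simulation inputs.

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Toy central-cell ionization criterion: a cell is flagged ionized if, on any
# smoothing scale, smoothed ionizing photons exceed smoothed absorptions.
rng = np.random.default_rng(2)
n = 64
n_H = np.ones((n, n, n))                                       # absorbers per cell (toy)
n_ion = rng.lognormal(mean=-0.5, sigma=1.0, size=(n, n, n))    # photons per cell (toy)

ionized = np.zeros((n, n, n), dtype=bool)
for scale in (17, 9, 5, 3, 1):                                 # filter widths, large to small
    ratio = uniform_filter(n_ion, size=scale) / uniform_filter(n_H, size=scale)
    ionized |= ratio >= 1.0                                    # flag the central cell only

print("ionized volume fraction: %.3f" % ionized.mean())
```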
Wiechert, W; de Graaf, A A
1997-07-05
The extension of metabolite balancing with carbon labeling experiments, as described by Marx et al. (Biotechnol. Bioeng. 49: 11-29), results in a much more detailed stationary metabolic flux analysis. As opposed to basic metabolite flux balancing alone, this method enables both flux directions of bidirectional reaction steps to be quantitated. However, the mathematical treatment of carbon labeling systems is much more complicated, because it requires the solution of numerous balance equations that are bilinear with respect to fluxes and fractional labeling. In this study, a universal modeling framework is presented for describing the metabolite and carbon atom flux in a metabolic network. Bidirectional reaction steps are extensively treated and their impact on the system's labeling state is investigated. Various kinds of modeling assumptions, as usually made for metabolic fluxes, are expressed by linear constraint equations. A numerical algorithm for the solution of the resulting linear constrained set of nonlinear equations is developed. The numerical stability problems caused by large bidirectional fluxes are solved by a specially developed transformation method. Finally, the simulation of carbon labeling experiments is facilitated by a flexible software tool for network synthesis. An illustrative simulation study on flux identifiability from available flux and labeling measurements in the cyclic pentose phosphate pathway of a recombinant strain of Zymomonas mobilis concludes this contribution.
Global Evolution of Plasmaspheric Plasma: Spacecraft-Model Reconstructions
NASA Astrophysics Data System (ADS)
Walsh, B.; Welling, D. T.; Morley, S.
2017-12-01
During times of geomagnetic disturbance, material from the plasmasphere will move radially outward into the magnetosphere. Once introduced to the outer magnetosphere, this material has been shown to impact a variety of plasma populations as well as the coupling of energy from the solar wind into the magnetosphere and ionosphere. The magnitude of any of these effects is inherently linked to the density and evolution of the plasmaspheric plasma. Much of our understanding of how this population behaves in the outer magnetosphere is, however, based on statistical pictures and model results. Here, in-situ measurements from 10 spacecraft are used to constrain a coupled, global numerical model in order to identify the true spatial extents, time histories, and densities of the plasmasphere and plumes in the outer magnetosphere.
NASA Technical Reports Server (NTRS)
Mason, G. M.; Ng, C. K.; Klecker, B.; Green, G.
1989-01-01
Impulsive solar energetic particle (SEP) events are studied to: (1) describe a distinct class of SEP ion events observed in interplanetary space, and (2) test models of focused transport through detailed comparisons of numerical model prediction with the data. An attempt will also be made to describe the transport and scattering properties of the interplanetary medium during the times these events are observed and to derive source injection profiles in these events. ISEE 3 and Helios 1 magnetic field and plasma data are used to locate the approximate coronal connection points of the spacecraft to organize the particle anisotropy data and to constrain some free parameters in the modeling of flare events.
Majorana dark matter with B+L gauge symmetry
Chao, Wei; Guo, Huai-Ke; Zhang, Yongchao
2017-04-07
Here, we present a new model that extends the Standard Model (SM) with the local B + L symmetry, and point out that the lightest new fermion, introduced to cancel anomalies and stabilized automatically by the B + L symmetry, can serve as the cold dark matter candidate. We also study constraints on the model from Higgs measurements, electroweak precision measurements, as well as the relic density and direct detections of the dark matter. Our numerical results reveal that the pseudo-vector coupling with Z and the Yukawa coupling with the SM Higgs are highly constrained by the latest results of LUX, while there is viable parameter space that could satisfy all the constraints and give testable predictions.
NASA Astrophysics Data System (ADS)
Lange, J.; O'Shaughnessy, R.; Boyle, M.; Calderón Bustillo, J.; Campanelli, M.; Chu, T.; Clark, J. A.; Demos, N.; Fong, H.; Healy, J.; Hemberger, D. A.; Hinder, I.; Jani, K.; Khamesra, B.; Kidder, L. E.; Kumar, P.; Laguna, P.; Lousto, C. O.; Lovelace, G.; Ossokine, S.; Pfeiffer, H.; Scheel, M. A.; Shoemaker, D. M.; Szilagyi, B.; Teukolsky, S.; Zlochower, Y.
2017-11-01
We present and assess a Bayesian method to interpret gravitational wave signals from binary black holes. Our method directly compares gravitational wave data to numerical relativity (NR) simulations. In this study, we present a detailed investigation of the systematic and statistical parameter estimation errors of this method. This procedure bypasses approximations used in semianalytical models for compact binary coalescence. In this work, we use the full posterior parameter distribution for only generic nonprecessing binaries, drawing inferences away from the set of NR simulations used, via interpolation of a single scalar quantity (the marginalized log likelihood, ln L ) evaluated by comparing data to nonprecessing binary black hole simulations. We also compare the data to generic simulations, and discuss the effectiveness of this procedure for generic sources. We specifically assess the impact of higher order modes, repeating our interpretation with both l ≤2 as well as l ≤3 harmonic modes. Using the l ≤3 higher modes, we gain more information from the signal and can better constrain the parameters of the gravitational wave signal. We assess and quantify several sources of systematic error that our procedure could introduce, including simulation resolution and duration; most are negligible. We show through examples that our method can recover the parameters for equal mass, zero spin, GW150914-like, and unequal mass, precessing spin sources. Our study of this new parameter estimation method demonstrates that we can quantify and understand the systematic and statistical error. This method allows us to use higher order modes from numerical relativity simulations to better constrain the black hole binary parameters.
Dynamics of Numerics & Spurious Behaviors in CFD Computations. Revised
NASA Technical Reports Server (NTRS)
Yee, Helen C.; Sweby, Peter K.
1997-01-01
The global nonlinear behavior of finite discretizations for constant time steps and fixed or adaptive grid spacings is studied using tools from dynamical systems theory. Detailed analysis of commonly used temporal and spatial discretizations for simple model problems is presented. The role of dynamics in the understanding of long time behavior of numerical integration and the nonlinear stability, convergence, and reliability of using time-marching approaches for obtaining steady-state numerical solutions in computational fluid dynamics (CFD) is explored. The study is complemented with examples of spurious behavior observed in steady and unsteady CFD computations. The CFD examples were chosen to illustrate non-apparent spurious behavior that was difficult to detect without extensive grid and temporal refinement studies and some knowledge from dynamical systems theory. Studies revealed the various possible dangers of misinterpreting numerical simulation of realistic complex flows that are constrained by available computing power. In large scale computations where the physics of the problem under study is not well understood and numerical simulations are the only viable means of solution, extreme care must be taken in both computation and interpretation of the numerical data. The goal of this paper is to explore the important role that dynamical systems theory can play in the understanding of the global nonlinear behavior of numerical algorithms and to aid the identification of the sources of numerical uncertainties in CFD.
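A one-line model problem conveys the theme: explicit Euler applied to the logistic equation du/dt = u(1 - u) reaches the correct steady state for a small step but settles into a bounded, spurious oscillation when the step is too large. This is a generic textbook illustration, not one of the CFD examples of the paper.

```python
# Spurious asymptotic behavior of explicit Euler on du/dt = u(1 - u).
def euler_march(u0, dt, nsteps):
    u = u0
    for _ in range(nsteps):
        u = u + dt * u * (1.0 - u)
    return u

for dt in (0.5, 2.5):
    # For dt = 0.5 both iterates sit at the true steady state u = 1;
    # for dt = 2.5 the iteration stays bounded but oscillates spuriously.
    u_a = euler_march(0.1, dt, 2000)
    u_b = euler_march(0.1, dt, 2001)
    print(f"dt = {dt}: consecutive late iterates = {u_a:.4f}, {u_b:.4f}")
```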
NASA Technical Reports Server (NTRS)
Mayr, H. G.; Mengel, J. G.; Chan, K. L.; Trob, D.; Porter, H. C.; Einaudi, Franco (Technical Monitor)
2000-01-01
Special Session: SA03 The mesosphere/lower thermosphere region: Structure, dynamics, composition, and emission. Ground-based and satellite observations in the upper mesosphere and lower thermosphere (MLT) reveal large seasonal variations in the horizontal wind fields of the diurnal and semidiurnal tides. To provide an understanding of the observations, we discuss results obtained with our Numerical Spectral Model (NSM) that incorporates the gravity wave Doppler Spread Parameterization (DSP) of Hines. Our model reproduces many of the salient features observed, and we discuss numerical experiments that delineate the important processes involved. Gravity wave momentum deposition and the seasonal variations in the tidal excitation contribute primarily to produce the large equinoctial amplitude maxima in the diurnal tide. Gravity-wave-induced variations in eddy viscosity, not accounted for in the model, have been shown by Akmaev to be important too. For the semidiurnal tide, with amplitude maximum observed during winter solstice, these processes also contribute, but filtering by the mean zonal circulation is more important. A deficiency of our model is that it cannot reproduce the observed seasonal variations in the phase of the semidiurnal tide, and numerical experiments are being carried out to diagnose the cause and to alleviate this problem. The dynamical components of the upper mesosphere are tightly coupled through non-linear processes and wave filtering, and this may constrain the model and require it to reproduce in detail the observed phenomenology.
Changing the scale of hydrogeophysical aquifer heterogeneity characterization
NASA Astrophysics Data System (ADS)
Paradis, Daniel; Tremblay, Laurie; Ruggeri, Paolo; Brunet, Patrick; Fabien-Ouellet, Gabriel; Gloaguen, Erwan; Holliger, Klaus; Irving, James; Molson, John; Lefebvre, Rene
2015-04-01
Contaminant remediation and management require the quantitative predictive capabilities of groundwater flow and mass transport numerical models. Such models have to encompass source zones and receptors, and thus typically cover several square kilometers. To predict the path and fate of contaminant plumes, these models have to represent the heterogeneous distribution of hydraulic conductivity (K). However, hydrogeophysics has generally been used to image relatively restricted areas of the subsurface (small fractions of km2), so there is a need for approaches defining heterogeneity at larger scales and providing data to constrain conceptual and numerical models of aquifer systems. This communication describes a workflow defining aquifer heterogeneity that was applied over a 12 km2 sub-watershed surrounding a decommissioned landfill emitting landfill leachate. The aquifer is a shallow, 10 to 20 m thick, highly heterogeneous and anisotropic assemblage of littoral sand and silt. Field work involved the acquisition of a broad range of data: geological, hydraulic, geophysical, and geochemical. The emphasis was put on high resolution and continuous hydrogeophysical data, the use of direct-push fully-screened wells and the acquisition of targeted high-resolution hydraulic data covering the range of observed aquifer materials. The main methods were: 1) surface geophysics (ground-penetrating radar and electrical resistivity); 2) direct-push operations with a geotechnical drilling rig (cone penetration tests with soil moisture resistivity CPT/SMR; full-screen well installation); and 3) borehole operations, including high-resolution hydraulic tests and geochemical sampling. New methods were developed to acquire high vertical resolution hydraulic data in direct-push wells, including both vertical and horizontal K (Kv and Kh). Various data integration approaches were used to represent aquifer properties in 1D, 2D and 3D. Using relevance vector machines (RVM), the mechanical and geophysical CPT/SMR measurements were used to recognize hydrofacies (HF) and obtain high-resolution 1D vertical profiles of hydraulic properties. Bayesian sequential simulation of the low-resolution surface-based geoelectrical measurements as well as high-resolution direct-push measurements of the electrical and hydraulic conductivities provided realistic estimates of the spatial distribution of K on a 250-m-long 2D survey line. Following a similar approach, all 1D vertical profiles of K derived from CPT/SMR soundings were integrated with available 2D geoelectrical profiles to obtain the 3D distribution of K over the study area. Numerical models were developed to understand flow and mass transport and assess how indicators could constrain model results and their K distributions. A 2D vertical section model was first developed based on a conceptual representation of heterogeneity which showed a significant effect of layering on flow and transport. The model demonstrated that solute and age tracers provide key model constraints. Additional 2D vertical section models with synthetic representations of low and high K hydrofacies were also developed on the basis of CPT/SMR soundings. These models showed that high-resolution profiles of hydraulic head could help constrain the spatial distribution and continuity of hydrofacies.
History matching approaches are still required to simulate geostatistical models of K using hydrogeophysical data, while considering their impact on flow and transport with constraints provided by tracers of solutes and groundwater age.
NASA Astrophysics Data System (ADS)
Roche, V. M.; Sternai, P.; Guillou-Frottier, L.; Jolivet, L.; Gerya, T.
2016-12-01
The Aegean-Anatolian retreating subduction and collision zones have been investigated through 3D numerical geodynamic models involving slab rollback/tearing/breakoff constrained by, for instance, seismic tomography or anisotropy and geochemical proxies. Here we integrate these investigations by using geothermal anomaly measurements from western Turkey. Such data provides insights into the thermal state of the Aegean-Anatolian region at depth and reflects the development of a widespread active geothermal province that is unlikely to be related only to the Quaternary volcanism because this has a too limited extent in space and time. Firstly, we look for possible connections with larger-scale mantle dynamics and use 3D high-resolution petrological and thermo-mechanical numerical models to quantify the potential contribution of the Aegean-Anatolian subduction dynamics to such measured thermal anomalies. Secondly, the subduction-induced thermal signature at the base of the continental crust is then inserted as the imposed basal thermal condition of 2D models dedicated to the understanding of fluid flow in the shallow crust. These models couple heat transfer and fluid flow equations with appropriate fluid and rock physical properties. Results from the 3D numerical models suggest an efficient control of subduction-related asthenospheric return flow on the regional distribution of thermal anomalies. Results from the 2D numerical models also highlight that low angle normal faults (detachments) in the back-arc region can control the bulk of the heat transport and fluid circulation patterns. Such detachments can drain hot crustal and/or mantellic fluids down to several kilometers depths, thus allowing for or fostering deep fluid circulation.
NASA Technical Reports Server (NTRS)
Cohn, S. E.
1982-01-01
Numerical weather prediction (NWP) is an initial-value problem for a system of nonlinear differential equations, in which initial values are known incompletely and inaccurately. Observational data available at the initial time must therefore be supplemented by data available prior to the initial time, a problem known as meteorological data assimilation. A further complication in NWP is that solutions of the governing equations evolve on two different time scales, a fast one and a slow one, whereas fast-scale motions in the atmosphere are not reliably observed. This leads to the so-called initialization problem: initial values must be constrained to result in a slowly evolving forecast. The theory of estimation of stochastic dynamic systems provides a natural approach to such problems. For linear stochastic dynamic models, the Kalman-Bucy (KB) sequential filter is the optimal data assimilation method; for linear models, the optimal combined data assimilation-initialization method is a modified version of the KB filter.
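For readers unfamiliar with the KB filter, the sketch below runs the discrete sequential (predict/update) form on a toy two-variable linear model; the matrices are illustrative assumptions, not an NWP system.

```python
import numpy as np

# Discrete sequential Kalman filter for x_{k+1} = M x_k + w_k, y_k = H x_k + v_k.
rng = np.random.default_rng(3)
M = np.array([[1.0, 0.1], [0.0, 0.95]])      # model dynamics (toy)
H = np.array([[1.0, 0.0]])                   # only the first variable is observed
Q = 0.01 * np.eye(2)                         # model error covariance
R = np.array([[0.1]])                        # observation error covariance

x_true = np.array([1.0, 0.5])
x_a, P_a = np.zeros(2), np.eye(2)            # analysis state and covariance

for k in range(50):
    # Truth evolution and synthetic observation
    x_true = M @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    y = H @ x_true + rng.multivariate_normal(np.zeros(1), R)

    # Forecast (predict) step
    x_f = M @ x_a
    P_f = M @ P_a @ M.T + Q

    # Analysis (update) step with the Kalman gain
    K = P_f @ H.T @ np.linalg.inv(H @ P_f @ H.T + R)
    x_a = x_f + K @ (y - H @ x_f)
    P_a = (np.eye(2) - K @ H) @ P_f

print("final analysis error:", np.abs(x_a - x_true))
```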
Physical mechanisms leading to two-dimensional gas content evolution within a volcanic conduit
NASA Astrophysics Data System (ADS)
Collombet, M.; Burgisser, A.; Chevalier, L. A. C.
2017-12-01
The eruption of viscous magma at the Earth's surface often gives rise to abrupt regime changes. The transition from the gentle effusion of a lava dome to brief but powerful explosions is a common regime change. This transition is often preceded by the sealing of the shallow part of the volcanic conduit and the accumulation of volatile-rich magma underneath, a situation that collects the energy to be brutally released during the subsequent explosion. While conduit sealing is well-documented, volatile accumulation has proven harder to characterize. In this study, we use a 2D conduit flow numerical model including gas loss within the magma and into the wallrock to follow the evolution of gas content during a regime transition. Using various initial porosity distributions, permeability laws and boundary conditions, we track the physical parameters that prevent or enhance gas escape from the magma. Our approach aims to identify the physical processes controlling eruptive transitions and to highlight the importance of using field data observations to constrain numerical models.
A parallel direct-forcing fictitious domain method for simulating microswimmers
NASA Astrophysics Data System (ADS)
Gao, Tong; Lin, Zhaowu
2017-11-01
We present a 3D parallel direct-forcing fictitious domain method for simulating swimming micro-organisms at small Reynolds numbers. We treat the motile micro-swimmers as spherical rigid particles using the ``Squirmer'' model. The particle dynamics are solved on moving Lagrangian meshes that overlay a fixed Eulerian mesh for solving the fluid motion, and the momentum exchange between the two phases is resolved by distributing pseudo body-forces over the particle interior regions which constrain the background fictitious fluids to follow the particle movement. While the solid and fluid subproblems are solved separately, no inner iterations are required to enforce numerical convergence. We demonstrate the accuracy and robustness of the method by comparing our results with existing analytical and numerical studies for various cases of single particle dynamics and particle-particle interactions. We also perform a series of numerical explorations to obtain statistical and rheological measurements to characterize the dynamics and structures of Squirmer suspensions. NSF DMS 1619960.
Bootstrapping the (A1, A2) Argyres-Douglas theory
NASA Astrophysics Data System (ADS)
Cornagliotto, Martina; Lemos, Madalena; Liendo, Pedro
2018-03-01
We apply bootstrap techniques in order to constrain the CFT data of the ( A 1 , A 2) Argyres-Douglas theory, which is arguably the simplest of the Argyres-Douglas models. We study the four-point function of its single Coulomb branch chiral ring generator and put numerical bounds on the low-lying spectrum of the theory. Of particular interest is an infinite family of semi-short multiplets labeled by the spin ℓ. Although the conformal dimensions of these multiplets are protected, their three-point functions are not. Using the numerical bootstrap we impose rigorous upper and lower bounds on their values for spins up to ℓ = 20. Through a recently obtained inversion formula, we also estimate them for sufficiently large ℓ, and the comparison of both approaches shows consistent results. We also give a rigorous numerical range for the OPE coefficient of the next operator in the chiral ring, and estimates for the dimension of the first R-symmetry neutral non-protected multiplet for small spin.
Thermal and Mechanical Buckling and Postbuckling Responses of Selected Curved Composite Panels
NASA Technical Reports Server (NTRS)
Breivik, Nicole L.; Hyer, Michael W.; Starnes, James H., Jr.
1998-01-01
The results of an experimental and numerical study of the buckling and postbuckling responses of selected unstiffened curved composite panels subjected to mechanical end shortening and a uniform temperature increase are presented. The uniform temperature increase induces thermal stresses in the panel when the axial displacement is constrained. An apparatus for testing curved panels at elevated temperature is described, and numerical results generated by using a geometrically nonlinear finite element analysis code are presented. Several analytical modeling refinements that provide a more accurate representation of the actual experimental conditions, and the relative contribution of each refinement, are discussed. Experimental results and numerical predictions are presented and compared for three loading conditions including mechanical end shortening alone, heating the panels to 250 F followed by mechanical end shortening, and heating the panels to 400 F. Changes in the coefficients of thermal expansion were observed as temperature was increased above 330 F. The effects of these changes on the experimental results are discussed for temperatures up to 400 F.
McGonigle, A. J. S.; James, M. R.; Tamburello, G.; Aiuppa, A.; Delle Donne, D.; Ripepe, M.
2016-01-01
Recent gas flux measurements have shown that Strombolian explosions are often followed by periods of elevated flux, or “gas codas,” with durations of order a minute. Here we present UV camera data from 200 events recorded at Stromboli volcano to constrain the nature of these codas for the first time, providing estimates for combined explosion plus coda SO2 masses of ≈18–225 kg. Numerical simulations of gas slug ascent show that substantial proportions of the initial gas mass can be distributed into a train of “daughter bubbles” released from the base of the slug, which we suggest generate the codas on bursting at the surface. This process could also cause transitioning of slugs into cap bubbles, significantly reducing explosivity. This study is the first attempt to combine high temporal resolution gas flux data with numerical simulations of conduit gas flow to investigate volcanic degassing dynamics. PMID:27478285
Numerical and Experimental Approaches Toward Understanding Lava Flow Heat Transfer
NASA Astrophysics Data System (ADS)
Rumpf, M.; Fagents, S. A.; Hamilton, C.; Crawford, I. A.
2013-12-01
We have performed numerical modeling and experimental studies to quantify the heat transfer from a lava flow into an underlying particulate substrate. This project was initially motivated by a desire to understand the transfer of heat from a lava flow into the lunar regolith. Ancient regolith deposits that have been protected by a lava flow may contain ancient solar wind, solar flare, and galactic cosmic ray products that can give insight into the history of our solar system, provided the records were not heated and destroyed by the overlying lava flow. In addition, lava-substrate interaction is an important aspect of lava fluid dynamics that requires consideration in lava emplacement models. Our numerical model determines the depth to which the heat pulse will penetrate beneath a lava flow into the underlying substrate. Rigorous treatment of the temperature dependence of lava and substrate thermal conductivity and specific heat capacity, density, and latent heat release is imperative to an accurate model. Experiments were conducted to verify the numerical model. Experimental containers with interior dimensions of 20 x 20 x 25 cm were constructed from 1 inch thick calcium silicate sheeting. For initial experiments, boxes were packed with lunar regolith simulant (GSC-1) to a depth of 15 cm with thermocouples embedded at regular intervals. Basalt collected at Kilauea Volcano, HI, was melted in a gas forge and poured directly onto the simulant. Initial lava temperatures ranged from ~1200 to 1300 °C. The system was allowed to cool while internal temperatures were monitored by a thermocouple array and external temperatures were monitored by a Forward Looking Infrared (FLIR) video camera. Numerical simulations of the experiments elucidate the details of lava latent heat release and constrain the temperature-dependence of the thermal conductivity of the particulate substrate. The temperature-dependence of thermal conductivity of particulate material is not well known, especially at high temperatures. It is important to have this property well constrained because substrate thermal conductivity has the greatest influence on the rate of lava-substrate heat transfer. At Kilauea and Mauna Loa Volcanoes, Hawaii, and other volcanoes that threaten communities, lava may erupt over a variety of substrate materials including cool lava flows, volcanic tephra, soils, sand, and concrete. The composition, moisture, organic content, porosity, and grain size of the substrate dictate the thermophysical properties, thus affecting the transfer of heat from the lava flow into the substrate and flow mobility. Particulate substrate materials act as insulators, subduing the rate of heat transfer from the flow core. Therefore, lava that flows over a particulate substrate will maintain higher core temperatures over a longer period, enhancing flow mobility and increasing the duration and areal coverage of the resulting flow. Lava flow prediction models should include substrate specification with temperature dependent material property definitions for an accurate understanding of flow hazards.
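A much reduced version of such a calculation is sketched below: explicit finite differences for one-dimensional conduction of the lava's heat pulse into a particulate substrate with a temperature-dependent conductivity. The conductivity law and all parameter values are placeholders, and latent heat release is omitted, so this is not the authors' model.

```python
import numpy as np

# 1D explicit finite-difference conduction into a particulate substrate
# (illustrative placeholders only; no latent heat in the lava).
nz, dz = 200, 0.005                 # 1 m of substrate at 5 mm resolution
rho, cp = 1600.0, 800.0             # bulk density (kg/m^3), heat capacity (J/kg/K)

def k_substrate(T):
    # Placeholder conductivity rising mildly with temperature (W/m/K)
    return 0.15 + 2.0e-4 * T

T = np.full(nz, 25.0)               # initial substrate temperature (deg C)
T_lava = 1200.0                     # contact temperature held fixed at the top

t, t_end = 0.0, 3600.0 * 24.0       # simulate one day
while t < t_end:
    k = k_substrate(T)
    dt = 0.4 * rho * cp * dz**2 / k.max()             # explicit stability limit
    flux = -0.5 * (k[1:] + k[:-1]) * np.diff(T) / dz  # internodal heat flux (downward +)
    top_flux = -k[0] * (T[0] - T_lava) / dz           # flux in from the lava contact
    dTdt = -np.diff(flux, prepend=top_flux, append=0.0) / (rho * cp * dz)
    T = T + dt * dTdt
    t += dt

print("substrate temperature at 10 cm depth after one day: %.1f deg C" % T[20])
```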
Patterns of deoxygenation: sensitivity to natural and anthropogenic drivers
NASA Astrophysics Data System (ADS)
Oschlies, Andreas; Duteil, Olaf; Getzlaff, Julia; Koeve, Wolfgang; Landolfi, Angela; Schmidtko, Sunke
2017-08-01
Observational estimates and numerical models both indicate a significant overall decline in marine oxygen levels over the past few decades. Spatial patterns of oxygen change, however, differ considerably between observed and modelled estimates. Particularly in the tropical thermocline that hosts open-ocean oxygen minimum zones, observations indicate a general oxygen decline, whereas most of the state-of-the-art models simulate increasing oxygen levels. Possible reasons for the apparent model-data discrepancies are examined. In order to attribute observed historical variations in oxygen levels, we here study mechanisms of changes in oxygen supply and consumption with sensitivity model simulations. Specifically, the role of equatorial jets, of lateral and diapycnal mixing processes, of changes in the wind-driven circulation and atmospheric nutrient supply, and of some poorly constrained biogeochemical processes are investigated. Predominantly wind-driven changes in the low-latitude oceanic ventilation are identified as a possible factor contributing to observed oxygen changes in the low-latitude thermocline during the past decades, while the potential role of biogeochemical processes remains difficult to constrain. We discuss implications for the attribution of observed oxygen changes to anthropogenic impacts and research priorities that may help to improve our mechanistic understanding of oxygen changes and the quality of projections into a changing future. This article is part of the themed issue 'Ocean ventilation and deoxygenation in a warming world'.
Dynamic analysis and control of lightweight manipulators with flexible parallel link mechanisms
NASA Technical Reports Server (NTRS)
Lee, Jeh Won
1991-01-01
The flexible parallel link mechanism is designed with increased rigidity to resist buckling when it carries a heavy payload. Compared to a one-link flexible manipulator, a two-link flexible manipulator, especially one with a flexible parallel mechanism, has more complicated dynamic and control characteristics. The objective of this research is the theoretical analysis and experimental verification of the dynamics and control of a two-link flexible manipulator with a flexible parallel link mechanism. Nonlinear equations of motion of the lightweight manipulator are derived by the Lagrangian method in symbolic form to better understand the structure of the dynamic model. A manipulator with a flexible parallel link mechanism is a constrained dynamic system whose equations are sensitive to numerical integration error. This constrained system is solved using singular value decomposition of the constraint Jacobian matrix. The discrepancies between the analytical model and the experiment are explained using a simplified and a detailed finite element model. The step responses of the analytical model and the TREETOPS model match each other well. The nonlinear dynamics are studied using a sinusoidal excitation. The effect of actuator dynamics on the flexible robot is also investigated; the effects are explained theoretically and experimentally using root loci and Bode plots. As a performance baseline for the advanced control scheme, a simple decoupled feedback scheme is applied.
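The sketch below illustrates the null-space style of solution that an SVD of the constraint Jacobian enables: accelerations are split into a particular part that satisfies the constraint and a homogeneous part spanned by the null-space basis. The three-coordinate system and the single constraint are toy placeholders, not the manipulator model of the thesis.

```python
import numpy as np

# Minimal sketch: constrained equations of motion M qdd = Q + J^T lam with
# J qdd = -Jdot qdot, solved via an SVD-based null-space projection.

def constrained_accel(M, Q, J, Jdot_qdot):
    U, s, Vt = np.linalg.svd(J)
    rank = np.sum(s > 1e-10 * s.max())
    N = Vt[rank:].T                              # null-space basis of J
    qdd_p = np.linalg.pinv(J) @ (-Jdot_qdot)     # particular solution of the constraint
    A = N.T @ M @ N                              # reduced (well-conditioned) mass matrix
    b = N.T @ (Q - M @ qdd_p)
    z = np.linalg.solve(A, b)
    return qdd_p + N @ z

M = np.diag([2.0, 1.0, 0.5])                     # toy generalized mass matrix
Q = np.array([0.0, -9.81, 1.0])                  # applied generalized forces
J = np.array([[1.0, -1.0, 0.0]])                 # toy constraint: qdot1 - qdot2 = 0
Jdot_qdot = np.array([0.0])                      # J constant here, so Jdot*qdot = 0

qdd = constrained_accel(M, Q, J, Jdot_qdot)
print("constrained accelerations:", qdd)
print("constraint residual J*qdd:", J @ qdd)     # should be ~0
```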
NASA Technical Reports Server (NTRS)
Padovan, J.; Tovichakchaikul, S.
1983-01-01
This paper will develop a new solution strategy which can handle elastic-plastic-creep problems in an inherently stable manner. This is achieved by introducing a new constrained time stepping algorithm which enables the solution of creep-initiated pre/postbuckling behavior where indefinite tangent stiffnesses are encountered. Due to the generality of the scheme, both monotone and cyclic loading histories can be handled. The presentation will give a thorough overview of current solution schemes and their shortcomings, develop the constrained time stepping algorithms, and illustrate the results of several numerical experiments which benchmark the new procedure.
Developing Tools to Test the Thermo-Mechanical Models, Examples at Crustal and Upper Mantle Scale
NASA Astrophysics Data System (ADS)
Le Pourhiet, L.; Yamato, P.; Burov, E.; Gurnis, M.
2005-12-01
Testing geodynamical models is never an easy task. Depending on the spatio-temporal scale of the model, different testable predictions are needed and no magic recipe exists. This contribution first presents different methods that have been used to test thermo-mechanical modeling results at upper crustal, lithospheric and upper mantle scales, using three geodynamical examples: the Gulf of Corinth (Greece), the Western Alps, and the Sierra Nevada. At short spatio-temporal scales (e.g., the Gulf of Corinth), the resolution of the numerical models is usually sufficient to capture the timing and kinematics of the faults precisely enough to be tested by tectono-stratigraphic arguments. In actively deforming areas, microseismicity can be compared to the effective rheology, and the P and T axes of focal mechanisms can be compared with the local orientation of the major component of the stress tensor. At lithospheric scale, the resolution of the models no longer permits constraining them by direct observations (i.e., structural data from the field or seismic reflection). Instead, synthetic P-T-t paths may be computed and compared to natural ones in terms of exhumation rates for ancient orogens. Topography may also help, but on continents it mainly depends on erosion laws that are difficult to constrain. Deeper in the mantle, the only available constraints are long-wavelength topographic data and tomographic "data". The major problem to overcome at lithospheric and upper mantle scales is that these so-called "data" actually result from inverse models of the real data, and those inverse models are themselves based on synthetic models. Post-processing P and S wave velocities is not sufficient to make testable predictions at upper mantle scale. Instead, direct wave propagation models must be computed. This allows checking whether the differences between two models constitute a testable prediction or not. In the longer term, we may be able to use these synthetic models to reduce the residual in the inversion of elastic wave arrival times.
Topological quantum error correction in the Kitaev honeycomb model
NASA Astrophysics Data System (ADS)
Lee, Yi-Chan; Brell, Courtney G.; Flammia, Steven T.
2017-08-01
The Kitaev honeycomb model is an approximate topological quantum error correcting code in the same phase as the toric code, but requiring only a 2-body Hamiltonian. As a frustrated spin model, it is well outside the commuting models of topological quantum codes that are typically studied, but its exact solubility makes it more amenable to analysis of effects arising in this noncommutative setting than a generic topologically ordered Hamiltonian. Here we study quantum error correction in the honeycomb model using both analytic and numerical techniques. We first prove explicit exponential bounds on the approximate degeneracy, local indistinguishability, and correctability of the code space. These bounds are tighter than can be achieved using known general properties of topological phases. Our proofs are specialized to the honeycomb model, but some of the methods may nonetheless be of broader interest. Following this, we numerically study noise caused by thermalization processes in the perturbative regime close to the toric code renormalization group fixed point. The appearance of non-topological excitations in this setting has no significant effect on the error correction properties of the honeycomb model in the regimes we study. Although the behavior of this model is found to be qualitatively similar to that of the standard toric code in most regimes, we find numerical evidence of an interesting effect in the low-temperature, finite-size regime where a preferred lattice direction emerges and anyon diffusion is geometrically constrained. We expect this effect to yield an improvement in the scaling of the lifetime with system size as compared to the standard toric code.
NASA Astrophysics Data System (ADS)
Hartland, Tucker A.; Schilling, Oleg
2016-11-01
Analytical self-similar solutions corresponding to Rayleigh-Taylor, Richtmyer-Meshkov and Kelvin-Helmholtz instability are combined with observed values of the growth parameters in these instabilities to derive coefficient sets for K-ε and K-L-a Reynolds-averaged turbulence models. It is shown that full numerical solutions of the model equations give mixing layer widths, fields, and budgets in good agreement with the corresponding self-similar quantities for small Atwood number. Both models are then applied to Rayleigh-Taylor instability with increasing density contrasts to estimate the Atwood number above which the self-similar solutions become invalid. The models are also applied to a reshocked Richtmyer-Meshkov instability, and the predictions are compared with data. The expressions for the growth parameters obtained from the similarity analysis are used to develop estimates for the sensitivity of their values to changes in important model coefficients. Numerical simulations using these modified coefficient values are then performed to provide bounds on the model predictions associated with uncertainties in these coefficient values. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. This work was supported by the 2016 LLNL High-Energy-Density Physics Summer Student Program.
Optimal Information Processing in Biochemical Networks
NASA Astrophysics Data System (ADS)
Wiggins, Chris
2012-02-01
A variety of experimental results over the past decades provide examples of near-optimal information processing in biological networks, including in biochemical and transcriptional regulatory networks. Computing information-theoretic quantities requires first choosing or computing the joint probability distribution describing multiple nodes in such a network --- for example, representing the probability distribution of finding an integer copy number of each of two interacting reactants or gene products while respecting the `intrinsic' small copy number noise constraining information transmission at the scale of the cell. I'll give an overview of some recent analytic and numerical work facilitating calculation of such joint distributions and the associated information, which in turn makes possible numerical optimization of information flow in models of noisy regulatory and biochemical networks. Illustrative cases include quantification of form-function relations, ideal design of regulatory cascades, and response to oscillatory driving.
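As a concrete illustration of the quantities involved, the sketch below builds a small joint copy-number distribution and evaluates the mutual information between input and output. The Poisson input prior and Poisson gain channel are illustrative toys, not a model from the talk.

```python
import numpy as np
from scipy.stats import poisson

# Minimal sketch: mutual information between the copy numbers of two species,
# computed from an explicit (truncated) joint distribution.

nmax = 60
x = np.arange(nmax)                                # input copy number
y = np.arange(nmax)                                # output copy number
p_x = poisson.pmf(x, mu=8.0)                       # assumed prior over input copies
gain = 2.0
# conditional p(y|x): Poisson "intrinsic" noise around gain*x (offset avoids mu=0)
p_y_given_x = poisson.pmf(y[None, :], mu=gain * x[:, None] + 1e-6)

p_xy = p_x[:, None] * p_y_given_x                  # joint distribution p(x, y)
p_xy /= p_xy.sum()                                 # renormalize after truncation
px_m = p_xy.sum(axis=1)                            # marginals of the joint
py_m = p_xy.sum(axis=0)
indep = np.outer(px_m, py_m)

mask = p_xy > 0
mi_bits = np.sum(p_xy[mask] * np.log2(p_xy[mask] / indep[mask]))
print(f"mutual information ~ {mi_bits:.2f} bits")
```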
Modular constraints on conformal field theories with currents
NASA Astrophysics Data System (ADS)
Bae, Jin-Beom; Lee, Sungjay; Song, Jaewon
2017-12-01
We study constraints coming from the modular invariance of the partition function of two-dimensional conformal field theories. We constrain the spectrum of CFTs in the presence of holomorphic and anti-holomorphic currents using semi-definite programming. In particular, we find that the bounds on the twist gap for the non-current primaries depend dramatically on the presence of holomorphic currents, showing numerous kinks and peaks. Various rational CFTs are realized at the numerical boundary of the twist gap, saturating the upper limits on the degeneracies. Such theories include Wess-Zumino-Witten models for Deligne's exceptional series, the Monster CFT and the Baby Monster CFT. We also study modular constraints imposed by W-algebras of various types and observe that the bounds on the gap depend on the choice of W-algebra in the small central charge region.
Analysis of Surface Heterogeneity Effects with Mesoscale Terrestrial Modeling Platforms
NASA Astrophysics Data System (ADS)
Simmer, C.
2015-12-01
An improved understanding of the full variability in the weather and climate system is crucial for reducing the uncertainty in weather forecasting and climate prediction, and to aid policy makers in developing adaptation and mitigation strategies. An as yet unquantified part of the uncertainty in predictions from numerical models is caused by the neglect of unresolved land surface heterogeneity and subsurface dynamics and their potential impact on the state of the atmosphere. At the same time, mesoscale numerical models using finer horizontal grid resolution [O(1) km] can suffer from inconsistencies and neglected scale dependencies in ABL parameterizations and from unresolved effects of integrated surface-subsurface lateral flow at this scale. Our present knowledge suggests large-eddy simulation (LES) as an eventual solution to overcome the inadequacy of the physical parameterizations in the atmosphere at this transition scale, yet we are constrained by computational resources, memory management, and big data issues when using LES for regional domains. For the present, there is a need for scale-aware parameterizations not only in the atmosphere but also in the land surface and subsurface model components. In this study, we use the recently developed Terrestrial Systems Modeling Platform (TerrSysMP) as a numerical tool to analyze the uncertainty in the simulation of surface exchange fluxes and boundary layer circulations at grid resolutions of the order of 1 km, and explore the sensitivity of the atmospheric boundary layer evolution and convective rainfall processes to land surface heterogeneity.
NASA Astrophysics Data System (ADS)
Jung, Youngjean
This dissertation concerns the constitutive description of superelasticity in NiTi alloys and the finite element analysis of a corresponding material model at large strains. Constitutive laws for shape-memory alloys subject to biaxial loading, which are based on direct experimental observations, are generally not available. A reliable constitutive model for shape-memory alloys is important for various applications because Nitinol is now widely used in biotechnology devices such as endovascular stents, vena cava filters, dental files, archwires and guidewires, etc. As part of a broader project, tension-torsion tests are conducted on thin-walled tubes (thickness/radius ratio of 1:10) of the polycrystalline superelastic Nitinol using various loading/unloading paths under isothermal conditions. This biaxial loading/unloading test was carefully designed to avoid torsional buckling and strain non-uniformities. A micromechanical constitutive model, algorithmic implementation and numerical simulation of polycrystalline superelastic alloys under biaxial loading are developed. The constitutive model is based on the micromechanical structure of Ni-Ti crystals and accounts for the physical observation of solid-solid phase transformations through the minimization of the Helmholtz energy with dissipation. The model is formulated in finite deformations and incorporates the effect of texture which is of profound significance in the mechanical response of polycrystalline Nitinol tubes. The numerical implementation is based on the constrained minimization of a functional corresponding to the Helmholtz energy with dissipation. Special treatment of loading/unloading conditions is also developed to distinguish between forward/reverse transformation state. Simulations are conducted for thin tubes of Nitinol under tension-torsion, as well as for a simplified model of a biomedical stent.
NASA Technical Reports Server (NTRS)
Chin, Jeffrey C.; Csank, Jeffrey T.; Haller, William J.; Seidel, Jonathan A.
2016-01-01
This document outlines methodologies designed to improve the interface between the Numerical Propulsion System Simulation framework and various control and dynamic analyses developed in the Matlab and Simulink environment. Although NPSS is most commonly used for steady-state modeling, this paper is intended to supplement the relatively sparse documentation on its transient analysis functionality. Matlab has become an extremely popular engineering environment, and better methodologies are necessary to develop tools that leverage the benefits of these disparate frameworks. Transient analysis is not a new feature of the Numerical Propulsion System Simulation (NPSS), but transient considerations are becoming more pertinent as multidisciplinary trade-offs begin to play a larger role in advanced engine designs. This paper also covers the budding convergence between NPSS and Matlab based modeling toolsets. The following sections explore various design patterns to rapidly develop transient models. Each approach starts with a base model built with NPSS, and assumes the reader already has a basic understanding of how to construct a steady-state model. The second half of the paper focuses on further enhancements required to subsequently interface NPSS with Matlab codes. The first method is the simplest and most straightforward but performance constrained, while the last is the most abstract. These methods aren't mutually exclusive and the specific implementation details could vary greatly based on the designer's discretion. Basic recommendations are provided to organize model logic in a format most easily amenable to integration with existing Matlab control toolsets.
NASA Astrophysics Data System (ADS)
Peng, Guoyi; Cao, Shuliang; Ishizuka, Masaru; Hayama, Shinji
2002-06-01
This paper is concerned with the design optimization of axial flow hydraulic turbine runner blade geometry. In order to obtain a better design plan with good performance, a new comprehensive performance optimization procedure has been presented by combining a multi-variable multi-objective constrained optimization model with a Q3D inverse computation and a performance prediction procedure. With careful analysis of the inverse design of the axial hydraulic turbine runner, the total hydraulic loss and the cavitation coefficient are taken as optimization objectives and a comprehensive objective function is defined using weight factors. Parameters of a newly proposed blade bound circulation distribution function and parameters describing the positions of the blade leading and trailing edges in the meridional flow passage are taken as optimization variables. The optimization procedure has been applied to the design optimization of a Kaplan runner with specific speed of 440 kW. Numerical results show that the performance of the designed runner is successfully improved through optimization computation. The optimization model is found to be validated and it has the feature of good convergence. With the multi-objective optimization model, it is possible to control the performance of the designed runner by adjusting the values of the weight factors defining the comprehensive objective function.
Mean-Field Description of Ionic Size Effects with Non-Uniform Ionic Sizes: A Numerical Approach
Zhou, Shenggao; Wang, Zhongming; Li, Bo
2013-01-01
Ionic size effects are significant in many biological systems. Mean-field descriptions of such effects can be efficient but also challenging. When ionic sizes are different, explicit formulas in such descriptions are not available for the dependence of the ionic concentrations on the electrostatic potential, i.e., there are no explicit, Boltzmann-type distributions. This work begins with a variational formulation of the continuum electrostatics of an ionic solution with such non-uniform ionic sizes as well as multiple ionic valences. An augmented Lagrange multiplier method is then developed and implemented to numerically solve the underlying constrained optimization problem. The method is shown to be accurate and efficient, and is applied to ionic systems with non-uniform ionic sizes such as the sodium chloride solution. Extensive numerical tests demonstrate that the mean-field model and numerical method capture qualitatively some significant ionic size effects, particularly those for multivalent ionic solutions, such as the stratification of multivalent counterions near a charged surface. The ionic valence-to-volume ratio is found to be the key physical parameter in the stratification of concentrations. All these are not well described by the classical Poisson–Boltzmann theory, or the generalized Poisson–Boltzmann theory that treats uniform ionic sizes. Finally, various issues such as close packing, limitations of the continuum model, and generalization of this work to molecular solvation are discussed. PMID:21929014
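To illustrate the augmented Lagrange multiplier idea in miniature, the sketch below alternates an inner minimization of an augmented objective with a multiplier update for a single linear equality constraint. The tiny convex "free energy" and the constraint are illustrative stand-ins, not the paper's discretized functional.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal sketch: augmented Lagrangian iteration for min f(c) s.t. g(c) = 0.

a = np.array([1.0, 2.0, 3.0])             # constraint coefficients (toy "valences")
b = 1.0                                   # constraint target
v = np.array([0.5, -0.2, 0.1])            # toy external potential term

def f(c):                                 # toy convex free energy (entropy + potential)
    return np.sum(c * np.log(c) + v * c)

def g(c):                                 # equality constraint g(c) = 0
    return a @ c - b

lam, rho = 0.0, 10.0                      # multiplier and penalty parameter
c = np.full(3, 0.3)                       # starting concentrations
for k in range(20):
    L_aug = lambda c: f(c) + lam * g(c) + 0.5 * rho * g(c) ** 2
    c = minimize(L_aug, c, bounds=[(1e-8, None)] * 3).x   # inner minimization
    lam += rho * g(c)                                     # multiplier update
    if abs(g(c)) < 1e-8:
        break

print("solution:", c, " constraint residual:", g(c))
```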
NASA Astrophysics Data System (ADS)
Nowack, R. L.; Bakir, A. C.; Griffin, J.; Chen, W.; Tseng, T.
2010-12-01
Using data from regional earthquakes recorded by the Hi-CLIMB array in Tibet, we utilize seismic attributes from crustal and Pn arrivals to constrain the velocity and attenuation structure in the crust and the upper mantle in central and western Tibet. The seismic attributes considered include arrival times, Hilbert envelope amplitudes, and instantaneous as well as spectral frequencies. We have constructed more than 30 high-quality regional seismic profiles, and of these, 10 events have been selected with excellent crustal and Pn arrivals for further analysis. Travel-times recorded by the Hi-CLIMB array are used to estimate the large-scale velocity structure in the region, with four near regional events to the array used to constrain the crustal structure. The travel times from the far regional events indicate that the Moho beneath the southern Lhasa terrane is up to 75 km thick, with Pn velocities greater than 8 km/s. In contrast, the data sampling the Qiangtang terrane north of the Bangong-Nujiang (BNS) suture shows thinner crust with Pn velocities less than 8 km/s. Seismic amplitude and frequency attributes have been extracted from the crustal and Pn wave trains, and these data are compared with numerical results for models with upper-mantle velocity gradients and attenuation, which can strongly affect Pn amplitudes and pulse frequencies. The numerical modeling is performed using the complete spectral element method (SEM), where the results from the SEM method are in good agreement with analytical and reflectivity results for different models with upper-mantle velocity gradients. The results for the attenuation modeling in Tibet imply lower upper mantle Q values in the Qiangtang terrane to the north of the BNS compared to the less attenuative upper mantle beneath the Lhasa terrane to the south of the BNS.
A hybrid intelligent algorithm for portfolio selection problem with fuzzy returns
NASA Astrophysics Data System (ADS)
Li, Xiang; Zhang, Yang; Wong, Hau-San; Qin, Zhongfeng
2009-11-01
Portfolio selection theory with fuzzy returns has been well developed and widely applied. Within the framework of credibility theory, several fuzzy portfolio selection models have been proposed, such as the mean-variance model, entropy optimization model, chance-constrained programming model and so on. In order to solve these nonlinear optimization models, a hybrid intelligent algorithm is designed by integrating a simulated annealing algorithm, a neural network and fuzzy simulation techniques, where the neural network is used to approximate the expected value and variance of the fuzzy returns and the fuzzy simulation is used to generate the training data for the neural network. Since these models have usually been solved by genetic algorithms, comparisons between the hybrid intelligent algorithm and the genetic algorithm are given in terms of numerical examples, which imply that the hybrid intelligent algorithm is robust and more effective. In particular, it reduces the running time significantly for large size problems.
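The sketch below shows only the simulated-annealing ingredient of such a hybrid scheme, applied to a mean-variance portfolio with a budget constraint. The crisp toy returns and covariance stand in for the fuzzy expected value and variance that the paper approximates with a neural network.

```python
import numpy as np

# Minimal sketch: simulated annealing for a budget-constrained portfolio.

rng = np.random.default_rng(0)
mu = np.array([0.08, 0.12, 0.10, 0.07])           # toy expected returns
cov = np.diag([0.02, 0.06, 0.04, 0.01])           # toy return covariance
risk_aversion = 3.0

def objective(w):                                  # return minus a risk penalty
    return mu @ w - risk_aversion * w @ cov @ w

def random_neighbor(w, step=0.05):
    w_new = np.clip(w + rng.normal(scale=step, size=w.size), 0.0, None)
    return w_new / w_new.sum()                     # re-project onto the budget simplex

w = np.full(4, 0.25)
best_w, best_val = w, objective(w)
T = 1.0
for it in range(5000):
    cand = random_neighbor(w)
    delta = objective(cand) - objective(w)
    if delta > 0 or rng.random() < np.exp(delta / T):
        w = cand
        if objective(w) > best_val:
            best_w, best_val = w, objective(w)
    T *= 0.999                                     # geometric cooling schedule

print("weights:", np.round(best_w, 3), " objective:", round(best_val, 4))
```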
NASA Astrophysics Data System (ADS)
Gupta, R. K.; Bhunia, A. K.; Roy, D.
2009-10-01
In this paper, we have considered the problem of constrained redundancy allocation for a series system with interval-valued reliability of components. For maximizing the overall system reliability under limited resource constraints, the problem is formulated as an unconstrained integer programming problem with interval coefficients by the penalty function technique and solved by an advanced GA for integer variables with an interval fitness function, tournament selection, uniform crossover, uniform mutation and elitism. As a special case, considering the lower and upper bounds of the interval-valued reliabilities of the components to be the same, the corresponding problem has been solved. The model has been illustrated with some numerical examples and the results of the series redundancy allocation problem with fixed values of component reliability have been compared with the existing results available in the literature. Finally, sensitivity analyses have been shown graphically to study the stability of our developed GA with respect to the different GA parameters.
(2, 2) superconformal bootstrap in two dimensions
Lin, Ying-Hsuan; Shao, Shu-Heng; Wang, Yifan; ...
2017-05-19
We find a simple relation between two-dimensional BPS N = 2 superconformal blocks and bosonic Virasoro conformal blocks, which allows us to analyze the crossing equations for BPS 4-point functions in unitary (2, 2) superconformal theories numerically with semidefinite programming. Here, we constrain gaps in the non-BPS spectrum through the operator product expansion of BPS operators, in ways that depend on the moduli of exactly marginal deformations through chiral ring coefficients. In some cases, our bounds on the spectral gaps are observed to be saturated by free theories, by N = 2 Liouville theory, and by certain Landau-Ginzburg models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jordan, Amy B.; Boukhalfa, Hakim; Caporuscio, Florie Andre
To gain confidence in the predictive capability of numerical models, experimental validation must be performed to ensure that parameters and processes are correctly simulated. The laboratory investigations presented herein aim to address knowledge gaps for heat-generating nuclear waste (HGNW) disposal in bedded salt that remain after examination of prior field and laboratory test data. Primarily, we are interested in better constraining the thermal, hydrological, and physicochemical behavior of brine, water vapor, and salt when moist salt is heated. The target of this work is to use run-of-mine (RoM) salt; however, during FY2015 progress was made using high-purity, granular sodium chloride.
Development of homotopy algorithms for fixed-order mixed H2/H(infinity) controller synthesis
NASA Technical Reports Server (NTRS)
Whorton, M.; Buschek, H.; Calise, A. J.
1994-01-01
A major difficulty associated with H-infinity and mu-synthesis methods is the order of the resulting compensator. Whereas model and/or controller reduction techniques are sometimes applied, performance and robustness properties are not preserved. By directly constraining compensator order during the optimization process, these properties are better preserved, albeit at the expense of computational complexity. This paper presents a novel homotopy algorithm to synthesize fixed-order mixed H2/H-infinity compensators. Numerical results are presented for a four-disk flexible structure to evaluate the efficiency of the algorithm.
A TV-constrained decomposition method for spectral CT
NASA Astrophysics Data System (ADS)
Guo, Xiaoyue; Zhang, Li; Xing, Yuxiang
2017-03-01
Spectral CT is attracting more and more attention in medicine, industrial nondestructive testing and security inspection fields. Material decomposition is an essential step in spectral CT for discriminating materials. Because of the spectral overlap of energy channels, as well as the correlation of basis functions, it is well acknowledged that the decomposition step in spectral CT imaging causes noise amplification and artifacts in the component coefficient images. In this work, we propose material decomposition via an optimization method to improve the quality of the decomposed coefficient images. On the basis of the general optimization problem, total variation minimization is imposed on the coefficient images in our overall objective function with adjustable weights. We solve this constrained optimization problem under the framework of ADMM. Validation on both a numerical dental phantom in simulation and a real phantom of a pig leg on a practical CT system using dual-energy imaging is performed. Both numerical and physical experiments give visually better reconstructions than a general direct inversion method. SNR and SSIM are adopted to quantitatively evaluate the image quality of the decomposed component coefficients. All results demonstrate that the TV-constrained decomposition method performs well in reducing noise without losing spatial resolution, thereby improving image quality. The method can be easily incorporated into different types of spectral imaging modalities, as well as cases with more than two energy channels.
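As a scaled-down illustration of the TV/ADMM core of such a method, the sketch below solves a 1-D total-variation-regularized least-squares problem by ADMM with a soft-thresholding step. The paper's method operates on 2-D coefficient images inside a spectral-CT forward model; this toy only shows the splitting idea.

```python
import numpy as np

# Minimal sketch: ADMM for min_x 0.5*||x - y||^2 + lam*||D x||_1 in 1-D.

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def tv_admm(y, lam=0.5, rho=1.0, n_iter=300):
    n = y.size
    D = np.diff(np.eye(n), axis=0)             # forward-difference operator
    x, z = y.copy(), np.diff(y)
    u = np.zeros_like(z)                       # scaled dual variable
    A = np.eye(n) + rho * D.T @ D              # x-update system matrix (fixed)
    for _ in range(n_iter):
        x = np.linalg.solve(A, y + rho * D.T @ (z - u))
        z = soft_threshold(D @ x + u, lam / rho)
        u += D @ x - z
    return x

rng = np.random.default_rng(1)
truth = np.repeat([1.0, 3.0, 2.0, 0.0], 50)    # piecewise-constant "coefficient" signal
noisy = truth + 0.4 * rng.standard_normal(truth.size)
denoised = tv_admm(noisy, lam=0.8)
print("RMSE noisy   :", np.sqrt(np.mean((noisy - truth) ** 2)).round(3))
print("RMSE denoised:", np.sqrt(np.mean((denoised - truth) ** 2)).round(3))
```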
Planetary geology: Impact processes on asteroids
NASA Technical Reports Server (NTRS)
Chapman, C. R.; Davis, D. R.; Greenberg, R.; Weidenschilling, S. J.
1982-01-01
The fundamental geological and geophysical properties of asteroids were studied by theoretical and simulation studies of their collisional evolution. Numerical simulations incorporating realistic physical models were developed to study the collisional evolution of hypothetical asteroid populations over the age of the solar system. Ideas and models are constrained by the observed distributions of sizes, shapes, and spin rates in the asteroid belt, by properties of Hirayama families, and by experimental studies of cratering and collisional phenomena. It is suggested that many asteroids are gravitationally-bound "rubble piles." Those that rotate rapidly may have nonspherical quasi-equilibrium shapes, such as ellipsoids or binaries. Through comparison of models with astronomical data, physical properties of these asteroids (including bulk density) are determined, and physical processes that have operated in the solar system in primordial and subsequent epochs are studied.
A small chance of paradise —Equivalence of balanced states
NASA Astrophysics Data System (ADS)
Krawczyk, M. J.; Kaluzny, S.; Kulakowski, K.
2017-06-01
A social network is modeled by a complete graph of N nodes, with interpersonal relations represented by links. In the framework of the Heider balance theory, we prove numerically that the probability of each balanced state is the same. This means in particular that the probability of the paradise state, where all relations are positive, is 2^(1-N). The proof is performed within two models. In the first, relations change continuously in time, and the proof is performed only for N = 3 with the methods of nonlinear dynamics. The second model is the Constrained Triad Dynamics, as introduced by Antal, Krapivsky and Redner in 2005. In the latter case, the proof makes use of the symmetries of the network of system states and it is completed for 3 ≤ N ≤ 7.
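The sketch below runs Constrained Triad Dynamics on a small complete signed graph and tallies how often the dynamics ends in the paradise state, for comparison with 2^(1-N). The update rule follows the Antal-Krapivsky-Redner prescription; jammed runs are simply discarded, which is a simplification relative to the paper's analysis.

```python
import numpy as np
from itertools import combinations

# Minimal sketch: Constrained Triad Dynamics on a complete signed graph.

rng = np.random.default_rng(2)
N = 5
triads = list(combinations(range(N), 3))

def n_imbalanced(s):
    return sum(1 for (i, j, k) in triads if s[i, j] * s[j, k] * s[i, k] < 0)

def run_ctd(max_steps=5000):
    s = np.triu(rng.choice([-1, 1], size=(N, N)), 1)
    s = s + s.T                                    # symmetric sign matrix, zero diagonal
    for _ in range(max_steps):
        if n_imbalanced(s) == 0:
            return s                               # reached a balanced state
        i, j = rng.choice(N, size=2, replace=False)
        before = n_imbalanced(s)
        s[i, j] *= -1; s[j, i] *= -1               # trial flip of one link
        after = n_imbalanced(s)
        if after > before or (after == before and rng.random() < 0.5):
            s[i, j] *= -1; s[j, i] *= -1           # reject the flip
    return None                                    # jammed or not converged

paradise, balanced_runs = 0, 0
for _ in range(2000):
    s = run_ctd()
    if s is not None:
        balanced_runs += 1
        paradise += int(np.all(s[np.triu_indices(N, 1)] > 0))

print("paradise fraction:", paradise / balanced_runs, " expected ~", 2.0 ** (1 - N))
```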
A strategy for the observation of volcanism on Earth from space.
Wadge, G
2003-01-15
Heat, strain, topography and atmospheric emissions associated with volcanism are well observed by satellites orbiting the Earth. Gravity and electromagnetic transients from volcanoes may also prove to be measurable from space. The nature of eruptions means that the best strategy for measuring their dynamic properties remotely from space is to employ two modes with different spatial and temporal samplings: eruption mode and background mode. Such observational programmes are best carried out at local or regional volcano observatories by coupling them with numerical models of volcanic processes. Eventually, such models could become multi-process, operational forecast models that assimilate the remote and other observables to constrain their uncertainties. The threat posed by very large magnitude explosive eruptions is global and best addressed by a spaceborne observational programme with a global remit.
Gnedin, Nickolay Y.; Semenov, Vadim A.; Kravtsov, Andrey V.
2018-01-30
In this study, an optimally efficient explicit numerical scheme for solving fluid dynamics equations, or any other parabolic or hyperbolic system of partial differential equations, should allow local regions to advance in time with their own, locally constrained time steps. However, such a scheme can result in violation of the Courant-Friedrichs-Lewy (CFL) condition, which is manifestly non-local. Although the violations can be considered to be "weak" in a certain sense and the corresponding numerical solution may be stable, such calculation does not guarantee the correct propagation speed for arbitrary waves. We use an experimental fluid dynamics code that allows cubic "patches" of grid cells to step with independent, locally constrained time steps to demonstrate how the CFL condition can be enforced by imposing a condition on the time steps of neighboring patches. We perform several numerical tests that illustrate errors introduced in the numerical solutions by weak CFL condition violations and show how strict enforcement of the CFL condition eliminates these errors. In all our tests the strict enforcement of the CFL condition does not impose a significant performance penalty.
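The sketch below illustrates the bookkeeping involved: each patch gets its own CFL-limited step, and a neighbor condition keeps adjacent patches from stepping too far apart. The specific neighbor rule used here (cap each patch by the smallest CFL step among itself and its immediate neighbors) is an illustrative assumption, not the condition derived in the paper.

```python
import numpy as np

# Minimal sketch: locally constrained time steps in 1-D with a neighbor cap.

cfl, dx = 0.4, 0.01
n_patches, cells_per_patch = 8, 32
rng = np.random.default_rng(3)
# fake per-cell signal speeds |u| + c, varying strongly between patches
speeds = rng.uniform(0.5, 50.0, size=(n_patches, cells_per_patch))

dt_local = cfl * dx / speeds.max(axis=1)          # raw CFL step for each patch

dt_constrained = dt_local.copy()
for p in range(n_patches):
    lo, hi = max(0, p - 1), min(n_patches, p + 2)
    dt_constrained[p] = dt_local[lo:hi].min()     # neighbor-aware cap (assumed rule)

for p in range(n_patches):
    print(f"patch {p}: raw dt = {dt_local[p]:.2e}, constrained dt = {dt_constrained[p]:.2e}")
```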
Numerical simulation of the geodynamo reaches Earth's core dynamical regime
NASA Astrophysics Data System (ADS)
Aubert, J.; Gastine, T.; Fournier, A.
2016-12-01
Numerical simulations of the geodynamo have been successful at reproducing a number of static (field morphology) and kinematic (secular variation patterns, core surface flows and westward drift) features of Earth's magnetic field, making them a tool of choice for the analysis and retrieval of geophysical information on Earth's core. However, classical numerical models have been run in a parameter regime far from that of the real system, prompting the question of whether we do get "the right answers for the wrong reasons", i.e. whether the agreement between models and nature simply occurs by chance and without physical relevance in the dynamics. In this presentation, we show that classical models succeed in describing the geodynamo because their large-scale spatial structure is essentially invariant as one progresses along a well-chosen path in parameter space to Earth's core conditions. This path is constrained by the need to enforce the relevant force balance (MAC or Magneto-Archimedes-Coriolis) and preserve the ratio of the convective overturn and magnetic diffusion times. Numerical simulations performed along this path are shown to be spatially invariant at scales larger than that where the magnetic energy is ohmically dissipated. This property enables the definition of large-eddy simulations that show good agreement with direct numerical simulations in the range where both are feasible, and that can be computed at unprecedented values of the control parameters, such as an Ekman number E=10-8. Combining direct and large-eddy simulations, large-scale invariance is observed over half the logarithmic distance in parameter space between classical models and Earth. The conditions reached at this mid-point of the path are furthermore shown to be representative of the rapidly-rotating, asymptotic dynamical regime in which Earth's core resides, with a MAC force balance undisturbed by viscosity or inertia, the enforcement of a Taylor state and strong-field dynamo action. We conclude that numerical modelling has advanced to a stage where it is possible to use models correctly representing the statics, kinematics and now the dynamics of the geodynamo. This opens the way to a better analysis of the geomagnetic field in the time and space domains.
Constrained exceptional supersymmetric standard model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Athron, P.; King, S. F.; Miller, D. J.
2009-08-01
We propose and study a constrained version of the exceptional supersymmetric standard model (E6SSM), which we call the cE6SSM, based on a universal high energy scalar mass m_0, trilinear scalar coupling A_0 and gaugino mass M_1/2. We derive the renormalization group (RG) equations for the cE6SSM, including the extra U(1)_N gauge factor and the low-energy matter content involving three 27 representations of E6. We perform a numerical RG analysis for the cE6SSM, imposing the usual low-energy experimental constraints and successful electroweak symmetry breaking. Our analysis reveals that the sparticle spectrum of the cE6SSM involves a light gluino, two light neutralinos, and a light chargino. Furthermore, although the squarks, sleptons, and Z' boson are typically heavy, the exotic quarks and squarks can also be relatively light. We finally specify a set of benchmark points, which correspond to particle spectra, production modes, and decay patterns peculiar to the cE6SSM, altogether leading to spectacular new physics signals at the Large Hadron Collider.
PAPR-Constrained Pareto-Optimal Waveform Design for OFDM-STAP Radar
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sen, Satyabrata
We propose a peak-to-average power ratio (PAPR) constrained Pareto-optimal waveform design approach for an orthogonal frequency division multiplexing (OFDM) radar signal to detect a target using the space-time adaptive processing (STAP) technique. The use of an OFDM signal not only increases the frequency diversity of our system, but also enables us to adaptively design the OFDM coefficients in order to further improve the system performance. First, we develop a parametric OFDM-STAP measurement model by considering the effects of signal-dependent clutter and colored noise. Then, we observe that the resulting STAP performance can be improved by maximizing the output signal-to-interference-plus-noise ratio (SINR) with respect to the signal parameters. However, in practical scenarios, the computation of output SINR depends on the estimated values of the spatial and temporal frequencies and target scattering responses. Therefore, we formulate a PAPR-constrained multi-objective optimization (MOO) problem to design the OFDM spectral parameters by simultaneously optimizing four objective functions: maximizing the output SINR, minimizing two separate Cramer-Rao bounds (CRBs) on the normalized spatial and temporal frequencies, and minimizing the trace of the CRB matrix on the target scattering coefficient estimates. We present several numerical examples to demonstrate the achieved performance improvement due to the adaptive waveform design.
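For reference, the sketch below computes the PAPR of one OFDM symbol from its subcarrier coefficients, which is the quantity the design constrains. The random QPSK coefficients are placeholders for the optimized spectral weights.

```python
import numpy as np

# Minimal sketch: PAPR of an OFDM symbol from its subcarrier coefficients.

rng = np.random.default_rng(4)
n_subcarriers = 64
oversample = 4                                     # oversampling for a faithful peak estimate

coeffs = rng.choice([1, -1], size=n_subcarriers) + 1j * rng.choice([1, -1], size=n_subcarriers)
spectrum = np.zeros(n_subcarriers * oversample, dtype=complex)
spectrum[:n_subcarriers] = coeffs                  # zero-padded spectrum
x = np.fft.ifft(spectrum) * np.sqrt(spectrum.size) # oversampled time-domain OFDM symbol

papr = np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2)
print(f"PAPR = {papr:.2f} ({10 * np.log10(papr):.2f} dB)")
```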
NASA Astrophysics Data System (ADS)
Kelly, N. M.; Marchi, S.; Mojzsis, S. J.; Flowers, R. M.; Metcalf, J. R.; Bottke, W. F., Jr.
2017-12-01
Impacts have a significant physical and chemical influence on the surface conditions of a planet. The cratering record is used to understand a wide array of impact processes, such as the evolution of the impact flux through time. However, the relationship between impactor size and a resulting impact crater remains controversial (e.g., Bottke et al., 2016). Likewise, small variations in the impact velocity are known to significantly affect the thermal-mechanical disturbances in the aftermath of a collision. Development of more robust numerical models for impact cratering has implications for how we evaluate the disruptive capabilities of impact events, including the extent and duration of thermal anomalies, the volume of ejected material, and the resulting landscape of impacted environments. To address uncertainties in crater scaling relationships, we present an approach and methodology that integrates numerical modeling of the thermal evolution of terrestrial impact craters with low-temperature, (U-Th)/He thermochronometry. The approach uses time-temperature (t-T) paths of crust within an impact crater, generated from numerical simulations of an impact. These t-T paths are then used in forward models to predict the resetting behavior of (U-Th)/He ages in the mineral chronometers apatite and zircon. Differences between the predicted and measured (U-Th)/He ages from a modeled terrestrial impact crater can then be used to evaluate parameters in the original numerical simulations, and refine the crater scaling relationships. We expect our methodology to additionally inform our interpretation of impact products, such as lunar impact breccias and meteorites, providing robust constraints on their thermal histories. In addition, the method is ideal for sample return mission planning - robust "prediction" of ages we expect from a given impact environment enhances our ability to target sampling sites on the Moon, Mars or other solar system bodies where impacts have strongly shaped the surface. Bottke, W.F., Vokrouhlicky, D., Ghent, B., et al. (2016). 47th LPSC, Abstract #2036.
NASA Astrophysics Data System (ADS)
Hubbard, Stephen; Kostic, Svetlana; Englert, Rebecca; Coutts, Daniel; Covault, Jacob
2017-04-01
Recent bathymetric observations of fjord prodeltas in British Columbia, Canada, reveal evidence for multi-phase channel erosion and deposition. These processes are interpreted to be related to the upstream migration of upper-flow-regime bedforms, namely cyclic steps. We integrate data from high-resolution bathymetric surveys and monitoring to inform morphodynamic numerical models of turbidity currents and associated bedforms in the Squamish prodelta. These models are applied to the interpretation of upper-flow-regime bedforms, including cyclic steps, antidunes, and/or transitional bedforms, in Late Cretaceous submarine conduit strata of the Nanaimo Group at Gabriola Island, British Columbia. In the Squamish prodelta, as bedforms migrate, >90% of the deposits are reworked, making morphology- and facies-based recognition challenging. Sedimentary bodies are 5-30 m long, 0.5-2 m thick and <30 m wide. The Nanaimo Group comprises scour fills of similar scale composed of structureless sandstone, with laminated siltstone locally overlying basal erosion surfaces. Backset stratification is locally observed; packages of 2-4 backset beds, each of which is up to 60 cm thick and up to 15 m long (along dip), commonly share composite basal erosion surfaces. Numerous scour fills are recognized over thin sections (<4 m), indicating limited aggradation and preservation of the bedforms. Preliminary morphodynamic numerical modeling indicates that the Squamish and Nanaimo bedforms could be transitional upper-flow-regime bedforms between cyclic steps and antidunes. It is likely that cyclic steps and related upper-flow-regime bedforms are common in strata deposited on high-gradient submarine slopes. Evidence for updip-migrating cyclic steps and related deposits informs a revised interpretation of a high-gradient setting dominated by supercritical flow, or alternating supercritical and subcritical flow, in the Nanaimo Group. Integrating direct observations, morphodynamic numerical modeling, and outcrop characterization better constrains fundamental processes that operate in deep-water depositional systems; our analysis aims to further deduce the stratigraphy and preservation potential of upper-flow-regime bedforms.
Constraining the inclination of the Low-Mass X-ray Binary Cen X-4
NASA Astrophysics Data System (ADS)
Hammerstein, Erica K.; Cackett, Edward M.; Reynolds, Mark T.; Miller, Jon M.
2018-05-01
We present the results of ellipsoidal light curve modeling of the low-mass X-ray binary Cen X-4 in order to constrain the inclination of the system and the mass of the neutron star. Near-IR photometric monitoring was performed in May 2008 over a period of three nights at Magellan using PANIC. We obtain J, H and K light curves of Cen X-4 using differential photometry. An ellipsoidal modeling code was used to fit the phase-folded light curves. The light curve fit which makes the fewest assumptions about the properties of the binary system yields an inclination of 34.9^{+4.9}_{-3.6} degrees (1σ), which is consistent with previous determinations of the system's inclination but with improved statistical uncertainties. When combined with the mass function and mass ratio, this inclination yields a neutron star mass of 1.51^{+0.40}_{-0.55} M⊙. This model allows accretion disk parameters to be free in the fitting process. Fits that do not allow for an accretion disk component in the near-IR flux give a systematically lower inclination between approximately 33 and 34 degrees, leading to a higher neutron star mass between approximately 1.7 M⊙ and 1.8 M⊙. We discuss the implications of other assumptions made during the modeling process as well as the numerous free parameters and their effects on the resulting inclination.
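The final mass step is a simple algebraic relation, M_ns = f(M) (1 + q)^2 / sin^3(i), where f(M) is the mass function and q the companion-to-neutron-star mass ratio. The sketch below evaluates it for a few inclinations; the numerical values of f(M) and q are illustrative placeholders, not the paper's adopted measurements.

```python
import numpy as np

# Minimal sketch: neutron-star mass from mass function, mass ratio, inclination.

def neutron_star_mass(f_msun, q, incl_deg):
    i = np.radians(incl_deg)
    return f_msun * (1.0 + q) ** 2 / np.sin(i) ** 3

f_msun = 0.20          # assumed mass function in solar masses (placeholder)
q = 0.17               # assumed companion-to-neutron-star mass ratio (placeholder)

for incl in (33.0, 34.9, 39.8):
    print(f"i = {incl:4.1f} deg  ->  M_ns = {neutron_star_mass(f_msun, q, incl):.2f} Msun")
```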
Eagle, Robert A; Risi, Camille; Mitchell, Jonathan L; Eiler, John M; Seibt, Ulrike; Neelin, J David; Li, Gaojun; Tripati, Aradhna K
2013-05-28
The East Asian monsoon is one of Earth's most significant climatic phenomena, and numerous paleoclimate archives have revealed that it exhibits variations on orbital and suborbital time scales. Quantitative constraints on the climate changes associated with these past variations are limited, yet are needed to constrain sensitivity of the region to changes in greenhouse gas levels. Here, we show central China is a region that experienced a much larger temperature change since the Last Glacial Maximum than typically simulated by climate models. We applied clumped isotope thermometry to carbonates from the central Chinese Loess Plateau to reconstruct temperature and water isotope shifts from the Last Glacial Maximum to present. We find a summertime temperature change of 6-7 °C that is reproduced by climate model simulations presented here. Proxy data reveal evidence for a shift to lighter isotopic composition of meteoric waters in glacial times, which is also captured by our model. Analysis of model outputs suggests that glacial cooling over continental China is significantly amplified by the influence of stationary waves, which, in turn, are enhanced by continental ice sheets. These results not only support high regional climate sensitivity in Central China but highlight the fundamental role of planetary-scale atmospheric dynamics in the sensitivity of regional climates to continental glaciation, changing greenhouse gas levels, and insolation.
Reflected stochastic differential equation models for constrained animal movement
Hanks, Ephraim M.; Johnson, Devin S.; Hooten, Mevin B.
2017-01-01
Movement for many animal species is constrained in space by barriers such as rivers, shorelines, or impassable cliffs. We develop an approach for modeling animal movement constrained in space by considering a class of constrained stochastic processes, reflected stochastic differential equations. Our approach generalizes existing methods for modeling unconstrained animal movement. We present methods for simulation and inference based on augmenting the constrained movement path with a latent unconstrained path and illustrate this augmentation with a simulation example and an analysis of telemetry data from a Steller sea lion (Eumetopias jubatus) in southeast Alaska.
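A minimal way to simulate such a reflected process is an Euler-Maruyama scheme whose proposals are mirrored back across the barrier whenever they cross it, as sketched below. The 2-D Ornstein-Uhlenbeck drift, the half-plane barrier, and the parameter values are illustrative, not the fitted telemetry model.

```python
import numpy as np

# Minimal sketch: Euler-Maruyama simulation of a 2-D movement process
# reflected at a straight shoreline (the half-plane y >= 0).

rng = np.random.default_rng(5)
dt, n_steps = 0.1, 2000
beta = 0.2                            # attraction strength toward a home-range center
center = np.array([2.0, 1.0])
sigma = 0.5                           # movement variability

x = np.zeros((n_steps, 2))
x[0] = np.array([0.0, 0.5])
for t in range(1, n_steps):
    drift = -beta * (x[t - 1] - center)
    prop = x[t - 1] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(2)
    if prop[1] < 0.0:                 # crossed the barrier: mirror back into y >= 0
        prop[1] = -prop[1]
    x[t] = prop

print("fraction of time within 0.2 of the shoreline:", np.mean(x[:, 1] < 0.2).round(3))
print("minimum y (should be >= 0):", x[:, 1].min().round(4))
```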
Kwicklis, Edward M.; Wolfsberg, Andrew V.; Stauffer, Philip H.; Walvoord, Michelle Ann; Sully, Michael J.
2006-01-01
Multiphase, multicomponent numerical models of long-term unsaturated-zone liquid and vapor movement were created for a thick alluvial basin at the Nevada Test Site to predict present-day liquid and vapor fluxes. The numerical models are based on recently developed conceptual models of unsaturated-zone moisture movement in thick alluvium that explain present-day water potential and tracer profiles in terms of major climate and vegetation transitions that have occurred during the past 10 000 yr or more. The numerical models were calibrated using borehole hydrologic and environmental tracer data available from a low-level radioactive waste management site located in a former nuclear weapons testing area. The environmental tracer data used in the model calibration include tracers that migrate in both the liquid and vapor phases (δD, δ18O) and tracers that migrate solely as dissolved solutes (Cl), thus enabling the estimation of some gas-phase as well as liquid-phase transport parameters. Parameter uncertainties and correlations identified during model calibration were used to generate parameter combinations for a set of Monte Carlo simulations to more fully characterize the uncertainty in liquid and vapor fluxes. The calculated background liquid and vapor fluxes decrease as the estimated time since the transition to the present-day arid climate increases. However, on the whole, the estimated fluxes display relatively little variability because correlations among parameters tend to create parameter sets for which changes in some parameters offset the effects of others in the set. Independent estimates of the timing since the climate transition established from packrat midden data were essential for constraining the model calibration results. The study demonstrates the utility of environmental tracer data in developing numerical models of liquid- and gas-phase moisture movement and the importance of considering parameter correlations when using Monte Carlo analysis to characterize the uncertainty in moisture fluxes.
Numerical simulation of bubble deformation in magnetic fluids by finite volume method
NASA Astrophysics Data System (ADS)
Yamasaki, Haruhiko; Yamaguchi, Hiroshi
2017-06-01
Bubble deformation in magnetic fluids under magnetic field is investigated numerically by an interface capturing method. The numerical method consists of a coupled level-set and VOF (Volume of Fluid) method, combined with conservation CIP (Constrained Interpolation Profile) method with the self-correcting procedure. In the present study considering actual physical properties of magnetic fluid, bubble deformation under given uniform magnetic field is analyzed for internal magnetic field passing through a magnetic gaseous and liquid phase interface. The numerical results explain the mechanism of bubble deformation under presence of given magnetic field.
Tang, Xiaoming; Qu, Hongchun; Wang, Ping; Zhao, Meng
2015-03-01
This paper investigates an off-line synthesis approach for model predictive control (MPC) of a class of networked control systems (NCSs) with network-induced delays. A new augmented model, which can readily accommodate a time-varying control law, is proposed to describe the NCS, where bounded deterministic network-induced delays may occur in both the sensor-to-controller (S-C) and controller-to-actuator (C-A) links. Based on this augmented model, a sufficient condition for closed-loop stability is derived by applying the Lyapunov method. The off-line synthesis approach for model predictive control is addressed using the stability results of the system, which explicitly considers the satisfaction of input and state constraints. A numerical example is given to illustrate the effectiveness of the proposed method.
Numerical modelling of electromagnetic loads on fusion device structures
NASA Astrophysics Data System (ADS)
Bettini, Paolo; Furno Palumbo, Maurizio; Specogna, Ruben
2014-03-01
In magnetic confinement fusion devices, during abnormal operations (disruptions) the plasma begins to move rapidly towards the vessel wall in a vertical displacement event (VDE), producing plasma current asymmetries, vessel eddy currents and open field line halo currents, each of which can exert potentially damaging forces upon the vessel and in-vessel components. This paper presents a methodology to estimate electromagnetic loads on three-dimensional conductive structures surrounding the plasma, which arise from the interaction of halo currents associated with VDEs with a magnetic field of the order of a few tesla needed for plasma confinement. Lorentz forces, calculated by complementary formulations, are used as constraining loads in a linear static structural analysis carried out on a detailed model of the mechanical structures of a representative machine.
NASA Astrophysics Data System (ADS)
DeGregorio, P.; Lawlor, A.; Dawson, K. A.
2006-04-01
We introduce a new method to describe systems in the vicinity of dynamical arrest. This involves a map that transforms mobile systems at one length scale to mobile systems at a longer length. This map is capable of capturing the singular behavior accrued across very large length scales, and provides a direct route to the dynamical correlation length and other related quantities. The ideas are immediately applicable in two spatial dimensions, and have been applied to a modified Kob-Andersen type model. For such systems the map may be derived in an exact form, and readily solved numerically. We obtain the asymptotic behavior across the whole physical domain of interest in dynamical arrest.
Computation and analysis for a constrained entropy optimization problem in finance
NASA Astrophysics Data System (ADS)
He, Changhong; Coleman, Thomas F.; Li, Yuying
2008-12-01
In [T. Coleman, C. He, Y. Li, Calibrating volatility function bounds for an uncertain volatility model, Journal of Computational Finance (2006) (submitted for publication)], an entropy minimization formulation has been proposed to calibrate an uncertain volatility option pricing model (UVM) from market bid and ask prices. To avoid potential infeasibility due to numerical error, a quadratic penalty function approach is applied. In this paper, we show that the solution to the quadratic penalty problem can be obtained by minimizing an objective function which can be evaluated via solving a Hamilton-Jacobi-Bellman (HJB) equation. We prove that the implicit finite difference solution of this HJB equation converges to its viscosity solution. In addition, we provide computational examples illustrating the accuracy of the calibration.
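The quadratic penalty idea itself is generic: a constrained problem min f(x) subject to g(x) <= 0 is replaced by min f(x) + (mu/2) max(g(x), 0)^2 with an increasing penalty weight mu. The sketch below applies it to a toy objective and constraint that only stand in for the entropy functional and the bid/ask price constraints of the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal sketch: quadratic penalty method with an increasing penalty weight.

def f(x):                              # toy "entropy distance" to a prior (0.5, 0.5)
    return np.sum(x * np.log(x / 0.5))

def g(x):                              # toy pricing constraint: model value must be <= ask
    return (2.0 * x[0] + 1.0 * x[1]) - 1.2

x = np.array([0.5, 0.5])
for mu in [1.0, 10.0, 100.0, 1000.0]:
    penalized = lambda x: f(x) + 0.5 * mu * max(g(x), 0.0) ** 2
    x = minimize(penalized, x, bounds=[(1e-6, None)] * 2).x   # warm-started solve

print("solution:", x.round(4), " constraint value g(x):", round(g(x), 5))
```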
NASA Astrophysics Data System (ADS)
Larsson, Richard; Milz, Mathias; Rayer, Peter; Saunders, Roger; Bell, William; Booton, Anna; Buehler, Stefan A.; Eriksson, Patrick; John, Viju O.
2016-03-01
We present a comparison of a reference and a fast radiative transfer model using numerical weather prediction profiles for the Zeeman-affected high-altitude Special Sensor Microwave Imager/Sounder channels 19-22. We find that the models agree well for channels 21 and 22 compared to the channels' system noise temperatures (1.9 and 1.3 K, respectively) and the expected profile errors at the affected altitudes (estimated to be around 5 K). For channel 22 there is a 0.5 K average difference between the models, with a standard deviation of 0.24 K for the full set of atmospheric profiles. Concerning the same channel, there is 1.2 K on average between the fast model and the sensor measurement, with 1.4 K standard deviation. For channel 21 there is a 0.9 K average difference between the models, with a standard deviation of 0.56 K. Regarding the same channel, there is 1.3 K on average between the fast model and the sensor measurement, with 2.4 K standard deviation. We consider the relatively small model differences as a validation of the fast Zeeman effect scheme for these channels. Both channels 19 and 20 have smaller average differences between the models (at below 0.2 K) and smaller standard deviations (at below 0.4 K) when both models use a two-dimensional magnetic field profile. However, when the reference model is switched to using a full three-dimensional magnetic field profile, the standard deviation to the fast model is increased to almost 2 K due to viewing geometry dependencies, causing up to ±7 K differences near the equator. The average differences between the two models remain small despite changing magnetic field configurations. We are unable to compare channels 19 and 20 to sensor measurements due to limited altitude range of the numerical weather prediction profiles. We recommended that numerical weather prediction software using the fast model takes the available fast Zeeman scheme into account for data assimilation of the affected sensor channels to better constrain the upper atmospheric temperatures.
Exploring Explanations of Subglacial Bedform Sizes Using Statistical Models.
Hillier, John K; Kougioumtzoglou, Ioannis A; Stokes, Chris R; Smith, Michael J; Clark, Chris D; Spagnolo, Matteo S
2016-01-01
Sediments beneath modern ice sheets exert a key control on their flow, but are largely inaccessible except through geophysics or boreholes. In contrast, palaeo-ice sheet beds are accessible, and typically characterised by numerous bedforms. However, the interaction between bedforms and ice flow is poorly constrained and it is not clear how bedform sizes might reflect ice flow conditions. To better understand this link we present a first exploration of a variety of statistical models to explain the size distribution of some common subglacial bedforms (i.e., drumlins, ribbed moraine, MSGL). By considering a range of models, constructed to reflect key aspects of the physical processes, it is possible to infer that the size distributions are most effectively explained when the dynamics of ice-water-sediment interaction associated with bedform growth is fundamentally random. A 'stochastic instability' (SI) model, which integrates random bedform growth and shrinking through time with exponential growth, is preferred and is consistent with other observations of palaeo-bedforms and geophysical surveys of active ice sheets. Furthermore, we give a proof-of-concept demonstration that our statistical approach can bridge the gap between geomorphological observations and physical models, directly linking measurable size-frequency parameters to properties of ice sheet flow (e.g., ice velocity). Moreover, statistically developing existing models as proposed allows quantitative predictions to be made about sizes, making the models testable; a first illustration of this is given for a hypothesised repeat geophysical survey of bedforms under active ice. Thus, we further demonstrate the potential of size-frequency distributions of subglacial bedforms to assist the elucidation of subglacial processes and better constrain ice sheet models.
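The 'stochastic instability' idea above can be illustrated with a minimal toy simulation (not the authors' model): bedforms grow exponentially on average while random growth and shrinking perturb each one at every step, which yields a broad, skewed size-frequency distribution. All parameter values below are illustrative assumptions.

```python
# Minimal sketch of a stochastic-instability-style size model (illustrative only):
# each bedform grows exponentially on average but is perturbed by random
# growth/shrink increments at every time step. Parameter values are assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_bedforms = 5000      # number of bedforms tracked (assumed)
n_steps = 200          # number of growth increments (assumed)
dt = 1.0               # time step (arbitrary units)
mean_growth = 0.01     # mean exponential growth rate per step (assumed)
noise_sigma = 0.05     # std of random growth/shrink perturbation (assumed)

h = np.full(n_bedforms, 1.0)   # initial bedform size (e.g., height), arbitrary units
for _ in range(n_steps):
    # multiplicative update: deterministic exponential growth + random perturbation
    h *= np.exp((mean_growth + noise_sigma * rng.standard_normal(n_bedforms)) * dt)
    h = np.maximum(h, 1e-6)    # sizes cannot become negative

# inspect the resulting size-frequency distribution (log-normal-like for this toy process)
print("mean size:", h.mean(), "median:", np.median(h))
hist, edges = np.histogram(np.log(h), bins=30)
print("log-size histogram counts:", hist)
```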
T-COMP—A suite of programs for extracting transmissivity from MODFLOW models
Halford, Keith J.
2016-02-12
Simulated transmissivities are constrained poorly by assigning permissible ranges of hydraulic conductivities from aquifer-test results to hydrogeologic units in groundwater-flow models. These wide ranges are derived from interpretations of many aquifer tests that are categorized by hydrogeologic unit. Uncertainty is added where contributing thicknesses differ between field estimates and numerical models. Wide ranges of hydraulic conductivities and discordant thicknesses result in simulated transmissivities that frequently are much greater than aquifer-test results. Differences of multiple orders of magnitude frequently occur between simulated and observed transmissivities where observed transmissivities are less than 1,000 feet squared per day. Transmissivity observations from individual aquifer tests can constrain model calibration as head and flow observations do. This approach is superior to diluting aquifer-test results into generalized ranges of hydraulic conductivities. Observed and simulated transmissivities can be compared directly with T-COMP, a suite of three FORTRAN programs. Transmissivity observations require that simulated hydraulic conductivities and thicknesses in the volume investigated by an aquifer test be extracted and integrated into a simulated transmissivity. Transmissivities of MODFLOW model cells are sampled within the volume affected by an aquifer test as defined by a well-specific, radial-flow model of each aquifer test. Sampled transmissivities of model cells are averaged within a layer and summed across layers. Accuracy of the approach was tested with hypothetical, multiple-aquifer models where specified transmissivities ranged between 250 and 20,000 feet squared per day. More than 90 percent of simulated transmissivities were within a factor of 2 of specified transmissivities.
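The aggregation step described above (average sampled cell transmissivities within a layer, then sum the layer averages) can be sketched in a few lines. This is a hedged illustration of the idea, not T-COMP's FORTRAN interface; array shapes and the toy values are assumptions.

```python
# Sketch of the transmissivity-aggregation step described above (not T-COMP itself):
# cell transmissivities sampled within the volume influenced by an aquifer test are
# averaged within each model layer and then summed across layers.
import numpy as np

def simulated_transmissivity(hk, thickness, sampled):
    """Integrate a simulated transmissivity from MODFLOW-style cell arrays.

    hk, thickness : (nlay, nrow, ncol) hydraulic conductivity and cell thickness.
    sampled       : boolean mask of cells inside the volume affected by the test.
    Cell transmissivities are averaged within each layer (over sampled cells only)
    and the layer averages are summed across layers.
    """
    t_cell = hk * thickness                       # cell transmissivity, L^2/T
    t_sim = 0.0
    for k in range(t_cell.shape[0]):
        mask = sampled[k]
        if mask.any():
            t_sim += t_cell[k][mask].mean()       # layer average of sampled cells
    return t_sim

# toy example with assumed values (feet and days, as in the text)
rng = np.random.default_rng(1)
hk = rng.lognormal(mean=2.0, sigma=0.5, size=(3, 20, 20))   # ft/d (assumed)
thickness = np.full((3, 20, 20), 50.0)                      # ft (assumed)
sampled = np.zeros((3, 20, 20), dtype=bool)
sampled[:, 8:12, 8:12] = True                               # cells near the tested well
print("simulated T (ft^2/d):", simulated_transmissivity(hk, thickness, sampled))
```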
Remote Sensing Observations and Numerical Simulation for Martian Layered Ejecta Craters
NASA Astrophysics Data System (ADS)
Li, L.; Yue, Z.; Zhang, C.; Li, D.
2018-04-01
To understand past Martian climates, it is important to know the distribution and nature of water ice on Mars. Impact craters are widely used as indicators of the presence of subsurface water or ice on Mars. Remote sensing observations and numerical simulation are powerful tools for investigating morphological and topographic features on planetary surfaces, and we can use the morphology of layered ejecta craters together with hydrocode modeling to constrain possible layering and impact environments. The approach of this work consists of three stages. First, the morphological characteristics of Martian layered ejecta craters are analyzed based on Martian images and DEM data. Second, numerical modeling of layered ejecta emplacement is performed with the hydrocode iSALE (impact-SALE). We present hydrocode modeling of impacts onto targets with a single icy layer within an otherwise uniform basalt crust to quantify the effects of subsurface H2O on observable layered ejecta morphologies. The model setup is based on a layered target made up of a regolithic layer (described by the basalt ANEOS), on top of an ice layer (described by the ANEOS equation of state for H2O ice), which in turn overlies a basaltic crust. The bolide is a 0.8 km diameter basaltic asteroid hitting the Martian surface vertically at a velocity of 12.8 km/s. Finally, the numerical results are compared with the MOLA DEM profile in order to analyze the formation mechanism of Martian layered ejecta craters. Our simulations suggest that the presence of an icy layer significantly modifies the cratering mechanics, and many of the unusual features of SLE craters may be explained by the presence of icy layers. Impact cratering on icy satellites is significantly affected by the presence of subsurface H2O.
Prestressing force monitoring method for a box girder through distributed long-gauge FBG sensors
NASA Astrophysics Data System (ADS)
Chen, Shi-Zhi; Wu, Gang; Xing, Tuo; Feng, De-Cheng
2018-01-01
Monitoring prestressing forces is essential for prestressed concrete box girder bridges. However, current monitoring methods for prestressing force are not applicable to a box girder, either because the sensor setup is constrained or because the shear lag effect is not properly considered. Building on a previous analysis model of the shear lag effect in box girders, this paper proposes an indirect monitoring method for on-site determination of the prestressing force in a concrete box girder using distributed long-gauge fiber Bragg grating sensors. The performance of this method was first verified using numerical simulation for three different distribution forms of prestressing tendons. Then, an experiment involving two concrete box girders was conducted to study the feasibility of this method under different prestressing levels. The results of both the numerical simulation and the lab experiment validated the method's practicability for a box girder.
Search For Dark Matter Satellites Using Fermi-Lat
Ackermann, M.
2012-02-23
Numerical simulations based on the ΛCDM model of cosmology predict a large number of as yet unobserved Galactic dark matter satellites. We report the results of a Large Area Telescope (LAT) search for these satellites via the γ-ray emission expected from the annihilation of weakly interacting massive particle (WIMP) dark matter. Some dark matter satellites are expected to have hard γ-ray spectra, finite angular extents, and a lack of counterparts at other wavelengths. We sought to identify LAT sources with these characteristics, focusing on γ-ray spectra consistent with WIMP annihilation through the $b\bar{b}$ channel. We found no viable dark matter satellite candidates using one year of data, and we present a framework for interpreting this result in the context of numerical simulations to constrain the velocity-averaged annihilation cross section for a conventional 100 GeV WIMP annihilating through the $b\bar{b}$ channel.
Search for Dark Matter Satellites Using the Fermi-Lat
NASA Technical Reports Server (NTRS)
Ackermann, M.; Albert, A.; Baldini, L.; Ballet, J.; Barbiellini, G.; Bastieri, D.; Bechtol, K.; Bellazzini, R.; Blandford, R. D.; Bloom, E. D.;
2012-01-01
Numerical simulations based on the ΛCDM model of cosmology predict a large number of as yet unobserved Galactic dark matter satellites. We report the results of a Large Area Telescope (LAT) search for these satellites via the gamma-ray emission expected from the annihilation of weakly interacting massive particle (WIMP) dark matter. Some dark matter satellites are expected to have hard gamma-ray spectra, finite angular extents, and a lack of counterparts at other wavelengths. We sought to identify LAT sources with these characteristics, focusing on gamma-ray spectra consistent with WIMP annihilation through the $b\bar{b}$ channel. We found no viable dark matter satellite candidates using one year of data, and we present a framework for interpreting this result in the context of numerical simulations to constrain the velocity-averaged annihilation cross section for a conventional 100 GeV WIMP annihilating through the $b\bar{b}$ channel.
Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction
Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng
2012-01-01
We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators arising from the convex functions that define the TV norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce into the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We theoretically prove convergence of the preconditioned alternating projection algorithm. In numerical experiments, the performance of our algorithm, with an appropriately selected preconditioning matrix, is compared with that of the conventional MAP expectation-maximization (MAP-EM) algorithm with a TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms EM-TV in all aspects, including convergence speed, noise in the reconstructed images, and image quality. It also outperforms the nested EM-TV in convergence speed while providing comparable image quality. PMID:23271835
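The structure of such an iteration can be sketched generically: a preconditioned data step is alternated with projection/proximity steps. The sketch below is schematic and not the authors' PAPA; a nonnegativity projection and a soft-threshold stand in for the actual proximity operators, and the Poisson toy problem and all parameters are assumptions.

```python
# Schematic sketch (not the authors' PAPA implementation): alternate a preconditioned
# data step with two projection/proximity steps -- here a nonnegativity projection and
# a soft-threshold standing in for the TV proximity operator. The EM-style diagonal
# preconditioner d = x / (A^T 1) mirrors the "EM-preconditioner" idea only loosely;
# operator choices and parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_det = 64, 96
A = rng.uniform(0.0, 1.0, size=(n_det, n_pix))         # toy system matrix
x_true = np.zeros(n_pix); x_true[20:30] = 5.0          # piecewise-constant "activity"
y = rng.poisson(A @ x_true + 1e-3)                     # noisy "ECT" data

x = np.ones(n_pix)
ones_backproj = A.T @ np.ones(n_det)
lam, step = 0.05, 0.5
for _ in range(200):
    grad = A.T @ (1.0 - y / np.maximum(A @ x, 1e-9))   # gradient of Poisson neg-log-likelihood
    d = x / ones_backproj                              # EM-style diagonal preconditioner
    z = x - step * d * grad                            # preconditioned descent step
    z = np.maximum(z, 0.0)                             # projection onto the constraint x >= 0
    x = np.maximum(z - step * lam, 0.0)                # proximity (soft-threshold) step on z >= 0

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```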
Constrained optimization via simulation models for new product innovation
NASA Astrophysics Data System (ADS)
Pujowidianto, Nugroho A.
2017-11-01
We consider the problem of constrained optimization where decision makers aim to optimize a primary performance measure while constraining secondary performance measures. This paper provides a brief overview of stochastically constrained optimization via discrete event simulation. Most review papers tend to be methodology-based. This review attempts to be problem-based, as decision makers may have already decided on the problem formulation. We consider constrained optimization models because there are usually constraints on secondary performance measures as trade-offs in new product development. The paper starts by laying out different possible methods and the reasons for using constrained optimization via simulation models. It then reviews different simulation optimization approaches to constrained optimization depending on the number of decision variables, the type of constraints, and the risk preferences of the decision makers in handling uncertainties.
NASA Astrophysics Data System (ADS)
Reynolds, D.; Hall, I. R.; Slater, S. M.; Scourse, J. D.; Wanamaker, A. D.; Halloran, P. R.; Garry, F. K.
2017-12-01
Spatial network analyses of precisely dated, and annually resolved, tree-ring proxy records have facilitated robust reconstructions of past atmospheric climate variability and the associated mechanisms and forcings that drive it. In contrast, a lack of similarly dated marine archives has constrained the use of such techniques in the marine realm, despite the potential for developing a more robust understanding of the role basin-scale ocean dynamics play in the global climate system. Here we show that a spatial network of marine molluscan sclerochronological oxygen isotope (δ18Oshell) series spanning the North Atlantic region provides a skilful reconstruction of basin-scale North Atlantic sea surface temperatures (SSTs). Our analyses demonstrate that the composite marine series (referred to as δ18Oproxy_PC1) is significantly sensitive to inter-annual variability in North Atlantic SSTs (R=-0.61, P<0.01) and surface air temperatures (SATs; R=-0.67, P<0.01) over the 20th century. Subpolar gyre (SPG) SSTs dominate variability in the δ18Oproxy_PC1 series at sub-centennial frequencies (R=-0.51, P<0.01). Comparison of the δ18Oproxy_PC1 series against variability in the strength of the European Slope Current and the maximum North Atlantic meridional overturning circulation derived from numerical climate models (CMIP5) indicates that variability in the SPG region, associated with the strength of the surface currents of the North Atlantic, plays a significant role in shaping the multi-decadal scale SST variability over the industrial era. These analyses demonstrate that spatial networks developed from sclerochronological archives can provide powerful baseline archives of past ocean variability that can facilitate the development of a quantitative understanding of the role the oceans play in the global climate system and help constrain uncertainties in numerical climate models.
Multimaterial topology optimization of contact problems using phase field regularization
NASA Astrophysics Data System (ADS)
Myśliński, Andrzej
2018-01-01
A numerical method to solve multimaterial topology optimization problems for elastic bodies in unilateral contact with Tresca friction is developed in the paper. The displacement of the elastic body in contact is governed by an elliptic equation with inequality boundary conditions. The body is assumed to consist of more than two distinct isotropic elastic materials. The material distribution function is chosen as the design variable. Since high contact stress appears during the contact phenomenon, the aim of the structural optimization problem is to find a topology of the domain occupied by the body such that the normal contact stress along the boundary of the body is minimized. The original cost functional is regularized using the multiphase volume-constrained Ginzburg-Landau energy functional rather than the perimeter functional. The first-order necessary optimality condition is recalled and used to formulate the generalized gradient flow equations of Allen-Cahn type. The optimal topology is obtained as the steady state of the phase transition governed by the generalized Allen-Cahn equation. As the interface width parameter tends to zero, the transition of the phase field model to the level set model is studied. The optimization problem is solved numerically using the operator splitting approach combined with the projection gradient method. Numerical examples confirming the applicability of the proposed method are provided and discussed.
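An Allen-Cahn-type gradient-flow update of a phase-field design variable can be sketched as follows. This is a minimal illustration of the kind of step described above, not the paper's solver: the sensitivity term that would come from the contact problem is replaced by a constant placeholder, and all parameter values are assumptions.

```python
# Minimal sketch of an Allen-Cahn-type gradient-flow update for a phase-field design
# variable on a 2-D grid (explicit Euler). The "sensitivity" term standing in for the
# derivative of the contact-stress objective is a placeholder; the real model requires
# solving the unilateral contact problem at every step.
import numpy as np

n, dx, dt, eps = 64, 1.0 / 64, 1e-5, 0.02
phi = 0.5 + 0.05 * np.random.default_rng(0).standard_normal((n, n))  # initial material fraction

def laplacian(f, dx):
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f) / dx**2

def objective_sensitivity(phi):
    # placeholder for d(objective)/d(phi); assumed constant drive toward phi = 1 here
    return -0.1 * np.ones_like(phi)

for _ in range(500):
    dW = 2.0 * phi * (1.0 - phi) * (1.0 - 2.0 * phi)      # derivative of double-well W = phi^2 (1-phi)^2
    dphi = -(dW / eps - eps * laplacian(phi, dx) + objective_sensitivity(phi))
    phi = np.clip(phi + dt * dphi, 0.0, 1.0)              # keep the phase field in [0, 1]

print("mean material fraction:", phi.mean())
```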
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harlim, John, E-mail: jharlim@psu.edu; Mahdi, Adam, E-mail: amahdi@ncsu.edu; Majda, Andrew J., E-mail: jonjon@cims.nyu.edu
2014-01-15
A central issue in contemporary science is the development of nonlinear data driven statistical–dynamical models for time series of noisy partial observations from nature or a complex model. It has been established recently that ad-hoc quadratic multi-level regression models can have finite-time blow-up of statistical solutions and/or pathological behavior of their invariant measure. Recently, a new class of physics constrained nonlinear regression models were developed to ameliorate this pathological behavior. Here a new finite ensemble Kalman filtering algorithm is developed for estimating the state, the linear and nonlinear model coefficients, the model and the observation noise covariances from available partial noisy observations of the state. Several stringent tests and applications of the method are developed here. In the most complex application, the perfect model has 57 degrees of freedom involving a zonal (east–west) jet, two topographic Rossby waves, and 54 nonlinearly interacting Rossby waves; the perfect model has significant non-Gaussian statistics in the zonal jet with blocked and unblocked regimes and a non-Gaussian skewed distribution due to interaction with the other 56 modes. We only observe the zonal jet contaminated by noise and apply the ensemble filter algorithm for estimation. Numerically, we find that a three dimensional nonlinear stochastic model with one level of memory mimics the statistical effect of the other 56 modes on the zonal jet in an accurate fashion, including the skew non-Gaussian distribution and autocorrelation decay. On the other hand, a similar stochastic model with zero memory levels fails to capture the crucial non-Gaussian behavior of the zonal jet from the perfect 57-mode model.
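The core mechanism, joint estimation of state and model coefficients from partial noisy observations with an ensemble Kalman filter, can be sketched on a toy problem. The sketch below uses state augmentation on a scalar autoregression with an unknown damping coefficient; the toy model and all settings are assumptions, and the paper's algorithm additionally estimates noise covariances and nonlinear coefficients.

```python
# Minimal stochastic ensemble Kalman filter sketch that estimates a model parameter
# jointly with the state by state augmentation. Toy model and settings are assumptions.
import numpy as np

rng = np.random.default_rng(0)
a_true, sig_model, sig_obs = 0.9, 0.1, 0.2
n_steps, n_ens = 400, 100

# synthetic truth and noisy observations of the state only
x_truth, obs = 0.0, []
for _ in range(n_steps):
    x_truth = a_true * x_truth + sig_model * rng.standard_normal()
    obs.append(x_truth + sig_obs * rng.standard_normal())

# ensemble of augmented states z = [x, a]
Z = np.column_stack([rng.standard_normal(n_ens), 0.5 + 0.2 * rng.standard_normal(n_ens)])
H = np.array([[1.0, 0.0]])                       # we observe x only

for k in range(n_steps):
    # forecast: propagate each member with its own parameter estimate
    Z[:, 0] = Z[:, 1] * Z[:, 0] + sig_model * rng.standard_normal(n_ens)
    # analysis: Kalman update of the augmented state with perturbed observations
    A = Z - Z.mean(axis=0)
    P = A.T @ A / (n_ens - 1)
    S = (H @ P @ H.T)[0, 0] + sig_obs**2
    K = P @ H.T / S                              # (2, 1) gain
    y_pert = obs[k] + sig_obs * rng.standard_normal(n_ens)
    Z = Z + (y_pert - Z[:, 0])[:, None] * K.T

print("estimated damping coefficient:", Z[:, 1].mean(), " (truth 0.9)")
```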
The H I-to-H2 Transition in a Turbulent Medium
NASA Astrophysics Data System (ADS)
Bialy, Shmuel; Burkhart, Blakesley; Sternberg, Amiel
2017-07-01
We study the effect of density fluctuations induced by turbulence on the H I/H2 structure in photodissociation regions (PDRs) both analytically and numerically. We perform magnetohydrodynamic numerical simulations for both subsonic and supersonic turbulent gas and chemical H I/H2 balance calculations. We derive atomic-to-molecular density profiles and the H I column density probability density function (PDF) assuming chemical equilibrium. We find that, while the H I/H2 density profiles are strongly perturbed in turbulent gas, the mean H I column density is well approximated by the uniform-density analytic formula of Sternberg et al. The PDF width depends on (a) the radiation intensity-to-mean density ratio, (b) the sonic Mach number, and (c) the turbulence decorrelation scale, or driving scale. We derive an analytic model for the H I PDF and demonstrate how our model, combined with 21 cm observations, can be used to constrain the Mach number and driving scale of turbulent gas. As an example, we apply our model to observations of H I in the Perseus molecular cloud. We show that a narrow observed H I PDF may imply small-scale decorrelation, pointing to the potential importance of subcloud-scale turbulence driving.
The Next Generation of Numerical Modeling in Mergers- Constraining the Star Formation Law
NASA Astrophysics Data System (ADS)
Chien, Li-Hsin
2010-09-01
Spectacular images of colliding galaxies like the "Antennae", taken with the Hubble Space Telescope, have revealed that a burst of star/cluster formation occurs whenever gas-rich galaxies interact. The ages and locations of these clusters reveal the interaction history and provide crucial clues to the process of star formation in galaxies. We propose to carry out state-of-the-art numerical simulations to model six nearby galaxy mergers {Arp 256, NGC 7469, NGC 4038/39, NGC 520, NGC 2623, NGC 3256}, hence increasing the number with this level of sophistication by a factor of 3. These simulations provide specific predictions for the age and spatial distributions of young star clusters. The comparison between these simulation results and the observations will allow us to answer a number of fundamental questions including: 1} is shock-induced or density-dependent star formation the dominant mechanism; 2} are the demographics {i.e. mass and age distributions} of the clusters in different mergers similar, i.e. "universal", or very different; and 3} will it be necessary to include other mechanisms, e.g., locally triggered star formation, in the models to better match the observations?
NASA Astrophysics Data System (ADS)
Lemieux, J.-M.; Sudicky, E. A.; Peltier, W. R.; Tarasov, L.
2008-09-01
In the recent literature, it has been shown that Pleistocene glaciations had a large impact on North American regional groundwater flow systems. Because of the myriad of complex processes and large spatial scales involved during periods of glaciation, numerical models have become powerful tools to examine how ice sheets control subsurface flow systems. In this paper, the key processes that must be represented in a continental-scale 3-D numerical model of groundwater flow during a glaciation are reviewed, including subglacial infiltration, density-dependent (i.e., high-salinity) groundwater flow, permafrost evolution, isostasy, sea level changes, and ice sheet loading. One-dimensional hydromechanical coupling associated with ice loading and brine generation were included in the numerical model HydroGeoSphere and tested against newly developed exact analytical solutions to verify their implementation. Other processes such as subglacial infiltration, permafrost evolution, and isostasy were explicitly added to HydroGeoSphere. A specified flux constrained by the ice sheet thickness was found to be the most appropriate boundary condition in the subglacial environment. For the permafrost, frozen and unfrozen elements can be selected at every time step with specified hydraulic conductivities. For the isostatic adjustment, the elevations of all the grid nodes in each vertical grid column below the ice sheet are adjusted uniformly to account for the Earth's crust depression and rebound. In a companion paper, the model is applied to the Wisconsinian glaciation over the Canadian landscape in order to illustrate the concepts developed in this paper and to better understand the impact of glaciation on 3-D continental groundwater flow systems.
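One way to read the "specified flux constrained by the ice sheet thickness" boundary condition is as a meltwater flux that is applied only while the simulated head stays below the ice-overburden equivalent head. The sketch below is an assumed implementation of that rule for illustration, not the HydroGeoSphere source; the threshold logic and values are assumptions.

```python
# Sketch of a "specified flux constrained by ice-sheet thickness" boundary condition
# (one plausible reading of the idea above; rule and values are assumptions, not the
# HydroGeoSphere code). Meltwater is applied as a flux unless the simulated head already
# reaches the ice-overburden equivalent head, in which case the flux is shut off.

RHO_ICE_OVER_WATER = 0.917   # ratio of ice to water density

def subglacial_flux(melt_flux, head, bed_elev, ice_thickness):
    """Return the flux actually applied at a subglacial boundary node.

    melt_flux     : meltwater production available for infiltration [m/s]
    head          : current simulated hydraulic head at the node [m]
    bed_elev      : bed (land-surface) elevation at the node [m]
    ice_thickness : local ice-sheet thickness [m]
    """
    overburden_head = bed_elev + RHO_ICE_OVER_WATER * ice_thickness
    if head >= overburden_head:
        return 0.0              # head capped at the ice overburden: no further infiltration
    return melt_flux

# example with assumed values
print(subglacial_flux(melt_flux=1e-9, head=950.0, bed_elev=200.0, ice_thickness=1000.0))
print(subglacial_flux(melt_flux=1e-9, head=1200.0, bed_elev=200.0, ice_thickness=1000.0))
```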
Seismic depth imaging of sequence boundaries beneath the New Jersey shelf
NASA Astrophysics Data System (ADS)
Riedel, M.; Reiche, S.; Aßhoff, K.; Buske, S.
2018-06-01
Numerical modelling of fluid flow and transport processes relies on a well-constrained geological model, which is usually provided by seismic reflection surveys. In the New Jersey shelf area a large number of 2D seismic profiles provide an extensive database for constructing a reliable geological model. However, for the purpose of modelling groundwater flow, the seismic data need to be depth-converted which is usually accomplished using complementary data from borehole logs. Due to the limited availability of such data in the New Jersey shelf, we propose a two-stage processing strategy with particular emphasis on reflection tomography and pre-stack depth imaging. We apply this workflow to a seismic section crossing the entire New Jersey shelf. Due to the tomography-based velocity modelling, the processing flow does not depend on the availability of borehole logging data. Nonetheless, we validate our results by comparing the migrated depths of selected geological horizons to borehole core data from the IODP expedition 313 drill sites, located at three positions along our seismic line. The comparison yields that in the top 450 m of the migrated section, most of the selected reflectors were positioned with an accuracy close to the seismic resolution limit (≈ 4 m) for that data. For deeper layers the accuracy still remains within one seismic wavelength for the majority of the tested horizons. These results demonstrate that the processed seismic data provide a reliable basis for constructing a hydrogeological model. Furthermore, the proposed workflow can be applied to other seismic profiles in the New Jersey shelf, which will lead to an even better constrained model.
A joint-space numerical model of metabolic energy expenditure for human multibody dynamic system.
Kim, Joo H; Roberts, Dustyn
2015-09-01
Metabolic energy expenditure (MEE) is a critical performance measure of human motion. In this study, a general joint-space numerical model of MEE is derived by integrating the laws of thermodynamics and principles of multibody system dynamics, which can evaluate MEE without the limitations inherent in experimental measurements (phase delays, steady state and task restrictions, and limited range of motion) or muscle-space models (complexities and indeterminacies from excessive DOFs, contacts and wrapping interactions, and reliance on in vitro parameters). Muscle energetic components are mapped to the joint space, in which the MEE model is formulated. A constrained multi-objective optimization algorithm is established to estimate the model parameters from experimental walking data also used for initial validation. The joint-space parameters estimated directly from active subjects provide reliable MEE estimates with a mean absolute error of 3.6 ± 3.6% relative to validation values, which can be used to evaluate MEE for complex non-periodic tasks that may not be experimentally verifiable. This model also enables real-time calculations of instantaneous MEE rate as a function of time for transient evaluations. Although experimental measurements may not be completely replaced by model evaluations, predicted quantities can be used as strong complements to increase reliability of the results and yield unique insights for various applications. Copyright © 2015 John Wiley & Sons, Ltd.
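The parameter-estimation step, fitting joint-space metabolic-cost coefficients to measured energy-rate data with a constrained optimizer, can be sketched as below. The model form (MEE rate as a weighted sum of positive and negative joint mechanical power plus a basal term) is a simplified placeholder rather than the paper's thermodynamic formulation, and the data and bounds are synthetic assumptions.

```python
# Hedged sketch of fitting joint-space metabolic-cost parameters by constrained
# optimization. Model form, data, and bounds are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_samples, n_joints = 300, 3
power = rng.standard_normal((n_samples, n_joints))            # joint mechanical power [W], synthetic

true_params = np.array([1.2, 1.2, 1.2, 0.8, 0.8, 0.8, 70.0])  # +/- power costs per joint + basal rate

def mee_rate(params, power):
    w_pos, w_neg, basal = params[:n_joints], params[n_joints:2 * n_joints], params[-1]
    return (np.maximum(power, 0) @ w_pos) + (np.maximum(-power, 0) @ w_neg) + basal

measured = mee_rate(true_params, power) + 2.0 * rng.standard_normal(n_samples)

def objective(params):
    return np.mean((mee_rate(params, power) - measured) ** 2)

bounds = [(0.0, 10.0)] * (2 * n_joints) + [(0.0, 200.0)]       # physiological bounds (assumed)
res = minimize(objective, x0=np.ones(2 * n_joints + 1), bounds=bounds, method="L-BFGS-B")
print("estimated parameters:", np.round(res.x, 2))
```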
NASA Astrophysics Data System (ADS)
Das, Debasish; Saintillan, David
2015-11-01
The deformation of leaky dielectric drops in a dielectric fluid medium when subject to a uniform electric field is a classic electrohydrodynamic phenomenon best described by the well-known Melcher-Taylor leaky dielectric model. In this work, we develop a three-dimensional boundary element method for the full leaky dielectric model to systematically study the deformation and dynamics of liquid drops in strong electric fields. We compare our results with existing numerical studies, most of which have been constrained to axisymmetric drops or have neglected interfacial charge convection by the flow. The leading effect of convection is to enhance deformation of prolate drops and suppress deformation of oblate drops, as previously observed in the axisymmetric case. The inclusion of charge convection also enables us to investigate the dynamics in the Quincke regime, in which experiments exhibit a symmetry-breaking bifurcation leading to a tank-treading regime. Our simulations confirm the existence of this bifurcation for highly viscous drops, and also reveal the development of sharp interfacial charge gradients driven by convection near the drop's equator. American Chemical Society, Petroleum Research Fund.
Transient dynamics of vulcanian explosions and column collapse.
Clarke, A B; Voight, B; Neri, A; Macedonio, G
2002-02-21
Several analytical and numerical eruption models have provided insight into volcanic eruption behaviour, but most address plinian-type eruptions where vent conditions are quasi-steady. Only a few studies have explored the physics of short-duration vulcanian explosions with unsteady vent conditions and blast events. Here we present a technique that links unsteady vent flux of vulcanian explosions to the resulting dispersal of volcanic ejecta, using a numerical, axisymmetric model with multiple particle sizes. We use observational data from well documented explosions in 1997 at the Soufrière Hills volcano in Montserrat, West Indies, to constrain pre-eruptive subsurface initial conditions and to compare with our simulation results. The resulting simulations duplicate many features of the observed explosions, showing transitional behaviour where mass is divided between a buoyant plume and hazardous radial pyroclastic currents fed by a collapsing fountain. We find that leakage of volcanic gas from the conduit through surrounding rocks over a short period (of the order of 10 hours) or retarded exsolution can dictate the style of explosion. Our simulations also reveal the internal plume dynamics and particle-size segregation mechanisms that may occur in such eruptions.
Numerical modelling of flow through foam's node.
Anazadehsayed, Abdolhamid; Rezaee, Nastaran; Naser, Jamal
2017-10-15
In this work, for the first time, a three-dimensional model to describe the dynamics of flow through the geometric Plateau border and node components of foam is presented. The model involves a microscopic-scale structure of one interior node and four Plateau borders at an angle of 109.5° from each other. The majority of the surfaces in the model form a liquid-gas interface where the boundary condition of stress balance between the surface and bulk is applied. The three-dimensional Navier-Stokes equations, along with the continuity equation, are solved using the finite volume approach. The numerical results are validated against the available experimental results for the flow velocity and resistance in the interior nodes and Plateau borders. A qualitative illustration of flow in a node in different orientations is shown. The scaled resistance against the flow for different liquid-gas interface mobilities is studied, and the geometrical characteristics of the node and Plateau border components of the system are compared to investigate the Plateau-border-dominated and node-dominated flow regimes numerically. The findings show the values of the resistance in each component, in addition to the exact point where the flow regimes switch. Furthermore, a more accurate effect of the liquid-gas interface on the foam flow, particularly in the presence of a node in the foam network, is obtained. The comparison of the available numerical results with our numerical results shows that the velocity of the node-PB system is lower than the velocity of a single PB system for mobile interfaces. This is because, despite the more relaxed geometrical structure of the node, the constraining effect of merging and mixing of flow and the increased viscous damping in the node component result in the node-dominated regime. Moreover, we obtain an accurate updated correlation for the dependence of the scaled average velocity of the node-Plateau border system on the liquid-gas interface mobility described by the Boussinesq number. Copyright © 2017 Elsevier Inc. All rights reserved.
Origin and thermal evolution of Mars
NASA Technical Reports Server (NTRS)
Schubert, Gerald; Soloman, S. C.; Turcotte, D. L.; Drake, M. J.; Sleep, N. H.
1990-01-01
The thermal evolution of Mars is governed by subsolidus mantle convection beneath a thick lithosphere. Models of the interior evolution are developed by parameterizing mantle convective heat transport in terms of mantle viscosity, the superadiabatic temperature rise across the mantle, and mantle heat production. Geological, geophysical, and geochemical observations of the composition and structure of the interior and of the timing of major events in Martian evolution are used to constrain the model computations. Such evolutionary events include global differentiation, atmospheric outgassing, and the formation of the hemispherical dichotomy and Tharsis. Numerical calculations of fully three-dimensional, spherical convection in a shell the size of the Martian mantle are performed to explore plausible patterns of Martian mantle convection and to relate convective features, such as plumes, to surface features, such as Tharsis. The results from the model calculations are presented.
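A parameterized thermal-history calculation of this general type integrates a single heat-balance ODE, with the surface heat flux set by a Nusselt-Rayleigh scaling and a temperature-dependent viscosity. The sketch below is a rough illustration of that scheme; all property values, the scaling exponent, and the heating law are assumed order-of-magnitude numbers for a Mars-like mantle, not the paper's.

```python
# Hedged sketch of a parameterized thermal-history calculation: mantle heat loss is
# parameterized by a Nu-Ra scaling with temperature-dependent viscosity and balanced
# against decaying radiogenic heating. Values are rough assumptions, not the paper's.
import numpy as np

# assumed physical constants / properties
rho, cp, alpha, k, kappa = 3500.0, 1200.0, 3e-5, 4.0, 1e-6
g, D = 3.7, 1.7e6                       # gravity [m/s^2], mantle thickness [m]
A_surf, V_mantle = 1.4e14, 1.6e20       # surface area [m^2], mantle volume [m^3]
Ra_c, beta = 1000.0, 0.3                # critical Rayleigh number, Nu-Ra exponent
eta0, A_visc = 1e21, 3e4                # viscosity prefactor [Pa s], activation temperature [K]
H0, lam = 5e-8, 1.4e-17                 # initial radiogenic heating [W/m^3], decay const [1/s]

def step(T, t, dt):
    eta = eta0 * np.exp(A_visc / T - A_visc / 1600.0)       # temperature-dependent viscosity
    Ra = rho * g * alpha * (T - 220.0) * D**3 / (kappa * eta)
    Nu = max(Ra / Ra_c, 1.0) ** beta                         # convective heat-transport efficiency
    q_surf = Nu * k * (T - 220.0) / D                        # surface heat flux [W/m^2]
    H = H0 * np.exp(-lam * t)                                # decaying radiogenic heating
    dTdt = (H * V_mantle - q_surf * A_surf) / (rho * cp * V_mantle)
    return T + dTdt * dt

T, t, dt = 2000.0, 0.0, 1e6 * 3.15e7    # start hot; 1 Myr steps
for _ in range(4500):                   # integrate ~4.5 Gyr
    T = step(T, t, dt)
    t += dt
print("present-day mean mantle temperature (K):", round(T, 1))
```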
Yan, Yi; Adam, Brian; Galinski, Mary; C Kissinger, Jessica; Moreno, Alberto; Gutierrez, Juan B
2015-12-01
We developed a coupled age-structured partial differential equation model to capture the disease dynamics during blood-stage malaria. The addition of age structure for the parasite population, with respect to previous models, allows us to better characterize the interaction between the malaria parasite and red blood cells during infection. Here we prove that the system we propose is well-posed and there exist at least two global states. We further demonstrate that the numerical simulation of the system coincides with clinically observed outcomes of primary and secondary malaria infection. The well-posedness of this system guarantees that the behavior of the model remains smooth, bounded, and continuously dependent on initial conditions; calibration with clinical data will constrain domains of parameters and variables to physiological ranges. Copyright © 2015 Elsevier Inc. All rights reserved.
Mergers of Black-Hole Binaries with Aligned Spins: Waveform Characteristics
NASA Technical Reports Server (NTRS)
Kelly, Bernard J.; Baker, John G.; vanMeter, James R.; Boggs, William D.; McWilliams, Sean T.; Centrella, Joan
2011-01-01
"We apply our gravitational-waveform analysis techniques, first presented in the context of nonspinning black holes of varying mass ratio [1], to the complementary case of equal-mass spinning black-hole binary systems. We find that, as with the nonspinning mergers, the dominant waveform modes phases evolve together in lock-step through inspiral and merger, supporting the previous model of the binary system as an adiabatically rigid rotator driving gravitational-wave emission - an implicit rotating source (IRS). We further apply the late-merger model for the rotational frequency introduced in [1], along with a new mode amplitude model appropriate for the dominant (2, plus or minus 2) modes. We demonstrate that this seven-parameter model performs well in matches with the original numerical waveform for system masses above - 150 solar mass, both when the parameters are freely fit, and when they are almost completely constrained by physical considerations."
An Optimization Model for the Selection of Bus-Only Lanes in a City.
Chen, Qun
2015-01-01
The planning of urban bus-only lane networks is an important measure to improve bus service and bus priority. To determine the effective arrangement of bus-only lanes, a bi-level programming model for urban bus lane layout is developed in this study that considers accessibility and budget constraints. The goal of the upper-level model is to minimize the total travel time, and the lower-level model is a capacity-constrained traffic assignment model that describes the passenger flow assignment on bus lines, in which the priority sequence of the transfer times is reflected in the passengers' route-choice behaviors. Using the proposed bi-level programming model, optimal bus lines are selected from a set of candidate bus lines; thus, the corresponding bus lane network on which the selected bus lines run is determined. The solution method using a genetic algorithm in the bi-level programming model is developed, and two numerical examples are investigated to demonstrate the efficacy of the proposed model.
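The bi-level structure described above can be sketched with a simple genetic algorithm for the upper level and a placeholder in place of the lower-level assignment. This is a hedged illustration only: the costs, budget, and the travel-time evaluator are synthetic assumptions, and the real lower level is a capacity-constrained transit assignment, not the toy function used here.

```python
# Hedged sketch of the bi-level structure: a GA chooses which candidate bus lines get
# dedicated lanes subject to a budget; a placeholder stands in for the lower-level
# capacity-constrained assignment that would return total travel time.
import numpy as np

rng = np.random.default_rng(0)
n_lines = 12
lane_cost = rng.uniform(1.0, 4.0, n_lines)          # cost of giving each line a bus lane (assumed)
base_saving = rng.uniform(0.5, 3.0, n_lines)        # stand-alone travel-time saving per line (assumed)
budget = 12.0

def total_travel_time(selection):
    # placeholder for the lower-level assignment model: diminishing returns on savings
    saving = base_saving @ selection
    return 100.0 - saving + 0.05 * saving**2

def fitness(selection):
    if lane_cost @ selection > budget:
        return -np.inf                               # infeasible: violates the budget constraint
    return -total_travel_time(selection)

pop = rng.integers(0, 2, size=(40, n_lines))
for _ in range(200):
    f = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(f)[-20:]]               # truncation selection
    children = []
    for _ in range(40):
        a, b = parents[rng.integers(20)], parents[rng.integers(20)]
        cut = rng.integers(1, n_lines)                # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(n_lines) < 0.05             # mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.array(children)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected lines:", np.nonzero(best)[0], "cost:", round(lane_cost @ best, 2))
```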
Understanding the Physical Structure of the Comet Shoemaker-Levy 9 Fragments
NASA Astrophysics Data System (ADS)
Rettig, Terrence
2000-07-01
Images of the fragmented comet Shoemaker-Levy 9 {SL9} as it approached Jupiter in 1994 provided a unique opportunity to {1} probe the comae, {2} understand the structure of the 20 cometary objects, and {3} provide limits on the Jovian impact parameters. The primary cometary questions were: how were the fragments formed and what was their central structure? There still remains a diversity of opinion regarding the structure of the 21 comet-like fragments as well as the specifics of the disruption event itself. We have shown from Monte Carlo modeling of surface brightness profiles that SL9 fragments had unusual dust size distributions and outflow velocities. Further work of a preliminary nature showed that some of the central reflecting area excesses derived from surface brightness profile fitting {w/psf} appeared distributed rather than centrally concentrated, as would be expected for comet-like objects; that some central excesses were negative; and that the excesses could vary with time. With an improved coma subtraction technique we propose to model each coma surface brightness profile and extract central reflecting areas or central brightness excesses for the non-star-contaminated WFPC-2 SL9 images, to determine the behavior and characteristics of the central excesses as the fragments approached Jupiter. A second phase of the proposal will be to use numerical techniques {in conjunction with D. Richardson} to investigate the various fragment models. This is a difficult modeling process that will allow us to model the structure and physical characteristics of the fragments and thus constrain parameters for the Jovian impact events. The results will be used to constrain the structure of the central fragment cores of SL9 and how the observed dust comae were produced. The results will provide evidence to discriminate between the parent nucleus models {i.e., were the fragments solid objects or swarms of particles?} and provide better constraints on the atmospheric impact models. The physical characteristics of cometary nuclei are not well understood and the SL9 data provide an important opportunity to constrain these parameters.
Modeling Wide-Angle Seismic Data from the Hi-CLIMB Experiment in Tibet
NASA Astrophysics Data System (ADS)
Nowack, R. L.; Griffin, J. D.; Tseng, T.; Chen, W.
2009-12-01
Using data from local and regional events recorded by the Hi-CLIMB array in Tibet, we utilize seismic attributes, including arrival times, Hilbert amplitudes and pulse frequencies, to constrain structures of seismic wave speed and attenuation in the crust and the upper mantle in western China. We construct more than 30 high-quality, regional seismic profiles, and select 14 of these, which show excellent crustal and Pn arrivals, for further analysis. Travel-times from events at regional distances constrain large-scale velocity structures, and four close-in events provide further details on crustal structure. We use the 3-D ray tracer, CRT, to model the travel-times. Initial results indicate that the Moho beneath the Lhasa terrane of southern Tibet is over 73 km deep with a high Pn speed of about 8.2 km/s. In contrast, the Qiangtang terrane farther north shows a thinner crust, by up to 10 km, and a low Pn speed of 7.8-7.9 km/s. Preliminary estimates of upper mantle velocity gradients are between .003 and .004 km/s per km, consistent with previous results by Phillips et al. (2007). We also use P to SV conversions from teleseismic earthquakes to independently constrain variations in speeds of Pn and depths of the Moho. For instance, amplitudes of the SsPmP phase, when its last reflection off the Moho is near-critical, are particularly sensitive to the contrast in seismic wave speeds across the crust-mantle interface; and results from these additional data are consistent with those from modeling of travel-times. Additional seismic attributes, extracted from wave-trains containing Pn and major crustal phases, are being compared with results of numerical modeling based on the spectral element method and asymptotic calculations in laterally varying media, where both lateral and vertical gradients in seismic wave speeds can strongly affect Pn amplitudes and pulse frequencies.
NASA Astrophysics Data System (ADS)
Liu, L.; Hu, J.; Zhou, Q.
2016-12-01
The rapid accumulation of geophysical and geological data sets poses an increasing demand for the development of geodynamic models to better understand the evolution of the solid Earth. Consequently, the earlier qualitative physical models are no longer satisfactory. Recent efforts are focusing on more quantitative simulations and more efficient numerical algorithms. Among these, a particular line of research is the implementation of data-oriented geodynamic modeling, with the purpose of building an observationally consistent and physically correct geodynamic framework. Such models can often catalyze new insights into the functioning mechanisms of the various aspects of plate tectonics, and their predictive nature can also guide future research in a deterministic fashion. Over the years, we have been working on constructing large-scale geodynamic models with both sequential and variational data assimilation techniques. These models act as a bridge between different observational records, and the superposition of the constraining power from different data sets helps reveal unknown processes and mechanisms of the dynamics of the mantle and lithosphere. We simulate the post-Cretaceous subduction history in South America using a forward (sequential) approach. The model is constrained using past subduction history, seafloor age evolution, the tectonic architecture of continents, and present-day geophysical observations. Our results quantify the various driving forces shaping the present South American flat slabs, which we find are all internally torn. The 3-D geometry of these torn slabs further explains the abnormal seismicity pattern and enigmatic volcanic history. An inverse (variational) model simulating the late Cenozoic western U.S. mantle dynamics with similar constraints reveals a mechanism for the formation of Yellowstone-related volcanism that differs from the traditional understanding. Furthermore, important insights on the mantle density and viscosity structures also emerge from these models.
Sliding Mode Control of a Slewing Flexible Beam
NASA Technical Reports Server (NTRS)
Wilson, David G.; Parker, Gordon G.; Starr, Gregory P.; Robinett, Rush D., III
1997-01-01
An output feedback sliding mode controller (SMC) is proposed to minimize the effects of vibrations of slewing flexible manipulators. A spline trajectory is used to generate ideal position and velocity commands. Constrained nonlinear optimization techniques are used to both calibrate nonlinear models and determine optimized gains to produce a rest-to-rest, residual vibration-free maneuver. Vibration-free maneuvers are important for current and future NASA space missions. This study required the development of the nonlinear dynamic system equations of motion; robust control law design; numerical implementation; system identification; and verification using the Sandia National Laboratories flexible robot testbed. Results are shown for a slewing flexible beam.
On-Board Generation of Three-Dimensional Constrained Entry Trajectories
NASA Technical Reports Server (NTRS)
Shen, Zuojun; Lu, Ping; Jackson, Scott (Technical Monitor)
2002-01-01
A methodology for very fast design of 3DOF entry trajectories subject to all common inequality and equality constraints is developed. The approach makes novel use of the well-known quasi-equilibrium glide phenomenon in lifting entry as a centerpiece for conveniently enforcing the inequality constraints, which are otherwise difficult to handle. The algorithm is able to generate a complete feasible 3DOF entry trajectory, given the entry conditions, values of constraint parameters, and final conditions, in about 2 seconds on a PC. Numerical simulations with the X-33 vehicle model for various entry missions to land at Kennedy Space Center will be presented.
Constrained reduced-order models based on proper orthogonal decomposition
Reddy, Sohail R.; Freno, Brian Andrew; Cizmas, Paul G. A.; ...
2017-04-09
A novel approach is presented to constrain reduced-order models (ROM) based on proper orthogonal decomposition (POD). The Karush–Kuhn–Tucker (KKT) conditions were applied to the traditional reduced-order model to constrain the solution to user-defined bounds. The constrained reduced-order model (C-ROM) was applied and validated against the analytical solution to the first-order wave equation. C-ROM was also applied to the analysis of fluidized beds. Lastly, it was shown that the ROM and C-ROM produced accurate results and that C-ROM was less sensitive to error propagation through time than the ROM.
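The basic idea, a POD basis built from snapshots with reduced coefficients determined under inequality constraints on the reconstructed field (the situation the KKT conditions formalize), can be sketched as follows. The data, bounds, and choice of solver are illustrative assumptions, not the authors' formulation or their fluidized-bed application.

```python
# Hedged sketch of a constrained POD reconstruction: build a POD basis with the SVD,
# then find reduced coefficients by minimizing the least-squares misfit subject to
# bounds on the reconstructed field. Data, bounds, and solver are assumptions.
import numpy as np
from scipy.optimize import minimize, LinearConstraint

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
snapshots = np.column_stack([np.clip(np.sin(2 * np.pi * (x - 0.05 * i)), 0.0, None)
                             for i in range(20)])

# POD basis from the snapshot matrix
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
Phi = U[:, :5]                                    # keep 5 POD modes

target = np.clip(np.sin(2 * np.pi * (x - 0.12)), 0.0, None)   # field to represent

# unconstrained ROM coefficients (plain least-squares projection)
a_uncon = Phi.T @ target
recon_uncon = Phi @ a_uncon

# constrained ROM: same misfit, but the reconstructed field must stay in [0, 1]
obj = lambda a: 0.5 * np.sum((Phi @ a - target) ** 2)
con = LinearConstraint(Phi, 0.0, 1.0)             # 0 <= Phi a <= 1 pointwise
res = minimize(obj, a_uncon, constraints=[con], method="trust-constr")
recon_con = Phi @ res.x

print("min of unconstrained reconstruction:", recon_uncon.min())
print("min of constrained reconstruction:  ", recon_con.min())
```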
Stability analysis of magnetized neutron stars - a semi-analytic approach
NASA Astrophysics Data System (ADS)
Herbrik, Marlene; Kokkotas, Kostas D.
2017-04-01
We implement a semi-analytic approach for stability analysis, addressing the ongoing uncertainty about stability and structure of neutron star magnetic fields. Applying the energy variational principle, a model system is displaced from its equilibrium state. The related energy density variation is set up analytically, whereas its volume integration is carried out numerically. This facilitates the consideration of more realistic neutron star characteristics within the model compared to analytical treatments. At the same time, our method retains the possibility to yield general information about neutron star magnetic field and composition structures that are likely to be stable. In contrast to numerical studies, classes of parametrized systems can be studied at once, finally constraining realistic configurations for interior neutron star magnetic fields. We apply the stability analysis scheme on polytropic and non-barotropic neutron stars with toroidal, poloidal and mixed fields testing their stability in a Newtonian framework. Furthermore, we provide the analytical scheme for dropping the Cowling approximation in an axisymmetric system and investigate its impact. Our results confirm the instability of simple magnetized neutron star models as well as a stabilization tendency in the case of mixed fields and stratification. These findings agree with analytical studies whose spectrum of model systems we extend by lifting former simplifications.
a Numerical Investigation of the Jamming Transition in Traffic Flow on Diluted Planar Networks
NASA Astrophysics Data System (ADS)
Achler, Gabriele; Barra, Adriano
In order to develop a toy model for car traffic in cities, in this paper we analyze, by means of numerical simulations, the transition between fluid regimes and a congested, jammed phase of the flow of kinetically constrained hard spheres on planar random networks similar to urban roads. In order to explore as many timescales as possible, at a microscopic level we implement an event-driven dynamics as the infinite-time limit of a class of already existing models (Follow the Leader) on an Erdos-Renyi two-dimensional graph, the crossroads being accounted for by standard Kirchhoff density conservation. We define a dynamical order parameter as the ratio of moving spheres to the total number and, by varying two control parameters (density of the spheres and coordination number of the network), we study the phase transition. At a mesoscopic level, the model respects a suitably adapted version of the Lighthill-Whitham model, which belongs to the fluid-dynamical approach to the problem. At a macroscopic level, the model seems to display a continuous transition from a fluid phase to a jammed phase when varying the density of the spheres (the amount of cars in a city-like scenario) and a discontinuous jump when varying the connectivity of the underlying network.
Dynamic characterization of high damping viscoelastic materials from vibration test data
NASA Astrophysics Data System (ADS)
Martinez-Agirre, Manex; Elejabarrieta, María Jesús
2011-08-01
The numerical analysis and design of structural systems involving viscoelastic damping materials require knowledge of material properties and proper mathematical models. A new inverse method for the dynamic characterization of high damping and strong frequency-dependent viscoelastic materials from vibration test data measured by forced vibration tests with resonance is presented. Classical material parameter extraction methods are reviewed; their accuracy for characterizing high damping materials is discussed; and the bases of the new analysis method are detailed. The proposed inverse method minimizes the residue between the experimental and theoretical dynamic response at certain discrete frequencies selected by the user in order to identify the parameters of the material constitutive model. Thus, the material properties are identified in the whole bandwidth under study and not just at resonances. Moreover, the use of control frequencies makes the method insensitive to experimental noise and the efficiency is notably enhanced. Therefore, the number of tests required is drastically reduced and the overall process is carried out faster and more accurately. The effectiveness of the proposed method is demonstrated with the characterization of a CLD (constrained layer damping) cantilever beam. First, the elastic properties of the constraining layers are identified from the dynamic response of a metallic cantilever beam. Then, the viscoelastic properties of the core, represented by a four-parameter fractional derivative model, are identified from the dynamic response of a CLD cantilever beam.
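The fitting idea can be sketched for the four-parameter fractional derivative (fractional Zener) complex modulus model, E*(ω) = (E0 + E∞(iωτ)^α) / (1 + (iωτ)^α), by minimizing the residue between "measured" and modeled values at a set of control frequencies. The data below are synthetic and the solver choice is an assumption; in the paper the parameters are identified through the dynamic response of a CLD beam, not by a direct modulus fit.

```python
# Hedged sketch: fit a four-parameter fractional derivative (fractional Zener) model
#     E*(w) = (E0 + Einf*(i*w*tau)**alpha) / (1 + (i*w*tau)**alpha)
# by minimizing the residue at control frequencies. Synthetic data, assumed values.
import numpy as np
from scipy.optimize import least_squares

def complex_modulus(params, w):
    E0, Einf, tau, alpha = params
    iwt = (1j * w * tau) ** alpha
    return (E0 + Einf * iwt) / (1.0 + iwt)

# synthetic "measurements" at control frequencies, with assumed true parameters
true = np.array([0.5e6, 50e6, 1e-4, 0.6])            # [Pa, Pa, s, -]
w = 2 * np.pi * np.logspace(1, 4, 40)                # 10 Hz to 10 kHz
rng = np.random.default_rng(0)
meas = complex_modulus(true, w) * (1 + 0.02 * rng.standard_normal(w.size))

def residue(params):
    model = complex_modulus(params, w)
    return np.concatenate([(model.real - meas.real) / np.abs(meas),
                           (model.imag - meas.imag) / np.abs(meas)])

fit = least_squares(residue, x0=[1e6, 10e6, 1e-3, 0.5],
                    bounds=([1e4, 1e6, 1e-7, 0.1], [1e8, 1e9, 1e-1, 1.0]))
print("identified [E0, Einf, tau, alpha]:", fit.x)
```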
Dissipative dark matter halos: The steady state solution
NASA Astrophysics Data System (ADS)
Foot, R.
2018-02-01
Dissipative dark matter, where dark matter particle properties closely resemble familiar baryonic matter, is considered. Mirror dark matter, which arises from an isomorphic hidden sector, is a specific and theoretically constrained scenario. Other possibilities include models with more generic hidden sectors that contain massless dark photons [unbroken U (1 ) gauge interactions]. Such dark matter not only features dissipative cooling processes but also is assumed to have nontrivial heating sourced by ordinary supernovae (facilitated by the kinetic mixing interaction). The dynamics of dissipative dark matter halos around rotationally supported galaxies, influenced by heating as well as cooling processes, can be modeled by fluid equations. For a sufficiently isolated galaxy with a stable star formation rate, the dissipative dark matter halos are expected to evolve to a steady state configuration which is in hydrostatic equilibrium and where heating and cooling rates locally balance. Here, we take into account the major cooling and heating processes, and numerically solve for the steady state solution under the assumptions of spherical symmetry, negligible dark magnetic fields, and that supernova sourced energy is transported to the halo via dark radiation. For the parameters considered, and assumptions made, we were unable to find a physically realistic solution for the constrained case of mirror dark matter halos. Halo cooling generally exceeds heating at realistic halo mass densities. This problem can be rectified in more generic dissipative dark matter models, and we discuss a specific example in some detail.
Modeling sustainable reuse of nitrogen-laden wastewater by poplar.
Wang, Yusong; Licht, Louis; Just, Craig
2016-01-01
Numerical modeling was used to simulate the leaching of nitrogen (N) to groundwater as a consequence of irrigating food processing wastewater onto grass and poplar under various management scenarios. Under current management practices for a large food processor, a simulated annual N loading of 540 kg ha(-1) yielded 93 kg ha(-1) of N leaching for grass and no N leaching for poplar during the growing season. Increasing the annual growing season N loading to approximately 1,550 kg ha(-1) for poplar only, using "weekly", "daily" and "calculated" irrigation scenarios, yielded N leaching of 17 kg ha(-1), 6 kg ha(-1), and 4 kg ha(-1), respectively. Constraining the simulated irrigation schedule by the current onsite wastewater storage capacity of approximately 757 megaliters (Ml) yielded N leaching of 146 kg ha(-1) yr(-1) while storage capacity scenarios of 3,024 and 4,536 Ml yielded N leaching of 65 and 13 kg ha(-1) yr(-1), respectively, for a loading of 1,550 kg ha(-1) yr(-1). Further constraining the model by the current wastewater storage volume and the available land area (approximately 1,000 hectares) required a "diverse" irrigation schedule that was predicted to leach a weighted average of 13 kg-N ha(-1) yr(-1) when dosed with 1,063 kg-N ha(-1) yr(-1).
Constraining the phantom braneworld model from cosmic structure sizes
NASA Astrophysics Data System (ADS)
Bhattacharya, Sourav; Kousvos, Stefanos R.
2017-11-01
We consider the phantom braneworld model in the context of the maximum turnaround radius, RTA,max, of a stable, spherical cosmic structure with a given mass. The maximum turnaround radius is the point where the attraction due to the central inhomogeneity is balanced by the repulsion of the ambient dark energy, beyond which a structure cannot hold any mass, thereby giving an upper bound on the size of a stable structure. In this work we derive an analytical expression for RTA,max in this model using cosmological scalar perturbation theory. Using this, we numerically constrain the parameter space, including a bulk cosmological constant and the Weyl fluid, from the mass versus observed size data for some nearby, nonvirial cosmic structures. We use different values of the matter density parameter Ωm, both larger and smaller than that of the Λ cold dark matter model, as input in our analysis. We show, in particular, that (a) with a vanishing bulk cosmological constant the predicted upper bound is always greater than what is actually observed, and a similar conclusion holds if the bulk cosmological constant is negative; (b) if it is positive, the predicted maximum size can go considerably below what is actually observed, and owing to the involved nature of the field equations, this leads to interesting constraints not only on the bulk cosmological constant itself but on the whole parameter space of the theory.
Common reflection point migration and velocity analysis for anisotropic media
NASA Astrophysics Data System (ADS)
Oropeza, Ernesto V.
An efficient Kirchhoff-style prestack depth migration, called 'parsimonious' migration, was developed a decade ago for isotropic 2D and 3D media. The common-reflection point (CRP) migration velocity analysis (MVA) was developed later for isotropic media. Isotropic parsimonious migration produces incorrect images when the medium is actually anisotropic. Similarly, isotropic CRP MVA produces incorrect inversions when the medium is anisotropic. In this study, both parsimonious depth migration and common-reflection point migration velocity analysis are extended for application to 2D tilted transversely isotropic (TTI) media and illustrated with synthetic P-wave data. While the framework of isotropic parsimonious migration may be retained, the extension to TTI media requires redevelopment of each of the numerical components, including calculation of the phase and group velocity for TTI media, development of a new two-point anisotropic ray tracer, and substitution of an initial-angle, anisotropic shooting ray-trace algorithm for the isotropic one. The 2D model parameterization consists of Thomsen's parameters (Vpo, epsilon, delta) and the tilt angle of the symmetry axis of the TI medium. The parsimonious anisotropic migration algorithm is successfully applied to synthetic data from a TTI version of the Marmousi-2 model. The quality of the image improves when the impulse response is weighted using the anisotropic Fresnel radius. The accuracy and speed of this migration make it useful for anisotropic velocity model building. The common-reflection point migration velocity analysis for TTI media for P-waves includes (and inverts for) Vpo, epsilon, and delta. The orientation of the anisotropic symmetry axis has to be constrained. If it is constrained orthogonal to the layer bottom (as it conventionally is), it is estimated at each CRP and updated at each iteration without intermediate picking. The extension to TTI media requires development of a new inversion procedure to include Vpo, epsilon, and delta in the perturbations. The TTI CRP MVA is applied to a single layer to demonstrate its feasibility. Errors in the estimation of the orientation of the symmetry axis larger than 5 degrees affect the inversion of epsilon and delta, while Vpo is less sensitive to this parameter. The TTI CRP MVA is also applied to a version of the TTI BP model by layer stripping, so that one group of CRPs at a time is used to do the inversion from top to bottom, constraining the model parameters after each previous group of CRPs converges. Vpo, delta, and the orientation of the anisotropic symmetry axis (constrained orthogonal to the local reflector orientation) are successfully inverted. Epsilon is less well constrained owing to the small acquisition aperture of the data.
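The kinematic building block mentioned above, computing phase and group velocity from Thomsen parameters for a tilted symmetry axis, can be sketched in the weak-anisotropy approximation, v(θ) ≈ Vp0(1 + δ sin²θ cos²θ + ε sin⁴θ) with |Vg| = sqrt(v² + (dv/dθ)²), where θ is measured from the symmetry axis. The numbers are illustrative, and the thesis's ray tracer presumably uses exact TTI expressions rather than this weak-anisotropy form.

```python
# Hedged sketch: weak-anisotropy P-wave phase velocity from Thomsen parameters and the
# corresponding group-velocity magnitude, for a tilted symmetry axis. Values assumed.
import numpy as np

def p_phase_velocity(theta, vp0, eps, delta):
    s, c = np.sin(theta), np.cos(theta)
    return vp0 * (1.0 + delta * s**2 * c**2 + eps * s**4)

def p_group_speed(theta, vp0, eps, delta, dtheta=1e-5):
    v = p_phase_velocity(theta, vp0, eps, delta)
    dv = (p_phase_velocity(theta + dtheta, vp0, eps, delta) -
          p_phase_velocity(theta - dtheta, vp0, eps, delta)) / (2 * dtheta)
    return np.sqrt(v**2 + dv**2)

vp0, eps, delta, tilt = 3000.0, 0.20, 0.08, np.radians(30.0)   # assumed TTI medium
prop_angles = np.radians(np.arange(0, 91, 15))                 # propagation angle from vertical
theta = prop_angles - tilt                                     # angle from the tilted symmetry axis
for a, t in zip(prop_angles, theta):
    print(f"prop {np.degrees(a):5.1f} deg: phase {p_phase_velocity(t, vp0, eps, delta):7.1f} m/s, "
          f"group {p_group_speed(t, vp0, eps, delta):7.1f} m/s")
```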
1974-01-01
Digital Image Restoration under a Regression Model: The Unconstrained, Linear Equality and Inequality Constrained Approaches (Report 520, January 1974; Nelson Delfino d'Avila Mascarenhas). A two-dimensional form adequately describes the linear model; a discretization is performed by using quadrature methods.
Coupled Kelvin-Helmholtz and Tearing Mode Instabilities at Mercury's Magnetopause
NASA Astrophysics Data System (ADS)
Ivanovski, S. L.; Milillo, A.; Kartalev, M.; Massetti, S.
2018-05-01
An MHD approach for numerical simulations of coupled Kelvin-Helmholtz and tearing mode instabilities has been applied to Mercury's magnetopause and used to perform a physical parameter study constrained by the MESSENGER data.
NASA Astrophysics Data System (ADS)
Benesh, N. P.; Plesch, A.; Shaw, J. H.; Frost, E. K.
2007-03-01
Using the discrete element modeling method, we examine the two-dimensional nature of fold development above an anticlinal bend in a blind thrust fault. Our models were composed of numerical disks bonded together to form pregrowth strata overlying a fixed fault surface. This pregrowth package was then driven along the fault surface at a fixed velocity using a vertical backstop. Additionally, new particles were generated and deposited onto the pregrowth strata at a fixed rate to produce sequential growth layers. Models with and without mechanical layering were used, and the process of folding was analyzed in comparison with fold geometries predicted by kinematic fault bend folding as well as those observed in natural settings. Our results show that parallel fault bend folding behavior holds to first order in these models; however, a significant decrease in limb dip is noted for younger growth layers in all models. On the basis of comparisons to natural examples, we believe this deviation from kinematic fault bend folding to be a realistic feature of fold development resulting from an axial zone of finite width produced by materials with inherent mechanical strength. These results have important implications for how growth fold structures are used to constrain slip and paleoearthquake ages above blind thrust faults. Most notably, deformation localized about axial surfaces and structural relief across the fold limb seem to be the most robust observations that can readily constrain fault activity and slip. In contrast, fold limb width and shallow growth layer dips appear more variable and dependent on mechanical properties of the strata.
Inverse Regional Modeling with Adjoint-Free Technique
NASA Astrophysics Data System (ADS)
Yaremchuk, M.; Martin, P.; Panteleev, G.; Beattie, C.
2016-02-01
The ongoing parallelization trend in computer technologies facilitates the use of ensemble methods in geophysical data assimilation. Of particular interest are ensemble techniques which do not require the development of tangent linear numerical models and their adjoints for optimization. These ``adjoint-free'' methods minimize the cost function within a sequence of subspaces spanned by carefully chosen sets of perturbations of the control variables. In this presentation, an adjoint-free variational technique (a4dVar) is demonstrated in an application estimating the initial conditions of two numerical models: the Navy Coastal Ocean Model (NCOM) and the surface wave model (WAM). With the NCOM, the performance of both adjoint and adjoint-free 4dVar data assimilation techniques is compared in application to the hydrographic surveys and velocity observations collected in the Adriatic Sea in 2006. Numerical experiments have shown that a4dVar is capable of providing forecast skill similar to that of conventional 4dVar at comparable computational expense while being less susceptible to excitation of ageostrophic modes that are not supported by observations. The adjoint-free technique constrained by the WAM model is tested in a series of data assimilation experiments with synthetic observations in the southern Chukchi Sea. The types of observations considered are directional spectra estimated from point measurements by stationary buoys, significant wave height (SWH) observations by coastal high-frequency radars, and along-track SWH observations by satellite altimeters. The a4dVar forecast skill is shown to be 30-40% better than the skill of the sequential assimilation method based on optimal interpolation which is currently used in operations. Prospects of further development of the a4dVar methods in regional applications are discussed.
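The core of the adjoint-free idea is that the cost function is only ever evaluated by running the (nonlinear) forward model on perturbed control vectors, and the minimization is carried out in the small subspace those perturbations span. The following is a minimal sketch of one such subspace step under simplifying assumptions (diagonal observation errors, a crude identity background term); the function names are illustrative, not the a4dVar implementation described above.

```python
# Hypothetical sketch of one adjoint-free 4DVar (a4dVar) subspace step:
# no tangent-linear or adjoint model is needed, only forward-model runs.
import numpy as np

def a4dvar_step(x0, perturbations, forward_model, observations, obs_error_var):
    """x0: current control vector (e.g. initial conditions);
    perturbations: list of perturbation vectors spanning the search subspace;
    forward_model: maps a control vector to model-equivalent observations."""
    H0 = forward_model(x0)
    # Columns of E are the model responses to each control perturbation
    E = np.column_stack([forward_model(x0 + dp) - H0 for dp in perturbations])
    d = observations - H0                       # innovation vector
    R_inv = 1.0 / obs_error_var                 # diagonal observation errors
    A = E.T @ (R_inv * E) + np.eye(E.shape[1])  # crude background/regularization term
    b = E.T @ (R_inv * d)
    w = np.linalg.solve(A, b)                   # optimal subspace coefficients
    # The updated control vector is a linear combination of the perturbations
    return x0 + np.column_stack(perturbations) @ w
```

In practice the perturbation set is refreshed between iterations, which is what makes the sequence of subspaces in the abstract.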
Using SpF to Achieve Petascale for Legacy Pseudospectral Applications
NASA Technical Reports Server (NTRS)
Clune, Thomas L.; Jiang, Weiyuan
2014-01-01
Pseudospectral (PS) methods possess a number of characteristics (e.g., efficiency, accuracy, natural boundary conditions) that are extremely desirable for dynamo models. Unfortunately, dynamo models based upon PS methods face a number of daunting challenges, which include exposing additional parallelism, leveraging hardware accelerators, exploiting hybrid parallelism, and improving the scalability of global memory transposes. Although these issues are a concern for most models, solutions for PS methods tend to require far more pervasive changes to underlying data and control structures. Further, improvements in performance in one model are difficult to transfer to other models, resulting in significant duplication of effort across the research community. We have developed an extensible software framework for pseudospectral methods called SpF that is intended to enable extreme scalability and optimal performance. High-level abstractions provided by SpF unburden applications of the responsibility of managing domain decomposition and load balance while reducing the changes in code required to adapt to new computing architectures. The key design concept in SpF is that each phase of the numerical calculation is partitioned into disjoint numerical kernels that can be performed entirely in-processor. The granularity of domain decomposition provided by SpF is only constrained by the data-locality requirements of these kernels. SpF builds on top of optimized vendor libraries for common numerical operations such as transforms, matrix solvers, etc., but can also be configured to use open source alternatives for portability. SpF includes several alternative schemes for global data redistribution and is expected to serve as an ideal testbed for further research into optimal approaches for different network architectures. In this presentation, we will describe our experience in porting legacy pseudospectral models, MoSST and DYNAMO, to use SpF as well as present preliminary performance results provided by the improved scalability.
Evaluation of the performance of a passive-active vibration isolation system
NASA Astrophysics Data System (ADS)
Sun, L. L.; Hansen, C. H.; Doolan, C.
2015-01-01
The behavior of a feedforward active isolation system subjected to actuator output constraints is investigated. Distributed parameter models are developed to analyze the system response, and to produce a transfer matrix for the design of an integrated passive-active isolation system. Cost functions considered here comprise a combination of the vibration transmission energy and the sum of the squared control forces. The example system considered is a rigid body connected to a simply supported plate via two isolation mounts. The overall isolation performance is evaluated by numerical simulation. The results show that the control strategies which rely on unconstrained actuator outputs may give substantial power transmission reductions over a wide frequency range, but also require large control force amplitudes to control excited vibration modes of the system. Expected power transmission reductions for modified control strategies that incorporate constrained actuator outputs are considerably less than typical reductions with unconstrained actuator outputs. The active system with constrained control force outputs is shown to be more effective at the resonance frequencies of the supporting plate. However, in the frequency range in which rigid body modes are present, the control strategies employed using constrained actuator outputs can only achieve 5-10 dB power transmission reduction, while at off-resonance frequencies, little or no power transmission reduction can be obtained with realistic control forces. Analysis of the wave effects in the passive mounts is also presented.
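For a single frequency, the quoted cost function (transmission energy plus a weighted sum of squared control forces) has a closed-form unconstrained minimizer; a simple way to emulate an actuator output limit is to scale that solution. The sketch below is illustrative only and is not the distributed-parameter formulation of the paper; the transfer matrix H and primary transmission d are assumed inputs.

```python
# Sketch of the quoted cost function J = |d + H f|^2 + w |f|^2 at one frequency.
import numpy as np

def optimal_forces(H, d, effort_weight):
    """Unconstrained minimizer: f = -(H^H H + w I)^{-1} H^H d."""
    A = H.conj().T @ H + effort_weight * np.eye(H.shape[1])
    return -np.linalg.solve(A, H.conj().T @ d)

def constrained_forces(H, d, effort_weight, f_max):
    """Crude model of actuator limits: scale the optimal solution so that
    no control force amplitude exceeds f_max."""
    f = optimal_forces(H, d, effort_weight)
    peak = np.max(np.abs(f))
    return f if peak <= f_max else f * (f_max / peak)
```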
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cranmer, Steven R.; Wilner, David J.; MacGregor, Meredith A.
2013-08-01
Many low-mass pre-main-sequence stars exhibit strong magnetic activity and coronal X-ray emission. Even after the primordial accretion disk has been cleared out, the star's high-energy radiation continues to affect the formation and evolution of dust, planetesimals, and large planets. Young stars with debris disks are thus ideal environments for studying the earliest stages of non-accretion-driven coronae. In this paper we simulate the corona of AU Mic, a nearby active M dwarf with an edge-on debris disk. We apply a self-consistent model of coronal loop heating that was derived from numerical simulations of solar field-line tangling and magnetohydrodynamic turbulence. We also synthesize the modeled star's X-ray luminosity and thermal radio/millimeter continuum emission. A realistic set of parameter choices for AU Mic produces simulated observations that agree with all existing measurements and upper limits. This coronal model thus represents an alternative explanation for a recently discovered ALMA central emission peak that was suggested to be the result of an inner 'asteroid belt' within 3 AU of the star. However, it is also possible that the central 1.3 mm peak is caused by a combination of active coronal emission and a bright inner source of dusty debris. Additional observations of this source's spatial extent and spectral energy distribution at millimeter and radio wavelengths will better constrain the relative contributions of the proposed mechanisms.
Constraining fault constitutive behavior with slip and stress heterogeneity
Aagaard, Brad T.; Heaton, T.H.
2008-01-01
We study how enforcing self-consistency in the statistical properties of the preshear and postshear stress on a fault can be used to constrain fault constitutive behavior beyond that required to produce a desired spatial and temporal evolution of slip in a single event. We explore features of rupture dynamics that (1) lead to slip heterogeneity in earthquake ruptures and (2) maintain these conditions following rupture, so that the stress field is compatible with the generation of aftershocks and facilitates heterogeneous slip in subsequent events. Our three-dimensional finite element simulations of magnitude 7 events on a vertical, planar strike-slip fault show that the conditions that lead to slip heterogeneity remain in place after large events when the dynamic stress drop (initial shear stress) and breakdown work (fracture energy) are spatially heterogeneous. In these models the breakdown work is on the order of MJ/m2, which is comparable to the radiated energy. These conditions producing slip heterogeneity also tend to produce narrower slip pulses independent of a slip rate dependence in the fault constitutive model. An alternative mechanism for generating these confined slip pulses appears to be fault constitutive models that have a stronger rate dependence, which also makes them difficult to implement in numerical models. We hypothesize that self-consistent ruptures could also be produced by very narrow slip pulses propagating in a self-sustaining heterogeneous stress field with breakdown work comparable to fracture energy estimates of kJ/m2. Copyright 2008 by the American Geophysical Union.
NASA Astrophysics Data System (ADS)
Zhu, H.
2017-12-01
Recently, seismologists have observed increasing seismicity in North Texas and Oklahoma. Based on seismic observations and other geophysical measurements, some studies suggested possible links between the increasing seismicity and wastewater injection during unconventional oil and gas exploration. To better monitor seismic events and investigate their mechanisms, we need an accurate 3D crustal wavespeed model for North Texas and Oklahoma. Considering the uneven distribution of earthquakes in this region, seismic tomography with local earthquake records has difficulty achieving good illumination. To overcome this limitation, in this study, ambient noise cross-correlation functions are used to constrain subsurface variations in wavespeeds. I use adjoint tomography to iteratively fit frequency-dependent phase differences between observed and predicted band-limited Green's functions. The spectral-element method is used to numerically calculate the band-limited Green's functions and the adjoint method is used to calculate misfit gradients with respect to wavespeeds. 25 preconditioned conjugate gradient iterations are used to update model parameters and minimize data misfits. Features in the new crustal model M25 correlate with geological units in the study region, including the Llano uplift, the Anadarko basin and the Ouachita orogenic front. In addition, these seismic anomalies correlate with gravity and magnetic observations. This new model can be used to better constrain earthquake source parameters in North Texas and Oklahoma, such as epicenter location and moment tensor solutions, which are important for investigating potential relations between seismicity and unconventional oil and gas exploration.
GRB 110715A: the peculiar multiwavelength evolution of the first afterglow detected by ALMA
NASA Astrophysics Data System (ADS)
Sánchez-Ramírez, R.; Hancock, P. J.; Jóhannesson, G.; Murphy, Tara; de Ugarte Postigo, A.; Gorosabel, J.; Kann, D. A.; Krühler, T.; Oates, S. R.; Japelj, J.; Thöne, C. C.; Lundgren, A.; Perley, D. A.; Malesani, D.; de Gregorio Monsalvo, I.; Castro-Tirado, A. J.; D'Elia, V.; Fynbo, J. P. U.; Garcia-Appadoo, D.; Goldoni, P.; Greiner, J.; Hu, Y.-D.; Jelínek, M.; Jeong, S.; Kamble, A.; Klose, S.; Kuin, N. P. M.; Llorente, A.; Martín, S.; Nicuesa Guelbenzu, A.; Rossi, A.; Schady, P.; Sparre, M.; Sudilovsky, V.; Tello, J. C.; Updike, A.; Wiersema, K.; Zhang, B.-B.
2017-02-01
We present the extensive follow-up campaign on the afterglow of GRB 110715A at 17 different wavelengths, from X-ray to radio bands, starting 81 s after the burst and extending up to 74 d later. We performed for the first time a GRB afterglow observation with the ALMA observatory. We find that the afterglow of GRB 110715A is very bright at optical and radio wavelengths. We use optical and near-infrared spectroscopy to provide further information about the progenitor's environment and its host galaxy. The spectrum shows weak absorption features at a redshift z = 0.8225, which reveal a host-galaxy environment with low ionization, column density, and dynamical activity. Late deep imaging shows a very faint galaxy, consistent with the spectroscopic results. The broad-band afterglow emission is modelled with synchrotron radiation using a numerical algorithm, and we determine the best-fitting parameters using Bayesian inference in order to constrain the physical parameters of the jet and the medium in which the relativistic shock propagates. We fitted our data with a variety of models, including different density profiles and energy injections. Although the general behaviour can be roughly described by these models, none of them is able to fully explain all data points simultaneously. GRB 110715A shows the complexity of reproducing extensive multiwavelength broad-band afterglow observations, and the need for good sampling in wavelength and time, as well as more complex models, to accurately constrain the physics of GRB afterglows.
NASA Astrophysics Data System (ADS)
Ritsema, Jeroen; Garnero, Edward; Lay, Thorne
1997-01-01
A new approach for constraining the seismic shear velocity structure above the core-mantle boundary is introduced, whereby SH-SKS differential travel times, amplitude ratios of SV/SKS, and Sdiff waveshapes are simultaneously modeled. This procedure is applied to the lower mantle beneath the central Pacific using data from numerous deep-focus southwest Pacific earthquakes recorded in North America. We analyze 90 broadband and 248 digitized analog recordings for this source-receiver geometry. SH-SKS times are highly variable and up to 10 s larger than standard reference model predictions, indicating the presence of laterally varying low shear velocities in the study area. The travel times, however, do not constrain the depth extent or velocity gradient of the low-velocity region. SV/SKS amplitude ratios and SH waveforms are sensitive to the radial shear velocity profile, and when analyzed simultaneously with SH-SKS times, reveal up to 3% shear velocity reductions restricted to the lowermost 190±50 km of the mantle. Our preferred model for the central-eastern Pacific region (M1) has a strong negative gradient (with 0.5% reduction in velocity relative to the preliminary reference Earth model (PREM) at 2700 km depth and 3% reduction at 2891 km depth) and slight velocity reductions from 2000 to 2700 km depth (0-0.5% lower than PREM). Significant small-scale (100-500 km) shear velocity heterogeneity (0.5%-1%) is required to explain scatter in the differential times and amplitude ratios.
Solution techniques for transient stability-constrained optimal power flow – Part II
Geng, Guangchao; Abhyankar, Shrirang; Wang, Xiaoyu; ...
2017-06-28
Transient stability-constrained optimal power flow is an important emerging problem, with power systems pushed to their limits for economic benefits, dense and larger interconnected systems, and reduced inertia due to the expected proliferation of renewable energy resources. In this study, two further approaches, single machine equivalent and computational intelligence, are presented. Various application areas and future directions in this research area are also discussed. In conclusion, a comprehensive resource for the available literature, publicly available test systems, and relevant numerical libraries is provided.
A greedy algorithm for species selection in dimension reduction of combustion chemistry
NASA Astrophysics Data System (ADS)
Hiremath, Varun; Ren, Zhuyin; Pope, Stephen B.
2010-09-01
Computational calculations of combustion problems involving large numbers of species and reactions with a detailed description of the chemistry can be very expensive. Numerous dimension reduction techniques have been developed in the past to reduce the computational cost. In this paper, we consider the rate controlled constrained-equilibrium (RCCE) dimension reduction method, in which a set of constrained species is specified. For a given number of constrained species, the 'optimal' set of constrained species is that which minimizes the dimension reduction error. The direct determination of the optimal set is computationally infeasible, and instead we present a greedy algorithm which aims at determining a 'good' set of constrained species; that is, one leading to near-minimal dimension reduction error. The partially-stirred reactor (PaSR) involving methane premixed combustion with chemistry described by the GRI-Mech 1.2 mechanism containing 31 species is used to test the algorithm. Results on dimension reduction errors for different sets of constrained species are presented to assess the effectiveness of the greedy algorithm. It is shown that the first four constrained species selected using the proposed greedy algorithm produce lower dimension reduction error than constraints on the major species: CH4, O2, CO2 and H2O. It is also shown that the first ten constrained species selected using the proposed greedy algorithm produce a non-increasing dimension reduction error with every additional constrained species; and produce the lowest dimension reduction error in many cases tested over a wide range of equivalence ratios, pressures and initial temperatures.
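The greedy selection loop described above can be written compactly: at each stage, the candidate species whose addition yields the smallest dimension reduction error is appended to the constrained set. The sketch below is a generic illustration under that reading of the abstract; the error evaluation routine (here `reduction_error`) is a hypothetical stand-in for the RCCE error computed over PaSR particle states.

```python
# Hedged sketch of greedy constrained-species selection in the spirit of the abstract.
def greedy_select(all_species, n_constraints, reduction_error):
    """Return a 'good' (near-optimal) ordered list of constrained species.

    reduction_error(selected) -> float: the dimension reduction error obtained
    when `selected` species are used as RCCE constraints (user-supplied)."""
    selected = []
    for _ in range(n_constraints):
        best_species, best_err = None, float("inf")
        for s in all_species:
            if s in selected:
                continue
            err = reduction_error(selected + [s])   # evaluate the enlarged set
            if err < best_err:
                best_species, best_err = s, err
        selected.append(best_species)
    return selected

# Example call (species list and error function are illustrative):
# greedy_select(["CH4", "O2", "CO2", "H2O", "CO", "H2", "OH"], 4, my_error_fn)
```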
NASA Astrophysics Data System (ADS)
Clark, Chris
2014-05-01
Uncertainty exists regarding the future mass of the Antarctic and Greenland ice sheets and how they will respond to forcings from sea level, and atmospheric and ocean temperatures. If we want to know more about the mechanisms and rate of change of shrinking ice sheets, then why not examine an ice sheet that has fully disappeared and track its retreat through time? If achieved in enough detail, such information on ice retreat could be a data-rich playground for improving the next breed of numerical ice sheet models to be used in ice and sea level forecasting. We regard the last British-Irish Ice Sheet as a good target for this work, on account of its small size, density of information, and the numerous researchers already investigating it. Geomorphological mapping across the British Isles and the surrounding continental shelf has revealed the nature and distribution of glacial landforms. Here we demonstrate how such data have been used to build a pattern of ice margin retreat. The BRITICE-CHRONO consortium of Quaternary scientists and glaciologists is now working on a project, running from 2012 to 2017, to produce an ice-sheet-wide database of geochronometric dates to constrain and then understand ice margin retreat. This is being achieved by focusing on 8 transects running from the continental shelf edge to a short distance (10s of km) onshore and acquiring marine and terrestrial samples for geochronometric dating. The project includes funding for 587 radiocarbon, 140 OSL and 158 TCN samples for surface exposure dating, with sampling accomplished by two research cruises and 16 fieldwork campaigns. Results will reveal the timing and rate of change of ice margin recession for each transect, and combined with existing landform and dating databases, will be used to build an ice sheet-wide empirical reconstruction of retreat. Simulations using two numerical ice sheet models, fitted against the margin data, will help us understand the nature and significance of sea-level rise and ocean/atmosphere forcing in influencing the rate of retreat and ice sheet demise, and the effect that bed topography has in controlling this.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rutqvist, Jonny; Majer, Ernie; Oldenburg, Curt
2006-06-07
In this paper, we present progress made in a study aimed at increasing the understanding of the relative contributions of different mechanisms that may be causing the seismicity occurring at The Geysers geothermal field, California. The approach we take is to integrate: (1) coupled reservoir geomechanical numerical modeling, (2) data from recently upgraded and expanded NCPA/Calpine/LBNL seismic arrays, and (3) tens of years of archival InSAR data from monthly satellite passes. We have conducted a coupled reservoir geomechanical analysis to study potential mechanisms induced by steam production. Our simulation results corroborate co-locations of hypocenter field observations of induced seismicity and their correlation with steam production as reported in the literature. Seismic and InSAR data are being collected and processed for use in constraining the coupled reservoir geomechanical model.
Homelessness and drug misuse in developing countries: A mathematical approach
NASA Astrophysics Data System (ADS)
Bhunu, C. P.
2014-06-01
Homelessness and drug misuse are known to exist like Siamese twins. We present a model to capture the dynamics of growth in the number of homeless people (street kids and street adults) and drug misusers. The reproduction numbers of the model are determined and analyzed. Results from this study suggest that adult peer pressure plays a more significant role in the growth of drug misuse and the number of street kids. This suggests that in resource-constrained settings intervention strategies should be tailor-made to target adults whose behaviour influences others to misuse drugs and abuse children. Furthermore, numerical simulations show that homelessness and drug misuse each positively enhance the growth of the other. Thus, effectively controlling these two social problems requires strategies targeting both of them.
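The kind of mutual reinforcement reported above is easy to illustrate with a toy coupled-compartment system. The sketch below is purely illustrative and is not the paper's model or parameter values; the two equations simply couple growth of each population to the size of the other through a hypothetical peer-pressure term.

```python
# Toy two-compartment illustration of mutually reinforcing growth
# (hypothetical equations and parameters, not the published model).
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, recruit_h, recruit_d, peer, exit_h, exit_d):
    H, D = y                               # homeless population, drug misusers
    dH = recruit_h + peer * D - exit_h * H # recruitment boosted by misusers
    dD = recruit_d + peer * H - exit_d * D # misuse boosted by homelessness
    return [dH, dD]

sol = solve_ivp(rhs, (0, 50), [100.0, 50.0],
                args=(5.0, 2.0, 0.01, 0.05, 0.04), dense_output=True)
t = np.linspace(0, 50, 200)
H, D = sol.sol(t)
print(f"after 50 time units: homeless ~ {H[-1]:.0f}, drug misusers ~ {D[-1]:.0f}")
```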
Dynamics of the Mount Nyiragongo lava lake
NASA Astrophysics Data System (ADS)
Burgi, P.-Y.; Darrah, T. H.; Tedesco, D.; Eymold, W. K.
2014-05-01
The permanent and presently rising lava lake at Mount Nyiragongo constitutes a major potential geological hazard to the inhabitants of the Virunga volcanic region in the Democratic Republic of Congo (DRC) and Rwanda. Based on two field campaigns in June 2010 and 2011, we estimate the lava lake level from the southeastern crater rim (~400 m diameter) and the lava lake area (~46,550 m^2), which constrain, respectively, the lava lake volume (~9 × 10^6 m^3) and the volume flow rate needed to keep the magma in a molten state (0.6 to 3.5 m^3 s^-1). A bidirectional magma flow model, which includes the characterization of the conduit diameter and funnel-shaped lava lake geometry, is developed to constrain the amount of magma intruded/emplaced within the magmatic chamber and rift-related structures that extend between Mount Nyiragongo's volcanic center and the city of Goma, DRC, since Mount Nyiragongo's last eruption (17 January 2002). Besides matching field data of the lava lake level covering the period 1977 to 2002, numerical solutions of the model indicate that by 2022, 20 years after the January 2002 eruption, between 300 and 1700 × 10^6 m^3 (0.3 to 1.7 km^3) of magma could have been intruded/emplaced underneath the edifice, and the lava lake volume could exceed 15 × 10^6 m^3.
Water in Massive protostellar objects: first detection of THz water maser and water inner abundance.
NASA Astrophysics Data System (ADS)
Herpin, Fabrice
2014-10-01
The formation of massive stars is still not well understood. Despite numerous water line observations with the Herschel telescope, over a broad range of energies, in most of the observed sources the WISH-KP (Water In Star-forming regions with Herschel, Co-PI: F. Herpin) observations were not able to trace the emission from the hot core. Moreover, water maser models predict that several THz water masers should be detectable in these objects. We aim to detect for the first time the THz maser lines o-H2O 8(2,7)-7(3,4) at 1296.41106 GHz and p-H2O 7(2,6)-6(3,3) at 1440.78167 GHz, as predicted by the model. We propose two sources for a northern flight as first priority and two other sources for a possible southern flight. This will (1) constrain the maser theory and (2) constrain the physical conditions and water abundance in the inner layers of the protostellar environment. In addition, we will use the p-H2O 3(3,1)-4(0,4) thermal line at 1893.68651 GHz (L2 channel) in order to probe the physical conditions and water abundance in the inner layers of the protostellar objects where HIFI-Herschel has partially failed.
NASA Astrophysics Data System (ADS)
Keitel, David; Forteza, Xisco Jiménez; Husa, Sascha; London, Lionel; Bernuzzi, Sebastiano; Harms, Enno; Nagar, Alessandro; Hannam, Mark; Khan, Sebastian; Pürrer, Michael; Pratten, Geraint; Chaurasia, Vivek
2017-07-01
For a brief moment, a binary black hole (BBH) merger can be the most powerful astrophysical event in the visible Universe. Here we present a model fit for this gravitational-wave peak luminosity of nonprecessing quasicircular BBH systems as a function of the masses and spins of the component black holes, based on numerical relativity (NR) simulations and the hierarchical fitting approach introduced by X. Jiménez-Forteza et al. [Phys. Rev. D 95, 064024 (2017), 10.1103/PhysRevD.95.064024]. This fit improves over previous results in accuracy and parameter-space coverage and can be used to infer posterior distributions for the peak luminosity of future astrophysical signals like GW150914 and GW151226. The model is calibrated to the ℓ≤6 modes of 378 nonprecessing NR simulations up to mass ratios of 18 and dimensionless spin magnitudes up to 0.995, and includes unequal-spin effects. We also constrain the fit to perturbative numerical results for large mass ratios. Studies of key contributions to the uncertainty in NR peak luminosities, such as (i) mode selection, (ii) finite resolution, (iii) finite extraction radius, and (iv) different methods for converting NR waveforms to luminosity, allow us to use NR simulations from four different codes as a homogeneous calibration set. This study of systematic fits to combined NR and large-mass-ratio data, including higher modes, also paves the way for improved inspiral-merger-ringdown waveform models.
NASA Astrophysics Data System (ADS)
Doronzo, Domenico; Dellino, Pierfrancesco; Sulpizio, Roberto; Lucchi, Federico
2017-04-01
In order to obtain significant volcanological results from computer simulations of explosive eruptions, one either needs a systematic statistical approach to test a wide range of initial and boundary conditions, or needs to use a well-constrained field case study. Here we followed the second approach, using data obtained from field mapping of the Grotta dei Palizzi 2 pyroclastic deposits (Vulcano Island, Italy) as input for numerical modeling. This case study deals with impulsive phreatomagmatic explosions that generated ash-rich pyroclastic density currents, interacting with the high topographic obstacle of the La Fossa Caldera rim. We demonstrate that by merging field data with 3D numerical simulation it is possible to highlight the details of the dynamical current-terrain interaction, and to interpret the lithofacies variations of the associated deposits as a function of topography-induced sedimentation rate. Results suggest that material deposited at a sedimentation rate lower than 5 kg/m^2 s at the bed load can still be sheared by the overlying current, producing tractional structures in the deposit. Instead, a sedimentation rate in excess of that threshold can preclude the formation of tractional structures, producing thick massive deposits. We think that the approach used in this study could be applied to other case studies to confirm or refine such a threshold value of the sedimentation rate, which is to be considered as an upper value given the limitations of the numerical model.
NASA Astrophysics Data System (ADS)
Sif Gylfadóttir, Sigríður; Kim, Jihwan; Kristinn Helgason, Jón; Brynjólfsson, Sveinn; Höskuldsson, Ármann; Jóhannesson, Tómas; Bonnevie Harbitz, Carl; Løvholt, Finn
2016-04-01
The Askja central volcano is located in the Northern Volcanic Zone of Iceland. Within the main caldera, an inner caldera was formed in an eruption in 1875 and over the next 40 years it gradually subsided and filled up with water, forming Lake Askja. A large rockslide was released from the southeast margin of the inner caldera into Lake Askja on 21 July 2014. The release zone was located from 150 m to 350 m above the water level and measured 800 m across. The volume of the rockslide is estimated to have been 15-30 million m^3, of which 10.5 million m^3 was deposited in the lake, raising the water level by almost a meter. The rockslide caused a large tsunami that traveled across the lake and inundated the shores around the entire lake after 1-2 minutes. The vertical run-up varied typically between 10-40 m, but in some locations close to the impact area it ranged up to 70 m. Lake Askja is a popular destination visited by tens of thousands of tourists every year, but as luck would have it, the event occurred near midnight when no one was in the area. Field surveys conducted in the months following the event resulted in an extensive dataset. The dataset contains, e.g., the maximum inundation, a high-resolution digital elevation model of the entire inner caldera, as well as a high-resolution bathymetry of the lake displaying the landslide deposits. Using these data, a numerical model of the Lake Askja landslide and tsunami was developed using GeoClaw, a software package for numerical analysis of geophysical flow problems. Both the shallow water version and an extension of GeoClaw that includes dispersion were employed to simulate the wave generation, propagation, and run-up due to the rockslide plunging into the lake. The rockslide was modeled as a block that was allowed to stretch during run-out after entering the lake. An optimization approach was adopted to constrain the landslide parameters through inverse modeling by comparing the calculated inundation with the observed run-up. By taking the minimum mean squared error between simulations and observations, a set of best-fit landslide parameters (friction parameters, initial speed and block size) were determined. While we were able to obtain a close fit with observations using the dispersive model, it proved impossible to constrain the landslide parameters to fit the data using a shallow water model. As a consequence, we conclude that in the present case dispersive effects were crucial in obtaining the correct inundation pattern, and that a shallow water model produced large artificial offsets.
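The inverse-modeling step described above amounts to searching the landslide parameter space for the combination that minimizes the mean squared error between simulated and observed run-up. A minimal grid-search sketch of that idea is given below; the parameter names and the `run_tsunami_model` function are hypothetical stand-ins for a full GeoClaw run, not the authors' code.

```python
# Sketch of best-fit landslide parameter selection by minimizing the MSE
# between simulated and observed run-up (illustrative names only).
import itertools
import numpy as np

def best_fit_parameters(observed_runup, run_tsunami_model,
                        frictions, speeds, block_lengths):
    best, best_mse = None, np.inf
    for mu, v0, L in itertools.product(frictions, speeds, block_lengths):
        simulated = run_tsunami_model(friction=mu, initial_speed=v0, length=L)
        mse = np.mean((simulated - observed_runup) ** 2)
        if mse < best_mse:
            best, best_mse = (mu, v0, L), mse
    return best, best_mse
```

In practice a gradient-free optimizer would usually replace the exhaustive grid, but the objective (minimum MSE against observed inundation) is the same.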
NASA Astrophysics Data System (ADS)
Haddout, Soufiane
2018-01-01
The equations of motion of a bicycle are highly nonlinear, and rolling of the wheels without slipping can only be expressed by nonholonomic constraint equations. A geometrical theory of general nonholonomic constrained systems on fibered manifolds and their jet prolongations, based on so-called Chetaev-type constraint forces, was proposed and developed by O. Krupková (Rossi) in the 1990s. Her approach is suitable for the study of all kinds of mechanical systems, without restriction to Lagrangian, time-independent, or regular ones, and is applicable to arbitrary constraints (holonomic, semiholonomic, linear, nonlinear or general nonholonomic). The goal of this paper is to apply Krupková's geometric theory of nonholonomic mechanical systems to study a concrete problem in nonlinear nonholonomic dynamics, i.e., the autonomous bicycle. The dynamical model is preserved in the simulations in its original nonlinear form without any simplification. The results of numerical solutions of the constrained equations of motion, derived within the theory, are in good agreement with measurements and thus open the possibility of direct application of the theory to practical situations.
Sun, Yiwen; Wang, Tiejun; Skidmore, Andrew K; Wang, Qi; Ding, Changqing
2015-12-01
Traditional agriculture benefits a rich diversity of plants and animals. The winter-flooded rice fields in the Qinling Mountains, China, are the last refuge for the endangered Asian crested ibis (Nipponia nippon), and intensive efforts have been made to protect this anthropogenic habitat. Analyses of multi-temporal satellite data indicate that winter-flooded rice fields have been continuously reduced across the current range of the crested ibis during the past two decades. The rate of loss of these fields in the core-protected areas has unexpectedly increased to a higher level than that in non-protected areas in the past decade. The best-fit (R^2 = 0.87) numerical response model of the crested ibis population shows that a reduction of winter-flooded rice fields decreases population growth and predicts that the population growth will be constrained by the decline of traditional winter-flooded rice fields in the coming decades. Our findings suggest that the decline of traditional rice farming is likely to continue to pose a threat to the long-term survival and recovery of the crested ibis population in China.
NASA Astrophysics Data System (ADS)
Sarghini, Fabrizio; De Vivo, Angela; Marra, Francesco
2017-10-01
Computational science and engineering methods have allowed a major change in the way products and processes are designed, as validated virtual models, capable of simulating the physical, chemical and biological changes occurring during production processes, can be realized and used in place of real prototypes and experiments, which are often time and money consuming. Among such techniques, Optimal Shape Design (OSD) (Mohammadi & Pironneau, 2004) represents an interesting approach. While most classical numerical simulations consider fixed geometrical configurations, in OSD a certain number of geometrical degrees of freedom is considered as part of the unknowns: this implies that the geometry is not completely defined, but part of it is allowed to move dynamically in order to minimize or maximize the objective function. The applications of optimal shape design are countless; for systems governed by partial differential equations, they range from structural mechanics to electromagnetism and fluid mechanics, or to a combination of the three. This paper presents one possible application of OSD, namely how the extrusion bell shape for pasta production can be designed by applying a multivariate constrained shape optimization.
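The essence of OSD is to expose a few geometric degrees of freedom as optimization variables and minimize an objective returned by the physics simulation, subject to bounds or constraints on the geometry. The sketch below is a generic, hedged illustration of that loop; `evaluate_die` is a placeholder for the coupled CFD evaluation, and all parameter names and values are assumptions rather than the paper's setup.

```python
# Minimal constrained shape-optimization sketch in the spirit of OSD.
import numpy as np
from scipy.optimize import minimize

def evaluate_die(shape_params):
    # Placeholder for a CFD run returning a scalar objective
    # (e.g. flow non-uniformity at the die outlet).
    target = np.array([1.0, 0.5, 0.2])
    return float(np.sum((shape_params - target) ** 2))

x0 = np.array([0.8, 0.8, 0.8])            # initial bell-shape parameters
bounds = [(0.1, 2.0)] * 3                 # geometric/manufacturing limits
result = minimize(evaluate_die, x0, bounds=bounds, method="L-BFGS-B")
print("optimal shape parameters:", result.x)
```

The real application replaces the placeholder with a full flow simulation of the extrusion die, which is what makes OSD computationally demanding.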
Digital robust control law synthesis using constrained optimization
NASA Technical Reports Server (NTRS)
Mukhopadhyay, Vivekananda
1989-01-01
Development of digital robust control laws for active control of high performance flexible aircraft and large space structures is a research area of significant practical importance. The flexible system is typically modeled by a large order state space system of equations in order to accurately represent the dynamics. The active control law must satisfy multiple conflicting design requirements and maintain certain stability margins, yet should be simple enough to be implementable on an onboard digital computer. Described here is an application of a generic digital control law synthesis procedure for such a system, using optimal control theory and constrained optimization technique. A linear quadratic Gaussian type cost function is minimized by updating the free parameters of the digital control law, while trying to satisfy a set of constraints on the design loads, responses and stability margins. Analytical expressions for the gradients of the cost function and the constraints with respect to the control law design variables are used to facilitate rapid numerical convergence. These gradients can be used for sensitivity study and may be integrated into a simultaneous structure and control optimization scheme.
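The synthesis loop described above is, structurally, a constrained minimization of a cost over free control-law parameters with analytical gradients supplied to the optimizer. The generic sketch below shows that structure only; the cost, gradient and margin functions are simple placeholders, not the LQG cost or stability-margin constraints of the NASA procedure.

```python
# Generic constrained minimization with a user-supplied analytical gradient,
# standing in for the control-law parameter update step (placeholder functions).
import numpy as np
from scipy.optimize import minimize

def cost(p):                       # stand-in for the LQG-type cost J(p)
    return float(p @ p + np.sin(p).sum())

def cost_grad(p):                  # analytical gradient of the cost
    return 2.0 * p + np.cos(p)

def margin(p):                     # stand-in for a stability-margin measure
    return float(np.sum(np.abs(p)))

# SLSQP inequality constraints require fun(p) >= 0, i.e. margin(p) <= 5 here.
cons = {"type": "ineq", "fun": lambda p: 5.0 - margin(p)}
p0 = np.ones(4)
res = minimize(cost, p0, jac=cost_grad, constraints=[cons], method="SLSQP")
print("optimized control-law parameters:", res.x)
```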
A numerical study of tsunami wave impact and run-up on coastal cliffs using a CIP-based model
NASA Astrophysics Data System (ADS)
Zhao, Xizeng; Chen, Yong; Huang, Zhenhua; Hu, Zijun; Gao, Yangyang
2017-05-01
There is a general lack of understanding of tsunami wave interaction with complex geographies, especially the process of inundation. Numerical simulations are performed to understand the effects of several factors on tsunami wave impact and run-up in the presence of gentle submarine slopes and coastal cliffs, using an in-house code, a constrained interpolation profile (CIP)-based model. The model employs a high-order finite difference method, the CIP method, as the flow solver; utilizes a VOF-type method, the tangent of hyperbola for interface capturing/slope weighting (THINC/SW) scheme, to capture the free surface; and treats the solid boundary by an immersed boundary method. A series of incident waves are arranged to interact with varying coastal geographies. Numerical results are compared with experimental data and good agreement is obtained. The influences of the gentle submarine slope, the coastal cliff and the incident wave height are discussed. It is found that the variation of the tsunami amplification factor with incident wave is affected by the gradient of the cliff slope, with a critical value of about 45°. The run-up on a toe-erosion cliff is smaller than that on a normal cliff. The run-up is also related to the length of the gentle submarine slope, with a critical value of about 2.292 m in the present model for most cases. The impact pressure on the cliff is extremely large and concentrated, and the backflow effect is non-negligible. Results of our work are highly precise and helpful for inverting the tsunami source and forecasting disasters.
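For readers unfamiliar with the CIP flow solver named above, its defining feature is that both the advected quantity and its spatial derivative are carried and updated with a cubic profile, which keeps fronts sharp on coarse grids. The following is a minimal 1D sketch of that idea, assuming constant positive velocity and periodic boundaries; it is not the in-house code of the paper.

```python
# Minimal 1D CIP (constrained interpolation profile) advection sketch.
import numpy as np

def cip_step(f, g, u, dx, dt):
    """Advect the profile f and its derivative g by constant u > 0 (periodic)."""
    xi = -u * dt                                  # departure-point offset
    fup, gup = np.roll(f, 1), np.roll(g, 1)       # upwind neighbours
    D = -dx
    a = (g + gup) / D**2 + 2.0 * (f - fup) / D**3
    b = 3.0 * (fup - f) / D**2 - (2.0 * g + gup) / D
    f_new = a * xi**3 + b * xi**2 + g * xi + f
    g_new = 3.0 * a * xi**2 + 2.0 * b * xi + g
    return f_new, g_new

# Advect a Gaussian pulse once around a periodic unit domain.
nx, u = 200, 1.0
dx = 1.0 / nx
x = np.arange(nx) * dx
f = np.exp(-((x - 0.3) / 0.05) ** 2)
g = np.gradient(f, dx)
dt = 0.4 * dx / u
for _ in range(int(1.0 / (u * dt))):
    f, g = cip_step(f, g, u, dx, dt)
```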
Exploring Explanations of Subglacial Bedform Sizes Using Statistical Models
Kougioumtzoglou, Ioannis A.; Stokes, Chris R.; Smith, Michael J.; Clark, Chris D.; Spagnolo, Matteo S.
2016-01-01
Sediments beneath modern ice sheets exert a key control on their flow, but are largely inaccessible except through geophysics or boreholes. In contrast, palaeo-ice sheet beds are accessible, and typically characterised by numerous bedforms. However, the interaction between bedforms and ice flow is poorly constrained and it is not clear how bedform sizes might reflect ice flow conditions. To better understand this link we present a first exploration of a variety of statistical models to explain the size distribution of some common subglacial bedforms (i.e., drumlins, ribbed moraine, MSGL). By considering a range of models, constructed to reflect key aspects of the physical processes, it is possible to infer that the size distributions are most effectively explained when the dynamics of ice-water-sediment interaction associated with bedform growth is fundamentally random. A ‘stochastic instability’ (SI) model, which integrates random bedform growth and shrinking through time with exponential growth, is preferred and is consistent with other observations of palaeo-bedforms and geophysical surveys of active ice sheets. Furthermore, we give a proof-of-concept demonstration that our statistical approach can bridge the gap between geomorphological observations and physical models, directly linking measurable size-frequency parameters to properties of ice sheet flow (e.g., ice velocity). Moreover, statistically developing existing models as proposed allows quantitative predictions to be made about sizes, making the models testable; a first illustration of this is given for a hypothesised repeat geophysical survey of bedforms under active ice. Thus, we further demonstrate the potential of size-frequency distributions of subglacial bedforms to assist the elucidation of subglacial processes and better constrain ice sheet models. PMID:27458921
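The preferred 'stochastic instability' interpretation can be illustrated with a toy Monte Carlo experiment: sizes that grow exponentially on average, but with random multiplicative growth and shrinking at each step, develop a heavy-tailed size-frequency distribution. The sketch below is purely illustrative, with made-up parameter values, and is not the statistical model fitted in the paper.

```python
# Toy 'stochastic instability' simulation: random multiplicative growth/shrinkage.
import numpy as np

rng = np.random.default_rng(0)
n_bedforms, n_steps = 5000, 200
sizes = np.full(n_bedforms, 10.0)                 # illustrative initial size (m)

for _ in range(n_steps):
    growth = rng.normal(loc=0.01, scale=0.05, size=n_bedforms)  # net growth rate
    sizes *= np.exp(growth)                       # random exponential growth/shrinking
    sizes = np.clip(sizes, 1.0, None)             # keep bedforms above a minimum size

print("median size:", np.median(sizes),
      " 95th percentile:", np.percentile(sizes, 95))
```

The skewed distribution such a process produces is the qualitative signature the statistical models in the paper are compared against.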
Franke, O. Lehn; Reilly, Thomas E.
1987-01-01
The most critical and difficult aspect of defining a groundwater system or problem for conceptual analysis or numerical simulation is the selection of boundary conditions. This report demonstrates the effects of different boundary conditions on the steady-state response of otherwise similar ground-water systems to a pumping stress. Three series of numerical experiments illustrate the behavior of three hypothetical groundwater systems that are rectangular sand prisms with the same dimensions but with different combinations of constant-head, specified-head, no-flow, and constant-flux boundary conditions. In the first series of numerical experiments, the heads and flows in all three systems are identical, as are the hydraulic conductivity and system geometry. However, when the systems are subjected to an equal stress by a pumping well in the third series, each differs significantly in its response. The highest heads (smallest drawdowns) and flows occur in the systems most constrained by constant- or specified-head boundaries. These and other observations described herein are important in steady-state calibration, which is an integral part of simulating many ground-water systems. Because the effects of boundary conditions on model response often become evident only when the system is stressed, a close match between the potential distribution in the model and that in the unstressed natural system does not guarantee that the model boundary conditions correctly represent those in the natural system. In conclusion, the boundary conditions that are selected for simulation of a ground-water system are fundamentally important to groundwater systems analysis and warrant continual reevaluation and modification as investigation proceeds and new information and understanding are acquired.
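The central point, that the same stressed aquifer responds very differently depending on the boundary type, can be reproduced with a very small numerical experiment. The 1D steady-state sketch below is illustrative only (hypothetical transmissivity, grid spacing and pumping rate, far simpler than the report's rectangular prisms): the drawdown at the well is noticeably larger when the far boundary is no-flow than when it is constant-head.

```python
# Illustrative 1D steady-state aquifer: same pumping well, two far boundaries.
import numpy as np

def steady_heads(n=101, dx=100.0, T=500.0, Qw=-200.0, far_boundary="constant_head"):
    """Finite-difference solution of T d2h/dx2 + W = 0 with a well at the centre
    node; the left boundary is held at h = 0 in both cases."""
    A = np.zeros((n, n)); b = np.zeros(n)
    for i in range(1, n - 1):
        A[i, i - 1] = A[i, i + 1] = T / dx**2
        A[i, i] = -2.0 * T / dx**2
    b[n // 2] = -Qw / dx                  # pumping appears as a sink term
    A[0, 0] = 1.0                         # constant head h = 0 at the left boundary
    if far_boundary == "constant_head":
        A[-1, -1] = 1.0                   # h = 0 at the right boundary
    else:                                 # no-flow: dh/dx = 0 at the right boundary
        A[-1, -1], A[-1, -2] = 1.0, -1.0
    return np.linalg.solve(A, b)

for bc in ("constant_head", "no_flow"):
    h = steady_heads(far_boundary=bc)
    print(bc, "drawdown at the well:", -h[len(h) // 2])
```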
Thin Film Delamination Using a High Power Pulsed Laser Materials Interaction
NASA Astrophysics Data System (ADS)
Sherman, Bradley
Thin films attached to substrates are only effective while the film remains adhered to the substrate. When the film begins to spall, the whole system can fail; thus, knowing the working strength of the film-substrate system is important when designing structures. Surface acoustic waves (SAWs) are suitable for characterization of thin film mechanical properties due to the confinement of their energy within a shallow depth from a material surface. In this project, we study the feasibility of inducing dynamic interfacial failure in thin films using surface waves generated by a high power pulsed laser. Surface acoustic waves are modeled using a finite element numerical code, where the ablative interaction between the pulsed laser and the incident film is modeled using equivalent surface mechanical stresses. The numerical results are validated using experimental results from a laser ultrasonic setup. Once validated, the normal film-substrate interfacial stress can be extracted from the numerical code and tends to be in the megapascal range. This study uses pulsed laser generation to produce SAWs in various metallic thin film/substrate systems. Each system varies in its response based on its dispersive relationship and as such requires individualized numerical modeling to match the experimental data. In addition to pulsed SAW excitation using an ablative source, a constrained thermo-mechanical load produced by the ablation of a metal film under a polymer layer is explored to generate larger dynamic mechanical stresses. These stresses are sufficient to delaminate the thin film in a manner similar to a peel test. However, since the loading is produced by a pulsed laser source, it occurs at a much faster rate, limiting the influence of slower damage modes that are present in quasi-static loading. This approach is explored to predict the interfacial fracture toughness of weak thin film interfaces.
NASA Astrophysics Data System (ADS)
Doronzo, Domenico M.; Dellino, Pierfrancesco; Sulpizio, Roberto; Lucchi, Federico
2017-01-01
In order to obtain results from computer simulations of explosive volcanic eruptions, one either needs a statistical approach to test a wide range of initial and boundary conditions, or needs to use a well-constrained field case study via stratigraphy. Here we followed the second approach, using data obtained from field mapping of the Grotta dei Palizzi 2 pyroclastic deposits (Vulcano Island, Italy) as input for numerical modeling. This case study deals with impulsive phreatomagmatic explosions of La Fossa Cone that generated ash-rich pyroclastic density currents, interacting with the topographic high of the La Fossa Caldera rim. One of the simplifications in dealing with well-sorted ash (one particle size in the model) is to highlight the topographic effects on the same pyroclastic material in an unsteady current. We demonstrate that by merging field data with 3D numerical simulation results it is possible to see key details of the dynamical current-terrain interaction, and to interpret the lithofacies variations of the associated deposits as a function of topography-induced sedimentation (settling) rate. Results suggest that material deposited at a sedimentation rate lower than 5 kg/m^2 s at the bed load can still be sheared by the overlying current, producing tractional structures (laminae) in the deposits. Instead, a sedimentation rate higher than that threshold can preclude the formation of tractional structures, producing thicker massive deposits. We think that the approach used in this study could be applied to other case studies (both for active and ancient volcanoes) to confirm or refine such a threshold value of the sedimentation rate, which is to be considered as an upper value given the limitations of the numerical model.
Charging of the Van Allen Probes: Theory and Simulations
NASA Astrophysics Data System (ADS)
Delzanno, G. L.; Meierbachtol, C.; Svyatskiy, D.; Denton, M.
2017-12-01
The electrical charging of spacecraft has been a known problem since the beginning of the space age. Its consequences can vary from moderate (single event upsets) to catastrophic (total loss of the spacecraft) depending on a variety of causes, some of which could be related to the surrounding plasma environment, including emission processes from the spacecraft surface. Because of its complexity and cost, this problem is typically studied using numerical simulations. However, inherent unknowns in both plasma parameters and spacecraft material properties can lead to inaccurate predictions of overall spacecraft charging levels. The goal of this work is to identify and study the driving causes and necessary parameters for particular spacecraft charging events on the Van Allen Probes (VAP) spacecraft. This is achieved by making use of plasma theory, numerical simulations, and on-board data. First, we present a simple theoretical spacecraft charging model, which assumes a spherical spacecraft geometry and is based upon the classical orbital-motion-limited approximation. Some input parameters to the model (such as the warm plasma distribution function) are taken directly from on-board VAP data, while other parameters are either varied parametrically to assess their impact on the spacecraft potential, or constrained through spacecraft charging data and statistical techniques. Second, a fully self-consistent numerical simulation is performed by supplying these parameters to CPIC, a particle-in-cell code specifically designed for studying plasma-material interactions. CPIC simulations remove some of the assumptions of the theoretical model and also capture the influence of the full geometry of the spacecraft. The CPIC numerical simulation results will be presented and compared with on-board VAP data. This work will set the foundation for our eventual goal of importing the full plasma environment from the LANL-developed SHIELDS framework into CPIC, in order to more accurately predict spacecraft charging.
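The simple theoretical model mentioned above rests on the classical orbital-motion-limited current balance for a sphere. A back-of-the-envelope sketch of that balance is given below, assuming a Maxwellian hydrogen plasma, a negatively charged sphere, and no photoemission or secondary emission; the plasma parameters are illustrative values, not VAP data.

```python
# OML-style current balance for a negatively charged sphere (illustrative only).
import numpy as np
from scipy.optimize import brentq
from scipy.constants import e, k, m_e, m_p

def net_current(phi, n, T_e, T_i, area):
    """Ion minus electron current to the sphere at surface potential phi (< 0 V)."""
    I_e0 = e * n * area * np.sqrt(k * T_e / (2.0 * np.pi * m_e))  # thermal e- flux
    I_i0 = e * n * area * np.sqrt(k * T_i / (2.0 * np.pi * m_p))  # thermal ion flux
    I_e = I_e0 * np.exp(e * phi / (k * T_e))      # repelled electrons
    I_i = I_i0 * (1.0 - e * phi / (k * T_i))      # orbit-limited attracted ions
    return I_i - I_e

def floating_potential(n=1e6, T_e_eV=1000.0, T_i_eV=1000.0, radius=1.0):
    T_e, T_i = T_e_eV * e / k, T_i_eV * e / k     # eV -> kelvin
    area = 4.0 * np.pi * radius**2
    return brentq(net_current, -50.0 * T_e_eV, -1e-6, args=(n, T_e, T_i, area))

print(f"floating potential ~ {floating_potential():.0f} V")
```

Emission processes from the spacecraft surface, which the abstract notes can matter, shift this balance and are one reason the full CPIC simulations are needed.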
NASA Astrophysics Data System (ADS)
Rocha, Ana Maria A. C.; Costa, M. Fernanda P.; Fernandes, Edite M. G. P.
2016-12-01
This article presents a shifted hyperbolic penalty function and proposes an augmented Lagrangian-based algorithm for non-convex constrained global optimization problems. Convergence to an ε-global minimizer is proved. At each iteration k, the algorithm requires the ε^k-global minimization of a bound constrained optimization subproblem, where ε^k → ε. The subproblems are solved by a stochastic population-based metaheuristic that relies on the artificial fish swarm paradigm and a two-swarm strategy. To enhance the speed of convergence, the algorithm invokes the Nelder-Mead local search with a dynamically defined probability. Numerical experiments with benchmark functions and engineering design problems are presented. The results show that the proposed shifted hyperbolic augmented Lagrangian compares favorably with other deterministic and stochastic penalty-based methods.
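For orientation, the overall structure is the familiar augmented Lagrangian outer loop: minimize a penalized subproblem over the bound-constrained region, then update multipliers and the penalty parameter. The sketch below uses a classical quadratic penalty for equality constraints rather than the paper's shifted hyperbolic penalty, and `inner_solver` is a placeholder for the artificial fish swarm plus Nelder-Mead global solver.

```python
# Generic augmented Lagrangian outer loop (quadratic penalty, illustrative only).
import numpy as np

def augmented_lagrangian(f, c, inner_solver, x0, bounds,
                         mu=10.0, iters=20, tol=1e-6):
    """Minimize f(x) subject to c(x) = 0 and bound constraints `bounds`."""
    x = np.asarray(x0, float)
    lam = np.zeros(len(c(x)))
    for _ in range(iters):
        def L_aug(z):
            cz = c(z)
            return f(z) + lam @ cz + 0.5 * mu * cz @ cz
        x = inner_solver(L_aug, x, bounds)   # approximate global minimizer
        cx = c(x)
        if np.linalg.norm(cx) < tol:
            break
        lam = lam + mu * cx                  # multiplier update
        mu *= 2.0                            # tighten the penalty
    return x
```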
Controllability of switched singular mix-valued logical control networks with constraints
NASA Astrophysics Data System (ADS)
Deng, Lei; Gong, Mengmeng; Zhu, Peiyong
2018-03-01
The present paper investigates the controllability problem of switched singular mix-valued logical control networks (SSMLCNs) with constraints on states and controls. First, using the semi-tensor product (STP) of matrices, the SSMLCN is expressed in an algebraic form, based on which a necessary and sufficient condition is given for the uniqueness of the solution of SSMLCNs. Second, a necessary and sufficient criterion is derived for the controllability of constrained SSMLCNs, by converting a constrained SSMLCN into a parallel constrained switched mix-valued logical control network. Third, an algorithm is presented to design a proper switching sequence and a control scheme which force a state to a reachable state. Finally, a numerical example is given to demonstrate the efficiency of the results obtained in this paper.
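The semi-tensor product underpinning this algebraic form is defined for matrices of mismatched dimensions: for A of size m x n and B of size p x q, with t = lcm(n, p), A ⋉ B = (A ⊗ I_{t/n})(B ⊗ I_{t/p}), which reduces to the ordinary product when n = p. The sketch below shows the standard definition and a small Boolean example (logical values as canonical vectors, AND as a structure matrix); it is illustrative and not the paper's algorithm.

```python
# Left semi-tensor product (STP) and a minimal Boolean-network-style example.
import numpy as np
from math import lcm

def stp(A, B):
    A, B = np.atleast_2d(A), np.atleast_2d(B)
    n, p = A.shape[1], B.shape[0]
    t = lcm(n, p)
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

# Boolean values as vectors: True = [1, 0]^T, False = [0, 1]^T.
# Logical AND then acts by STP through its structure matrix.
M_and = np.array([[1, 0, 0, 0],
                  [0, 1, 1, 1]])
x = np.array([[1], [0]])          # True
y = np.array([[0], [1]])          # False
print(stp(stp(M_and, x), y).ravel())   # prints [0. 1.], i.e. False
```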
Adjoint-Based Sensitivity Kernels for Glacial Isostatic Adjustment in a Laterally Varying Earth
NASA Astrophysics Data System (ADS)
Crawford, O.; Al-Attar, D.; Tromp, J.; Mitrovica, J. X.; Austermann, J.; Lau, H. C. P.
2017-12-01
We consider a new approach to both the forward and inverse problems in glacial isostatic adjustment. We present a method for forward modelling GIA in compressible and laterally heterogeneous earth models with a variety of linear and non-linear rheologies. Instead of using the so-called sea level equation, which must be solved iteratively, the forward theory we present consists of a number of coupled evolution equations that can be straightforwardly numerically integrated. We also apply the adjoint method to the inverse problem in order to calculate the derivatives of measurements of GIA with respect to the viscosity structure of the Earth. Such derivatives quantify the sensitivity of the measurements to the model. The adjoint method enables efficient calculation of continuous and laterally varying derivatives, allowing us to calculate the sensitivity of measurements of glacial isostatic adjustment to the Earth's three-dimensional viscosity structure. The derivatives have a number of applications within the inverse method. Firstly, they can be used within a gradient-based optimisation method to find a model which minimises some data misfit function. The derivatives can also be used to quantify the uncertainty in such a model and hence to provide understanding of which parts of the model are well constrained. Finally, they enable construction of measurements which provide sensitivity to a particular part of the model space. We illustrate both the forward and inverse aspects with numerical examples in a spherically symmetric earth model.
NASA Astrophysics Data System (ADS)
Dartevelle, S.
2006-12-01
Large-scale volcanic eruptions are inherently hazardous events and hence cannot be described by detailed and accurate in situ measurements; as a result, volcanic explosive phenomenology is inadequately constrained in terms of initial and inflow conditions. Consequently, little to no real-time data exist to Verify and Validate computer codes developed to model these geophysical events as a whole. However, code Verification and Validation remains a necessary step, particularly when volcanologists use numerical data for mitigation of volcanic hazards, as is more often done nowadays. The Verification and Validation (V&V) process formally assesses the level of 'credibility' of numerical results produced within a range of specific applications. The first step, Verification, is 'the process of determining that a model implementation accurately represents the conceptual description of the model', which requires either exact analytical solutions or highly accurate simplified experimental data. The second step, Validation, is 'the process of determining the degree to which a model is an accurate representation of the real world', which requires complex experimental data of the 'real world' physics. The Verification step is rather simple to formally achieve, while, in the 'real world' explosive volcanism context, the second step, Validation, is all but impossible. Hence, instead of validating computer codes against the whole large-scale unconstrained volcanic phenomenology, we suggest focusing on the key physics that control these volcanic clouds, viz., momentum-driven supersonic jets and multiphase turbulence. We propose to compare numerical results against a set of simple but well-constrained analog experiments, which uniquely and unambiguously represent these two key phenomena separately. Here, we use GMFIX (Geophysical Multiphase Flow with Interphase eXchange, v1.62), a set of multiphase-CFD FORTRAN codes, which have been recently redeveloped to meet the strict Quality Assurance, verification, and validation requirements of the Office of Civilian Radioactive Waste Management of the US Dept of Energy. GMFIX solves Navier-Stokes and energy partial differential equations for each phase with appropriate turbulence and interfacial coupling between phases. For momentum-driven single- to multi-phase underexpanded jets, the position of the first Mach disk is known empirically as a function of both the pressure ratio, K, and the particle mass fraction, Phi, at the nozzle: the higher K, the further downstream the Mach disk, and the higher Phi, the further upstream the first Mach disk. We show that GMFIX captures these two essential features. In addition, GMFIX displays all the properties found in these jets, such as expansion fans, incident and reflected shocks, and subsequent downstream Mach disks, which make this code ideal for further investigations of equivalent volcanological phenomena. One of the other most challenging aspects of volcanic phenomenology is the multiphase nature of turbulence. We also validated GMFIX by comparing velocity profiles and turbulence quantities against well-constrained analog experiments. The simulated velocity profiles agree with the analog ones, as do the profiles of turbulence production. Overall, the Verification and Validation experiments, although inherently challenging, suggest that GMFIX captures the most essential dynamical properties of multiphase and supersonic flows and jets.
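As a rough worked example of the pressure-ratio dependence described above, the widely used Ashkenas-Sherman correlation for a single-phase, highly underexpanded jet places the first Mach disk at x_M / D ≈ 0.67 sqrt(p_0 / p_a), i.e. further downstream for larger pressure ratios K = p_0 / p_a. The particle-loading effect (higher Phi moving the disk upstream) is not captured by this single-phase correlation, which is part of why multiphase codes such as GMFIX are needed.

```python
# Ashkenas-Sherman estimate of the first Mach disk location (single-phase jet).
def mach_disk_distance(nozzle_diameter, pressure_ratio):
    return 0.67 * nozzle_diameter * pressure_ratio ** 0.5

for K in (10.0, 50.0, 200.0):
    print(f"K = {K:5.0f}:  x_M ~ {mach_disk_distance(1.0, K):.1f} nozzle diameters")
```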
Numerical simulation of plagioclase rim growth during magma ascent at Bezymianny Volcano, Kamchatka
NASA Astrophysics Data System (ADS)
Gorokhova, N. V.; Melnik, O. E.; Plechov, P. Yu.; Shcherbakov, V. D.
2013-08-01
Slow CaAl-NaSi interdiffusion in plagioclase crystals preserves the chemical zoning of plagioclase in detail, which, along with the strong dependence of anorthite content in plagioclase on melt composition, pressure, and temperature, makes this mineral an important source of information on magma processes. A numerical model of zoned crystal growth is developed in the paper. The model is based on equations of multicomponent diffusion with diagonal and cross-component diffusion terms and accounts for mass conservation on the melt-crystal interface and a growth rate controlled by undercooling. The model is applied to data on plagioclase rim zoning from several recent eruptions of Bezymianny Volcano (Kamchatka). We show that an equilibrium growth model cannot explain the crystallization of naturally observed plagioclase during magma ascent. The developed non-equilibrium model reproduces the natural plagioclase zoning and allows magma ascent rates to be constrained. Matching of natural and simulated zoning suggests ascent from 100 to 50 MPa over 15-20 days. The magma ascent rate from 50 MPa to the surface varies from eruption to eruption: plagioclase zoning from the December 2006 eruption suggests ascent to the surface in less than 1 day, whereas plagioclase zoning from the March 2000 and May 2007 eruptions is better explained by magma ascent over periods of more than 30 days. Based on a comparison of diffusion coefficients for individual elements, a mechanism of atomic diffusion during plagioclase crystallization is proposed.
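The zoning calculation rests on solving diffusion equations in the growing crystal. The snippet below is a minimal sketch of that ingredient only: a single-component, constant-coefficient interdiffusion relaxation of an anorthite step by explicit finite differences. The cross-component terms, the moving melt-crystal interface and the undercooling-controlled growth rate of the full model are omitted, and the diffusion coefficient, grid and time span are assumed placeholder values.

```python
import numpy as np

# Minimal sketch: relaxation of a step in anorthite content by binary interdiffusion,
# using an explicit finite-difference (FTCS) scheme. The diffusion coefficient is a
# placeholder, not a calibrated value; real interdiffusion coefficients depend strongly
# on temperature and composition.
D = 1e-22                 # m^2/s, placeholder CaAl-NaSi interdiffusion coefficient
dx = 1e-8                 # 10 nm grid spacing
dt = 0.1 * dx ** 2 / D    # time step well inside the explicit stability limit
t_total = 20 * 86400.0    # ~20 days, the ascent duration suggested above
nx = 100

an = np.full(nx, 0.55)    # anorthite fraction in the crystal interior
an[nx // 2:] = 0.80       # newly grown, more calcic rim

for _ in range(int(t_total / dt)):
    an[1:-1] += D * dt / dx ** 2 * (an[2:] - 2.0 * an[1:-1] + an[:-2])  # FTCS update

blurred = np.sum((an > 0.58) & (an < 0.77)) * dx
print(f"transition zone width after 20 days: {blurred * 1e9:.0f} nm")
```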
Spectral characteristics of background error covariance and multiscale data assimilation
Li, Zhijin; Cheng, Xiaoping; Gustafson, Jr., William I.; ...
2016-05-17
The spatial resolutions of numerical atmospheric and oceanic circulation models have increased steadily over the past decades. Horizontal grid spacing down to the order of 1 km is now often used to resolve cloud systems in the atmosphere and sub-mesoscale circulation systems in the ocean. These fine-resolution models encompass a wide range of temporal and spatial scales, across which dynamical and statistical properties vary. In particular, dynamic flow systems at small scales can be spatially localized and temporally intermittent. Difficulties of current data assimilation algorithms for such fine-resolution models are examined numerically and theoretically. Our analysis shows that the background error correlation length scale is larger than 75 km for streamfunctions and larger than 25 km for water vapor mixing ratios, even for a 2-km resolution model. A theoretical analysis suggests that such correlation length scales prevent the currently used data assimilation schemes from constraining spatial scales smaller than 150 km for streamfunctions and 50 km for water vapor mixing ratios. Moreover, our results highlight the need to fundamentally modify currently used data assimilation algorithms for assimilating high-resolution observations into the aforementioned fine-resolution models. Lastly, within the framework of four-dimensional variational data assimilation, a multiscale methodology based on scale decomposition is suggested and its challenges are discussed.
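A minimal sketch of the scale-decomposition idea suggested for multiscale assimilation is given below: a two-dimensional increment field is split into large- and small-scale parts with a Gaussian spectral low-pass filter, so that each band could in principle be treated with its own error statistics. The grid spacing and the 150-km cutoff are illustrative values echoing the discussion above, not part of any operational scheme.

```python
import numpy as np

# Minimal sketch of scale decomposition for multiscale data assimilation:
# split a 2D field into large- and small-scale components with a spectral
# (Gaussian) low-pass filter. Grid spacing and cutoff length are illustrative.
n, dx_km = 256, 2.0                     # 2-km grid, 512-km domain (assumed)
cutoff_km = 150.0                       # scale separating "large" from "small"

rng = np.random.default_rng(1)
field = rng.normal(size=(n, n))

kx = np.fft.fftfreq(n, d=dx_km)         # cycles per km
ky = np.fft.fftfreq(n, d=dx_km)
k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)

lowpass = np.exp(-0.5 * (k * cutoff_km) ** 2)    # Gaussian spectral filter
large = np.fft.ifft2(np.fft.fft2(field) * lowpass).real
small = field - large                   # complementary small-scale component

print(f"variance split: large {large.var():.3f}, small {small.var():.3f}")
```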
Leng, Guoyong; Leung, L. Ruby; Huang, Maoyi
2017-06-20
An irrigation module that considers both irrigation water sources and irrigation methods has been incorporated into the ACME Land Model (ALM). Global numerical experiments were conducted to evaluate the impacts of irrigation water sources and irrigation methods on the simulated irrigation effects. All simulations shared the same irrigation soil moisture target constrained by a global census dataset of irrigation amounts. Irrigation has large impacts on terrestrial water balances especially in regions with extensive irrigation. Such effects depend on the irrigation water sources: surface-water-fed irrigation leads to decreases in runoff and water table depth, while groundwater-fed irrigation increases water table depth, with positive or negative effects on runoff depending on the pumping intensity. Irrigation effects also depend significantly on the irrigation methods. Flood irrigation applies water in large volumes within short durations, resulting in much larger impacts on runoff and water table depth than drip and sprinkler irrigations. Differentiating the irrigation water sources and methods is important not only for representing the distinct pathways of how irrigation influences the terrestrial water balances, but also for estimating irrigation water use efficiency. Specifically, groundwater pumping has lower irrigation water use efficiency due to enhanced recharge rates. Different irrigation methods also affect water use efficiency, with drip irrigation the most efficient followed by sprinkler and flood irrigation. Furthermore, our results highlight the importance of explicitly accounting for irrigation sources and irrigation methods, which are the least understood and constrained aspects in modeling irrigation water demand, water scarcity and irrigation effects in Earth System Models.
Walvoord, Michelle Ann; Stonestrom, David A.; Andraski, Brian J.; Striegl, Robert G.
2004-01-01
Natural flow regimes in deep unsaturated zones of arid interfluvial environments are rarely in hydraulic equilibrium with near-surface boundary conditions imposed by present-day plant–soil–atmosphere dynamics. Nevertheless, assessments of water resources and contaminant transport require realistic estimates of gas, water, and solute fluxes under past, present, and projected conditions. Multimillennial transients that are captured in current hydraulic, chemical, and isotopic profiles can be interpreted to constrain alternative scenarios of paleohydrologic evolution following climatic and vegetational shifts from pluvial to arid conditions. However, interpreting profile data with numerical models presents formidable challenges in that boundary conditions must be prescribed throughout the entire Holocene, when we have at most a few decades of actual records. Models of profile development at the Amargosa Desert Research Site include substantial uncertainties from imperfectly known initial and boundary conditions when simulating flow and solute transport over millennial timescales. We show how multiple types of profile data, including matric potentials, porewater Cl− concentrations, and porewater δD and δ18O values, can be used in multiphase heat, flow, and transport models to expose and reduce uncertainty in paleohydrologic reconstructions. Results indicate that a dramatic shift in the near-surface water balance occurred approximately 16,000 yr ago, but that transitions in precipitation, temperature, and vegetation were not necessarily synchronous. The timing of the hydraulic transition imparts the largest uncertainty to model-predicted contemporary fluxes. In contrast, the uncertainties associated with initial (late Pleistocene) conditions and boundary conditions during the Holocene impart only small uncertainties to model-predicted contemporary fluxes.
PSHAe (Probabilistic Seismic Hazard enhanced): the case of Istanbul.
NASA Astrophysics Data System (ADS)
Stupazzini, Marco; Allmann, Alexander; Infantino, Maria; Kaeser, Martin; Mazzieri, Ilario; Paolucci, Roberto; Smerzini, Chiara
2016-04-01
Probabilistic Seismic Hazard Analysis (PSHA) relying only on ground motion prediction equations (GMPEs) tends to be insufficiently constrained at short distances, and the data only partially account for the rupture process, seismic wave propagation and three-dimensional (3D) complex configurations. Given a large and representative set of numerical results from 3D scenarios, analysing the resulting database from a statistical point of view and implementing the results as a generalized attenuation function (GAF) into the classical PSHA might be an appealing way to deal with this problem (Villani et al., 2014). Nonetheless, the limited amount of computational resources or time available tends to pose substantial constraints on a broad application of this method, which is, furthermore, only partially suitable for taking into account the spatial correlation of ground motion as modelled by each forward physics-based simulation (PBS). Given that, we envision a streamlined, alternative implementation of the previous approach, aiming at selecting a limited number of wisely chosen scenarios and associating a probability of occurrence with them. The experience gathered in past years on 3D modelling of seismic wave propagation in complex alluvial basins (Pilz et al., 2011; Guidotti et al., 2011; Smerzini and Villani, 2012) allowed us to improve the selection of simulated scenarios so as to explore, on the one hand, the variability of ground motion, preserving the full spatial correlation necessary for risk modelling, and, on the other hand, the simulated losses for a given location and a given building stock. 3D numerical modelling of scenarios occurring on the North Anatolian Fault in the proximity of Istanbul is carried out with the spectral element code SPEED (http://speed.mox.polimi.it). The results are introduced into a PSHA, and the capabilities of the proposed methodology are assessed against a traditional GMPE-based approach. References: Guidotti R., Stupazzini M., Smerzini C., Paolucci R., Ramieri P., "Numerical Study on the Role of Basin Geometry and Kinematic Seismic Source in 3D Ground Motion Simulation of the 22 February 2011 MW 6.2 Christchurch Earthquake", Seismol. Res. Lett. 82(6):767-782, 2011, DOI:10.1785/gssrl.82.6.767; Pilz M., Parolai S., Stupazzini M., Paolucci P., Zschau J., "Modelling basin effects on earthquake ground motion in the Santiago de Chile basin by a spectral element code", Geophys. J. Int. 187(2):929-945, 2011, DOI:10.1111/j.1365-246X.2011.05183.x; Smerzini C., Villani M., "Broadband Numerical Simulations in Complex Near-Field Geological Configurations: The Case of the 2009 Mw 6.3 L'Aquila Earthquake", Bull. Seismol. Soc. Am. 102(6):2436-2451, 2012, DOI:10.1785/0120120002; Villani M., Faccioli E., Ordaz M., Stupazzini M., "High-Resolution Seismic Hazard Analysis in a Complex Geological Configuration: The Case of the Sulmona Basin in Central Italy", Earthquake Spectra 30(4):1801-1824, 2014, DOI:10.1193/112911EQS288M.
Jost, Adam B.; Bachan, Aviv; van de Schootbrugge, Bas; ...
2016-12-29
The end-Triassic mass extinction coincided with a negative δ13C excursion, consistent with release of 13C-depleted CO2 from the Central Atlantic Magmatic Province. However, the amount of carbon released and its effects on ocean chemistry are poorly constrained. The coupled nature of the carbon and calcium cycles allows calcium isotopes to be used for constraining carbon cycle dynamics and vice versa. We present a high-resolution calcium isotope (δ44/40Ca) record from 100 m of marine limestone spanning the Triassic/Jurassic boundary in two stratigraphic sections from northern Italy. Immediately above the extinction horizon and the associated negative excursion in δ13C, δ44/40Ca decreases by ca. 0.8‰ in 20 m of section and then recovers to preexcursion values. Coupled numerical models of the geological carbon and calcium cycles demonstrate that this δ44/40Ca excursion is too large to be explained by changes to seawater δ44/40Ca alone, regardless of CO2 injection volume and duration. Less than 20% of the δ44/40Ca excursion can be attributed to acidification. The remaining 80% likely reflects a higher proportion of aragonite in the original sediment, based largely on high concentrations of Sr in the samples. Our study demonstrates that coupled models of the carbon and calcium cycles have the potential to help distinguish contributions of primary seawater isotopic changes from local or diagenetic effects on the δ44/40Ca of carbonate sediments. Finally, differentiating between these effects is critical for constraining the impact of ocean acidification during the end-Triassic mass extinction, as well as for interpreting other environmental events in the geologic past.
Models for small-scale structure on cosmic strings. II. Scaling and its stability
NASA Astrophysics Data System (ADS)
Vieira, J. P. P.; Martins, C. J. A. P.; Shellard, E. P. S.
2016-11-01
We make use of the formalism described in a previous paper [Martins et al., Phys. Rev. D 90, 043518 (2014)] to address general features of wiggly cosmic string evolution. In particular, we highlight the important role played by poorly understood energy loss mechanisms and propose a simple Ansatz which tackles this problem in the context of an extended velocity-dependent one-scale model. We find a general procedure to determine all the scaling solutions admitted by a specific string model and study their stability, enabling a detailed comparison with future numerical simulations. A simpler comparison with previous Goto-Nambu simulations supports earlier evidence that scaling is easier to achieve in the matter era than in the radiation era. In addition, we also find that the requirement that a scaling regime be stable seems to notably constrain the allowed range of energy loss parameters.
Computational wing optimization and comparisons with experiment for a semi-span wing model
NASA Technical Reports Server (NTRS)
Waggoner, E. G.; Haney, H. P.; Ballhaus, W. F.
1978-01-01
A computational wing optimization procedure was developed and verified by an experimental investigation of a semi-span variable camber wing model in the NASA Ames Research Center 14-foot transonic wind tunnel. The Bailey-Ballhaus transonic potential flow analysis and Woodward-Carmichael linear theory codes were linked to the Vanderplaats constrained minimization routine to optimize model configurations at several subsonic and transonic design points. The 35 deg swept wing is characterized by multi-segmented leading and trailing edge flaps whose hinge lines are swept relative to the leading and trailing edges of the wing. By varying deflection angles of the flap segments, camber and twist distribution can be optimized for different design conditions. Results indicate that numerical optimization can be both an effective and efficient design tool. The optimized configurations had lift-to-drag ratios at the design points as good as or better than those of the best designs previously tested during an extensive parametric study.
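A hedged sketch of the optimization loop only (not the actual Bailey-Ballhaus, Woodward-Carmichael or Vanderplaats codes): flap-segment deflections are the design variables, a made-up quadratic surrogate stands in for the drag predicted by the aerodynamic analyses, and a linearized surrogate lift constraint enforces the design-point lift coefficient.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of constrained wing optimization: all coefficients below are invented
# surrogates for the transonic/linear-theory analysis codes, for illustration only.
def drag(deltas):                       # surrogate drag model in the flap deflections
    return 0.01 + 0.002 * np.sum((deltas - np.array([2.0, 4.0, 1.0])) ** 2)

def lift(deltas):                       # surrogate lift response to flap deflections
    return 0.30 + 0.02 * np.sum(deltas)

CL_design = 0.45
res = minimize(
    drag,
    x0=np.zeros(3),                                   # three flap segments
    method="SLSQP",
    bounds=[(-10.0, 10.0)] * 3,                       # deflection limits in degrees
    constraints=[{"type": "ineq", "fun": lambda d: lift(d) - CL_design}],
)
print("deflections:", np.round(res.x, 3), "drag:", drag(res.x), "lift:", lift(res.x))
```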
Yao, Jincao; Yu, Huimin; Hu, Roland
2017-01-01
This paper introduces a new implicit-kernel-sparse-shape-representation-based object segmentation framework. Given an input object whose shape is similar to some of the elements in the training set, the proposed model can automatically find a cluster of implicit kernel sparse neighbors to approximately represent the input shape and guide the segmentation. A distance-constrained probabilistic definition together with a dualization energy term is developed to connect high-level shape representation and low-level image information. We theoretically prove that our model not only derives from two projected convex sets but is also equivalent to a sparse-reconstruction-error-based representation in the Hilbert space. Finally, a "wake-sleep"-based segmentation framework is applied to drive the evolutionary curve to recover the original shape of the object. We test our model on two public datasets. Numerical experiments on both synthetic images and real applications show the superior capabilities of the proposed framework.
A Constrained Scheme for High Precision Downward Continuation of Potential Field Data
NASA Astrophysics Data System (ADS)
Wang, Jun; Meng, Xiaohong; Zhou, Zhiwen
2018-04-01
To further improve the accuracy of the downward continuation of potential field data, we present a novel constrained scheme in this paper combining the ideas of truncated Taylor series expansion, principal component analysis, iterative continuation and prior constraints. In the scheme, the initial downward continued field on the target plane is obtained from the original measured field using the truncated Taylor series expansion method. If the original field has a particularly low signal-to-noise ratio, principal component analysis is utilized to suppress the influence of noise. Then, the downward continued field is upward continued to the plane of the prior information. If the prior information is on the target plane, it is upward continued over a short distance to get the updated prior information. Next, the difference between the calculated field and the updated prior information is computed. A cosine attenuation function is adopted to define the scope of the constraint and the corresponding modification term. Afterward, a correction is performed on the downward continued field on the target plane by adding the modification term. The correction process is repeated iteratively until the difference meets the convergence condition. The accuracy of the proposed constrained scheme is tested on synthetic data with and without noise. Numerous model tests demonstrate that downward continuation using the constrained strategy can yield more precise results than other downward continuation methods without constraints and is relatively insensitive to noise, even for downward continuation over a large distance. Finally, the proposed scheme is applied to real magnetic data collected within the Dapai polymetallic deposit in Fujian Province, South China. This practical application also indicates the superiority of the presented scheme.
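The iterative, prior-constrained correction loop can be sketched in one dimension as below. A wavenumber-domain upward-continuation operator stands in for the forward problem, the prior model on the target plane seeds the iteration, and each step adds the residual against the observed field; the truncated Taylor initialization, PCA denoising and cosine attenuation window of the actual scheme are omitted, and all numbers are illustrative.

```python
import numpy as np

# Simplified 1D sketch of iterative downward continuation with a prior constraint.
n, dx, h = 256, 1.0, 5.0
k = np.abs(np.fft.fftfreq(n, d=dx)) * 2 * np.pi
upward = lambda f: np.fft.ifft(np.fft.fft(f) * np.exp(-k * h)).real

rng = np.random.default_rng(2)
x = np.arange(n) * dx
true_target = np.exp(-((x - 128.0) / 8.0) ** 2)              # field on the target plane
observed = upward(true_target) + 0.01 * rng.normal(size=n)   # noisy field on the upper plane
prior = true_target + 0.05 * rng.normal(size=n)              # imperfect prior information

estimate = prior.copy()                                       # constrained starting model
for i in range(200):
    residual = observed - upward(estimate)
    estimate += residual                                      # Landweber-type update
    if np.sqrt(np.mean(residual ** 2)) < 0.01:                # stop near the noise level
        break

print(f"iterations {i + 1}, rms misfit to truth "
      f"{np.sqrt(np.mean((estimate - true_target) ** 2)):.4f}")
```

High wavenumbers are barely updated by the residual term, so they stay close to the prior; only the well-determined long wavelengths are pulled toward the observations, which is the intended role of the constraint.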
NASA Astrophysics Data System (ADS)
Allstadt, K.; Moretti, L.; Mangeney, A.; Stutzmann, E.; Capdeville, Y.
2014-12-01
The time series of forces exerted on the earth by a large and rapid landslide derived remotely from the inversion of seismic records can be used to tie post-slide evidence to what actually occurred during the event and can be used to tune numerical models and test theoretical methods. This strategy is applied to the 48.5 Mm3 August 2010 Mount Meager rockslide-debris flow in British Columbia, Canada. By inverting data from just five broadband seismic stations less than 300 km from the source, we reconstruct the time series of forces that the landslide exerted on the Earth as it occurred. The result illuminates a complex retrogressive initiation sequence and features attributable to flow over a complicated path including several curves and runup against a valley wall. The seismically derived force history also allows for the estimation of the horizontal acceleration (0.39 m/s^2) and average apparent coefficient of basal friction (0.38) of the rockslide, and the speed of the center of mass of the debris flow (peak of 92 m/s). To extend beyond these simple calculations and to test the interpretation, we also use the seismically derived force history to guide numerical modeling of the event - seeking to simulate the landslide in a way that best fits both the seismic and field constraints. This allows for a finer reconstruction of the volume, timing, and sequence of events, estimates of friction, and spatiotemporal variations in speed and flow thickness. The modeling allowed us to analyze the sensitivity of the force to the different parameters involved in the landslide modeling to better understand what can and cannot be constrained from seismic source inversions of landslide signals.
Numerical Estimation of Balanced and Falling States for Constrained Legged Systems
NASA Astrophysics Data System (ADS)
Mummolo, Carlotta; Mangialardi, Luigi; Kim, Joo H.
2017-08-01
Instability and risk of fall during standing and walking are common challenges for biped robots. While existing criteria from state-space dynamical systems approach or ground reference points are useful in some applications, complete system models and constraints have not been taken into account for prediction and indication of fall for general legged robots. In this study, a general numerical framework that estimates the balanced and falling states of legged systems is introduced. The overall approach is based on the integration of joint-space and Cartesian-space dynamics of a legged system model. The full-body constrained joint-space dynamics includes the contact forces and moments term due to current foot (or feet) support and another term due to altered contact configuration. According to the refined notions of balanced, falling, and fallen, the system parameters, physical constraints, and initial/final/boundary conditions for balancing are incorporated into constrained nonlinear optimization problems to solve for the velocity extrema (representing the maximum perturbation allowed to maintain balance without changing contacts) in the Cartesian space at each center-of-mass (COM) position within its workspace. The iterative algorithm constructs the stability boundary as a COM state-space partition between balanced and falling states. Inclusion in the resulting six-dimensional manifold is a necessary condition for a state of the given system to be balanced under the given contact configuration, while exclusion is a sufficient condition for falling. The framework is used to analyze the balance stability of example systems with various degrees of complexities. The manifold for a 1-degree-of-freedom (DOF) legged system is consistent with the experimental and simulation results in the existing studies for specific controller designs. The results for a 2-DOF system demonstrate the dependency of the COM state-space partition upon joint-space configuration (elbow-up vs. elbow-down). For both 1- and 2-DOF systems, the results are validated in simulation environments. Finally, the manifold for a biped walking robot is constructed and illustrated against its single-support walking trajectories. The manifold identified by the proposed framework for any given legged system can be evaluated beforehand as a system property and serves as a map for either a specified state or a specific controller's performance.
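For intuition, the sketch below computes the analogous balanced/falling partition in closed form for the simplest case, a 1-DOF linear inverted pendulum with a finite foot, using the capture-point condition. It is a toy analog of the constrained-optimization framework above, not the framework itself; the foot dimensions and COM height are assumed.

```python
import numpy as np

# A COM state (x, xdot) of a linear inverted pendulum is marked "balanced" if its
# instantaneous capture point x + xdot/omega lies within the base of support,
# i.e. the system can come to rest without changing contacts.
g, z0 = 9.81, 1.0                  # gravity and COM height (assumed)
omega = np.sqrt(g / z0)
foot = (-0.10, 0.15)               # heel/toe limits of the support polygon in metres (assumed)

x = np.linspace(-0.3, 0.3, 201)    # COM position grid
v = np.linspace(-1.0, 1.0, 201)    # COM velocity grid
X, V = np.meshgrid(x, v)

capture_point = X + V / omega
balanced = (capture_point >= foot[0]) & (capture_point <= foot[1])

# The boundary of `balanced` is the 1-DOF analog of the COM state-space
# partition (stability boundary) between balanced and falling states.
print(f"balanced fraction of sampled states: {balanced.mean():.2f}")
```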
Temperature profile around a basaltic sill intruded into wet sediments
Baker, Leslie; Bernard, Andrew; Rember, William C.; Milazzo, Moses; Dundas, Colin M.; Abramov, Oleg; Kestay, Laszlo P.
2015-01-01
The transfer of heat into wet sediments from magmatic intrusions or lava flows is not well constrained from field data. Such field constraints on numerical models of heat transfer could significantly improve our understanding of water–lava interactions. We use experimentally calibrated pollen darkening to measure the temperature profile around a basaltic sill emplaced into wet lakebed sediments. It is well known that, upon heating, initially transparent palynomorphs darken progressively through golden, brown, and black shades before being destroyed; however, this approach to measuring temperature has not been applied to volcanological questions. We collected sediment samples from established Miocene fossil localities at Clarkia, Idaho. Fossils in the sediments include pollen from numerous tree and shrub species. We experimentally calibrated changes in the color of Clarkia sediment pollen and used this calibration to determine sediment temperatures around a Miocene basaltic sill emplaced in the sediments. Results indicated a flat temperature profile above and below the sill, with T > 325 °C within 1 cm of the basalt-sediment contact, near 300 °C at 1–2 cm from the contact, and ~ 250 °C at 1 m from the sill contact. This profile suggests that heat transport in the sediments was hydrothermally rather than conductively controlled. This information will be used to test numerical models of heat transfer in wet sediments on Earth and Mars.
NASA Astrophysics Data System (ADS)
Rae, A. S. P.; Collins, G. S.; Grieve, R. A. F.; Osinski, G. R.; Morgan, J. V.
2017-07-01
Large impact structures have complex morphologies, with zones of structural uplift that can be expressed topographically as central peaks and/or peak rings internal to the crater rim. The formation of these structures requires transient strength reduction in the target material and one of the proposed mechanisms to explain this behavior is acoustic fluidization. Here, samples of shock-metamorphosed quartz-bearing lithologies at the West Clearwater Lake impact structure, Canada, are used to estimate the maximum recorded shock pressures in three dimensions across the crater. These measurements demonstrate that the currently observed distribution of shock metamorphism is strongly controlled by the formation of the structural uplift. The distribution of peak shock pressures, together with apparent crater morphology and geological observations, is compared with numerical impact simulations to constrain parameters used in the block-model implementation of acoustic fluidization. The numerical simulations produce craters that are consistent with morphological and geological observations. The results show that the regeneration of acoustic energy must be an important feature of acoustic fluidization in crater collapse, and should be included in future implementations. Based on the comparison between observational data and impact simulations, we conclude that the West Clearwater Lake structure had an original rim (final crater) diameter of 35-40 km and has since experienced up to 2 km of differential erosion.
The Deep Crust Magmatic Refinery, Part 2 : The Magmatic Output of Numerical Models.
NASA Astrophysics Data System (ADS)
Bouilhol, P.; Riel, N., Jr.; Van Hunen, J.
2016-12-01
Metamorphic and magmatic processes occurring in the deep crust ultimately control the chemical and physical characteristics of the continental crust. A complex interplay between magma intrusion, crystallization, and reaction with the pre-existing crust provides a wide range of differentiated magmas and cumulates (and/or restites) that will feed the upper crustal levels with evolved melt while constructing the lower crust. With growing evidence from field and experimental studies, it becomes clearer that crystallization and melting processes are non-exclusive and should be considered together. Incoming H2O-bearing mantle melts start to fractionate to a certain extent, forming cumulates but also releasing heat and H2O to the intruded host rock, allowing it to melt under saturated conditions. The end result of such a dynamic system is a function of the amount and composition of the melt input and of the extent of reaction with the host, which is itself dependent on the migration mode of the melts. To better constrain lower crust processes, we have built a numerical model [see the associated abstract by Riel et al. for methods] to explore different parameters, unravelling the complex interplay between melt percolation/crystallization and degassing/re-melting in a so-called "hot zone" model. We simulated the intrusion of water-bearing mantle melts at the base of an amphibolitized lower crust during a magmatic event lasting 5 Myr. We varied several parameters, such as Moho depth and melt/rock ratio, to better constrain what controls the final melt and lower crust compositions. We show the evolution of the chemical characteristics of the melt that escapes the system during this magmatic event, as well as the resulting lower crust characteristics. We illustrate how the evolution of the melt major-element composition reflects the progressive replacement of the crust towards compositions that are dominated by the mantle melt input. The resulting magmas cover a wide range of compositions from tonalite to granite, and the modelled lower crust shows all the petrological characteristics of observed lower arc crust.
NASA Technical Reports Server (NTRS)
Nash, Stephen G.; Polyak, R.; Sofer, Ariela
1994-01-01
When a classical barrier method is applied to the solution of a nonlinear programming problem with inequality constraints, the Hessian matrix of the barrier function becomes increasingly ill-conditioned as the solution is approached. As a result, it may be desirable to consider alternative numerical algorithms. We compare the performance of two methods motivated by barrier functions. The first is a stabilized form of the classical barrier method, where a numerically stable approximation to the Newton direction is used when the barrier parameter is small. The second is a modified barrier method where a barrier function is applied to a shifted form of the problem, and the resulting barrier terms are scaled by estimates of the optimal Lagrange multipliers. The condition number of the Hessian matrix of the resulting modified barrier function remains bounded as the solution to the constrained optimization problem is approached. Both of these techniques can be used in the context of a truncated-Newton method, and hence can be applied to large problems, as well as on parallel computers. In this paper, both techniques are applied to problems with bound constraints and we compare their practical behavior.
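A minimal sketch of the classical log-barrier method on a small bound-constrained quadratic program is given below; it also prints the condition number of the barrier Hessian to show the ill-conditioning that motivates the stabilized and modified barrier variants compared in the paper. The problem data are random stand-ins, and the simple damped Newton inner loop is illustrative rather than the truncated-Newton solver discussed above.

```python
import numpy as np

# Classical log-barrier method for: minimize 0.5 x'Ax - b'x  subject to  l <= x <= u.
rng = np.random.default_rng(3)
n = 5
M = rng.normal(size=(n, n))
A = M @ M.T + n * np.eye(n)                 # SPD quadratic term (illustrative)
b = rng.normal(size=n)
l, u = -np.ones(n), np.ones(n)

def barrier_grad_hess(x, mu):
    g = A @ x - b - mu * (1.0 / (x - l) - 1.0 / (u - x))
    H = A + mu * (np.diag(1.0 / (x - l) ** 2) + np.diag(1.0 / (u - x) ** 2))
    return g, H

x = np.zeros(n)                             # strictly feasible start
mu = 1.0
for _ in range(12):                         # outer loop: shrink the barrier parameter
    for _ in range(50):                     # inner loop: Newton on the barrier function
        g, H = barrier_grad_hess(x, mu)
        step = np.linalg.solve(H, -g)
        t = 1.0
        while np.any(x + t * step <= l) or np.any(x + t * step >= u):
            t *= 0.5                        # damp the step to stay strictly inside the bounds
        x = x + t * step
        if np.linalg.norm(g) < 1e-8:
            break
    print(f"mu={mu:.1e}  cond(H)={np.linalg.cond(H):.2e}")  # ill-conditioning grows as mu -> 0
    mu *= 0.2
print("solution:", np.round(x, 4))
```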
Vibration control of multiferroic fibrous composite plates using active constrained layer damping
NASA Astrophysics Data System (ADS)
Kattimani, S. C.; Ray, M. C.
2018-06-01
Geometrically nonlinear vibration control of fiber reinforced magneto-electro-elastic or multiferroic fibrous composite plates using active constrained layer damping treatment has been investigated. The piezoelectric (BaTiO3) fibers are embedded in the magnetostrictive (CoFe2O4) matrix forming magneto-electro-elastic or multiferroic smart composite. A three-dimensional finite element model of such fiber reinforced magneto-electro-elastic plates integrated with the active constrained layer damping patches is developed. Influence of electro-elastic, magneto-elastic and electromagnetic coupled fields on the vibration has been studied. The Golla-Hughes-McTavish method in time domain is employed for modeling a constrained viscoelastic layer of the active constrained layer damping treatment. The von Kármán type nonlinear strain-displacement relations are incorporated for developing a three-dimensional finite element model. Effect of fiber volume fraction, fiber orientation and boundary conditions on the control of geometrically nonlinear vibration of the fiber reinforced magneto-electro-elastic plates is investigated. The performance of the active constrained layer damping treatment due to the variation of piezoelectric fiber orientation angle in the 1-3 Piezoelectric constraining layer of the active constrained layer damping treatment has also been emphasized.
NASA Astrophysics Data System (ADS)
Sun, Congcong; Wang, Zhijie; Liu, Sanming; Jiang, Xiuchen; Sheng, Gehao; Liu, Tianyu
2017-05-01
Wind power is clean and non-polluting, and the development of bundled wind-thermal generation power systems (BWTGSs) is one of the important means of improving the wind power accommodation rate and implementing the “clean alternative” on the generation side. A two-stage optimization strategy for BWTGSs considering wind speed forecasting results and load characteristics is proposed. By taking short-term wind speed forecasts on the generation side and load characteristics on the demand side into account, a two-stage optimization model for BWTGSs is formulated. Using the environmental benefit index of BWTGSs as the objective function, and the supply-demand balance and generator operation limits as the constraints, the first-stage optimization model is developed with chance-constrained programming theory. Using the operating cost of the BWTGS as the objective function, the second-stage optimization model is developed with a greedy algorithm. An improved PSO algorithm is employed to solve the model, and numerical tests verify the effectiveness of the proposed strategy.
A simulation-based analytic model of radio galaxies
NASA Astrophysics Data System (ADS)
Hardcastle, M. J.
2018-04-01
I derive and discuss a simple semi-analytical model of the evolution of powerful radio galaxies which is not based on assumptions of self-similar growth, but rather implements some insights about the dynamics and energetics of these systems derived from numerical simulations, and can be applied to arbitrary pressure/density profiles of the host environment. The model can qualitatively and quantitatively reproduce the source dynamics and synchrotron light curves derived from numerical modelling. Approximate corrections for radiative and adiabatic losses allow it to predict the evolution of radio spectral index and of inverse-Compton emission both for active and `remnant' sources after the jet has turned off. Code to implement the model is publicly available. Using a standard model with a light relativistic (electron-positron) jet, subequipartition magnetic fields, and a range of realistic group/cluster environments, I simulate populations of sources and show that the model can reproduce the range of properties of powerful radio sources as well as observed trends in the relationship between jet power and radio luminosity, and predicts their dependence on redshift and environment. I show that the distribution of source lifetimes has a significant effect on both the source length distribution and the fraction of remnant sources expected in observations, and so can in principle be constrained by observations. The remnant fraction is expected to be low even at low redshift and low observing frequency due to the rapid luminosity evolution of remnants, and to tend rapidly to zero at high redshift due to inverse-Compton losses.
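The flavor of such a semi-analytic evolution model can be conveyed with a deliberately crude toy (not the published model): a jet of constant power inflates a spherical lobe in a beta-model atmosphere, and the lobe boundary advances at the speed set by the internal overpressure, with radiative and adiabatic losses ignored and all parameter values assumed.

```python
import numpy as np

# Toy lobe-expansion integration; every number below is an illustrative assumption.
kpc, Myr = 3.086e19, 3.156e13             # metres, seconds
Q = 1e38                                  # jet power in watts (assumed)
gamma_lobe = 4.0 / 3.0
p0, rho0, rc, beta = 1.6e-13, 1.7e-24, 30 * kpc, 0.6   # beta-model group atmosphere (assumed)

p_ext = lambda r: p0 * (1 + (r / rc) ** 2) ** (-1.5 * beta)
rho_ext = lambda r: rho0 * (1 + (r / rc) ** 2) ** (-1.5 * beta)

R, E, dt = 2 * kpc, 1e52, 0.01 * Myr
for _ in range(int(200 * Myr / dt)):
    V = 4.0 / 3.0 * np.pi * R ** 3
    p_lobe = (gamma_lobe - 1.0) * E / V                      # lobe pressure from injected energy
    v = np.sqrt(max(p_lobe - p_ext(R), 0.0) / rho_ext(R))    # expansion speed from overpressure
    R += v * dt
    E += Q * dt                                              # losses neglected in this toy
print(f"lobe radius after 200 Myr: {R / kpc:.1f} kpc")
```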
NASA Astrophysics Data System (ADS)
White, Christopher Joseph
We describe the implementation of sophisticated numerical techniques for general-relativistic magnetohydrodynamics simulations in the Athena++ code framework. Improvements over many existing codes include the use of advanced Riemann solvers and of staggered-mesh constrained transport. Combined with considerations for computational performance and parallel scalability, these allow us to investigate black hole accretion flows with unprecedented accuracy. The capability of the code is demonstrated by exploring magnetically arrested disks.
NASA Astrophysics Data System (ADS)
Barantsrva, O.
2014-12-01
We present a preliminary analysis of the crustal and upper mantle structure for off-shore regions in the North Atlantic and Arctic oceans. These regions have anomalous oceanic lithosphere: the upper mantle of the North Atlantic Ocean is affected by the Iceland plume, while the Arctic Ocean has some of the slowest spreading rates. Our specific goal is to constrain the density structure of the upper mantle in order to understand the links between deep lithosphere dynamics, ocean spreading, ocean floor bathymetry, heat flow and the structure of the oceanic lithosphere in regions where classical models of the evolution of the oceanic lithosphere may not be valid. The major focus is on the oceanic lithosphere, but the Arctic shelves with sufficient data coverage are also included in the analysis. Our major interest is the density structure of the upper mantle, and the analysis is based on the interpretation of GOCE satellite gravity data. To separate gravity anomalies caused by subcrustal anomalous masses, the gravitational effect of water, crust and the deep mantle is removed from the observed gravity field. For bathymetry we use the global NOAA database ETOPO1. The crustal correction to gravity is based on two crustal models: (1) the global model CRUST1.0 (Laske, 2013) and, for comparison, (2) the regional seismic model EUNAseis (Artemieva and Thybo, 2013). The crustal density structure required for the crustal correction is constrained from Vp data. Previous studies have shown that a large range of density values corresponds to any given Vp value. To overcome this problem and to reduce the uncertainty associated with the velocity-density conversion, we account for regional tectonic variations in the North Atlantic as constrained by numerous published seismic profiles and potential-field models across the Norwegian off-shore crust (e.g. Breivik et al., 2005, 2007), and apply different Vp-density conversions for different parts of the region. We present preliminary results, which we use to examine the factors that control variations in bathymetry, sedimentary thickness and crustal thickness in these anomalous oceanic domains.
Tests of the Grobner Basis Solution for Lightning Ground Flash Fraction Retrieval
NASA Technical Reports Server (NTRS)
Koshak, William; Solakiewicz, Richard; Attele, Rohan
2011-01-01
Satellite lightning imagers such as the NASA Tropical Rainfall Measuring Mission Lightning Imaging Sensor (TRMM/LIS) and the future GOES-R Geostationary Lightning Mapper (GLM) are designed to detect total lightning (ground flashes + cloud flashes). However, there is a desire to discriminate ground flashes from cloud flashes from the vantage point of space since this would enhance the overall information content of the satellite lightning data and likely improve its operational and scientific applications (e.g., in severe weather warning, lightning nitrogen oxides studies, and global electric circuit analyses). A Bayesian inversion method was previously introduced for retrieving the fraction of ground flashes in a set of flashes observed from a satellite lightning imager. The method employed a constrained mixed exponential distribution model to describe the lightning optical measurements. To obtain the optimum model parameters (one of which is the ground flash fraction), a scalar function was minimized by a numerical method. In order to improve this optimization, a Grobner basis solution was introduced to obtain analytic representations of the model parameters that serve as a refined initialization scheme to the numerical optimization. In this study, we test the efficacy of the Grobner basis initialization using actual lightning imager measurements and ground flash truth derived from the national lightning network.
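A hedged sketch of the numerical-optimization step only: a two-component exponential mixture (one population standing for ground flashes, one for cloud flashes) is fitted to simulated measurements by maximum likelihood, with a crude moment-based guess standing in for the analytic Grobner-basis initialization. The mixture parameters and the likelihood objective are illustrative, not the operational retrieval.

```python
import numpy as np
from scipy.optimize import minimize

# Fit a constrained two-component exponential mixture to simulated optical data.
rng = np.random.default_rng(5)
alpha_true, mu1_true, mu2_true = 0.3, 1.0, 5.0        # ground flash fraction and means (assumed)
n = 2000
is_ground = rng.random(n) < alpha_true
x = np.where(is_ground, rng.exponential(mu1_true, n), rng.exponential(mu2_true, n))

def neg_log_like(params):
    alpha, mu1, mu2 = params
    pdf = alpha / mu1 * np.exp(-x / mu1) + (1 - alpha) / mu2 * np.exp(-x / mu2)
    return -np.sum(np.log(pdf + 1e-300))

x0 = [0.5, 0.5 * x.mean(), 1.5 * x.mean()]             # crude moment-based initialization
res = minimize(neg_log_like, x0, method="L-BFGS-B",
               bounds=[(1e-3, 1 - 1e-3), (1e-3, None), (1e-3, None)])
print("estimated (alpha, mu1, mu2):", np.round(res.x, 3))
```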
Statistical fluctuations of an ocean surface inferred from shoes and ships
NASA Astrophysics Data System (ADS)
Lerche, Ian; Maubeuge, Frédéric
1995-12-01
This paper shows that it is possible to roughly estimate some ocean properties using simple time-dependent statistical models of ocean fluctuations. Based on a real incident, the loss overboard of a container of Nike shoes in the North Pacific Ocean, a statistical model was tested on data sets consisting of the shoes found by beachcombers a few months later. This statistical treatment of the shoes' motion allows one to infer velocity trends of the Pacific Ocean, together with their fluctuation strengths. The idea is to suppose that there is a mean bulk flow speed that can depend on location on the ocean surface and on time. The fluctuations of the surface flow speed are then treated as statistically random. The distribution of shoes is described in space and time using Markov probability processes related to the mean and fluctuating ocean properties. The aim of the exercise is to provide some of the properties of the Pacific Ocean that are otherwise calculated using a sophisticated numerical model, OSCURS, which requires numerous input data. The relevant quantities are estimated sharply enough to be useful for (1) constraining output results from OSCURS computations and (2) elucidating the behavior patterns of ocean flow characteristics on long time scales.
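The statistical picture can be sketched as a Markov random walk: each object is advected by an assumed mean surface flow and kicked by random velocity fluctuations, so the ensemble mean tracks the bulk flow while the spread grows diffusively with the fluctuation strength. The flow and fluctuation values below are illustrative, not the values inferred in the paper.

```python
import numpy as np

# Markov random-walk drifter ensemble: mean drift plus random velocity kicks.
rng = np.random.default_rng(4)
n_shoes, n_days = 5000, 240
dt = 86400.0                                # one day in seconds
u_mean = np.array([0.10, 0.02])             # mean eastward/northward flow, m/s (assumed)
sigma_u = 0.15                              # fluctuation strength, m/s (assumed)

pos = np.zeros((n_shoes, 2))                # all objects released at the spill point
for _ in range(n_days):
    fluct = sigma_u * rng.normal(size=(n_shoes, 2))
    pos += (u_mean + fluct) * dt            # Markov step: mean drift plus random kick

mean_km = pos.mean(axis=0) / 1e3
spread_km = pos.std(axis=0) / 1e3
print(f"mean displacement {mean_km} km, spread {spread_km} km after {n_days} days")
```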
The H I-to-H2 Transition in a Turbulent Medium
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bialy, Shmuel; Sternberg, Amiel; Burkhart, Blakesley, E-mail: shmuelbi@mail.tau.ac.il
2017-07-10
We study the effect of density fluctuations induced by turbulence on the H I/H2 structure in photodissociation regions (PDRs) both analytically and numerically. We perform magnetohydrodynamic numerical simulations for both subsonic and supersonic turbulent gas and chemical H I/H2 balance calculations. We derive atomic-to-molecular density profiles and the H I column density probability density function (PDF) assuming chemical equilibrium. We find that, while the H I/H2 density profiles are strongly perturbed in turbulent gas, the mean H I column density is well approximated by the uniform-density analytic formula of Sternberg et al. The PDF width depends on (a) the radiation intensity–to–mean density ratio, (b) the sonic Mach number, and (c) the turbulence decorrelation scale, or driving scale. We derive an analytic model for the H I PDF and demonstrate how our model, combined with 21 cm observations, can be used to constrain the Mach number and driving scale of turbulent gas. As an example, we apply our model to observations of H I in the Perseus molecular cloud. We show that a narrow observed H I PDF may imply small-scale decorrelation, pointing to the potential importance of subcloud-scale turbulence driving.
NASA Technical Reports Server (NTRS)
Koshak, William; Krider, E. Philip; Murray, Natalie; Boccippio, Dennis
2007-01-01
A "dimensional reduction" (DR) method is introduced for analyzing lightning field changes whereby the number of unknowns in a discrete two-charge model is reduced from the standard eight to just four. The four unknowns are found by performing a numerical minimization of a chi-squared goodness-of-fit function. At each step of the minimization, an Overdetermined Fixed Matrix (OFM) method is used to immediately retrieve the best "residual source". In this way, all 8 parameters are found, yet a numerical search of only 4 parameters is required. The inversion method is applied to the understanding of lightning charge retrievals. The accuracy of the DR method has been assessed by comparing retrievals with data provided by the Lightning Detection And Ranging (LDAR) instrument. Because lightning effectively deposits charge within thundercloud charge centers and because LDAR traces the geometrical development of the lightning channel with high precision, the LDAR data provides an ideal constraint for finding the best model charge solutions. In particular, LDAR data can be used to help determine both the horizontal and vertical positions of the model charges, thereby eliminating dipole ambiguities. The results of the LDAR-constrained charge retrieval method have been compared to the locations of optical pulses/flash locations detected by the Lightning Imaging Sensor (LIS).
Gil, H; Qualls, W A; Cosner, C; DeAngelis, D L; Hassan, A; Gad, A M; Ruan, S; Cantrell, S R; Beier, J C
2016-01-01
Rift-Valley Fever (RVF) is a zoonotic mosquito-borne disease in Africa and the Arabian Peninsula. Drivers for this disease vary by region and are not well understood for North African countries such as Egypt. A deeper understanding of RVF risk factors would inform disease management policies. The present study employs mathematical and computational modeling techniques to ascertain the extent to which the severity of RVF epizootics in Egypt differs depending on the interaction between imported ruminant and environmentally-constrained mosquito populations. An ordinary differential system of equations, a numerical model, and an individual-based model (IBM) were constructed to represent RVF disease dynamics between localized mosquitoes and ruminants being imported into Egypt for the Greater Bairam. Four cases, corresponding to the Greater Bairam's occurrence during distinct quarters of the solar year, were set up in both models to assess whether the different season-associated mosquito populations present during the Greater Bairam resulted in RVF epizootics of variable magnitudes. The numerical model and the IBM produced nearly identical results: ruminant and mosquito population plots for both models were similar in shape and magnitude for all four cases. In both models, all four cases differed in the severity of their corresponding simulated RVF epizootics. The four cases, ranked by the severity of the simulated RVF epizootics in descending order, correspond with the occurrence of the Greater Bairam on the following months: July, October, April, and January. The numerical model was assessed for sensitivity with respect to parameter values and exhibited a high degree of robustness. Limiting the importation of infected ruminants beginning one month prior to the Greater Bairam festival (on years in which the festival falls between the months of July and October: 2014-2022) might be a feasible way of mitigating future RVF epizootics in Egypt. Copyright © 2015 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
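A schematic of the coupled host-vector dynamics is sketched below: an SIR model for ruminants coupled to an SI model for mosquitoes whose abundance follows an assumed seasonal cycle, run for four different introduction dates in analogy with the four festival-timing cases. Parameter values and the seasonal forcing are illustrative and are not those of the published ODE, numerical or individual-based models.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Schematic SIR/SI vector-borne transmission model with seasonal vector abundance.
beta_vh, beta_hv, gamma, mu_v = 0.25, 0.25, 1 / 7.0, 1 / 14.0   # per-day rates (assumed)

def mosquito_abundance(t):            # relative seasonal abundance, peaking mid-year (assumed)
    return 1.0 + 0.8 * np.sin(2 * np.pi * (t - 120) / 365.0)

def rhs(t, y):
    S, I, R, Iv = y                   # susceptible/infectious/recovered ruminants, infectious vectors
    Nv = 5.0 * mosquito_abundance(t)  # vector-to-host ratio (assumed)
    new_h = beta_vh * Iv * S          # vector-to-host transmission
    new_v = beta_hv * I * (Nv - Iv)   # host-to-vector transmission
    return [-new_h, new_h - gamma * I, gamma * I, new_v - mu_v * Iv]

y0 = [0.99, 0.01, 0.0, 0.0]           # 1% infectious ruminants introduced at the start
for start_day in (0, 90, 180, 270):   # four introduction timings, as in the four cases
    sol = solve_ivp(rhs, (start_day, start_day + 120), y0, max_step=1.0)
    print(f"start day {start_day}: final recovered (epizootic size) {sol.y[2, -1]:.2f}")
```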
Constraining screened fifth forces with the electron magnetic moment
NASA Astrophysics Data System (ADS)
Brax, Philippe; Davis, Anne-Christine; Elder, Benjamin; Wong, Leong Khim
2018-04-01
Chameleon and symmetron theories serve as archetypal models for how light scalar fields can couple to matter with gravitational strength or greater, yet evade the stringent constraints from classical tests of gravity on Earth and in the Solar System. They do so by employing screening mechanisms that dynamically alter the scalar's properties based on the local environment. Nevertheless, these do not hide the scalar completely, as screening leads to a distinct phenomenology that can be well constrained by looking for specific signatures. In this work, we investigate how a precision measurement of the electron magnetic moment places meaningful constraints on both chameleons and symmetrons. Two effects are identified: First, virtual chameleons and symmetrons run in loops to generate quantum corrections to the intrinsic value of the magnetic moment, a common process widely considered in the literature for many scenarios beyond the Standard Model. A second effect, however, is unique to scalar fields that exhibit screening. A scalar bubblelike profile forms inside the experimental vacuum chamber and exerts a fifth force on the electron, leading to a systematic shift in the experimental measurement. In quantifying this latter effect, we present a novel approach that combines analytic arguments and a small number of numerical simulations to solve for the bubblelike profile quickly for a large range of model parameters. Taken together, both effects yield interesting constraints in complementary regions of parameter space. While the constraints we obtain for the chameleon are largely uncompetitive with those in the existing literature, this still represents the tightest constraint achievable yet from an experiment not originally designed to search for fifth forces. We break more ground with the symmetron, for which our results exclude a large and previously unexplored region of parameter space. Central to this achievement are the quantum correction terms, which are able to constrain symmetrons with masses in the range μ ∈ [10^-3.88, 10^8] eV, whereas other experiments have hitherto only been sensitive to 1 or 2 orders of magnitude at a time.
NASA Astrophysics Data System (ADS)
Jayne, R., Jr.; Pollyea, R.
2016-12-01
Carbon capture and sequestration (CCS) in geologic reservoirs is one strategy for reducing anthropogenic CO2 emissions from large-scale point-source emitters. Recent developments at the CarbFix CCS pilot in Iceland have shown that basalt reservoirs are highly effective for permanent mineral trapping on the basis of CO2-water-rock interactions, which result in the formation of carbonates minerals. In order to advance our understanding of basalt sequestration in large igneous provinces, this research uses numerical simulation to evaluate the feasibility of industrial-scale CO2 injections in the Columbia River Basalt Group (CRBG). Although bulk reservoir properties are well constrained on the basis of field and laboratory testing from the Wallula Basalt Sequestration Pilot Project, there remains significant uncertainty in the spatial distribution of permeability at the scale of individual basalt flows. Geostatistical analysis of hydrologic data from 540 wells illustrates that CRBG reservoirs are reasonably modeled as layered heterogeneous systems on the basis of basalt flow morphology; however, the regional dataset is insufficient to constrain permeability variability at the scale of an individual basalt flow. As a result, permeability distribution for this modeling study is established by centering the lognormal permeability distribution in the regional dataset over the bulk permeability measured at Wallula site, which results in a spatially random permeability distribution within the target reservoir. In order to quantify the effects of this permeability uncertainty, CO2 injections are simulated within 50 equally probable synthetic reservoir domains. Each model domain comprises three-dimensional geometry with 530,000 grid blocks, and fracture-matrix interaction is simulated as interacting continua for the two low permeability layers (flow interiors) bounding the injection zone. Results from this research illustrate that permeability uncertainty at the scale of individual basalt flows may significantly impact both injection pressure accumulation and CO2 distribution.
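The construction of equally probable heterogeneous reservoir realizations can be sketched as below: within the injection layer, log-permeability is drawn from a lognormal distribution centered on an assumed site-scale bulk value, while the bounding flow interiors are held at low permeability. Grid dimensions, layer indices and permeability statistics are illustrative stand-ins, not the CRBG or Wallula values.

```python
import numpy as np

# Build layered-heterogeneous permeability realizations with a lognormal injection zone.
nx, ny, nz = 50, 50, 30
k_bulk_injection = 1e-13          # m^2, assumed bulk permeability of the injection zone
sigma_log10k = 0.8                # spread of log10(k), assumed from regional statistics
k_interior = 1e-18                # m^2, assumed low-permeability flow-interior value

def realization(seed):
    r = np.random.default_rng(seed)
    k = np.full((nx, ny, nz), k_interior)
    injection_layer = slice(12, 18)                       # vertical extent of the injection zone
    log10k = np.log10(k_bulk_injection) + sigma_log10k * r.normal(size=(nx, ny, 6))
    k[:, :, injection_layer] = 10.0 ** log10k             # spatially random lognormal permeability
    return k

fields = [realization(s) for s in range(50)]              # 50 equally probable domains
effective = [np.exp(np.mean(np.log(f[:, :, 12:18]))) for f in fields]  # geometric mean per realization
print(f"geometric-mean k across realizations: {np.mean(effective):.2e} m^2")
```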
NASA Astrophysics Data System (ADS)
Malanotte-Rizzoli, Paola; Young, Roberta E.
1995-12-01
The primary objective of this paper is to assess the relative effectiveness of data sets with different space coverage and time resolution when they are assimilated into an ocean circulation model. We focus on obtaining realistic numerical simulations of the Gulf Stream system typically of the order of 3-month duration by constructing a "synthetic" ocean simultaneously consistent with the model dynamics and the observations. The model used is the Semispectral Primitive Equation Model. The data sets are the "global" Optimal Thermal Interpolation Scheme (OTIS) 3 of the Fleet Numerical Oceanography Center providing temperature and salinity fields with global coverage and with bi-weekly frequency, and the localized measurements, mostly of current velocities, from the central and eastern array moorings of the Synoptic Ocean Prediction (SYNOP) program, with daily frequency but with a very small spatial coverage. We use a suboptimal assimilation technique ("nudging"). Even though this technique has already been used in idealized data assimilation studies, to our knowledge this is the first study in which the effectiveness of nudging is tested by assimilating real observations of the interior temperature and salinity fields. This is also the first work in which a systematic assimilation is carried out of the localized, high-quality SYNOP data sets in numerical experiments longer than 1-2 weeks, that is, not aimed to forecasting. We assimilate (1) the global OTIS 3 alone, (2) the local SYNOP observations alone, and (3) both OTIS 3 and SYNOP observations. We assess the success of the assimilations with quantitative measures of performance, both on the global and local scale. The results can be summarized as follows. The intermittent assimilation of the global OTIS 3 is necessary to keep the model "on track" over 3-month simulations on the global scale. As OTIS 3 is assimilated at every model grid point, a "gentle" weight must be prescribed to it so as not to overconstrain the model. However, in these assimilations the predicted velocity fields over the SYNOP arrays are greatly in error. The continuous assimilation of the localized SYNOP data sets with a strong weight is necessary to obtain local realistic evolutions. Then assimilation of velocity measurements alone recovers the density structure over the array area. However, the spatial coverage of the SYNOP measurements is too small to constrain the model on the global scale. Thus the blending of both types of datasets is necessary in the assimilation as they constrain different time and space scales. Our choice of "gentle" nudging weight for the global OTIS 3 and "strong" weight for the local SYNOP data provides for realistic simulations of the Gulf Stream system, both globally and locally, on the 3- to 4-month-long timescale, the one governed by the Gulf Stream jet internal dynamics.
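The nudging (Newtonian relaxation) idea itself can be shown on a toy system: below, a Lorenz-63 "model" is continuously relaxed toward noisy observations of a "true" trajectory with either a gentle or a strong weight, mirroring the weighting choice discussed above. This is a schematic of the technique, not of the Semispectral Primitive Equation Model or of the OTIS/SYNOP data streams.

```python
import numpy as np

# Nudging: the model tendency is augmented with a relaxation term toward observations.
def lorenz(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    return np.array([sigma * (x[1] - x[0]),
                     x[0] * (rho - x[2]) - x[1],
                     x[0] * x[1] - beta * x[2]])

rng = np.random.default_rng(7)
dt, nsteps = 0.01, 4000
truth0 = np.array([1.0, 1.0, 1.0])
for weight in (0.05, 2.0):                      # gentle vs strong nudging coefficient
    model, t_state, err = np.array([5.0, -5.0, 20.0]), truth0.copy(), []
    for _ in range(nsteps):
        t_state = t_state + dt * lorenz(t_state)                        # "true" system
        obs = t_state + rng.normal(scale=0.5, size=3)                   # noisy observations
        model = model + dt * (lorenz(model) + weight * (obs - model))   # nudged model
        err.append(np.linalg.norm(model - t_state))
    print(f"nudging weight {weight}: mean state error {np.mean(err[2000:]):.2f}")
```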
"Green's function" approach & low-mode asymmetries
NASA Astrophysics Data System (ADS)
Masse, Laurent; Clark, Dan; Salmonson, Jay; MacLaren, Steve; Ma, Tammy; Khan, Shahab; Pino, Jesse; Ralph, Jo; Czajka, C.; Tipton, Robert; Landen, Otto; Kyrala, Georges; 2 Team; 1 Team
2017-10-01
Long wavelength, low mode asymmetries are believed to play a leading role in limiting the performance of current ICF implosions on NIF. These long wavelength modes are initiated and driven by asymmetries in the x-ray flux from the hohlraum; however, the underlying hydrodynamics of the implosion also act to amplify these asymmetries. The work presented here aims to deepen our understanding of the interplay of the drive asymmetries and the underlying implosion hydrodynamics in determining the final imploded configuration. This is accomplished through a synthesis of numerical modeling, analytic theory, and experimental data. In detail, we use a Green's function approach to connect the drive asymmetry seen by the capsule to the measured in-flight and hot spot symmetries. The approach has been validated against a suite of numerical simulations. Ultimately, we hope this work will identify additional measurements to further constrain the asymmetries and increase hohlraum illumination design flexibility on the NIF. The technique and the derivation of the associated error bars will be presented. Work performed under Lawrence Livermore National Security, LLC (LLNS) Contract No. DE-AC52-07NA27344.
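The linear-response idea behind the Green's function approach can be sketched mode by mode: for small perturbations, each Legendre mode of the measured asymmetry is treated as a growth factor times the corresponding mode of the drive asymmetry, so a measured shape can be inverted for the drive. The growth factors below are invented for illustration; in practice they would be tabulated from a validation suite of simulations.

```python
import numpy as np

# Mode-by-mode linear transfer: a_out(l) = G(l) * a_in(l), and its trivial inversion.
modes = np.array([1, 2, 4])                   # Legendre modes of interest
G = np.array([3.0, 5.0, 2.0])                 # assumed mode-by-mode growth factors

drive_asym = np.array([0.002, -0.005, 0.001]) # fractional drive-flux asymmetry per mode (assumed)
hotspot_asym = G * drive_asym                 # forward: predicted hot-spot asymmetry

inferred_drive = hotspot_asym / G             # inverse: recover the drive asymmetry from the shape
print("hot-spot P1/P2/P4:", hotspot_asym)
print("recovered drive  :", inferred_drive)
```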
NASA Astrophysics Data System (ADS)
Derigs, Dominik; Winters, Andrew R.; Gassner, Gregor J.; Walch, Stefanie; Bohm, Marvin
2018-07-01
The paper presents two contributions in the context of the numerical simulation of magnetized fluid dynamics. First, we show how to extend the ideal magnetohydrodynamics (MHD) equations with an inbuilt magnetic field divergence cleaning mechanism in such a way that the resulting model is consistent with the second law of thermodynamics. As a byproduct of these derivations, we show that not all of the commonly used divergence cleaning extensions of the ideal MHD equations are thermodynamically consistent. Secondly, we present a numerical scheme obtained by constructing a specific finite volume discretization that is consistent with the discrete thermodynamic entropy. It includes a mechanism to control the discrete divergence error of the magnetic field by construction and is Galilean invariant. We implement the new high-order MHD solver in the adaptive mesh refinement code FLASH where we compare the divergence cleaning efficiency to the constrained transport solver available in FLASH (unsplit staggered mesh scheme).
NASA Astrophysics Data System (ADS)
Trinchero, Paolo; Puigdomenech, Ignasi; Molinero, Jorge; Ebrahimi, Hedieh; Gylling, Björn; Svensson, Urban; Bosbach, Dirk; Deissmann, Guido
2017-05-01
We present an enhanced continuum-based approach for the modelling of groundwater flow coupled with reactive transport in crystalline fractured rocks. In the proposed formulation, flow, transport and geochemical parameters are represented on a numerical grid using Discrete Fracture Network (DFN) derived parameters. The geochemical reactions are further constrained by field observations of mineral distribution. To illustrate how the approach can be used to include physical and geochemical complexities in reactive transport calculations, we have analysed the potential ingress of oxygenated glacial meltwater into a heterogeneous fractured rock, using the Forsmark site (Sweden) as an example. The results of high-performance reactive transport calculations show that, after a rapid initial oxygen penetration, steady-state conditions are attained in which abiotic reactions (i.e. the dissolution of chlorite and the homogeneous oxidation of aqueous iron(II) ions) counterbalance advective oxygen fluxes. The results also show that most of the chlorite becomes depleted in the highly conductive deformation zones, where higher mineral surface areas are available for reaction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amano, Takanobu, E-mail: amano@eps.s.u-tokyo.ac.jp
A new multidimensional simulation code for relativistic two-fluid electrodynamics (RTFED) is described. The basic equations consist of the full set of Maxwell's equations coupled with relativistic hydrodynamic equations for two separate charged fluids, representing the dynamics of either an electron–positron or an electron–proton plasma. It can be recognized as an extension of conventional relativistic magnetohydrodynamics (RMHD). Finite resistivity may be introduced as a friction between the two species, which reduces to resistive RMHD in the long wavelength limit without suffering from a singularity at infinite conductivity. A numerical scheme based on the HLL (Harten–Lax–van Leer) Riemann solver is proposed that exactly preserves the two divergence constraints for Maxwell's equations simultaneously. Several benchmark problems demonstrate that it is capable of describing RMHD shocks/discontinuities in the long wavelength limit, as well as the dispersive characteristics due to the two-fluid effect appearing at small scales. This shows that the RTFED model is a promising tool for high energy astrophysics applications.
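For context, the single-state HLL flux referenced above combines the left and right interface states and fluxes with bounds on the signal speeds. A minimal generic sketch of that flux function (not the divergence-preserving RTFED implementation itself):

```python
import numpy as np

def hll_flux(u_l, u_r, f_l, f_r, s_l, s_r):
    """Harten-Lax-van Leer (HLL) numerical flux at one cell interface.

    u_l, u_r : vectors of conserved variables left/right of the interface
    f_l, f_r : physical fluxes evaluated from those states
    s_l, s_r : estimates of the slowest/fastest signal speeds (s_l <= s_r)
    """
    if s_l >= 0.0:
        return f_l                      # all waves move to the right
    if s_r <= 0.0:
        return f_r                      # all waves move to the left
    # Intermediate case: single averaged state between the two bounding waves.
    return (s_r * f_l - s_l * f_r + s_l * s_r * (u_r - u_l)) / (s_r - s_l)

# Toy scalar advection example (placeholder states and speeds).
u_left, u_right = np.array([1.0]), np.array([0.2])
flux = hll_flux(u_left, u_right, 2.0 * u_left, 2.0 * u_right, s_l=-1.0, s_r=2.5)
```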
NASA Astrophysics Data System (ADS)
Algarray, A. F. A.; Jun, H.; Mahdi, I.-E. M.
2017-11-01
The effects of the end conditions of cross-ply laminated composite beams on their dimensionless natural frequencies of free vibration are investigated. The problem is analyzed and solved using the energy approach, which is formulated through a finite element model. Various end conditions of the beams are considered; each beam has either movable or immovable ends. Numerical results are verified by comparison with other relevant works. It is found that more constrained beams have higher natural frequencies of transverse vibration. The natural frequencies of the longitudinal modes are found to be the same for all beams with movable ends because these modes are generated by longitudinal movements only.
Accurate Modeling of Galaxy Clustering on Small Scales: Testing the Standard ΛCDM + Halo Model
NASA Astrophysics Data System (ADS)
Sinha, Manodeep; Berlind, Andreas A.; McBride, Cameron; Scoccimarro, Roman
2015-01-01
The large-scale distribution of galaxies can be explained fairly simply by assuming (i) a cosmological model, which determines the dark matter halo distribution, and (ii) a simple connection between galaxies and the halos they inhabit. This conceptually simple framework, called the halo model, has been remarkably successful at reproducing the clustering of galaxies on all scales, as observed in various galaxy redshift surveys. However, none of these previous studies have carefully modeled the systematics and thus truly tested the halo model in a statistically rigorous sense. We present a new accurate and fully numerical halo model framework and test it against clustering measurements from two luminosity samples of galaxies drawn from the SDSS DR7. We show that the simple ΛCDM cosmology + halo model is not able to simultaneously reproduce the galaxy projected correlation function and the group multiplicity function. In particular, the more luminous sample shows significant tension with theory. We discuss the implications of our findings and how this work paves the way for constraining galaxy formation by accurate simultaneous modeling of multiple galaxy clustering statistics.
Computing Generalized Matrix Inverse on Spiking Neural Substrate.
Shukla, Rohit; Khoram, Soroosh; Jorgensen, Erik; Li, Jing; Lipasti, Mikko; Wright, Stephen
2018-01-01
Emerging neural hardware substrates, such as IBM's TrueNorth Neurosynaptic System, can provide an appealing platform for deploying numerical algorithms. For example, a recurrent Hopfield neural network can be used to find the Moore-Penrose generalized inverse of a matrix, thus enabling a broad class of linear optimizations to be solved efficiently, at low energy cost. However, deploying numerical algorithms on hardware platforms that severely limit the range and precision of representation for numeric quantities can be quite challenging. This paper discusses these challenges and proposes a rigorous mathematical framework for reasoning about range and precision on such substrates. The paper derives techniques for normalizing inputs and properly quantizing synaptic weights originating from arbitrary systems of linear equations, so that solvers for those systems can be implemented in a provably correct manner on hardware-constrained neural substrates. The analytical model is empirically validated on the IBM TrueNorth platform, and results show that the guarantees provided by the framework for range and precision hold under experimental conditions. Experiments with optical flow demonstrate the energy benefits of deploying a reduced-precision and energy-efficient generalized matrix inverse engine on the IBM TrueNorth platform, reflecting 10× to 100× improvement over FPGA and ARM core baselines.
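The range and precision issue described above can be made concrete with a toy experiment: normalize a coefficient matrix, quantize it to a small number of signed fixed-point levels, and compare the resulting Moore-Penrose solution against the full-precision one. This sketch uses plain NumPy and an assumed b-bit quantizer; it is not the TrueNorth toolchain or the paper's normalization scheme:

```python
import numpy as np

def quantize_fixed_point(w, bits):
    """Map weights to signed fixed-point values with 2**(bits-1)-1 positive
    levels after normalizing by the largest magnitude, then map back."""
    levels = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / levels
    return np.round(w / scale) * scale

rng = np.random.default_rng(0)
A = rng.normal(size=(8, 5))                 # arbitrary linear system
b = rng.normal(size=8)

x_full = np.linalg.pinv(A) @ b              # full-precision generalized inverse
x_quant = np.linalg.pinv(quantize_fixed_point(A, bits=4)) @ b
rel_err = np.linalg.norm(x_quant - x_full) / np.linalg.norm(x_full)
```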
NASA Astrophysics Data System (ADS)
Pohle, Ina; Niebisch, Michael; Müller, Hannes; Schümberg, Sabine; Zha, Tingting; Maurer, Thomas; Hinz, Christoph
2018-07-01
To simulate the impacts of within-storm rainfall variability on fast hydrological processes, long precipitation time series with high temporal resolution are required. Due to the limited availability of observed data, such time series are typically obtained from stochastic models. However, most existing rainfall models are limited in their ability to conserve the rainfall event statistics which are relevant for hydrological processes. Poisson rectangular pulse models are widely applied to generate long time series of alternating precipitation event durations and mean intensities as well as interstorm period durations. Multiplicative microcanonical random cascade (MRC) models are used to disaggregate precipitation time series from coarse to fine temporal resolution. To overcome the inconsistencies between the temporal structure of the Poisson rectangular pulse model and the MRC model, we developed a new coupling approach by introducing two modifications to the MRC model. These modifications comprise (a) a modified cascade model ("constrained cascade") which preserves the event durations generated by the Poisson rectangular pulse model by constraining the first and last intervals of a precipitation event to contain precipitation and (b) continuous sigmoid functions of the multiplicative weights to account for the scale dependency in the disaggregation of precipitation events of different durations. The constrained cascade model was evaluated for its ability to disaggregate observed precipitation events in comparison with existing MRC models, using a 20-year record of hourly precipitation at six stations across Germany. The constrained cascade model showed markedly better agreement with the observed data in terms of both the temporal pattern of the precipitation time series (e.g. the dry and wet spell durations and autocorrelations) and event characteristics (e.g. intra-event intermittency and intensity fluctuations within events). The constrained cascade model also slightly outperformed the other MRC models with respect to the intensity-frequency relationship. To assess the performance of the coupled Poisson rectangular pulse and constrained cascade model, precipitation events were stochastically generated by the Poisson rectangular pulse model and then disaggregated by the constrained cascade model. We found that the coupled model performs satisfactorily in terms of the temporal pattern of the precipitation time series, the event characteristics and the intensity-frequency relationship.
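A bare-bones version of the constrained microcanonical cascade can clarify modification (a): each split conserves the event depth exactly, and any split that would dry out the first or last interval of the event is redrawn, so the event duration handed over by the pulse model is preserved. The weight law and dry-split probability below are illustrative placeholders, not the fitted cascade generators of the study (and the sigmoid scale dependency of modification (b) is omitted):

```python
import numpy as np

def constrained_cascade(event_depth, levels, rng, p_dry=0.3):
    """Disaggregate one precipitation event depth over 2**levels intervals
    with a microcanonical cascade (each split conserves mass exactly).

    A split assigns a fraction w of the mass to the left half and 1-w to the
    right; with probability p_dry all mass goes to one side.  The
    'constrained' rule redraws any split that would leave the first or last
    interval of the event dry, so the event duration is preserved.
    """
    amounts = np.array([event_depth])
    for _ in range(levels):
        new = np.empty(2 * amounts.size)
        for i, a in enumerate(amounts):
            while True:
                if rng.random() < p_dry:
                    w = rng.choice([0.0, 1.0])
                else:
                    w = rng.beta(2.0, 2.0)       # illustrative weight law
                left, right = w * a, (1.0 - w) * a
                dries_first = (i == 0) and left == 0.0 and a > 0.0
                dries_last = (i == amounts.size - 1) and right == 0.0 and a > 0.0
                if not (dries_first or dries_last):
                    break                        # accept the split
            new[2 * i], new[2 * i + 1] = left, right
        amounts = new
    return amounts

rng = np.random.default_rng(1)
sub_hourly = constrained_cascade(event_depth=12.0, levels=3, rng=rng)  # 8 intervals
```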
NASA Astrophysics Data System (ADS)
Cronkite-Ratcliff, C.; Phelps, G. A.; Boucher, A.
2011-12-01
In many geologic settings, the pathways of groundwater flow are controlled by geologic heterogeneities which have complex geometries. Models of these geologic heterogeneities, and consequently, their effects on the simulated pathways of groundwater flow, are characterized by uncertainty. Multiple-point geostatistics, which uses a training image to represent complex geometric descriptions of geologic heterogeneity, provides a stochastic approach to the analysis of geologic uncertainty. Incorporating multiple-point geostatistics into numerical models provides a way to extend this analysis to the effects of geologic uncertainty on the results of flow simulations. We present two case studies to demonstrate the application of multiple-point geostatistics to numerical flow simulation in complex geologic settings with both static and dynamic conditioning data. Both cases involve the development of a training image from a complex geometric description of the geologic environment. Geologic heterogeneity is modeled stochastically by generating multiple equally-probable realizations, all consistent with the training image. Numerical flow simulation for each stochastic realization provides the basis for analyzing the effects of geologic uncertainty on simulated hydraulic response. The first case study is a hypothetical geologic scenario developed using data from the alluvial deposits in Yucca Flat, Nevada. The SNESIM algorithm is used to stochastically model geologic heterogeneity conditioned to the mapped surface geology as well as vertical drill-hole data. Numerical simulation of groundwater flow and contaminant transport through geologic models produces a distribution of hydraulic responses and contaminant concentration results. From this distribution of results, the probability of exceeding a given contaminant concentration threshold can be used as an indicator of uncertainty about the location of the contaminant plume boundary. The second case study considers a characteristic lava-flow aquifer system in Pahute Mesa, Nevada. A 3D training image is developed by using object-based simulation of parametric shapes to represent the key morphologic features of rhyolite lava flows embedded within ash-flow tuffs. In addition to vertical drill-hole data, transient pressure head data from aquifer tests can be used to constrain the stochastic model outcomes. The use of both static and dynamic conditioning data allows the identification of potential geologic structures that control hydraulic response. These case studies demonstrate the flexibility of the multiple-point geostatistics approach for considering multiple types of data and for developing sophisticated models of geologic heterogeneities that can be incorporated into numerical flow simulations.
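The exceedance-probability indicator mentioned for the first case study reduces, in its simplest form, to counting how many equally probable realizations exceed the threshold at each grid cell. A minimal sketch with synthetic concentration fields (illustrative only, not the Yucca Flat transport results):

```python
import numpy as np

def exceedance_probability(concentration_stack, threshold):
    """Fraction of equally probable realizations in which the simulated
    concentration exceeds a threshold, evaluated cell by cell.

    concentration_stack : array of shape (n_realizations, ny, nx)
    threshold           : screening or regulatory concentration level
    """
    return np.mean(concentration_stack > threshold, axis=0)

# Illustrative ensemble of 50 stochastic transport results on a 100 x 80 grid.
rng = np.random.default_rng(7)
ensemble = rng.lognormal(mean=-2.0, sigma=1.0, size=(50, 100, 80))
p_exceed = exceedance_probability(ensemble, threshold=0.5)
```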
Optically inspired biomechanical model of the human eyeball.
Sródka, Wieslaw; Iskander, D Robert
2008-01-01
Currently available biomechanical models of the human eyeball focus mainly on the geometries and material properties of its components while little attention has been given to its optics--the eye's primary function. We postulate that in the evolution process, the mechanical structure of the eyeball has been influenced by its optical functions. We develop a numerical finite element analysis-based model in which the eyeball geometry and its material properties are linked to the optical functions of the eye. This is achieved by controlling in the model all essential optical functions while still choosing material properties from a range of clinically available data. In particular, it is assumed that in a certain range of intraocular pressures, the eye is able to maintain focus. This so-called property of optical self-adjustments provides a more constrained set of numerical solutions in which the number of free model parameters significantly decreases, leading to models that are more robust. Further, we investigate two specific cases of a model that satisfies optical self-adjustment: (1) a full model in which the cornea is flexibly attached to sclera at the limbus, and (2) a fixed cornea model in which the cornea is not allowed to move at the limbus. We conclude that for a biomechanical model of the eyeball to mimic the optical function of a real eye, it is crucial that the cornea is allowed to move at the limbal junction, that the materials used for the cornea and sclera are strongly nonlinear, and that their moduli of elasticity remain in a very close relationship.
NASA Technical Reports Server (NTRS)
Egbert, Gary D.
2001-01-01
A numerical ocean tide model has been developed and tested using highly accurate TOPEX/Poseidon (T/P) tidal solutions. The hydrodynamic model is based on time stepping a finite difference approximation to the non-linear shallow water equations. Two novel features of our implementation are a rigorous treatment of self-attraction and loading (SAL), and a physically based parameterization for internal tide (IT) radiation drag. The model was run for a range of grid resolutions, and with variations in model parameters and bathymetry. For a rational treatment of SAL and IT drag, the model run at high resolution (1/12 degree) fits the T/P solutions to within 5 cm RMS in the open ocean. Both the rigorous SAL treatment and the IT drag parameterization are required to obtain solutions of this quality. The sensitivity of the solution to perturbations in bathymetry suggests that the fit to T/P is probably now limited by errors in this critical input. Since the model is not constrained by any data, we can test the effect of lowering sea level to match estimated bathymetry from the last glacial maximum (LGM). Our results suggest that the 100 m drop in sea level at the LGM would have significantly increased tidal amplitudes in the North Atlantic, and increased overall tidal dissipation by about 40%. However, details in tidal solutions for the past 20 ka are sensitive to the assumed stratification. IT drag accounts for a significant fraction of dissipation, especially in the LGM when large areas of present-day shallow sea were exposed, and this parameter is poorly constrained at present.
NASA Astrophysics Data System (ADS)
Root, Bart; Tarasov, Lev; van der Wal, Wouter
2014-05-01
The global ice budget is still under discussion because the observed 120-130 m eustatic sea level equivalent since the Last Glacial Maximum (LGM) cannot be explained by the current knowledge of land-ice melt after the LGM. One possible location for the missing ice is the Barents Sea Region, which was completely covered with ice during the LGM. This is deduced from relative sea level observations on Svalbard, Novaya Zemlya and the north coast of Scandinavia. However, there are no observations in the middle of the Barents Sea that capture the post-glacial uplift. With increased precision and longer time series of monthly gravity observations from the GRACE satellite mission, it is possible to constrain Glacial Isostatic Adjustment in the center of the Barents Sea. This study investigates the extra constraint provided by GRACE data for modeling the past ice geometry in the Barents Sea. We use CSR release 5 data from February 2003 to July 2013. The GRACE data are corrected for the past 10 years of secular decline of glacier ice on Svalbard, Novaya Zemlya and Franz Josef Land. With numerical GIA models for a radially symmetric Earth, we model the expected gravity changes and compare these with the GRACE observations after smoothing with a 250 km Gaussian filter. The comparisons show that, for the viscosity profile VM5a, ICE-5G has too strong a gravity signal compared to GRACE. The regionally calibrated ice sheet model (GLAC) of Tarasov appears to fit the amplitude of the GRACE signal. However, the GRACE data are very sensitive to the ice-melt correction, especially for Novaya Zemlya. Furthermore, the ice mass should be more concentrated toward the middle of the Barents Sea. Alternative viscosity models confirm these conclusions.
DARK MATTER SUBHALOS AND THE X-RAY MORPHOLOGY OF THE COMA CLUSTER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrade-Santos, Felipe; Nulsen, Paul E. J.; Kraft, Ralph P.
2013-04-01
Structure formation models predict that clusters of galaxies contain numerous massive subhalos. The gravity of a subhalo in a cluster compresses the surrounding intracluster gas and enhances its X-ray emission. We present a simple model, which treats subhalos as slow moving and gasless, for computing this effect. Recent weak lensing measurements by Okabe et al. have determined masses of ~10^13 M_Sun for three mass concentrations projected within 300 kpc of the center of the Coma Cluster, two of which are centered on the giant elliptical galaxies NGC 4889 and NGC 4874. Adopting a smooth spheroidal β-model for the gas distribution in the unperturbed cluster, we model the effect of these subhalos on the X-ray morphology of the Coma Cluster, comparing our results to Chandra and XMM-Newton X-ray data. The agreement between the models and the X-ray morphology of the central Coma Cluster is striking. With subhalo parameters from the lensing measurements, the distances of the three subhalos from the Coma Cluster midplane along our line of sight are all tightly constrained. Using the model to fit the subhalo masses for NGC 4889 and NGC 4874 gives 9.1 × 10^12 M_Sun and 7.6 × 10^12 M_Sun, respectively, in good agreement with the lensing masses. These results lend strong support to the argument that NGC 4889 and NGC 4874 are each associated with a subhalo that resides near the center of the Coma Cluster. In addition to constraining the masses and 3D locations of subhalos, the X-ray data show promise as a means of probing the structure of central subhalos.
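For reference, the spheroidal β-model named above has the standard form n(r) = n0 [1 + (r/r_c)^2]^(-3β/2) for the gas density, with projected X-ray surface brightness S(R) ∝ [1 + (R/r_c)^2]^(1/2 - 3β). A small sketch of both profiles; the parameter values are placeholders, not the fitted Coma values:

```python
import numpy as np

def beta_model_density(r, n0, r_c, beta):
    """Spheroidal beta-model gas density n(r) = n0 * (1 + (r/r_c)^2)^(-3*beta/2)."""
    return n0 * (1.0 + (r / r_c) ** 2) ** (-1.5 * beta)

def beta_model_surface_brightness(big_r, s0, r_c, beta):
    """Projected X-ray surface brightness of the beta-model,
    S(R) = S0 * (1 + (R/r_c)^2)^(0.5 - 3*beta)."""
    return s0 * (1.0 + (big_r / r_c) ** 2) ** (0.5 - 3.0 * beta)

# Illustrative, Coma-like placeholder parameters.
r = np.linspace(0.0, 1000.0, 200)                               # kpc
n = beta_model_density(r, n0=3.0e-3, r_c=300.0, beta=0.75)       # cm^-3
s = beta_model_surface_brightness(r, s0=1.0, r_c=300.0, beta=0.75)
```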
Understanding lithospheric stresses in Arctic: constraints and models
NASA Astrophysics Data System (ADS)
Medvedev, Sergei; Minakov, Alexander; Lebedeva-Ivanova, Nina; Gaina, Carmen
2016-04-01
This pilot project aims to model stress patterns and analyze the factors controlling lithospheric stresses in the Arctic. The project aims to understand the modern stresses in the Arctic as well as to define ways to test recent hypotheses about the Cenozoic evolution of the region. The regions around the Lomonosov Ridge and the Barents Sea are of particular interest, motivated by the recent acquisition of high-resolution potential field and seismic data. Naturally, the major contributor to the lithospheric stress distribution is the gravitational potential energy (GPE). The study incorporates available geological and geophysical data to build a reliable GPE model. In particular, we use the recently developed integrated gravity inversion for crustal thickness, which incorporates up-to-date compilations of gravity anomalies, bathymetry, and sedimentary thickness. The modelled lithosphere thermal structure assumes pure shear extension and an ocean age model constrained by global plate kinematics for the last ca. 120 Ma. The results of this approach are juxtaposed with estimates of the density variation inferred from upper mantle S-wave velocity models based on previous surface wave tomography studies. Although new data and interpretations of the Arctic lithosphere structure are now becoming available, there are areas of low accuracy or even a lack of data. To compensate for this, we compare two approaches to constrain the GPE: (1) one that directly integrates the density of the modelled lithosphere and (2) one that uses geoid anomalies filtered to account for density variations down to the base of the lithosphere only. The two versions of the GPE are compared to each other, and the stresses calculated numerically are compared with observations. This allows us to optimize the GPE and understand the density structure, stress pattern, and factors controlling the stresses in the Arctic.
Failure in lithium-ion batteries under transverse indentation loading
NASA Astrophysics Data System (ADS)
Chung, Seung Hyun; Tancogne-Dejean, Thomas; Zhu, Juner; Luo, Hailing; Wierzbicki, Tomasz
2018-06-01
Deformation and failure of constrained cells and modules under transverse loading is one of the most common scenarios for battery packs subjected to mechanical impact. A combined experimental, numerical and analytical approach was undertaken to reveal the underlying mechanism and develop a new cell failure model. When large-format pouch cells were subjected to local indentation all the way to failure, post-mortem examination of the failure zones beneath the punches revealed a consistent slant angle of the fracture surface relative to the battery plane. This type of behavior can be described by the critical fracture plane theory, in which fracture is caused by the shear stress modified by the normal stress. The Mohr-Coulomb fracture criterion is then postulated, and it is shown how the two material constants can be determined from just one indentation test. The orientation of the fracture plane is invariant with respect to the type of loading and can be considered a property of the cell stack. In addition, closed-form solutions are derived for the load-displacement relation for both the plane-strain and axisymmetric cases. The results are in good agreement with the numerical simulation of the homogenized model and the experimentally measured responses.
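The Mohr-Coulomb criterion postulated above states that failure occurs on the plane where the shear stress first reaches the cohesion plus the friction coefficient times the normal stress. A minimal sketch that evaluates the failure margin over candidate plane orientations (using the Mohr-circle expressions for the stresses on a plane, compression positive) and picks out the critical slant plane numerically; the stress values and constants are illustrative, not the calibrated cell-stack parameters:

```python
import numpy as np

def mohr_coulomb_margin(sigma1, sigma3, theta, cohesion, friction_coeff):
    """Mohr-Coulomb failure margin on a plane whose normal makes angle theta
    (radians) with the maximum principal stress direction.

    Compression is taken positive.  Failure is indicated when the returned
    margin |tau| - (c + mu * sigma_n) reaches zero or becomes positive.
    """
    sigma_n = 0.5 * (sigma1 + sigma3) + 0.5 * (sigma1 - sigma3) * np.cos(2 * theta)
    tau = 0.5 * (sigma1 - sigma3) * np.sin(2 * theta)
    return np.abs(tau) - (cohesion + friction_coeff * sigma_n)

# Locate the critical (slant) plane numerically for illustrative values (MPa).
theta = np.linspace(0.0, np.pi / 2, 1801)
margin = mohr_coulomb_margin(sigma1=120.0, sigma3=20.0, theta=theta,
                             cohesion=15.0, friction_coeff=0.3)
theta_critical_deg = np.degrees(theta[np.argmax(margin)])
```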
NASA Astrophysics Data System (ADS)
Stilmant, Frédéric; Pirotton, Michel; Archambeau, Pierre; Erpicum, Sébastien; Dewals, Benjamin
2015-01-01
A fly ash heap collapse occurred in Jupille (Liege, Belgium) in 1961. The subsequent flow of fly ash reached a surprisingly long runout and had catastrophic consequences. Its unprecedented degree of fluidization attracted scientific attention. As drillings and direct observations revealed no water-saturated zone at the base of the deposits, scientists assumed an air-fluidization mechanism, which appeared consistent with the properties of the material. In this paper, the air-fluidization assumption is tested by means of two-dimensional numerical simulations. The numerical model has been developed so as to focus on the most prominent processes governing the flow, with parameters constrained by their physical interpretation. Results are compared to accurate field observations and are presented for different stages of the model enhancement, so as to provide a basis for discussing the relative influence of pore pressure dissipation and pore pressure generation. These results show that the apparently high diffusion coefficient characterizing the dissipation of air pore pressures is in fact sufficiently low for an important degree of fluidization to be maintained during a flow of hundreds of meters.
Evolution of midplate hotspot swells: Numerical solutions
NASA Technical Reports Server (NTRS)
Liu, Mian; Chase, Clement G.
1990-01-01
The evolution of midplate hotspot swells on an oceanic plate moving over a hot, upwelling mantle plume is numerically simulated. The plume supplies a Gaussian-shaped thermal perturbation and thermally induced dynamic support. The lithosphere is treated as a thermal boundary layer with a strongly temperature-dependent viscosity. The two fundamental mechanisms of heat transfer during the interaction of the lithosphere with the mantle plume, conduction and convection, are considered. The transient heat transfer equations, with boundary conditions varying in both time and space, are solved in cylindrical coordinates using the finite difference ADI (alternating direction implicit) method on a 100 x 100 grid. The topography, geoid anomaly, and heat flow anomaly of the Hawaiian swell and the Bermuda rise are used to constrain the models. Results confirm the conclusion of previous works that the Hawaiian swell cannot be explained by conductive heating alone, even if an extremely high thermal perturbation is allowed. On the other hand, the model of convective thinning successfully predicts the topography, geoid anomaly, and heat flow anomaly around the Hawaiian islands, as well as the changes in topography and anomalous heat flow along the Hawaiian volcanic chain.
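As a point of reference for the ADI scheme mentioned above, a Peaceman-Rachford step for the two-dimensional heat equation alternates an implicit tridiagonal solve in each coordinate direction. The toy version below is Cartesian with homogeneous Dirichlet boundaries and illustrative parameters; it is not the cylindrical-coordinate, variable-boundary-condition model of the paper:

```python
import numpy as np

def adi_step(u, alpha, dt, dx, dy):
    """One Peaceman-Rachford ADI step for u_t = alpha*(u_xx + u_yy) on a
    rectangular grid with homogeneous Dirichlet boundaries (u = 0 on edges)."""
    ny, nx = u.shape
    rx = alpha * dt / dx**2
    ry = alpha * dt / dy**2

    def implicit_matrix(n, r):
        # (I - (r/2) * second-difference operator) on n interior points
        a = np.zeros((n, n))
        np.fill_diagonal(a, 1.0 + r)
        np.fill_diagonal(a[1:], -0.5 * r)      # subdiagonal
        np.fill_diagonal(a[:, 1:], -0.5 * r)   # superdiagonal
        return a

    ax = implicit_matrix(nx - 2, rx)
    ay = implicit_matrix(ny - 2, ry)

    # Half step 1: implicit in x, explicit in y (solve row by row).
    u_half = np.zeros_like(u)
    lap_y = u[:-2, 1:-1] - 2.0 * u[1:-1, 1:-1] + u[2:, 1:-1]
    rhs = u[1:-1, 1:-1] + 0.5 * ry * lap_y
    for j in range(ny - 2):
        u_half[j + 1, 1:-1] = np.linalg.solve(ax, rhs[j])

    # Half step 2: implicit in y, explicit in x (solve column by column).
    u_new = np.zeros_like(u)
    lap_x = u_half[1:-1, :-2] - 2.0 * u_half[1:-1, 1:-1] + u_half[1:-1, 2:]
    rhs = u_half[1:-1, 1:-1] + 0.5 * rx * lap_x
    for i in range(nx - 2):
        u_new[1:-1, i + 1] = np.linalg.solve(ay, rhs[:, i])
    return u_new

# Illustrative use: diffusion of a localized thermal perturbation.
u = np.zeros((41, 41))
u[18:23, 18:23] = 1.0
for _ in range(100):
    u = adi_step(u, alpha=1.0e-6, dt=1.0e4, dx=1.0e3, dy=1.0e3)
```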
Transport of Perfluorocarbon Tracers in the Cranfield Geological Carbon Sequestration Project
NASA Astrophysics Data System (ADS)
Moortgat, J.; Soltanian, M. R.; Amooie, M. A.; Cole, D. R.; Graham, D. E.; Pfiffner, S. M.; Phelps, T.
2017-12-01
A field-scale carbon dioxide (CO2) injection pilot project was conducted by the Southeast Regional Sequestration Partnership (SECARB) at Cranfield, Mississippi. Two associated campaigns in 2009 and 2010 were carried out to co-inject perfluorocarbon tracers (PFTs) and sulfur hexafluoride (SF6) with CO2. Tracers in gas samples from two observation wells were analyzed to construct breakthrough curves. We present the compiled field data as well as detailed numerical modeling of the flow and transport of CO2, brine, and the introduced tracers. A high-resolution static model of the formation geology in the Detailed Area Study (DAS) was used in order to capture the impact of connected flow pathways created by fluvial channels on the breakthrough curves and breakthrough times of the PFT and SF6 tracers. We use the cubic-plus-association (CPA) equation of state, which takes into account the polar nature of water molecules, to describe the phase behavior of CO2-brine-tracer mixtures. We show how the combination of multiple tracer injection pulses with detailed numerical simulations provides a powerful tool for constraining both the formation properties and how complex flow pathways develop over time.
Numerical modeling of crater lake seepage
NASA Astrophysics Data System (ADS)
Todesco, M.; Rouwet, D.
2012-04-01
The fate of crater lake waters seeping into the volcanic edifice is poorly constrained. Quantification of the seepage flux is important in volcanic surveillance as this water loss counterbalances the inflow of hot magmatic fluids into the lake, and enters the mass balance computation. Uncertainties associated with the estimate of seepage therefore transfer to the estimate of magmatic degassing and hazard assessment. Moreover, when the often acidic lake brines disperse into the volcanic edifice, they may lead to acid attack (stress corrosion) and eventually to mechanical weakening of the volcano flanks, thereby causing an indirect volcanic risk. Understanding of the features that control the underground propagation of lake waters and their interactions with the magmatic-hydrothermal system is therefore highly recommended in volcanic hazard assessment. In this work, we use the TOUGH2 geothermal simulator to investigate crater lake water seepage in different volcanic settings. Modeling is carried out to describe the evolution of a hydrothermal system open on a hot, pressurized reservoir of dry gas and capped by a volcanic lake. Numerical simulations investigate the role of lake morphology, system geometry, rock properties, and of the conditions applied to the lake and to the gas reservoir at depth.
Bartholow, Bruce D
2010-03-01
Numerous social-cognitive models posit that social behavior largely is driven by links between constructs in long-term memory that automatically become activated when relevant stimuli are encountered. Various response biases have been understood in terms of the influence of such "implicit" processes on behavior. This article reviews event-related potential (ERP) studies investigating the role played by cognitive control and conflict resolution processes in social-cognitive phenomena typically deemed automatic. Neurocognitive responses associated with response activation and conflict often are sensitive to the same stimulus manipulations that produce differential behavioral responses on social-cognitive tasks and that often are attributed to the role of automatic associations. Findings are discussed in the context of an overarching social cognitive neuroscience model in which physiological data are used to constrain social-cognitive theories.
NASA Astrophysics Data System (ADS)
Hobley, Daniel E. J.; Adams, Jordan M.; Nudurupati, Sai Siddhartha; Hutton, Eric W. H.; Gasparini, Nicole M.; Istanbulluoglu, Erkan; Tucker, Gregory E.
2017-01-01
The ability to model surface processes and to couple them to both subsurface and atmospheric regimes has proven invaluable to research in the Earth and planetary sciences. However, creating a new model typically demands a very large investment of time, and modifying an existing model to address a new problem typically means the new work is constrained to its detriment by model adaptations for a different problem. Landlab is an open-source software framework explicitly designed to accelerate the development of new process models by providing (1) a set of tools and existing grid structures - including both regular and irregular grids - to make it faster and easier to develop new process components, or numerical implementations of physical processes; (2) a suite of stable, modular, and interoperable process components that can be combined to create an integrated model; and (3) a set of tools for data input, output, manipulation, and visualization. A set of example models built with these components is also provided. Landlab's structure makes it ideal not only for fully developed modelling applications but also for model prototyping and classroom use. Because of its modular nature, it can also act as a platform for model intercomparison and epistemic uncertainty and sensitivity analyses. Landlab exposes a standardized model interoperability interface, and is able to couple to third-party models and software. Landlab also offers tools to allow the creation of cellular automata, and allows native coupling of such models to more traditional continuous differential equation-based modules. We illustrate the principles of component coupling in Landlab using a model of landform evolution, a cellular ecohydrologic model, and a flood-wave routing model.
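The component-coupling pattern described above can be illustrated with a few lines that build a grid, attach an elevation field, and alternate two process components in a time loop. The component and field names follow the Landlab documentation (FlowAccumulator, FastscapeEroder, topographic__elevation), while the grid size, erodibility, uplift rate, and time step below are arbitrary placeholders rather than values from the paper:

```python
import numpy as np
from landlab import RasterModelGrid
from landlab.components import FlowAccumulator, FastscapeEroder

# Regular grid with an elevation field defined on the nodes.
grid = RasterModelGrid((50, 80), xy_spacing=100.0)
z = grid.add_zeros("topographic__elevation", at="node")
z += np.random.rand(z.size)           # small random roughness to seed drainage

# Two interoperable process components sharing the same grid and fields.
flow = FlowAccumulator(grid, flow_director="D8")
eroder = FastscapeEroder(grid, K_sp=1.0e-5)

dt = 1000.0                           # years (illustrative)
for _ in range(200):
    flow.run_one_step()               # route flow, accumulate drainage area
    eroder.run_one_step(dt)           # stream-power incision
    z[grid.core_nodes] += 0.001 * dt  # uniform uplift of the interior nodes
```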
NASA Astrophysics Data System (ADS)
Dziadek, R.; Gohl, K.; Diehl, A.; Kaul, N.
2017-07-01
Focused research on the Pine Island and Thwaites glaciers, which drain the West Antarctic Ice Sheet (WAIS) into the Amundsen Sea Embayment (ASE), has revealed strong signs of instability in recent decades, resulting from a variety of causes such as the inflow of warmer ocean currents and reverse-sloping bedrock topography; these observations underpin the Marine Ice Sheet Instability hypothesis. Geothermal heat flux (GHF) is a poorly constrained parameter in Antarctica and is suspected to affect the basal conditions of ice sheets, i.e., basal melting and subglacial hydrology. Thermomechanical models demonstrate that geothermal heat flux is an influential boundary condition for (paleo) ice sheet stability. Due to the complex tectonic and magmatic history of West Antarctica, the region is expected to exhibit strong, heterogeneous variations in geothermal heat flux. We present an approach to investigate the range of realistic heat fluxes in the ASE by different methods, discuss direct observations, and present 3-D numerical models that incorporate boundary conditions derived from various geophysical studies, including our new Depth to the Bottom of the Magnetic Source (DBMS) estimates. Our in situ temperature measurements at 26 sites in the ASE more than triple the number of direct GHF observations in West Antarctica. Our numerical 3-D models demonstrate that GHF varies spatially from 68 up to 110 mW m-2.
NASA Technical Reports Server (NTRS)
Deavours, Daniel D.; Qureshi, M. Akber; Sanders, William H.
1997-01-01
Modeling tools and technologies are important for aerospace development. At the University of Illinois, we have worked on advancing the state of the art in modeling with Markov reward models in two important areas: reducing the memory necessary to numerically solve systems represented as stochastic activity networks and other stochastic Petri net extensions while still obtaining solutions in a reasonable amount of time, and finding numerically stable and memory-efficient methods to solve for the reward accumulated during a finite mission time. A long-standing problem when modeling with high-level formalisms such as stochastic activity networks is the so-called state space explosion, where the number of states increases exponentially with the size of the high-level model. Thus, the corresponding Markov model becomes prohibitively large, and solution is constrained by the size of primary memory. To reduce the memory necessary to numerically solve complex systems, we propose new methods that can tolerate such large state spaces and that do not require any special structure in the model (as many other techniques do). First, we develop methods that generate rows and columns of the state transition-rate matrix on-the-fly, eliminating the need to explicitly store the matrix at all. Next, we introduce a new iterative solution method, called modified adaptive Gauss-Seidel, that exhibits locality in its use of data from the state transition-rate matrix, permitting us to cache portions of the matrix and hence reduce the solution time. Finally, we develop a new memory- and computationally efficient technique for Gauss-Seidel based solvers that avoids the need for generating rows of A in order to solve Ax = b. This is a significant performance improvement for on-the-fly methods as well as other recent solution techniques based on Kronecker operators. Taken together, these new results show that one can solve very large models without any special structure.
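The on-the-fly idea is that the iterative solver only ever asks for one row of the transition-rate matrix at a time, so each row can be regenerated from the high-level model instead of being stored. A minimal Gauss-Seidel sketch with a hypothetical row-generator interface (not the stochastic activity network code itself, and without the adaptive and caching refinements described above):

```python
import numpy as np

def gauss_seidel_on_the_fly(row_of, b, x0, sweeps=200, tol=1e-10):
    """Solve A x = b by Gauss-Seidel without ever storing A.

    row_of(i) must return the nonzero entries of row i as (column, value)
    pairs; in a Markov-reward setting these would be regenerated from the
    high-level model each time they are needed.
    """
    x = x0.copy()
    for _ in range(sweeps):
        delta = 0.0
        for i in range(x.size):
            diag = 0.0
            acc = b[i]
            for j, a_ij in row_of(i):          # row generated on the fly
                if j == i:
                    diag = a_ij
                else:
                    acc -= a_ij * x[j]
            new_xi = acc / diag
            delta = max(delta, abs(new_xi - x[i]))
            x[i] = new_xi
        if delta < tol:
            break
    return x

# Tiny illustration: rows of a diagonally dominant matrix produced on demand.
def row_of(i, n=100):
    entries = [(i, 4.0)]
    if i > 0:
        entries.append((i - 1, -1.0))
    if i < n - 1:
        entries.append((i + 1, -1.0))
    return entries

x = gauss_seidel_on_the_fly(row_of, b=np.ones(100), x0=np.zeros(100))
```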
ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.
Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J
2014-07-01
Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome the disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstruct static and dynamic subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity.
Nayhouse, Michael; Kwon, Joseph Sang-Il; Orkoulas, G
2012-05-28
In simulation studies of fluid-solid transitions, the solid phase is usually modeled as a constrained system in which each particle is confined to move in a single Wigner-Seitz cell. The constrained cell model has been used in the determination of fluid-solid coexistence via thermodynamic integration and other techniques. In the present work, the phase diagram of such a constrained system of Lennard-Jones particles is determined from constant-pressure simulations. The pressure-density isotherms exhibit inflection points which are interpreted as the mechanical stability limit of the solid phase. The phase diagram of the constrained system contains a critical and a triple point. The temperature and pressure at the critical and the triple point are both higher than those of the unconstrained system due to the reduction in the entropy caused by the single occupancy constraint.
Optimality conditions for the numerical solution of optimization problems with PDE constraints
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aguilo Valentin, Miguel Alejandro; Ridzal, Denis
2014-03-01
A theoretical framework for the numerical solution of partial differential equation (PDE) constrained optimization problems is presented in this report. This theoretical framework embodies the fundamental infrastructure required to efficiently implement and solve this class of problems. Detailed derivations of the optimality conditions required to accurately solve several parameter identification and optimal control problems are also provided in this report. This will allow the reader to further understand how the theoretical abstraction presented in this report translates into applications.
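In generic form, the optimality conditions for such a PDE-constrained problem follow from stationarity of a Lagrangian. A schematic statement for minimizing an objective J(u, z) subject to a PDE constraint c(u, z) = 0, with state u, control or parameter z, and adjoint variable λ (generic notation, not the report's specific derivations):

```latex
\begin{align*}
&\text{Lagrangian:} && \mathcal{L}(u,z,\lambda) = J(u,z) + \langle \lambda,\, c(u,z)\rangle,\\
&\text{state equation:} && \nabla_{\lambda}\mathcal{L} = c(u,z) = 0,\\
&\text{adjoint equation:} && \nabla_{u}\mathcal{L} = \nabla_{u}J(u,z) + c_{u}(u,z)^{*}\lambda = 0,\\
&\text{design (gradient) equation:} && \nabla_{z}\mathcal{L} = \nabla_{z}J(u,z) + c_{z}(u,z)^{*}\lambda = 0.
\end{align*}
```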
Broadband Spectral Investigations of Magnetar Bursts
NASA Astrophysics Data System (ADS)
Kırmızıbayrak, Demet; Şaşmaz Muş, Sinem; Kaneko, Yuki; Göğüş, Ersin
2017-09-01
We present our broadband (2-250 keV) time-averaged spectral analysis of 388 bursts from SGR J1550-5418, SGR 1900+14, and SGR 1806-20 detected with the Rossi X-ray Timing Explorer (RXTE) here and as a database in a companion web-catalog. We find that two blackbody functions (BB+BB), the sum of two modified blackbody functions (LB+LB), the sum of a blackbody function and a power-law function (BB+PO), and a power law with a high-energy exponential cutoff (COMPT) all provide acceptable fits at similar levels. We performed numerical simulations to constrain the best fitting model for each burst spectrum and found that 67.6% of burst spectra with well-constrained parameters are better described by the Comptonized model. We also found that 64.7% of these burst spectra are better described with the LB+LB model, which is employed in the spectral analysis of a soft gamma repeater (SGR) for the first time here, than with the BB+BB and BB+PO models. We found a significant positive lower bound trend on photon index, suggesting a decreasing upper bound on hardness, with respect to total flux and fluence. We compare this result with bursts observed from SGR and AXP (anomalous X-ray pulsar) sources and suggest that the relationship is a distinctive characteristic between the two. We confirm a significant anticorrelation between burst emission area and blackbody temperature, and find that it varies between the hot and cool blackbody temperatures differently than previously discussed. We expand on the interpretation of our results in the framework of a strongly magnetized neutron star.
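For reference, one common parameterization of the Comptonized (COMPT) photon model used in such fits is a power law with a high-energy exponential cutoff expressed through the νF_ν peak energy. A small sketch with placeholder parameter values (illustrative only, not the catalog's fitted values or pivot energy):

```python
import numpy as np

def compt_photon_model(energy_kev, amplitude, index, e_peak_kev, e_pivot_kev=20.0):
    """Comptonized (COMPT) photon model in a common parameterization:
    N(E) = A * (E / E_piv)**index * exp(-(index + 2) * E / E_peak),
    i.e. a power law with an exponential cutoff written in terms of the
    nuF_nu peak energy E_peak."""
    e = np.asarray(energy_kev, dtype=float)
    return amplitude * (e / e_pivot_kev) ** index * np.exp(-(index + 2.0) * e / e_peak_kev)

# Illustrative evaluation over the 2-250 keV band analyzed above.
energies = np.geomspace(2.0, 250.0, 100)
spectrum = compt_photon_model(energies, amplitude=1.0, index=-0.5, e_peak_kev=40.0)
```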
A methodology for constraining power in finite element modeling of radiofrequency ablation.
Jiang, Yansheng; Possebon, Ricardo; Mulier, Stefaan; Wang, Chong; Chen, Feng; Feng, Yuanbo; Xia, Qian; Liu, Yewei; Yin, Ting; Oyen, Raymond; Ni, Yicheng
2017-07-01
Radiofrequency ablation (RFA) is a minimally invasive thermal therapy for the treatment of cancer, hyperopia, and cardiac tachyarrhythmia. In RFA, the power delivered to the tissue is a key parameter. The objective of this study was to establish a methodology for the finite element modeling of RFA with constant power. Because of changes in the electric conductivity of tissue with temperature, a nonconventional boundary value problem arises in the mathematical modeling of RFA: neither the voltage (Dirichlet condition) nor the current (Neumann condition), but the power, that is, the product of voltage and current, is prescribed on part of the boundary. We solved the problem using a Lagrange multiplier: the product of the voltage and current on the electrode surface is constrained to be equal to the Joule heating. We theoretically proved the equality between the product of the voltage and current on the surface of the electrode and the Joule heating in the domain. We also proved the well-posedness of the problem of solving the Laplace equation for the electric potential under a constant power constraint prescribed on the electrode surface. The Pennes bioheat transfer equation and the Laplace equation for the electric potential, augmented with the constraint of constant power, were solved simultaneously using the Newton-Raphson algorithm. Three validation problems were solved. Numerical results were compared either with an analytical solution deduced in this study or with results obtained by ANSYS or experiments. This work provides the finite element modeling of constant-power RFA with a firm mathematical basis and opens a pathway for achieving the optimal RFA power. Copyright © 2016 John Wiley & Sons, Ltd.
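Schematically, the constant-power formulation described above couples the potential problem to one scalar constraint: the electrode is held at an unknown constant voltage, and the product of that voltage with the total electrode current, which equals the volume-integrated Joule heating, must match the prescribed power. A generic statement of that coupling (not the paper's exact derivation or notation):

```latex
\begin{align*}
&\nabla\cdot\big(\sigma(T)\,\nabla V\big) = 0 \quad \text{in } \Omega,
\qquad V = V_{\mathrm{e}} \ \text{(an unknown constant) on the electrode surface } \Gamma_{\mathrm{e}},\\
&P_{0} \;=\; V_{\mathrm{e}}\, I_{\mathrm{e}}
\;=\; \int_{\Omega} \sigma(T)\,\lvert \nabla V \rvert^{2}\,\mathrm{d}\Omega,
\qquad I_{\mathrm{e}} \ \text{the total current through } \Gamma_{\mathrm{e}}.
\end{align*}
```

Enforcing the scalar power condition alongside the Laplace problem introduces the Lagrange multiplier mentioned above, and the resulting nonlinear system coupled with the Pennes equation is what the Newton-Raphson iteration solves.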
Combining VPL tools with NEMESIS to Probe Hot Jupiter Exoclimes for JWST
NASA Astrophysics Data System (ADS)
Afrin Badhan, Mahmuda; Kopparapu, Ravi Kumar; Domagal-Goldman, Shawn; Hébrard, Eric; Deming, Drake; Barstow, Joanna; Claire, Mark; Irwin, Patrick GJ; Mandell, Avi; Batalha, Natasha; Garland, Ryan
2016-06-01
Hot Jupiters are the most readily detected exoplanets by present technology. Since the scorching temperatures (>1000K) from high stellar irradiation levels do not allow for cold traps to form in their atmospheres, we can constrain their envelope’s elemental composition with greater confidence compared to our own Jupiter. Thus highly irradiated giant exoplanets hold keys to advancing our understanding of the origin and evolution of planetary systems.Constraining the atmospheric constituents through retrieval methods demands high-precision spectroscopic measurements and robust models to match those measurements. The former will be provided by NASA’s upcoming missions such as JWST. We meet the latter by producing self-consistent retrievals. Here I present modeling results for the temperature structure and photochemical gas abundances of water, methane, carbon dioxide and carbon monoxide, in the dayside atmospheres of selected H2-dominated hot Jupiters observed by present space missions and JWST/NIRSpec simulations, for two [C]/[O] metallicity ratios.The photochemical models were computed using a recently upgraded version of the NASA Astrobiology Institute’s VPL/Atmos software suite. For the radiative transfer and retrieval work, I have utilized a combination of two different numerical approaches in the extensively validated NEMESIS Atmospheric Retrieval Algorithm (Oxford Planetary Group). I have also represented the temperature profile in an analytical radiative equilibrium form to ascertain their physical plausibility. Finally, high-temperature (T> 1000K) spectroscopic opacity databases are slowly but continually being improved. Since this carries the potential of impacting irradiated atmospheric models quite significantly, I also talk about the potential observable impact of such improvements on the retrieval results.
H2, fixed architecture, control design for large scale systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Mercadal, Mathieu
1990-01-01
The H2, fixed architecture, control problem is a classic linear quadratic Gaussian (LQG) problem whose solution is constrained to be a linear time invariant compensator with a decentralized processing structure. The compensator can be made of p independent subcontrollers, each of which has a fixed order and connects selected sensors to selected actuators. The H2, fixed architecture, control problem allows the design of simplified feedback systems needed to control large scale systems. Its solution becomes more complicated, however, as more constraints are introduced. This work derives the necessary conditions for optimality for the problem and studies their properties. It is found that the filter and control problems couple when the architecture constraints are introduced, and that the different subcontrollers must be coordinated in order to achieve global system performance. The problem requires the simultaneous solution of highly coupled matrix equations. The use of homotopy is investigated as a numerical tool, and its convergence properties studied. It is found that the general constrained problem may have multiple stabilizing solutions, and that these solutions may be local minima or saddle points for the quadratic cost. The nature of the solution is not invariant when the parameters of the system are changed. Bifurcations occur, and a solution may continuously transform into a nonstabilizing compensator. Using a modified homotopy procedure, fixed architecture compensators are derived for models of large flexible structures to help understand the properties of the constrained solutions and compare them to the corresponding unconstrained ones.
NASA Astrophysics Data System (ADS)
Bai, Chao-ying; He, Lei-yu; Li, Xing-wang; Sun, Jia-yu
2018-05-01
To conduct forward modeling and simultaneous inversion in a complex geological model, including an irregular topography (or an irregular reflector or velocity anomaly), in this paper we combine our previous multiphase arrival tracking method (referred to as the triangular shortest-path method, TSPM) for triangular (2D) or tetrahedral (3D) cell models with a linearized inversion solver (a damped minimum-norm and constrained least-squares problem solved using the conjugate gradient method, DMNCLS-CG) to formulate a simultaneous travel-time inversion method that updates both velocity and reflector geometry using multiphase arrival times. In the triangular/tetrahedral cells, we deduce the partial derivative of the velocity variation with respect to the depth change of the reflector. The numerical simulation results show that the computational accuracy can be tuned to high precision in forward modeling and that irregular velocity anomalies and reflector geometry can be accurately captured in the simultaneous inversion, because the triangular/tetrahedral cells can easily conform to irregular topography or subsurface interfaces.
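Each linearized update in a damped minimum-norm least-squares inversion of this kind can be obtained by solving the regularized normal equations with conjugate gradients. A minimal sketch with a generic Jacobian, residual vector, and damping parameter (placeholders only, not the DMNCLS-CG implementation or its constraint handling):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def damped_ls_update(jacobian, residual, damping):
    """Solve (J^T J + damping * I) dm = J^T r with conjugate gradients,
    giving the model update dm for one iteration of a linearized
    travel-time inversion (e.g. velocity and reflector-depth parameters)."""
    m = jacobian.shape[1]

    def matvec(x):
        return jacobian.T @ (jacobian @ x) + damping * x

    normal_op = LinearOperator((m, m), matvec=matvec)
    dm, info = cg(normal_op, jacobian.T @ residual)
    if info != 0:
        raise RuntimeError("CG did not converge")
    return dm

# Tiny illustration with a random Jacobian (placeholders only).
rng = np.random.default_rng(3)
J = rng.normal(size=(200, 60))      # travel-time sensitivities
r = rng.normal(size=200)            # observed-minus-predicted travel times
dm = damped_ls_update(J, r, damping=0.1)
```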
Belcher, Wayne R.; Sweetkind, Donald S.; Elliott, Peggy E.
2002-01-01
The use of geologic information such as lithology and rock properties is important to constrain conceptual and numerical hydrogeologic models. This geologic information is difficult to apply explicitly to numerical modeling and analyses because it tends to be qualitative rather than quantitative. This study uses a compilation of hydraulic-conductivity measurements to derive estimates of the probability distributions for several hydrogeologic units within the Death Valley regional ground-water flow system, a geologically and hydrologically complex region underlain by basin-fill sediments, volcanic, intrusive, sedimentary, and metamorphic rocks. Probability distributions of hydraulic conductivity for general rock types have been studied previously; however, this study provides more detailed definition of hydrogeologic units based on lithostratigraphy, lithology, alteration, and fracturing and compares the probability distributions to the aquifer test data. Results suggest that these probability distributions can be used for studies involving, for example, numerical flow modeling, recharge, evapotranspiration, and rainfall runoff. These probability distributions can be used for such studies involving the hydrogeologic units in the region, as well as for similar rock types elsewhere. Within the study area, fracturing appears to have the greatest influence on the hydraulic conductivity of carbonate bedrock hydrogeologic units. Similar to earlier studies, we find that alteration and welding in the Tertiary volcanic rocks greatly influence hydraulic conductivity. As alteration increases, hydraulic conductivity tends to decrease. Increasing degrees of welding appears to increase hydraulic conductivity because welding increases the brittleness of the volcanic rocks, thus increasing the amount of fracturing.
NASA Technical Reports Server (NTRS)
Wahls, Richard A.
1990-01-01
The method presented is designed to improve the accuracy and computational efficiency of existing numerical methods for the solution of flows with compressible turbulent boundary layers. A compressible defect stream function formulation of the governing equations assuming an arbitrary turbulence model is derived. This formulation is advantageous because it has a constrained zero-order approximation with respect to the wall shear stress and the tangential momentum equation has a first integral. Previous problems with this type of formulation near the wall are eliminated by using empirically based analytic expressions to define the flow near the wall. The van Driest law of the wall for velocity and the modified Crocco temperature-velocity relationship are used. The associated compressible law of the wake is determined and it extends the valid range of the analytical expressions beyond the logarithmic region of the boundary layer. The need for an inner-region eddy viscosity model is completely avoided. The near-wall analytic expressions are patched to numerically computed outer region solutions at a point determined during the computation. A new boundary condition on the normal derivative of the tangential velocity at the surface is presented; this condition replaces the no-slip condition and enables numerical integration to the surface with a relatively coarse grid using only an outer region turbulence model. The method was evaluated for incompressible and compressible equilibrium flows and was implemented into an existing Navier-Stokes code using the assumption of local equilibrium flow with respect to the patching. The method has proven to be accurate and efficient.
NASA Astrophysics Data System (ADS)
Wienkers, A. F.; Ogilvie, G. I.
2018-07-01
Non-linear evolution of the parametric instability of inertial waves inherent to eccentric discs is studied by way of a new local numerical model. Mode coupling of tidal deformation with the disc eccentricity is known to produce exponentially growing eccentricities at certain mean-motion resonances. However, the details of an efficient saturation mechanism balancing this growth still are not fully understood. This paper develops a local numerical model for an eccentric quasi-axisymmetric shearing box which generalizes the often-used Cartesian shearing box model. The numerical method is an overall second-order well-balanced finite volume method which maintains the stratified and oscillatory steady-state solution by construction. This implementation is employed to study the non-linear outcome of the parametric instability in eccentric discs with vertical structure. Stratification is found to constrain the perturbation energy near the mid-plane and localize the effective region of inertial wave breaking that sources turbulence. A saturated marginally sonic turbulent state results from the non-linear breaking of inertial waves and is subsequently unstable to large-scale axisymmetric zonal flow structures. This resulting limit-cycle behaviour reduces access to the eccentric energy source and prevents substantial transport of angular momentum radially through the disc. Still, the saturation of this parametric instability of inertial waves is shown to damp eccentricity on a time-scale of a thousand orbital periods. It may thus be a promising mechanism for intermittently regaining balance with the exponential growth of eccentricity from the eccentric Lindblad resonances and may also help explain the occurrence of 'bursty' dynamics such as the superhump phenomenon.
Multiply scaled constrained nonlinear equation solvers. [for nonlinear heat conduction problems
NASA Technical Reports Server (NTRS)
Padovan, Joe; Krishna, Lala
1986-01-01
To improve the numerical stability of nonlinear equation solvers, a partitioned multiply scaled constraint scheme is developed. This scheme enables hierarchical levels of control for nonlinear equation solvers. To complement the procedure, partitioned convergence checks are established along with self-adaptive partitioning schemes. Overall, such procedures greatly enhance the numerical stability of the original solvers. To demonstrate and motivate the development of the scheme, the problem of nonlinear heat conduction is considered. In this context the main emphasis is given to successive substitution-type schemes. To verify the improved numerical characteristics associated with partitioned multiply scaled solvers, results are presented for several benchmark examples.
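As a hedged illustration of the idea only (not the authors' algorithm), the sketch below applies successive substitution with a separate relaxation factor and convergence check for each partition of the unknowns, damping partitions whose residual grows; the adaptation rule and constants are assumptions.

```python
import numpy as np

# Minimal sketch (assumed structure): successive substitution with
# per-partition relaxation ("scaling") factors and convergence checks.
# G is the fixed-point map x <- G(x), e.g. one nonlinear heat-conduction sweep.
def partitioned_successive_substitution(G, x0, partitions, tol=1e-8, max_iter=200):
    x = np.asarray(x0, dtype=float)
    scale = np.ones(len(partitions))          # one scaling factor per partition
    prev = np.full(len(partitions), np.inf)   # previous residual norm per partition
    for _ in range(max_iter):
        dx = G(x) - x
        done = True
        for p, idx in enumerate(partitions):
            r = np.linalg.norm(dx[idx])
            # self-adaptive scaling: damp partitions whose residual grew
            scale[p] = 0.5 * scale[p] if r > prev[p] else min(1.0, 1.1 * scale[p])
            prev[p] = r
            x[idx] += scale[p] * dx[idx]
            done = done and (r < tol)
        if done:
            break
    return x
```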
Turner, Simon; Sandiford, Mike; Reagan, Mark; Hawkesworth, Chris; Hildreth, Wes
2010-01-01
We present the results of a combined U-series isotope and numerical modeling study of the 1912 Katmai-Novarupta eruption in Alaska. A stratigraphically constrained set of samples have compositions that range from basalt through basaltic andesite, andesite, dacite, and rhyolite. The major and trace element range can be modeled by 80–90% closed-system crystal fractionation over a temperature interval from 1279°C to 719°C at 100 MPa, with an implied volume of parental basalt of ∼65 km3. Numerical models suggest, for wall rock temperatures appropriate to this depth, that 90% of this volume of magma would cool and crystallize over this temperature interval within a few tens of kiloyears. However, the range in 87Sr/86Sr, (230Th/238U), and (226Ra/230Th) requires open-system processes. Assimilation of the host sediments can replicate the range of Sr isotopes. The variation of (226Ra/230Th) ratios in the basalt to andesite compositional range requires that these were generated less than several thousand years before eruption. Residence times for dacites are close to 8000 years, whereas the rhyolites appear to be 50–200 kyr old. Thus, the magmas that erupted within only 60 h had a wide range of crustal residence times. Nevertheless, they were emplaced in the same thermal regime and evolved along similar liquid lines of descent from parental magmas with similar compositions. The system was built progressively with multiple inputs providing both mass and heat, some of which led to thawing of older silicic material that provided much of the rhyolite.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Juanes, Ruben
The overall goals of this research are: (1) to determine the physical fate of single and multiple methane bubbles emitted to the water column by dissociating gas hydrates at seep sites deep within the hydrate stability zone or at the updip limit of gas hydrate stability, and (2) to quantitatively link theoretical and laboratory findings on methane transport to the analysis of real-world field-scale methane plume data placed within the context of the degrading methane hydrate province on the US Atlantic margin. The project is arranged to advance on three interrelated fronts (numerical modeling, laboratory experiments, and analysis of field-based plume data) simultaneously. The fundamental objectives of each component are the following: Numerical modeling: Constraining the conditions under which rising bubbles become armored with hydrate, the impact of hydrate armoring on the eventual fate of a bubble’s methane, and the role of multiple bubble interactions in survival of methane plumes to very shallow depths in the water column. Laboratory experiments: Exploring the parameter space (e.g., bubble size, gas saturation in the liquid phase, “proximity” to the stability boundary) for formation of a hydrate shell around a free bubble in water, the rise rate of such bubbles, and the bubble’s acoustic characteristics using field-scale frequencies. Field component: Extending the results of numerical modeling and laboratory experiments to the field scale using new and existing public-domain, state-of-the-art real-world data on US Atlantic margin methane seeps, without acquiring new field data in the course of this particular project. This component quantitatively analyzes data on Atlantic margin methane plumes and places those plumes and their corresponding seeps within the context of gas hydrate degradation processes on this margin.
Full-waveform inversion for the Iranian plateau
NASA Astrophysics Data System (ADS)
Masouminia, N.; Fichtner, A.; Rahimi, H.
2017-12-01
We aim to obtain a detailed tomographic model for the Iranian plateau facilitated by full-waveform inversion. By using this method, we intend to better constrain the 3-D structure of the crust and the upper mantle in the region. The Iranian plateau is a complex tectonic area resulting from the collision of the Arabian and Eurasian tectonic plates. This region is subject to complex tectonic processes such as the Makran subduction zone, which runs along the southeastern coast of Iran, and the convergence of the Arabian and Eurasian plates, which itself led to another subduction under Central Iran. This continent-continent collision has also caused shortening and crustal thickening, which can be seen today as the Zagros mountain range in the south and the Kopeh Dagh mountain range in the northeast. As a result of such tectonic activity, the crust and the mantle beneath the region are expected to be highly heterogeneous. To further our understanding of the region and its tectonic history, a detailed 3-D velocity model is required. To construct a 3-D model, we propose to use full-waveform inversion, which allows us to incorporate all types of waves recorded in the seismogram, including body waves as well as fundamental- and higher-mode surface waves. Exploiting more information from the observed data using this approach is likely to constrain features which have not been found by classical tomography studies so far. We address the forward problem using Salvus, a numerical wave propagation solver based on the spectral-element method and run on high-performance computers. The solver allows us to simulate wave fields propagating in highly heterogeneous, attenuating and anisotropic media, respecting the surface topography. To improve the model, we solve the optimization problem. The solution of this optimization problem is based on an iterative approach which employs adjoint methods to calculate the gradient and uses steepest descent and conjugate-gradient methods to minimize the objective function. Each iteration of such an approach is expected to bring the model closer to the true model. Our model domain extends between 25°N and 40°N in latitude and 42°E and 63°E in longitude. To constrain the 3-D structure of the area we use 83 broadband seismic stations and 146 earthquakes with magnitude Mw > 4.5 that occurred in the region between 2012 and 2017.
NASA Astrophysics Data System (ADS)
Londrillo, P.; del Zanna, L.
2004-03-01
We present a general framework to design Godunov-type schemes for multidimensional ideal magnetohydrodynamic (MHD) systems, having the divergence-free relation and the related properties of the magnetic field B as built-in conditions. Our approach mostly relies on the constrained transport (CT) discretization technique for the magnetic field components, originally developed for the linear induction equation, which assures [∇·B]_num = 0 and its preservation in time to within machine accuracy in a finite-volume setting. We show that the CT formalism, when fully exploited, can be used as a general guideline to design the reconstruction procedures of the B vector field, to adapt standard upwind procedures for the momentum and energy equations, avoiding the onset of numerical monopoles of O(1) size, and to formulate approximate Riemann solvers for the induction equation. This general framework will be named here upwind constrained transport (UCT). To demonstrate the versatility of our method, we apply it to a variety of schemes, which are finally validated numerically and compared: a novel implementation for the MHD case of the second-order Roe-type positive scheme by Liu and Lax [J. Comput. Fluid Dyn. 5 (1996) 133], and both the second- and third-order versions of a central-type MHD scheme presented by Londrillo and Del Zanna [Astrophys. J. 530 (2000) 508], where the basic UCT strategies have been first outlined.
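As a small illustration of the constrained transport property the abstract refers to (not the UCT schemes themselves), the 2-D sketch below updates face-centred magnetic field components from corner EMFs and checks that the discrete divergence is unchanged to machine precision; the grid sizes and random fields are arbitrary.

```python
import numpy as np

# Minimal 2-D constrained transport (CT) illustration: face-centred B updated
# from corner EMFs leaves the discrete cell divergence unchanged.
nx, ny, dx, dy, dt = 32, 32, 1.0, 1.0, 0.1
rng = np.random.default_rng(0)

Bx = rng.standard_normal((nx + 1, ny))      # B_x on x-faces
By = rng.standard_normal((nx, ny + 1))      # B_y on y-faces
Ez = rng.standard_normal((nx + 1, ny + 1))  # EMF on cell corners

def divergence(Bx, By):
    return (Bx[1:, :] - Bx[:-1, :]) / dx + (By[:, 1:] - By[:, :-1]) / dy

div0 = divergence(Bx, By)
# CT update: dBx/dt = -dEz/dy on x-faces, dBy/dt = +dEz/dx on y-faces
Bx -= dt * (Ez[:, 1:] - Ez[:, :-1]) / dy
By += dt * (Ez[1:, :] - Ez[:-1, :]) / dx

print(np.max(np.abs(divergence(Bx, By) - div0)))   # ~1e-16: divergence preserved
```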
NASA Astrophysics Data System (ADS)
Tsamados, Michel; Heorton, Harry; Feltham, Daniel; Muir, Alan; Baker, Steven
2016-04-01
The new elastic-plastic anisotropic (EAP) rheology that explicitly accounts for the sub-continuum anisotropy of the sea ice cover has been implemented into the latest version of the Los Alamos sea ice model CICE. The EAP rheology is widely used in the climate modeling scientific community (e.g. CPOM stand-alone, RASM high-resolution regional ice-ocean model, Met Office fully coupled model). Early results from sensitivity studies (Tsamados et al., 2013) have shown the potential for an improved representation of the observed main sea ice characteristics, with a substantial change of the spatial distribution of ice thickness and ice drift relative to model runs with the reference visco-plastic (VP) rheology. The model contains one new prognostic variable, the local structure tensor, which quantifies the degree of anisotropy of the sea ice, and two parameters that set the time scale of the evolution of this tensor. Observations from high-resolution satellite SAR imagery as well as numerical simulation results from a discrete element model (DEM, see Wilchinsky, 2010) have shown that individual floes can organize under external wind and thermal forcing to form an emergent isotropic sea ice state (via thermodynamic healing, thermal cracking) or an anisotropic sea ice state (via Coulombic failure lines due to shear rupture). In this work we use for the first time in the context of sea ice research a mathematical metric, the tensorial Minkowski functionals (Schroeder-Turk, 2010), to measure quantitatively the degree of anisotropy and alignment of the sea ice at different scales. We apply the methodology to the GlobICE Envisat satellite deformation product (www.globice.info), to a prototype modified version of GlobICE applied to Sentinel-1 Synthetic Aperture Radar (SAR) imagery and to the DEM ice floe aggregates. By comparing these independent measurements of the sea ice anisotropy, as well as its temporal evolution, against the EAP model we are able to constrain the uncertain model parameters and functions in the EAP model.
Mobile high-performance computing (HPC) for synthetic aperture radar signal processing
NASA Astrophysics Data System (ADS)
Misko, Joshua; Kim, Youngsoo; Qi, Chenchen; Sirkeci, Birsen
2018-04-01
The importance of mobile high-performance computing has emerged in numerous battlespace applications at the tactical edge in hostile environments. Energy-efficient computing power is a key enabler for diverse areas ranging from real-time big data analytics and atmospheric science to network science. However, the design of tactical mobile data centers is dominated by power, thermal, and physical constraints. At present, the required processing power is difficult to achieve even by aggregating emerging heterogeneous many-core platforms consisting of CPU, Field Programmable Gate Array and Graphics Processor cores, because such platforms are constrained by power and performance. To address these challenges, we performed a Synthetic Aperture Radar case study for Automatic Target Recognition (ATR) using Deep Neural Networks (DNNs). However, these DNN models are typically trained using GPUs with gigabytes of external memory and rely heavily on 32-bit floating-point operations. As a result, DNNs do not run efficiently on hardware appropriate for low-power or mobile applications. To address this limitation, we propose a framework for compressing DNN models for ATR suited to deployment on resource-constrained hardware. This compression framework utilizes promising DNN compression techniques, including pruning and weight quantization, while also focusing on processor features common to modern low-power devices. Following this methodology as a guideline produced a DNN for ATR tuned to maximize classification throughput, minimize power consumption, and minimize memory footprint on a low-power device.
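The two compression techniques named above are standard; the sketch below shows generic magnitude pruning and symmetric 8-bit weight quantization applied to a weight matrix. The sparsity level and quantization scheme are illustrative assumptions, not those of the proposed framework.

```python
import numpy as np

# Illustrative sketch of magnitude pruning and linear weight quantization.
def prune_by_magnitude(w, sparsity=0.7):
    """Zero out the smallest-magnitude weights until `sparsity` fraction is zero."""
    thresh = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < thresh, 0.0, w)

def quantize_int8(w):
    """Symmetric linear quantization of weights to 8-bit integers."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale          # dequantize with q * scale at inference time

w = np.random.randn(256, 128).astype(np.float32)   # stand-in for one layer's weights
w_sparse = prune_by_magnitude(w)
q, scale = quantize_int8(w_sparse)
```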
Observational Signatures of Mass-loading in Jets Launched by Rotating Black Holes
NASA Astrophysics Data System (ADS)
O’ Riordan, Michael; Pe’er, Asaf; McKinney, Jonathan C.
2018-01-01
It is widely believed that relativistic jets in X-ray binaries (XRBs) and active-galactic nuclei are powered by the rotational energy of black holes. This idea is supported by general-relativistic magnetohydrodynamic (GRMHD) simulations of accreting black holes, which demonstrate efficient energy extraction via the Blandford–Znajek mechanism. However, due to uncertainties in the physics of mass loading, and the failure of GRMHD numerical schemes in the highly magnetized funnel region, the matter content of the jet remains poorly constrained. We investigate the observational signatures of mass loading in the funnel by performing general-relativistic radiative transfer calculations on a range of 3D GRMHD simulations of accreting black holes. We find significant observational differences between cases in which the funnel is empty and cases where the funnel is filled with plasma, particularly in the optical and X-ray bands. In the context of Sgr A*, current spectral data constrains the jet filling only if the black hole is rapidly rotating with a ≳ 0.9. In this case, the limits on the infrared flux disfavor a strong contribution from material in the funnel. We comment on the implications of our models for interpreting future Event Horizon Telescope observations. We also scale our models to stellar-mass black holes, and discuss their applicability to the low-luminosity state in XRBs.
Astrophysical Model Selection in Gravitational Wave Astronomy
NASA Technical Reports Server (NTRS)
Adams, Matthew R.; Cornish, Neil J.; Littenberg, Tyson B.
2012-01-01
Theoretical studies in gravitational wave astronomy have mostly focused on the information that can be extracted from individual detections, such as the mass of a binary system and its location in space. Here we consider how the information from multiple detections can be used to constrain astrophysical population models. This seemingly simple problem is made challenging by the high dimensionality and high degree of correlation in the parameter spaces that describe the signals, and by the complexity of the astrophysical models, which can also depend on a large number of parameters, some of which might not be directly constrained by the observations. We present a method for constraining population models using a hierarchical Bayesian modeling approach which simultaneously infers the source parameters and population model and provides the joint probability distributions for both. We illustrate this approach by considering the constraints that can be placed on population models for galactic white dwarf binaries using a future space-based gravitational wave detector. We find that a mission that is able to resolve approximately 5000 of the shortest period binaries will be able to constrain the population model parameters, including the chirp mass distribution and a characteristic galaxy disk radius to within a few percent. This compares favorably to existing bounds, where electromagnetic observations of stars in the galaxy constrain disk radii to within 20%.
Modeling Regolith Temperatures and Volatile Ice Processes (Invited)
NASA Astrophysics Data System (ADS)
Mellon, M. T.
2013-12-01
Surface and subsurface temperatures are an important tool for exploring the distribution and dynamics of volatile ices on and within planetary regoliths. I will review thermal-analysis approaches and recent applications in the studies of volatile ice processes. Numerical models of regolith temperatures allow us to examine the response of ices to periodic and secular changes in heat sources such as insolation. Used in conjunction with spatially and temporally distributed remotely sensed temperatures, numerical models can: 1) constrain the stability and dynamics of volatile ices; 2) define the partitioning between phases of ice, gas, liquid, and adsorbate; and 3) in some instances be used to probe the distribution of ice hidden from view beneath the surface. The vapor pressure of volatile ices (such as water, carbon dioxide, and methane) depends exponentially on temperature. Small changes in temperature can result in transitions between stable phases. Cyclic temperatures and the propagation of thermal waves into the subsurface can produce a strong hysteresis in the population and partitioning of various phases (such as between ice, vapor, and adsorbate) and result in bulk transport. Condensation of ice will also have a pronounced effect on the thermal properties of otherwise loose particulate regolith. Cementing grains at their contacts through ice deposition will increase the thermal conductivity, and may enhance the stability of additional ice. Likewise, sintering of grains within a predominantly icy regolith will increase the thermal conductivity. Subsurface layers that result from ice redistribution can be discriminated by remote sensing when combined with numerical modeling. Applications of these techniques include modeling of seasonal carbon dioxide frosts on Mars, predicting and interpreting the subsurface ice distribution on Mars and in Antarctica, and estimating the current depth of ice-rich permafrost on Mars. Additionally, understanding the cold trapping of ices in regions of the regolith of airless bodies, such as Mercury and the Moon, is aided by numerical modeling of regolith temperatures. Thermally driven sublimation of volatiles (water ice on Mars and more exotic species on icy moons in the outer solar system) can result in terrain degradation and collapse.
Constraints on Lobate Debris Apron Evolution and Rheology from Numerical Modeling of Ice Flow
NASA Astrophysics Data System (ADS)
Parsons, R.; Nimmo, F.
2010-12-01
Recent radar observations of mid-latitude lobate debris aprons (LDAs) have confirmed the presence of ice within these deposits. Radar observations in Deuteronilus Mensae have constrained the concentration of dust found within the ice deposits to <30% by volume based on the strength of the returned signal. In addition to constraining the dust fraction, these radar observations can measure the ice thickness - providing an opportunity to more accurately estimate the flow behavior of ice responsible for the formation of LDAs. In order to further constrain the age and rheology of LDA ice, we developed a numerical model simulating ice flow under Martian conditions using results from ice deformation experiments, theory of ice grain growth based on terrestrial ice cores, and observational constraints from radar profiles and laser altimetry. This finite difference model calculates the LDA profile shape as it flows over time assuming no basal slip. In our model, the ice rheology is determined by the concentration of dust which influences the ice grain size by pinning the ice grain boundaries and halting ice grain growth. By varying the dust fraction (and therefore the ice grain size), the ice temperature, the subsurface slope, and the initial ice volume we are able to determine the combination of parameters that best reproduce the observed LDA lengths and thicknesses over a period of time comparable to crater age dates of LDA surfaces (90-300 My, see figure caption below). Based on simulations using different combinations of ice temperature, ice grain size, and basal slope, we find that an ice temperature of 205 K, a dust volume fraction of 0.5% (resulting in an ice grain size of 5 mm), and a flat subsurface slope give reasonable model LDA ages for many LDAs in the northern mid-latitudes of Mars. However, we find that there is no single combination of dust fraction, temperature, and subsurface slope which can give realistic ages for all LDAs, suggesting that all or some of these variables are spatially heterogeneous. We conclude that there are important regional differences in either the amount of dust mixed in with the ice, or in the presence of a basal slope below the LDA ice. Alternatively, the ice temperature and/or timing of ice deposition may vary significantly between different mid-latitude regions.
Figure caption: a) Topographic profiles plotted every 200 My (thin, solid lines) from a 1 Gy simulation of ice flow for an initial ice deposit (thick, solid line) 5 km long and 1 km thick using an ice temperature of 205 K and a dust fraction, φ, of 0.047%. A MOLA profile of an LDA at 38.6°N, 24.3°E (dashed line) is shown for comparison. b) Final profiles for simulations lasting 100 My using temperatures of 195, 205 and 215 K illustrate the effect of both temperature and increasing the dust volume fraction to 1.2% (resulting in an ice grain size of 1 mm).
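As a rough sketch of the kind of finite difference ice-flow calculation described (Glen's law rheology, no basal slip, flat subsurface), the code below evolves a 1-D thickness profile with an explicit diffusive update; the rate factor, grid and time-step rule are illustrative assumptions, not the calibrated Martian parameters of the study.

```python
import numpy as np

# Minimal 1-D ice-flow sketch (Glen's law, n = 3, no basal slip, flat subsurface).
# A_RATE and the grid are placeholders; a production run would use an implicit
# or far longer integration to reach geological time scales.
N_GLEN = 3.0
A_RATE = 1e-24        # assumed rate factor [Pa^-3 s^-1]; temperature/grain-size dependent
RHO, G = 917.0, 3.71  # ice density [kg m^-3], Mars gravity [m s^-2]
DX = 100.0            # grid spacing [m]

def step(H):
    """Advance the thickness profile H by one stability-limited time step."""
    dhdx = np.diff(H) / DX                       # surface slope at cell interfaces
    H_face = 0.5 * (H[1:] + H[:-1])
    D = (2.0 * A_RATE / (N_GLEN + 2.0) * (RHO * G) ** N_GLEN
         * H_face ** (N_GLEN + 2.0) * np.abs(dhdx) ** (N_GLEN - 1.0))
    dt = 0.1 * DX ** 2 / max(D.max(), 1e-12)     # explicit diffusive stability limit
    q = -D * dhdx                                # ice flux at interfaces
    H_new = H.copy()
    H_new[1:-1] -= dt * np.diff(q) / DX
    return np.maximum(H_new, 0.0), dt

H = np.where(np.arange(50) < 10, 1000.0, 0.0)    # initial 1-km-thick deposit
t = 0.0
for _ in range(5000):
    H, dt = step(H)
    t += dt
```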
Describing litho-constrained layout by a high-resolution model filter
NASA Astrophysics Data System (ADS)
Tsai, Min-Chun
2008-05-01
A novel high-resolution model (HRM) filtering technique was proposed to describe litho-constrained layouts. Litho-constrained layouts are layouts that are difficult to pattern or are highly sensitive to process fluctuations under current lithography technologies. HRM applies a short-wavelength (or high-NA) model simulation directly on the pre-OPC, original design layout to filter out low spatial-frequency regions and retain high spatial-frequency components, which are litho-constrained. Since neither OPC nor mask-synthesis steps are involved, this new technique is highly efficient in run time and can be used in the design stage to detect and fix litho-constrained patterns. This method has successfully captured all the hot spots, with less than 15% overshoots, on a realistic 80 mm2 full-chip M1 layout at the 65nm technology node. A step-by-step derivation of this HRM technique is presented in this paper.
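The filtering idea can be illustrated conceptually: remove the low spatial-frequency band that an imaging model can carry and flag what remains. The sketch below is only a stand-in for the HRM (a Gaussian low-pass plays the role of the resolvable band); the sigma and threshold are assumed values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Conceptual sketch only: flag high spatial-frequency regions of a rasterised
# layout by removing the band an (assumed) imaging model can resolve.
def litho_hotspot_map(layout, resolvable_sigma_px=8, threshold=0.25):
    """layout: 2-D float array (1 = drawn pattern, 0 = empty)."""
    low_pass = gaussian_filter(layout, sigma=resolvable_sigma_px)
    high_freq = np.abs(layout - low_pass)   # content the imaging model cannot carry
    return high_freq > threshold            # boolean hotspot mask
```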
NASA Astrophysics Data System (ADS)
Farzamian, Mohammad; Monteiro Santos, Fernando A.; Khalil, Mohamed A.
2017-12-01
The coupled hydrogeophysical approach has proved to be a valuable tool for improving the use of geoelectrical data for hydrological model parameterization. In the coupled approach, hydrological parameters are directly inferred from geoelectrical measurements in a forward manner to eliminate the uncertainty connected to the independent inversion of electrical resistivity data. Several numerical studies have been conducted to demonstrate the advantages of a coupled approach; however, only a few attempts have been made to apply the coupled approach to actual field data. In this study, we developed a 1D coupled hydrogeophysical code to estimate the van Genuchten-Mualem model parameters, K_s, n, θ_r and α, from time-lapse vertical electrical sounding data collected during a constant-inflow infiltration experiment. van Genuchten-Mualem parameters were sampled using the Latin hypercube sampling method to provide full coverage of the range of each parameter from their distributions. By applying the coupled approach, vertical electrical sounding data were coupled to hydrological models inferred from van Genuchten-Mualem parameter samples to investigate the feasibility of constraining the hydrological model. The key approaches taken in the study are to (1) integrate electrical resistivity and hydrological data while avoiding data inversion, (2) estimate the total water mass recovery of the electrical resistivity data and consider it in the evaluation of the van Genuchten-Mualem parameters and (3) correct for the influence of subsurface temperature fluctuations during the infiltration experiment on the electrical resistivity data. The results of the study revealed that the coupled hydrogeophysical approach can improve the value of geophysical measurements in hydrological model parameterization. However, the approach cannot overcome the technical limitations of the geoelectrical method associated with resolution and water mass recovery.
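For reference, the van Genuchten-Mualem relations whose parameters (K_s, n, θ_r, α) are estimated above can be written as in the sketch below; θ_s (saturated water content) is an additional parameter assumed here for completeness.

```python
import numpy as np

# Sketch of the van Genuchten-Mualem relations referenced above.
def van_genuchten(h, theta_r, theta_s, alpha, n):
    """Water retention theta(h) for pressure head h (h < 0 unsaturated)."""
    m = 1.0 - 1.0 / n
    Se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)       # effective saturation
    return theta_r + (theta_s - theta_r) * Se

def mualem_conductivity(Se, K_s, n):
    """Unsaturated hydraulic conductivity K(Se) from the Mualem model."""
    m = 1.0 - 1.0 / n
    return K_s * np.sqrt(Se) * (1.0 - (1.0 - Se ** (1.0 / m)) ** m) ** 2
```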
Solving constrained inverse problems for waveform tomography with Salvus
NASA Astrophysics Data System (ADS)
Boehm, C.; Afanasiev, M.; van Driel, M.; Krischer, L.; May, D.; Rietmann, M.; Fichtner, A.
2016-12-01
Finding a good balance between flexibility and performance is often difficult within domain-specific software projects. To achieve this balance, we introduce Salvus: an open-source high-order finite element package built upon PETSc and Eigen, that focuses on large-scale full-waveform modeling and inversion. One of the key features of Salvus is its modular design, based on C++ mixins, that separates the physical equations from the numerical discretization and the mathematical optimization. In this presentation we focus on solving inverse problems with Salvus and discuss (i) dealing with inexact derivatives resulting, e.g., from lossy wavefield compression, (ii) imposing additional constraints on the model parameters, e.g., from effective medium theory, and (iii) integration with a workflow management tool. We present a feasible-point trust-region method for PDE-constrained inverse problems that can handle inexactly computed derivatives. The level of accuracy in the approximate derivatives is controlled by localized error estimates to ensure global convergence of the method. Additional constraints on the model parameters are typically cheap to compute without the need for further simulations. Hence, including them in the trust-region subproblem introduces only a small computational overhead, but ensures feasibility of the model in every iteration. We show examples with homogenization constraints derived from effective medium theory (i.e. all fine-scale updates must upscale to a physically meaningful long-wavelength model). Salvus has a built-in workflow management framework to automate the inversion with interfaces to user-defined misfit functionals and data structures. This significantly reduces the amount of manual user interaction and enhances reproducibility which we demonstrate for several applications from the laboratory to global scale.
NASA Astrophysics Data System (ADS)
Xie, M.; Agus, S. S.; Schanz, T.; Kolditz, O.
2004-12-01
This paper presents an upscaling concept of swelling/shrinking processes of a compacted bentonite/sand mixture, which also applies to swelling of porous media in general. A constitutive approach for a highly compacted bentonite/sand mixture is developed accordingly. The concept is based on the diffuse double layer theory and connects microstructural properties of the bentonite as well as chemical properties of the pore fluid with swelling potential. The main factors influencing the swelling potential of bentonite, i.e. variation of water content, dry density, chemical composition of pore fluid, as well as the microstructure and the amount of swelling minerals, are taken into account. According to the proposed model, porosity is divided into interparticle and interlayer porosity. Swelling is the potential of interlayer porosity increase, which reveals itself as volume change in the case of free expansion, or manifests as swelling pressure in the case of constrained swelling. The constitutive equations for swelling/shrinking are implemented in the software GeoSys/RockFlow as a new chemo-hydro-mechanical model, which is able to simulate isothermal multiphase flow in bentonite. Details of the mathematical and numerical multiphase flow formulations, as well as the code implementation, are described. The proposed model is verified using experimental data from tests on a highly compacted bentonite/sand mixture. Comparison of the 1D modelling results with the experimental data evidences the capability of the proposed model to satisfactorily predict free swelling of the material under investigation.
NASA Astrophysics Data System (ADS)
Weiss, C. J.; Knight, R.
2009-05-01
One of the key factors in the sensible inference of subsurface geologic properties from both field and laboratory experiments is the ability to quantify the linkages between the inherently fine-scale structures, such as bedding planes and fracture sets, and their macroscopic expression through geophysical interrogation. Central to this idea is the concept of a "minimal sampling volume" over which a given geophysical method responds to an effective medium property whose value is dictated by the geometry and distribution of sub-volume heterogeneities as well as the experiment design. In this contribution we explore the concept of effective resistivity volumes for the canonical depth-to-bedrock problem subject to industry-standard DC resistivity survey designs. Four models representing a sedimentary overburden and flat bedrock interface were analyzed through numerical experiments of six different resistivity arrays. In each of the four models, the sedimentary overburden consists of thinly interbedded resistive and conductive laminations, with equivalent volume-averaged resistivity but differing lamination thickness, geometry, and layering sequence. The numerical experiments show striking differences in the apparent resistivity pseudo-sections which belie the volume-averaged equivalence of the models. These models constitute the synthetic data set offered for inversion in this Back to Basics Resistivity Modeling session and promise to further our understanding of how the sampling volume, as affected by survey design, can be constrained by joint-array inversion of resistivity data.
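A quick calculation illustrates why laminated models with the same volume-averaged resistivity can produce different apparent-resistivity responses: the effective resistivity seen by current flowing along versus across the laminations differs strongly. The lamina resistivities and fractions below are arbitrary assumed values.

```python
import numpy as np

# Effective resistivity of a thinly laminated medium depends on current direction.
rho = np.array([10.0, 1000.0])   # conductive / resistive laminae [ohm-m], assumed
f = np.array([0.5, 0.5])         # volume fractions

rho_parallel = 1.0 / np.sum(f / rho)   # current along layering (harmonic mean)
rho_series = np.sum(f * rho)           # current across layering (arithmetic mean)
print(rho_parallel, rho_series)        # ~19.8 vs 505 ohm-m for the same volume average
```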
Use of paired simple and complex models to reduce predictive bias and quantify uncertainty
NASA Astrophysics Data System (ADS)
Doherty, John; Christensen, Steen
2011-12-01
Modern environmental management and decision-making is based on the use of increasingly complex numerical models. Such models have the advantage of allowing representation of complex processes and heterogeneous system property distributions inasmuch as these are understood at any particular study site. The latter are often represented stochastically, reflecting knowledge of the character of system heterogeneity at the same time as it reflects a lack of knowledge of its spatial details. Unfortunately, however, complex models are often difficult to calibrate because of their long run times and sometimes questionable numerical stability. Analysis of predictive uncertainty is also a difficult undertaking when using models such as these. Such analysis must reflect a lack of knowledge of spatial hydraulic property details. At the same time, it must be subject to constraints on the spatial variability of these details born of the necessity for model outputs to replicate observations of historical system behavior. In contrast, the rapid run times and general numerical reliability of simple models often facilitate good calibration and ready implementation of sophisticated methods of calibration-constrained uncertainty analysis. Unfortunately, however, many system and process details on which uncertainty may depend are, by design, omitted from simple models. This can lead to underestimation of the uncertainty associated with many predictions of management interest. The present paper proposes a methodology that attempts to overcome the problems associated with complex models on the one hand and simple models on the other, while allowing access to the benefits each of them offers. It provides a theoretical analysis of the simplification process from a subspace point of view, yielding insights into the costs of model simplification and into how some of these costs may be reduced. It then describes a methodology for paired model usage through which predictive bias of a simplified model can be detected and corrected, and postcalibration predictive uncertainty can be quantified. The methodology is demonstrated using a synthetic example based on groundwater modeling environments commonly encountered in northern Europe and North America.
Hybrid real-code ant colony optimisation for constrained mechanical design
NASA Astrophysics Data System (ADS)
Pholdee, Nantiwat; Bureerat, Sujin
2016-01-01
This paper proposes a hybrid meta-heuristic based on integrating a local search simplex downhill (SDH) method into the search procedure of real-code ant colony optimisation (ACOR). This hybridisation leads to five hybrid algorithms where a Monte Carlo technique, a Latin hypercube sampling technique (LHS) and a translational propagation Latin hypercube design (TPLHD) algorithm are used to generate an initial population. Also, two numerical schemes for selecting an initial simplex are investigated. The original ACOR and its hybrid versions along with a variety of established meta-heuristics are implemented to solve 17 constrained test problems where a fuzzy set theory penalty function technique is used to handle design constraints. The comparative results show that the hybrid algorithms are the top performers. Using the TPLHD technique gives better results than the other sampling techniques. The hybrid optimisers are a powerful design tool for constrained mechanical design problems.
NASA Astrophysics Data System (ADS)
Shi, Z.; Crowell, S.; Luo, Y.; Rayner, P. J.; Moore, B., III
2015-12-01
Uncertainty in predicted carbon-climate feedback largely stems from poor parameterization of global land models. However, calibration of global land models with observations has been extremely challenging for at least two reasons. First, we lack global data products from systematic measurements of land surface processes. Second, the computational demand of model parameter estimation is insurmountable due to the complexity of global land models. In this project, we will use OCO-2 retrievals of dry air mole fraction XCO2 and solar induced fluorescence (SIF) to independently constrain estimation of net ecosystem exchange (NEE) and gross primary production (GPP). The constrained NEE and GPP will be combined with data products of global standing biomass, soil organic carbon and soil respiration to improve the Community Land Model version 4.5 (CLM4.5). Specifically, we will first develop a high-fidelity emulator of CLM4.5 according to the matrix representation of the terrestrial carbon cycle. It has been shown that the emulator fully represents the original model and can be effectively used for data assimilation to constrain parameter estimation. We will focus on calibrating key model parameters (e.g., maximum carboxylation rate, turnover time and transfer coefficients of soil carbon pools, and temperature sensitivity of respiration) for the carbon cycle. The Bayesian Markov chain Monte Carlo (MCMC) method will be used to assimilate the global databases into the high-fidelity emulator to constrain the model parameters, which will then be incorporated back into the original CLM4.5. The calibrated CLM4.5 will be used to make scenario-based projections. In addition, we will conduct observing system simulation experiments (OSSEs) to evaluate how the sampling frequency and record length could affect model constraint and prediction.
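The matrix representation of the terrestrial carbon cycle mentioned above is commonly written as dX/dt = B·u + A K X for a pool vector X, with K holding turnover rates and A the transfer coefficients (negative on the diagonal); the sketch below uses a made-up three-pool version with assumed rates, far simpler than the actual CLM4.5 emulator.

```python
import numpy as np

# Toy matrix "emulator" of a terrestrial carbon model: dX/dt = B*u + A@K@X.
K = np.diag([1 / 1.0, 1 / 10.0, 1 / 100.0])   # turnover rates [1/yr]: fast, slow, passive
A = np.array([[-1.0, 0.0, 0.0],
              [0.4, -1.0, 0.0],               # 40% of fast-pool loss transfers to slow pool
              [0.05, 0.1, -1.0]])             # small transfers into the passive pool
B = np.array([1.0, 0.0, 0.0])                 # all input (GPP-derived) enters the fast pool

def step(X, u, dt=0.1):
    return X + dt * (B * u + A @ K @ X)

X = np.zeros(3)
for _ in range(5000):                         # spin up toward steady state
    X = step(X, u=2.0)
```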
Constrained and Unconstrained Partial Adjacent Category Logit Models for Ordinal Response Variables
ERIC Educational Resources Information Center
Fullerton, Andrew S.; Xu, Jun
2018-01-01
Adjacent category logit models are ordered regression models that focus on comparisons of adjacent categories. These models are particularly useful for ordinal response variables with categories that are of substantive interest. In this article, we consider unconstrained and constrained versions of the partial adjacent category logit model, which…
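For context, the adjacent category logit model compares each pair of neighbouring response categories; a sketch of the standard form is given below (x is a covariate vector; the symbols are generic, not the article's notation). The fully constrained version sets β_j = β for all j, while the partial model constrains only a subset of the coefficients across categories.

```latex
\log\!\left(\frac{\Pr(Y = j+1 \mid x)}{\Pr(Y = j \mid x)}\right)
  \;=\; \alpha_j + x^{\top}\beta_j ,
  \qquad j = 1,\dots,J-1 .
```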
ODE Constrained Mixture Modelling: A Method for Unraveling Subpopulation Structures and Dynamics
Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J.
2014-01-01
Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstructed static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity. PMID:24992156
An unusual mode of failure of a tripolar constrained acetabular liner: a case report.
Banks, Louisa N; McElwain, John P
2010-04-01
Dislocation after primary total hip arthroplasty (THA) is the most commonly encountered complication and is unpleasant for both the patient and the surgeon. Constrained acetabular components can be used to treat or prevent instability after primary total hip arthroplasty. We present the case of a 42-year-old female with a BMI of 41. At 18 months post-primary THA the patient underwent further revision hip surgery after numerous (more than 20) dislocations. She had a tripolar Trident acetabular cup (Stryker-Howmedica-Osteonics, Rutherford, New Jersey) inserted. Shortly afterwards the unusual mode of failure of the constrained acetabular liner was noted from radiographs, in that the inner liner had dissociated from the outer. The reinforcing ring remained intact and in place. We believe that the patient's weight, combined with poor abductor musculature, caused excessive demand on the device, leading to failure at this interface when the patient flexed forward. Constrained acetabular components are useful implants to treat instability but have been shown to have up to 42% long-term failure rates, with problems such as dissociated inserts, dissociated constraining rings and dissociated femoral rings being cited. Sometimes they may be the only option left in difficult cases such as the one illustrated here, but they still unfortunately have the capacity to fail in unusual ways.
Tissue resistivity estimation in the presence of positional and geometrical uncertainties.
Baysal, U; Eyüboğlu, B M
2000-08-01
Geometrical uncertainties (organ boundary variation and electrode position uncertainties) are the biggest sources of error in estimating electrical resistivity of tissues from body surface measurements. In this study, in order to decrease estimation errors, the statistically constrained minimum mean squared error estimation algorithm (MiMSEE) is constrained with a priori knowledge of the geometrical uncertainties in addition to the constraints based on geometry, resistivity range, linearization and instrumentation errors. The MiMSEE calculates an optimum inverse matrix, which maps the surface measurements to the unknown resistivity distribution. The required data are obtained from four-electrode impedance measurements, similar to injected-current electrical impedance tomography (EIT). In this study, the surface measurements are simulated by using a numerical thorax model. The data are perturbed with additive instrumentation noise. Simulated surface measurements are then used to estimate the tissue resistivities by using the proposed algorithm. The results are compared with the results of conventional least squares error estimator (LSEE). Depending on the region, the MiMSEE yields an estimation error between 0.42% and 31.3% compared with 7.12% to 2010% for the LSEE. It is shown that the MiMSEE is quite robust even in the case of geometrical uncertainties.
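The MiMSEE described above is a statistically constrained variant of the standard linear minimum mean squared error estimator; a generic sketch of that estimator is shown below, where the prior mean and covariances stand in for the geometry, resistivity-range, linearization and instrumentation constraints (names and structure are assumptions, not the paper's formulation).

```python
import numpy as np

# Generic linear MMSE estimator for y = A x + noise, given a prior on x.
def mmse_estimate(y, A, mu_x, C_x, C_n):
    """Return the Bayesian linear MMSE estimate of x from measurements y."""
    G = C_x @ A.T @ np.linalg.inv(A @ C_x @ A.T + C_n)   # optimum inverse matrix
    return mu_x + G @ (y - A @ mu_x)
```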
Fractal dust constrains the collisional history of comets
NASA Astrophysics Data System (ADS)
Fulle, M.; Blum, J.
2017-07-01
The fractal dust particles observed by Rosetta cannot form in the physical conditions observed today in comet 67P/Churyumov-Gerasimenko (67P hereinafter), being instead consistent with models of the pristine dust aggregates coagulated in the solar nebula. Since bouncing collisions in the protoplanetary disc restructure fractals into compact aggregates (pebbles), the only way to preserve fractals in a comet is the gentle gravitational collapse of a mixture of pebbles and fractals, which must occur before their mutual collision speeds overcome ≈1 m s-1. This condition fixes the pebble radius to ≲1 cm, as confirmed by Comet Nucleus Infrared and Visible Analyser onboard Philae. Here, we show that the flux of fractal particles measured by Rosetta constrains the 67P nucleus in a random packing of cm-sized pebbles, with all the voids among them filled by fractal particles. This structure is inconsistent with any catastrophic collision, which would have compacted or dispersed most fractals, thus leaving empty most voids in the reassembled nucleus. Comets are less numerous than current estimates, as confirmed by lacking small craters on Pluto and Charon. Bilobate comets accreted at speeds <1 m s-1 from cometesimals born in the same disc stream.
Wang, Sen; Wang, Weihong; Xiong, Shaofeng
2016-09-01
Considering a class of skid-to-turn (STT) missile with a fixed target and constrained terminal impact angles, a novel three-dimensional (3D) integrated guidance and control (IGC) scheme is proposed in this paper. Based on the Coriolis theorem, the fully nonlinear IGC model is established in three-dimensional space without the assumption that the missile flies heading to the target at the initial time. For this strict-feedback form of multi-variable system, a dynamic surface control algorithm is implemented in combination with an extended state observer (ESO) to complete the preliminary design. Then, in order to deal with the problem of input constraints, a hyperbolic tangent function is introduced to approximate the saturation function, and an auxiliary system including a Nussbaum function is established to compensate for the approximation error. The stability of the closed-loop system is proven based on Lyapunov theory. Numerical simulation results show that the proposed integrated guidance and control algorithm can ensure the accuracy of target interception with initial alignment angle deviation, and that input saturation is suppressed with smooth deflection curves.
Digging into the corona: A modeling framework trained with Sun-grazing comet observations
NASA Astrophysics Data System (ADS)
Jia, Y. D.; Pesnell, W. D.; Bryans, P.; Downs, C.; Liu, W.; Schwartz, S. J.
2017-12-01
Images of comets diving into the low corona have been captured a few times in the past decade. Structures visible at various wavelengths during these encounters indicate a strong variation of the ambient conditions of the corona. We combine three numerical models: a global coronal model, a particle transportation model, and a cometary plasma interaction model into one framework to model the interaction of such Sun-grazing comets with plasma in the low corona. In our framework, cometary vapors are ionized via multiple channels and then captured by the coronal magnetic field. In seconds, these ions are further ionized into their highest charge state, which is revealed by certain coronal emission lines. Constrained by observations, we apply our framework to trace back to the local conditions of the ambient corona, and their spatial/time variation over a broad range of scales. Once trained by multiple stages of the comet's journey in the low corona, we illustrate how this framework can leverage these unique observations to probe the structure of the solar corona and solar wind.
Nonlinear hyperspectral unmixing based on sparse non-negative matrix factorization
NASA Astrophysics Data System (ADS)
Li, Jing; Li, Xiaorun; Zhao, Liaoying
2016-01-01
Hyperspectral unmixing aims at extracting pure material spectra, accompanied by their corresponding proportions, from a mixed pixel. Because they model the distribution of real materials more accurately, nonlinear mixing models (non-LMMs) are usually considered to perform better than LMMs in complicated scenarios. In the past years, numerous nonlinear models have been successfully applied to hyperspectral unmixing. However, most non-LMMs consider only the sum-to-one or positivity constraints, while the widespread sparsity in real material mixing is a factor that cannot be ignored. That is, for non-LMMs, a pixel is usually composed of a few spectral signatures of different materials drawn from the full set of pure pixels. Thus, in this paper, a smooth sparsity constraint is incorporated into the state-of-the-art Fan nonlinear model to exploit the sparsity feature in the nonlinear model and use it to enhance unmixing performance. This sparsity-constrained Fan model is solved with non-negative matrix factorization. The algorithm was implemented on synthetic and real hyperspectral data and showed its advantage over competing algorithms in the experiments.
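As an illustrative sketch only, the code below performs sparsity-constrained NMF with multiplicative updates on a purely linear mixing; the bilinear Fan-model terms and the smooth sparsity penalty used in the paper are omitted, and the regularization weight is an assumed value.

```python
import numpy as np

# Sketch of sparsity-constrained NMF with multiplicative updates (linear mixing only).
def sparse_nmf(V, r, lam=0.1, n_iter=500, eps=1e-9):
    """V: bands x pixels (non-negative), r: number of endmembers, lam: sparsity weight on H."""
    m, n = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, r))   # endmember spectra (bands x r)
    H = rng.random((r, n))   # abundances (r x pixels)
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + lam + eps)   # L1 penalty enters the denominator
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```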
NASA Astrophysics Data System (ADS)
Guo, Sangang
2017-09-01
There are two stages in solving security-constrained unit commitment problems (SCUC) within the Lagrangian framework: one is to obtain feasible unit states (UC), the other is power economic dispatch (ED) for each unit. An accurate solution of ED is especially important for enhancing the efficiency of the solution to SCUC for fixed feasible unit states. Two novel methods, named the Convex Combinatorial Coefficient Method and the Power Increment Method, respectively, are proposed for solving ED; both are based on linear programming and use a piecewise linear approximation of the nonlinear convex fuel cost functions. Numerical testing results show that the methods are effective and efficient.
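One standard way to cast the piecewise-linear ED as a linear program is the convex-combination ("lambda") formulation sketched below; because the fuel costs are convex, adjacency constraints are unnecessary. The unit data and segment count are illustrative assumptions, and this sketch is not the paper's Convex Combinatorial Coefficient or Power Increment method.

```python
import numpy as np
from scipy.optimize import linprog

# Piecewise-linear economic dispatch via a convex-combination ("lambda") LP.
def dispatch(cost_fns, p_min, p_max, demand, n_seg=4):
    n_units = len(cost_fns)
    bp = [np.linspace(lo, hi, n_seg + 1) for lo, hi in zip(p_min, p_max)]  # breakpoints
    c = np.concatenate([f(b) for f, b in zip(cost_fns, bp)])               # segment costs
    n_var = len(c)
    # equality constraints: sum_k lambda_ik = 1 per unit, and total power = demand
    A_eq = np.zeros((n_units + 1, n_var))
    b_eq = np.ones(n_units + 1)
    col = 0
    for i, b in enumerate(bp):
        A_eq[i, col:col + len(b)] = 1.0          # convexity row for unit i
        A_eq[-1, col:col + len(b)] = b           # power contributed by unit i
        col += len(b)
    b_eq[-1] = demand
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
    lam = res.x
    p = [lam[i * (n_seg + 1):(i + 1) * (n_seg + 1)] @ bp[i] for i in range(n_units)]
    return p, res.fun

# two units with convex quadratic fuel costs (assumed example data)
costs = [lambda p: 0.01 * p**2 + 10 * p, lambda p: 0.02 * p**2 + 8 * p]
print(dispatch(costs, p_min=[50, 40], p_max=[300, 200], demand=350))
```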
The inverse problem of the calculus of variations for discrete systems
NASA Astrophysics Data System (ADS)
Barbero-Liñán, María; Farré Puiggalí, Marta; Ferraro, Sebastián; Martín de Diego, David
2018-05-01
We develop a geometric version of the inverse problem of the calculus of variations for discrete mechanics and constrained discrete mechanics. The geometric approach consists of using suitable Lagrangian and isotropic submanifolds. We also provide a transition between the discrete and the continuous problems and propose variationality as an interesting geometric property to take into account in the design and computer simulation of numerical integrators for constrained systems. For instance, nonholonomic mechanics is generally non variational but some special cases admit an alternative variational description. We apply some standard nonholonomic integrators to such an example to study which ones conserve this property.
Optimal vibration control of a rotating plate with self-sensing active constrained layer damping
NASA Astrophysics Data System (ADS)
Xie, Zhengchao; Wong, Pak Kin; Lo, Kin Heng
2012-04-01
This paper proposes a finite element model for an optimally controlled constrained layer damped (CLD) rotating plate with a self-sensing technique and frequency-dependent material properties in both the time and frequency domains. Constrained layer damping with a viscoelastic material can effectively reduce the vibration in rotating structures. However, most existing research models use the complex modulus approach to model the viscoelastic material, and an additional iterative approach, which is only available in the frequency domain, has to be used to include the material's frequency dependency. It is meaningful to model the viscoelastic damping layer in the rotating part by using anelastic displacement fields (ADF) in order to include the frequency dependency in both the time and frequency domain. Also, unlike previous ones, this finite element model treats all three layers as having both shear and extension strains, so all types of damping are taken into account. Thus, in this work, a single-layer finite element is adopted to model a three-layer active constrained layer damped rotating plate in which the constraining layer is made of piezoelectric material to work as both the self-sensing sensor and actuator under a linear quadratic regulator (LQR) controller. After being compared with verified data, this newly proposed finite element model is validated and could be used for future research.
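The LQR gain used for such a controller is conventionally obtained from the algebraic Riccati equation; the sketch below shows the generic computation with a toy two-state system standing in for the finite element model (A, B, Q, R are assumed placeholders).

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Generic continuous-time LQR gain: solve the Riccati equation, then K = R^-1 B^T P.
def lqr_gain(A, B, Q, R):
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)          # optimal feedback u = -K x

A = np.array([[0.0, 1.0], [-4.0, -0.2]])        # toy single-mode structural model
B = np.array([[0.0], [1.0]])
K = lqr_gain(A, B, Q=np.eye(2), R=np.array([[1.0]]))
```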
DART: New Research Using Ensemble Data Assimilation in Geophysical Models
NASA Astrophysics Data System (ADS)
Hoar, T. J.; Raeder, K.
2015-12-01
The Data Assimilation Research Testbed (DART) is a community facility for ensemble data assimilation developed and supported by the National Center for Atmospheric Research. DART provides a comprehensive suite of software, documentation, and tutorials that can be used for ensemble data assimilation research, operations, and education. Scientists and software engineers at NCAR are available to support DART users who want to use existing DART products or develop their own applications. Current DART users range from university professors teaching data assimilation, to individual graduate students working with simple models, through national laboratories doing operational prediction with large state-of-the-art models. DART runs efficiently on many computational platforms ranging from laptops through thousands of cores on the newest supercomputers. This poster focuses on several recent research activities using DART with geophysical models:
- Using CAM/DART to understand whether OCO-2 Total Precipitable Water observations can be useful in numerical weather prediction.
- Impacts of the synergistic use of infrared CO retrievals (MOPITT, IASI) in CAM-CHEM/DART assimilations.
- Assimilation and analysis of observations of Amazonian biomass burning emissions by MOPITT (carbon monoxide), MODIS (aerosol optical depth) and MISR (plume height).
- Long-term evaluation of the chemical response of MOPITT-CO assimilation in CAM-CHEM/DART OSSEs for satellite planning and emission inversion capabilities.
- Improved forward observation operators for land models that have multiple land use/land cover segments in a single grid cell.
- Simulating mesoscale convective systems (MCSs) using a variable-resolution, unstructured grid in the Model for Prediction Across Scales (MPAS) and DART.
- The mesoscale WRF+DART system generated an ensemble of year-long, real-time initializations of a convection-allowing model over the United States.
- Constraining WACCM with observations in the tropical band (30S-30N) using DART also constrains the polar stratosphere during the same winter.
- Assimilation of MOPITT carbon monoxide Compact Phase Space Retrievals (CPSR) in WRF-Chem/DART.
Future work: a DART interface to the CICE (CESM) sea ice model, and fully coupled assimilations in CESM.
Zammit-Mangion, Andrew; Rougier, Jonathan; Schön, Nana; Lindgren, Finn; Bamber, Jonathan
2015-01-01
Antarctica is the world's largest fresh-water reservoir, with the potential to raise sea levels by about 60 m. An ice sheet contributes to sea-level rise (SLR) when its rate of ice discharge and/or surface melting exceeds accumulation through snowfall. Constraining the contribution of the ice sheets to present-day SLR is vital both for coastal development and planning, and climate projections. Information on various ice sheet processes is available from several remote sensing data sets, as well as in situ data such as global positioning system data. These data have differing coverage, spatial support, temporal sampling and sensing characteristics, and thus, it is advantageous to combine them all in a single framework for estimation of the SLR contribution and the assessment of processes controlling mass exchange with the ocean. In this paper, we predict the rate of height change due to salient geophysical processes in Antarctica and use these to provide estimates of SLR contribution with associated uncertainties. We employ a multivariate spatio-temporal model, approximated as a Gaussian Markov random field, to take advantage of differing spatio-temporal properties of the processes to separate the causes of the observed change. The process parameters are estimated from geophysical models, while the remaining parameters are estimated using a Markov chain Monte Carlo scheme, designed to operate in a high-performance computing environment across multiple nodes. We validate our methods against a separate data set and compare the results to those from studies that invariably employ numerical model outputs directly. We conclude that it is possible, and insightful, to assess Antarctica's contribution without explicit use of numerical models. Further, the results obtained here can be used to test the geophysical numerical models for which in situ data are hard to obtain. © 2015 The Authors. Environmetrics published by John Wiley & Sons Ltd. PMID:25937792
SpF: Enabling Petascale Performance for Pseudospectral Dynamo Models
NASA Astrophysics Data System (ADS)
Jiang, W.; Clune, T.; Vriesema, J.; Gutmann, G.
2013-12-01
Pseudospectral (PS) methods possess a number of characteristics (e.g., efficiency, accuracy, natural boundary conditions) that are extremely desirable for dynamo models. Unfortunately, dynamo models based upon PS methods face a number of daunting challenges, which include exposing additional parallelism, leveraging hardware accelerators, exploiting hybrid parallelism, and improving the scalability of global memory transposes. Although these issues are a concern for most models, solutions for PS methods tend to require far more pervasive changes to underlying data and control structures. Further, improvements in performance in one model are difficult to transfer to other models, resulting in significant duplication of effort across the research community. We have developed an extensible software framework for pseudospectral methods called SpF that is intended to enable extreme scalability and optimal performance. High-level abstractions provided by SpF unburden applications of the responsibility of managing domain decomposition and load balance while reducing the changes in code required to adapt to new computing architectures. The key design concept in SpF is that each phase of the numerical calculation is partitioned into disjoint numerical 'kernels' that can be performed entirely in-processor. The granularity of domain-decomposition provided by SpF is only constrained by the data-locality requirements of these kernels. SpF builds on top of optimized vendor libraries for common numerical operations such as transforms, matrix solvers, etc., but can also be configured to use open source alternatives for portability. SpF includes several alternative schemes for global data redistribution and is expected to serve as an ideal testbed for further research into optimal approaches for different network architectures. In this presentation, we will describe the basic architecture of SpF as well as preliminary performance data and experience with adapting legacy dynamo codes. We will conclude with a discussion of planned extensions to SpF that will provide pseudospectral applications with additional flexibility with regard to time integration, linear solvers, and discretization in the radial direction.
NASA Astrophysics Data System (ADS)
Arragoni, S.; Maggi, M.; Cianfarra, P.; Salvini, F.
2016-06-01
Newly collected structural data in Eastern Sardinia (Italy) integrated with numerical techniques led to the reconstruction of a 2-D admissible and balanced model revealing the presence of a widespread Cenozoic fold-and-thrust belt. The model was achieved with the FORC software, obtaining a 3-D (2-D + time) numerical reconstruction of the continuous evolution of the structure through time. The Mesozoic carbonate units of Eastern Sardinia and their basement present a fold-and-thrust tectonic setting, with a westward direction of tectonic transport (referred to the present-day coordinates). The tectonic style of the upper levels is thin skinned, with flat sectors prevailing over ramps and younger-on-older thrusts. Three regional tectonic units are present, bounded by two regional thrusts. Strike-slip faults overprint the fold-and-thrust belt and developed during the Sardinia-Corsica Block rotation along the strike of the preexisting fault ramps, not affecting the numerical section balancing. This fold-and-thrust belt represents the southward prosecution of the Alpine Corsica collisional chain and the missing link between the Alpine Chain and the Calabria-Peloritani Block. Relative ages relate its evolution to the meso-Alpine event (Eocene-Oligocene times), prior to the opening of the Tyrrhenian Sea (Tortonian). Results fill a gap of information about the geodynamic evolution of the European margin in Central Mediterranean, between Corsica and the Calabria-Peloritani Block, and imply the presence of remnants of this double-verging belt, missing in the Southern Tyrrhenian basin, within the Southern Apennine chain. The used methodology proved effective for constraining balanced cross sections also for areas lacking exposures of the large-scale structures, as the case of Eastern Sardinia.
The AGORA High-resolution Galaxy Simulations Comparison Project II: Isolated disk test
Kim, Ji-hoon; Agertz, Oscar; Teyssier, Romain; ...
2016-12-20
Using an isolated Milky Way-mass galaxy simulation, we compare results from nine state-of-the-art gravito-hydrodynamics codes widely used in the numerical community. We utilize the infrastructure we have built for the AGORA High-resolution Galaxy Simulations Comparison Project. This includes the common disk initial conditions, common physics models (e.g., radiative cooling and UV background by the standardized package Grackle) and common analysis toolkit yt, all of which are publicly available. Subgrid physics models such as Jeans pressure floor, star formation, supernova feedback energy, and metal production are carefully constrained across code platforms. With numerical accuracy that resolves the disk scale height, we find that the codes overall agree well with one another in many dimensions including: gas and stellar surface densities, rotation curves, velocity dispersions, density and temperature distribution functions, disk vertical heights, stellar clumps, star formation rates, and Kennicutt-Schmidt relations. Quantities such as velocity dispersions are very robust (agreement within a few tens of percent at all radii) while measures like newly-formed stellar clump mass functions show more significant variation (difference by up to a factor of ~3). Systematic differences exist, for example, between mesh-based and particle-based codes in the low density region, and between more diffusive and less diffusive schemes in the high density tail of the density distribution. Yet intrinsic code differences are generally small compared to the variations in numerical implementations of the common subgrid physics such as supernova feedback. Lastly, our experiment reassures that, if adequately designed in accordance with our proposed common parameters, results of a modern high-resolution galaxy formation simulation are more sensitive to input physics than to intrinsic differences in numerical schemes.
NASA/MSFC FY92 Earth Science and Applications Program Research Review
NASA Technical Reports Server (NTRS)
Arnold, James E. (Editor); Leslie, Fred W. (Editor)
1993-01-01
A large amount of attention has recently been given to global issues such as the ozone hole, tropospheric temperature variability, etc. A scientific challenge is to better understand atmospheric processes on a variety of spatial and temporal scales in order to predict environmental changes. Measurement of geophysical parameters such as wind, temperature, and moisture are needed to validate theories, provide analyzed data sets, and initialize or constrain numerical models. One of NASA's initiatives is the Mission to Planet Earth Program comprised of an Earth Observation System (EOS) and the scientific strategy to analyze these data. This work describes these efforts in the context of satellite data analysis and fundamental studies of atmospheric dynamics which examine selected processes important to the global circulation.
Knudson, M D; Desjarlais, M P; Becker, A; Lemke, R W; Cochrane, K R; Savage, M E; Bliss, D E; Mattsson, T R; Redmer, R
2015-06-26
Eighty years ago, it was proposed that solid hydrogen would become metallic at sufficiently high density. Despite numerous investigations, this transition has not yet been experimentally observed. More recently, there has been much interest in the analog of this predicted metallic transition in the dense liquid, due to its relevance to planetary science. Here, we show direct observation of an abrupt insulator-to-metal transition in dense liquid deuterium. Experimental determination of the location of this transition provides a much-needed benchmark for theory and may constrain the region of hydrogen-helium immiscibility and the boundary-layer pressure in standard models of the internal structure of gas-giant planets. Copyright © 2015, American Association for the Advancement of Science.
Theoretical study of strength of elastic-plastic water-saturated interface under constrained shear
NASA Astrophysics Data System (ADS)
Dimaki, Andrey V.; Shilko, Evgeny V.; Psakhie, Sergey G.
2016-11-01
This paper presents a theoretical study of the shear strength of an elastic-plastic, water-filled interface between elastic permeable blocks under compression. The medium is described within the discrete element method. The relationship between the stress-strain state of the solid skeleton and the pore pressure of the liquid is described in the framework of Biot's model of poroelasticity. The simulations demonstrate that the shear strength of an elastic-plastic interface depends strongly and non-linearly on permeability and loading. We propose an empirical relation that approximates the obtained numerical results under the assumption of an interplay between dilation of the material and mass transfer of the liquid.
Newly-Developed 3D GRMHD Code and its Application to Jet Formation
NASA Technical Reports Server (NTRS)
Mizuno, Y.; Nishikawa, K.-I.; Koide, S.; Hardee, P.; Fishman, G. J.
2006-01-01
We have developed a new three-dimensional general relativistic magnetohydrodynamic (GRMHD) code using a conservative, high-resolution shock-capturing scheme. The numerical fluxes are calculated with the HLL approximate Riemann solver, and the flux-interpolated constrained transport scheme is used to maintain a divergence-free magnetic field. We have performed various one-dimensional test problems in both special and general relativity using several reconstruction methods and found that the new 3D GRMHD code shows substantial improvements over our previous model. Preliminary results show jet formation from a geometrically thin accretion disk near both non-rotating and rotating black holes. We will discuss how the jet properties depend on the rotation of the black hole and the magnetic field strength.
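For reference, the HLL approximate Riemann flux mentioned above has a compact closed form. The sketch below is the generic single-interface version (not the relativistic MHD implementation of the code itself), with the wave-speed estimates S_L and S_R taken as given inputs.

```python
import numpy as np

def hll_flux(U_L, U_R, F_L, F_R, S_L, S_R):
    """HLL approximate Riemann flux at a single cell interface.

    U_L, U_R : conserved states left/right of the interface
    F_L, F_R : physical fluxes evaluated from those states
    S_L, S_R : estimates of the fastest left/right signal speeds
    """
    if S_L >= 0.0:            # all waves move right: upwind from the left
        return F_L
    if S_R <= 0.0:            # all waves move left: upwind from the right
        return F_R
    # Intermediate case: standard HLL average of the two states
    return (S_R * F_L - S_L * F_R + S_L * S_R * (U_R - U_L)) / (S_R - S_L)
```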
Combined micro and macro geodynamic modelling of mantle flow: methods, potentialities and limits.
NASA Astrophysics Data System (ADS)
Faccenda, M.
2015-12-01
Over the last few years, geodynamic simulations aiming at reconstructing the Earth's internal dynamics have increasingly attempted to link processes occurring at the micro (i.e., strain-induced lattice preferred orientation (LPO) of crystal aggregates) and macro scale (2D/3D mantle convection). As a major outcome, such a combined approach results in the prediction of the modelled region's elastic properties that, in turn, can be used to perform seismological synthetic experiments. By comparison with observables, the geodynamic simulations can then be considered a good numerical analogue of specific tectonic settings, constraining their deep structure and recent tectonic evolution. In this contribution, I will discuss the recent methodologies, potentialities and current limits of combined micro- and macro-flow simulations, with particular attention to convergent margins, whose dynamics and deep structure are still the object of extensive study.
NASA Technical Reports Server (NTRS)
Mueller, T. J. (Editor)
1985-01-01
Topics of interest in the design, flow modeling and visualization, and turbulence and flow separation effects for low Reynolds number (Re) airfoils are discussed. Design methods are presented for Re from 50,000 to 500,000, including a viscous-inviscid coupling method and a method using a constrained pitching moment. The effects of pressure gradients, unsteady viscous aerodynamics and separation bubbles are investigated, with particular note made of factors which most influence the size and location of separation bubbles and control their effects. Attention is also given to experimentation with low Re airfoils and to numerical models of symmetry breaking and lift hysteresis from separation. Both steady and unsteady flow experiments are reviewed, with the trials having been held in wind tunnels and the free atmosphere. The topics discussed are of interest to designers of RPVs, high altitude aircraft, sailplanes, ultralights and wind turbines.
Dark matter in E6 Grand Unification
NASA Astrophysics Data System (ADS)
Schwichtenberg, Jakob
2018-02-01
We discuss fermionic dark matter in non-supersymmetric E6 Grand Unification. The fundamental representation of E6 contains, in addition to the standard model fermions, exotic fermions and we argue that one of them is a viable, interesting dark matter candidate. Its stability is guaranteed by a discrete remnant symmetry, which is an unbroken subgroup of the E6 gauge symmetry. We compute the symmetry breaking scales and the effect of possible threshold corrections by solving the renormalization group equations numerically after imposing gauge coupling unification. Since the Yukawa couplings of the exotic and the standard model fermions have a common origin, the mass of the dark matter particles is constrained. We find a mass range of 3 · 10^9 GeV ≲ m_DM ≲ 1 · 10^13 GeV for our E6 dark matter candidate, which is within the reach of next-generation direct detection experiments.
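For context, gauge-coupling running of the kind used to impose unification is governed by renormalization group equations of the form dg_i/d ln μ = b_i g_i^3/(16π²) at one loop. The sketch below integrates only the one-loop Standard Model coefficients (b = 41/10, -19/6, -7 in GUT normalization) with rough couplings at M_Z; the E6 breaking chains and threshold corrections discussed in the paper would change both the coefficients and the matching scales, so this is purely illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# One-loop SM beta coefficients in GUT normalization (illustrative assumption;
# an E6 breaking chain modifies these above each intermediate scale).
B = np.array([41.0 / 10.0, -19.0 / 6.0, -7.0])

def rge(t, g):
    """d g_i / dt with t = ln(mu / M_Z), one-loop running."""
    return B * g**3 / (16.0 * np.pi**2)

# Rough gauge couplings at M_Z (alpha_1, alpha_2, alpha_3 ~ 0.0169, 0.0338, 0.118).
alpha_mz = np.array([0.0169, 0.0338, 0.118])
g_mz = np.sqrt(4.0 * np.pi * alpha_mz)

t_max = np.log(1e16 / 91.19)              # run up to ~10^16 GeV
sol = solve_ivp(rge, (0.0, t_max), g_mz)

alpha_inv = 4.0 * np.pi / sol.y**2        # inverse couplings vs. scale
print(alpha_inv[:, -1])                   # values near the would-be unification scale
```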
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Tianxing; Lin, Hai-Qing; Gubernatis, James E.
2015-09-01
By using the constrained-phase quantum Monte Carlo method, we performed a systematic study of the pairing correlations in the ground state of the doped Kane-Mele-Hubbard model on a honeycomb lattice. We find that pairing correlations with d + id symmetry dominate close to half filling, but pairing correlations with p + ip symmetry dominate as hole doping moves the system below three-quarters filling. We correlate these behaviors of the pairing correlations with the topology of the Fermi surfaces of the non-interacting problem. We also find that the effective pairing correlation is enhanced greatly as the interaction increases, and these superconducting correlations are robust against varying the spin-orbit coupling strength. Finally, our numerical results suggest a possible way to realize spin triplet superconductivity in doped honeycomb-like materials or ultracold atoms in optical traps.
An efficient structural finite element for inextensible flexible risers
NASA Astrophysics Data System (ADS)
Papathanasiou, T. K.; Markolefas, S.; Khazaeinejad, P.; Bahai, H.
2017-12-01
A core part of all numerical models used for flexible riser analysis is the structural component representing the main body of the riser as a slender beam. Loads acting on this structural element are self-weight, buoyant and hydrodynamic forces, internal pressure and others. A structural finite element for an inextensible riser with a point-wise enforcement of the inextensibility constraint is presented. In particular, the inextensibility constraint is applied only at the nodes of the meshed arc length parameter. Among the virtues of the proposed approach is the flexibility in the application of boundary conditions and the easy incorporation of dissipative forces. Several attributes of the proposed finite element scheme are analysed and computation times for the solution of some simplified examples are discussed. Future developments aim at the appropriate implementation of material and geometric parameters for the beam model, i.e. flexural and torsional rigidity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blennow, Mattias; Fernandez-Martínez, Enrique; Zaldívar, Bryan, E-mail: emb@kth.se, E-mail: enrique.fernandez-martinez@uam.es, E-mail: b.zaldivar.m@csic.es
2014-01-01
The popular freeze-out paradigm for Dark Matter (DM) production relies on DM-baryon couplings of the order of the weak interactions. However, different search strategies for DM have failed to provide conclusive evidence of such (non-gravitational) interactions, while greatly reducing the parameter space of many representative models. This motivates the study of alternative mechanisms for DM genesis. In the freeze-in framework, the DM is slowly populated from the thermal bath while never reaching equilibrium. In this work, we analyse in detail the possibility of producing a frozen-in DM via a mediator particle which acts as a portal. We give analytical estimates of different freeze-in regimes and support them with full numerical analyses, taking into account the proper distribution functions of bath particles. Finally, we constrain the parameter space of generic models by requiring agreement with DM relic abundance observations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bettoni, Dario; Nusser, Adi; Blas, Diego
We develop the framework for testing Lorentz invariance in the dark matter sector using galactic dynamics. We consider a Lorentz violating (LV) vector field acting on the dark matter component of a satellite galaxy orbiting in a host halo. We introduce a numerical model for the dynamics of satellites in a galactic halo and for a galaxy in a rich cluster to explore observational consequences of such an LV field. The orbital motion of a satellite excites a time dependent LV force which greatly affects its internal dynamics. Our analysis points out key observational signatures which serve as probes of LV forces. These include modifications to the line of sight velocity dispersion, mass profiles and shapes of satellites. With future data and more detailed modeling, these signatures can be exploited to constrain a new region of the parameter space describing the LV in the dark matter sector.
Extension of non-linear beam models with deformable cross sections
NASA Astrophysics Data System (ADS)
Sokolov, I.; Krylov, S.; Harari, I.
2015-12-01
Geometrically exact beam theory is extended to allow distortion of the cross section. We present an appropriate set of cross-section basis functions and provide physical insight to the cross-sectional distortion from linear elastostatics. The beam formulation in terms of material (back-rotated) beam internal force resultants and work-conjugate kinematic quantities emerges naturally from the material description of virtual work of constrained finite elasticity. The inclusion of cross-sectional deformation allows straightforward application of three-dimensional constitutive laws in the beam formulation. Beam counterparts of applied loads are expressed in terms of the original three-dimensional data. Special attention is paid to the treatment of the applied stress, keeping in mind applications such as hydrogel actuators under environmental stimuli or devices made of electroactive polymers. Numerical comparisons show the ability of the beam model to reproduce finite elasticity results with good efficiency.
Role of mantle flow in Nubia-Somalia plate divergence
NASA Astrophysics Data System (ADS)
Stamps, D. S.; Iaffaldano, G.; Calais, E.
2015-01-01
Present-day continental extension along the East African Rift System (EARS) has often been attributed to diverging sublithospheric mantle flow associated with the African Superplume. This implies a degree of viscous coupling between mantle and lithosphere that remains poorly constrained. Recent advances in estimating present-day opening rates along the EARS from geodesy offer an opportunity to address this issue with geodynamic modeling of the mantle-lithosphere system. Here we use numerical models of the global mantle-plates coupled system to test the role of present-day mantle flow in Nubia-Somalia plate divergence across the EARS. The scenario yielding the best fit to geodetic observations is one where torques associated with gradients of gravitational potential energy stored in the African highlands are resisted by weak continental faults and mantle basal drag. These results suggest that shear tractions from diverging mantle flow play a minor role in present-day Nubia-Somalia divergence.
Visualization in mechanics: the dynamics of an unbalanced roller
NASA Astrophysics Data System (ADS)
Cumber, Peter S.
2017-04-01
It is well known that mechanical engineering students often find mechanics a difficult area to grasp. This article presents a system of equations describing the motion of a balanced and an unbalanced roller constrained by a pivot arm. A wide range of dynamics can be simulated with the model. The equations of motion are embedded in a graphical user interface for their numerical solution in MATLAB. This allows a student's focus to be on the influence of different parameters on the system dynamics. The simulation tool can be used as a dynamics demonstrator in a lecture or as an educational tool driven by the imagination of the student. By way of demonstration, the simulation tool has been applied to a range of roller-pivot arm configurations. In addition, approximations to the equations of motion are explored and a second-order model is shown to be accurate for a limited range of parameters.
Modeling chain folding in protein-constrained circular DNA.
Martino, J A; Olson, W K
1998-01-01
An efficient method for sampling equilibrium configurations of DNA chains binding one or more DNA-bending proteins is presented. The technique is applied to obtain the tertiary structures of minimal bending energy for a selection of dinucleosomal minichromosomes that differ in degree of protein-DNA interaction, protein spacing along the DNA chain contour, and ring size. The protein-bound portions of the DNA chains are represented by tight, left-handed supercoils of fixed geometry. The protein-free regions are modeled individually as elastic rods. For each random spatial arrangement of the two nucleosomes assumed during a stochastic search for the global minimum, the paths of the flexible connecting DNA segments are determined through a numerical solution of the equations of equilibrium for torsionally relaxed elastic rods. The minimal energy forms reveal how protein binding and spacing and plasmid size differentially affect folding and offer new insights into experimental minichromosome systems. PMID:9591675
A Self-Calibrating Radar Sensor System for Measuring Vital Signs.
Huang, Ming-Chun; Liu, Jason J; Xu, Wenyao; Gu, Changzhan; Li, Changzhi; Sarrafzadeh, Majid
2016-04-01
Vital signs (i.e., heartbeat and respiration) are crucial physiological signals that are useful in numerous medical applications. The process of measuring these signals should be simple, reliable, and comfortable for patients. In this paper, a noncontact self-calibrating vital signs monitoring system based on the Doppler radar is presented. The system hardware and software were designed with a four-tiered layer structure. To enable accurate vital signs measurement, baseband signals in the radar sensor were modeled and a framework for signal demodulation was proposed. Specifically, a signal model identification method was formulated into a quadratically constrained l1 minimization problem and solved using the upper bound and linear matrix inequality (LMI) relaxations. The performance of the proposed system was comprehensively evaluated using three experimental sets, and the results indicated that this system can be used to effectively measure human vital signs.
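The quadratically constrained ℓ1 problem referenced above is, in generic form, min ||x||_1 subject to ||Ax - b||_2 ≤ ε. A minimal sketch using a generic convex-optimization library (cvxpy, not the LMI-relaxation solution of the paper) might look as follows; A, b and ε are placeholders.

```python
import numpy as np
import cvxpy as cp

def qc_l1_recover(A, b, eps):
    """Sketch of min ||x||_1  s.t.  ||A x - b||_2 <= eps (basis-pursuit denoising)."""
    x = cp.Variable(A.shape[1])
    objective = cp.Minimize(cp.norm1(x))
    constraints = [cp.norm(A @ x - b, 2) <= eps]
    cp.Problem(objective, constraints).solve()
    return x.value

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((40, 100))                       # wide measurement matrix
    x_true = np.zeros(100)
    x_true[[3, 40, 77]] = [1.0, -2.0, 0.5]                   # sparse underlying signal
    b = A @ x_true + 0.01 * rng.standard_normal(40)
    x_hat = qc_l1_recover(A, b, eps=0.2)
    print(np.round(x_hat[[3, 40, 77]], 2))                   # large entries are recovered
```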
Application of precomputed control laws in a reconfigurable aircraft flight control system
NASA Technical Reports Server (NTRS)
Moerder, Daniel D.; Halyo, Nesim; Broussard, John R.; Caglayan, Alper K.
1989-01-01
A self-repairing flight control system concept in which the control law is reconfigured after actuator and/or control surface damage to preserve stability and pilot command tracking is described. A key feature of the controller is reconfigurable multivariable feedback. The feedback gains are designed off-line and scheduled as a function of the aircraft control impairment status so that reconfiguration is performed simply by updating the gain schedule after detection of an impairment. A novel aspect of the gain schedule design procedure is that the schedule is calculated using a linear quadratic optimization-based simultaneous stabilization algorithm in which the scheduled gain is constrained to stabilize a collection of plant models representing the aircraft in various control failure modes. A description and numerical evaluation of a controller design for a model of a statically unstable high-performance aircraft are given.
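One way to picture the off-line gain scheduling described above, in a simplified form relative to the simultaneous-stabilization algorithm of the paper, is to compute a separate linear-quadratic gain for each plant model in the impairment set and store the gains in a table keyed by impairment status. The plant matrices below are hypothetical placeholders.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Continuous-time LQR gain K such that u = -K x minimizes the quadratic cost."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# Hypothetical two-state plant with a nominal and a "reduced actuator" model.
A = np.array([[0.0, 1.0], [2.0, -0.5]])          # statically unstable example dynamics
B_nominal = np.array([[0.0], [1.0]])
B_failed = np.array([[0.0], [0.4]])              # reduced control effectiveness
Q, R = np.eye(2), np.array([[1.0]])

# Off-line schedule: one gain per impairment status, looked up on-line after detection.
gain_schedule = {
    "nominal": lqr_gain(A, B_nominal, Q, R),
    "actuator_loss": lqr_gain(A, B_failed, Q, R),
}
print(gain_schedule["actuator_loss"])
```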
Liu, Qingshan; Dang, Chuangyin; Huang, Tingwen
2013-02-01
This paper presents a decision-making model described by a recurrent neural network for dynamic portfolio optimization. The portfolio-optimization problem is first converted into a constrained fractional programming problem. Since the objective function in the programming problem is not convex, traditional optimization techniques are not applicable to this problem. Fortunately, the objective function in the fractional programming is pseudoconvex on the feasible region. This leads to a one-layer recurrent neural network modeled by means of a discontinuous dynamic system. To ensure optimal solutions for portfolio optimization, the convergence of the proposed neural network is analyzed and proved. In fact, the neural network is guaranteed to reach the optimal solutions for portfolio-investment advice if some mild conditions are satisfied. A numerical example with simulation results substantiates the effectiveness and illustrates the characteristics of the proposed neural network.
ERIC Educational Resources Information Center
Hoijtink, Herbert; Molenaar, Ivo W.
1997-01-01
This paper shows that a certain class of constrained latent class models may be interpreted as a special case of nonparametric multidimensional item response models. Parameters of this latent class model are estimated using an application of the Gibbs sampler, and model fit is investigated using posterior predictive checks. (SLD)
Uncertainty Analysis of Decomposing Polyurethane Foam
NASA Technical Reports Server (NTRS)
Hobbs, Michael L.; Romero, Vicente J.
2000-01-01
Sensitivity/uncertainty analyses are necessary to determine where to allocate resources for improved predictions in support of our nation's nuclear safety mission. Yet, sensitivity/uncertainty analyses are not commonly performed on complex combustion models because the calculations are time consuming, CPU intensive, nontrivial exercises that can lead to deceptive results. To illustrate these ideas, a variety of sensitivity/uncertainty analyses were used to determine the uncertainty associated with thermal decomposition of polyurethane foam exposed to high radiative flux boundary conditions. The polyurethane used in this study is a rigid closed-cell foam used as an encapsulant. Related polyurethane binders such as Estane are used in many energetic materials of interest to the JANNAF community. The complex, finite element foam decomposition model used in this study has 25 input parameters that include chemistry, polymer structure, and thermophysical properties. The response variable was selected as the steady-state decomposition front velocity calculated as the derivative of the decomposition front location versus time. An analytical mean value sensitivity/uncertainty (MV) analysis was used to determine the standard deviation by taking numerical derivatives of the response variable with respect to each of the 25 input parameters. Since the response variable is also a derivative, the standard deviation was essentially determined from a second derivative that was extremely sensitive to numerical noise. To minimize the numerical noise, 50-micrometer element dimensions and approximately 1-msec time steps were required to obtain stable uncertainty results. As an alternative method to determine the uncertainty and sensitivity in the decomposition front velocity, surrogate response surfaces were generated for use with a constrained Latin Hypercube Sampling (LHS) technique. Two surrogate response surfaces were investigated: 1) a linear surrogate response surface (LIN) and 2) a quadratic response surface (QUAD). The LHS techniques do not require derivatives of the response variable and are subsequently relatively insensitive to numerical noise. To compare the LIN and QUAD methods to the MV method, a direct LHS analysis (DLHS) was performed using the full grid and timestep resolved finite element model. The surrogate response models (LIN and QUAD) are shown to give acceptable values of the mean and standard deviation when compared to the fully converged DLHS model.
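The LHS-plus-surrogate workflow described here is generic enough to sketch: draw a Latin hypercube sample over the input ranges, evaluate a (here fictitious) response function, fit a linear (LIN-style) surrogate by least squares, and propagate a much larger sample through the cheap surrogate to estimate the mean and standard deviation. The parameter count, ranges and response function below are placeholders, not the 25 foam-model inputs.

```python
import numpy as np
from scipy.stats import qmc

n_params, n_samples = 5, 200                        # placeholders, not the 25 foam inputs

# Latin hypercube sample scaled to assumed parameter ranges.
unit = qmc.LatinHypercube(d=n_params, seed=3).random(n=n_samples)
lo, hi = np.full(n_params, 0.5), np.full(n_params, 1.5)
X = qmc.scale(unit, lo, hi)

def response(x):
    """Stand-in for the expensive decomposition-front-velocity calculation."""
    return 2.0 * x[0] - 0.5 * x[1] ** 2 + 0.1 * np.sin(x[2]) + 0.05 * x[3] * x[4]

y = np.apply_along_axis(response, 1, X)

# Linear surrogate fitted by least squares: y ~ c0 + c . x
design = np.hstack([np.ones((n_samples, 1)), X])
coeff, *_ = np.linalg.lstsq(design, y, rcond=None)

# Propagate a large sample through the cheap surrogate to get mean and std.
big = qmc.scale(qmc.LatinHypercube(d=n_params, seed=5).random(n=100_000), lo, hi)
y_big = np.hstack([np.ones((big.shape[0], 1)), big]) @ coeff
print("surrogate mean:", y_big.mean(), " std:", y_big.std(ddof=1))
```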
Ellison, C. L.; Burby, J. W.; Qin, H.
2015-11-01
One popular technique for the numerical time advance of charged particles interacting with electric and magnetic fields according to the Lorentz force law [1], [2], [3] and [4] is the Boris algorithm. Its popularity stems from simple implementation, rapid iteration, and excellent long-term numerical fidelity [1] and [5]. Excellent long-term behavior strongly suggests the numerical dynamics exhibit conservation laws analogous to those governing the continuous Lorentz force system [6]. Moreover, without conserved quantities to constrain the numerical dynamics, algorithms typically dissipate or accumulate important observables such as energy and momentum over long periods of simulated time [6]. Identification of the conservative properties of an algorithm is important for establishing rigorous expectations on the long-term behavior; energy-preserving, symplectic, and volume-preserving methods each have particular implications for the qualitative numerical behavior [6], [7], [8], [9], [10] and [11].
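For reference, a textbook non-relativistic Boris push (not tied to the specific analysis in this excerpt) splits the electric impulse around a pure rotation about the magnetic field; that structure is what underlies the algorithm's good long-term behavior.

```python
import numpy as np

def boris_push(x, v, E, B, q, m, dt):
    """One non-relativistic Boris step for a charged particle (textbook form)."""
    # Half electric impulse
    v_minus = v + (q * E / m) * (0.5 * dt)

    # Rotation about B
    t = (q * B / m) * (0.5 * dt)
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)

    # Second half electric impulse, then position update
    v_new = v_plus + (q * E / m) * (0.5 * dt)
    x_new = x + v_new * dt
    return x_new, v_new

if __name__ == "__main__":
    # Uniform B along z, no E: the particle should gyrate at constant speed.
    x, v = np.zeros(3), np.array([1.0, 0.0, 0.0])
    E, B = np.zeros(3), np.array([0.0, 0.0, 1.0])
    for _ in range(1000):
        x, v = boris_push(x, v, E, B, q=1.0, m=1.0, dt=0.05)
    print(np.linalg.norm(v))   # stays ~1, illustrating speed preservation in a pure B field
```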
Interior point techniques for LP and NLP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evtushenko, Y.
By using a surjective mapping, the initial constrained optimization problem is transformed into a problem in a new space with only equality constraints. For the numerical solution of the latter problem we use the generalized gradient-projection method and Newton's method. After inverse transformation to the initial space we obtain a family of numerical methods for solving optimization problems with equality and inequality constraints. In the linear programming case, after some simplification, we obtain Dikin's algorithm, the affine scaling algorithm and a generalized primal-dual interior point linear programming algorithm.
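A bare-bones version of Dikin's affine-scaling iteration for a standard-form LP (min c·x subject to Ax = b, x > 0) illustrates the interior-point idea mentioned here. It is a sketch only, with no safeguards for unbounded or degenerate problems, and the stopping test is a simple heuristic.

```python
import numpy as np

def affine_scaling(A, b, c, x0, gamma=0.9, tol=1e-8, max_iter=200):
    """Dikin-style affine scaling for  min c.x  s.t.  A x = b, x > 0.

    x0 must be a strictly feasible interior point.  Sketch only: no handling
    of unbounded or degenerate problems.
    """
    x = x0.astype(float)
    for _ in range(max_iter):
        X2 = np.diag(x**2)
        # Dual estimate and reduced costs from the scaled normal equations.
        w = np.linalg.solve(A @ X2 @ A.T, A @ X2 @ c)
        r = c - A.T @ w
        d = -X2 @ r                       # descent direction in the original variables
        if np.linalg.norm(x * r) < tol:   # scaled reduced costs small -> near optimal
            break
        neg = d < 0
        if not np.any(neg):
            raise RuntimeError("problem appears unbounded in this sketch")
        alpha = gamma * np.min(-x[neg] / d[neg])   # stay strictly inside x > 0
        x = x + alpha * d
    return x

if __name__ == "__main__":
    # min -x1 - 2 x2  s.t.  x1 + x2 + s = 4,  x2 + t = 3,  all variables >= 0
    A = np.array([[1.0, 1.0, 1.0, 0.0], [0.0, 1.0, 0.0, 1.0]])
    b = np.array([4.0, 3.0])
    c = np.array([-1.0, -2.0, 0.0, 0.0])
    x0 = np.array([1.0, 1.0, 2.0, 2.0])            # strictly feasible interior start
    print(np.round(affine_scaling(A, b, c, x0), 4))  # approaches ~[1, 3, 0, 0]
```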
Empirical models of Jupiter's interior from Juno data. Moment of inertia and tidal Love number k2
NASA Astrophysics Data System (ADS)
Ni, Dongdong
2018-05-01
Context. The Juno spacecraft has significantly improved the accuracy of gravitational harmonic coefficients J4, J6 and J8 during its first two perijoves. However, there are still differences in the interior model predictions of core mass and envelope metallicity because of the uncertainties in the hydrogen-helium equations of state. New theoretical approaches or observational data are hence required in order to further constrain the interior models of Jupiter. A well constrained interior model of Jupiter is helpful for understanding not only the dynamic flows in the interior, but also the formation history of giant planets. Aims: We present the radial density profiles of Jupiter fitted to the Juno gravity field observations. Also, we aim to investigate our ability to constrain the core properties of Jupiter using its moment of inertia and tidal Love number k2 which could be accessible by the Juno spacecraft. Methods: In this work, the radial density profile was constrained by the Juno gravity field data within the empirical two-layer model in which the equations of state are not needed as an input model parameter. Different two-layer models are constructed in terms of core properties. The dependence of the calculated moment of inertia and tidal Love number k2 on the core properties was investigated in order to discern their abilities to further constrain the internal structure of Jupiter. Results: The calculated normalized moment of inertia (NMOI) ranges from 0.2749 to 0.2762, in reasonable agreement with the other predictions. There is a good correlation between the NMOI value and the core properties including masses and radii. Therefore, measurements of NMOI by Juno can be used to constrain both the core mass and size of Jupiter's two-layer interior models. For the tidal Love number k2, the degeneracy of k2 is found and analyzed within the two-layer interior model. In spite of this, measurements of k2 can still be used to further constrain the core mass and size of Jupiter's two-layer interior models.
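To make the normalized moment of inertia concrete: for a spherically layered body, NMOI = I/(M R^2) with I = (8π/3) ∫ ρ(r) r^4 dr and M = 4π ∫ ρ(r) r^2 dr over 0 ≤ r ≤ R. The sketch below evaluates this for an assumed constant-density core plus envelope; the densities and core radius are placeholders, not the fitted profiles of the paper, so the value it prints is illustrative only.

```python
import numpy as np

def nmoi_two_layer(R, r_core, rho_core, rho_env, n=200_000):
    """Normalized moment of inertia I / (M R^2) of a two-layer sphere.

    Constant densities per layer are an illustrative simplification; empirical
    interior models use continuous radial density profiles.
    """
    r = np.linspace(0.0, R, n)
    dr = r[1] - r[0]
    rho = np.where(r < r_core, rho_core, rho_env)
    mass = 4.0 * np.pi * np.sum(rho * r**2) * dr
    inertia = (8.0 * np.pi / 3.0) * np.sum(rho * r**4) * dr
    return inertia / (mass * R**2)

if __name__ == "__main__":
    R_jup = 6.9911e7                                     # m, rough mean radius of Jupiter
    # Placeholder two-layer profile: dense core out to 0.15 R, lighter envelope.
    print(nmoi_two_layer(R_jup, 0.15 * R_jup, rho_core=2.0e4, rho_env=1.1e3))
    # Sanity check: a uniform sphere gives the classical value 0.4.
    print(nmoi_two_layer(1.0, 0.5, 1.0, 1.0))
```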
NASA Astrophysics Data System (ADS)
Sassi, F.; McDonald, S. E.; McCormack, J. P.; Tate, J.; Liu, H.; Kuhl, D.
2017-12-01
The 2015-2016 boreal winter and spring was a dynamically very interesting time in the lower atmosphere: a minor high latitude stratospheric warming occurred in February 2016; an interrupted descent of the QBO was found in the tropical stratosphere; and a strong warm ENSO event took place in the tropical Pacific Ocean. The stratospheric warming, the QBO and ENSO are known to affect the meteorology of the upper atmosphere in different ways: low latitude solar tides and high latitude planetary-scale waves have potentially important implications for the structure of the ionosphere. In this study, we use global atmospheric analyses from a high-altitude version of the Navy Global Environmental Model (HA-NAVGEM) to constrain the meteorology of numerical simulations of the Specified Dynamics Whole Atmosphere Community Climate Model, extended version (SD-WACCM-X). We describe the large-scale behavior of tropical tides and mid-latitude planetary waves that emerge in the lower thermosphere. The effect on the ionosphere is captured by numerical simulations of the Navy Highly Integrated Thermosphere Ionosphere Demonstration System (Navy-HITIDES), which uses the meteorology generated by SD-WACCM-X to drive ionospheric simulations during this time period. We will analyze the impact of various dynamical fields on the zonal behavior of the ionosphere by selectively filtering the relevant dynamical modes.
NASA Astrophysics Data System (ADS)
Becker, Roland; Vexler, Boris
2005-06-01
We consider the calibration of parameters in physical models described by partial differential equations. This task is formulated as a constrained optimization problem with a cost functional of least squares type using information obtained from measurements. An important issue in the numerical solution of this type of problem is the control of the errors introduced, first, by discretization of the equations describing the physical model, and second, by measurement errors or other perturbations. Our strategy is as follows: we suppose that the user defines an interest functional I, which might depend on both the state variable and the parameters and which represents the goal of the computation. First, we propose an a posteriori error estimator which measures the error with respect to this functional. This error estimator is used in an adaptive algorithm to construct economical meshes by local mesh refinement. The proposed estimator requires the solution of an auxiliary linear equation. Second, we address the question of sensitivity. Applying techniques similar to those above, we derive quantities which describe the influence of small changes in the measurements on the value of the interest functional. These numbers, which we call relative condition numbers, give additional information on the problem under consideration. They can be computed by means of the solution of the auxiliary problem determined before. Finally, we demonstrate our approach on a parameter calibration problem for a model flow problem.
A new 3-D thin-skinned rock glacier model based on helicopter GPR results from the Swiss Alps
NASA Astrophysics Data System (ADS)
Merz, Kaspar; Green, Alan G.; Buchli, Thomas; Springman, Sarah M.; Maurer, Hansruedi
2015-06-01
Mountainous locations and steep rugged surfaces covered by boulders and other loose debris are the main reasons why rock glaciers are among the most challenging geological features to investigate using ground-based geophysical methods. Consequently, geophysical surveys of rock glaciers have only ever involved recording data along sparse lines. To address this issue, we acquired quasi-3-D ground-penetrating radar (GPR) data across a rock glacier in the Swiss Alps using a helicopter-mounted system. Our interpretation of the derived GPR images constrained by borehole information results in a novel "thin-skinned" rock glacier model that explains a concentration of deformation across a principal shear zone (décollement) and faults across which rock glacier lobes are juxtaposed. The new model may be applicable to many rock glaciers worldwide. We suggest that the helicopter GPR method may be useful for 3-D surveying numerous other difficult-to-access mountainous terrains.
Topography associated with crustal flow in continental collisions, with application to Tibet
NASA Astrophysics Data System (ADS)
Bendick, R.; McKenzie, D.; Etienne, J.
2008-10-01
Collision between an undeformable indenter and a viscous region generates isostatically compensated topography by solid-state flow. We model this process numerically, using a finite element scheme. The slope, amplitude and symmetry of the topographic signal depend on the indenter size and the Argand number of the viscous region, a dimensionless ratio of gravitational body forces to viscous forces. When applied to convergent continental settings, these scaling rules provide estimates of the position of an indenter at depth and the mechanical properties of the viscous region, especially effective viscosity. In Tibet, forward modelling suggests that some elevated, low relief topography within the northern plateau may be attributed to lower crustal flow, stimulated by a crustal indenter, possibly Indian lithosphere. The best-fit model constrains the northernmost limit of this indenter to 33.7°N and the maximum effective viscosity of Eurasian middle and lower crust to 1 × 10^20 ± 0.3 × 10^20 Pa s.
NASA Astrophysics Data System (ADS)
Goyal, Abheeti; Toschi, Federico; van der Schoot, Paul
2017-11-01
We study the morphological evolution and dynamics of phase separation of a multi-component mixture in a thin film constrained by a substrate. Specifically, we have explored the surface-directed spinodal decomposition of a multi-component mixture numerically with free-energy lattice Boltzmann (LB) simulations. The distinguishing feature of this model over the Shan-Chen (SC) model is that we have explicit and independent control over the free energy functional and the equation of state (EoS) of the system. This vastly expands the range of physical systems that can be realistically simulated by LB methods. We investigate the effect of composition, film thickness and substrate wetting on the phase morphology and the mechanism of growth in the vicinity of the substrate. The phase morphology and averaged domain size in the vicinity of the substrate fluctuate greatly in both the parallel and perpendicular directions due to wetting of the substrate. Additionally, we describe how the model presented here can be extended to include an arbitrary number of fluid components.
NASA Astrophysics Data System (ADS)
Orlando, S.; Miceli, M.; Petruk, O.
2017-02-01
Supernova remnants (SNRs) are diffuse extended sources characterized by a complex morphology and a non-uniform distribution of ejecta. Such a morphology reflects pristine structures and features of the progenitor supernova (SN) and the early interaction of the SN blast wave with the inhomogeneous circumstellar medium (CSM). Deciphering the observations of SNRs might open the possibility to investigate the physical properties of both the interacting ejecta and the shocked CSM. This requires accurate numerical models which describe the evolution from the SN explosion to the remnant development and which connect the emission properties of the remnants to the progenitor SNe. Here we show how multi-dimensional SN-SNR hydrodynamic models have been very effective in deciphering observations of SNR Cassiopeia A and SN 1987A, thus unveiling the structure of ejecta in the immediate aftermath of the SN explosion and constraining the 3D pre-supernova structure and geometry of the environment surrounding the progenitor SN.
Strongly coupled gauge theories: What can lattice calculations teach us?
NASA Astrophysics Data System (ADS)
Hasenfratz, A.; Brower, R. C.; Rebbi, C.; Weinberg, E.; Witzel, O.
2017-12-01
The dynamical origin of electroweak symmetry breaking is an open question with many possible theoretical explanations. Strongly coupled systems predicting the Higgs boson as a bound state of a new gauge-fermion interaction form one class of candidate models. Due to increased statistics, LHC run II will further constrain the phenomenologically viable models in the near future. In the meantime, it is important to understand the general properties and specific features of the different competing models. In this work we discuss many-flavor gauge-fermion systems that contain both massless (light) and massive fermions. The former provide Goldstone bosons and trigger electroweak symmetry breaking, while the latter indirectly influence the infrared dynamics. Numerical results reveal that such systems can exhibit a light 0^{++} isosinglet scalar, well separated from the rest of the spectrum. Further, when we set the scale via the vacuum expectation value of electroweak symmetry breaking, we predict a 2 TeV vector resonance which could be a generic feature of SU(3) gauge theories.
"Virtual shear box" experiments of stress and slip cycling within a subduction interface mélange
NASA Astrophysics Data System (ADS)
Webber, Sam; Ellis, Susan; Fagereng, Åke
2018-04-01
What role does the progressive geometric evolution of subduction-related mélange shear zones play in the development of strain transients? We use a "virtual shear box" experiment, based on outcrop-scale observations from an ancient exhumed subduction interface - the Chrystalls Beach Complex (CBC), New Zealand - to constrain numerical models of slip processes within a meters-thick shear zone. The CBC is dominated by large, competent clasts surrounded by interconnected weak matrix. Under constant slip velocity boundary conditions, models of the CBC produce stress cycling behavior, accompanied by mixed brittle-viscous deformation. This occurs as a consequence of the reorganization of competent clasts, and the progressive development and breakdown of stress bridges as clasts mutually obstruct one another. Under constant shear stress boundary conditions, the models show periods of relative inactivity punctuated by aseismic episodic slip at rapid rates (meters per year). Such a process may contribute to the development of strain transients such as slow slip.
NASA Astrophysics Data System (ADS)
Capdeville, Yann; Métivier, Ludovic
2018-05-01
Seismic imaging is an efficient tool to investigate the Earth interior. Many of the different imaging techniques currently used, including the so-called full waveform inversion (FWI), are based on limited frequency band data. Such data are not sensitive to the true earth model, but to a smooth version of it. This smooth version can be related to the true model by the homogenization technique. Homogenization for wave propagation in deterministic media with no scale separation, such as geological media, has been recently developed. With such an asymptotic theory, it is possible to compute an effective medium valid for a given frequency band such that effective waveforms and true waveforms are the same up to a controlled error. In this work we make the link between limited frequency band inversion, mainly FWI, and homogenization. We establish the relation between a true model and an FWI result model. This relation is important for a proper interpretation of FWI images. We numerically illustrate, in the 2-D case, that an FWI result is at best the homogenized version of the true model. Moreover, it appears that the homogenized FWI model is quite independent of the FWI parametrization, as long as it has enough degrees of freedom. In particular, inverting for the full elastic tensor is, in each of our tests, always a good choice. We show how the homogenization can help to understand FWI behaviour and help to improve its robustness and convergence by efficiently constraining the solution space of the inverse problem.
Do cosmological data rule out f(R) with w ≠ -1?
NASA Astrophysics Data System (ADS)
Battye, Richard A.; Bolliet, Boris; Pace, Francesco
2018-05-01
We review the equation of state (EoS) approach to dark sector perturbations and apply it to f(R) gravity models of dark energy. We show that the EoS approach is numerically stable and use it to set observational constraints on designer models. Within the EoS approach we build an analytical understanding of the dynamics of cosmological perturbations for the designer class of f(R) gravity models, characterized by the parameter B0 and the background equation of state of dark energy w. When we use the Planck cosmic microwave background temperature anisotropy, polarization, and lensing data as well as the baryonic acoustic oscillation data from SDSS and WiggleZ, we find B0 < 0.006 (95% C.L.) for the designer models with w = -1. Furthermore, we find B0 < 0.0045 and |w + 1| < 0.002 (95% C.L.) for the designer models with w ≠ -1. Previous analyses found similar results for designer and Hu-Sawicki f(R) gravity models using the effective field theory approach [Raveri et al., Phys. Rev. D 90, 043513 (2014), 10.1103/PhysRevD.90.043513; Hu et al., Mon. Not. R. Astron. Soc. 459, 3880 (2016), 10.1093/mnras/stw775]; this therefore hints that generic f(R) models with w ≠ -1 can be tightly constrained by current cosmological data, complementary to solar system tests [Brax et al., Phys. Rev. D 78, 104021 (2008), 10.1103/PhysRevD.78.104021; Faulkner et al., Phys. Rev. D 76, 063505 (2007), 10.1103/PhysRevD.76.063505]. When compared to a wCDM fluid with the same sound speed, we find that the equation of state for f(R) models is better constrained to be close to -1 by about an order of magnitude, due to the strong dependence of the perturbations on w.
A cellular automata approach for modeling surface water runoff
NASA Astrophysics Data System (ADS)
Jozefik, Zoltan; Nanu Frechen, Tobias; Hinz, Christoph; Schmidt, Heiko
2015-04-01
This abstract reports the development and application of a two-dimensional cellular-automata-based model, which couples the dynamics of overland flow, infiltration processes and surface evolution through sediment transport. The natural hill slopes are represented by their topographic elevation and spatially varying soil properties, namely infiltration rates and surface roughness coefficients. This model allows simulation of Hortonian overland flow and infiltration during complex rainfall events. An advantage of the cellular automata approach over the kinematic wave equations is that wet/dry interfaces that often appear with rainfall overland flows can be accurately captured and are not a source of numerical instabilities. An adaptive explicit time stepping scheme allows for rainfall events to be adequately resolved in time, while large time steps are taken during dry periods to provide for simulation run time efficiency. The time step is constrained by the CFL condition and mass conservation considerations. The spatial discretization is shown to be first-order accurate. For validation purposes, hydrographs for non-infiltrating and infiltrating plates are compared to the kinematic wave analytic solutions and data taken from the literature [1,2]. Results show that our cellular automata model reproduces hydrograph patterns with quantitative accuracy. However, recent work has shown that even though the hydrograph is satisfactorily reproduced, the flow field within the plot might be inaccurate [3]. For a more stringent validation, we compare steady state velocity, water flux, and water depth fields to rainfall simulation experiments conducted in Thies, Senegal [3]. Comparisons show that our model is able to accurately capture these flow properties. Currently, a sediment transport and deposition module is being implemented and tested. [1] M. Rousseau, O. Cerdan, O. Delestre, F. Dupros, F. James, S. Cordier. Overland flow modeling with the Shallow Water Equation using a well balanced numerical scheme: Adding efficiency or just more complexity?. 2012.
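As an illustration of the cellular-automata idea (not the authors' model), a single explicit update can route a fraction of each cell's water to strictly lower neighbors in proportion to the head difference, with rainfall and infiltration as simple source and sink terms; a real implementation would add the adaptive CFL-based time stepping and open boundaries described above.

```python
import numpy as np

def ca_overland_step(h, z, rain, infil, dt, frac=0.5):
    """One explicit cellular-automata update of water depth h on topography z.

    Sketch only: water is routed to the four von Neumann neighbors in
    proportion to the positive head difference, and `frac` caps the fraction
    of water leaving a cell per step as a crude stability limiter.  rain and
    infil are rates (depth per unit time); np.roll makes the boundaries
    periodic, which a real model would replace with open boundaries.
    """
    h_new = np.maximum(h + dt * (rain - infil), 0.0)    # rainfall source, infiltration sink
    head = z + h_new

    # Positive head differences toward each of the four neighbors.
    diffs = []
    for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
        d = np.clip(head - np.roll(head, shift, axis=axis), 0.0, None)
        diffs.append((axis, shift, d))
    total = sum(d for _, _, d in diffs) + 1e-12

    movable = frac * h_new                              # water allowed to leave this step
    outflow = np.zeros_like(h_new)
    inflow = np.zeros_like(h_new)
    for axis, shift, d in diffs:
        q = movable * d / total
        outflow += q
        inflow += np.roll(q, -shift, axis=axis)         # the same water arriving next door
    return h_new - outflow + inflow
```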
Behaviour of mudflows realized in a laboratory apparatus and relative numerical calibration
NASA Astrophysics Data System (ADS)
Brezzi, Lorenzo; Gabrieli, Fabio; Kaitna, Roland; Cola, Simonetta
2016-04-01
Nowadays, numerical simulations are indispensable allies for researchers seeking to reproduce phenomena such as earth flows, debris flows and mudflows. One of the most difficult and problematic phases concerns the choice and calibration of the parameters to be included in the model at the real scale. It can therefore be useful to start from laboratory experiments that simplify the case study as much as possible, with the aim of reducing the uncertainties related to the triggering and propagation of a real flow. In this way, the geometry of the problem and the triggering mass are well known and constrained in the experimental tests as well as in the numerical simulations, and the focus of the study can be shifted to the material parameters. This article analyzes the behavior of different mixtures of water and kaolin flowing in a laboratory channel. The simple experimental apparatus consists of a 10 dm3 prismatic container that discharges the material into a channel 2 m long and 0.16 m wide. The chute base was roughened by glued sand and inclined at 21°. Initially, we evaluated the run-out lengths and the spread and shape of the deposit for five different mixtures. A large amount of information was obtained from three laser sensors attached to the channel and from photogrammetry, which yields a 3D model of the deposit shape at the end of the flow. Subsequently, we reproduced these physical phenomena using the numerical model Geoflow-SPH (Pastor et al., 2008; 2014), governed by a Bingham rheological law (O'Brien & Julien, 1988), and calibrated the different tests by back-analysis to assess optimum parameters. The final goal was to understand how the calibrated parameters vary with the kaolin content of the mixtures.
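For reference, the Bingham law referenced above relates shear stress to shear rate through a yield stress τ_y and a plastic viscosity μ: τ = τ_y + μ γ̇ once the material flows, with no deformation below the yield stress. A small helper, with placeholder parameter values (the back-analysed values depend on the kaolin content), might look like this.

```python
import numpy as np

def bingham_stress(gamma_dot, tau_y, mu):
    """Shear stress of a flowing Bingham fluid: tau = tau_y + mu * gamma_dot."""
    return tau_y + mu * np.asarray(gamma_dot, dtype=float)

def bingham_apparent_viscosity(gamma_dot, tau_y, mu, gamma_min=1e-6):
    """Apparent viscosity tau / gamma_dot, regularized at very small shear rates
    so that nearly unsheared material behaves as an extremely stiff fluid."""
    g = np.maximum(np.asarray(gamma_dot, dtype=float), gamma_min)
    return tau_y / g + mu

# Placeholder parameters, illustrative only (not the calibrated values of the study).
print(bingham_stress([1.0, 10.0, 50.0], tau_y=30.0, mu=0.5))
print(bingham_apparent_viscosity([1.0, 10.0, 50.0], tau_y=30.0, mu=0.5))
```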
A reduced order, test verified component mode synthesis approach for system modeling applications
NASA Astrophysics Data System (ADS)
Butland, Adam; Avitabile, Peter
2010-05-01
Component mode synthesis (CMS) is a very common approach used for the generation of large system models. In general, these modeling techniques can be separated into two categories: those utilizing a combination of constraint modes and fixed interface normal modes and those based on a combination of free interface normal modes and residual flexibility terms. The major limitation of the methods utilizing constraint modes and fixed interface normal modes is the inability to easily obtain the required information from testing; the result of this limitation is that constraint mode-based techniques are primarily used with numerical models. An alternate approach is proposed which utilizes frequency and shape information acquired from modal testing to update reduced order finite element models using exact analytical model improvement techniques. The connection degrees of freedom are then rigidly constrained in the test verified, reduced order model to provide the boundary conditions necessary for constraint modes and fixed interface normal modes. The CMS approach is then used with this test verified, reduced order model to generate the system model for further analysis. A laboratory structure is used to show the application of the technique with both numerical and simulated experimental components to describe the system and validate the proposed approach. Actual test data is then used in the approach proposed. Due to typical measurement data contaminants that are always included in any test, the measured data is further processed to remove contaminants and is then used in the proposed approach. The final case using improved data with the reduced order, test verified components is shown to produce very acceptable results from the Craig-Bampton component mode synthesis approach. Use of the technique with its strengths and weaknesses are discussed.
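The fixed-interface/constraint-mode reduction underlying a Craig-Bampton step can be summarized compactly: with degrees of freedom partitioned into interior (i) and boundary (b) sets, the constraint modes are Psi = -K_ii^{-1} K_ib and the fixed-interface modes are eigenvectors of (K_ii, M_ii). The sketch below builds the standard transformation for generic K, M and index sets; it illustrates the textbook reduction, not the test-verified variant proposed in the paper.

```python
import numpy as np
from scipy.linalg import eigh

def craig_bampton(K, M, boundary, n_modes):
    """Standard Craig-Bampton reduction matrix and reduced K, M (sketch).

    boundary : indices of interface DOFs kept physically
    n_modes  : number of fixed-interface normal modes retained
    """
    n = K.shape[0]
    boundary = np.asarray(boundary)
    interior = np.setdiff1d(np.arange(n), boundary)

    Kii = K[np.ix_(interior, interior)]
    Kib = K[np.ix_(interior, boundary)]
    Mii = M[np.ix_(interior, interior)]

    # Constraint modes: static interior deflection for unit boundary motion.
    psi = -np.linalg.solve(Kii, Kib)

    # Fixed-interface normal modes of the interior partition.
    _, phi_full = eigh(Kii, Mii)
    phi = phi_full[:, :n_modes]

    # Transformation  u = T q,  q = [boundary DOFs, modal coordinates].
    nb = len(boundary)
    T = np.zeros((n, nb + n_modes))
    T[boundary, :nb] = np.eye(nb)
    T[np.ix_(interior, np.arange(nb))] = psi
    T[np.ix_(interior, nb + np.arange(n_modes))] = phi

    return T, T.T @ K @ T, T.T @ M @ T
```

The reduced matrices returned here are what a CMS assembly would couple across components at the shared boundary coordinates.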
Geodynamic inversion to constrain the rheology of the lithosphere: What is the effect of elasticity?
NASA Astrophysics Data System (ADS)
Baumann, Tobias; Kaus, Boris; Thielmann, Marcel
2016-04-01
The concept of elastic thickness (T_e) is one of the main methods to describe the integrated strength of oceanic lithosphere (e.g. Watts, 2001). Observations of T_e are in general agreement with yield strength envelopes estimated from laboratory experiments (Burov, 2007; Goetze & Evans, 1979). Yet, applying the same concept to the continental lithosphere has proven to be more difficult (Burov & Diament, 1995), which resulted in an ongoing discussion on the rheological structure of the lithosphere (e.g. Burov & Watts, 2006; Jackson, 2002; Maggi et al., 2000). Recently, we proposed a new approach, which constrains rheological properties of the lithosphere directly from geophysical observations such as GPS-velocity, topography and gravity (Baumann & Kaus, 2015). This approach has the advantage that available data sets (such as Moho depth) can be directly taken into account without making the a priori assumption that the lithosphere is a thin elastic plate floating on the mantle. Our results show that a Bayesian inversion method combined with numerical thermo-mechanical models can be used as an independent tool to constrain non-linear viscous and plastic parameters of the lithosphere. As the rheology of the lithosphere is strongly temperature dependent, it is even possible to add a temperature parameterisation to the inversion method and constrain the thermal structure of the lithosphere in this manner. Results for the India-Asia collision zone show that existing geophysical data require India to have a quite high effective viscosity. The rheological structure of Tibet is, however, less well constrained, and a number of scenarios give a nearly equally good fit to the data. One of the assumptions that we make while doing this geodynamic inversion is that the rheology is viscoplastic, and that elastic effects do not significantly alter the large-scale dynamics of the lithosphere. Here, we test the validity of this assumption by performing synthetic forward models and retrieving the rheological parameters of these models with viscoplastic geodynamic inversions. We focus on a typical intra-oceanic subduction system as well as a typical scenario of subduction of an oceanic plate underneath a continental arc. Baumann, T. S. & Kaus, B. J. P., 2015. Geodynamic inversion to constrain the non-linear rheology of the lithosphere, Geophys. J. Int., 202(2), 1289-1316. Burov, E. B. & Diament, M., 1995. The effective elastic thickness (Te) of continental lithosphere: What does it really mean?, J. Geophys. Res., 100, 3905-3927. Burov, E. B. & Watts, A. B., 2006. The long-term strength of continental lithosphere: jelly sandwich or crème brûlée?, GSA today, 16(1), 4-10. Burov, E. B., 2007. Crust and Lithosphere Dynamics: Plate Rheology and Mechanics, in Treatise Geophys., vol. 6, chap. 3, pp. 99-151, ed. Watts, A. B., Elsevier. Goetze, C. & Evans, B., 1979. Stress and temperature in the bending lithosphere as constrained by experimental rock mechanics, Geophys. J. Int., 59(3), 463-478. Jackson, J., 2002. Strength of the continental lithosphere: Time to abandon the jelly sandwich?, GSA today, 12(9), 4-9. Maggi, A., Jackson, J. A., McKenzie, D., & Priestley, K., 2000. Earthquake focal depths, effective elastic thickness, and the strength of the continental lithosphere, Geology, 28, 495-498. Watts, A. B., 2001. Isostasy and Flexure of the Lithosphere, Cambridge University Press.
Quantitative Analysis of the Effective Functional Structure in Yeast Glycolysis
De la Fuente, Ildefonso M.; Cortes, Jesus M.
2012-01-01
The understanding of the effective functionality that governs the enzymatic self-organized processes in cellular conditions is a crucial topic in the post-genomic era. In recent studies, Transfer Entropy has been proposed as a rigorous, robust and self-consistent method for the causal quantification of the functional information flow among nonlinear processes. Here, in order to quantify the functional connectivity for the glycolytic enzymes in dissipative conditions we have analyzed different catalytic patterns using the technique of Transfer Entropy. The data were obtained by means of a yeast glycolytic model formed by three delay differential equations where the enzymatic rate equations of the irreversible stages have been explicitly considered. These enzymatic activity functions were previously modeled and tested experimentally by other different groups. The results show the emergence of a new kind of dynamical functional structure, characterized by changing connectivity flows and a metabolic invariant that constrains the activity of the irreversible enzymes. In addition to the classical topological structure characterized by the specific location of enzymes, substrates, products and feedback-regulatory metabolites, an effective functional structure emerges in the modeled glycolytic system, which is dynamical and characterized by notable variations of the functional interactions. The dynamical structure also exhibits a metabolic invariant which constrains the functional attributes of the enzymes. Finally, in accordance with the classical biochemical studies, our numerical analysis reveals in a quantitative manner that the enzyme phosphofructokinase is the key-core of the metabolic system, behaving for all conditions as the main source of the effective causal flows in yeast glycolysis. PMID:22393350
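Transfer Entropy from a process X to a process Y, as used above, is TE_{X→Y} = Σ p(y_{t+1}, y_t, x_t) log[ p(y_{t+1} | y_t, x_t) / p(y_{t+1} | y_t) ]. A crude plug-in estimator on binned time series, independent of the glycolytic model itself and without the embedding and bias corrections a careful analysis would need, can be sketched as follows.

```python
import numpy as np

def transfer_entropy(x, y, n_bins=8):
    """Plug-in estimate of TE_{X -> Y} (in bits) from two 1-D time series.

    Crude histogram estimator on equally spaced bins; sketch only.
    """
    x = np.digitize(x, np.histogram_bin_edges(x, bins=n_bins)[1:-1])
    y = np.digitize(y, np.histogram_bin_edges(y, bins=n_bins)[1:-1])

    y_next, y_now, x_now = y[1:], y[:-1], x[:-1]
    n = len(y_next)
    te = 0.0
    # Sum over observed (y_{t+1}, y_t, x_t) triples.
    triples, counts = np.unique(np.stack([y_next, y_now, x_now]), axis=1, return_counts=True)
    for (yn, yc, xc), c in zip(triples.T, counts):
        p_xyz = c / n
        p_yz = np.mean((y_now == yc) & (x_now == xc))
        p_y = np.mean(y_now == yc)
        p_next_given_yz = p_xyz / p_yz
        p_next_given_y = np.mean((y_next == yn) & (y_now == yc)) / p_y
        te += p_xyz * np.log2(p_next_given_yz / p_next_given_y)
    return te

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    x = rng.standard_normal(5000)
    y = np.roll(x, 1) + 0.5 * rng.standard_normal(5000)      # y driven by past x
    print(transfer_entropy(x, y), transfer_entropy(y, x))    # first value should be larger
```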
NASA Astrophysics Data System (ADS)
Witter, Robert C.; Zhang, Yinglong; Wang, Kelin; Goldfinger, Chris; Priest, George R.; Allan, Jonathan C.
2012-10-01
We test hypothetical tsunami scenarios against a 4,600-year record of sandy deposits in a southern Oregon coastal lake that offer minimum inundation limits for prehistoric Cascadia tsunamis. Tsunami simulations constrain coseismic slip estimates for the southern Cascadia megathrust and contrast with slip deficits implied by earthquake recurrence intervals from turbidite paleoseismology. We model the tsunamigenic seafloor deformation using a three-dimensional elastic dislocation model and test three Cascadia earthquake rupture scenarios: slip partitioned to a splay fault; slip distributed symmetrically on the megathrust; and slip skewed seaward. Numerical tsunami simulations use the hydrodynamic finite element model, SELFE, that solves nonlinear shallow-water wave equations on unstructured grids. Our simulations of the 1700 Cascadia tsunami require >12-13 m of peak slip on the southern Cascadia megathrust offshore southern Oregon. The simulations account for tidal and shoreline variability and must crest the ˜6-m-high lake outlet to satisfy geological evidence of inundation. Accumulating this slip deficit requires ≥360-400 years at the plate convergence rate, exceeding the 330-year span of two earthquake cycles preceding 1700. Predecessors of the 1700 earthquake likely involved >8-9 m of coseismic slip accrued over >260 years. Simple slip budgets constrained by tsunami simulations allow an average of 5.2 m of slip per event for 11 additional earthquakes inferred from the southern Cascadia turbidite record. By comparison, slip deficits inferred from time intervals separating earthquake-triggered turbidites are poor predictors of coseismic slip because they meet geological constraints for only 4 out of 12 (˜33%) Cascadia tsunamis.
NASA Astrophysics Data System (ADS)
Gan, Zhaoming; Yuan, Feng; Ostriker, Jeremiah P.; Ciotti, Luca; Novak, Gregory S.
2014-07-01
Based on two-dimensional high-resolution hydrodynamic numerical simulation, we study the mechanical and radiative feedback effects from the central active galactic nucleus (AGN) on the cosmological evolution of an isolated elliptical galaxy. The inner boundary of the simulation domain is carefully chosen so that the fiducial Bondi radius is resolved and the accretion rate of the black hole is determined self-consistently. It is well known that when the accretion rates are high and low, the central AGNs will be in cold and hot accretion modes, which correspond to the radiative and kinetic feedback modes, respectively. The emitted spectrum from the hot accretion flows is harder than that from the cold accretion flows, which could result in a higher Compton temperature accompanied by a more efficient radiative heating, according to previous theoretical works. Such a difference of the Compton temperature between the two feedback modes, the focus of this study, has been neglected in previous works. Significant differences in the kinetic feedback mode are found as a result of the stronger Compton heating. More importantly, if we constrain models to correctly predict black hole growth and AGN duty cycle after cosmological evolution, we find that the favored model parameters are constrained: mechanical feedback efficiency diminishes with decreasing luminosity (the maximum efficiency being ~= 10-3.5), and X-ray Compton temperature increases with decreasing luminosity, although models with fixed mechanical efficiency and Compton temperature can be found that are satisfactory as well. We conclude that radiative feedback in the kinetic mode is much more important than previously thought.
ERIC Educational Resources Information Center
Mare, Robert D.; Mason, William M.
An important class of applications of measurement error or constrained factor analytic models consists of comparing models for several populations. In such cases, it is appropriate to make explicit statistical tests of model similarity across groups and to constrain some parameters of the models to be equal across groups using a priori substantive…
Computing Generalized Matrix Inverse on Spiking Neural Substrate
Shukla, Rohit; Khoram, Soroosh; Jorgensen, Erik; Li, Jing; Lipasti, Mikko; Wright, Stephen
2018-01-01
Emerging neural hardware substrates, such as IBM's TrueNorth Neurosynaptic System, can provide an appealing platform for deploying numerical algorithms. For example, a recurrent Hopfield neural network can be used to find the Moore-Penrose generalized inverse of a matrix, thus enabling a broad class of linear optimizations to be solved efficiently, at low energy cost. However, deploying numerical algorithms on hardware platforms that severely limit the range and precision of representation for numeric quantities can be quite challenging. This paper discusses these challenges and proposes a rigorous mathematical framework for reasoning about range and precision on such substrates. The paper derives techniques for normalizing inputs and properly quantizing synaptic weights originating from arbitrary systems of linear equations, so that solvers for those systems can be implemented in a provably correct manner on hardware-constrained neural substrates. The analytical model is empirically validated on the IBM TrueNorth platform, and results show that the guarantees provided by the framework for range and precision hold under experimental conditions. Experiments with optical flow demonstrate the energy benefits of deploying a reduced-precision and energy-efficient generalized matrix inverse engine on the IBM TrueNorth platform, reflecting 10× to 100× improvement over FPGA and ARM core baselines. PMID:29593483
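The core numerical task described above, computing a Moore-Penrose generalized inverse with only simple, iterable operations, can be illustrated independently of any neuromorphic hardware. The sketch below is an assumption of this edit, not the paper's Hopfield-network formulation: it uses the classical Ben-Israel-Cohen (Newton-Schulz) iteration in NumPy, which relies only on matrix multiplications and is therefore the kind of recurrence such substrates can in principle implement. Quantizing the matrices to a limited range, as the paper's framework prescribes, would be layered on top of this basic recurrence.

```python
import numpy as np

def pseudoinverse_iterative(A, iters=100):
    """Ben-Israel/Cohen (Newton-Schulz) iteration X_{k+1} = X_k (2I - A X_k).

    Converges to the Moore-Penrose inverse A^+ when initialized with
    X_0 = alpha * A.T and 0 < alpha < 2 / sigma_max(A)^2.
    """
    alpha = 1.0 / np.linalg.norm(A, 2) ** 2      # safe step size
    X = alpha * A.T
    I = np.eye(A.shape[0])
    for _ in range(iters):
        X = X @ (2.0 * I - A @ X)
    return X

# Example: solve an overdetermined least-squares system A x ~ b
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))
b = rng.standard_normal(6)
x = pseudoinverse_iterative(A) @ b
print(np.allclose(x, np.linalg.pinv(A) @ b, atol=1e-8))   # True
```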
An adaptive multi-moment FVM approach for incompressible flows
NASA Astrophysics Data System (ADS)
Liu, Cheng; Hu, Changhong
2018-04-01
In this study, a multi-moment finite volume method (FVM) based on a block-structured adaptive Cartesian mesh is proposed for simulating incompressible flows. A conservative interpolation scheme following the idea of the constrained interpolation profile (CIP) method is proposed for the prolongation operation of the newly created mesh. A sharp immersed boundary (IB) method is used to model the immersed rigid body. A moving least squares (MLS) interpolation approach is applied for reconstruction of the velocity field around the solid surface. An efficient method for discretization of Laplacian operators on adaptive meshes is proposed. Numerical simulations on several test cases are carried out for validation of the proposed method. For the case of viscous flow past an impulsively started cylinder (Re = 3000, 9500), the computed surface vorticity coincides with the result of the body-fitted method. For the case of a fast pitching NACA 0015 airfoil at moderate Reynolds numbers (Re = 10000, 45000), the predicted drag coefficient (CD) and lift coefficient (CL) agree well with other numerical or experimental results. For 2D and 3D simulations of viscous flow past a pitching plate with prescribed motions (Re = 5000, 40000), the predicted CD, CL and CM (moment coefficient) are in good agreement with those obtained by other numerical methods.
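As a rough illustration of the moving least squares (MLS) reconstruction step mentioned above, the following NumPy sketch fits a weighted linear polynomial to scattered neighbor values and evaluates it at a query point. The compact weight function and linear basis are illustrative choices, not necessarily those used in the paper.

```python
import numpy as np

def mls_interpolate(x_eval, pts, vals, radius):
    """Moving least squares with a linear basis [1, x, y] and a compact
    cubic weight; returns the reconstructed value at x_eval."""
    d = np.linalg.norm(pts - x_eval, axis=1) / radius
    w = np.where(d < 1.0, (1.0 - d) ** 3, 0.0)      # illustrative weight choice
    P = np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1]])
    A = (P * w[:, None]).T @ P                      # weighted moment matrix
    b = (P * w[:, None]).T @ vals
    coeffs = np.linalg.solve(A, b)
    return np.array([1.0, x_eval[0], x_eval[1]]) @ coeffs

# Reconstruct u(x, y) = 1 + 2x - y at a query point from scattered samples;
# an MLS fit with a linear basis reproduces linear fields exactly.
rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 1.0, size=(30, 2))
vals = 1.0 + 2.0 * pts[:, 0] - pts[:, 1]
print(mls_interpolate(np.array([0.4, 0.6]), pts, vals, radius=1.0))  # ~1.2
```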
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mirocha, Jordan; Burns, Jack O.; Harker, Geraint J. A., E-mail: mirocha@astro.ucla.edu
2015-11-01
Following our previous work, which related generic features in the sky-averaged (global) 21-cm signal to properties of the intergalactic medium, we now investigate the prospects for constraining a simple galaxy formation model with current and near-future experiments. Markov-Chain Monte Carlo fits to our synthetic data set, which includes a realistic galactic foreground, a plausible model for the signal, and noise consistent with 100 hr of integration by an ideal instrument, suggest that a simple four-parameter model that links the production rate of Lyα, Lyman-continuum, and X-ray photons to the growth rate of dark matter halos can be well-constrained (to ∼0.1 dex in each dimension) so long as all three spectral features expected to occur between 40 ≲ ν/MHz ≲ 120 are detected. Several important conclusions follow naturally from this basic numerical result, namely that measurements of the global 21-cm signal can in principle (i) identify the characteristic halo mass threshold for star formation at all redshifts z ≳ 15, (ii) extend z ≲ 4 upper limits on the normalization of the X-ray luminosity-star formation rate (L_X-SFR) relation out to z ∼ 20, and (iii) provide joint constraints on stellar spectra and the escape fraction of ionizing radiation at z ∼ 12. Though our approach is general, the importance of a broadband measurement renders our findings most relevant to the proposed Dark Ages Radio Explorer, which will have a clean view of the global 21-cm signal from ∼40 to 120 MHz from its vantage point above the radio-quiet, ionosphere-free lunar far-side.
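To illustrate how such parameter constraints emerge from Markov-Chain Monte Carlo fits, the sketch below runs a plain random-walk Metropolis-Hastings sampler on synthetic data. The three-parameter Gaussian absorption trough is a hypothetical stand-in for the paper's four-parameter galaxy formation model, and the noise level and priors are assumptions of this example.

```python
import numpy as np

# Hypothetical stand-in for a global 21-cm absorption feature (this is NOT the
# four-parameter galaxy-formation model of the paper).
def model(nu, amp, nu0, width):
    return -amp * np.exp(-0.5 * ((nu - nu0) / width) ** 2)   # brightness temp. [mK]

def log_prob(theta, nu, data, sigma):
    amp, nu0, width = theta
    if not (0 < amp < 500 and 40 < nu0 < 120 and 1 < width < 40):
        return -np.inf                                        # flat priors
    resid = data - model(nu, amp, nu0, width)
    return -0.5 * np.sum((resid / sigma) ** 2)

rng = np.random.default_rng(2)
nu = np.linspace(40.0, 120.0, 200)
truth, sigma = (100.0, 78.0, 10.0), 5.0
data = model(nu, *truth) + rng.normal(0.0, sigma, nu.size)

# Plain random-walk Metropolis-Hastings
theta = np.array([80.0, 70.0, 15.0])
step = np.array([5.0, 1.0, 1.0])
lp = log_prob(theta, nu, data, sigma)
chain = []
for _ in range(20000):
    prop = theta + step * rng.standard_normal(3)
    lp_prop = log_prob(prop, nu, data, sigma)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
chain = np.array(chain[5000:])                                # discard burn-in
print(chain.mean(axis=0), chain.std(axis=0))   # posterior means near the truth
```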
Laterally constrained inversion for CSAMT data interpretation
NASA Astrophysics Data System (ADS)
Wang, Ruo; Yin, Changchun; Wang, Miaoyue; Di, Qingyun
2015-10-01
Laterally constrained inversion (LCI) has been successfully applied to the inversion of dc resistivity, TEM and airborne EM data. However, it has not yet been applied to the interpretation of controlled-source audio-frequency magnetotelluric (CSAMT) data. In this paper, we apply the LCI method to CSAMT data inversion by preconditioning the Jacobian matrix. We apply a weighting matrix to the Jacobian to balance the sensitivity of the model parameters, so that the resolution with respect to different model parameters becomes more uniform. Numerical experiments confirm that this improves the convergence of the inversion. We first invert a synthetic dataset with and without noise to investigate the effect of applying LCI to CSAMT data. For the noise-free data, the results show that the LCI method recovers the true model better than the traditional single-station inversion; for the noisy data, the true model is recovered even with a noise level of 8%, indicating that LCI inversions are to some extent insensitive to noise. Then, we re-invert two CSAMT datasets collected respectively in a watershed and a coal mine area in Northern China and compare our results with those from previous inversions. The comparison for the coal mine shows that the LCI method delivers smoother layer interfaces that correlate well with seismic data, while the comparison with a global searching algorithm of simulated annealing (SA) in the watershed shows that, although both methods deliver very similar good results, the LCI algorithm presented in this paper runs much faster. The inversion results for the coal mine CSAMT survey show that a conductive water-bearing zone that was not revealed by the previous inversions has been identified by the LCI. This further demonstrates that the method presented in this paper works for CSAMT data inversion.
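The Jacobian preconditioning idea described above, applying a weighting matrix so that all model parameters contribute comparably, can be sketched in a few lines of NumPy. The inverse-column-norm weights used here are one common choice and are an assumption of this example, not necessarily the weighting used in the paper.

```python
import numpy as np

def balance_jacobian(J):
    """Scale each Jacobian column by the inverse of its norm so that all model
    parameters contribute comparably to the normal equations."""
    col_norms = np.linalg.norm(J, axis=0)
    col_norms[col_norms == 0.0] = 1.0            # guard against dead parameters
    W = np.diag(1.0 / col_norms)                 # diagonal weighting matrix
    return J @ W, W

# One Gauss-Newton-type step: solve in the scaled parameters, map back with W.
rng = np.random.default_rng(3)
J = rng.standard_normal((50, 4)) * np.array([1.0, 1e3, 1e-3, 10.0])  # unbalanced
r = rng.standard_normal(50)                      # data residual
Jw, W = balance_jacobian(J)
dp_scaled, *_ = np.linalg.lstsq(Jw, r, rcond=None)
dp = W @ dp_scaled                               # update in the original parameters
print(np.linalg.cond(J), np.linalg.cond(Jw))     # conditioning improves markedly
```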
WHERE ARE THE LOW-MASS POPULATION III STARS?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ishiyama, Tomoaki; Sudo, Kae; Yokoi, Shingo
2016-07-20
We study the number and the distribution of low-mass Population III (Pop III) stars in the Milky Way. In our numerical model, the hierarchical formation of dark matter minihalos and Milky-Way-sized halos is followed by a high-resolution cosmological simulation. We model Pop III formation in H2-cooling minihalos without metals under UV radiation in the Lyman-Werner bands. Assuming a Kroupa initial mass function (IMF) from 0.15 to 1.0 M⊙ for low-mass Pop III stars, as a working hypothesis, we try to constrain the theoretical models in reverse by current and future observations. We find that the survivors tend to concentrate toward the centers of the halo and subhalos. We also evaluate the observability of Pop III survivors in the Milky Way and dwarf galaxies, and constraints on the number of Pop III survivors per minihalo. Higher-latitude fields require smaller sample sizes because of the high number density of stars in the galactic disk; the required sample sizes in the high- and middle-latitude fields become comparable when low-metallicity stars are photometrically selected with optimized narrow-band filters; and the required number of dwarf galaxies to find one Pop III survivor is less than 10 at <100 kpc for stars at the tip of the red giant branch. Provided that available observations have not detected any survivors, formation models that produce more than 10 low-mass Pop III stars per minihalo are already excluded. Furthermore, we discuss how to constrain the IMF of Pop III stars in the high-mass range of ≳10 M⊙.
NASA Astrophysics Data System (ADS)
Beretta, Gian Paolo; Rivadossi, Luca; Janbozorgi, Mohammad
2018-04-01
Rate-Controlled Constrained-Equilibrium (RCCE) modeling of complex chemical kinetics provides acceptable accuracies with much fewer differential equations than for the fully Detailed Kinetic Model (DKM). Since its introduction by James C. Keck, a drawback of the RCCE scheme has been the absence of an automatable, systematic procedure to identify the constraints that most effectively warrant a desired level of approximation for a given range of initial, boundary, and thermodynamic conditions. An optimal constraint identification has been recently proposed. Given a DKM with S species, E elements, and R reactions, the procedure starts by running a probe DKM simulation to compute an S-vector that we call overall degree of disequilibrium (ODoD) because its scalar product with the S-vector formed by the stoichiometric coefficients of any reaction yields its degree of disequilibrium (DoD). The ODoD vector evolves in the same (S-E)-dimensional stoichiometric subspace spanned by the R stoichiometric S-vectors. Next we construct the rank-(S-E) matrix of ODoD traces obtained from the probe DKM numerical simulation and compute its singular value decomposition (SVD). By retaining only the first C largest singular values of the SVD and setting to zero all the others we obtain the best rank-C approximation of the matrix of ODoD traces whereby its columns span a C-dimensional subspace of the stoichiometric subspace. This in turn yields the best approximation of the evolution of the ODoD vector in terms of only C parameters that we call the constraint potentials. The resulting order-C RCCE approximate model reduces the number of independent differential equations related to species, mass, and energy balances from S+2 to C+E+2, with substantial computational savings when C ≪ S-E.
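The central linear-algebra step of the procedure, the best rank-C approximation of the matrix of ODoD traces by a truncated singular value decomposition, can be sketched as follows; the matrix dimensions and the rank used here are illustrative, not taken from a specific kinetic model.

```python
import numpy as np

def best_rank_C(M, C):
    """Best rank-C approximation of M (Eckart-Young) via the truncated SVD.
    The first C left singular vectors span the retained constraint subspace."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :C] @ np.diag(s[:C]) @ Vt[:C, :], U[:, :C]

# Illustrative matrix of ODoD traces: S - E rows (dimension of the
# stoichiometric subspace), one column per time sample of the probe DKM run.
rng = np.random.default_rng(4)
traces = rng.standard_normal((12, 3)) @ rng.standard_normal((3, 400))  # rank 3
traces += 1e-3 * rng.standard_normal(traces.shape)                     # noise
approx, basis = best_rank_C(traces, C=3)
print(np.linalg.norm(traces - approx) / np.linalg.norm(traces))  # small residual
```

The retained left singular vectors (the columns of `basis`) play the role of the constraints whose potentials parameterize the reduced RCCE model.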
Order-Constrained Bayes Inference for Dichotomous Models of Unidimensional Nonparametric IRT
ERIC Educational Resources Information Center
Karabatsos, George; Sheu, Ching-Fan
2004-01-01
This study introduces an order-constrained Bayes inference framework useful for analyzing data containing dichotomous scored item responses, under the assumptions of either the monotone homogeneity model or the double monotonicity model of nonparametric item response theory (NIRT). The framework involves the implementation of Gibbs sampling to…
Iwasaki, Toshiki; Nelson, Jonathan M.; Shimizu, Yasuyuki; Parker, Gary
2017-01-01
Asymptotic characteristics of the transport of bed load tracer particles in rivers have been described by advection-dispersion equations. Here we perform numerical simulations designed to study the role of free bars, and more specifically single-row alternate bars, on streamwise tracer particle dispersion. In treating the conservation of tracer particle mass, we use two alternative formulations for the Exner equation of sediment mass conservation: the flux-based formulation, in which bed elevation varies with the divergence of the bed load transport rate, and the entrainment-based formulation, in which bed elevation changes with the net deposition rate. Under the condition of no net bed aggradation/degradation, a 1-D flux-based deterministic model that does not describe free bars yields no streamwise dispersion. The entrainment-based 1-D formulation, on the other hand, models stochasticity via the probability density function (PDF) of particle step length, and as a result does show tracer dispersion. When the formulation is generalized to 2-D to include free alternate bars, however, both models yield almost identical asymptotic advection-dispersion characteristics, in which streamwise dispersion is dominated by randomness inherent in free bar morphodynamics. This randomness can result in a heavy-tailed PDF of waiting time. In addition, migrating bars may constrain the travel distance through temporary burial, causing a thin-tailed PDF of travel distance. The superdiffusive character of streamwise particle dispersion predicted by the model is attributable to the interaction of these two effects.
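The statistical ingredients highlighted above, a heavy-tailed waiting-time PDF (temporary burial under migrating bars) combined with thin-tailed downstream steps, can be illustrated with a simple continuous-time random walk. The exponential step lengths and Pareto waiting times below are illustrative assumptions, not the distributions calibrated in the study, but they reproduce superdiffusive growth of the streamwise variance.

```python
import numpy as np

rng = np.random.default_rng(5)
n_tracers, n_moves = 5000, 400

# Thin-tailed step lengths (exponential, all downstream) and heavy-tailed
# waiting times (Pareto with 1 < alpha < 2: finite mean, infinite variance).
steps = rng.exponential(scale=1.0, size=(n_tracers, n_moves))
waits = rng.pareto(1.5, size=(n_tracers, n_moves)) + 1.0

x = np.cumsum(steps, axis=1)          # downstream position after each move
t = np.cumsum(waits, axis=1)          # elapsed time after each move

# Sample positions at fixed observation times and fit variance ~ T^gamma
obs_times = np.array([50.0, 100.0, 200.0, 400.0, 800.0])
var = []
for T in obs_times:
    idx = np.clip((t <= T).sum(axis=1) - 1, 0, None)   # last completed move
    var.append(np.var(x[np.arange(n_tracers), idx]))
gamma = np.polyfit(np.log(obs_times), np.log(var), 1)[0]
print(gamma)     # exponent > 1: superdiffusive streamwise spreading
```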
How Much Dust Does Enceladus Eject?
NASA Astrophysics Data System (ADS)
Kempf, S.; Southworth, B.; Srama, R.; Schmidt, J.; Postberg, F.
2016-12-01
There is an ongoing argument about how much dust per second the ice volcanoes on Saturn's moon Enceladus eject. By adjusting their plume model to the dust flux measured by the Cassini dust detector during the close Enceladus flyby in 2005, Schmidt et al. (2008) obtained a total dust production rate in the plumes of about 5 kg/s. On the other hand, Ingersoll and Ewald (2005) derived a dust production rate of 51 kg/s from the total plume brightness. Knowledge of the production rate is essential for estimating the dust-to-gas mass ratio, which in turn is an important constraint for finding the plume source mechanism. Here we report on measurements of the plume dust density during the last close Cassini flyby of Enceladus in October 2015. The data match our numerical model for the Enceladus plume. The model is based on a large number of dynamical simulations including gravity and the Lorentz force to investigate the earliest phase of the ring particle life span. The evolution of the electrostatic charge carried by the initially uncharged grains is treated self-consistently. Our numerical simulations reproduce all Enceladus data sets obtained by Cassini's Cosmic Dust Analyzer (CDA). Our model calculations together with the new density data constrain the Enceladus dust source rate to < 5 kg/s. Based on our simulation results we are able to draw conclusions about the emission of plume particles along the fractures in the south polar terrain.
NASA Astrophysics Data System (ADS)
Strak, V.; Dominguez, S.; Petit, C.; Meyer, B.; Loget, N.
2013-12-01
Relief evolution in active tectonic areas is controlled by the interactions between tectonics and surface processes (erosion, transport and sedimentation). These interactions lead to the formation of geomorphologic markers that remain stable once a long-term equilibrium between tectonics and erosion is reached. In regions experiencing active extension, drainage basins and faceted spurs (triangular facets) are such long-lived morphologic markers, and they can help in quantifying the competing effects of tectonics, erosion and sedimentation. We performed analog and numerical models simulating the morphologic evolution of a mountain range bounded by a normal fault. In each approach we imposed identical initial conditions. We carried out several models by varying the fault slip rate (V) while keeping a constant rainfall rate, allowing us to study the effect of V on morphology. Both approaches highlight the dominant control of V on the topographic evolution of the footwall. The experimental approach shows that V controls erosion rates (incision rate, erosion rate of slopes and regressive erosion rate) and possibly the height of triangular facets. Likewise, this approach indicates that the parameter K of the stream power law depends on V even for non-equilibrium topography. The numerical approach corroborates the control of V on erosion rates and facet height. It also shows a correlation between the shape of drainage basins and V (slope-area relationship), and it suggests the same for the parameters of the stream power law. Therefore both approaches suggest the possibility of using the height of triangular facets and the slope-area relationship to infer the slip rate of normal faults situated in a given climatic context.
NASA Astrophysics Data System (ADS)
Hernandez-Marin, Martin; Burbey, Thomas J.
2009-12-01
Land subsidence and earth fissuring can cause damage in semiarid urbanized valleys where pumping exceeds natural recharge. In places such as Las Vegas Valley (USA), Quaternary faults play an important role in the surface deformation patterns by constraining the migration of land subsidence and creating complex relationships with surface fissures. These fissures typically result from horizontal displacements that occur in zones where extensional stress derived from groundwater flow exceeds the tensile strength of the near-surface sediments. A series of hypothetical numerical models, using the finite-element code ABAQUS and based on the observed conditions of the Eglington Fault zone, was developed. The models reproduced (1) the long-term natural recharge and discharge, (2) heavy pumping and (3) the incorporation of artificial recharge, reflecting the conditions of Las Vegas Valley. The simulated hydrostratigraphy consists of three aquifers, two aquitards and a relatively dry vadose zone, plus a normal fault zone that represents the Quaternary Eglington fault. Numerical results suggest that a 100-m-wide fault zone composed of sand-like material produces (1) conditions most similar to those observed in Las Vegas Valley and (2) the most favorable conditions for fissures to form on the surface adjacent to the fault zone.
Determination of the Conservation Time of Periodicals for Optimal Shelf Maintenance of a Library.
ERIC Educational Resources Information Center
Miyamoto, Sadaaki; Nakayama, Kazuhiko
1981-01-01
Presents a method based on a constrained optimization technique that determines the time of removal of scientific periodicals from the shelf of a library. A geometrical interpretation of the theoretical result is given, and a numerical example illustrates how the technique is applicable to real bibliographic data. (FM)
ERIC Educational Resources Information Center
Zhou, Xiaolin; Jiang, Xiaoming; Ye, Zheng; Zhang, Yaxu; Lou, Kaiyang; Zhan, Weidong
2010-01-01
An event-related potential (ERP) study was conducted to investigate the temporal neural dynamics of semantic integration processes at different levels of syntactic hierarchy during Chinese sentence reading. In a hierarchical structure, "subject noun" + "verb" + "numeral" + "classifier" + "object noun," the object noun is constrained by selectional…
ERIC Educational Resources Information Center
Russell, David W.; Lucas, Keith B.; McRobbie, Campbell J.
2003-01-01
Investigates how microcomputer-based laboratory (MBL) activities specifically designed to be consistent with a constructivist theory of learning support or constrain student construction of understanding. Analysis of students' discourse and actions reveal that students invented numerous techniques for manipulating data in the service of their…
Sparsest representations and approximations of an underdetermined linear system
NASA Astrophysics Data System (ADS)
Tardivel, Patrick J. C.; Servien, Rémi; Concordet, Didier
2018-05-01
In an underdetermined linear system of equations, constrained ℓ1 minimization methods such as the basis pursuit or the lasso are often used to recover one of the sparsest representations or approximations of the system. The null space property is a sufficient and 'almost' necessary condition for recovering a sparsest representation with the basis pursuit. Unfortunately, this property cannot be easily checked. On the other hand, the mutual coherence is an easily checkable sufficient condition ensuring that the basis pursuit recovers one of the sparsest representations. Because the mutual coherence condition is too strong, it is hardly met in practice. Even if one of these conditions holds, to our knowledge, there is no theoretical result ensuring that the lasso solution is one of the sparsest approximations. In this article, we study a novel constrained problem that gives, without any condition, one of the sparsest representations or approximations. To solve this problem, we provide a numerical method and we prove its convergence. Numerical experiments show that this approach gives better results than both the basis pursuit problem and the reweighted ℓ1 minimization problem.
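For reference, the basis pursuit problem mentioned above (minimize the ℓ1 norm subject to Ax = b) can be solved as a standard linear program through the split x = u - v with u, v ≥ 0. The sketch below uses scipy.optimize.linprog; it illustrates the classical formulation, not the novel constrained problem proposed in the article.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """min ||x||_1 subject to Ax = b, via the split x = u - v with u, v >= 0."""
    m, n = A.shape
    c = np.ones(2 * n)                        # objective: sum(u) + sum(v)
    A_eq = np.hstack([A, -A])                 # A u - A v = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n),
                  method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

# Recover a 2-sparse vector from an underdetermined 10 x 30 Gaussian system
rng = np.random.default_rng(6)
A = rng.standard_normal((10, 30))
x_true = np.zeros(30)
x_true[[3, 17]] = [1.5, -2.0]
x_hat = basis_pursuit(A, A @ x_true)
print(np.round(x_hat[[3, 17]], 3), round(float(np.abs(x_hat).sum()), 3))
```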
Constrained orbital intercept-evasion
NASA Astrophysics Data System (ADS)
Zatezalo, Aleksandar; Stipanovic, Dusan M.; Mehra, Raman K.; Pham, Khanh
2014-06-01
An effective characterization of intercept-evasion confrontations in various space environments, and the derivation of corresponding solutions under a variety of real-world constraints, are daunting theoretical and practical challenges. Current and future space-based platforms have to operate as components of satellite formations and/or systems while at the same time retaining the capability to evade potential collisions with other maneuver-constrained space objects. In this article, we formulate and numerically approximate solutions of a Low Earth Orbit (LEO) intercept-maneuver problem in terms of game-theoretic capture-evasion guaranteed strategies. The space intercept-evasion approach is based on the Liapunov methodology that has been successfully implemented in a number of air- and ground-based multi-player multi-goal game/control applications. The corresponding numerical algorithms are derived using computationally efficient and orbital-propagator-independent methods previously developed for Space Situational Awareness (SSA). This game-theoretic yet robust and practical approach is demonstrated on a realistic LEO scenario using existing Two Line Element (TLE) sets and the Simplified General Perturbation-4 (SGP-4) propagator.
Trinchero, Paolo; Puigdomenech, Ignasi; Molinero, Jorge; Ebrahimi, Hedieh; Gylling, Björn; Svensson, Urban; Bosbach, Dirk; Deissmann, Guido
2017-05-01
We present an enhanced continuum-based approach for the modelling of groundwater flow coupled with reactive transport in crystalline fractured rocks. In the proposed formulation, flow, transport and geochemical parameters are represented on a numerical grid using Discrete Fracture Network (DFN) derived parameters. The geochemical reactions are further constrained by field observations of mineral distribution. To illustrate how the approach can be used to include physical and geochemical complexities in reactive transport calculations, we have analysed the potential ingress of oxygenated glacial meltwater into a heterogeneous fractured rock, using the Forsmark site (Sweden) as an example. The results of high-performance reactive transport calculations show that, after a quick oxygen penetration, steady-state conditions are attained in which abiotic reactions (i.e. the dissolution of chlorite and the homogeneous oxidation of aqueous iron(II) ions) counterbalance advective oxygen fluxes. The results show that most of the chlorite becomes depleted in the highly conductive deformation zones, where higher mineral surface areas are available for reactions.
On the use of infrasound for constraining global climate models
NASA Astrophysics Data System (ADS)
Millet, Christophe; Ribstein, Bruno; Lott, Francois; Cugnet, David
2017-11-01
Numerical prediction of infrasound is a complex issue due to constantly changing atmospheric conditions and to the random nature of small-scale flows. Although part of the upward-propagating wave is refracted at stratospheric levels, where gravity waves significantly affect the temperature and the wind, the process by which the gravity wave field changes the infrasound arrivals remains poorly understood. In the present work, we use a stochastic parameterization to represent the subgrid-scale gravity wave field from the atmospheric specifications provided by the European Centre for Medium-Range Weather Forecasts. It is shown that, regardless of whether the gravity wave field possesses relatively small or large features, the sensitivity of acoustic waveforms to atmospheric disturbances can be extremely different. Using infrasound signals recorded during campaigns of ammunition destruction explosions, a new set of tunable parameters is proposed that more accurately predicts the small-scale content of gravity wave fields in the middle atmosphere. Climate simulations are performed using the updated parameterization. Numerical results demonstrate that a network of ground-based infrasound stations is a promising technology for dynamically tuning the gravity wave parameterization.
NASA Astrophysics Data System (ADS)
Pizzati, Mattia; Cavozzi, Cristian; Magistroni, Corrado; Storti, Fabrizio
2016-04-01
Predicting fracture density patterns with low uncertainty is a fundamental issue for constraining fluid flow pathways in thrust-related anticlines in the frontal parts of thrust-and-fold belts and accretionary prisms, which can also provide plays for hydrocarbon exploration and development. Among the drivers that combine to determine the distribution of fractures in fold-and-thrust belts, the complex kinematic pathways of folded structures play a key role. In areas with scarce or unreliable subsurface information, analogue modelling can provide effective support for developing and validating hypotheses on structural architectures and their evolution. In this contribution, we propose a working method that combines analogue and numerical modelling. We deformed a sand-silicone multilayer to produce a non-cylindrical thrust-related anticline at the wedge toe, which served as our test geological structure at the reservoir scale. We cut 60 serial cross-sections through the central part of the deformed model to analyze fault and fold geometry using dedicated software (3D Move). The cross-sections were also used to reconstruct the 3D geometry of the reference surfaces that compose the mechanical stratigraphy, using the software GoCad. From the 3D model of the experimental anticline, 3D Move was used to calculate the cumulative stress and strain undergone by the deformed reference layers at the end of the deformation, as well as in incremental steps of fold growth. Based on these model outputs it was also possible to predict the orientation of three main fracture sets (joints and conjugate shear fractures) and their occurrence and density on the model surfaces. The next step was the upscaling of the fracture network to the entire digital model volume to create DFNs.
Forecasting of wet snow avalanche activity: Proof of concept and operational implementation
NASA Astrophysics Data System (ADS)
Gobiet, Andreas; Jöbstl, Lisa; Rieder, Hannes; Bellaire, Sascha; Mitterer, Christoph
2017-04-01
State-of-the-art tools for the operational assessment of avalanche danger include field observations, recordings from automatic weather stations, meteorological analyses and forecasts, and recently also indices derived from snowpack models. In particular, an index for identifying the onset of wet-snow avalanche cycles (the LWCindex) has been demonstrated to be useful. However, its value for operational avalanche forecasting is currently limited, since detailed, physically based snowpack models are usually driven by meteorological data from automatic weather stations only and therefore have no prognostic ability. Since avalanche risk management relies heavily on timely information and early warnings, many avalanche services in Europe now issue forecasts for the following days instead of the traditional assessment of the current avalanche danger. In this context, the prognostic operation of detailed snowpack models has recently been the objective of extensive research. In this study, a new, observationally constrained setup for forecasting the onset of wet-snow avalanche cycles with the detailed snow cover model SNOWPACK is presented and evaluated. Based on data from weather stations and different numerical weather prediction models, we demonstrate that forecasts of the LWCindex as an indicator for wet-snow avalanche cycles can be useful for operational warning services, but are so far not reliable enough to be used as a single warning tool without considering other factors. Therefore, further development currently focuses on improving the forecasts by applying ensemble techniques and suitable post-processing approaches to the output of numerical weather prediction models. In parallel, the prognostic meteo-snow model chain has been used operationally by two regional avalanche warning services in Austria since winter 2016/2017, for the first time. Experiences from the first operational season and first results from current model developments will be reported.
Two-length-scale turbulence model for self-similar buoyancy-, shock-, and shear-driven mixing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morgan, Brandon E.; Schilling, Oleg; Hartland, Tucker A.
The three-equation k-L-a turbulence model [B. Morgan and M. Wickett, "Three-equation model for the self-similar growth of Rayleigh-Taylor and Richtmyer-Meshkov instabilities," Phys. Rev. E 91 (2015)] is extended by the addition of a second length scale equation. It is shown that the separation of turbulence transport and turbulence destruction length scales is necessary for simultaneous prediction of the growth parameter and turbulence intensity of a Kelvin-Helmholtz shear layer when model coefficients are constrained by similarity analysis. Constraints on model coefficients are derived that satisfy an ansatz of self-similarity in the low-Atwood-number limit and allow the determination of the model coefficients necessary to recover expected experimental behavior. The model is then applied in one-dimensional simulations of Rayleigh-Taylor, reshocked Richtmyer-Meshkov, Kelvin-Helmholtz, and combined Rayleigh-Taylor/Kelvin-Helmholtz instability mixing layers to demonstrate that the expected growth rates are recovered numerically. Finally, it is shown that the model behavior in the case of combined instability is to predict a mixing width that is a linear combination of Rayleigh-Taylor and Kelvin-Helmholtz mixing processes.
Effects from Unsaturated Zone Flow during Oscillatory Hydraulic Testing
NASA Astrophysics Data System (ADS)
Lim, D.; Zhou, Y.; Cardiff, M. A.; Barrash, W.
2014-12-01
In analyzing pumping tests on unconfined aquifers, the impact of the unsaturated zone is often neglected. Instead, desaturation at the water table is often treated as a free-surface boundary, which is simple and allows for relatively fast computation. Richards' equation models, which account for unsaturated flow, can be compared with saturated flow models to validate the use of Darcy's Law. In this presentation, we examine the appropriateness of using fast linear steady-periodic models based on linearized water table conditions in order to simulate oscillatory pumping tests in phreatic aquifers. We compare oscillatory pumping test models including: 1) a 2-D radially-symmetric phreatic aquifer model with a partially penetrating well, simulated using both Darcy's Law and Richards' Equation in COMSOL; and 2) a linear phase-domain numerical model developed in MATLAB. Both COMSOL and MATLAB models are calibrated to match oscillatory pumping test data collected in the summer of 2013 at the Boise Hydrogeophysical Research Site (BHRS), and we examine the effect of model type on the associated parameter estimates. The results of this research will aid unconfined aquifer characterization efforts and help to constrain the impact of the simplifying physical assumptions often employed during test analysis.
NASA Astrophysics Data System (ADS)
Wang, Yang; Ma, Guowei; Ren, Feng; Li, Tuo
2017-12-01
A constrained Delaunay discretization method is developed to generate high-quality doubly adaptive meshes of highly discontinuous geological media. Complex features such as three-dimensional discrete fracture networks (DFNs), tunnels, shafts, slopes, boreholes, water curtains, and drainage systems are taken into account in the mesh generation. The constrained Delaunay triangulation method is used to create adaptive triangular elements on planar fractures. Persson's algorithm (Persson, 2005), based on an analogy between triangular elements and spring networks, is enriched to automatically discretize a planar fracture into mesh points with varying density and a smooth quality gradient. The triangulated planar fractures are treated as planar straight-line graphs (PSLGs) to construct a piecewise-linear complex (PLC) for constrained Delaunay tetrahedralization. This guarantees the doubly adaptive characteristic of the resulting mesh: the mesh is adaptive not only along fractures but also in space. The quality of the elements is compared with the results from an existing method. It is verified that the present method can generate smoother elements and a better distribution of element aspect ratios. Two numerical simulations are implemented to demonstrate that the present method can be applied to various simulations of complex geological media that contain a large number of discontinuities.
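A stripped-down version of Persson's spring analogy, which the paper enriches with variable point density on planar fractures, can be sketched for a uniform target edge length on the unit disk. The point count, step size, and iteration count below are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import Delaunay

# Uniform-density variant of Persson's spring analogy on the unit disk.
h0, n_iter, dt = 0.15, 60, 0.2
rng = np.random.default_rng(7)
pts = rng.uniform(-1.0, 1.0, size=(260, 2))
pts = pts[np.linalg.norm(pts, axis=1) < 1.0]          # keep points inside the disk

for _ in range(n_iter):
    tri = Delaunay(pts)
    edges = np.vstack([tri.simplices[:, [0, 1]], tri.simplices[:, [1, 2]],
                       tri.simplices[:, [0, 2]]])
    edges = np.unique(np.sort(edges, axis=1), axis=0)  # unique triangulation edges
    vec = pts[edges[:, 0]] - pts[edges[:, 1]]
    length = np.linalg.norm(vec, axis=1)
    # Repulsive-only springs with rest length slightly above h0 (Persson, 2005)
    force = np.maximum(1.2 * h0 - length, 0.0)[:, None] * vec / length[:, None]
    move = np.zeros_like(pts)
    np.add.at(move, edges[:, 0], force)
    np.add.at(move, edges[:, 1], -force)
    pts = pts + dt * move
    r = np.linalg.norm(pts, axis=1)                    # project escapees back
    outside = r > 1.0
    pts[outside] = pts[outside] / r[outside][:, None]

lengths = np.linalg.norm(pts[edges[:, 0]] - pts[edges[:, 1]], axis=1)
print(lengths.mean(), lengths.std())   # edge lengths become fairly uniform
```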
Comprehensive approach to fast ion measurements in the beam-driven FRC
NASA Astrophysics Data System (ADS)
Magee, Richard; Smirnov, Artem; Onofri, Marco; Dettrick, Sean; Korepanov, Sergey; Knapp, Kurt; the TAE Team
2015-11-01
The C-2U experiment combines tangential neutral beam injection, edge biasing, and advanced recycling control to explore the sustainment of field-reversed configuration (FRC) plasmas. To study fast ion confinement in such advanced, beam-driven FRCs, a synergetic technique was developed that relies on the measurements of the DD fusion reaction products and the hybrid code Q2D, which treats the plasma as a fluid and the fast ions kinetically. Data from calibrated neutron and proton detectors are used in a complementary fashion to constrain the simulations: neutron detectors measure the volume integrated fusion rate to constrain the total number of fast ions, while proton detectors with multiple lines of sight through the plasma constrain the axial profile of fast ions. One application of this technique is the diagnosis of fast ion energy transfer and pitch angle scattering. A parametric numerical study was conducted, in which additional ad hoc loss and scattering terms of varying strengths were introduced in the code and constrained with measurement. Initial results indicate that the energy transfer is predominantly classical, while, in some cases, non-classical pitch angle scattering can be observed.
NASA Astrophysics Data System (ADS)
Simon, E.; Bertino, L.; Samuelsen, A.
2011-12-01
Combined state-parameter estimation in ocean biogeochemical models with ensemble-based Kalman filters is a challenging task due to the non-linearity of the models, the constraints of positiveness that apply to the variables and parameters, and the non-Gaussian distribution of the variables in which they result. Furthermore, these models are sensitive to numerous parameters that are poorly known. Previous works [1] demonstrated that the Gaussian anamorphosis extensions of ensemble-based Kalman filters were relevant tools to perform combined state-parameter estimation in such a non-Gaussian framework. In this study, we focus on the estimation of the grazing preference parameters of zooplankton species. These parameters are introduced to model the diet of zooplankton species among phytoplankton species and detritus. They are positive values and their sum is equal to one. Because the sum-to-one constraint cannot be handled by ensemble-based Kalman filters, a reformulation of the parameterization is proposed. We investigate two types of changes of variables for the estimation of sum-to-one constrained parameters. The first one is based on Gelman [2] and leads to the estimation of normally distributed parameters. The second one is based on the representation of the unit sphere in spherical coordinates and leads to the estimation of parameters with bounded distributions (triangular or uniform). These formulations are illustrated and discussed in the framework of twin experiments realized in the 1D coupled model GOTM-NORWECOM with Gaussian anamorphosis extensions of the deterministic ensemble Kalman filter (DEnKF). [1] Simon E., Bertino L.: Gaussian anamorphosis extension of the DEnKF for combined state and parameter estimation: application to a 1D ocean ecosystem model. Journal of Marine Systems, 2011. doi:10.1016/j.jmarsys.2011.07.007 [2] Gelman A.: Method of Moments Using Monte Carlo Simulation. Journal of Computational and Graphical Statistics, 4, 1, 36-54, 1995.
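The second change of variables mentioned above, based on spherical coordinates on the unit sphere, can be sketched as follows: the squared components of a unit vector always sum to one, so the filter can update unconstrained angles instead of the preference weights themselves. The mapping below is a minimal illustration under that interpretation, not the exact parameterization used in the study.

```python
import numpy as np

def angles_to_simplex(theta):
    """Map n-1 angles to n grazing-preference weights that sum to one.

    The weights are the squared components of a unit vector in spherical
    coordinates: p_1 = cos^2(t1), p_2 = sin^2(t1) cos^2(t2), ...,
    p_n = sin^2(t1) ... sin^2(t_{n-1}).
    """
    theta = np.asarray(theta, dtype=float)
    p = np.empty(theta.size + 1)
    sin_prod = 1.0
    for i, t in enumerate(theta):
        p[i] = sin_prod * np.cos(t) ** 2
        sin_prod *= np.sin(t) ** 2
    p[-1] = sin_prod
    return p

# Three preferences (e.g. two phytoplankton groups and detritus) from two angles
p = angles_to_simplex([0.7, 1.1])
print(p, p.sum())   # non-negative weights, sum exactly 1
```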
NASA Technical Reports Server (NTRS)
Capotondi, Antonietta; Malanotte-Rizzoli, Paola; Holland, William R.
1995-01-01
The dynamical consequences of constraining a numerical model with sea surface height data have been investigated. The model used for this study is a quasigeostrophic model of the Gulf Stream region. The data that have been assimilated are maps of sea surface height obtained as the superposition of sea surface height variability deduced from the Geosat altimeter measurements and a mean field constructed from historical hydrographic data. The method used for assimilating the data is the nudging technique. Nudging has been implemented in such a way as to achieve a high degree of convergence of the surface model fields toward the observations. The assimilation of the surface data is thus equivalent to the prescription of a surface pressure boundary condition. The authors analyzed the mechanisms of the model adjustment and the characteristics of the resultant equilibrium state when the surface data are assimilated. Since the surface data are the superposition of a mean component and an eddy component, in order to understand the relative role of these two components in determining the characteristics of the final equilibrium state, two different experiments have been considered: in the first experiment only the climatological mean field is assimilated, while in the second experiment the total surface streamfunction field (mean plus eddies) has been used. It is shown that the model behavior in the presence of the surface data constraint can be conveniently described in terms of baroclinic Fofonoff modes. The prescribed mean component of the surface data acts as a 'surface topography' in this problem. Its presence determines a distortion of the geostrophic contours in the subsurface layers, thus constraining the mean circulation in those layers. The intensity of the mean flow is determined by the inflow/outflow conditions at the open boundaries, as well as by eddy forcing and dissipation.
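The nudging (Newtonian relaxation) idea is simply the addition of a restoring term toward the observed field with a short relaxation time. The scalar toy example below illustrates that mechanism only; it does not represent the quasigeostrophic Gulf Stream model itself, and the relaxation time and tendency term are assumptions of this example.

```python
import numpy as np

def step_with_nudging(psi, rhs, psi_obs, tau, dt):
    """One forward-Euler step of d(psi)/dt = rhs - (psi - psi_obs)/tau.

    A small relaxation time tau forces the model surface field psi toward the
    observed field psi_obs, so the assimilation acts like a prescribed surface
    pressure boundary condition."""
    return psi + dt * (rhs - (psi - psi_obs) / tau)

# Toy scalar example: free dynamics decay, observations sit at psi_obs = 1
psi, psi_obs, tau, dt = 0.0, 1.0, 0.5, 0.01
for _ in range(1000):
    rhs = -0.1 * psi                    # stand-in for the model tendency
    psi = step_with_nudging(psi, rhs, psi_obs, tau, dt)
print(psi)   # close to psi_obs because tau is short compared to the dynamics
```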
Constrained multibody system dynamics: An automated approach
NASA Technical Reports Server (NTRS)
Kamman, J. W.; Huston, R. L.
1982-01-01
The governing equations for constrained multibody systems are formulated in a manner suitable for their automated, numerical development and solution. The closed-loop problem of multibody chain systems is addressed. The governing equations are developed by modifying dynamical equations obtained from Lagrange's form of d'Alembert's principle. The modification, based upon a solution of the constraint equations obtained through a zero-eigenvalue theorem, is a contraction of the dynamical equations. For a system with n generalized coordinates and m constraint equations, the coefficients in the constraint equations may be viewed as constraint vectors in n-dimensional space. In this setting the system itself is free to move in the n-m directions that are orthogonal to the constraint vectors.
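The contraction onto the n-m directions orthogonal to the constraint vectors can be sketched for the simplest possible case, a planar pendulum written in Cartesian coordinates (n = 2, m = 1), with the null-space basis obtained from an SVD (the "zero eigenvalue" directions). This is an illustrative reduction only, not the paper's automated multibody formulation.

```python
import numpy as np
from scipy.linalg import null_space

# Planar pendulum in Cartesian coordinates q = (x, y): n = 2 coordinates and
# m = 1 constraint x^2 + y^2 = L^2, so the constraint Jacobian is G = [x, y]
# (up to a factor of 2).
mass, L, g = 1.0, 1.0, 9.81
M = mass * np.eye(2)
F = np.array([0.0, -mass * g])               # gravity

def accel(q, qdot):
    G = q.reshape(1, 2)                       # constraint Jacobian
    N = null_space(G)                         # direction orthogonal to G
    gamma = -qdot @ qdot                      # from d/dt (G qdot) = 0
    q_part = (G.T @ np.linalg.solve(G @ G.T, np.array([gamma]))).ravel()
    A = N.T @ M @ N                           # contracted ("reduced") mass matrix
    a = np.linalg.solve(A, N.T @ (F - M @ q_part))
    return (N @ a).ravel() + q_part           # full Cartesian acceleration

q, qdot, dt = np.array([L, 0.0]), np.array([0.0, 0.0]), 1e-4
for _ in range(20000):                        # 2 s of semi-implicit Euler
    qdot = qdot + dt * accel(q, qdot)
    q = q + dt * qdot
print(np.linalg.norm(q))   # ~1.0: the constraint is maintained up to small drift
```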
NASA Technical Reports Server (NTRS)
Folta, David C.; Bosanac, Natasha; Cox, Andrew; Howell, Kathleen C.
2016-01-01
Lunar IceCube, a 6U CubeSat, will prospect for water and other volatiles from a low-periapsis, highly inclined elliptical lunar orbit. Injected from Exploration Mission-1, it will use a lunar-gravity-assisted multi-body transfer trajectory to capture into a lunar science orbit. The constrained departure asymptote and the value of trans-lunar energy limit the transfer trajectory types that re-encounter the Moon with the necessary energy and flight duration. Purdue University and Goddard Space Flight Center's Adaptive Trajectory Design tool and dynamical systems research are applied to uncover cislunar spatial regions permitting viable transfer arcs. Numerically integrated transfer designs applying low-thrust and a design framework are described.
Explicit reference governor for linear systems
NASA Astrophysics Data System (ADS)
Garone, Emanuele; Nicotra, Marco; Ntogramatzidis, Lorenzo
2018-06-01
The explicit reference governor is a constrained control scheme that was originally introduced for generic nonlinear systems. This paper presents two explicit reference governor strategies that are specifically tailored for the constrained control of linear time-invariant systems subject to linear constraints. Both strategies are based on the idea of maintaining the system states within an invariant set which is entirely contained in the constraints. This invariant set can be constructed by exploiting either the Lyapunov inequality or modal decomposition. To improve the performance, we show that the two strategies can be combined by choosing at each time instant the least restrictive set. Numerical simulations illustrate that the proposed scheme achieves performances that are comparable to optimisation-based reference governors.
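A minimal sketch of the Lyapunov-based strategy, for a double integrator with state feedback and a single linear constraint, is given below. The gain, constraint, and governor rate are illustrative assumptions of this example; the applied reference v is advanced toward the desired reference r only as fast as the invariant ellipsoidal level set contained in the constraint allows.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Double integrator with state feedback and the single constraint x1 <= 1.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[4.0, 4.0]])                        # stabilizing gain (assumed)
Acl = A - B @ K
P = solve_continuous_lyapunov(Acl.T, -np.eye(2))  # Acl' P + P Acl = -I
c, d = np.array([1.0, 0.0]), 1.0                  # constraint c' x <= d

def gamma(v):
    """Largest Gamma with {x : (x-xv)' P (x-xv) <= Gamma} inside c'x <= d,
    where xv = [v, 0] is the equilibrium for the applied reference v."""
    xv = np.array([v, 0.0])
    return (d - c @ xv) ** 2 / (c @ np.linalg.solve(P, c))

x, v, r, dt, kappa = np.zeros(2), 0.0, 2.0, 1e-3, 5.0
for _ in range(20000):
    xv = np.array([v, 0.0])
    V = (x - xv) @ P @ (x - xv)
    v += dt * kappa * max(gamma(v) - V, 0.0) * np.sign(r - v)   # governor update
    u = (-K @ (x - xv)).item()                                  # prestabilized loop
    x = x + dt * (A @ x + B[:, 0] * u)
print(x[0], v)   # x1 approaches the constraint boundary without exceeding 1
```

Here the Lyapunov-inequality construction of the invariant set is used; the modal-decomposition alternative mentioned in the abstract would replace the function gamma.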
A numerical study of adaptive space and time discretisations for Gross–Pitaevskii equations
Thalhammer, Mechthild; Abhau, Jochen
2012-01-01
As a basic principle, benefits of adaptive discretisations are an improved balance between required accuracy and efficiency as well as an enhancement of the reliability of numerical computations. In this work, the capacity of locally adaptive space and time discretisations for the numerical solution of low-dimensional nonlinear Schrödinger equations is investigated. The considered model equation is related to the time-dependent Gross–Pitaevskii equation arising in the description of Bose–Einstein condensates in dilute gases. The performance of the Fourier pseudo-spectral method constrained to uniform meshes versus the locally adaptive finite element method, and of higher-order exponential operator splitting methods with variable time stepsizes, is studied. Numerical experiments confirm that a local time stepsize control based on a posteriori local error estimators or embedded splitting pairs, respectively, is effective in different situations, with an enhancement either in efficiency or reliability. As expected, adaptive time-splitting schemes combined with fast Fourier transform techniques are favourable regarding accuracy and efficiency when applied to Gross–Pitaevskii equations with a defocusing nonlinearity and a mildly varying regular solution. However, the numerical solution of nonlinear Schrödinger equations in the semi-classical regime becomes a demanding task. Due to the highly oscillatory and nonlinear nature of the problem, the spatial mesh size and the time increments need to be of the size of the decisive parameter 0<ε≪1, especially when it is desired to capture correctly the quantitative behaviour of the wave function itself. The required high resolution in space restricts the feasibility of numerical computations for both the Fourier pseudo-spectral and the finite element method. Nevertheless, for smaller parameter values, locally adaptive time discretisations make it feasible to choose time stepsizes sufficiently small that the numerical approximation correctly captures the behaviour of the analytical solution. Further illustrations for Gross–Pitaevskii equations with a focusing nonlinearity or a sharp Gaussian as initial condition, respectively, complement the numerical study. PMID:25550676
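For orientation, a uniform-grid, uniform-step Strang time-splitting Fourier pseudo-spectral step for the 1D Gross–Pitaevskii equation with a harmonic trap is sketched below; the adaptive space and time machinery investigated in the study is deliberately not reproduced, and the parameters are illustrative.

```python
import numpy as np

# Strang splitting with a Fourier pseudo-spectral kinetic step on a uniform
# grid with a uniform time step (illustrative parameters, defocusing g > 0):
#   i dpsi/dt = -0.5 d^2 psi/dx^2 + V(x) psi + g |psi|^2 psi
N, Lbox, g, dt, n_steps = 512, 20.0, 1.0, 1e-3, 2000
x = np.linspace(-Lbox / 2, Lbox / 2, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=Lbox / N)
V = 0.5 * x ** 2                                   # harmonic trap
psi = np.exp(-x ** 2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * (Lbox / N))   # normalize the mass

for _ in range(n_steps):
    psi *= np.exp(-0.5j * dt * (V + g * np.abs(psi) ** 2))            # half potential
    psi = np.fft.ifft(np.exp(-0.5j * dt * k ** 2) * np.fft.fft(psi))  # full kinetic
    psi *= np.exp(-0.5j * dt * (V + g * np.abs(psi) ** 2))            # half potential

print(np.sum(np.abs(psi) ** 2) * (Lbox / N))   # mass stays 1 up to roundoff
```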
NASA Astrophysics Data System (ADS)
Elkhateeb, Esraa
2018-01-01
We consider a cosmological model based on a generalization of the equation of state proposed by Nojiri and Odintsov (2004) and Štefančić (2005, 2006). We argue that this model works as a dark fluid model which can interpolate between the dust equation of state and the dark energy equation of state. We show how the asymptotic behavior of the equation of state constrains the parameters of the model. The causality condition for the model is also studied to constrain the parameters, and the fixed points are examined to determine the different solution classes. Observations of the Hubble diagram of SNe Ia are used to further constrain the model. We present an exact solution of the model and calculate the luminosity distance and the evolution of the energy density. We also calculate the deceleration parameter to test the state of the expansion of the universe.
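Once H(z) is specified by the model parameters, the luminosity distance used to compare with the SNe Ia Hubble diagram follows from a single quadrature. The sketch below uses a constant-w dark fluid as a simple stand-in for the generalized Nojiri-Odintsov/Štefančić equation of state, so the functional form of H(z) is an assumption of this example.

```python
import numpy as np
from scipy.integrate import quad

# Luminosity distance for a flat universe with matter plus a dark fluid of
# constant equation of state w (a simple stand-in for the generalized
# dark-fluid form used in the paper).
c_km_s = 299792.458
H0, Om, w = 70.0, 0.3, -0.9          # km/s/Mpc, matter fraction, EoS parameter

def H(z):
    return H0 * np.sqrt(Om * (1 + z) ** 3 + (1 - Om) * (1 + z) ** (3 * (1 + w)))

def d_L(z):
    """Luminosity distance in Mpc: (1+z) * c * int_0^z dz'/H(z')."""
    integral, _ = quad(lambda zp: 1.0 / H(zp), 0.0, z)
    return (1 + z) * c_km_s * integral

def distance_modulus(z):
    return 5.0 * np.log10(d_L(z)) + 25.0       # mu = 5 log10(d_L / 10 pc)

for z in (0.1, 0.5, 1.0):
    print(z, round(d_L(z), 1), round(distance_modulus(z), 3))
```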
NASA Astrophysics Data System (ADS)
George, D. L.; Iverson, R. M.
2012-12-01
Numerically simulating debris-flow motion presents many challenges due to the complicated physics of flowing granular-fluid mixtures, the diversity of spatial scales (ranging from a characteristic particle size to the extent of the debris-flow deposit), and the unpredictability of the flow domain prior to a simulation. Accurately predicting debris flows requires models that are complex enough to represent the dominant effects of granular-fluid interaction, while remaining mathematically and computationally tractable. We have developed a two-phase depth-averaged mathematical model for debris-flow initiation and subsequent motion. Additionally, we have developed software that numerically solves the model equations efficiently on large domains. A unique feature of the mathematical model is that it includes the feedback between pore-fluid pressure and the evolution of the solid grain volume fraction, a process that regulates flow resistance. This feature endows the model with the ability to represent the transition from a stationary mass to a dynamic flow. With traditional approaches, slope stability analysis and flow simulation are treated separately, and the latter models are often initialized with force balances that are unrealistically far from equilibrium. Additionally, our new model relies on relatively few dimensionless parameters that are functions of well-known material properties constrained by physical data (e.g., hydraulic permeability, pore-fluid viscosity, debris compressibility, Coulomb friction coefficient, etc.). We have developed numerical methods and software for accurately solving the model equations. By employing adaptive mesh refinement (AMR), the software can efficiently resolve an evolving debris flow as it advances through irregular topography, without needing terrain-fit computational meshes. The AMR algorithms utilize multiple levels of grid resolution, so that computationally inexpensive coarse grids can be used where the flow is absent, and much higher-resolution grids evolve with the flow. The reduction in computational cost due to AMR makes very large-scale problems tractable on personal computers. Model accuracy can be tested by comparison of numerical predictions and empirical data. These comparisons utilize controlled experiments conducted at the USGS debris-flow flume, which provide detailed data about flow mobilization and dynamics. Additionally, we have simulated historical large-scale debris flows, such as the (≈50 million m³) debris flow that originated on Mt. Meager, British Columbia, in 2010. This flow took a very complex route through highly variable topography and provides a valuable benchmark for testing. Maps of the debris-flow deposit and data from seismic stations provide evidence regarding flow initiation, transit times and deposition. Our simulations reproduce many of the complex patterns of the event, such as run-out geometry and extent, and the large-scale nature of the flow and the complex topographical features demonstrate the utility of AMR in flow simulations.
Gravitational Wakes Sizes from Multiple Cassini Radio Occultations of Saturn's Rings
NASA Astrophysics Data System (ADS)
Marouf, E. A.; Wong, K. K.; French, R. G.; Rappaport, N. J.; McGhee, C. A.; Anabtawi, A.
2016-12-01
Voyager and Cassini radio occultation extinction and forward-scattering observations of Saturn's C-Ring and Cassini Division imply power-law particle size distributions extending from a few millimeters to several meters, with power-law index in the 2.8 to 3.2 range, depending on the specific ring feature. We extend size determination to the elongated and canted particle clusters (gravitational wakes) known to permeate Saturn's A- and B-Rings. We use multiple Cassini radio occultation observations over a range of ring opening angle B and wake viewing angle α to constrain the mean wake width W and thickness/height H, and the average ring-area coverage fraction. The rings are modeled as a randomly blocked diffraction screen in the plane normal to the incidence direction. Collective particle shadows define the blocked area. The screen's transmittance is binary: blocked or unblocked. Wakes are modeled as a thin layer of elliptical cylinders populated by random but uniformly distributed spherical particles. The cylinders can be immersed in a "classical" layer of spatially uniformly distributed particles. Numerical simulations of model diffraction patterns reveal two distinct components: cylindrical and spherical. The first dominates at small scattering angles and originates from specific locations within the footprint of the spacecraft antenna on the rings. The second dominates at large scattering angles and originates from the full footprint. We interpret Cassini extinction and scattering observations in light of the simulation results. We compute and remove the contribution of the spherical component to the observed scattered-signal spectra, assuming a known particle size distribution. A large residual spectral component is interpreted as the contribution of cylindrical (wake) diffraction. Its angular width determines a cylindrical shadow width that depends on the wake parameters (W, H) and the viewing geometry (α, B). Its strength constrains the mean fractional area covered (optical depth), and hence the mean wake spacing. Self-consistent (W, H) values are estimated using a least-squares fit to results from multiple occultations. Example results for observed scattering by several inner A-Ring features suggest particle clusters (wakes) that are a few tens of meters wide and several meters thick.
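The blocked-screen idea can be mimicked numerically as follows (an illustrative sketch only; the grid, shadow counts, sizes, and cant angle are placeholders, not the authors' parameters or processing pipeline): fill a binary transmittance screen with elliptical wake shadows and power-law-distributed circular particle shadows, then take the squared Fourier transform of the screen as a Fraunhofer-type diffraction pattern.

import numpy as np

rng = np.random.default_rng(0)

# Binary transmittance screen: 1 = open, 0 = blocked (illustrative sampling)
N, dx = 1024, 0.5                              # grid points and metres per pixel, placeholders
x = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
screen = np.ones((N, N))

# Elongated "wake" shadows: elliptical blocks sharing a common cant angle
W, H_proj, phi = 60.0, 8.0, np.radians(20.0)   # width, projected thickness, cant (hypothetical)
for _ in range(15):
    cx, cy = rng.uniform(-0.4 * N * dx, 0.4 * N * dx, size=2)
    u = (X - cx) * np.cos(phi) + (Y - cy) * np.sin(phi)
    v = -(X - cx) * np.sin(phi) + (Y - cy) * np.cos(phi)
    screen[(u / W) ** 2 + (v / H_proj) ** 2 < 1.0] = 0.0

# "Classical" particles: circular shadows with radii drawn from n(a) ~ a^(-3)
a_min, a_max, p_idx = 1.0, 10.0, 3.0
u_rand = rng.uniform(size=400)
radii = (a_min ** (1 - p_idx)
         + u_rand * (a_max ** (1 - p_idx) - a_min ** (1 - p_idx))) ** (1.0 / (1 - p_idx))
for a in radii:
    cx, cy = rng.uniform(-0.45 * N * dx, 0.45 * N * dx, size=2)
    screen[(X - cx) ** 2 + (Y - cy) ** 2 < a ** 2] = 0.0

# Fraunhofer-type diffraction pattern: squared modulus of the screen's Fourier transform
pattern = np.abs(np.fft.fftshift(np.fft.fft2(screen))) ** 2
print("blocked area fraction:", round(1.0 - screen.mean(), 3))

In a pattern generated this way, the narrow elongated shadows contribute a broad component perpendicular to their long axis while the small circular shadows contribute at larger spatial frequencies, which is the separation of cylindrical and spherical components that the abstract exploits.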
Investigations of Desert Dust and Smoke in the North Atlantic in Support of the TOMS Instrument
NASA Technical Reports Server (NTRS)
Toon, Owen B.
2005-01-01
During the initial period of the work we concentrated on Saharan dust storms and published a sequence of papers (Colarco et al. 2002, 2003a, b; Toon 2004). The U.S. Air Force liked the dust model so much that they adopted it for operational dust-storm forecasting (Barnum et al. 2004). The Air Force has used it for about five years in the Middle East, where dust storms cause significant operational problems. The student working on this project, Peter Colarco, has graduated and is now a civil servant at Goddard, where he continues to interact with the TOMS team. This work helped constrain the optical properties of dust at TOMS wavelengths, which is useful for climate simulations and for TOMS retrievals of dust properties such as optical depth. We also used TOMS data to constrain the sources of dust in Africa and the Middle East, to determine the actual paths taken by Saharan dust storms, to learn more about the mechanics of variations in the optical depths, and to learn more about the mechanisms controlling the altitudes of the dust. During the last two years we have been working on smoke from fires. Black carbon aerosols are among the leading contributors to radiative forcing, and the US Climate Change Science Program calls this area out for specific study. It has been suggested by Jim Hansen and Mark Jacobsen, among others, that by controlling emissions of black carbon we might reduce greenhouse radiative forcing in a relatively painless manner. However, we need a greatly improved understanding of the amount of black carbon in the atmosphere, where it is located, where it comes from, how it is mixed with other particles, what its actual optical properties are, and how it evolves. To investigate these issues we are using a numerical model of smoke. We have applied this model to the SAFARI field program data and used the TOMS satellite observations from that period (September 2000). Our goal is to constrain source-function estimates for black carbon and the optical properties of smoke.
NASA Astrophysics Data System (ADS)
Featherstone, Nicholas
2017-05-01
Our understanding of the interior dynamics that give rise to a stellar dynamo draws heavily from investigations of similar dynamics in the solar context. Unfortunately, an outstanding gap persists in solar dynamo theory. Convection, an indispensable component of the dynamo, occurs in the midst of rotation, and yet we know little about how the influence of that rotation manifests across the broad range of convective scales present in the Sun. We are nevertheless well aware that the interaction of rotation and convection profoundly impacts many aspects of the dynamo, including the meridional circulation, the differential rotation, and the helicity of the turbulent EMF. The rotational constraint felt by solar convection ultimately hinges on the characteristic amplitude of deep convective flow speeds, and such flows are difficult to measure helioseismically. The measurements of deep convective power that do exist disagree by orders of magnitude, and until this disagreement is resolved, we are left with the results of models and the less ambiguous measurements derived from surface observations of solar convection. I will present numerical results from a series of nonrotating and rotating convection simulations conducted in full 3-D spherical geometry. This presentation will focus on how convective spectra differ between the rotating and nonrotating models and how that behavior changes as simulations are pushed toward more turbulent and/or more rotationally constrained regimes. I will discuss how rotationally constrained interior convection might naturally imprint observable signatures on the surface convective pattern, such as supergranulation and a dearth of giant cells.
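As a rough companion calculation (order-of-magnitude, illustrative inputs only, not results from the simulations described above), the degree of rotational constraint on a convective flow is commonly summarized by the Rossby number Ro = v / (2ΩL); the Python sketch below evaluates it for solar-like placeholder values at granule, supergranule, and giant-cell scales.

import numpy as np

# Illustrative solar-like numbers (order-of-magnitude placeholders, not measured values)
OMEGA_SUN = 2.6e-6            # rotation rate, rad/s (~25-day period)
DEPTH_CZ = 2.0e8              # convection-zone depth, m (~0.3 solar radii)

def rossby(v, L, Omega=OMEGA_SUN):
    # Convective Rossby number Ro = v / (2 Omega L): Ro >> 1 means rotation is
    # dynamically unimportant; Ro << 1 means the flow is strongly rotationally constrained.
    return v / (2.0 * Omega * L)

# Fast, small-scale surface convection versus slow, deep giant-cell convection
for label, v, L in [("granulation",      2.0e3, 1.0e6),
                    ("supergranulation", 3.0e2, 3.0e7),
                    ("giant cells",      1.0e1, DEPTH_CZ)]:
    print(f"{label:16s} v = {v:7.0f} m/s  L = {L:.1e} m  Ro = {rossby(v, L):8.2f}")

With these assumed numbers, Ro drops from hundreds at granular scales to well below one at giant-cell scales, which is the sense in which deep convective flow speeds set the rotational constraint discussed in the abstract.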